
An Unnatural History of the Electronic Mouse

Thursday, October 25, 2012

Technology marches on and the mouse is done for, or so we are told. But what if history had been different?

For fans of technology journalism, it is hard to keep up with the gurus’ ideas of what is obsolete. Surely paper, cash, and television are on their way out, they say.

The latest candidate is the computer mouse and its evolutionary kin: trackballs, touchpads, and other pointing devices. Way back in 2009, Network World said that touchscreens and voice recognition software would do them in. The Wall Street Journal and Fortune have already proclaimed that the future lies in three-dimensional gestures, literally hand waving. This month, the Washington Post joined in, adding the mouse to technology’s endangered species list.

Mice and keyboards do raise significant health issues. For some people, rotating the hand to operate a conventional mouse, or even typing on a keyboard, may cause carpal tunnel syndrome and other overuse ailments. Some generally higher-priced alternative designs may be healthier. But it is not clear that the conventional mouse or keyboard alone is to blame for users’ ills. For example, in early studies of automated newsrooms, publications using similar equipment had markedly different rates of cumulative trauma disorders — painful musculoskeletal and nervous conditions resulting from repetitive motions or unhealthy positions of the hand and fingers. Workplace stress may play a larger role than variations in hardware or software. Some workers with carpal tunnel syndrome who replaced keyboarding with voice recognition software developed voice overuse issues. It is thus entirely possible that touch- and gesture-based computing will be no healthier than using hardware with conventional interfaces. In fact, medical researchers now consider smartphone screens alarmingly efficient incubators and transmitters of germs.

While we await the actual future of the mouse, we can take a counterfactual view and explore an alternative universe. In reality, the mouse is a surprisingly old device. The interface guru Douglas C. Engelbart first demonstrated it publicly in 1968. The preserved video of his introduction of the mouse-keyboard interface looks uncannily like the word processing and Internet email systems of today, even if the display and typography seem relatively crude. His concepts paved the way for later developers of graphical interfaces, from the Xerox Palo Alto Research Center to Apple and Microsoft.

In our parallel world, imagine that compact processors, Wi-Fi, and 4G wireless had developed even more rapidly than they actually did, but that limits on some rare earth elements had severely restricted screen sizes. Suppose, in other words, that smartphones, apps, and the Cloud had arrived before full-sized personal computers. We would have a very different view of progress.

The story of computing would be the quest for greater display space as chemists and physicists developed alternatives to rare earths for visualizing data. With the new screens would come alternatives to the multitouch gestures that Apple (in our present universe) has been refining and patenting. In our alternative universe, the just-introduced 7.9-inch iPad mini would have been a breakthrough in usability over the iPhone 5, and the 12-inch iPad, following it by a few years, would be the true transformative product. Apple and Android engineers would not stop there. They would note that some third-party vendors were offering external devices called keyboards. These imitated the hardware of old-style typewriters to reduce the many errors resulting from virtual keys on tablet screens. Then another brilliant designer would have the idea, inspired no doubt by the clamshell cell phones of the 1990s, to sell a tablet with a keyboard connected to the screen by a hinge, which would protect both screen and keyboard when folded and make tablet covers and stands obsolete.

Other engineers would see that 32 or 64 gigabytes of memory would not be enough and would include a spinning drive with up to a terabyte of storage so users would not be entirely dependent on the Cloud, vulnerable as it is to cyberwarfare and terrorism. Columnists and gurus would extol “post-Cloud security.”

The next breakthrough would be the introduction of a separate pointing device to replace finger gestures on a pad. Called a mouse, it would originally have just one button until users demanded two, three, or more for flexibility. Once people accustomed themselves to moving and clicking it, they would wonder how they had ever gotten along with touch gestures. Gone would be the fingerprint oils smudging their screens!

The industry would not stop there. Engineers would note that many people would prefer a display closer to eye height, and that laptop thefts were becoming a multibillion-dollar headache for corporations and individuals. Responding to consumer demand, they would create a new generation of more powerful devices with separate monitors. The chassis would not only be harder to steal but would also allow constant upgrades, with new sound and video cards and space for additional drives that would make video editing easier. To answer perennial complaints of shorter-than-advertised battery life, they would equip these devices with built-in power supplies that would work reliably with any wall outlet.

The fully mature personal computer would have arrived. Of course, some pathetic senior citizens would continue to squint at their smartphones and even restore them like aging hippies maintaining their vintage Volkswagen Beetles. But who would care about them? The mouse had at last been born.

Edward Tenner is the author of Why Things Bite Back: Technology and the Revenge of Unintended Consequences and Our Own Devices: How Technology Remakes Humanity, a visiting scholar in the Rutgers School of Communication and Information, and an affiliate of the Princeton Center for Arts and Cultural Policy Studies.

FURTHER READING: Tenner also writes “Apple, Disney, and Dreams of Corporate Utopias,” “The Fine Art of Resilience: Lessons from Stanley Meltzoff,” and “Facebook and the Importance of Being Unimportant.” Mark P. Mills explains “The Next Great Growth Cycle.” David Shaywitz discusses “Saving Steve Jobs' Legacy from a 'Successories' Future.” Michael M. Rosen contributes “Software Patents: Reform, Not Repeal.”

Image by Darren Wamboldt / Bergman Group
