Making Computers Disappear

The philosopher Michael Polanyi, in his 1966 book The Tacit Dimension, suggests that the usefulness of a technology can be measured in proportion to its disappearance. His thesis is that people become so used to a technology and reliant upon it that they mostly cease to notice its impact on their lives. Automobiles are one example, jet engines another.

Alas, for the computer industry, that successful morph into perceptual irrelevance seems frozen on a distant horizon. Stories of digital potholes, crashes and glitches abound -- as do articles filled with doe-eyed wonder at the grand potential of the information superhighway. Computers can be said to be many things, but few can argue they have disappeared.

"People have this bizarre idea that people can take a computer home and it works," said Clayton Lewis, with tongue only partially in cheek, who managed IBM's human factors group at IBM's Watson Research Center in the early 1980s.

That may seem an odd comment about a technology that is used so much. Microprocessors now inhabit nearly every appliance -- toasters, washing machines, cars, cameras. Everything is going digital. Soon, too, the computer in the home will be just like a television. We will log on to do banking, communicating, or video viewing. At work, a computer will become just a tool, rather than some kind of alchemic mechanism, reverently purchased, to turn loss into profit. People will hardly notice computers -- except when they crash, which today they do considerably more often than automobiles and jets.

So the future of computers may really be their disappearance into the background of everyday life. The trick now for industry is making them vanish, profitably and beneficently. "Suggesting [computer] technology [will be ubiquitous] trivializes the technology," says Bob Stern, senior vice president of strategic planning at Compaq Computer.

His is a long-term view. The computer revolution -- first mainframes, then minicomputers and now PCs and laptops -- has permeated nearly every operation in the business world. Now that same technology is making a full-scale assault from the back office to the front office -- the front line of work -- where delivery people, couriers, salesmen and technicians ply their trades.

Next are the home and consumer markets, where in 1994, for the first time, more PCs were sold than in the business market. That has led to the creation of new computer trade publications focused exclusively on the home market, and to much soul-searching in marketing departments trying to figure out how consumer markets for computers differ from business and technical markets.

But much of this thinking may be wrong-headed, says Stern, and it may very well lead companies into technological and marketing dead-ends. His belief -- shared by many strategic thinkers in the computer industry -- is that the current distinction between the consumer and business computing markets is too facile. The two are actually merging.

More and more people conduct their business at home, and more and more businesses are trying to reach into the home via cyberspace. Moreover, the basic, raw communications bandwidth under construction by cable, phone and satellite companies will impose many of the same interface requirements for business and home computers.

So the consumer/business division may be artificial, at least from a technology developer's point of view. Instead, the tectonic shift has occurred along the fault line where computers communicate rather than process data in isolation; they are turning into telephones, but ones that handle video and data as well as voice. The computer becomes a gateway that gives business and consumer users access to the same world of cyberspace resources.

Some call the fault line where computers will sit the Internet, others believe AT&T owns it, while still others think Microsoft will be king of the grand convergence with its soon-to-be-launched online service. For computers, the result will likely be the same: small, application-specific devices optimized for communication, with data-processing itself consigned increasingly to servers somewhere on the network.

"There will be multiple computers in the home. In the kitchen, there will be a mounted, flat-screen display," says Stern, speaking from a high-tech lab at Compaq headquarters where he and his team contemplate computing's future.

That kitchen display might be equipped with voice response to take grocery orders, and the system could be programmed to monitor the refrigerator and keep it properly stocked. The living room would have a larger screen, focused primarily on passive forms of entertainment -- movies, sports, etc. "When I come home, the last thing I want to do is interact," says Arnold Lund, Stern's vision-making counterpart at Ameritech.
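
The grocery-monitoring piece of that vision is easy to imagine in outline. Below is a minimal sketch in Java of how a home system might keep the kitchen stocked; the class, the item names and the thresholds are invented for illustration and do not describe any actual Compaq or Ameritech product.

    import java.util.HashMap;
    import java.util.Map;

    // A toy pantry monitor: when recorded use drops an item below its
    // threshold, it places a stand-in grocery order.
    public class PantryMonitor {
        private final Map<String, Integer> stock = new HashMap<>();
        private final Map<String, Integer> threshold = new HashMap<>();

        public void setThreshold(String item, int minimum) {
            threshold.put(item, minimum);
        }

        public void restock(String item, int quantity) {
            stock.merge(item, quantity, Integer::sum);
        }

        // Called whenever the kitchen system records that an item was used.
        public void recordUse(String item, int quantity) {
            int remaining = Math.max(stock.getOrDefault(item, 0) - quantity, 0);
            stock.put(item, remaining);
            int minimum = threshold.getOrDefault(item, 0);
            if (remaining < minimum) {
                placeOrder(item, minimum - remaining);   // top back up to the minimum
            }
        }

        // Stand-in for sending an order to an online grocery service.
        private void placeOrder(String item, int quantity) {
            System.out.println("Ordering " + quantity + " x " + item);
        }
    }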

That vision might include a child's room holding a much different kind of computer with its own unique interface. It might be a stuffed animal that tells a story. Then there will be a computer to handle family finances, and perhaps another devoted to work-related tasks -- dialing into the office or helping to do consulting work from home. Residents of this house will carry mobile computers with which to retrieve data from school and work. They might take an electronic address book on the road; when they come home and toss it on the work desk, it automatically synchronizes with the address book in the desk computer.
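
That self-updating address book amounts to a simple merge of two copies of the same data. Here is a minimal sketch, assuming each entry carries a last-modified timestamp and that the newer copy always wins; the class and record names are hypothetical.

    import java.util.HashMap;
    import java.util.Map;

    public class AddressBookSync {
        // One entry: a name, its details, and when it was last edited.
        public record Entry(String name, String details, long lastModified) {}

        // Merge the portable book into the desk book: for each name, whichever
        // copy was edited more recently wins. Returns the merged book.
        public static Map<String, Entry> merge(Map<String, Entry> desk,
                                               Map<String, Entry> portable) {
            Map<String, Entry> merged = new HashMap<>(desk);
            for (Entry e : portable.values()) {
                Entry existing = merged.get(e.name());
                if (existing == null || e.lastModified() > existing.lastModified()) {
                    merged.put(e.name(), e);   // the portable copy is newer, or new
                }
            }
            return merged;
        }
    }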

These computers are all managed by a home server, a powerful computer that controls the household's local area network and provides the interface to the outside world, which is itself populated with legions of servers. The day may not be far off when one calls the guy from EDS because the home LAN has crashed -- the cyberspace equivalent of calling the Maytag repairman.

Companies such as Compaq hope that in five or 10 years they will make most of their money building devices for these home networks. Likewise, systems integrators and computer services companies will be the plumbers and electricians, keeping these systems from crashing.

Still, there are major impediments. Don Norman, who holds the prestigious position of Apple Fellow at Apple Computer, said at a May conference on Human Computer Interaction in Denver, Colo.: "Computers need to be made simpler. The whole trend is to make them more complex."

So computer firms such as Compaq or IBM sell the computing equivalent of Swiss Army knives: These computers do many processing tasks, from playing multimedia programs and sending E-mail to word processing and tabulating spreadsheets. But they are not designed to perform any one task particularly well. Having them do everything creates huge software and hardware requirements, so people need zippy Pentium computers, and soon the P6, not because they want to do something fundamentally different with their computers, but because only a Pentium can handle all the upgrades to old applications and the new communication requirements. That may be why computers have not made workers dramatically more productive.

There's another problem. Lumping so many capabilities on a single computer also makes computer systems apt to crash and incredibly hard and expensive to manage. An influential Gartner Group study showed that the five-year cost of maintaining a computer seat has risen from $19,000 in 1987 to $41,000 in 1994.

Norman doubts these trends can continue, and he consequently believes the computing industry has reached an impasse, with every company groping blindly for a profitable way over the hump and into the next stage of the computing revolution. He blames the current business model in the software industry -- read Microsoft -- for much of the problem.

His case is this: If companies made a software program that worked well -- say, a word processor -- buyers would theoretically never have to buy another one. Software, unlike automobiles or toasters, never wears out.

And software houses can't rely on techniques of built-in obsolescence to stimulate consumption. The software industry's solution to this problem has been upgrades, which basically mean adding a few bells and whistles to an existing product. Trouble is, that makes products more complex, more apt to have bugs, more resource-demanding and, in many cases, less functional -- even with new, superfast hardware. Microsoft's Word program, now in its seventh generation, is one such example. Many fear Windows 95 will be another -- when it finally comes out.

The industry may face other, equally profound challenges in the move to so-called object-oriented software approaches. These systems, from firms such as Apple, IBM and Microsoft, will start hitting the mass market in 1996. They will allow users to piece together pre-tested chunks of code, called objects, which can be linked in a sort of Lego approach to form applications.
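
In programming terms, the Lego approach looks something like the sketch below: each pre-tested object hides one job behind a common interface, and an "application" is little more than the order in which a customer wires the objects together. The component names here are invented for illustration and do not refer to any actual Apple, IBM or Microsoft object library.

    // Each pre-tested object does one job behind a common interface.
    interface Component {
        void run(String document);
    }

    class PrintComponent implements Component {
        public void run(String document) {
            System.out.println("Printing: " + document);
        }
    }

    class FileCopyComponent implements Component {
        public void run(String document) {
            System.out.println("Copying to backup: " + document);
        }
    }

    class TelecomComponent implements Component {
        public void run(String document) {
            System.out.println("Transmitting over the network: " + document);
        }
    }

    // The "application" is just a chosen sequence of reusable components.
    public class CustomApplication {
        public static void main(String[] args) {
            Component[] pipeline = {
                new FileCopyComponent(), new PrintComponent(), new TelecomComponent()
            };
            for (Component c : pipeline) {
                c.run("quarterly-report.txt");
            }
        }
    }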

One object might represent a subsystem for printing, another for file copying, and yet another for telecommunications. Such libraries of objects can be mixed and matched, by customers and their consultants, to form applications. If the technology works as promised, customers can build much of their own software. Here's the problem for sellers of applications.

Once customers buy a library of objects, they can build applications themselves; and because the objects are reusable, they never need to buy them again. Applications software -- Microsoft's main source of profit -- will cease to be a milk cow for would-be technology farmers. Jakob Nielsen, a lead software engineer and strategist at Sun Microsystems, predicts object-oriented systems will be standard in five to 10 years.

These long-term trends explain in part why Microsoft is so eager to get into new businesses such as online services and banking -- risking the ire of the Justice Department in the process. Meanwhile, Microsoft is changing its licensing practices to allow companies to receive automatic upgrades. That would be more like a newspaper subscription business model than the book publishing model that largely predominates in the software industry today.

But it may be too late, and at any rate Microsoft would be a fool to abandon what is still very much a profit-making approach. Apple's Norman believes the only way out of this impasse is to develop application-specific devices, rather than general-purpose computers. Customers and their consultants would use objects to tailor and craft applications for these devices, which would be much more communication-oriented and mobile than those in use today.

"People don't want to talk to their computers. They want to talk to other people," says Norman. Let computers do most of the talking to computers -- to monitor inventories, retrieve information, survey business conditions, etc. As much as possible, processing tasks would be taken away from the devices themselves and transferred to network servers in the home, at the office, and in the central offices of phone, cable and entertainment companies.

A prototype effort at the University of California at Berkeley may provide an example of the way computer systems might be put together in the future. Its Department of Electrical Engineering and Computer Sciences is developing what it calls a Mobile Multimedia Terminal. The system is essentially a mobile application of the X-Windows system, a kind of back-to-the-future, dumb-terminal approach developed at MIT in the 1980s.

X-Windows lets users view and control, on their own screens, applications that are actually running on a network server -- rather than having every user run applications independently. This approach means the devices people use require less processing power, so they can also be smaller and cheaper.

What processing power remains can then be devoted almost exclusively to the screen, which is the most critical point of contact for the user anyway. The downside, of course, is that requirements for inter-computer communications will rise.
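
The division of labor X-Windows implies can be sketched in a few lines: the terminal does nothing but forward input to a server and paint whatever comes back. The toy Java client below is not the Berkeley software or the real X protocol -- the port number and the line-per-message format are invented -- but it shows how little intelligence the device itself needs.

    import java.io.*;
    import java.net.Socket;

    // A toy "thin terminal": it sends each line of input to a remote server
    // and displays whatever the server renders in response. All application
    // logic lives on the server; the terminal handles only input and output.
    public class ThinTerminal {
        public static void main(String[] args) throws IOException {
            String host = args.length > 0 ? args[0] : "localhost";
            try (Socket server = new Socket(host, 6000);
                 BufferedReader fromServer = new BufferedReader(
                         new InputStreamReader(server.getInputStream()));
                 PrintWriter toServer = new PrintWriter(server.getOutputStream(), true);
                 BufferedReader keyboard = new BufferedReader(
                         new InputStreamReader(System.in))) {
                String line;
                while ((line = keyboard.readLine()) != null) {
                    toServer.println(line);                    // forward the input event
                    System.out.println(fromServer.readLine()); // paint the server's reply
                }
            }
        }
    }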

But the people at Berkeley believe networking technology is rapidly maturing, and should soon allow mobile terminals -- notebook-sized screens, really -- to receive and send data wirelessly. Network servers could handle the processing-intensive tasks of recognizing voice commands or handwritten characters scribbled on the user's pad. "The greatest advantage of this architecture is that [it] has access to massive computational power [on the network], allowing [the computer] to be smarter [with handwriting and speech recognition] than other portable devices [today]," noted a data sheet on the ARPA-funded project, which was demonstrated in Denver last month.
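
Offloading recognition works the same way: the pad ships raw audio or pen samples across the network and gets plain text back. The sketch below shows such a client, with an invented port number and wire format standing in for whatever protocol a real recognition service would use.

    import java.io.*;
    import java.net.Socket;

    // A sketch of offloaded recognition: send raw samples to a hypothetical
    // recognition server, which replies with a single line of recognized text.
    public class RecognitionClient {
        public static String recognize(byte[] samples, String serverHost) throws IOException {
            try (Socket socket = new Socket(serverHost, 9090);
                 DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.writeInt(samples.length);  // tell the server how much data follows
                out.write(samples);            // send the raw audio or pen samples
                out.flush();
                return in.readLine();          // the server replies with recognized text
            }
        }
    }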

For now, the system only works within a building, which has a radio receiver in every room. But with the development of so-called personal communication systems for mobile phones now taking off, such terminals could soon have a ready infrastructure for wider application.

And the terminals have enough power to display full-motion, digitized video run from network servers at 30 frames per second. Developers are focusing on three applications: access and retrieval of World Wide Web information; a voice-driven command interface; speech and voice recognition. The key, of course, will be for commercial software providers to help build standard development tools and objects that will work with such a system and allow users to easily customize devices for specific applications.

Then, perhaps, computers may truly disappear, but that day still seems a long time coming. Said Terry Roberts, who helped develop the precursor of the Macintosh interface at Xerox PARC, "Technology is not a problem, economic availability is not a problem. The next big question [is] people simply being able to do what they want on a system."

