The most accurate predictive letter in computing and telecommunications,
read by industry leaders worldwide.

 

 

SNS Subscriber Edition Vol. 11, Issue 4 Week of January 28, 2008

 

***SNS***

Special Letter:

Cautious Cars and Cantankerous Kitchens:

How Machines Take Control

 

 

 

 

In This Issue

 

 

Feature:

Cautious Cars and Cantankerous Kitchens

 

The Rise of the Smart Machine

About Donald A. Norman

 

In Other House News...

 

How to Subscribe

May I Share This Newsletter?

About SNS

About the Publisher

SNS Website Links

Where’s Mark?

 

By Donald A. Norman

 

Publisher’s Note: I am pleased to announce that well-known venture capitalist Vinod Khosla will be giving the opening remarks for FiRe 2008 this year. Vinod was arguably the first VC in the Valley to see the importance of clean and alternative energy sources, and to put his money where his thoughts were. Today he continues to invest in this exploding sector, through Khosla Ventures, as well as through his partnership in Kleiner Perkins (Al Gore’s new venture home). Vinod’s appearance at FiRe fits perfectly with our “Rapid Response” climate-crisis theme for 2008. To see other speakers and participants, and to register, go to www.futureinreview.com.

 

If you are an SNS Member, and are interested in a possible speaking role at FiRe, you should register now.

 

_______

 

Don Norman has been an SNS Member since our earliest days, and this is appropriate, as I think of Don as the Godfather of Design in American technology. Long before it was popular, Don was in the trenches at companies like HP and Apple, helping to create great products by making sure that human-centered design was an integrated part of the process. When I wrote my latest issue on this subject, Don offered us a taste of his latest work, recast just for our members. Understanding Don’s ideas puts our readers at the forefront of the increasingly central issue of great design. – mra.

 


» Cautious Cars and Cantankerous Kitchens: How Machines Take Control

[Condensed from Chapter 1 of The Design of Future Things]

 

I’m driving my car through the winding mountain roads between my home and the Pacific Ocean. Sharp curves with steep drop-offs amidst the towering redwood trees and vistas of the San Francisco Bay on one side and the Pacific Ocean on the other. A wonderful drive, the car responding effortlessly to the challenge, negotiating sharp turns with grace. At least, that’s how I am feeling. But then I notice that my wife is tense: she’s scared. Her feet are braced against the floor, her shoulders hunched, her arms against the dashboard. “What’s the matter?” I ask. “Calm down, I know what I’m doing.”

 

Now imagine another scenario. I’m driving on the same winding, mountain road, and I notice that my car is tense: it’s scared. The seats straighten, the seat belts tighten, and the dashboard starts beeping at me. I notice the brakes are being applied automatically. “Oops,” I think, “I’d better slow down.”

 

Do you think the idea of a frightened automobile fanciful? Let me assure you it is not. This behavior already exists on some luxury automobiles – and more is being planned. Stray out of your lane and some cars balk: beeping, perhaps vibrating the wheel or the seat or flashing lights in the side mirrors. Automobile companies are experimenting with partial correction, helping the driver steer the car back into its own lane. Turn signals were designed to tell other drivers that you are going to turn or switch lanes. But now they are also how you tell your own car that you really do wish to turn or change lanes: “Hey, don’t try to stop me,” they tell your car, “I’m doing this on purpose.”
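For readers who like the logic spelled out, here is a minimal sketch, in Python, of the kind of decision rule such a lane-keeping assistant might embody. The thresholds, signal names, and actions are hypothetical illustrations, not any manufacturer’s actual system:

```python
# Hypothetical decision logic for a lane-keeping assistant.
# Thresholds and return values are illustrative, not from any real car.

def lane_keeping_action(lateral_offset_m: float, turn_signal_on: bool) -> str:
    """Decide how the car responds to drift from the lane center.

    lateral_offset_m: distance from lane center, in meters
    turn_signal_on:   True if the driver has signaled intent to change lanes
    """
    DRIFT_WARN = 0.5     # drift (m) that triggers a warning
    DRIFT_CORRECT = 0.9  # drift (m) that triggers partial steering correction

    if turn_signal_on:
        # The driver has declared intent: "Don't try to stop me."
        return "no_action"
    if abs(lateral_offset_m) >= DRIFT_CORRECT:
        return "steer_correction"  # help steer the car back into its lane
    if abs(lateral_offset_m) >= DRIFT_WARN:
        return "warn"  # beep, vibrate the wheel or seat, flash the mirrors
    return "no_action"

print(lane_keeping_action(0.6, turn_signal_on=False))  # -> "warn"
print(lane_keeping_action(0.6, turn_signal_on=True))   # -> "no_action"
```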

 

I once was a member of a panel of consultants advising a major automobile manufacturer. I described how I would respond differently to my wife than to my car. “How come?” asked fellow panelist Sherry Turkle, an MIT professor and an authority on the relationship between people and technology. “How come you listen to your car more than your wife?”

 

“How come?” indeed. Sure, I can make up rational explanations, but they will miss the point. As we start giving the objects around us more initiative, more intelligence, and more emotions and personality, we now have to worry about how we interact with our machines. What does it say about our society when we listen to our machines more than to our friends and relatives?

 

Why do I appear to pay more attention to my car than to my wife? The answer is complex, but in the end, it comes down to communication. When my wife complains, I can ask her why and then either agree with her or try to reassure her. I can also modify my driving so that she is not so disturbed by it. But there is no way to have a conversation with my car: all the communication is one way.

 

“Do you like your new car?” I asked Tom, who was driving me to the airport after a lengthy meeting. “How do you like the navigation system?”

 

“I love the car,” said Tom, “but I never use the navigation system. I don’t like it: I like to decide what route I will take. It doesn’t give me any say.”

 

Machines have less power than humans, so they have more authority. Contradictory? Yes, but oh so true. Consider who has more power in a business negotiation. If you want to make the strongest possible deal, who should you send to the bargaining table – the CEO or someone at a lower level? The answer is counterintuitive: quite often, it is the lower-level employee who can make the better deal.

 

Why? Because no matter how powerful the opposing arguments, the weak representative cannot close the deal. Even in the face of persuasive arguments, he or she can only say, “I’m sorry, but I can’t give you an answer until I consult with my boss,” only to come back the next day and say, “I’m sorry, but I couldn’t convince my boss.” A powerful negotiator, on the other hand, might be convinced and accept the offer, even if later, there was regret.

 

Mind you, successful negotiators understand this bargaining ploy and won’t let their opponents get away with it. When I discussed this with a friend who is a successful lawyer, she laughed at me. “Hey,” she said, “if the other side tried that ploy on me, I’d call them on it. I won’t let them play that game with me.”

 

Well, machines do play this game on us, and we have no way of refusing. When machines intervene, we have no alternative but to let them take over: “It’s this or nothing,” they are saying, where “nothing” is not an option.

 

Consider Tom’s predicament with his car’s navigation system. He asks it for directions and it provides them. Sounds simple. Human-machine interaction: a nice dialogue. A conversation, if you will. But notice Tom’s lament: “It doesn’t give me any say.” Designers of advanced technology are proud of the communication capabilities they have built into their systems. But closer analysis shows this to be a myth. There is no communication, not the real back-and-forth discussion that characterizes true dialogue. What we have are two monologues. We issue commands to the machine and it, in turn, commands us. Two monologues do not make a dialogue.

 

In this particular case, Tom does have a choice. If he turns the navigation system off, the car still functions; because the system doesn’t give him enough say over the route, he simply doesn’t use it. But other systems do not provide this option; the only way to avoid them is not to use the car.

 

The real problem is that these systems can be of great value. Flawed though they may be, they can save lives. The question, then, is how can we change the way we interact with our machines to take better advantage of their strengths and virtues while at the same time eliminating their annoying, and sometimes dangerous, actions?

 

As our technology becomes more powerful, its failure at collaboration and communication becomes ever more critical. Collaboration means synchronizing one’s activities as well as explaining and giving reasons. It means having trust, which can only be formed through experience and understanding. With automatic, so-called “intelligent” devices, trust is sometimes conferred undeservedly. Or withheld, equally undeservedly.

 

Tom decided not to trust his navigational system’s instructions, but in some instances rejecting technology can cause harm. For example, what if Tom turned off his car’s anti-skid brakes or the stability control? Many drivers dislike these systems, believing that they can control the car better than these automatic controls can. But anti-skid and stability systems actually perform far better than all but the most expert, professional drivers. They have saved many lives. But how does the driver know which systems can be trusted, which not?

 

Designers tend to focus upon the technology, attempting to automate whatever is possible for safety and convenience. Their goal is complete automation, except where this is not yet possible because of technical limitations or cost concerns. These limitations, however, mean that tasks can only be partially automated, so the person must always monitor the activity and take over whenever the machine can no longer perform properly. But whenever a task is only partially automated, it is essential that each party – human and machine – know what the other is doing and what is intended.

 

 

Two Monologues Do Not Make a Dialogue

 

More than two thousand years ago, Socrates argued that the book would destroy people’s ability to reason. He believed in dialogue, in conversation and debate. But with a book, there is no debate: the written word cannot talk back. Today, the book is such a symbol of learning and knowledge that we laugh at his argument. But take it seriously for a moment. Despite Socrates’ claims, writing does instruct, even though we cannot debate with the author. Instead, we debate and discuss with one another, in the classroom, in discussion groups, and – if it is an important enough work – through all the media at our disposal: printed newspapers and magazines, radio and television, Internet websites and discussion boards. Nonetheless, his point is valid: a technology that gives no opportunity for discussion, explanation, or debate is a poor technology.

 

With the automated, intelligent technology now entering our lives, there is no debate or discussion. It simply acts, without explanation. Even if we are able to discuss the actions at a later time, after-the-fact reflection is of little use, for the moment of decision has come and gone.

 

As a business executive and as a chair of university departments, I learned that the process of making a decision was often more important than the decision itself. When a person makes decisions without explanation or consultation, people neither trust nor like the result, even if it is the identical course of action they would have taken after discussion and debate.

 

Many business leaders ask, “Why waste time with meetings when the end result will be the same?” But the end result is not the same, for although the decision itself is identical, the way that it will be carried out and executed – and perhaps most important, the way it will be handled if things do not go as planned – will be very different with a collaborating, understanding team than with one that is just following orders.

 

If the car decides an accident is imminent and straightens the seat or applies the brakes, I am not asked or consulted, nor am I even told why. The action just happens. The car follows an authoritarian style, making decisions and allowing no dissent. Is the car necessarily more accurate because, after all, it is a mechanical, electronic technology that does precise arithmetic without error? No, actually not. The arithmetic may be correct, but before doing the computation, it must make assumptions about the road, the other traffic, and the capabilities of the driver. Professional drivers will sometimes turn off the automatic equipment because they know the automation will not allow them to deploy their skills. That is, they will turn off whatever they are permitted to turn off; many modern cars are so authoritarian that they do not even allow this choice.

 

Don’t think that these behaviors are restricted to the automobile. The devices of the future will present the same issues in a wide variety of settings.

 

Automatic banking systems already exist that determine whether you are eligible for a loan. Automated medical systems determine whether you should receive a particular treatment or medication. Future systems will monitor your eating, reading, music, and television preferences. Some systems will watch where you drive, alerting the insurance company, the rental car agency, or even the police if they decide that you have violated their rules. Other systems monitor for copyright violations, making decisions about what should be permitted. In all these cases, the actions are apt to be done arbitrarily, with the systems making gross assumptions about your intentions from a limited sample of your behavior. There is little or no explanation, little or no chance for debate or discussion.

 

So-called “intelligent” systems have become too smug. They think they know what is best for us. Their intelligence, however, is limited. And this limitation is fundamental: there is no way a machine has sufficient knowledge of all the factors that go into human decision making.

 

But this doesn’t mean we should reject the assistance of intelligent machines. Automation is often helpful and convenient. Moreover, automation in trains, ships, and airplanes has saved lives. Automatic stability control and other enhancements in automobiles have been beneficial. But as machines start to take over more and more, they need to become socialized; they need to improve the way they communicate and interact and to recognize their limitations. Only then can they become truly useful. [This is another major theme of the book from which this chapter is excerpted.]

 

When I started writing The Design of Future Things, I thought that the key to socializing machines was to develop better systems for dialogue. But I was wrong. Successful dialogue requires shared knowledge and experiences. It requires appreciation of the environment and context, of the history leading up to the moment, and of the many differing goals and motives of the people involved. I now believe this to be a fundamental limitation of today’s technology, one that prevents machines from full, human-like interaction. It is hard enough to establish this shared, common understanding with people, so how do we expect to be able to develop it with machines?

 

In order to cooperate usefully with our machines, we need to regard the interaction somewhat as we do our interactions with animals. Although both humans and animals are intelligent, we are different species, with different understandings and different capabilities. Similarly, even the most intelligent machine is a different species, with its own strengths and weaknesses, its own understandings and capabilities. Sometimes we need to obey the animals or machines; sometimes they need to obey us.

 

 

Where Are We Going? Who Is to Be in Charge?

 

“My car almost got me into an accident,” Jim told me.

 

“Your car? How could that be?” I asked.

 

“I was driving down the highway using the adaptive cruise control. You know, the control that keeps my car at a constant speed unless there is a car in front, and then it slows up to keep a safe distance. Well, after a while, the road got crowded, so my car slowed. Eventually I came to my exit, so I maneuvered into the right lane and then turned off the highway. By then, I had been using the cruise control for so long, but going so slowly, that I had forgotten about it. But not the car. ‘Hurrah!’ I guess it said to itself, ‘Hey, finally, there’s no one in front of me,’ and it started to accelerate to full highway speed, even though this was the off-ramp that requires a slow speed. Good thing I was alert and stepped on the brakes in time. Who knows what might have happened.”
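Jim’s surprise is easy to reproduce in miniature. The sketch below – with invented names and numbers – captures the core rule an adaptive cruise control follows (match the lead car, otherwise resume the set speed) and shows why the car happily surges on an exit ramp: nothing in its inputs says “this is no longer a highway.”

```python
# A deliberately simplified adaptive-cruise-control rule. Names and numbers
# are invented; the point is the mode confusion, not the engineering.
from typing import Optional

def acc_target_speed(set_speed: float, lead_speed: Optional[float]) -> float:
    """Speed the car will seek: follow the lead car, else resume set speed."""
    if lead_speed is None:
        # No car ahead. The controller resumes the driver's set speed --
        # it has no concept of "we just turned onto an exit ramp."
        return set_speed
    return min(set_speed, lead_speed)

# Crawling behind highway traffic, the driver forgets the system is on:
print(acc_target_speed(set_speed=65.0, lead_speed=25.0))  # -> 25.0
# On the empty exit ramp the lead car vanishes -- and the car accelerates:
print(acc_target_speed(set_speed=65.0, lead_speed=None))  # -> 65.0
```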

 

We are in the midst of a major change in how we relate to technology. Until recently, people have been in control. We turned the technology on and off, told it which operations to perform, and guided it through its operations. As technology became more powerful and complex, we became less able to understand how it worked, less able to predict its actions.

 

Once computers and microprocessors entered the scene, we often found ourselves lost and confused, annoyed and angered. But still, we considered ourselves to be in control. No longer. Now, our machines are taking over. They act as if they have intelligence and volition, even though they don’t. They make decisions for themselves according to their view of events, a view that often does not correspond with our own. And they monitor us, correcting our behavior according to some pre-established norms buried deep within their hidden programming, forbidding this, requiring that, even taking over when we do not do their bidding.

 

Machines monitor us with the best of intentions, of course. When everything works, these smart machines can indeed be helpful: increasing safety, reducing the boredom of tedious tasks, making our lives more convenient, and doing the tasks more accurately than we could. It is indeed convenient that the automobile automatically slows when a car darts too close in front of us, that it shifts gears quietly and smoothly, or in the home, that our microwave oven knows just when the potatoes are cooked.

 

But what about when the technology fails? What about when it does the wrong thing, or fights with us for control? What about when Jim’s auto notices that there are no cars in front of it, so it accelerates to highway speed, even though it is no longer on a highway? The same mechanisms that are so helpful when things are normal can decrease safety, decrease comfort, decrease accuracy, when unexpected situations arise. For us, the people involved, it leads to danger and discomfort, frustration and anger.

 

Today, machines primarily signal their states through alerts and alarms – which means, only when they get into trouble. As a result, when a machine fails, a person is often forced to take over rapidly to avert an accident, often with no advance warning and insufficient time to react properly. Jim was able to correct his car’s behavior in time. But what if he hadn’t been? He would have been blamed for causing an accident. Ironically, if the actions of a so-called intelligent device lead to an accident, it will probably be blamed on human error!

 

The proper way to provide for smooth interaction between people and intelligent devices is to enhance the coordination and cooperation of both parties, people and machines. But this is often not understood by those who design these systems. Even if it were, there are no commonly accepted ways of doing this.

 

It is difficult for people to convey their intentions to machines, difficult for machines to tell people just the proper amount about the situation. We don’t want machines that are always talking to us, telling us every detail of every action they have taken or are about to take; we just want to know the important things. But how is a machine able to judge what is important and what not, especially when what is important in one situation may not be in another?

 

I have told the story of Jim and his enthusiastic car to engineers from several automobile companies. Their responses have been similar, always having two components. First, they blame the driver. Why didn’t he turn off the cruise control before exiting? I explain that he had forgotten about it. “Then he was a poor driver,” is their response. This kind of “blame and train” philosophy always makes the blamer – or the insurance company, the legislative body, or society – feel good: if people make errors, punish them. But it doesn’t solve the underlying problem. Poor design, and often poor procedures, poor infrastructure, and poor operating practices are the true culprits; people are simply the last step in this complex process.

Although the car companies are technically correct that the driver should remember the mode of the car’s automation, that is no excuse. We must design our technologies for the way people actually behave, not the way we would like them to behave. Moreover, the automobile does not help the driver remember. In fact, it is designed as if to help the driver forget! There is hardly any clue as to the state of the cruise-control system: the car could do a far better job of reminding the driver when it is taking control.

 

When I say this to engineers, they promptly introduce the second component of their response: “Yes, this is a problem, but don’t worry, we will fix it. You’re right, the car’s navigation system should realize that the car is now on the exit road, so it should automatically either disconnect the cruise control or, at least, change its setting to a safe speed.”

 

What we have here illustrates the fundamental problem. The machine is not intelligent; the intelligence is in the mind of the designer. Designers sit in their offices, attempting to imagine all that might happen to the car and driver, and then devise solutions. But how can the designers determine the appropriate response to something that is unexpected? When this happens to a person, we can expect creative, imaginative problem solving. But the “intelligence” in our machines is not in the device; it is in the heads of the designers. So when the unexpected happens, the machine usually fails. After all, these are precisely the events the designers never thought about, or thought about in ways different from the way they actually happened.

 

We know two things about unexpected events: first, they always occur; second, when they do occur, they are always unexpected.

 

I once got a third response from an automobile company engineer about Jim’s experience. He sheepishly admitted that the exit-lane problem had happened to him, but that there was yet another problem: lane changing. On a busy highway, if the driver decides to change lanes, he or she waits until there is a sufficiently large gap in the traffic in the new lane and then quickly darts over there. That usually means that the car is close to the ones in front and behind. The adaptive cruise control is likely to decide it is too close to the car in front and therefore brake.

 

“What’s the problem with that?” I asked. “Yes, it’s annoying, but it sounds safe to me.”

 

“No,” said the engineer. “It’s dangerous because the driver behind you didn’t expect you to dart in front and then suddenly put on the brakes. If they aren’t paying close attention, they could run into you from behind. But even if they don’t hit you, the driver behind is annoyed with your driving behavior.”

 

“Maybe,” said the engineer, laughing, “the car should have a special brake light that would come on when the brakes were applied by the automobile itself rather than by the driver, telling the car behind, ‘Hey, don’t blame me, the car did it.’ ”

 

The engineer was joking, but his comments reveal the tensions between the behavior of people and machines. People take actions for all sorts of reasons – some good, some bad, some considerate, some reckless. Machines are more consistent, evaluating the situation according to the logic and rules programmed into them. But machines have fundamental limitations: they do not sense the world in the same way as people do, they lack higher-order goals, and they have no way of understanding the goals and motives of the people with whom they must interact.

 

Machines, in other words, are fundamentally different: superior in some ways – especially in speed, power, and consistency – and inferior in others, especially in social skills, creativity, and imagination. Machines lack the empathy required to consider how their actions impact those around them. These differences, especially in what we would call social skills and empathy, are the cause of the problems. Moreover, these differences – and therefore, these conflicts – are fundamental, not ones that can be quickly fixed by changing the logic here or adding a new sensor there.

 

The result is that the actions of machines are often in contradiction to what a person would do. In many cases, this is perfectly fine: if my washing machine cleans clothes very differently than I would, I don’t care, as long as the end result is clean clothes. The reason machine automation works here is that, once the washing machine has been loaded and started, it is a closed environment: the machine takes over, and as long as I refrain from interfering, everything works smoothly.

 

But what about environments where people and machines work together? Or what happens with my washing machine if I change my mind after it has started? How do I tell it to use a different setting? And once the washing cycle has started, when will the changes take effect? Right away? With the next filling of the machine?

 

Here, the differences between the way the machine and a person would react really matter. Sometimes, it would appear that the machine is acting completely arbitrarily, although if the machine could think and talk, I suspect it would explain that from its point of view, the person is the one being arbitrary. To the person this can be frustrating, a continual battle of wills. To the observer, it can be confusing, for it is never clear who is in charge or why a particular action was taken. It doesn’t really matter whether it is the machine or the person who is correct; the mismatch is what matters, for this is what gives rise to aggravation, frustration, and in some cases, damage or injury.

 

The conflict between human and machine action is fundamental because machines, whatever their capabilities, simply do not know enough about the surroundings, the goals and motives of the people, and the special circumstances that invariably surround any set of activities. Machines work very well when they work in controlled environments, where no pesky humans get in the way, where there are no unexpected events, and where everything can be predicted with great accuracy. That’s where automation shines.

 

In research laboratories around the world, scientists are working on even more ways of introducing machine intelligence into our lives. There are experimental homes that sense all the actions of their inhabitants, turning the lights on and off, adjusting the room temperature, even selecting the music. The list of projects in the works is impressive: refrigerators that refuse to let you eat inappropriate foods, tattletale toilets that secretly tell your physician about the state of your body fluids.

 

Refrigerators and toilets may seem to be an unlikely pairing, but they team up to monitor eating behavior – the one attempting to control what goes into the body, the other measuring and assessing what comes out. We have scolding scales watching over weight. Exercise machines demanding to be used. Even teapots shrilly whistling at us, demanding immediate attention.

 

As we add more and more smart devices to daily life, our lives are transformed both for good and for bad. Good, when the devices work as promised. Bad, when they fail, or when they transform our productive, creative lives into that of servant, continually looking after our machines, getting them out of trouble, repairing them, maintaining them.

 

This is not the way it was supposed to be, but it certainly is the way it is. Is it too late? Can we do something about it?

 

 

» The Rise of the Smart Machine

 

Toward a Natural, Symbiotic Relationship

 

In the 1950s, the psychologist J.C.R. Licklider attempted to determine how people and machines could interact gracefully and harmoniously, in what he called a “symbiotic relationship,” so that the resulting partnership would enhance our lives.

 

What would it mean to have a graceful symbiosis of people and technology? As noted earlier, we need a more natural form of interaction, an interaction that can take place subconsciously, without effort, whereby the communication in both directions is done so naturally, so effortlessly, that the result is a smooth merger of person and machine, jointly doing the task.

 

There are numerous instances of “natural interaction.” Let me discuss four here that demonstrate different kinds of relations: between people and traditional tools; between horse and rider; between driver and automobile; and, finally, one involving machine automation – “recommendation” systems that suggest books to read, music to listen to, and films to watch.

 

Skilled artisans work their materials through their tools, just as musicians relate to their instruments. Whether painter or sculptor, woodworker or musician, the tools and instruments feel as if they are a part of the body, so the craftspeople do not act as if they are using tools; they act as if they were directly manipulating the items of interest: paint on canvas, the sculptured material – wood – or, for musicians, the resulting musical sounds. The feel of the materials provides feedback to the person: smooth and resonant here, bumpy or rough there. The interaction is complex, but pleasurable. This symbiotic relationship only occurs when the person is well-skilled and the tools well-designed. But when it happens, this interaction is positive, pleasurable, and effective.

 

Think of skilled horseback riders. The riders “read” the horse, just as horses can read their riders. Each conveys information to the other about what is ahead. Horses communicate with their riders through body language, gait, readiness to proceed, or by their general behavior: wary, skittish, or edgy; eager, lively, or playful. The horses’ gait, behavioral responses, posture, and state of body relaxation or tenseness all communicate important information to skilled riders.

 

In turn, riders communicate with horses through body language: by the way they sit; the pressures exerted by their knees, feet, and heels; and the signals communicated through the hands and reins. Riders also communicate ease and mastery, or discomfort and unease. This interaction is the second positive example. It is of special interest because it involves two sentient systems, horse and rider, both intelligent, both interpreting the world and communicating their interpretations to the other.

 

Example 3 is similar to the horse and rider, except that now we have a sentient being interacting with a sophisticated, but non-sentient, machine. At its best, this is a graceful interaction among the feel of the automobile, the track, and the actions of the driver.

 

I sit beside my son while he drives my highly tuned German sports car at high speed on the race track that we have rented for the afternoon. We approach a sharp curve, and I watch as he gently brakes, shifting the car’s weight forward, then turns the steering wheel so that as the front end of the car turns, the rear end, now with reduced weight bearing down, skids, putting the car into a deliberate, controlled skid, known as an “oversteer” condition. As the rear end swings around, my son straightens the steering wheel and accelerates, shifting the car’s weight back to the rear wheels so that we are once again accelerating smoothly down a straightaway, with the pleasure of feeling in complete control. All three of us enjoyed the experience: me, my son, and the car.

 

Example 4, recommendation systems, is very different from the other three, for it is slower, less graceful, more intellectual. Nonetheless, it is an excellent example of a positive interaction between people and complex systems, primarily because it suggests without controlling, without annoyance: we are free to accept or ignore its recommendations. The same logic that allows Internet sites to look over your past selections and recommend new books, movies, or music that you might also like now watches over the television shows you view at home, recording other shows it thinks you might like.

 

These systems work in a variety of ways, but all suggest items or activities that you might like, basing their suggestions upon your past selections or activities, upon the similarities among items in their databases, and upon items you have not yet tried that were liked by people whose tastes appear similar to yours. As long as the recommendations are presented in a non-invasive fashion, eliciting your voluntary examination and participation, they can be helpful. Consider the search for a book on one of the Internet websites. Being able to read an excerpt, the table of contents, the index, and reviews is often beneficial in deciding whether to make a purchase.
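For the curious, here is a toy sketch of the “people with tastes similar to yours” idea – a bare-bones version of what is often called collaborative filtering. The ratings data, names, and scoring are invented; real systems add weighting, implicit feedback, and enormous scale:

```python
# Toy user-based collaborative filtering. All data and names are invented.
from math import sqrt

ratings = {  # user -> {item: rating}
    "ann": {"book_a": 5, "book_b": 4, "book_c": 1},
    "bob": {"book_a": 4, "book_b": 5, "book_d": 4},
    "cal": {"book_c": 5, "book_e": 4},
}

def similarity(u: dict, v: dict) -> float:
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user: str) -> list:
    """Rank items the user hasn't tried by how similar users rated them."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(ratings[user], theirs)
        for item, rating in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ann"))  # -> ['book_e', 'book_d']
```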

 

Better yet, some sites explain why they have made the recommendations, offering to let people tune their database of likes and dislikes. I have seen recommendation systems in research laboratories that watch over your activities, so that if you are reading or writing they suggest articles to read by finding items that are similar in content to what is on your display. These systems work well for several reasons. First, they do offer value, for the suggestions are often relevant and useful. Second, they are presented in a non-intrusive manner, off to the side, without distracting from the primary task, but readily available when you are ready.

 

Not all recommendation systems are so effective, for some are intrusive; some seem to violate one’s privacy. When done well, they demonstrate that intelligent systems can add pleasure and value to our interactions with machines.

 

 

Skittish Horses, Skittish Machines

 

What would it mean for a car and driver to interact much as a skilled rider interacts with a horse? Suppose a car balks, or acts skittish when getting too close to cars ahead, or when driving at a speed the car computes is dangerous? Suppose the car responds smoothly and gracefully to appropriate commands, sluggishly and reluctantly to others? Would it be possible to devise a car whose physical responsiveness communicated the safety status to the driver?

 

What about your house? What would it mean to have a skittish house? I can see my vacuum cleaner or stove acting up, wanting to do one thing when I wanted it to do another. But my house?

 

Today, companies are poised to transform your home into an automated beast, always looking out for your best interests, providing you with everything you need and desire, even before you know you need or desire it. “Smart homes,” these are called, and there are many companies anxious to equip, wire, and control them. Homes that control the lighting according to their perception of your moods, homes that choose what music to play or that direct the television images to move from screen to screen as you wander about the house.

 

All these “smart” and “intelligent” devices pose the question of how we will be able to relate to all this smartness. If we want to learn to ride a horse, we have to practice. Better yet, take lessons. So do we need to practice how to use our home? Take lessons on getting along with our appliances?

 

What if we could devise natural means of interaction between people and machines? Could we learn from the way that skilled riders interact with horses? Perhaps. We would need to determine the appropriate behavioral mappings between the behaviors and states of the horse and rider and those of the car and driver. How would a car indicate nervousness? What is the equivalent for a car to a horse’s posture or skittishness? If a horse conveys its emotional state by rearing back and tensing its neck, what might the equivalent be for a car? What if suddenly your car reared back, lowering its rear end while raising the front, perhaps moving the front end left and right?

 

What about drivers? Would they signify confidence by the way they held the steering wheel or applied the brake? Could a car distinguish the deliberate, controlled skid from an unintentional, out-of-control skid? Why not? The driver’s behavior is quite different in these two cases. Could drivers indicate doubt by hesitating to steer, brake, or accelerate? Some cars today provide full braking power even if the driver does not brake fully. What if the car watched where the driver was looking, measured the driver’s heart rate or anxiety?
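Purely as a thought experiment, the skid question might reduce to a heuristic like the one below. Every input and threshold is speculative – nothing here describes a real system:

```python
# Purely speculative: a heuristic for telling a deliberate, controlled
# skid from a panicked one by watching the driver. Every input here is
# invented for illustration.

def skid_seems_deliberate(counter_steering: bool,
                          throttle_steady: bool,
                          inputs_abrupt: bool) -> bool:
    """Guess intent: drifters counter-steer smoothly and hold the throttle;
    panicked drivers make abrupt, inconsistent inputs."""
    return counter_steering and throttle_steady and not inputs_abrupt

# A smooth, counter-steered slide reads as intentional:
print(skid_seems_deliberate(True, True, False))   # -> True
# Abrupt, inconsistent inputs read as a driver in trouble:
print(skid_seems_deliberate(False, False, True))  # -> False
```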

 

Such signals are natural interactions, akin to what the horse receives from its rider, and in fact all of these methods are actually being explored in research laboratories. Research scientists in the automobile companies are experimenting with measures of emotion and attention, and at least one model of automobile sold to the public has a television camera on the steering column that watches drivers, deciding whether or not they are paying attention. If the automobile decides that a crash is imminent but the driver is looking elsewhere, it brakes.

 

Similarly, scientists are hard at work developing smart homes that monitor their inhabitants, assessing their moods and emotions, and adjusting room temperature, lighting, and background music.

 

I’ve visited several of these experiments and observed the results. At one research facility in a European university, people were asked to play a stressful video game, then allowed to rest afterwards in a special experimental room furnished with comfortable chairs, friendly and aesthetically pleasing furniture, and lighting specially designed to relax the inhabitants. Indeed, I found it to be a calm and restful environment.

 

The goal of the research was to understand how to develop room environments appropriate to a person’s emotional state. Could a home relax its inhabitants automatically when it detected stress? Or perhaps the home could take on a zingy, upbeat mood with bright lights, lively music, and warm colors when it determined that the inhabitants needed an energy boost to get them active and engaged. Houses that cheer you up – now there is a thought.

 

 

Thinking For Machines Is Easy, Physical Actions Are Hard;
Logic Is Simple, Emotion Difficult

 

Many of us grew up with the robots and giant brains of novels, movies, and television, where machines were all-powerful, sometimes clumsy (think of Star Wars’ C-3PO), sometimes omniscient (think of 2001’s HAL), and sometimes indistinguishable from people (think of Rick Deckard, hero of the movie Blade Runner: is he human or replicant?). Reality is rather different from fiction: 21st-century robots can’t conduct any meaningful communication with people; indeed, they are barely capable of walking, and their ability to manipulate real objects in the world is pathetically weak.

 

There is an interesting disjunction between the things that people find easy and hard and those that machines find easy and hard. Thinking, which once was held up as the pinnacle of human achievement, is the area in which machines have made the greatest progress, especially any thinking that requires logic and attention to detail. Physical actions, such as standing, walking, jumping, and avoiding obstacles, are relatively easy for people – difficult, if not impossible, for machines. Emotions play an essential role in human and animal behavior, acting as a value system for judging what is good or bad, safe or unsafe, while also providing a powerful communication system for conveying feelings and beliefs, reactions and intentions among people. Machine emotions are simplistic.

 

Despite these limitations, many scientists are still striving to create the grand dream of intelligent machines that will communicate effectively with human beings. It is in the nature of research scientists to be optimists, to believe that they are doing the most important activity in the world, and that, moreover, they are close to a significant breakthrough.

 

Although monitoring eating habits wasn’t really possible until recently, we can now attach tiny, barely visible tags to everything: clothes, products, food items, even people and pets, so everything and everybody can be tracked. These are called Radio Frequency Identification (RFID) tags. No batteries are required, because these devices cleverly take their power from the very signal that is sent to them asking them to state their business, their identification number, and any other tidbits about the person or object they feel like telling. When all the food in the house is tagged, the house knows what you are eating: RFID tags plus TV cameras, microphones, and other sensors. “Eat your broccoli.” “No more butter.” “Do your exercises.” Cantankerous kitchens? That’s the least of it.
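Mechanically, the nagging kitchen requires surprisingly little. Here is a sketch of how raw tag reads might become scoldings, assuming a hypothetical tag-to-food lookup and a daily-limits table; no real product or API is implied:

```python
# A sketch of how a tagged kitchen might turn raw RFID reads into nagging.
# The tag database, limits, and function are all hypothetical.

TAG_TO_FOOD = {"tag-0451": "butter", "tag-0452": "broccoli",
               "tag-0453": "ice cream"}
DAILY_LIMITS = {"butter": 1, "ice cream": 0}  # servings the house allows

def house_reaction(tags_removed, servings_so_far):
    """Return the house's scoldings for items taken from the fridge."""
    nags = []
    for tag in tags_removed:
        food = TAG_TO_FOOD.get(tag)
        if food is None:
            continue  # untagged items escape surveillance -- for now
        used = servings_so_far.get(food, 0)
        limit = DAILY_LIMITS.get(food)
        if limit is not None and used >= limit:
            nags.append(f"No more {food}.")
        servings_so_far[food] = used + 1
    return nags

print(house_reaction(["tag-0453"], {}))  # -> ['No more ice cream.']
print(house_reaction(["tag-0452"], {}))  # -> [] -- broccoli is always fine
```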

And what if you decide to do something that the house thinks is bad for you, or perhaps simply wrong? “No,” says the house, “that’s not the proper way to cook that. If you do it that way, I can’t be responsible for the result. Here, look at this cookbook. See? Don’t make me say ‘I told you so.’”

 

Shades of Minority Report, the Steven Spielberg movie based upon the great futurist Philip K. Dick’s short story by that name. As the hero, John Anderton, flees from the authorities, he passes through the crowded shopping malls. The advertising signs recognize him passing by, calling him by name, tempting him with offers of clothes and special sale prices just for him. A car advertisement calls out: “It’s not just a car, Mr. Anderton. It’s an environment, designed to soothe and caress the tired soul.” A travel agency entices him: “Stressed out, John Anderton? Need a vacation? Come to Aruba!” Hey, signs, he’s running away from the cops; he isn’t going to stop and buy some clothes.

 

Minority Report was fiction, but the technology depicted in the movie was designed by clever, imaginative experts who were very careful to depict only technologies and activities that were plausible. Those active advertising signs are already close to reality. Billboards in multiple cities recognize owners of BMW’s MINI Cooper automobile by the RFID tags they carry. The MINI Cooper advertisements are harmless, and each driver has volunteered and selected the phrases that will be displayed. But now that this has started, where will it stop? Today, the billboard requires the person to carry an RFID tag, but this is a temporary expedient.

 

Already, researchers are hard at work, using television cameras to view automobiles and people and then to identify them by their gait, their facial features, or, in the case of autos, their model, year, color, and license plate.

 

This is how the City of London keeps track of cars that enter the downtown area. This is how security agencies expect to be able to track suspected terrorists. And this is how advertising agencies will track down potential customers. Signs in the shopping mall that offer special bargains for frequent shoppers? Restaurant menus that offer your favorite meals? First a short science-fiction story, then a movie, then on the city streets. Look for them at your nearest shops. Actually, you won’t have to look; they will be looking for you.

 

 

Communicating with Our Machines: We Are Two Different Species

 

I can imagine it now: it’s the middle of the night, but I can’t sleep. I quietly get out of bed, careful not to wake up my wife, deciding that as long as I can’t sleep, I might as well do some work. But my house detects my movement and cheerfully announces “Good morning” as it turns on the lights and starts the radio news station. The noise wakes my wife. “Why are you waking me up so early?” she mumbles.

 

The technologists will try to reassure us that all technologies start off by being weak and under-powered, that eventually their deficits are overcome and they become safe and trustworthy. Their message is sensible and logical – and they have been saying it since the beginning of the machine age.

 

At one level, they are correct. Steam engines and steamships used to explode; they seldom do anymore. Early aircraft crashed frequently. Today they hardly ever do. Remember Jim’s problem with the cruise control that regained speed in an inappropriate location? I am certain that this particular situation can be avoided in future designs by coupling the speed control with the navigation system, or perhaps by systems where the roads themselves transmit the allowable speeds to the cars (hence, no more ability to exceed speed limits). Or better, where the car itself determines what speed is safe given the road, its curvature, slipperiness, and the presence of other traffic or people.
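That last idea is ordinary physics. On a flat curve, tire friction must supply the centripetal force, so m·v²/r ≤ μ·m·g, which gives a maximum safe cornering speed of v = √(μ·g·r). Here is a minimal sketch, assuming the car can estimate the curve radius and the road’s coefficient of friction:

```python
# Safe cornering speed from curve radius and road friction. The friction
# values are typical textbook figures, used here only for illustration.
from math import sqrt

def max_safe_speed(radius_m: float, friction_mu: float,
                   g: float = 9.81) -> float:
    """Maximum cornering speed (m/s) on a flat curve of the given radius."""
    return sqrt(friction_mu * g * radius_m)

# A 50 m exit-ramp curve on dry pavement (mu ~ 0.7) vs. wet (mu ~ 0.4):
print(max_safe_speed(50, 0.7))  # ~18.5 m/s, about 67 km/h
print(max_safe_speed(50, 0.4))  # ~14.0 m/s, about 50 km/h
```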

 

But faster than we can solve the old problems, new technologies and new gadgets will appear, all with new capabilities. New capabilities will lead to new virtues, along with new problems – unexpected problems at unexpected times. The rate at which new technologies are being introduced into society increases each year, a trend that has been true for hundreds of years. Lots of new devices, lots of potential benefits, and lots of new, unexpected ways to go wrong. Over time, our lives become better and safer, worse and more dangerous, simultaneously.

 

I am a technologist. I believe in making lives richer and more rewarding through the use of science and technology. But that is not where our present path is taking us. Today we are confronting a new breed of machines, one with intelligence and autonomy, machines that can indeed take over for us in many situations. In many cases, they will make our lives more effective, more fun, and safer. But in others, they will frustrate us, get in our way, and even add to the danger. For the first time, we have machines that are attempting to interact with us socially.

 

Social interaction requires effective communication. Communication between two individuals requires that there be a give-and-take, a discussion, a sharing of opinions. This, in turn, requires that each understand the arguments, beliefs, and intentions of the other. We cannot have effective communication with our machines, for we are different species, sensing the world differently, thinking differently. If the world were controlled and regular, the machines would work just fine. But that is not the real world, the world of unexpected happenings, the world of multiple people with multiple and conflicting goals, dreams, and motives.

 

The problems that we face with technology are fundamental. They cannot be overcome by following old pathways. We need a calmer, more reliable, more humane approach. We need augmentation, not automation. Facilitation, not intelligence. It is time for the science of natural interaction between people and machines, an interaction very different from what we have today.

 


 

 

Copyright © 2007 Donald A. Norman. Used with permission. All rights reserved. http://www.jnd.org; don@jnd.org. Excerpt from Chapter 1 of The Design of Future Things. New York: Basic Books. (Available in hardcover and, soon, for Kindle.)

 

 

 

About Donald A. Norman

 

Don Norman is the champion of human-centered design. “The well-rounded product,” says Norman, “will enhance the heart as well as the mind, being a joy to behold as well as to use.”

 

Don is the Breed Senior Professor in Design in the School of Engineering at Northwestern University, where he is co-director of MMM, the MBA and Engineering program offered jointly by Northwestern’s schools of Business and Engineering that emphasizes design and operations. He also serves as co-director of the Segal Design Institute.

 

Don is co-founder of the Nielsen Norman Group and has been vice president of Apple Computer and an executive at Hewlett-Packard. He serves on many advisory boards, such as the editorial advisory board of Encyclopedia Britannica and the Department of Industrial Design at the Korea Advanced Institute of Science and Technology (KAIST). He has received honorary degrees from the University of Padova (Italy) and the Technical University of Delft (the Netherlands); the “Lifetime Achievement Award” from SIGCHI, the professional organization for Computer-Human Interaction; and the Benjamin Franklin Medal in Computer & Cognitive Science from the Franklin Institute (Philadelphia).

 

Don is well-known for his books The Design of Everyday Things and Emotional Design. His newest book, The Design of Future Things, discusses the role that automation plays in such everyday places as the home and automobile. He lives at www.jnd.org.

 

___________

 

 

I would like to thank Don for creating this Special Letter version of his latest work for us, and encourage all of our members to read The Design of Future Things.

 

As society adds more technology to our everyday lives, as well as to our most critical tasks, we will be compelled to follow Don’s sage advice regarding simplicity and the proper regard for what is “human” in device and process design. Not doing so will frustrate many users, put others at risk, and raise the odds of failure in the product market.

 

I want to live in a world populated by devices and services made by students of Don Norman, so that my car continues to work properly and safely, my refrigerator doesn’t try to become a calculator just because it can, and my TV remote doesn’t end up looking like my TV remote looks today.

 

Thank you, Don, and keep up the great work.

 

 

Your comments are always welcome.


Sincerely,

Mark R. Anderson

CEO
Strategic News Service LLC              Tel. 360-378-3431
P.O. Box 1969                                    Fax. 360-378-7041
Friday Harbor, WA  98250  USA         Email: sns@tapsns.com

 

 

 

 

» How to Subscribe

(All rates $USD)

 

If you are not a subscriber, the prior Strategic News Service item has been sent to you for a one-month trial. If you would like a one-year subscription to SNS, the current rate is $595, which includes approximately 48 issues per year, plus special industry alerts and related materials; two years are $995. Premium Subscriptions, which include passworded access to additional materials on the SNS website, are $895 per year. Subscriptions can be purchased, upgraded, or renewed at our secure website, at: www.stratnews.com. Conversion of your trial to full subscription will lead to 13 months of SNS, no matter when you convert.

 

VOLUME CORPORATE SUBSCRIPTION RATES: Below half price, upon registration with SNS for a minimum of 10 subscriptions at $2950. SMALL COMPANY (10 employees or fewer) SITE LICENSE: $1495. TEACHERS’ GROUP RATE (five teachers): $295.

 

STUDENT and INDEPENDENT JOURNALIST RATE: $295 per year.

 

This service is intended for strategic thinkers who depend upon business technology planning. The SNS charter is to provide information about critical computer and telecommunications issues, trends and events not available to managers through the press. Re-purposing of this material is encouraged, with proper attribution.

 

Email sent to SNS may be reprinted, unless you indicate that it is not to be.

 

 

» May I Share This Newsletter?

 

If you are aware of others who would like to receive this service, please forward this message to them, with a cc: to Mark Anderson at sns@stratnews.com; they will automatically receive a free one-month pilot subscription.

ANY OTHER UNAUTHORIZED REDISTRIBUTION IS A VIOLATION OF COPYRIGHT LAW.

 

 

» About the Strategic News Service

 

SNS is the most accurate predictive letter covering the computer and telecom industries. It is personally read by the top managers at companies such as Intel, Microsoft, Dell, HP, Cisco, Sun, Google, Yahoo!, Ericsson, Telstra, and China Mobile, as well as by leading financial analysts at the world’s top investment banks and venture capital funds, including Goldman Sachs, Merrill Lynch, Kleiner Perkins, Venrock, Warburg Pincus, and 3i. It is regularly quoted in top industry publications such as BusinessWeek, WIRED, Barron’s, Fortune, PC Magazine, ZDNet, Business 2.0, the Financial Times, the New York Times, the Wall St. Journal, and elsewhere.

 

» About the Publisher

 

Mark Anderson is CEO of the Strategic News Service™. He is the founder of two software companies and of the Washington Software Alliance Investors’ Forum, Washington’s premier software investment conference; and has participated in the launch of many software startups. He regularly appears on the CNN World News, CNBC and CNBC Europe, Reuters TV, the BBC, Wall Street Review/KSDO, and National Public Radio programs. He is a member of the Merrill Lynch Technology Advisory Board, and is an advisor and/or investor in Ignition Partners, Mohr Davidow Ventures, Voyager Capital, and others.

Mark serves as chair of the Future in Review Conferences, SNS Project Inkwell, The Foresight Foundation, and Orca Relief Citizens’ Alliance.

Disclosure: Mark Anderson is a portfolio manager of a hedge fund. His fund often buys and sells securities that are the subject of his columns, both before and after the columns are published, and the position that his fund takes may change at any time. Under no circumstances does the information in this newsletter represent a recommendation to buy or sell stocks.

 

» SNS Website Links

 

For additional predictions and information, please visit:

The SNS website: www.stratnews.com

SNS Blog: www.tapsns.com/blog

 

SNS Media Page: www.tapsns.com/media.php

 

SNS Future in Review (FiRe) Conference website: www.futureinreview.com

SNS Members’ Gallery: www.tapsns.com/gallery.php

SNS Project Inkwell: www.projectinkwell.com

Orca Relief Citizens’ Alliance (www.orcarelief.org), a 501(c)(3) non-profit effort to study and reduce Orca mortality rates, supported largely by technology workers. Contributions may be sent to: ORCA, Box 1969, Friday Harbor, Washington 98250.

 

» Where’s Mark?

 

On January 31st, Mark will be hosting a planning meeting for FiRe staff and Advisory Board members in Seattle, at the Inn at the Market. On February 27th, he will be hosting a meeting of the SNS Project Inkwell Steering Committee in Salt Lake City, followed on February 28th and 29th by Inkwell Working Group meetings. On May 20th-23rd, he will be hosting Future in Review 2008 at the Hotel del Coronado. Please go to www.futureinreview.com for details and registration.

 

 

In between times, he will be cleaning up the pickup truck for a trip to the big city – except you can’t clean the sawdust out at the car wash. What kind of world is this, where you have to wash your truck before you take it to the car wash?

 

 

Copyright 2008, Strategic News Service LLC.

 

“Strategic News Service,” “SNS,” “Future in Review,” “FiRe,” “SNS Ahead of the Curve,” and “SNS Project Inkwell” are all registered service marks of Strategic News Service LLC.

 

ISSN 1093-8494

...

