Nothing Eileen Oldaker tried could calm her mother when she called from the nursing home, disoriented and distressed in what was likely the early stages of dementia. So Ms. Oldaker hung up, dialed the nurses’ station and begged them to get Paro.
Paro is a robot modeled after a baby harp seal. It trills and paddles when petted, blinks when the lights go up, opens its eyes at loud noises and yelps when handled roughly or held upside down. Two microprocessors under its artificial white fur adjust its behavior based on information from dozens of hidden sensors that monitor sound, light, temperature and touch. It perks up at the sound of its name, praise and, over time, the words it hears frequently.
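Paro's sensor-driven reactions, as described above, amount to a reactive loop: read the sensors, pick a behavior. A minimal sketch of that idea follows; the sensor names, thresholds, and priority order are illustrative assumptions, not details of the actual firmware.

```python
def paro_reaction(sensors):
    """Map a snapshot of (hypothetical) sensor readings to one behavior,
    loosely mimicking the reactions described in the article."""
    if sensors.get("upside_down") or sensors.get("touch_pressure", 0) > 8:
        return "yelp"                  # handled roughly or held upside down
    if sensors.get("sound_level", 0) > 70:
        return "open_eyes"             # loud noise
    if sensors.get("petting"):
        return "trill_and_paddle"      # petted gently
    if sensors.get("light_level", 0) > 50:
        return "blink"                 # lights go up
    return "rest"

# A gentle pat elicits the trill-and-paddle response.
print(paro_reaction({"petting": True, "light_level": 30}))  # trill_and_paddle
```

The real robot layers learning on top of this (it perks up at its own name and at frequently heard words), which a fixed rule table like this cannot capture.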
“Oh, there’s my baby,” Ms. Oldaker’s mother, Millie Lesek, exclaimed that night last winter when a staff member delivered the seal to her. “Here, Paro, come to me.”
“Meeaakk,” it replied, blinking up at her through long lashes.
Janet Walters, the staff member at Vincentian Home in Pittsburgh who recalled the incident, said she asked Mrs. Lesek if she would watch Paro for a little while.
“I need someone to baby-sit,” she told her.
“Don’t rush,” Mrs. Lesek instructed, stroking Paro’s antiseptic coat in a motion that elicited a wriggle of apparent delight. “He can stay the night with me.”
After years of effort to coax empathy from circuitry, devices designed to soothe, support and keep us company are venturing out of the laboratory. Paro, its name derived from the first sounds of the words “personal robot,” is one of a handful that take forms that are often odd, still primitive and yet, for at least some early users, strangely compelling.
For recovering addicts, doctors at the University of Massachusetts are testing a wearable sensor designed to discern drug cravings and send text messages with just the right blend of tough love.
For those with a hankering for a custom-built companion and $125,000 to spend, a talking robotic head can be modeled on the personality of your choice. It will smile at its own jokes and recognize familiar faces.
For dieters, a 15-inch robot with a touch-screen belly, big eyes and a female voice sits on the kitchen counter and offers encouragement after calculating their calories and exercise.
“Would you come back tomorrow to talk?” the robot coach asks hopefully at the end of each session. “It’s good if we can discuss your progress every day.”
Robots guided by some form of artificial intelligence now explore outer space, drop bombs, perform surgery and play soccer. Computers running artificial intelligence software handle customer service calls and beat humans at chess and, maybe, “Jeopardy!”
Machines as Companions
But building a machine that fills the basic human need for companionship has proved more difficult. Even at its edgiest, artificial intelligence cannot hold up its side of a wide-ranging conversation or, say, tell by an expression when someone is about to cry. Still, the new devices take advantage of the innate soft spot many people have for objects that seem to care — or need someone to care for them.
Their appearances in nursing homes, schools and the occasional living room are adding fuel to science fiction fantasies of machines that people can relate to as well as rely on. And they are adding a personal dimension to a debate over what human responsibilities machines should, and should not, be allowed to undertake.
Ms. Oldaker, a part-time administrative assistant, said she was glad Paro could keep her mother company when she could not. In the months before Mrs. Lesek died in March, the robot became a fixture in the room even during her daughter’s own frequent visits.
“He likes to lie on my left arm here,” Mrs. Lesek would tell her daughter. “He’s learned some new words,” she would report.
Ms. Oldaker readily took up the game, if that is what it was.
“Here, Mom, I’ll take him,” she would say, boosting Paro onto her own lap when her mother’s food tray arrived.
Even when their ministrations extended beyond the robot’s two-hour charge, Mrs. Lesek managed to derive a kind of maternal satisfaction from the seal’s sudden stillness.
“I’m the only one who can put him to sleep,” Mrs. Lesek would tell her daughter when the battery ran out.
“He was very therapeutic for her, and for me too,” Ms. Oldaker said. “It was nice just to see her enjoying something.”
Like pet therapy without the pet, Paro may hold benefits for patients who are allergic, and even those who are not. It need not be fed or cleaned up after, it does not bite, and it may, in some cases, offer an alternative to medication, a standard recourse for patients who are depressed or hard to control.
In Japan, about 1,000 Paros have been sold to nursing homes, hospitals and individual consumers. In Denmark, government health officials are trying to quantify its effect on blood pressure and other stress indicators. Since the robot went on sale in the United States late last year, a few elder care facilities have bought one; several dozen others, hedging their bets, have signed rental agreements with the Japanese manufacturer.
But some social critics see the use of robots with such patients as a sign of the low status of the elderly, especially those with dementia. As the technology improves, argues Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, it will only grow more tempting to substitute Paro and its ilk for a family member, friend — or actual pet — in an ever-widening number of situations.
“Paro is the beginning,” she said. “It’s allowing us to say, ‘A robot makes sense in this situation.’ But does it really? And then what? What about a robot that reads to your kid? A robot you tell your troubles to? Who among us will eventually be deserving enough to deserve people?”
But if there is an argument to be made that people should aspire to more for their loved ones than an emotional rapport with machines, some suggest that such relationships may not be so unfamiliar. Who among us, after all, has not feigned interest in another? Or abruptly switched off their affections, for that matter?
In any case, the question, some artificial intelligence aficionados say, is not whether to avoid the feelings that friendly machines evoke in us, but to figure out how to process them.
“We as a species have to learn how to deal with this new range of synthetic emotions that we’re experiencing — synthetic in the sense that they’re emanating from a manufactured object,” said Timothy Hornyak, author of “Loving the Machine,” a book about robots in Japan, where the world’s most rapidly aging population is showing a growing acceptance of robotic care. “Our technology,” he argues, “is getting ahead of our psychology.”
More proficient at emotional bonding and less toylike than their precursors — say, Aibo the metallic dog or the talking Furby of Christmas crazes past — these devices are still unlikely to replace anyone’s best friend. But as the cost of making them falls, they may be vying for a silicon-based place in our affections.
Marleen Dean, the activities manager at Vincentian Home, where Mrs. Lesek was a resident, was not easily won over. When the home bought six Paro seals with a grant from a local government this year, “I thought, ‘What are they doing, paying $6,000 for a toy that I could get at a thrift store for $2?’ ” she said.
So she did her own test, giving residents who had responded to Paro a teddy bear with the same white fur and eyes that also opened and closed. “No reaction at all,” she reported.
Vincentian now includes “Paro visits” in its daily roster of rehabilitative services, including aromatherapy and visits from real pets. Agitated residents are often calmed by Paro; perpetually unresponsive patients light up when it is placed in their hands.
“It’s something about how it shimmies and opens its eyes when they talk to it,” Ms. Dean said, still somewhat mystified. “It seems like it’s responding to them.”
Even when it is not. Part of the seal’s appeal, according to Dr. Takanori Shibata, the computer scientist who invented Paro with financing from the Japanese government, stems from a kind of robotic sleight of hand. Scientists have observed that people tend to dislike robots whose behavior does not match their preconceptions. Because the technology was not sophisticated enough to conjure any animal accurately, he chose one that was unfamiliar, but still lovable enough that people could project their imaginations onto it. “People think of Paro,” he said, “as ‘like living.’ ”
It is a process he — and others — have begun calling “robot therapy.”
At the Veterans Affairs Medical Center in Washington on a recent sunny afternoon, about a dozen residents and visitors from a neighboring retirement home gathered in the cafeteria for their weekly session. The guests brought their own slightly dingy-looking Paros, and in wheelchairs and walkers they took turns grooming, petting and crooning to the two robotic seals.
Paro’s charms did not work on everyone.
“I’m not absolutely convinced,” said Mary Anna Roche, 88, a former newspaper reporter. The seal’s novelty, she suggested, would wear off quickly.
But she softened when she looked at her friend Clem Smith running her fingers through Paro’s fur.
“What are they feeding you?” Ms. Smith, a Shakespeare lover who said she was 98, asked the seal. “You’re getting fat.”
A stickler for accuracy, Ms. Roche scolded her friend. “You’re 101, remember? I was at your birthday!”
The seal stirred at her tone.
“Oh!” Ms. Roche exclaimed. “He’s opening his eyes.”
As the hour wore on, staff members observed that the robot facilitated human interaction rather than replacing it.
“This is a nice gathering,” said Philip Richardson, who had spoken only a few words since having a stroke a few months earlier.
Dorothy Marette, the clinical psychologist supervising the cafeteria klatch, said she initially presumed that those who responded to Paro did not realize it was a robot — or that they forgot it between visits.
Yet several patients whose mental faculties are entirely intact have made special visits to her office to see the robotic harp seal.
“I know that this isn’t an animal,” said Pierre Carter, 62, smiling down at the robot he calls Fluffy. “But it brings out natural feelings.”
Then Dr. Marette acknowledged an observation she had made of her own behavior: “It’s hard to walk down the hall with it cooing and making noises and not start talking to it. I had a car that I used to talk to that was a lot less responsive.”
Accepting a Trusty Tool
That effect, computer science experts said, stems from what appears to be a basic human reflex to treat objects that respond to their surroundings as alive, even when we know perfectly well that they are not.
Teenagers wept over the deaths of their digital Tamagotchi pets in the late 1990s; some owners of Roomba robotic vacuum cleaners are known to dress them up and give them nicknames.
“When something responds to us, we are built for our emotions to trigger, even when we are 110 percent certain that it is not human,” said Clifford Nass, a professor of computer science at Stanford University. “Which brings up the ethical question: Should you meet the needs of people with something that basically suckers them?”
An answer may lie in whether one signs on to be manipulated.
For Amna Carreiro, a program manager at the M.I.T. Media Lab who volunteered to try a prototype of Autom, the diet coach robot, the point was to lose weight. After naming her robot Maya (“Just something about the way it looked”) and dutifully entering her meals and exercise on its touch screen for a few nights, “It kind of became part of the family,” she said. She lost nine pounds in six weeks.
Cory Kidd, who developed Autom as a graduate student at M.I.T., said that eye contact was crucial to the robot’s appeal and that he had opted for a female voice because of research showing that people see women as especially supportive and helpful. If a user enters an enthusiastic “Definitely!” to the question “Will you tell me what you’ve eaten today?” Autom gets right down to business. A reluctant “If you insist” elicits a more coaxing tone. It was the blend of the machine’s dispassion with its personal attention that Ms. Carreiro found particularly helpful.
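Autom's adaptation of tone to the user's reply, as Mr. Kidd describes it, can be caricatured as a small lookup: classify the answer, pick a register. The reply categories and coaching lines below are invented for illustration; only the two quoted replies come from the article.

```python
def coaching_prompt(reply: str) -> str:
    """Pick a coaching tone from the user's answer to
    'Will you tell me what you've eaten today?' (illustrative)."""
    enthusiastic = {"definitely!", "sure", "yes"}
    reluctant = {"if you insist", "i guess", "fine"}
    normalized = reply.strip().lower()
    if normalized in enthusiastic:
        return "Great! Let's go through today's meals."
    if normalized in reluctant:
        return "I know it can feel like a chore, but it really helps. Shall we start?"
    return "Whenever you're ready, just tap the screen."

print(coaching_prompt("Definitely!"))
print(coaching_prompt("If you insist"))
```

Even this crude branching illustrates the design choice: the robot's dispassionate bookkeeping is wrapped in a tone that tracks the user's mood.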
“It would say, ‘You did not fulfill your goal today; how about 15 minutes of extra walking tomorrow?’ ” she recalled. “It was always ready with a Plan B.”
Aetna, the insurance company, said it hoped to set up a trial when the robot goes on sale next year to see whether people using it stay on their diets longer than those who use other programs.
Of course, Autom’s users can choose to lie. That may be less feasible with an emotion detector under development with a million-dollar grant from the National Institute on Drug Abuse that is aimed at substance abusers who want to stay clean.
Dr. Edward Boyer of the University of Massachusetts Medical School plans to test the system, which he calls a “portable conscience,” on Iraq veterans later this year. The volunteers will enter information, like places or people or events that set off cravings, and select a range of messages that they think will be most effective in a moment of temptation.
Then they don wristbands with sensors that detect physiological information correlated with their craving. With a spike in pulse not related to exertion, for instance, a wireless signal would alert the person’s cellphone, which in turn would flash a message like “What are you doing now? Is this a good time to talk?” It might grow more insistent if there was no reply. (Hallmark has been solicited for help in generating evocative messages.)
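The trigger described here — a pulse spike not explained by exertion — reduces to a simple rule over two sensor streams. The thresholds and parameter names below are assumptions for illustration, not Dr. Boyer's actual algorithm.

```python
def craving_alert(pulse_bpm, resting_bpm, activity_level):
    """Return an alert message if heart rate spikes without matching
    physical activity (illustrative thresholds, 0..1 activity scale)."""
    elevated = pulse_bpm > resting_bpm * 1.3  # ~30% above baseline
    exerting = activity_level > 0.5           # accelerometer-derived
    if elevated and not exerting:
        return "What are you doing now? Is this a good time to talk?"
    return None

# Sitting still with a racing pulse triggers the message;
# the same pulse during a run would not.
print(craving_alert(pulse_bpm=100, resting_bpm=70, activity_level=0.1))
```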
With GPS units and the right algorithms, such a system could tactfully suggest other routes when recovering addicts approached places that hold particular temptation — like a corner where they used to buy drugs. It could show pictures of their children or play a motivational song.
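The rerouting idea above is, at bottom, a geofence check: compare the user's position against a list of saved trigger locations. A minimal sketch, using a haversine distance and an assumed 100-meter radius (the coordinates below are hypothetical):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_trigger(position, trigger_spots, radius_m=100.0):
    """True if the user is within radius_m of any saved trigger location."""
    return any(haversine_m(*position, *spot) <= radius_m for spot in trigger_spots)

corner = (40.7580, -73.9855)  # hypothetical "corner where they used to buy drugs"
print(near_trigger((40.7581, -73.9856), [corner]))  # True: within ~15 m
```

When the check fires, the app could then suggest a detour or surface the user's chosen photos or song, as the article describes.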
“It works when you begin to see it as a trustworthy companion,” Dr. Boyer said. “It’s designed to be there for you.”
By AMY HARMON (The New York Times, July 4, 2010)