Forget Asimov? The slow genesis of sensitive robots
Imagine an ideal world: you have a headache and nobody starts running the vacuum cleaner; you want to take a short nap or lose yourself in your favourite book, and nobody picks that very moment to ask what you want to eat tonight or watch on television. While this kind of happy synchronisation is still hard to come by in married life, it should soon no longer be the case in robotics. At least, that is one of the main directions set out by Shuji Hashimoto, an internationally recognised specialist in artificial intelligence, at last week's conference on socially intelligent robots in Britain. For Hashimoto, domestic robots must be kansei, as the Japanese term has it: roughly, they must have enough emotional intelligence to cope with the feelings, moods, intuition and sensibility that good interaction with their masters requires. The solution? Sensors measuring humans' blood pressure, perspiration or pulse, wired into neural-network systems in the robot. That, and a rethink of Asimov's "three laws of robotics", which impose a complex moral constraint on robots from the outset. "As long as we obey Asimov's laws, we will never have a machine that is a true partner for a human," Hashimoto declared.
Antisocial robots go to finishing school
19 September 2006 / Paul Marks
Imagine having your own humanoid robot. It is great at its job, so your floors and windows are gleaming and spotless, but it has an annoying habit of vacuuming the living room when you have a headache, or offering you a meal just as you are drifting off to sleep on the sofa.
If you sometimes have difficulty reading other people's expressions and emotions, imagine how difficult it will be for silicon-brained robots. They will only ever be able to respond to us in an appropriate way if they can understand human moods.
What robots need is kansei. The Japanese term encompasses a raft of emotional notions, including feeling, mood, intuitiveness and sensibility. Without kansei, says Shuji Hashimoto, director of the humanoid robotics centre at Waseda University in Tokyo, the service robots being developed around the world will not be able to acquire the social skills they will need to get along with tetchy, emotional humans.
Last week Hashimoto told a conference on socially intelligent robots at the University of Hertfordshire in Hatfield, UK, that unless researchers start incorporating such emotional concepts into their robots' programming we will be interacting with some pretty insensitive brutes in our living rooms. "Emotion is one of the most crucial factors influencing the success or failure of communication between humans," he says. "Robots are going to need similar emotional capabilities if they are to cooperate smoothly and flexibly with humans in our residential environments."
Hashimoto sees emotive robotics engineering as an essential follow-up to the brute force of today's artificial-intelligence-based programming techniques, in which simple rules are applied to signals generated by a robot's sensors. For instance, when one of today's floor-crawling robotic vacuum cleaners senses it has reached the wall, its computer brain knows it is time to cut its motor and set a new course. Its AI simply pairs sensed situations with a set of pre-programmed actions.
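In code, that pairing is little more than a lookup table. A minimal sketch of the rule-based control the article describes, in Python; the event and action names are invented for illustration, not drawn from any real vacuum-robot firmware:

```python
# Today's robot AI: sensed situations paired with pre-programmed actions.
# Event and action names below are illustrative only.

RULES = {
    "bump_wall":   "stop_motor_and_turn",   # reached the wall: set a new course
    "cliff_edge":  "reverse",               # avoid falling down the stairs
    "dust_sensed": "increase_suction",
    "clear_floor": "drive_forward",
}

def react(sensed_event: str) -> str:
    """Pair a sensed situation with its pre-programmed action."""
    return RULES.get(sensed_event, "drive_forward")  # default behaviour
```

The limitation the article goes on to describe follows directly: every new situation needs a new entry in the table, and human moods do not reduce to a handful of discrete events.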
Generating an appropriate response to someone's mood is a much more complicated task, so kansei-enabled robots will need to make use of sensors worn by their owner to spot signs of stress. These could include galvanic skin sensors that detect sweat by measuring the conductivity of the skin, and pulse monitors. Neural networks, which can interpret large amounts of data, will then be able to decide how best to react to the person. Hashimoto's hope is that in this way robots will at least appear capable of intuitive behaviour. So if a robot's owner is sweating and has a racing pulse, say, the robot will sense this and decide that now might not be the time to offer them the TV guide or tonight's dinner menu.
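That sensing pipeline can be sketched with a single "neuron" standing in for the neural network. The weights and resting baselines below are invented for illustration; a real system would learn them from labelled sensor data:

```python
# Hedged sketch: physiological readings -> stress estimate -> behaviour gate.
# All numeric values are illustrative assumptions, not from the article.

import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def stress_score(skin_conductance_uS: float, pulse_bpm: float) -> float:
    """One-neuron 'network': weighted sum of roughly normalised inputs."""
    sc = (skin_conductance_uS - 2.0) / 2.0   # ~2 microsiemens at rest (assumed)
    hr = (pulse_bpm - 70.0) / 30.0           # ~70 bpm at rest (assumed)
    return sigmoid(2.0 * sc + 2.0 * hr - 1.0)

def should_interrupt(skin_conductance_uS: float, pulse_bpm: float) -> bool:
    """Offer the TV guide or tonight's menu only when the owner seems calm."""
    return stress_score(skin_conductance_uS, pulse_bpm) < 0.5
```

So a sweating owner with a racing pulse scores high, and the robot decides this is not the moment to interrupt.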
This is no pipe dream, says Elizabeth Croft, a robotics researcher at the University of British Columbia in Vancouver, Canada. She says researchers are already using such physiological sensors alongside face and gesture recognition systems to study the emotional state of people when a robot is present. Her group has monitored how volunteers interact with a robot while they are wired up with skin conductance, heart rate and facial muscle sensors (see "Sensors and sensibility"). They found that the very arrival of a robot can be enough to affect people. "The behaviour of the robot elicits a measurable physiological response from the user, such as surprise, calm, fear or interest. We can integrate this information into the robot's controller to allow the robot to respond in an appropriate manner," she says.
Hashimoto admits that even this level of complexity is merely technological play-acting. "Kansei robots will seem to understand human feeling to some extent and will appeal to us with their reactions. But they are not machines with a heart; they just look like they have a heart."
Faking it this way is not only philosophically unattractive to roboticists but also has a serious engineering downside: the more a robot attempts to cater for every conceivable human emotional state with appropriate reactions, the more complex its software becomes. This will ultimately make programming robots an immensely complex process, and render their software dangerously untestable.
One way around this programming nightmare, Hashimoto suggests, would be to let robots learn from their environment and construct their own sets of rules. This would force some robotics researchers to free themselves from the self-imposed shackles of the late Isaac Asimov's three "laws" of robotics. Coined in a 1942 short story called Runaround, these laws impose heavy constraints on a robot's software. Asimov's first law says that robots cannot harm a human or let a human come to any harm; the second says robots must always obey humans, unless that infringes the first law; and the third says robots must always protect themselves, unless that infringes the first or second laws. These rules demand enormous complexity from a robot's software, as every new situation must take them into account.
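The laws can be sketched as a prioritised veto chain, which makes the complexity argument concrete: every candidate action must clear all three checks, and the predicates themselves (deciding what counts as "harm") are where the real software burden lies. The field names here are hypothetical:

```python
# Asimov's three laws as a prioritised veto chain. Illustrative only:
# in practice, evaluating each predicate for every new situation is
# exactly the "enormous complexity" the article describes.

def permitted(action: dict) -> bool:
    # First law: never harm a human, or allow harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second law: obey human orders, unless that conflicts with the first law.
    if action.get("disobeys_order"):
        return False
    # Third law: protect itself, unless that conflicts with laws one or two.
    if action.get("endangers_self") and not (
        action.get("ordered") or action.get("protects_human")
    ):
        return False
    return True
```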
Instead of instilling this "moral framework" from the outset, Hashimoto thinks a robot's intelligence should grow as it ages, learning through trial and error much like a child goes from babyhood through to toddling and eventually to adolescence. "We have to design environments where human and robot learn together," Hashimoto says. Such a move would be controversial, placing robots in many ways on a par with humans. "But humans should not stand at the centre of everything. We need to establish a new relationship between human and machine."
Ditching Asimov would be a major wrench for robot makers, particularly those developing safety-critical domestic service robots, but Hashimoto is sticking to his guns: "As long as we obey Asimov's laws, we will never have a machine that is a true partner for a human," he says.
Sensors and sensibility
How do you measure a person's emotional reactions to robotic faux pas, such as invading personal space? Could robots be developed that can sense these feelings and react accordingly? These were the questions being asked at the socially intelligent robots conference in the UK last week at the University of Hertfordshire. Researchers are approaching the problem in a variety of ways, from analysing a person's body language as they interact with a robot, to detecting physical signs of stress.
BODY LANGUAGE SAYS IT ALL
Dana Kulić and Elizabeth Croft at the University of British Columbia in Vancouver, Canada, want robots to respond to our body language. As a first step towards this, they are trying to find out how a robot's actions affect our mood and what kind of sensors a robot can make use of to suss out how we feel. The pair wired up 36 volunteers with various sensors, then used statistical analysis to work out which of a robotic arm's motions and activities made them feel calm, anxious or surprised. It seems that facial expressions are not a useful indicator of mood. More helpful would be to monitor the direction a person's head is facing or gazing in, as a droid will not have to worry so much about its actions if its owner's attention has wandered.
SIGNS OF FEAR
Until wireless sensing is perfected, one option for robot owners hoping to attune their droid to their mood might be to wear a lightweight armband sensor. Christine Lisetti's team at the Eurecom Institute in Sophia Antipolis, France, has been fitting volunteers with an armband capable of sensing up to six physiological and environmental parameters. The SenseWear armband measures galvanic skin response, heart rate, skin temperature, the rate at which heat is lost from the skin, and ambient temperature. After showing 29 volunteers a 45-minute slide show of emotive images and sounds, the researchers were able to determine sadness, anger, fear, surprise, frustration and amusement with varying but pretty high degrees of accuracy. Fear was recognised with 86 per cent accuracy, so robots will know when it's time to back off.
APPROACHING FUSSY OWNERS
Kerstin Dautenhahn, professor of artificial intelligence at the University of Hertfordshire, has been running tests with 1.3-metre-high robots (pictured) trundling around an apartment to see how volunteers react to their presence. "Imagine you have a robot assistant in your home 24/7. How should it approach you? How should it attract your attention? That's what we are investigating," she says.
Her group was surprised to find that people dislike it not only when robots approach them from behind, but also when they approach from the front. This reaction was more pronounced when people were sitting down. It also emerged that extroverts tolerate bad robot behaviour much better than introverts.
LEARNING BY IMITATION
It is 2025 and you have just removed the bubble wrap from your new domestic robot. Now you need to teach it how best to serve you. Sylvain Calinon and Aude Billard at the Ecole Polytechnique Fédérale de Lausanne in Switzerland believe we should sit down opposite our domestic droid and play an imitation game with it. By strapping five motion sensors to each of their arms and making actions for a miniature humanoid robot to mimic, the pair trained the robot to make complex motions. This not only taught the robot to move in a more human-like way, but volunteers had so much fun that they wrongly told their droid it had made a mistake so they could keep playing.
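A minimal sketch of that record-and-imitate loop, assuming each demonstration arrives as a list of joint angles from the arm sensors and the teacher flags each one as accepted or rejected. The data layout is an assumption for illustration, not EPFL's actual encoding:

```python
# Learning by imitation, reduced to its simplest form: average the
# joint-angle demonstrations the teacher approved of. (A teacher who
# wrongly rejects a good demo, as the volunteers did, simply keeps
# the game going without corrupting the learned motion.)

def learn_motion(demos: list[list[float]], accepted: list[bool]) -> list[float]:
    """Average the joint-angle trajectories the teacher approved of."""
    kept = [d for d, ok in zip(demos, accepted) if ok]
    if not kept:
        raise ValueError("no accepted demonstrations")
    n = len(kept[0])
    return [sum(d[i] for d in kept) / len(kept) for i in range(n)]
```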
From issue 2569 of New Scientist magazine, 19 September 2006, pages 28-29