Tuesday, 11 September 2012

Extending Legal Protection to Social Robots - IEEE Spectrum

This is a guest post. The views expressed in this article are solely those of the author and do not represent positions of Automaton, IEEE Spectrum, or the IEEE.

Most discussions of "robot rights" play out in a seemingly distant, science-fictional future. While skeptics roll their eyes, advocates argue that technology will advance to the point where robots deserve moral consideration because they are "just like us," sometimes referencing the movie Blade Runner. Blade Runner depicts a world where androids have human-like emotions and develop human-like relationships to the point of being indistinguishable from people. But Do Androids Dream of Electric Sheep?, the novel on which the film is based, contains a small but significant difference in storyline. In the book, the main character falls in love with an android that only pretends to requite his feelings. Even though he is fully aware of this fact, he maintains the one-directional emotional bond. The novel touches on a notably different, yet plausible, reality: humans' moral consideration of robots may depend more on our own feelings than on any inherent qualities built into robots.

This distinction hints at an approach to robot rights that is not restricted to science-fictional scenarios. Today's state-of-the-art robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach that stage in the near future. And yet, while it seems far-fetched for a robot's legal status to differ from that of a toaster, there is already a notable difference in how we interact with certain types of robotic objects. While toasters are designed to make toast, social robots are designed to engage us socially. At some point, this difference may warrant an adjustment in legal treatment.

As technological progress begins to introduce more robotic toys, pets, and personal-care aids into our lives, we are seeing an increase in robots that function as companions. Hasbro's Baby Alive dolls, Jetta's robotic dinosaur Pleo, Aldebaran's NAO next-generation robot, the Paro baby seal, and the Massachusetts Institute of Technology (MIT) robots Kismet and Leonardo are examples of social robots that are able to mimic social cues, have various "states of mind," and display adaptive learning behavior. Our interactions with them follow social behavior patterns, and often involve our feelings. When we develop emotional relationships with these robots, it is not because they are inherently different from toasters, but because there is a difference in how we perceive them.

Robots vs. toasters: projecting our emotions

Our difference in perception stems from a strong human tendency to anthropomorphize embodied objects with autonomous behavior. In other words, we tend to project lifelike qualities onto robots. This anthropomorphism begins with a general inclination to over-ascribe autonomy and intelligence to the way that things behave, even if they are just following a simple algorithm. But not only are we prone to ascribing more agency than is actually present, we also project intent and sentiments (such as joy, pain, or confusion) onto other entities.

Social robots play off of this tendency by mimicking cues that we automatically associate with certain states of mind or feelings. Even in today's primitive form, this mimicry can elicit emotional reactions from people that are similar, for instance, to how we react to animals and to each other. From being reluctant to switch off robots that give the appearance of animacy, to ascribing mental states to robotic pets, we respond to the cues given to us by lifelike machines, even if we know that they are not "real."

We see this effect even when objects are not specifically designed to evoke these feelings. For example, when the United States military began testing a robot that defused landmines by stepping on them, the colonel in command ended up calling off the exercise. The robot was modeled after a stick insect with six legs. Every time it stepped on a mine, it lost one of its legs and continued on the remaining ones. According to the Washington Post, "[t]he colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane." Other autonomous robots employed within military teams inspire fondness and loyalty in their human teammates, who identify with the robots enough to name them, award them battlefield promotions and "purple hearts," introduce them to their families, and become very upset when they "die." While none of these robots are designed to give social cues, their autonomous behavior makes them appear lifelike enough to generate emotional responses. In fact, even simple household robots like the Roomba vacuum cleaner prompt people to talk to them and develop feelings of camaraderie and gratitude.

While some of the above is coincidental, social robot design is capable of specifically targeting and magnifying this anthropomorphism. When robots are able to mimic lifelike behavior, react to social gestures, and use sounds, movement, and facial expressions to signal emotions in a way that we immediately recognize, they trigger an involuntary biological response that shifts our perception. Owners of Sony AIBO dogs (developed in the 1990s), while fully aware that they were dealing with a robot, regularly ascribed lifelike essences and mental states to their artificial companions. The robotic seal Paro, currently used as a therapeutic device in nursing homes, reacts to touch and words. It conveys a sense of animacy by exhibiting emotional states, responding to people's actions, and learning individual voices. Most of the patients (and other people) who work with Paro treat it as if it were alive.

Psychologist Sherry Turkle explains in her work studying human-robot interaction that this effect is particularly strong with social robots that are designed to evoke feelings of reciprocity. "Nurturing a machine that presents itself as dependent creates significant social attachments." She finds that there is a difference between the type of projection that people have traditionally engaged in with objects, such as small children comforting their dolls, and the psychology of engagement that comes from interacting with social robots, which create an effective illusion of mutual relating. While a child is aware of the projection onto an inanimate toy and can engage or not engage in it at will, a robot that demands attention by playing off of our natural responses may cause a subconscious engagement that is less voluntary.

This anthropomorphism is especially pronounced when people have little sense of how a complex robot works, and so are all the more inclined to assign autonomy, intent, or feelings to actions that actually result from algorithms they do not understand. Small children are regularly confused when asked whether the social robots they interact with experience pain or other sentiments. Elderly people unfamiliar with modern technology struggle with the difference between robotic companions and live animals. But the effect of projection and emotional bonding holds even for those who are perfectly informed as to the exact, detailed functionality of the robots with which they interact. For example, AIBO owners reported that they would remove their AIBO from the room while changing, so that they would not be "watched," or that they experienced feelings of guilt when putting the device back in its box. Students in MIT's Media Lab would often put up a curtain between themselves and Kismet, a social robot that simulates emotion through facial expressions, because the lifelike behavior of the face distracted them. And Cynthia Breazeal, Kismet's developer, reports experiencing "a sharp sense of loss" when she parted ways with her creation at the end of her dissertation.

While people have for decades named their cars and developed attachments to their handheld devices, the effect of robots that actively and intentionally engage our ingrained anthropomorphic responses is considerably stronger. We are already disposed towards forming unidirectional emotional relationships with the robotic companions available to us today, and we can only imagine what the technological developments of the next decade will bring. As we move within the spectrum between treating social robots like toasters and treating them more like our cats, the question of legal differentiation becomes more immediate.

Isn't legal protection a bit far-fetched?

Assuming that we systematically perceive social robots differently than toasters, why and how could this difference lead to a change in law? One reason is that when it comes down to legal treatment, it may not matter whether robots are as smart and as complex as biological life forms. The key insight is that we have an inherent desire to protect the things that we relate to. Many of our legal systems extend protections (beyond property law) to animals that we care about, preventing their abuse. While animal rights philosophy regularly revolves around concepts like sentience or pain, our laws actually indicate that these concerns are secondary when it comes to legal protection. Many successful societal pushes for animal abuse laws have followed popular sentiment rather than consistent biological criteria.

Our animal treatment laws give rise to the question of whether our condemnation of abuse is based on a projection of ourselves. In other words, what if our desire to protect animals from harm has less to do with their inherent qualities, and more to do with what it affects in us? A lot of people do not like to see kittens held by the tail. It is certainly possible that we feel so strongly about this because of the specific details of kittens' biological pain. But it is also possible that it simply causes us discomfort to see a reaction that we associate with suffering. Our emotional relationship to kittens, plus the strong response of the kitten to being held by the tail, may trigger protective feelings in us that have more to do with anthropomorphism than moral obligation. While this view is not likely to be a crowd-pleaser, it appears realistic in light of the differential protections afforded to various animals.

We have an apparent desire to protect those animals to which we more easily relate. Laws governing the treatment of horses, in particular bans on the slaughter of horses for meat in the United States, have been enacted because of the general sentiment that such behavior is offensive. Unlike many Europeans, a large part of the United States population seems strongly opposed to the idea of horses being killed and eaten. This is not justified by any biological differences between horses and cows. Similarly, very few people were interested in early campaigns to save the whales, despite advocates' best efforts. This changed once the first recordings of whale songs reached the public. Touched by the beautiful voices, people discovered whales to be creatures they could relate to, and support for the cause rose dramatically. All of this indicates that we may care more about our own sentiment than any objective biological criteria.

When people care deeply about protecting something, there are different ways that the law can address this. One way is through the property rights that owners already hold. But sometimes society pushes for laws that go beyond personal property rights. Although individual horse owners may be able to protect their horses from harm, we may want to ensure the protection of all horses, whether we own them or not. We often care strongly enough to make wider-reaching laws, going so far as to affect other people's property, for instance by prohibiting farmers from mistreating their chickens, or pet owners from mistreating their dogs. Assuming that our society wants to protect certain animals regardless of their capacities, because of our personal attachments to them, society may someday also want to protect social robots regardless of their capacities.

In the words of Kurt Cobain, "It's ok to eat fish, because they don't have any feelings."

Even if we agree that projecting emotions is part of why we protect animals, many will argue that we should draw the line at something that does not actually "suffer." After all, despite the behavior we display towards them, most of us know that robots are not alive. And while we find differential treatment of animals in our laws, the actual discussions surrounding their moral inclusion do not usually consider anthropomorphism to be a justification. (Even if it were, one might still oppose the idea that laws be based on social sentiment rather than morally consistent criteria.) There may, however, be other arguments that favor the legal protection of social robots.

One reason that people could want to prevent the "abuse" of robotic companions is the protection of societal values. Parents of small children with a robotic pet in their household are likely familiar with the situation in which they energetically intervene to prevent their toddler from kicking or otherwise physically abusing the toy. Their reasons for doing so are partly to protect the (usually expensive) object from breaking, but also to discourage the child from engaging in types of conduct that could be harmful in other contexts. Given the lifelike behavior of the robot, a child could easily equate kicking it with kicking a living thing, such as a cat or another child. As it becomes increasingly difficult for small children to fully grasp the difference between live pets and lifelike robots, we may want to teach them to act equally considerately towards both. While this is easily done when a parent has control over both the robot and the child, protecting social robots more generally would set a framework for society and prevent children from adopting undesirable behavior elsewhere. It could even protect them from traumatizing experiences, for instance witnessing older children "torture" a robotic toy on the playground, like the one the child has developed an emotional relationship with at home.

Even for fully informed adults, the difference between alive and lifelike may be muddled enough in our subconscious to warrant adopting the same attitudes toward robotic companions that we carry towards our pets. A study of Sony AIBO online message boards reveals that people were dismayed by the story of an AIBO being tossed into a garbage can. Not long after the Pleo robot dinosaur became commercially available in 2007, videos of Pleo "torture" began to circulate online. The comments left by viewers are strikingly polarized: while some derive amusement from the videos, others appear considerably upset, going so far as to verbally attack the originators and accuse them of horrible cruelty.

The Kantian philosophical argument for animal rights is that our actions towards non-humans reflect our morality: if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions. Given that many people already feel strongly about state-of-the-art social robot "abuse," it may soon become more widely perceived as out of line with our social values to treat robotic companions in a way that we would not treat our pets.

So does this mean we should change our laws?

Whether out of sentiment or to promote socially desirable behavior, some parts of society may sooner or later begin to ask that legal protection be extended to robotic companions. When this happens, lawmakers will need to deliberate whether and how it would make sense to accommodate this societal preference. Aside from the above, there are a few things to consider in this context.

One practical difficulty lies in establishing limiting factors. In order to pass protective laws, we would have to come up with a good definition of "social robot." This could be something along the lines of "an embodied object with a certain degree of autonomous behavior that is specifically designed to socially interact with humans." But this definition may not cover all of the robotic objects that people want to protect (for instance robots that evoke social engagement by accident, such as the above-mentioned military bots), or it may prove to be overly broad. We would also have to clearly determine the extent of protection, including what constitutes "mistreatment." Although many issues could be resolved by analogy to animal abuse laws, there may be a few difficult edge cases, especially in light of rapidly changing technology. The challenge of drawing these lines is not new to our legal system, but it may take some effort to find the right balance.

Another consideration is that legal changes can be costly, both in terms of the direct costs of implementation and enforcement, and in terms of indirect costs. Since protecting social robots would effectively limit people's property rights, indirect costs could range from a distortion of market incentives to negative effects on research and development investments. Law influences people's behavior. Before we begin to think about legal changes, we may want to get a better sense of what effects they would have.

While it seems likely that people will increasingly develop strong attachments to robotic companions, the question of whether we should legally protect them is by no means simple. However, as technology widens the gap between social robots and toasters, it seems timely to begin thinking about the societal implications of anthropomorphism and how they could be addressed by our legal system.

Kate Darling is an IP research specialist at the MIT Media Lab. This article is based on a paper presented at the We Robot 2012 conference. Contact her at kdarling@mit.edu or follow her on Twitter.

Source: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots

