Editor’s Note: Tomorrow Transformed explores innovative approaches and opportunities available in business and society through technology.
Meet Spot, the 160-pound robot dog that can run, climb stairs and keep its balance with uncanny ability.
Spot was designed by robotics company Boston Dynamics. When Google bought the company in 2013, heated discussions erupted online, with accusations that Google had gone against its “Don’t be evil” motto by purchasing a firm that had worked with the U.S. military and had close ties to the Defense Advanced Research Projects Agency (DARPA).
But more recently, the conversation flared up again, most of it stemming from the video released this week showing Boston Dynamics employees trying to kick Spot over in order to show how robust it is. The video spread around the Internet like wildfire and raised questions about ethics, the future of robotics and Google’s intentions.
As robots begin to act and look more and more like living things, it’s increasingly hard not to see them in that way. And while in principle kicking a robot is not the abuse of a living thing, after watching the video many felt uncomfortable.
Animal rights group PETA gave its view, reminding us that although many thought it inappropriate to kick a robot dog, abuse of actual dogs was a bigger, and ongoing issue:
“PETA deals with actual animal abuse every day, so we won’t lose sleep over this incident,” the group said. “But while it’s far better to kick a four-legged robot than a real dog, most reasonable people find even the idea of such violence inappropriate, as the comments show.”
Spot is a robot, not a real dog, after all.
Noel Sharkey, emeritus professor of artificial intelligence and robotics at the University of Sheffield, UK, told CNN: “The only way it’s unethical is if the robot could feel pain.”
He pointed out our tendency to anthropomorphize inanimate objects. “We as humans attribute human qualities to many things; designers have been using this for years – even cars are designed to look like animals. The more lifelike, or animal like, it is, the more we attribute those qualities on to it,” he said.
“For me as a roboticist that is quite an impressive test, usually when you kick a robot like that it falls over.”
But he did caution: “Many philosophers over the years have said that animals are clockwork but we must be kind to them anyway. By treating something life-like cruelly you are more likely to treat a living thing that way. If they could feel pain it would be completely different.”
Mark Coeckelbergh is professor of technology and social responsibility at De Montfort University, UK, with expertise in the ethics of robotics. He agreed with Sharkey that kicking a robot isn’t in itself unethical, but added: “An ethical (question) could be the behavior itself: are violent gestures and behaviors towards anything, even if it’s not conscious and cannot feel pain, good?
“I would say that training violent gestures and behavior is not good for the human person and for others, from a virtue ethics perspective, regardless of the moral status of the robot.”
Future of robotics
While it is very easy to speculate about what this means for the future of robot-human relations, one question stands out: if people are offended by a robot being kicked, should Boston Dynamics really be sending “robot dogs” off to war? Spot’s big brother, “Big Dog,” was designed to carry arms and equipment for the army.
However, some see the benefits of having robots carry out tasks that could save lives.
Others warned that potential side effects of these testing methods could include a (somewhat unlikely) robot uprising.
Many are more interested in Google’s strategy: Boston Dynamics is the eighth robotics company it has acquired.
In the last few years Google has bought Schaft Inc., which makes bipedal robots that can walk on uneven terrain, climb ladders and clear rubble; Redwood Robotics, which develops service robots; and Meka Robotics, whose inventions are designed to live and work with human beings.
Whatever Google’s master plan is, questions about how we interact with robots will continue to be asked, and the lines are sure to get ever blurrier.
Coeckelbergh said: “In my papers I argue that appearance is important for our moral experience, and if robots are going to look and behave like this and become more human-like and more animal-like, we will for sure attribute all kinds of things to them: mental status, emotions, and also moral properties and rights.
“In any case, it’s good that these new technologies make us question and discuss how we think about moral status and ethics. In the future we have to learn to deal with these new entities, and if necessary adapt and re-train our moral sensitivities.”