Caveats on Giving Robots the Will to Survive to Accelerate Progress

Researchers are exploring whether a lack of emotions is keeping robot AI from reaching the Singularity. They're also asking: if robots are concerned about their own well-being, would they be willing to sacrifice an assigned task or, worse yet, sacrifice a programmer in order to survive? Serious consideration must be given before allowing machines to become living beings that can mimic a human's fight-or-flight response.

A great article from sciencenews.org, written by Tom Siegfried, broached the subject of robot emotions.

Will self-aware robots save us or themselves?

There might be a way to give robots feelings, say neuroscientists Kingson Man and Antonio Damasio: simply build the robot with the ability to sense peril to its own existence. It would then have to develop feelings to guide the behaviors needed to ensure its own survival.

“Today’s robots lack feelings,” Man and Damasio write in a new paper (subscription required) in Nature Machine Intelligence. “They are not designed to represent the internal state of their operations in a way that would permit them to experience that state in a mental space.”

Siegfried went on to point out: Feelings motivate living things to seek optimum states for survival, helping to ensure that behaviors maintain the necessary homeostatic balance. An intelligent machine with a sense of its own vulnerability should similarly act in a way that would minimize threats to its existence.

In other words, a robot needs to understand which side of its bread is buttered, and who provides the butter.
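
To make the homeostasis idea concrete, here is a minimal, hypothetical sketch in Python (not code from Man and Damasio's paper): the agent's "feeling" is simply its total deviation from a set of internal setpoints, and it picks whichever action is predicted to reduce that deviation the most. The variables and actions are invented purely for illustration.

```python
# Toy sketch (not from the paper): homeostatic regulation as a motivation signal.
# The agent tracks internal variables against setpoints; the summed deviation
# acts as a crude "feeling" that drives it to pick whichever action best
# restores balance. Variable names and actions are invented for illustration.

SETPOINTS = {"battery": 1.0, "temperature": 0.5}

# Hypothetical actions and their effects on the internal state.
ACTIONS = {
    "recharge":  {"battery": +0.3, "temperature": +0.1},
    "cool_down": {"battery": -0.1, "temperature": -0.3},
    "work":      {"battery": -0.2, "temperature": +0.2},
}

def distress(state):
    """Total deviation from homeostatic setpoints -- the 'feeling' signal."""
    return sum(abs(state[k] - SETPOINTS[k]) for k in SETPOINTS)

def choose_action(state):
    """Pick the action whose predicted outcome minimizes distress."""
    def predicted(action):
        effects = ACTIONS[action]
        return {k: state[k] + effects.get(k, 0.0) for k in state}
    return min(ACTIONS, key=lambda a: distress(predicted(a)))

if __name__ == "__main__":
    state = {"battery": 0.4, "temperature": 0.9}  # depleted and overheating
    for _ in range(5):
        action = choose_action(state)
        for key, delta in ACTIONS[action].items():
            state[key] += delta
        print(f"{action:10s} -> distress {distress(state):.2f}")
```

Notice that nothing in this toy loop stops the agent from preferring "recharge" over "work" when its battery runs low, which is exactly the task-versus-survival tension raised above.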

Further, according to Siegfried's article, protecting its own existence might therefore be just the motivation a robot needs to eventually emulate human general intelligence. That motivation is reminiscent of Isaac Asimov's famous Laws of Robotics: robots must protect humans, robots must obey humans, robots must protect themselves.

In Asimov’s fiction, self-protection was subordinate to the first two laws. In real-life future robots, then, some precautions might be needed to protect people from self-protecting robots.

Whether robots learn to survive in a way that preserves the human race is the main question. Siegfried ended his article with a beauty of an idea:

If scientists do succeed in instilling empathy in robots, maybe that would suggest a way of doing it in humans, too.

read more at sciencenews.org