
To Imbue Machines With “Human” Ethics, We Need First to Define Them

In a recent TechCrunch article, Gillian Hadfield, professor of law and economics at the University of Southern California, argues that in order to best prepare for the coming age of AI and robotic proliferation (and avoid the familiar worst-case scenarios theorized by the likes of Musk and Bostrom), we first need to better understand how humans create systems of law, ethics, and normative behavior, so that we can instill an understanding of human values and behavior in AI.

When intelligent automated systems begin to interact with humans and with each other, how can we know whether our machinery will share the fundamental “human” values and behaviors we take for granted? And how can we begin to understand and codify these precepts of human behavior in order to stave off the extraordinary risks that even mundane “weak” AI systems (let alone superintelligences) pose in the coming decades?

So far the AI community and the donors funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values AI, and AI researchers, should care about.

But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices on a daily basis about how to behave.

How that coordination is accomplished is something we don’t really understand. Culture is one set of rules, but what makes it change (sometimes slowly, sometimes quickly) is still poorly understood. Law is another set of rules, one that is simple to change in theory but far less so in practice.

As the newcomers to our group, therefore, AIs are a cause for suspicion: what do they know and understand, what motivates them, how much respect will they have for us, and how willing will they be to find constructive solutions to conflicts? AIs will only be able to integrate into our elaborate normative systems if they are built to read, and participate in, that system.

In a future with more pervasive AI, people will be interacting with machines on a regular basis—sometimes without even knowing it. What will happen to our willingness to drive or follow traffic laws when some of the cars are autonomous and speaking to each other but not us? Will we trust a robot to care for our children in school or our aging parents in a nursing home?

Social psychologists and roboticists are thinking about these questions, but we need more research of this type, and more that focuses on the features of a system, not just the design of an individual machine or process. This will require expertise from people who think about the design of normative systems.

read more at techcrunch.com