"Killer robots" may grab headlines, but we face risks from any runaway autonomous system. AAP Image/University of NSW

Recently, Elon Musk and other AI luminaries made a much-publicized call for the UN to ban autonomous weapons. While the prospect of autonomous and semi-autonomous weaponry in the near future is terrifying in itself, the development of seemingly more mundane AI technologies, such as self-driving cars, also raises less obvious but equally important ethical quandaries. AI developers will have to devise proactive solutions and preempt, as best they can, the unforeseen risks of tomorrow; yet the universities and coding camps of today suffer a potentially perilous dearth of education in ethics.

More than 100 technology pioneers recently published an open letter to the United Nations on the topic of lethal autonomous weapons, or “killer robots”.

These people, including the entrepreneur Elon Musk and the founders of several robotics companies, are part of an effort that began in 2015. The original letter called for an end to an arms race that it claimed could be the “third revolution in warfare, after gunpowder and nuclear arms”.

The UN has a role to play, but responsibility for the future of these systems also needs to begin in the lab. The education system that trains our AI researchers needs to school them in ethics as well as coding.

As in most areas of science, acquiring the necessary depth to make contributions to the world’s knowledge requires focusing on a specific topic. Often researchers are experts in relatively narrow areas, and may lack any formal training in ethics or moral reasoning.

The emergence of ethics as a topic for discussion in AI research suggests that we should also consider how we prepare students for a world in which autonomous systems are increasingly common.

Most undergraduate courses in computer science and similar disciplines include a course on professional ethics and practice. These are usually focused on intellectual property, copyright, patents and privacy issues, which are certainly important.

However, it seems clear from the discussions at the International Joint Conference on Artificial Intelligence (IJCAI) that there is an emerging need for additional material on broader ethical issues.

The key point is to enable graduates to integrate ethical and societal perspectives into their work from the very beginning. It also seems appropriate to require research proposals to demonstrate how ethical considerations have been incorporated.

As AI becomes more widely and deeply embedded in everyday life, it is imperative that technologists understand the society in which they live and the effect their inventions may have on it.