Roboticist and AI researcher Ayanna Howard. (Source: Association for Computing Machinery)

Roboticist and Educator Ayanna Howard Pushes for Questioning AI Systems’ Reliability

A story on MIT’s technologyreview.com explores the suggestion by Ayanna Howard, a 30-year veteran of robot design and winner of the Association for Computing Machinery’s 2021 Athena Lecturer Award, that scientists and businesses should stop putting too much faith in AI and should formalize reviews of its findings.

Howard, whose award recognizes women who have made fundamental contributions to computer science, recently became dean of the College of Engineering at Ohio State University after 16 years as a professor at the Georgia Institute of Technology. In the wide-ranging interview, she said she prefers the term “humanized intelligence” because AI should reflect human values.

Her views on making robots adhere to human values made her a “total weird outlier,” Howard said in the interview.

“I looked at things differently than everyone else,” Howard said. “And back then there was no guide for how to do this kind of research. In fact, when I look back now at how I did the research, I would totally do it differently. There’s all this experience and knowledge that has since come out in the field.”

Howard said she began thinking of robots not merely as machines that help people, but as machines that have a relationship with them, when she was on a team at the Georgia Institute of Technology whose study concluded that humans shouldn’t overly trust robots, even ones trained to guide them in an emergency. Here is a summary of the research and its findings, quoted directly from the study:

“To explore this concept, we performed an experiment where a participant interacts with a robot in a non-emergency task to experience its behavior and then chooses whether to follow the robot’s instructions in an emergency or not. Artificial smoke and fire alarms were used to add a sense of urgency. To our surprise, all 26 participants followed the robot in the emergency, despite half observing the same robot perform poorly in a navigation guidance task just minutes before. We performed additional exploratory studies investigating different failure modes. Even when the robot pointed to a dark room with no discernible exit the majority of people did not choose to safely exit the way they entered.”

In the real world, Howard said, every time she reads about a Tesla accident she sees confirmation that people trust the technology too much. She recommends building “distrust” into such systems so that the humans using them aren’t overly confident in technology that can fail.

“We’re actually trying an experiment right now around the idea of denial of service. We don’t have results yet, and we’re wrestling with some ethical concerns. Because once we talk about it and publish the results, we’ll have to explain why sometimes you may not want to give AI the ability to deny a service either. How do you remove service if someone really needs it?”

Howard said she is also researching “explainable AI,” an approach in which a system explains its risks or uncertainties so that the people using it can make informed decisions.
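As a loose illustration of that idea (a minimal sketch of our own, not Howard’s actual system), a model could report a confidence score alongside each prediction and defer to a human whenever the score falls below a threshold; the names and the 0.9 cutoff below are illustrative assumptions:

    # Hypothetical sketch: an AI prediction that surfaces its own uncertainty,
    # handing the decision back to a human when confidence is low.
    from dataclasses import dataclass

    @dataclass
    class Prediction:
        label: str
        confidence: float  # model's estimated probability, 0.0 to 1.0

    def decide(prediction: Prediction, threshold: float = 0.9) -> str:
        """Act on the prediction only when the model is confident enough;
        otherwise report the uncertainty and defer to human judgment."""
        if prediction.confidence >= threshold:
            return f"Proceeding: {prediction.label} ({prediction.confidence:.0%} confident)"
        return (f"Uncertain: {prediction.label} at only {prediction.confidence:.0%}; "
                "deferring to human judgment")

    print(decide(Prediction("exit left", 0.97)))  # confident -> proceed
    print(decide(Prediction("exit left", 0.55)))  # uncertain -> defer

The point of the sketch is the interface, not the model: by exposing its uncertainty instead of a bare answer, the system invites exactly the kind of informed skepticism Howard argues for.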

read more at technologyreview.com