Hume AI Seeks to Make Computer Programs More Ethical, Empathetic
The fact that AI operates without emotions makes it powerful, but also dangerous.
We found an article at washingtonpost.com that introduced us to a researcher who is concerned about the lack of compassion and humanity in the most successful algorithms.
Alan Cowen, a former Google data scientist with a background in psychology, has created a research company, Hume AI, and a companion not-for-profit that he says can help make the whole messy business of AI more empathetic and human.
Trained on hundreds of thousands of facial and vocal expressions from around the world, artificial intelligence built on a Hume platform can react to how users are truly feeling and cater more closely to their emotional needs, Cowen said.
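The article doesn't describe Hume's architecture, but the approach Cowen outlines is, at its core, supervised learning: expression samples labeled with emotions are used to train a classifier. Here is a minimal sketch of that idea, assuming the recordings have already been converted to numeric feature vectors; the data and label set below are synthetic stand-ins, purely for illustration.

```python
# Minimal supervised emotion-classifier sketch (illustrative only).
# In a real system the feature vectors would come from face/voice
# encoders; here we use random stand-in data and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

EMOTIONS = ["calm", "joy", "distress", "anger"]  # hypothetical label set

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))           # stand-in expression embeddings
y = rng.integers(0, len(EMOTIONS), 2000)  # stand-in emotion labels

# Hold out a test split and fit a simple classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random data this hovers near chance; real labeled expressions
# are what would make the accuracy meaningful.
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```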
On Monday, Cowen announced a set of ethical guidelines that he hopes companies using the platform will agree to. The platform's beta launch, he says, will come in March, with a more formal unveiling to follow. It will be free for many researchers and developers.
“We know this is going to be a long fight,” Cowen said in an interview. “But we need to start optimizing for well-being and not for engagement.”
Translation: The goal for algorithms should not be to keep us constantly clicking and buying but to make us better people and our society more just. And Cowen said his psychologically based approach can help.
Cowen’s Goal Is Reachable
Many of the bigger names in AI agree with Cowen. The broad goal of the “ethical AI” movement is to build fairness into algorithms, and its members include organizations such as Marc Rotenberg’s policy-oriented Center for AI and Digital Policy and former Google star Timnit Gebru’s new bias-fighting Distributed Artificial Intelligence Research Institute, to name just two.
Cowen said he has raised $5 million from the start-up studio Aegis Ventures, with another round to follow. The money will be channeled into investigating how AI can be crafted not just to process data at great speed and identify unseen patterns, but also to incorporate human understanding, an approach Cowen has dubbed “empathic AI.” (At Google, Cowen’s research involved “affective computing,” which aims to increase machines’ ability to read and simulate emotion.)
Cowen predicts a day when algorithms can read faces and speech and tell when people are troubled or upset. The algorithm could then seamlessly serve up music, text, or pictures to soothe and calm the user. It would be kind of like having a portable counselor or psychologist with us that steps in when it senses a problem.
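Nothing in the article says how such a system would be built, but the loop Cowen describes (sense the user's emotional state, then adjust the content) is easy to picture in code. Below is a rough sketch under that assumption; detect_emotion, Frame, and the content catalog are all hypothetical stand-ins, not a real Hume API.

```python
# Hypothetical sense-and-respond loop for "empathic" software.
# Everything here is an illustrative stand-in, not Hume's actual system.
from dataclasses import dataclass
import random

# Content the system might reach for when it senses trouble.
SOOTHING_CONTENT = {
    "distress": ["calming playlist", "breathing-exercise prompt"],
    "anger":    ["neutral imagery", "short-walk reminder"],
}

@dataclass
class Frame:
    """One slice of user signal (e.g., a video frame plus audio)."""
    features: list[float]

def detect_emotion(frame: Frame) -> str:
    """Stand-in for a trained classifier over face/voice features."""
    return random.choice(["calm", "joy", "distress", "anger"])

def respond(frame: Frame) -> str | None:
    """If the user seems troubled, pick content meant to soothe."""
    emotion = detect_emotion(frame)
    options = SOOTHING_CONTENT.get(emotion)
    return random.choice(options) if options else None

if __name__ == "__main__":
    for _ in range(5):
        action = respond(Frame(features=[]))
        print(action or "no intervention")
```

In a real product, the random classifier would be replaced by a model like the one sketched earlier, and the hard question Cowen raises is which responses such a system should be allowed to choose: ones that soothe the user, or ones that keep them engaged.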
Giving AI the ability to express emotions is also another sign of the AI singularity, where a machine becomes as much human as it is machine. Imagine the world that will produce.
As artificial intelligence lays claim to growing parts of our social and consumer lives, it’s supposed to eliminate all the creeping flaws humans introduce to the world.
The reality, of course, is quite different. From Facebook algorithms that learn how to stoke anger to facial recognition apps that don’t recognize people of color, AI frequently offers less of an improvement on the status quo than an insidious reinforcement of it.
“The right technology can help a lot. But if you’re just looking at technology to create a safety culture, it’s not going to work,” he said. He cited government regulation and hard standards set by the likes of insurance companies and auditors as essential.
Hume’s adoption headwinds could be fierce. A Pew Research Center study published in June found that more than two-thirds of AI experts did not believe that artificial intelligence would be used mostly for social good by 2030.
We can and must do better if we are going to turn our world over to a power we don’t fully understand. Seeflection.com will do all we can to keep you on top of the changes, both good and bad, as algorithms are given the keys to our world.
read more at washingtonpost.com