CausaLens Introduces Cause & Effect into Machine Learning Decision Making
When you think about how AI works, it’s easy to see it as a sterile, accurate, and inhuman machine. And that’s just what a start-up called causaLens wants to address.
Causal AI, the company says, is the only technology that can reason and make choices the way humans do. It uses causality to go beyond narrow machine learning predictions and can be integrated directly into human decision-making.
We found a lengthy but interesting article at techcrunch.com this week that looks into making AI more human at several levels.
CausaLens’s aim, CEO and co-founder Darko Matovski said, is for AI “to start to understand the world as humans understand it.”
One of the most popular applications of artificial intelligence to date has been prediction: algorithms trained on historical data are used to determine a future outcome. But popularity doesn't always mean success. Predictive AI leaves out a lot of the nuance, context, and cause-and-effect reasoning that goes into an outcome, and as some have pointed out (and as we have seen), this means the "logical" answers produced by predictive AI can sometimes prove disastrous.
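A tiny worked example shows the kind of trap a purely correlational predictor falls into. The numbers below come from the classic kidney-stone study often used to illustrate Simpson's paradox; they are an illustration of confounding in general, not a sketch of causaLens's actual engine.

```python
# Simpson's paradox: treatment B looks better overall, but once we condition
# on the confounder (stone size), treatment A is better in *every* subgroup.
# A correlation-only model would recommend the wrong treatment.

data = {
    # (treatment, stone_size): (successes, total)
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def overall_rate(treatment):
    """Success rate ignoring stone size (the correlational view)."""
    s = sum(v[0] for (t, _), v in data.items() if t == treatment)
    n = sum(v[1] for (t, _), v in data.items() if t == treatment)
    return s / n

def stratified_rate(treatment, size):
    """Success rate within one stratum of the confounder."""
    s, n = data[(treatment, size)]
    return s / n

# Correlational view: B appears better ...
print(f"A overall: {overall_rate('A'):.0%}")   # 78%
print(f"B overall: {overall_rate('B'):.0%}")   # 83%

# ... but adjusting for the confounder reverses the conclusion:
for size in ("small", "large"):
    print(f"{size}: A {stratified_rate('A', size):.0%}, "
          f"B {stratified_rate('B', size):.0%}")
```

The reversal happens because treatment choice and outcome are both influenced by stone size, and only a model that reasons about that causal structure gets the decision right.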
Popular betting sites covered in the media lately are using AI both for predictive betting and for instantly evaluating the risk of a game. Consumers are effectively competing against these companies' extremely costly and powerful AI.
Considering all of the variables that might be involved, it's the kind of big data problem that's nearly impossible for a human, or even a team of humans, to compute. A computer, however, can work through it. While it's not a cure for cancer, this kind of work is a significant step toward evaluating treatments tailored to the many permutations involved.
CausaLens’s tech has also been applied in a less clinical way in healthcare. A public health agency from one of the world’s biggest economies (causaLens cannot disclose publicly which one) used its causal AI engine to determine why certain adults have been holding back from getting COVID-19 vaccinations so that the agency could devise better strategies to get them on board (plural “strategies” is the operative detail here: the whole point is that it’s a complex issue involving a number of reasons depending on the individuals in question). The Mayo Clinic is a client, too.
Using causaLens could assist in creating new outcomes for all sorts of things. Loans, medicine, education, and more already have AI attached to their functions. And perhaps, for a human to make the most of these AI assists, it's important for the AI to understand how a human needs to be assisted.
It’s not that more reasoning and answering “why” weren’t priorities early on, Matovski explained:
“People have been exploring cause and effect relationships in science for a long time. You could even argue Newton’s equations are causal. It is super fundamental in science,” he said — but it’s that AI specialists couldn’t understand how to teach machines to do this. “It was just too difficult,” he said.
Watching how we are actually creating our AI world is a good step toward bringing more humanity to algorithms, rather than forcing more machine-style thinking onto human understanding. causaLens says this is how we can all work best with AI, and so far its investors seem to agree.
read more at techcrunch.com