Clever Hans, the counting horse, relied on cues for his ‘calculations.’

Scientists Studying How AI Learns Weigh ‘Smarts’ with Algorithms

Artificial intelligence systems are powerful tools for businesses and governments to process data and respond to changing situations, whether on the stock market or on a battlefield. But there are still some things AI isn’t ready for.

After amazing the world with its ability to read X-rays faster and with more accuracy than human doctors, AI is screeching to a halt in some medical arenas. Here is why: in a piece at techxplore.com, a team of writers points out a glitch in AI’s recently reported progress.

There is little doubt that some researchers and tech companies have bet their futures on AI, believing it won’t be long before AI is solving the world’s problems in the blink of an eye. As it turns out, maybe not so much. Meanwhile, the only people who understand how AI works are the people inventing it. Now researchers are asking whether various AI algorithms are really thinking through problems or whether they are getting the answers by shortcuts, such as picking the most likely answer to a particular kind of question.

Researchers from TU Berlin, the Fraunhofer Heinrich Hertz Institute (HHI), and the Singapore University of Technology and Design (SUTD) have provided a glimpse into the diverse “intelligence” spectrum observed in current AI systems, analyzing them with a novel technique that allows automated analysis and quantification.

These researchers have developed algorithms that can observe which data AI uses in making its decisions. They call it “explainable AI.”

The most important prerequisite for this novel technique is a method developed earlier by TU Berlin and Fraunhofer HHI: the so-called Layer-wise Relevance Propagation (LRP) algorithm, which reveals the input variables on which an AI system bases its decisions. Extending LRP, the new Spectral Relevance Analysis (SpRAy) can identify and quantify a wide spectrum of learned decision-making behaviors.
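
To make the idea concrete, here is a minimal numpy sketch of the basic LRP propagation rule for a single fully connected layer: each input neuron receives a share of the output relevance proportional to its contribution to that output. The layer sizes, variable names, and small stabilizing epsilon are illustrative assumptions, not the researchers’ actual code.

```python
import numpy as np

def lrp_dense_layer(a, W, R_out, eps=1e-9):
    """Propagate relevance R_out back through one dense layer (LRP sketch).

    Input neuron j gets a share of output k's relevance proportional to
    its contribution a_j * W[j, k] to the pre-activation z_k.
    """
    z = a @ W                                  # pre-activations, shape (k,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer avoids division by zero
    s = R_out / z                              # relevance per unit of pre-activation
    c = W @ s                                  # redistribute back to inputs, shape (j,)
    return a * c                               # input relevance; sum is (nearly) conserved

# Toy example: a 4-input, 3-output layer with random weights.
rng = np.random.default_rng(0)
a = rng.random(4)                              # input activations
W = rng.standard_normal((4, 3))                # layer weights
R_out = rng.random(3)                          # relevance arriving from above
R_in = lrp_dense_layer(a, W, R_out)
print(R_in, R_in.sum(), R_out.sum())           # totals match up to eps
```

Applied layer by layer from the output back to the input, this yields a “heatmap” showing which input features drove a particular decision.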

By using their newly developed algorithms, researchers are finally able to put any existing AI system to the test and extract quantitative information about it: from naive problem-solving behavior, through cheating strategies, up to highly elaborate “intelligent” strategic solutions.
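
The quantification step in SpRAy works by clustering the per-sample relevance heatmaps, so that unusual decision strategies stand out as their own group. The sketch below mimics that idea with scikit-learn’s spectral clustering on synthetic heatmaps; the heatmap shape, cluster count, and affinity choice are assumptions for illustration, not the published pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Pretend we already computed an 8x8 LRP heatmap for 200 samples;
# 20 of them concentrate relevance in one corner (a "Clever Hans" cue).
heatmaps = rng.random((200, 8, 8)) * 0.1
heatmaps[:20, :2, :2] += 1.0                   # artifact-driven decisions

X = heatmaps.reshape(len(heatmaps), -1)        # flatten maps to feature vectors
labels = SpectralClustering(
    n_clusters=2, affinity="nearest_neighbors", random_state=0
).fit_predict(X)

# Small clusters with atypical relevance patterns deserve manual inspection.
for k in np.unique(labels):
    print(f"cluster {k}: {np.sum(labels == k)} samples")
```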

Which brings us to Clever Hans. Clever Hans was a horse, famous around the turn of the 20th century, that people believed could count. His owner would ask a simple math question, and the horse would correctly answer it. Well, it turns out the horse was only watching his owner’s body language to decide when to stop tapping out numbers with his hoof.

It turns out some AI programs use the same strategy occasionally.

The researchers were also able to find these types of faulty problem-solving strategies in some state-of-the-art AI algorithms, the so-called deep neural networks, which had been considered immune to such lapses. These networks based their classification decisions in part on artifacts created during the preparation of the images, artifacts that had nothing to do with the actual image content.
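
A simple sanity check in the same spirit: if blanking out a region that carries no real content (say, a source tag in an image corner) flips many of a model’s predictions, the model is likely leaning on that artifact. This is a generic occlusion test, not the researchers’ method, and `model` here is a hypothetical callable returning class probabilities.

```python
import numpy as np

def artifact_sensitivity(model, images, region, fill=0.0):
    """Fraction of predictions that change when `region` is blanked out."""
    y0, y1, x0, x1 = region
    masked = images.copy()
    masked[:, y0:y1, x0:x1] = fill             # blank the suspected artifact area
    before = model(images).argmax(axis=1)      # predicted classes, original images
    after = model(masked).argmax(axis=1)       # predicted classes, masked images
    return np.mean(before != after)

# Hypothetical usage: flip_rate = artifact_sensitivity(model, images, (0, 10, 0, 40))
```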

“Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers,” said Klaus-Robert Müller of TU Berlin. “It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such Clever Hans strategies. It’s time to systematically check that so that secure AI systems can be developed.”

Read more at techxplore.com/partners/singapore-university-of-technology-and-design/