A humanoid robot “reads” a book using algorithms. It’s a mystery how most AIs reach conclusions. (Source: Wikimedia)

Algorithms That Show Humans Their Thinking and Decision-Making Processes

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

The very machines that humankind invented and trained must now, in turn, train humankind, showing people how AI reaches its conclusions.

Forest Agostinelli, an assistant professor of computer science at the University of South Carolina, wrote a first-person piece published this week on news.yahoo.com.

Agostinelli says, “AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.”

Learning from experience

One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions. This has led to algorithms that can play chess at a superhuman level and prove mathematical theorems without any human guidance. As an AI researcher, Agostinelli uses reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.
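To make that feedback loop concrete, here is a minimal Q-learning sketch in Python, one of the most basic reinforcement-learning algorithms, run on a toy “corridor” environment. The environment, reward values, and hyperparameters are invented for illustration; this is not Agostinelli’s Rubik’s Cube solver.

```python
import random

# Toy "corridor" environment: states 0..4, goal at state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Apply an action, return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    if nxt == GOAL:
        return nxt, 1.0, True    # positive feedback for reaching the goal
    return nxt, -0.01, False     # small negative feedback for every other step

# Q-table: the agent's estimated value of each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit the current estimates.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        nxt, reward, done = step(state, action)
        # Nudge the estimate toward reward + discounted future value.
        target = reward + (0.0 if done else gamma * max(Q[nxt]))
        Q[state][action] += alpha * (target - Q[state][action])
        state = nxt

# After training, the greedy policy for every pre-goal state is "move right".
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])
```

The agent starts out acting almost randomly, but the positive and negative feedback gradually shape its value estimates until the learned policy walks straight to the goal, the same trial-and-error principle that, at much larger scale, produces superhuman chess players and cube solvers.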

Peering into the black box

Unfortunately, the minds of superhuman AIs are currently out of reach for humans, Agostinelli writes.

“AIs make terrible teachers and are what computer science programmers call ‘black boxes.’”

A black-box AI gives solutions without explaining the reasoning behind them. Computer scientists have sought to “open” the black box, and recent research finds that many AI algorithms think in ways similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.
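One standard way to peek inside a black box, not specific to Agostinelli’s work, is to train a small, interpretable “surrogate” model to mimic the black box’s predictions and then read off the surrogate’s rules. The sketch below uses scikit-learn with made-up data and feature names purely for illustration.

```python
# A minimal sketch of surrogate-model explainability: fit an interpretable
# decision tree to a black-box model's predictions. The dataset and feature
# names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "animal features" data (e.g., ear shape, eye size, ...).
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
feature_names = ["ear_shape", "eye_size", "snout_length", "tail_length"]

# The black box: accurate, but its internal reasoning is hard to read.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained to mimic the black box's predictions,
# yielding human-readable rules that approximate its behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=feature_names))
```

The printed tree is only an approximation of the black box, but it turns the opaque model’s behavior into a handful of readable if-then rules, the kind of “eyes and ears” features the article describes.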

“The effort to open up the black box is called explainable AI,” Agostinelli writes. “My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube. My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.”

Currently, his algorithm can use a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that don’t work and find alternatives that do. It gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. His team’s next step is to build an intuitive interface that allows their algorithm to teach people how to solve the Rubik’s Cube. They hope to generalize the approach to a wide range of problems.
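The article does not include code for this plan-feedback loop, but the general idea, checking a human’s plan step by step and suggesting a fix when it falls short, can be sketched on a far simpler puzzle than the Rubik’s Cube. The toy puzzle, move set, and breadth-first repair below are all invented for illustration and are not Agostinelli’s algorithm.

```python
from collections import deque

# Toy puzzle: the state is a permutation of (0, 1, 2, 3); it is "solved"
# when the numbers are in order. Each named move swaps two adjacent positions.
MOVES = {"swap01": (0, 1), "swap12": (1, 2), "swap23": (2, 3)}
SOLVED = (0, 1, 2, 3)

def apply_move(state, move):
    i, j = MOVES[move]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def shortest_completion(state):
    """Breadth-first search for a shortest move sequence that solves `state`."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        current, path = frontier.popleft()
        if current == SOLVED:
            return path
        for move in MOVES:
            nxt = apply_move(current, move)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None

def review_plan(start, plan):
    """Apply a human-written plan and, if it falls short, suggest a fix."""
    state = start
    for move in plan:
        state = apply_move(state, move)
    if state == SOLVED:
        return "Plan works as written."
    fix = shortest_completion(state)
    return f"Plan leaves the puzzle unsolved; one fix is to append {fix}."

# A scrambled start and a human plan that only partially solves it.
print(review_plan((1, 0, 3, 2), ["swap01"]))   # suggests appending ['swap23']
```

On the real Rubik’s Cube the state space is vastly larger and the feedback comes from a learned solver rather than brute-force search, but the shape of the interaction is the same: take the person’s plan, find where it breaks down, and hand back a concrete, understandable correction.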

The original article goes on to explain how the AI works as it tackles the Rubik’s Cube. That explanation is too long to reproduce here, so follow the link below for the details and for other ideas on how AI can help level the thinking field between people and machines.

Read more at news.yahoo.com