
Imbuing Machines With an Approximation of Natural Curiosity Yields Benefits

New research into optimizing reward systems for AI has led researchers at UC Berkeley to design a machine learning approach that emulates the way a human toddler learns to navigate the world, rather than the more rote, linear “point”-based rewards used in other AI systems.

This system, called the Intrinsic Curiosity Module (ICM), allows an AI to develop, in the most basic sense, a kind of “curiosity”: the most unexpected or novel courses of action are rewarded. It is an advance in AI design that may help AI systems become more adaptive, general-purpose learners.
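The core idea behind this kind of curiosity signal is to train a model that predicts the consequences of the agent’s actions and to reward the agent when that prediction fails, since poorly predicted transitions are, by definition, novel. The sketch below illustrates that idea only; the linear `ForwardModel` and its `curiosity_reward` method are illustrative stand-ins, not the Berkeley team’s actual architecture, which learns its features with neural networks.

```python
import numpy as np


class ForwardModel:
    """Tiny linear model that predicts the next state's features
    from the current features and a one-hot encoded action."""

    def __init__(self, feat_dim, n_actions, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(feat_dim + n_actions, feat_dim))
        self.lr = lr
        self.n_actions = n_actions

    def _inputs(self, feat, action):
        one_hot = np.zeros(self.n_actions)
        one_hot[action] = 1.0
        return np.concatenate([feat, one_hot])

    def curiosity_reward(self, feat, action, next_feat):
        """Intrinsic reward = squared prediction error: transitions the
        model predicts poorly (i.e. surprising ones) earn a larger bonus."""
        x = self._inputs(feat, action)
        pred = x @ self.W
        error = next_feat - pred
        # Update the predictor so familiar transitions stop being rewarding.
        self.W += self.lr * np.outer(x, error)
        return float(np.mean(error ** 2))
```

In a training loop, this bonus would be added to whatever external score the environment already provides, so that the agent keeps exploring even when that external score is sparse or absent.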

For an in-depth and fascinating read about the project and the broader implications of imbuing machine learning with a sense of curiosity, read on at Quanta Magazine.

“You can think of curiosity as a kind of reward which the agent generates internally on its own, so that it can go explore more about its world,” Agrawal said. This internally generated reward signal is known in cognitive psychology as “intrinsic motivation.” The feeling you may have vicariously experienced while reading the game-play description above — an urge to reveal more of whatever’s waiting just out of sight, or just beyond your reach, just to see what happens — that’s intrinsic motivation.

Humans also respond to extrinsic motivations, which originate in the environment. Examples of these include everything from the salary you receive at work to a demand delivered at gunpoint. Computer scientists apply a similar approach called reinforcement learning to train their algorithms: The software gets “points” when it performs a desired task, while penalties follow unwanted behavior.
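In code, that extrinsic signal is typically nothing more than a hand-written reward function. The toy function below (the event names and the `beta` weight are hypothetical, chosen for illustration) shows the “points and penalties” pattern the excerpt describes, and the closing comment notes how a curiosity bonus like the one sketched earlier would be layered on top.

```python
def extrinsic_reward(event):
    """Hypothetical carrot-and-stick reward table: points when the agent
    does the desired thing, a penalty for unwanted behavior, zero otherwise."""
    table = {"reached_goal": 1.0, "hit_hazard": -1.0}
    return table.get(event, 0.0)


# An intrinsically motivated agent adds a curiosity bonus on top, e.g.:
#   total_reward = extrinsic_reward(event) + beta * curiosity_bonus
```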

But this carrot-and-stick approach to machine learning has its limits, and artificial intelligence researchers are starting to view intrinsic motivation as an important component of software agents that can learn efficiently and flexibly — that is, less like brittle machines and more like humans and animals. Approaches to using intrinsic motivation in AI have taken inspiration from psychology and neurobiology — not to mention decades-old AI research itself, now newly relevant. (“Nothing is really new in machine learning,” said Rein Houthooft, a research scientist at OpenAI, an independent artificial intelligence research organization.)

Such agents may be trained on video games now, but the impact of developing meaningfully “curious” AI would transcend any novelty appeal. “Pick your favorite application area and I’ll give you an example,” said Darrell, co-director of the Berkeley Artificial Intelligence lab. “At home, we want to automate cleaning up and organizing objects. In logistics, we want inventory to be moved around and manipulated. We want vehicles that can navigate complicated environments and rescue robots that can explore a building and find people who need rescuing. In all of these cases, we are trying to figure out this really hard problem: How do you make a machine that can figure its own task out?”

Read More: Clever Machines Learn How to Be Curious