Image via leverege.com

AI Better Able to Understand Spatial Relationships, Other Forms of Reasoning

DeepMind, the cutting-edge AI team behind successes such as AlphaGo, has released two papers demonstrating breakthroughs in relational reasoning.

Of the two neural networks, Relational Networks (RN) beat human performance at understanding the spatial relationships between objects in images, while Visual Interaction Networks (VIN) made accurate predictions about how objects will move in the near future, such as the trajectories of billiard balls. Interestingly, the RN model trained to interpret objects’ spatial relationships also performed remarkably well on language-based tasks, where it showed a similar ability to infer context and relationships as in the visual setting.
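To make the RN idea concrete, here is a minimal sketch of its core computation, RN(O) = f_phi(Σ_ij g_theta(o_i, o_j)): a small network g scores every ordered pair of object vectors, the pair scores are summed, and a second network f maps the sum to an answer. The class name, layer sizes, and dimensions are illustrative assumptions, and the full model also conditions g on a question embedding, which is omitted here for brevity; this is a sketch, not DeepMind's released code.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal sketch of the Relation Network core:
    RN(O) = f_phi( sum_{i,j} g_theta(o_i, o_j) ).
    Object vectors are assumed to come from an upstream encoder
    (e.g. cells of a CNN feature map); sizes are illustrative."""
    def __init__(self, obj_dim=64, hidden=256, out_dim=10):
        super().__init__()
        # g_theta: scores one ordered pair of objects
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # f_phi: maps the summed pair scores to an answer
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, objects):  # objects: (batch, n, obj_dim)
        b, n, d = objects.shape
        # Build every ordered pair (o_i, o_j)
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1)          # (b, n, n, 2d)
        relations = self.g(pairs).sum(dim=(1, 2))    # sum over all pairs
        return self.f(relations)

# Usage: a batch of 4 scenes, 8 object vectors each
logits = RelationNetwork()(torch.randn(4, 8, 64))
```

Because g is applied to every pair and the results are summed, the output does not depend on the order in which objects are presented, which is what lets the module reason about relations rather than individual objects.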

For a more in-depth explanation and links to DeepMind’s papers, read more at leverege.com

Given enough GPUs, distributed machine learning systems (such as the one Facebook published earlier this week) excel at recognizing and labeling images. These systems can quickly and accurately determine whether a dog is in an image, but they struggle to answer relational questions.
For example, computer vision software cannot determine whether the dog in the picture is bigger than the ball it is playing with or the couch it is sitting on.

Such relational intelligence is what separates human cognition from artificial intelligence systems. While humans can reason about the physical relationships between objects, computers have not been able to make that connection until now.

DeepMind, the creators of AlphaGo, quietly published two groundbreaking research papers in this area, demonstrating how deep neural networks can be trained to perform relational reasoning.
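The VIN work builds on a related pairwise idea, applied to object states decoded from video frames rather than to question answering. The sketch below is an illustration of that interaction-network style core under assumed names and dimensions, not DeepMind's implementation: each object's next state is computed from its current state plus the summed effects that the other objects exert on it, and the core can be rolled forward to forecast trajectories such as billiard-ball motion.

```python
import torch
import torch.nn as nn

class InteractionCore(nn.Module):
    """Sketch of an interaction-network style dynamics core of the
    kind VIN applies to states decoded from video. Names, state
    layout, and sizes are illustrative assumptions."""
    def __init__(self, state_dim=6, hidden=64):
        super().__init__()
        # relation net: the effect of object j on object i
        self.rel = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # object net: next state from current state + summed effects
        self.obj = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, states):  # states: (batch, n, state_dim)
        b, n, d = states.shape
        si = states.unsqueeze(2).expand(b, n, n, d)
        sj = states.unsqueeze(1).expand(b, n, n, d)
        # Sum pairwise effects on each object (self-pairs kept for simplicity)
        effects = self.rel(torch.cat([si, sj], dim=-1)).sum(dim=2)
        return self.obj(torch.cat([states, effects], dim=-1))

# Roll the dynamics forward a few steps, e.g. 3 billiard balls with
# position-and-velocity-like state vectors
core = InteractionCore()
state = torch.randn(1, 3, 6)
for _ in range(5):
    state = core(state)
```

Because the same relation network is shared across every pair, a core like this handles scenes with varying numbers of objects without changing its parameters.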

Read more at leverege.com