DeepMind Able to Extrapolate from Images

Alphabet’s DeepMind researchers announced in a report published in the journal Science that the company has developed a way to build a 3-D virtual layout from just a few 2-D photos, using a program described in the paper “Neural Scene Representation and Rendering.”

According to a story in IEEE Spectrum, practical applications could range from reconstructing a crime scene from a few photographs to assisting self-driving cars and improving household robots.

“By manipulating the abstraction first and filling in details later, the system can work much faster than rendering systems that attempt to manipulate huge sets of three-dimensionally related points,” the Spectrum story explains. “The researchers add that the division of labor also makes the method much better at representing soft objects, like animals and vegetables.”
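The split described above — compress a few observed views into one abstract scene representation, then fill in pixel details from that abstraction for a new viewpoint — can be sketched in toy form. Everything here (the random-projection “encoder” and “renderer,” the vector sizes, the function names) is an illustrative assumption, not DeepMind’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, POSE, REP = 16, 3, 8  # pixel count, camera-pose dims, representation size

# Stand-ins for trained neural networks: fixed random projections.
W_enc = rng.normal(size=(REP, IMG + POSE)) * 0.1
W_dec = rng.normal(size=(IMG, REP + POSE)) * 0.1

def encode_observation(image, viewpoint):
    # Map one (image, viewpoint) pair to a small abstract embedding.
    return np.tanh(W_enc @ np.concatenate([image, viewpoint]))

def aggregate(embeddings):
    # The scene representation: a single vector summed over views.
    # Any "manipulation" happens on this abstraction, not on raw pixels.
    return np.sum(embeddings, axis=0)

def render(scene, query_viewpoint):
    # Fill in pixel details only at the end, for the requested pose.
    return np.tanh(W_dec @ np.concatenate([scene, query_viewpoint]))

# Two observed 2-D views of the same scene.
views = [(rng.normal(size=IMG), rng.normal(size=POSE)) for _ in range(2)]
scene = aggregate([encode_observation(img, vp) for img, vp in views])

# Predict how the scene looks from an unseen viewpoint.
prediction = render(scene, rng.normal(size=POSE))
print(prediction.shape)  # (16,)
```

The efficiency claim in the quote corresponds to the fact that the scene vector here is far smaller than the point sets a conventional 3-D renderer would manipulate.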

Alphabet, which is looking for revenue streams outside of Google, views DeepMind as a potential driver of income. The lab’s technology has already saved energy in Google’s server farms and yielded a product for improving text-to-speech.

IEEE, or “Eye-triple-E,” stands for the Institute of Electrical and Electronics Engineers, an association dedicated to advancing innovation and technological excellence. It serves professionals in electrical, electronic and computing fields and related areas of science and technology.