Deep Learning, illustrated.

Teaching Cars to Think Like Human Drivers

To drive into the intersection, or not to drive into the intersection?

Predicting what other drivers will do, and anticipating that they may break the rules of the road, is a very human response when driving a vehicle. Now Google and Uber have added that human trait of uncertainty to their deep learning models, so that their cars and other autonomous machines are more apt to respond with a measure of caution, an important aspect of human intelligence.

According to Will Knight, senior editor for AI at MIT Technology Review, both companies were concerned about the consequences of machines that didn’t possess self-doubt.

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”

A new programming language released by Uber, called Pyro, combines deep learning with probabilistic programming. Another, called Edward, was developed at Columbia University with funding from DARPA. Both are in their early stages, but researchers consider these frameworks more robust than conventional deep learning alone and better suited to complex problems. Building prior knowledge directly into a model should also make it more stable, according to Noah Goodman, a professor at Stanford who is affiliated with Uber’s AI Lab.

“In cases where you have prior knowledge you want to build into the model, probabilistic programming is especially useful,” Goodman says.
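To illustrate Goodman’s point, here is a minimal sketch of the idea behind probabilistic programming: prior knowledge is combined with observed data, and the answer comes with an explicit measure of uncertainty rather than a single point estimate. This is plain Python using a Beta-Binomial conjugate update, not the Pyro or Edward APIs, and the driving scenario and all numbers are invented for illustration.

```python
# Hypothetical scenario: estimate how often drivers run a particular
# red light, starting from prior knowledge that violations are rare.
# Uses a Beta-Binomial conjugate update (illustrative only; not Pyro).

def beta_binomial_update(prior_a, prior_b, violations, compliant):
    """Combine a Beta(prior_a, prior_b) prior with observed counts;
    returns the parameters of the posterior Beta distribution."""
    return prior_a + violations, prior_b + compliant

def beta_mean(a, b):
    """Point estimate of the violation rate."""
    return a / (a + b)

def beta_variance(a, b):
    """Explicit measure of uncertainty about that estimate."""
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

# Prior knowledge: we believe roughly 1 in 20 drivers runs the light.
prior_a, prior_b = 1.0, 19.0  # Beta prior with mean 0.05

# Observed data (invented): 3 violations among 50 cars.
post_a, post_b = beta_binomial_update(prior_a, prior_b, 3, 47)

print(beta_mean(post_a, post_b))      # posterior estimate of the rate
print(beta_variance(post_a, post_b))  # how certain the model is about it
```

The variance is the key output here: a system acting on this estimate can see not just what it believes, but how strongly, and behave more cautiously when the variance is high.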