Google Trains Robots Using Social Media ‘Mannequin Challenge’
MIT’s technologyreview.com reports that a brief video craze from 2016 has been used to help train Google’s AI algorithms — and that those algorithms are, in turn, helping robots learn to navigate the real world.
Robots have difficulty judging space and depth. To approach this problem, a team at Google AI turned to an unexpected data set: thousands of YouTube videos of people performing the Mannequin Challenge. (If it happened to pass you by at the time, this involved standing as still as possible while someone moved around you, filming the pose from all angles.) Because the people hold still while the camera moves, these videos turn out to be a novel source of data for recovering depth from a 2D image.
The researchers took 2,000 of the videos and turned them into high-resolution depth data. This data was then fed into neural networks, which produced far better results at estimating depth. The research received an honorable mention at a recent computer vision conference.
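The summary doesn’t spell out how such a network is trained, but depth recovered from moving-camera video (via structure-from-motion) is typically only known up to an overall scale, so depth-prediction networks are commonly trained with a scale-invariant loss. Below is a minimal illustrative sketch in Python/NumPy of one common formulation (the log-space scale-invariant loss in the style of Eigen et al.); the function name and exact form are assumptions for illustration, not Google’s published training code.

```python
import numpy as np

def scale_invariant_loss(pred_depth, gt_depth, eps=1e-8):
    """Scale-invariant log-depth loss (Eigen et al. style).

    Compares predicted and ground-truth depth maps up to a global
    scale factor -- useful when the "ground truth" depth, recovered
    from video via structure-from-motion, is only known up to scale.
    """
    d = np.log(pred_depth + eps) - np.log(gt_depth + eps)
    n = d.size
    # Mean squared log difference, minus the squared mean difference:
    # a prediction that is off by a constant scale factor is not penalized.
    return np.mean(d ** 2) - (np.sum(d) ** 2) / (n ** 2)

# A prediction that differs from the ground truth only by a constant
# scale factor incurs (near-)zero loss:
gt = np.random.default_rng(0).uniform(1.0, 10.0, size=(8, 8))
pred = 2.0 * gt
print(round(scale_invariant_loss(pred, gt), 6))  # → 0.0
```

The point of the invariance term is that the network is rewarded for getting relative depth ordering and structure right, which is exactly what the Mannequin Challenge footage provides.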
A warning to social media users: this research and its results have been open sourced and will likely be reused for further development. All the humans/mannequins will remain internet famous, possibly causing some future embarrassment — but researchers draw much of their AI training data from publicly shared sources.
Many of the field’s most foundational data sets, including Fei-Fei Li’s ImageNet, which kicked off the deep-learning revolution, were compiled from publicly available data scraped from Twitter, Wikipedia, Flickr and other open sources. The practice is motivated by the immense amount of data required to train deep-learning algorithms and has increased in recent years as researchers produce ever bigger models to achieve breakthrough results.
Read more at technologyreview.com