Tough Love Works Best for Simulations, or How to Beat Your Robot
You may recall that several years ago, researchers in Japan carried out an eye-opening experiment in robotics. They let a robot loose in a mall and watched how kids reacted. Far from showing the sense of wonder you might expect from children, the kids proceeded to kick and punch the robot and call it names.
Matt Simon has written an interesting article at wired.com in which he explains how that experiment may have been the forerunner of the best way to train a robot: by being less of a helper and more of a challenge to the robot's learning curve. Simon writes about a new experiment at USC:
The experiment took place entirely in simulation, as so much robot training does these days. In a digital environment, a robot undergoes a supercharged form of trial and error called reinforcement learning. The environment simulates variables like friction, and a robotic arm tries to grasp an object over and over using different grips. If it stumbles on a good grip, the system tallies that as a victory—if it does something stupid, the system counts that as a defeat. Over many attempts, the robot learns how to do a robust grasp.
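That trial-and-error loop can be sketched in a few lines. The environment, grip names, and success rates below are hypothetical toy stand-ins, not the USC simulator: the learner simply tallies victories and defeats per grip and increasingly favors the one with the best track record.

```python
import random

# Hidden from the learner: each grip strategy's true chance of holding the object.
GRIP_SUCCESS = {"pinch": 0.2, "wrap": 0.8, "hook": 0.5}

def attempt(grip, rng):
    """Simulate one grasp attempt: 1 = victory, 0 = defeat."""
    return 1 if rng.random() < GRIP_SUCCESS[grip] else 0

def train(episodes=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error: mostly exploit the best-scoring grip,
    occasionally explore a random one."""
    rng = random.Random(seed)
    wins = {g: 0 for g in GRIP_SUCCESS}
    tries = {g: 0 for g in GRIP_SUCCESS}
    for _ in range(episodes):
        if rng.random() < epsilon:  # explore
            grip = rng.choice(list(GRIP_SUCCESS))
        else:                       # exploit the current best estimate
            grip = max(wins, key=lambda g: wins[g] / tries[g] if tries[g] else 0.0)
        tries[grip] += 1
        wins[grip] += attempt(grip, rng)
    # Learned value estimate per grip after training
    return {g: wins[g] / tries[g] if tries[g] else 0.0 for g in GRIP_SUCCESS}
```

After enough episodes the estimates converge toward the hidden success rates, and the robust "wrap" grip dominates the robot's choices.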
But in comes a so-called adversarial human actor, a sort of additional signal. If the robot finds a good grasp, the human uses a graphical interface to click on the object it’s gripping and apply a force in a certain direction. That disturbance basically tests how good the grasp really is, and it helps the robot rule out the less effective ones.
“The robot learned to grasp objects much more robustly using this additional signal that the human was providing, but also learned to generalize to new objects much better,” says USC roboticist Stefanos Nikolaidis, coauthor on a new paper describing the work. To put a number on it, when a human was giving the robot tough love, the machine had a 52 percent success rate at grasping, compared to 26.5 percent without the tough love.
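The effect of that human disturbance can be illustrated with a toy sketch. The grip names, strengths, and force ranges here are invented for illustration and are not from the USC paper: a grasp that initially holds is then pushed by an applied force, and only grasps strong enough to survive the push count as victories, which is what rules out the fragile ones.

```python
import random

# Hidden from the learner: how much disturbance force each grip can withstand.
GRIP_STRENGTH = {"fingertip": 0.3, "full-hand": 0.9}

def grasp_survives(grip, force):
    """A grasp survives if its strength exceeds the applied disturbance."""
    return GRIP_STRENGTH[grip] > force

def evaluate(grip, trials=1000, adversary=False, seed=0):
    """Score a grip over many trials, with or without an adversary
    applying deliberately larger disturbance forces."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        if adversary:
            force = rng.uniform(0.25, 0.8)   # human clicks and pushes hard
        else:
            force = rng.uniform(0.0, 0.25)   # only ordinary jitter
        wins += grasp_survives(grip, force)
    return wins / trials
```

Without the adversary, both grips look equally good, so the learner has no reason to prefer one. Under adversarial forces the fragile fingertip grip collapses while the full-hand grip keeps winning, and that difference is the extra signal the robot learns from.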
With as little as 20 minutes of training, the simulated robot was more successful, had a better grasp of the objects it trained on, and was less bothered by disturbances.
You may have seen the video of the Boston Dynamics team, frankly, beating up a robot while training it to pick up and carry objects, and wondered why. Or perhaps you saw the video of robots doing backflips. It’s all in the “tough love” training: some robots learn best when subjected to adversarial input.
It’s important to remember that the experiment above was a simulation. The reality gap between training a simulated robot and getting a physical robot to follow the same regimen is wide, but it is getting smaller every day.
read more at wired.com