
Robot trained in a game-like simulation performs better in real life

Training a robot in a simulation that lets it remember how to get out of sticky situations helps it traverse difficult terrain more smoothly in the real world.

Joonho Lee at ETH Zurich in Switzerland and his colleagues trained a neural network algorithm, designed to control a four-legged robot, in a simulated environment resembling a computer game, filled with hills, steps and stairs.

The researchers told the algorithm which direction it should try to move in, while limiting how quickly it could turn to reflect the capabilities of the real robot. They then set the algorithm making random movements within the simulation, rewarding it for heading the right way and penalising it otherwise. By accumulating rewards, the neural network learned how to negotiate a range of terrain.
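The reward-and-penalty scheme described above can be sketched as a simple scoring function. This is an illustrative toy, not the team's actual reward: the function name, the weights, and the turn-rate limit are all assumptions made for the example.

```python
def locomotion_reward(velocity, target_direction, turn_rate, max_turn_rate=1.0):
    """Toy reward: score progress along the commanded heading and
    penalise turn rates beyond the robot's hardware limit.
    All names and weights here are illustrative assumptions."""
    # Progress: dot product of the robot's planar velocity with the
    # unit vector pointing in the commanded direction.
    progress = (velocity[0] * target_direction[0]
                + velocity[1] * target_direction[1])
    # Penalty only for the portion of the turn rate that exceeds the limit.
    turn_penalty = max(0.0, abs(turn_rate) - max_turn_rate)
    return progress - 0.5 * turn_penalty
```

During training, random movements that happen to score well are reinforced, so the network gradually accumulates behaviours that move it the right way without demanding turns the real machine cannot make.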

Currently, most robots respond in real time to measurements of their surroundings using preprogrammed reactions, encountering every problem as if for the first time. A neural network allows the robot to learn from its mistakes, and by first training the algorithm in a simulation, the team was able to limit risks and reduce costs.

“We can’t bring it to the real terrains we want to train it on, because it would damage the robot, which is very expensive,” says Lee.

The researchers initially used a small neural network that was preprogrammed with knowledge about the simulated environment, enabling the algorithm to learn quickly by taking inputs from virtual sensors and remembering them. They then transferred this knowledge to a larger network used to control the real robot.
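The transfer step above resembles a teacher-student setup: a network with privileged simulator knowledge supervises one that sees only sensor data. The following is a minimal sketch of that idea using linear models and least squares, assuming behaviour cloning as the transfer mechanism; the dimensions, noise level, and linear form are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the "teacher": a fixed linear policy that maps
# privileged simulator state (8 features) to 4 joint-target actions.
W_teacher = rng.normal(size=(4, 8))

# Sample privileged states, and the noisy sensor view the "student" gets.
states = rng.normal(size=(1000, 8))
observations = states + 0.1 * rng.normal(size=states.shape)
actions = states @ W_teacher.T  # teacher's actions on those states

# Distillation: fit the student by least squares so that its actions,
# computed from sensors alone, imitate the teacher's.
W_student, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# Mean absolute imitation error after transfer.
err = np.abs(observations @ W_student - actions).mean()
```

The student ends up approximating the teacher's behaviour without ever seeing the privileged state directly, which is the point of doing the fast learning in a small, well-informed network first.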

Using this experience, the robot was able to move at 0.452 meters per second on mossy ground – more than twice as fast as it can move with its default programming.
