Ever thought, “I’d really like a game of table tennis,” but had no one to play with? Well, do we have the scientific breakthrough for you! Google DeepMind has just unveiled a robot that could give you a run for your money in a match, but don’t assume you’d be in for a trouncing – the engineers say their robot plays at a “solidly amateur” level.
From nightmare-inducing faces to team-working robo-snails to the now happily retired Atlas, it seems we’re never far away from another incredible feat of robotics technology. But there are still a lot of things humans can do that robots haven’t quite achieved.
When it comes to speed and performance in physical tasks, engineers are still striving to build machines that can mimic human abilities, and now a team at DeepMind has taken a step towards that goal with the creation of their table-tennis-playing robot.
“[C]ompetitive matches are often breathtakingly dynamic, involving complex motion, rapid eye-hand coordination, and high-level strategies that adapt to the opponent’s strengths and weaknesses,” the team writes in their new preprint, which is yet to be published in a peer-reviewed journal. These aspects set something like table tennis apart from pure strategy games like chess, which robots are already mastering (albeit with somewhat… mixed results).
Human players spend years training to build up their skills. The DeepMind team wanted to build a robot that could provide legitimate competition and an enjoyable experience for a human opponent, and they claim that theirs is the first to reach these milestones.
They designed a library of “low-level skills” coupled with a “high-level controller” that selects the most effective skill in each situation. As explained in the team’s announcement of their innovation, the skill library includes a variety of techniques you might call upon during a table tennis match, such as forehand topspin, backhand targeting, and returning serves. The controller uses descriptions of these skills, together with data about how the game is progressing and the skill level of its opponent, to select the best option that is within the robot’s physical capabilities.
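If you’re curious how that hierarchy might fit together, here’s a minimal Python sketch of the general idea: a skill library plus a controller that picks the most promising applicable skill. All of the names, state fields, and success rates are our own illustrative assumptions, not DeepMind’s actual code.

```python
# Illustrative sketch only: a library of low-level skills and a high-level
# controller that picks whichever applicable skill it currently rates highest.
# The GameState fields, skill names, and scores are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GameState:
    ball_on_forehand_side: bool   # which side the incoming ball is headed to
    incoming_is_serve: bool       # whether the opponent just served

@dataclass
class Skill:
    name: str
    applicable: Callable[[GameState], bool]  # can this skill handle the state?
    estimated_return_rate: float             # learned estimate of success

SKILL_LIBRARY = [
    Skill("forehand_topspin", lambda s: s.ball_on_forehand_side, 0.78),
    Skill("backhand_drive", lambda s: not s.ball_on_forehand_side, 0.71),
    Skill("serve_return", lambda s: s.incoming_is_serve, 0.64),
]

def high_level_controller(state: GameState) -> Skill:
    """Pick the applicable skill with the best estimated success rate."""
    candidates = [sk for sk in SKILL_LIBRARY if sk.applicable(state)]
    return max(candidates, key=lambda sk: sk.estimated_return_rate)

if __name__ == "__main__":
    state = GameState(ball_on_forehand_side=True, incoming_is_serve=False)
    print(high_level_controller(state).name)  # -> forehand_topspin
```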
The robot started off with a small amount of human data and was then trained through simulations that allowed it to build its skills through reinforcement learning. Playing against humans helped it continue to learn and adapt. You can see for yourself in the footage below how that went.
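To get a feel for that pipeline in miniature, here’s a toy sketch (very much not the paper’s actual reinforcement learning setup): the robot’s per-skill success estimates are seeded from a handful of made-up “human” examples, then refined over thousands of simulated rallies. The hidden success probabilities stand in for a physics simulator.

```python
# Toy illustration of the train-then-adapt loop, not DeepMind's method:
# seed estimates from a small "human" dataset, then refine them over many
# simulated rallies. The "simulator" is a coin flip with a hidden per-skill
# success probability. All data and numbers are invented for illustration.
import random

HUMAN_DATA = [("forehand_topspin", True), ("backhand_drive", False),
              ("forehand_topspin", True), ("serve_return", True)]
TRUE_SUCCESS = {"forehand_topspin": 0.8, "backhand_drive": 0.7,
                "serve_return": 0.6}  # hidden ground truth for the mock sim

estimates = {name: 0.5 for name in TRUE_SUCCESS}  # start uninformed
counts = {name: 1 for name in TRUE_SUCCESS}       # pseudo-count prior

def update(skill: str, succeeded: bool) -> None:
    """Running-average update of a skill's estimated success rate."""
    counts[skill] += 1
    estimates[skill] += (float(succeeded) - estimates[skill]) / counts[skill]

for skill, succeeded in HUMAN_DATA:     # 1. bootstrap from human examples
    update(skill, succeeded)

for _ in range(10_000):                 # 2. refine through simulated play
    skill = random.choice(list(TRUE_SUCCESS))
    update(skill, random.random() < TRUE_SUCCESS[skill])

print({k: round(v, 2) for k, v in estimates.items()})
```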
“Truly awesome to watch the robot play players of all levels and styles. Going in, our aim was to have the robot be at an intermediate level. Amazingly it did just that, all the hard work paid off,” said professional table tennis coach Barney J. Reed, who helped out with the project. “I feel the robot exceeded even my expectations.”
The team held competitive matches, pitting the robot against 29 humans with a range of skills from beginner to advanced+. The matches used the standard rulebook, with one important adaptation – the robot was not physically capable of serving the ball.
Against the beginners, the robot won all its matches; by contrast, it lost all the matches against advanced and advanced+ players. Against the intermediate opponents, it won 55 percent of the time, leading the team to judge that it had reached an intermediate human skill level.
Importantly, all the opponents, regardless of skill level, rated the matches highly for being “fun” and “engaging” – even when they were able to exploit the robot’s weaknesses, they had a good time doing so. The advanced players felt such a system could be more useful than a ball machine as a training aid.
So, we probably won’t be seeing a robot team at the Olympics any time soon, but as a training aid, it definitely has potential. And as for what the future holds – who knows?
The preprint is posted to arXiv.