If you’re looking to automate your chores, don’t count on robots just yet. But don’t count them out in a few years, either. Today you might pick up a Roomba to handle the vacuuming (and to make YouTube videos showing your cat riding around on top of one), or you could pony up a cool $22,000 for a Baxter, a robot from Rethink Robotics that can handle simple, repetitive chores.
But tomorrow? We’re learning how to get past many of the hurdles that robots face as they come into their own, and it may not be all that many years before they’re doing useful work in the home, not to mention in factory settings where specialized machines are already common. I’m looking at the Robo Brain project, which has deep-pocketed sponsors like Google and Microsoft and draws on researchers at Brown, UC Berkeley, Stanford and Cornell.
Steep learning curve
Robo Brain is an attempt by researchers at these four schools to build a cloud-based system that future robots can use to access information. Robots need a lot of information that you and I take for granted simply because we’re human beings; to us, tasks like avoiding sidewalk collisions, not spilling hot liquids, and picking up a pen sitting by a computer keyboard seem trivial. But a robot has to be taught absolutely every move in each of these common scenarios.
In fact, the robots we have today are working against even steeper odds. Because they’re highly specialized (the Roomba does floors, not windows!), they can’t adapt well to new situations. If we want to build a robot to handle a task like straightening up a room for company, we need to hand-code the necessary software, or else find comparable code from another robot programmer. Robo Brain hopes to get around the problem by putting shared data resources online, laying the groundwork for collaborative robotics and common robot algorithms.
The servers carrying Robo Brain have thus far soaked up over 100,000 YouTube videos, 1 billion images, and 1 million documents, but this is just the beginning. Robo Brain is set to expand to 10 universities by October, with access opening to applicants from other schools by the end of the year. Before long, the system will have acquired 1 billion videos and 10 billion documents as it goes about the enormous task of making the world make sense to a robot.
That’s no easy task, because at every step of the way, a robot has to learn how to recognize objects. A kitchen helper robot, for example, will need to know what a toaster is, why it is different from a waffle iron, and what the handle on the side of a coffee cup is for. Moreover, it needs to know how to grasp these objects without spilling whatever may be in or on them, and, if we want it to be truly useful, how to bring us objects as we need them while we cook. Robo Brain’s “structured deep learning” models are the software core that translates all this data into action.
If things work out as the Robo Brain project hopes, our novice robot should be able to acquire all of this information by tapping into a system that allows robots to query it and build on the result, a system open to robots everywhere. Thus we wind up with robots that can receive a request, work through computer routines and commands to master it, and take appropriate action.
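For readers who like to see the idea made concrete, the loop described above (receive a request, query the shared knowledge base, act on the answer) can be sketched in a few lines of Python. This is purely a toy illustration, not Robo Brain’s actual interface: the `KnowledgeBase` class, its `query` method, and the canned answers are all hypothetical stand-ins for whatever the project eventually exposes to robots.

```python
# A toy sketch of the query-and-act loop: KnowledgeBase stands in
# for a cloud service like Robo Brain. Its "knowledge" is hard-coded
# here for illustration; a real system would return learned models.

class KnowledgeBase:
    def __init__(self):
        # Canned knowledge: how to grasp a few household objects.
        self._grasps = {
            "coffee cup": "grip the handle and keep the cup level",
            "pen": "pinch near the tip and lift vertically",
        }

    def query(self, obj):
        # Unknown objects fall back to asking a human for help.
        return self._grasps.get(obj, "unknown object: ask for help")

def handle_request(kb, obj):
    # The robot's side of the loop: query, then turn the answer
    # into an action (here, just a sentence describing the plan).
    plan = kb.query(obj)
    return f"To pick up the {obj}: {plan}"

kb = KnowledgeBase()
print(handle_request(kb, "coffee cup"))
# → To pick up the coffee cup: grip the handle and keep the cup level
```

The point of the design is the fallback in `query`: a robot that can say “I don’t know, ask for help” and then have the answer added to the shared store is exactly the kind of collective learner the project envisions.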
There’s your home helper, or perhaps a caregiver robot in a nursing situation, getting better by the day as it programs itself through knowledge routines embedded in cloud computing servers. I suspect this will be slow going for the first few years, but who knows? Give our robots a decade or two and the idea of a mechanical helper in the home may start to seem like second nature.
Paul A. Gilster is the author of several books on technology. Reach him at email@example.com.