In today’s New York Times’ weekly “Science Times,” science writer John Markoff pens a feature about the state of robotics research — including the many challenges to enabling robots to mimic humans’ basic capabilities of motion and perception.
The robotics pioneer Rodney Brooks often begins speeches by reaching into his pocket, fiddling with some loose change, finding a quarter, pulling it out and twirling it in his fingers.
The task requires hardly any thought. But as Dr. Brooks points out, training a robot to do it is a vastly harder problem for artificial intelligence researchers than IBM’s celebrated victory on “Jeopardy!” this year with a computer named Watson.
Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world.
Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).
Yet the challenges remain immense, far higher than artificial intelligence hurdles like speaking and hearing.
“All these problems where you want to duplicate something biology does, such as perception, touch, planning or grasping, turn out to be hard in fundamental ways,” said Gary Bradski, a vision specialist at Willow Garage, a robot development company based here in Silicon Valley.
“It’s always surprising, because humans can do so much effortlessly.”
Markoff notes several federal initiatives to further robotics R&D, including the Administration’s announcement last month of a new $500 million program to pursue advanced manufacturing — with significant emphasis on robotics technologies — as well as DARPA’s Autonomous Robot Manipulation (ARM) project “to develop robotic arms and hands one-tenth as expensive as today’s systems, which often cost $100,000 or more.”
Recently at an SRI laboratory here, two Stanford University graduate students, John Ulmen and Dan Aukes, put the finishing touches on a significant step toward human capabilities: a four-fingered hand that will be able to grasp with a human’s precise sense of touch.
Each three-jointed finger is made in a single manufacturing step by a three-dimensional printer and is then covered with “skin” derived from the same material used to make the touch-sensitive displays on smartphones.
“Part of what we’re riding on is there has been a very strong push for tactile displays because of smartphones,” said Pablo Garcia, an SRI robot designer who is leading the project’s design along with Robert Bolles, an artificial intelligence researcher.
“We’ve taken advantage of these technologies,” Mr. Garcia went on, “and we’re banking on the fact they will continue to evolve and be made even cheaper.”
Still lacking is a generation of software that is powerful and flexible enough to do tasks that humans do effortlessly. That will require a breakthrough in machines’ perception.
“I would say this is more difficult than what the Watson machine had to do,” said Gill Pratt, the computer scientist who is the program manager in charge of DARPA’s ARM project.
“The world is composed of continuous objects that have various shapes” that can obscure one another, he said. “A perception system needs to figure this out, and it needs the common sense of a child to do that.”
Check out the full story here.
And be sure to watch the accompanying video…
…and listen to Markoff talk about the article at the beginning of today’s Times’ Science Times Podcast:
(Contributed by Erwin Gianchandani, CCC Director)