Computing Community Consortium Blog



“How Google’s Self-Driving Car Works”

October 18th, 2011 / in big science, research horizons, Research News / by Erwin Gianchandani

[Image: Google's Autonomous Car: footage of what the on-board computer "sees" and how it detects vehicles, pedestrians, and traffic lights. Courtesy IEEE Spectrum.]

Stanford University professor Sebastian Thrun and Google engineer Chris Urmson — the brains behind Google’s autonomous vehicle project — explained how the self-driving cars work and showed off videos of successful road tests during a recent keynote at the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems in San Francisco.

According to IEEE Spectrum, which has complete coverage of the keynote:

Google’s fleet of robotic Toyota Priuses has now logged more than 190,000 miles (about 300,000 kilometers), driving in city traffic, busy highways, and mountainous roads with only occasional human intervention. The project is still far from becoming commercially viable, but Google has set up a demonstration system on its campus, using driverless golf carts, which points to how the technology could change transportation even in the near future…

[Image: Google's Autonomous Car: the different subsystems that make up the vehicle. Courtesy IEEE Spectrum.]

The slide above depicts the various subsystems that together make up a Google autonomous car:

Urmson, who is the tech lead for the project, said that the “heart of our system” is a laser range finder mounted on the roof of the car. The device, a Velodyne 64-beam laser, generates a detailed 3D map of the environment. The car then combines the laser measurements with high-resolution maps of the world, producing different types of data models that allow it to drive itself while avoiding obstacles and respecting traffic laws.
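To make the “data models” idea concrete, here is a minimal sketch, in Python, of turning laser range returns into a 2D occupancy grid that a planner could consult. The scan format, grid size, and resolution are invented for illustration and are not details of Google’s actual software:

import math

GRID_SIZE = 100      # cells per side (hypothetical)
RESOLUTION = 0.5     # meters per cell (hypothetical)

def scan_to_grid(scan, pose):
    # scan: list of (bearing_rad, range_m) laser returns
    # pose: (x_m, y_m, heading_rad) of the vehicle in the map frame
    x, y, heading = pose
    occupied = set()
    for bearing, rng in scan:
        # Project each return into map coordinates.
        px = x + rng * math.cos(heading + bearing)
        py = y + rng * math.sin(heading + bearing)
        row, col = int(py / RESOLUTION), int(px / RESOLUTION)
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            occupied.add((row, col))
    return occupied

# A vehicle at (5, 5) facing east sees an obstacle about 10 m ahead.
print(scan_to_grid([(0.0, 10.0), (0.1, 10.2)], (5.0, 5.0, 0.0)))

A real system builds far richer models from the 64-beam point cloud, but the grid captures the essential idea: a machine-readable snapshot of the surroundings.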


The vehicle also carries other sensors, which include: four radars, mounted on the front and rear bumpers, that allow the car to “see” far enough to be able to deal with fast traffic on freeways; a camera, positioned near the rear-view mirror, that detects traffic lights; and a GPS, inertial measurement unit, and wheel encoder, that determine the vehicle’s location and keep track of its movements.
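As a rough illustration of how those last three sensors complement one another, this toy sketch dead-reckons from wheel-encoder distances and an IMU heading, then nudges the estimate toward an occasional noisy GPS fix. The blend weight and update loop are assumptions made up for this example:

import math

GPS_WEIGHT = 0.2   # how strongly a noisy GPS fix pulls the estimate (assumed)

def dead_reckon(pos, heading, distance):
    # Advance the estimate using encoder distance and IMU heading.
    return (pos[0] + distance * math.cos(heading),
            pos[1] + distance * math.sin(heading))

def fuse_gps(pos, gps_fix, weight=GPS_WEIGHT):
    # Nudge the dead-reckoned estimate toward the GPS fix.
    return (pos[0] + weight * (gps_fix[0] - pos[0]),
            pos[1] + weight * (gps_fix[1] - pos[1]))

pos, heading = (0.0, 0.0), 0.0
for _ in range(5):                       # five 1-meter encoder ticks
    pos = dead_reckon(pos, heading, 1.0)
pos = fuse_gps(pos, (5.3, 0.4))          # occasional GPS correction
print(pos)                               # about (5.06, 0.08)

A production vehicle would use a proper Kalman-style filter with modeled noise, but the division of labor is the same: smooth, high-rate odometry corrected by absolute but noisy fixes.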

Check out a video of the system in action after the jump…

Still more from IEEE Spectrum:

Two things seem particularly interesting about Google’s approach. First, it relies on very detailed maps of the roads and terrain, something that Urmson said is essential to determine accurately where the car is. Using GPS-based techniques alone, he said, the location could be off by several meters.
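A toy example makes the point: if GPS alone is off by a few meters, aligning landmarks observed by the car against their surveyed positions in the prior map recovers a much better pose. All coordinates and the brute-force search below are invented for illustration:

MAP_LANDMARKS = [(10.0, 0.0), (0.0, 10.0)]   # surveyed map positions
observed = [(4.8, -3.1), (-5.2, 6.9)]        # same landmarks, relative to the car
gps_estimate = (4.0, 2.0)                    # GPS fix, a few meters off

def alignment_error(pose):
    # Sum of squared distances between predicted and mapped landmarks.
    return sum((pose[0] + ox - mx) ** 2 + (pose[1] + oy - my) ** 2
               for (mx, my), (ox, oy) in zip(MAP_LANDMARKS, observed))

# Brute-force search for the pose that best explains the observations.
candidates = [(x / 10.0, y / 10.0) for x in range(100) for y in range(100)]
print("GPS said", gps_estimate, "-> map-corrected pose", min(candidates, key=alignment_error))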


The second thing is that, before sending the self-driving car on a road test, Google engineers drive along the route one or more times to gather data about the environment. When it’s the autonomous vehicle’s turn to drive itself, it compares the data it is acquiring to the previously recorded data, an approach that is useful to differentiate pedestrians from stationary objects like poles and mailboxes.
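That comparison can be sketched very simply: any occupancy in the live scan that the recorded pass does not explain is a candidate moving object. The cell coordinates below are made up:

# Cells seen occupied when engineers drove the route earlier
# (poles, mailboxes, and other fixtures).
prior_static = {(12, 30), (12, 31), (40, 7)}

# Cells occupied in the live scan.
current = {(12, 30), (12, 31), (40, 7), (25, 25)}

# New occupancy not explained by the prior map: candidate
# pedestrians, cyclists, or other vehicles.
print("possible moving objects at cells:", current - prior_static)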


[In the video above]: At one point you can see the car stopping at an intersection. After the light turns green, the car starts a left turn, but there are pedestrians crossing. No problem: It yields to the pedestrians, and even to a guy who decides to cross at the last minute.
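The yielding behavior in that clip reduces to a simple rule: a green light permits the turn only while the crosswalk is clear. A toy predicate, with hypothetical inputs standing in for real perception outputs, might look like:

def may_turn_left(light_is_green, pedestrians_in_crosswalk):
    # A green light alone is not sufficient; yield to pedestrians first.
    return light_is_green and not pedestrians_in_crosswalk

print(may_turn_left(True, True))    # False: wait for the crosswalk to clear
print(may_turn_left(True, False))   # True: clear to turn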

And ultimately:

Thrun and his Google colleagues, including co-founders Larry Page and Sergey Brin, are convinced that smarter vehicles could help make transportation safer and more efficient: Cars would drive closer to each other, making better use of the 80 percent to 90 percent of empty space on roads, and also form speedy convoys on freeways. They would react faster than humans to avoid accidents, potentially saving thousands of lives. Making vehicles smarter will require lots of computing power and data, and that’s why it makes sense for Google to back the project, Thrun said in his keynote.
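Some back-of-the-envelope arithmetic shows why tighter following distances buy so much capacity. Assuming, purely for illustration, a 2-second human following headway versus a 0.5-second automated one at highway speed:

SPEED_MS = 27.0        # roughly 100 km/h
CAR_LENGTH_M = 4.5

def vehicles_per_hour(headway_s):
    # Each vehicle occupies its own length plus the following gap.
    gap_m = SPEED_MS * headway_s
    return 3600.0 * SPEED_MS / (CAR_LENGTH_M + gap_m)

print("human drivers (~2.0 s headway): %.0f veh/h" % vehicles_per_hour(2.0))
print("tight convoys (~0.5 s headway): %.0f veh/h" % vehicles_per_hour(0.5))

Under these assumed numbers, per-lane throughput roughly triples (about 1,700 versus 5,400 vehicles per hour), which is consistent with the claim that most road space currently goes unused.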


Urmson described another scenario they envision: Vehicles would become a shared resource, a service that people would use when needed. You’d just tap on your smartphone, and an autonomous car would show up where you are, ready to drive you anywhere. You’d just sit and relax or do work.
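The dispatch step of that scenario is easy to caricature in code: assign the rider to the nearest idle car. The fleet positions and distance metric below are, of course, made up:

import math

fleet = {"car_a": (0.0, 0.0), "car_b": (3.0, 4.0), "car_c": (10.0, 1.0)}

def dispatch(rider_pos):
    # Return the id of the closest available vehicle.
    return min(fleet, key=lambda car: math.dist(fleet[car], rider_pos))

print(dispatch((2.5, 3.5)))   # car_b is closest to this rider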


He said they put together a video showing a concept called Caddy Beta that demonstrates the idea of shared vehicles — in this case, a fleet of autonomous golf carts. The carts, he said, are much simpler than the Priuses in terms of on-board sensors and computers; in fact, they communicate with sensors in the environment to determine their location and “see” incoming traffic.
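Localizing from fixed sensors in the environment, rather than from expensive on-board ones, can be sketched as range-based trilateration: given measured distances to surveyed beacons, search for the position that best explains them. The beacon layout and measurements are invented for this example:

import math

BEACONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # surveyed beacon positions
ranges = [10.0, 8.9, 6.3]                          # measured distances in meters

def range_error(pos):
    # Squared mismatch between predicted and measured beacon ranges.
    return sum((math.dist(pos, b) - r) ** 2 for b, r in zip(BEACONS, ranges))

# Brute-force least-squares over a half-meter grid.
grid = [(x / 2.0, y / 2.0) for x in range(21) for y in range(21)]
print("estimated cart position:", min(grid, key=range_error))   # near (6.0, 8.0)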

To learn much more about the project, read the entire IEEE Spectrum article here (complete with video of the Caddy Beta described above), and watch the first part of the Thrun/Urmson keynote below — in which they describe their experiences in DARPA’s autonomous vehicle Grand Challenge.

(Contributed by Erwin Gianchandani, CCC Director)

