The Trick That Makes Google’s Self-Driving Cars Work


Google’s self-driving cars can tour you around the streets of Mountain View, California.

I know this. I rode in one this week. I saw the car’s human operator take his hands from the wheel and the computer assume control. “Autodriving,” said a woman’s voice, and just like that, the car was operating autonomously, changing lanes, obeying traffic lights, monitoring cyclists and pedestrians, making lefts. Even the way the car accelerated out of turns felt right.

It works so well that it is, as The New York Times' John Markoff put it, "boring." The implications, however, are breathtaking.

Perfect, or near-perfect, robotic drivers could cut traffic accidents, expand the carrying capacity of the nation’s road infrastructure, and free up commuters to stare at their phones, presumably using Google’s many services.

But there’s a catch.

Today, you could not take a Google car, set it down in Akron or Orlando or Oakland and expect it to perform as well as it does in Silicon Valley.

Here’s why: Google has created a virtual track out of Mountain View.

The key to Google's success has been that these cars aren't forced to process an entire scene from scratch. Instead, Google's teams drive and map, in advance, every road the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.

They're probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would simply show that an intersection exists; these maps record its exact geometry, with precision measured in inches.

But the “map” goes beyond what any of us know as a map. “Really, [our maps] are any geographic information that we can tell the car in advance to make its job easier,” explained Andrew Chatham, the Google self-driving car team’s mapping lead.
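To make the idea concrete, here is a minimal, purely illustrative sketch of what such a prior map might look like to the car's software. This is not Google's actual format; every type, field, and function name below is an assumption, meant only to show how pre-surveyed geography could be stored and queried so the car doesn't have to detect everything from scratch.

```python
from dataclasses import dataclass

# Illustrative only: the structure and field names are assumptions,
# not Google's real map format.

@dataclass
class CurbSegment:
    start_xy: tuple      # (x, y) in meters, local map frame
    end_xy: tuple
    height_m: float      # curb height, e.g. 0.15

@dataclass
class TrafficLight:
    position_xyz: tuple  # where the car should expect to see the light
    controls_lane_ids: list

@dataclass
class PriorMapTile:
    """One small tile of pre-surveyed geography loaded before the drive."""
    curbs: list              # list of CurbSegment
    traffic_lights: list     # list of TrafficLight
    lane_centerlines: dict   # lane_id -> list of (x, y) points

def curbs_near(tile: PriorMapTile, x: float, y: float, radius_m: float):
    """Return curb segments within radius of (x, y).

    A perception system could compare these expected features against
    live sensor returns, rather than discovering every curb on the fly.
    """
    def midpoint(c: CurbSegment):
        return ((c.start_xy[0] + c.end_xy[0]) / 2,
                (c.start_xy[1] + c.end_xy[1]) / 2)

    return [c for c in tile.curbs
            if (midpoint(c)[0] - x) ** 2 + (midpoint(c)[1] - y) ** 2
            <= radius_m ** 2]
```

The point of the sketch is the design choice it reflects: as much of the world as possible is resolved ahead of time and handed to the car as data, so that the onboard software's real-time job shrinks to matching what its sensors see against what the map already told it to expect.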

Read the source article at The Atlantic.