A confidence booster for you, puny human: The machines don’t have you beat. Yet.
Yes, the autonomous car is coming, and fast. Tesla delivered the first of its much-anticipated Model 3s last week, complete with the Autopilot feature that allows the cars to drive themselves on well-marked highways. The Mercedes-Benz S-Class can conquer a roundabout on its own. Companies like Google, General Motors, and Uber are testing autonomous vehicles in crowded cities like San Francisco, Pittsburgh, and Boston, racking up miles and learning new tricks. In the next few years, they will invade at full speed.
But it is still dawn in autonomy-land, and at least for now, humanity holds an advantage: For all their sensors and computers, robocars still don’t see or understand the world as well as we do with our eyeballs, ear canals, and brain folds. That’s the takeaway from a new paper by Brandon Schoettle, a researcher at the University of Michigan Transportation Research Institute, who breaks down today’s man-vs.-machine battle with a focus on the capabilities of the sensors that dot autonomous vehicles.
“You’re probably safer in a self-driving car than with a 16-year-old, or a 90-year-old,” says Schoettle. “But you’re probably significantly safer with an alert, experienced, middle-aged driver than in a self-driving car.” (Vindication for those 40-somethings feeling past their prime).
Of course, there are catches: Drunk, tired, and distracted humans are bad drivers, as are those whose eyeglass prescriptions are slowly expiring. Today’s self-driving cars are consistent, but they can be too cautious and too easily confused by unusual or bizarre situations.
Indeed, competent drivers are still better than rolling computers in key areas. That’s partly because of the limitations of these cars’ sensors, which include cameras (for seeing), stereo cameras (for three-dimensional seeing), long- and mid-range radar (for seeing at a distance, through weather), short-range radar (for seeing nearby objects), and lidar (for seeing with more granularity, no natural light needed)—plus the computing power to put it all together in real time.
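To get a feel for how those overlapping feeds fit together, here is a minimal sketch in Python. Every range, field of view, and the simple "who can see it" check are illustrative assumptions, not figures from Schoettle's paper or any real vehicle.

```python
# Toy sketch of an AV sensor suite and a naive "who can see it" check.
# All ranges, fields of view, and the fusion step are illustrative
# assumptions, not specifications from the paper or any real vehicle.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    purpose: str
    range_ft: float           # assumed typical operating range
    field_of_view_deg: float  # assumed horizontal coverage

SUITE = [
    Sensor("camera", "seeing", 200, 60),
    Sensor("stereo camera", "three-dimensional seeing", 100, 50),
    Sensor("long-range radar", "seeing at a distance, through weather", 820, 20),
    Sensor("short-range radar", "seeing nearby objects", 100, 120),
    Sensor("lidar", "fine-grained seeing, no natural light needed", 650, 360),
]

def sensors_that_see(suite, distance_ft, bearing_deg):
    """Which sensors could plausibly detect an object at this distance and
    bearing? The real-time fusion job is stitching these overlapping,
    partial views into one coherent picture of the road."""
    return [
        s.name
        for s in suite
        if distance_ft <= s.range_ft and abs(bearing_deg) <= s.field_of_view_deg / 2
    ]

# Example: a car 300 feet ahead, dead center. Only long-range radar and
# lidar report it; the cameras and short-range radar are out of range.
print(sensors_that_see(SUITE, 300, 0))
```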
People can drive within lanes, even if the markings are faded or disappear altogether. They can cleanly brake for cats and roll through plastic bags that look vaguely like cats. And they are much better at edge detection, seeing where one object ends and another begins.
Figure: Example illustration (drawn to scale) of the various sensors, with reasonable estimates of coverage area (field of view) and typical operating ranges, for both a human-driven vehicle and a hypothetical AV. Credit: Sustainable Worldwide Transportation/University of Michigan
Humans also have fun driving tricks that the robots have yet to replicate, because human brains are much, much better at processing huge amounts of information. Humans are still better at reading signs, especially ones with lots of words. Non-superpeople can’t see through walls, but if a giant SUV is directly in front of a human driver, they may be able to peer through its windshield to get a sense of what’s ahead. Sensors are fixed, but humans can crane their necks out windows to see what’s up. People have that fun, nonverbal communication thing going on: they can nod, wave other vehicles ahead, make eye contact that says, “I know you’re here, and I won’t threaten your life with my hunk of steel.”
Humans are also more adaptable—a trait that occasionally serves as their downfall. But it also keeps traffic moving. Schoettle offers the example of turning into traffic from an office parking lot. A self-driving car might be set to wait for a 100-foot gap, and if that doesn’t come for twenty minutes because it’s rush hour, it won’t budge. People will adjust their driving—they’ll edge forward, looking for the cue from that one friendly driver willing to let them scoot in. “Some people’s adjusted criteria is poorly adjusted,” says Schoettle. “But it’s what helps things flow. It’s a tall order to program a computer to be able to have that flexibility that the human driver has.”
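To make that rigidity concrete, here is a toy sketch of a fixed gap-acceptance rule next to one that relaxes the longer the driver waits. The 100-foot criterion comes from the example above; the relaxation rate and the floor are invented for illustration, and none of this is Schoettle's actual model.

```python
# Toy gap-acceptance sketch: a fixed rule versus one that adapts as the
# wait drags on. The 100-foot criterion is from the example in the text;
# the relaxation rate and the 60-foot floor are illustrative assumptions.

FIXED_GAP_FT = 100.0

def robot_will_go(gap_ft):
    """A rigidly programmed car: pull out only if the gap meets the criterion."""
    return gap_ft >= FIXED_GAP_FT

def human_will_go(gap_ft, minutes_waiting, floor_ft=60.0, relax_ft_per_min=5.0):
    """A human-ish driver: the acceptable gap shrinks the longer they wait,
    down to some floor. That adaptability keeps traffic flowing, and it is
    also where 'poorly adjusted' criteria come from."""
    accepted_gap = max(floor_ft, FIXED_GAP_FT - relax_ft_per_min * minutes_waiting)
    return gap_ft >= accepted_gap

# Rush hour: gaps hover around 75 feet. The fixed rule never budges;
# the adaptive one takes the gap after about five minutes of waiting.
for minutes in (0, 5, 10, 20):
    print(f"{minutes:>2} min waiting  robot: {robot_will_go(75)}  human: {human_will_go(75, minutes)}")
```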
So: sensing, perception, reasoning—on those counts, humans often have the machines beat. But not always. Self-driving cars already seem better at navigating in the dark, for one. Human eyes can only see about 250 feet at night, and headlights only reach so far. The robocar’s radar can see about 820 feet, and good lidar sensors go nearly as far—and in 360 degrees. Machines can react faster than humans, too: about 0.5 seconds on a dry road compared to 1.6 seconds for the meatbags.
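A quick back-of-the-envelope calculation shows what those numbers mean at speed. The 70 mph figure and the roughly 800-foot lidar range ("nearly as far" as the radar) are my assumptions; the reaction times and the other sight distances come from the comparison above.

```python
# Back-of-the-envelope: how far a car travels during the reaction delay
# alone, before braking even starts, and how much lead time each sight
# distance buys. Speed is an assumed 70 mph; the reaction times and the
# 250/820-foot figures come from the comparison above, and the 800-foot
# lidar range is one reading of "nearly as far."

MPH_TO_FPS = 5280 / 3600       # 1 mph is about 1.47 ft/s
speed_fps = 70.0 * MPH_TO_FPS  # roughly 103 ft/s

reaction_s = {"machine": 0.5, "human": 1.6}
sight_ft = {"human eyes at night": 250, "radar": 820, "good lidar (assumed)": 800}

for who, t in reaction_s.items():
    # Distance covered while merely reacting, with no braking yet.
    print(f"{who}: {t} s reaction means {speed_fps * t:.0f} ft of travel")

for what, d in sight_ft.items():
    # Seconds of warning that sight distance buys at 70 mph.
    print(f"{what}: {d} ft of range means {d / speed_fps:.1f} s of lead time")
```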
And though human drivers have that whole eye contact thing down pat, vehicle-to-vehicle communications could help autonomous vehicles do even better. Technologies like dedicated short-range communications or 5G cellular networks (that one’s on the way) could let networks of cars talk about what’s happening on the road. If a truck encounters a patch of ice, it could warn everyone behind it. If a motorbike three cars up suddenly stops, V2V systems could warn your car what’s up, and have it brake before you’d see any reason to get off the gas.
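Real DSRC and 5G-based messages are standardized and far richer than this, but as a rough sketch of the idea, a hazard report rippling back through a line of cars might look something like the following (all message fields and relay logic here are invented for illustration):

```python
# Toy sketch of a V2V-style hazard warning rippling back through a line of
# cars. The message fields and relay logic are invented for illustration;
# real DSRC or 5G-based V2X messages are standardized and far richer.

def make_warning(sender_id, hazard, location_ft):
    return {"from": sender_id, "hazard": hazard, "location_ft": location_ft}

def relay_warning(platoon, warning):
    """Every car behind the sender gets the warning and starts braking
    before its own sensors, or its driver's eyes, could see the hazard."""
    behind_sender = False
    for car in platoon:
        if car["id"] == warning["from"]:
            behind_sender = True
            continue
        if behind_sender:
            car["braking"] = True
            distance = warning["location_ft"] - car["position_ft"]
            print(f"{car['id']} braking for {warning['hazard']} {distance} ft ahead")

# Four vehicles in a line; the lead truck hits ice and warns everyone behind it.
platoon = [
    {"id": "truck", "position_ft": 400, "braking": False},
    {"id": "car_1", "position_ft": 300, "braking": False},
    {"id": "car_2", "position_ft": 200, "braking": False},
    {"id": "you",   "position_ft": 100, "braking": False},
]
relay_warning(platoon, make_warning("truck", "an ice patch", 450))
```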
So yes, fully self-driving cars aren’t here yet. They won’t be everywhere for many years. But they’re hustling, and they’re getting better. You’re going to have to keep improving those driving skills—and stay off your dang cellphone—to keep up.