DARPA'S ROBO-RACE FIX: CARS THAT THINK


The television cameras all came. The robot makers worked around the clock for months. A $1 million check awaited the winning team. But after months of hype and buildup, the Defense Department's drone-only rally across the Mojave Desert fizzled. No robot could make it past the seventh mile of the 150-mile-long Grand Challenge course.
Now officials at Darpa, the Pentagon's way-out research arm, are trying to get rolling again after the stall-out. The way they propose to do it: build cars that can think for themselves.
The robot racers got stuck in the Mojave because they're half-blind and stone dumb, Darpa officials say. Robots can't make much sense of the world around them, and they don't learn from their experiences of navigating their dimly perceived environment. Imagine not remembering what a pothole is, even after a thousand trips down the freeway. Imagine having to relearn how to swerve around one of the craters on every commute. That's the life of a drone car today, and that's what Darpa is trying to correct in its new project, Learning Applied to Ground Robots, or LAGR.
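That's the pitch behind the "learning" in LAGR: a robot should carry its driving experience forward instead of starting from scratch at every pothole. As a rough illustration only -- not Darpa's actual approach, and every name and number here is invented -- a drone could log a small feature vector for each terrain patch it drives over, record whether the traversal worked, and judge new patches by comparing them to that memory:

```python
import math

class TerrainMemory:
    """Toy nearest-neighbor memory of terrain patches a robot has driven over.

    Each patch is summarized by a small feature vector (say, average height
    and roughness). After every traversal attempt the outcome is stored, so
    the next time a similar-looking patch shows up the robot can guess
    whether it's passable without a hand-coded rule for that obstacle.
    """

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (features, passable) pairs

    def record(self, features, passable):
        """Remember one traversal attempt and how it turned out."""
        self.examples.append((tuple(features), bool(passable)))

    def predict_passable(self, features):
        """Vote among the k most similar remembered patches."""
        if not self.examples:
            return True  # no experience yet: assume drivable
        dists = sorted(
            (math.dist(features, f), passable) for f, passable in self.examples
        )
        votes = [passable for _, passable in dists[: self.k]]
        return votes.count(True) >= votes.count(False)


if __name__ == "__main__":
    memory = TerrainMemory(k=3)
    # Features: (average height in meters, roughness score) -- purely illustrative.
    memory.record((0.05, 0.1), passable=True)   # smooth dirt
    memory.record((0.04, 0.2), passable=True)   # gravel
    memory.record((0.60, 0.9), passable=False)  # boulder
    memory.record((0.50, 0.8), passable=False)  # deep ditch

    print(memory.predict_passable((0.55, 0.85)))  # looks like the boulder -> False
    print(memory.predict_passable((0.06, 0.15)))  # looks like the dirt -> True
```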
Today, most robot cars use stereo cameras and laser range finders to create a map of the area just in front of them, according to Darpa's LAGR proposal. Danger areas are identified in that map, and path-planning software picks a route that seems to be safe. But those algorithms are, for the most part, coded to spot only specific obstacles: rocks or ditches or fallen trees. If something unexpected comes up -- a fence, say -- the robot is out of luck.
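In rough outline, that pipeline turns sensor readings into a grid map, marks the cells judged dangerous, and searches the rest for a route. The sketch below is a generic illustration of the idea -- a breadth-first search over a hand-built occupancy grid -- not code from any Grand Challenge vehicle, and the grid values are made up. Note the failure mode Darpa is complaining about: anything the hand-coded detectors never flag just looks like open ground, and the planner drives straight at it.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid.

    grid[r][c] == 1 marks a cell the perception system flagged as dangerous
    (rock, ditch, fallen tree); 0 is presumed drivable. Returns a list of
    (row, col) cells from start to goal, or None if every route is blocked.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # boxed in: no safe route on this map


if __name__ == "__main__":
    # 0 = clear, 1 = obstacle flagged by the hand-coded detectors.
    # An unrecognized fence would simply show up as a row of 0s.
    occupancy = [
        [0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0],
    ]
    print(plan_path(occupancy, start=(0, 0), goal=(4, 4)))
```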
Darpa's plan to correct this was announced just last week. But agency officials have a pretty clear idea of how they want the three-year LAGR program to run. Darpa will host a series of monthly races between two smallish, 70-centimeter-long robots. At the end of 18 months, the drone with intelligent code should be traversing the mini obstacle course 10 percent quicker than its dumb twin. Eighteen months after that, this brainiac on wheels should be twice as swift as the regular bot.
My Wired News article has details.
THERE'S MORE: A Navy drone will use a "brain-based" controller this summer to "attempt to smoothly and quietly maneuver itself in and out of a docking tube," according to an Office of Naval Research press release.
The biggest human klutz still has more control over his body than the most agile robot. The Navy is trying to correct the imbalance by giving the underwater drone a set of circuits that "mimic the part of the human brain that controls balance and limb movement, known as the olivo-cerebellar system."
