By Ben Nitkin
Now that you know what the IGVC is about, I'll go into some detail about our high-level approach. The challenge provides some obvious requirements:
- A reference frame relative to earth to navigate to waypoints; GPS provides position and compass provides heading
- Line detection to stay within lanes; a color camera suffices
- Obstacle detection; a 2D rangefinder is the logical choice
- Precise measurement of velocity; provided by encoders, accelerometers, and gyros
I'll go through each requirement in a bit of depth, describing typical solutions and our design constraints.
An absolute reference tells the robot exactly where in the world it is. GPS and compass provide that data. Unfortunately, GPS is only accurate to about 6', so it isn't useful for local navigation: a 2' difference in position is the difference between hitting an obstacle and avoiding it. GPS is still useful for long-range direction: the goal is 60' southwest.
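For that long-range job, the only math the GPS really needs to support is "which way, and how far, to the next waypoint." Here's a minimal sketch of that calculation, assuming a flat-earth (equirectangular) approximation, which is fine at course scale where GPS error dominates anyway; the function name and signature are mine, not the team's code.

```python
import math

def waypoint_vector(lat, lon, goal_lat, goal_lon):
    """Rough distance (m) and compass bearing (deg) from the robot's GPS
    fix to a waypoint. An equirectangular approximation is plenty at
    course scale, where GPS error already dominates."""
    R = 6371000.0  # mean Earth radius, meters
    dlat = math.radians(goal_lat - lat)
    dlon = math.radians(goal_lon - lon) * math.cos(math.radians(lat))
    north = dlat * R
    east = dlon * R
    distance = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360  # 0 = north, 90 = east
    return distance, bearing
```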
Most teams improve their GPS accuracy with exotic antennas and correction services. (The speed of light varies through the atmosphere, distorting time-of-flight measurements and thus the computed position; a correction signal supplies the difference between true and measured time-of-flight, improving accuracy.) Although these units offer remarkable resolution (within 6"), they are prohibitively expensive ($20,000).
Lafayette's team, named Team Terminus, will use an inexpensive GPS receiver, accurate to roughly 6'. It costs less than $100. (Cost, by the way, will be a persistent theme. Most IGVC robots run $20,000 to $80,000; Lafayette budgeted Terminus $6000.)
The robot also needs to avoid lane lines and obstacles. Those same features provide relative localization data, letting the robot estimate its current position relative to a previous one. A reliable depth sensor can generate a 2D profile accurate to a few centimeters; it's easy to compare current and previous obstacle locations to derive movement.
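To make "compare current and previous obstacle locations" concrete, here's a deliberately toy sketch. It assumes the points from the two scans are already matched one-to-one; real scan matchers (ICP and friends) have to establish that correspondence themselves and also recover rotation, but the core idea is the same.

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    """Toy relative-localization step: given matched 2D obstacle points
    from consecutive scans (N x 2 arrays, same ordering), estimate how
    far the robot moved between them."""
    # If the world is static, obstacles appear to shift opposite to the
    # robot's motion, so the robot's translation is the negative mean shift.
    return -np.mean(curr_pts - prev_pts, axis=0)
```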
Most teams use a camera for line detection and a LIDAR for ranging. LIDAR, a shorter-wavelength cousin of RADAR, emits a beam of light and calculates distance from the round-trip time. Spinning the sensor yields a 2D dataset of angles and measured ranges. Most navigation stacks are built around LIDAR units, since they provide high-accuracy, low-latency measurements. Unfortunately, light moves at about a foot per nanosecond, and the sort of timing required for LIDAR is expensive - units run between $5,000 and $20,000.
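A LIDAR scan arrives as exactly that list of angles and ranges; turning it into obstacle points a planner can use is a one-line polar-to-Cartesian conversion. A minimal sketch (the names and the 20 m cutoff are illustrative, not tied to any particular unit):

```python
import numpy as np

def scan_to_points(angles_rad, ranges_m, max_range=20.0):
    """Convert a LIDAR scan (parallel arrays of beam angle and measured
    range) into 2D points in the sensor frame, dropping out-of-range hits."""
    angles = np.asarray(angles_rad)
    ranges = np.asarray(ranges_m)
    valid = (ranges > 0) & (ranges < max_range)
    x = ranges[valid] * np.cos(angles[valid])
    y = ranges[valid] * np.sin(angles[valid])
    return np.column_stack((x, y))
```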
Team Terminus is using a less expensive ranging method: stereo disparity mapping. Our robot has two onboard cameras, spaced 6" or so apart. By comparing synchronized images from the cameras, software can match features, measure how far each feature shifts between the two images, and estimate range. Stereo is neither as reliable nor as accurate as LIDAR, but the two cameras cost $40 rather than $5000, and that counts. I'll describe the cameras more in another post, since setting them up occupied the bulk of the semester. (Between figuring out synchronization, missing documentation, burning out a camera, and driver issues, we had our hands full.)
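The geometry behind disparity mapping is compact: a feature that shifts d pixels between the left and right images sits at depth Z = f·B/d, where f is the focal length in pixels and B is the camera spacing (the baseline). A sketch with illustrative numbers rather than our actual camera parameters (in practice a library like OpenCV does the feature matching and produces the disparity image):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo geometry: a feature shifted by d pixels between the left
    and right images lies at depth Z = f * B / d, where f is the focal
    length in pixels and B is the camera baseline in meters."""
    disparity = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)

# Example (made-up numbers): a 700-pixel focal length and a 6" (0.15 m)
# baseline put a 10-pixel disparity at about 10.5 m. Small disparities mean
# large, uncertain depths, which is why stereo is noisier than LIDAR.
```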
Other localization methods are common across robots. Wheel encoders measure how far each wheel has rotated, with resolutions around 1/400th of a rotation; integrating those rotations provides one position estimate. An onboard accelerometer provides another. That redundancy helps the robot maintain an accurate position estimate: the wheels may slip, the cameras may lose their fix, or the GPS might jump a few meters, but combining the odometry estimates yields a solid position.
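For completeness, here's roughly what "integrating the rotation" looks like for a differential-drive robot. The wheel radius and track width below are placeholders, not Terminus's actual dimensions; only the ~1/400-revolution encoder resolution comes from above.

```python
import math

TICKS_PER_REV = 400      # encoder resolution (~1/400th of a rotation)
WHEEL_RADIUS = 0.15      # meters; illustrative value
WHEEL_BASE = 0.6         # distance between wheels, meters; illustrative

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one encoder update for a differential-drive robot.
    Each wheel's tick count since the last update becomes an arc length;
    the average of the two arcs advances the robot, their difference turns it."""
    left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    forward = (left + right) / 2.0
    turn = (right - left) / WHEEL_BASE
    x += forward * math.cos(theta + turn / 2.0)
    y += forward * math.sin(theta + turn / 2.0)
    theta += turn
    return x, y, theta
```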