To fully realize the benefit of a mobile sensor platform, it is essential to know where in space the platform is at each instant in time. This information is required to create precise and consistent wide-area maps, and to accurately monitor large-scale spatiotemporal phenomena. We are investigating the use of combined visual and inertial sensing to determine the ego-motion of the experiment platform (i.e., the robot or actuated sensor node). Our approach fuses motion estimates from stereo cameras and an inertial measurement unit (IMU), and is suitable for situations in which GPS signals are unavailable, for example when operating under forest canopy. Results with a robotic helicopter platform demonstrate that positioning accuracy to within 1% of the measured GPS value is possible over flight distances of more than 400 meters.
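To illustrate the kind of loosely-coupled fusion described above, the following is a minimal sketch in Python of one possible scheme: a linear Kalman filter over position and velocity, where IMU accelerations drive the prediction step and stereo visual-odometry position estimates provide the correction step. The class name, noise parameters, and toy data below are illustrative assumptions, not the filter formulation used on the helicopter platform.

```python
import numpy as np

class VioKalmanFilter:
    """Minimal loosely-coupled visual-inertial fusion sketch (illustrative).

    State: [position (3), velocity (3)] in a world frame.
    """

    def __init__(self, accel_noise=0.5, vo_noise=0.05):
        self.x = np.zeros(6)              # [p, v]
        self.P = np.eye(6) * 1e-3         # state covariance
        self.q_accel = accel_noise ** 2   # IMU acceleration noise (assumed)
        self.r_vo = vo_noise ** 2         # stereo VO measurement noise (assumed)

    def predict(self, accel_world, dt):
        """Propagate the state with a world-frame acceleration over dt seconds."""
        F = np.eye(6)
        F[0:3, 3:6] = np.eye(3) * dt
        B = np.vstack([np.eye(3) * 0.5 * dt**2, np.eye(3) * dt])
        self.x = F @ self.x + B @ accel_world
        Q = B @ B.T * self.q_accel
        self.P = F @ self.P @ F.T + Q

    def update_vo(self, position_meas):
        """Correct the state with a stereo visual-odometry position estimate."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = np.eye(3) * self.r_vo
        y = position_meas - H @ self.x
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P


if __name__ == "__main__":
    # Toy run: constant 1 m/s^2 acceleration along x, noisy IMU at 100 Hz,
    # noisy VO position fixes at 10 Hz (all values are made up for illustration).
    kf = VioKalmanFilter()
    dt = 0.01
    true_p, true_v = np.zeros(3), np.zeros(3)
    accel = np.array([1.0, 0.0, 0.0])
    rng = np.random.default_rng(0)
    for k in range(1000):
        true_v += accel * dt
        true_p += true_v * dt
        kf.predict(accel + rng.normal(0, 0.5, 3), dt)
        if k % 10 == 0:
            kf.update_vo(true_p + rng.normal(0, 0.05, 3))
    print("estimated position:", kf.x[:3])
    print("true position:     ", true_p)
```

In a GPS-denied setting, the correction step would rely entirely on the visual-odometry estimates, with the IMU bridging the gaps between camera frames; the sketch omits orientation estimation and sensor biases, both of which matter in practice.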