We propose a vision-based SLAM algorithm that incorporates feature descriptors derived from multiple views of a scene, capturing variations in illumination and viewpoint. These descriptors are extracted from video and then applied to the challenging task of wide-baseline matching across significant viewpoint changes. The system uses a single camera on a mobile robot within an extended Kalman filter framework to build a 3D map of the environment and estimate egomotion. Concurrently, the feature descriptors are generated from the video sequence and can be used to localize the robot when it returns to a previously mapped location. The kidnapped robot problem is addressed by matching descriptors without any prior position estimate, then recovering the epipolar geometry with respect to a known position in the map.
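As a rough illustration of the kidnapped-robot step, the sketch below matches descriptors against a mapped keyframe with no positional prior and then recovers the relative pose from the epipolar geometry. It is a minimal sketch only: ORB features and OpenCV's essential-matrix routines stand in for the paper's multi-view descriptors and calibrated geometry computation, and the function name and parameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of descriptor-based relocalization: match features
# against a mapped keyframe without a position prior, then estimate the
# epipolar geometry to recover a relative pose. ORB is used here as a
# stand-in for the paper's multi-view feature descriptors.
import cv2
import numpy as np

def relocalize(current_frame, mapped_frame, K):
    """Estimate the relative pose (R, and t up to scale) of current_frame
    with respect to a known mapped keyframe, given camera intrinsics K."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(mapped_frame, None)
    kp2, des2 = orb.detectAndCompute(current_frame, None)
    if des1 is None or des2 is None:
        return None

    # Match descriptors with no estimate of position (brute force, cross-checked).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 8:  # too few correspondences for a reliable estimate
        return None

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robustly estimate the essential matrix, then decompose it into R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    if E is None:
        return None
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # pose of the current camera relative to the mapped keyframe
```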