We present a method for inferring the location of a robot relative to a three-dimensional map of its environment. The map, created off-line, consists of image patches, their locations in space, and their associated normal vectors. Observational data consist of one or more images taken from the robot's current viewpoint (a position and orientation in space). We develop a framework for matching images of a scene (observations) to a map and show how this framework can be applied to the task of robot localization. Localization is posed as an optimization problem in which the observed data and the map are aligned to produce an estimate of the robot's current pose. We formalize our model and demonstrate experimental results in unstructured environments.
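To make the alignment-as-optimization idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical matching stage has already paired observed 3D patch centers (in the robot frame) with map patch locations and normals (in the world frame), and it recovers a 6-DoF pose by minimizing a point-to-plane error with `scipy.optimize.least_squares`. All variable names and the synthetic data are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical matched data: map patch centers and unit normals in the world frame.
rng = np.random.default_rng(0)
map_pts = rng.uniform(-5.0, 5.0, (50, 3))
map_normals = rng.normal(size=(50, 3))
map_normals /= np.linalg.norm(map_normals, axis=1, keepdims=True)

# Simulate what the robot would observe from an unknown ground-truth pose.
true_R = Rotation.from_euler("xyz", [0.1, -0.05, 0.2])
true_t = np.array([1.0, -0.5, 0.3])
obs_pts = true_R.inv().apply(map_pts - true_t)  # map points in the robot frame

def residuals(pose):
    """Point-to-plane error for a pose encoded as (rotation vector, translation)."""
    R = Rotation.from_rotvec(pose[:3])
    t = pose[3:]
    world_pts = R.apply(obs_pts) + t              # observed points mapped into the world
    # Signed distance of each transformed point from its matched patch plane.
    return np.einsum("ij,ij->i", map_normals, world_pts - map_pts)

result = least_squares(residuals, x0=np.zeros(6))
print("estimated translation:", result.x[3:])     # converges near true_t
```

The point-to-plane form is one plausible choice here because the map stores normals with each patch; a photometric or reprojection objective over the image patches themselves would follow the same pattern with a different residual function.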