High-resolution remotely sensed image data continues to become more accessible. One consequence is that novel geographic information systems are playing an increasingly important role not only in academia but also in daily business and life. Nevertheless, automating the understanding of the exponentially growing repositories of geographic image data remains largely an unsolved problem. In this dissertation, we tackle fundamental problems in understanding remotely sensed image data: image retrieval, classification, and object recognition.

For high-resolution overhead images, we adapt and extend techniques that are well established for generic computer vision tasks. We investigate the application of low-level local descriptors to remotely sensed image analysis. In particular, we evaluate how local invariant descriptors perform compared to proven global texture and color features for similarity retrieval. We further investigate how different similarity measures and the number of interest points used to represent an image influence retrieval performance. In addition, we extend our work to image classification using bag-of-visual-words models (a brief sketch of this pipeline follows the abstract).

Moreover, we explore the potential for increased synergy between two complementary data sources: gazetteers and overhead imagery. We investigate ways in which these two data sources can be integrated to more fully automate geographic data management. In particular, we propose a hierarchical model to estimate the spatial extents of geospatial objects archived in gazetteers, so that their spatial representations can be extended from a single latitude/longitude pair to a bounding box.
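To make the bag-of-visual-words retrieval pipeline mentioned above concrete, the following Python sketch quantizes local descriptors into a visual vocabulary and ranks archive images by similarity to a query. It is an illustrative sketch only: the use of OpenCV's SIFT, scikit-learn's KMeans, a vocabulary of 200 words, and cosine similarity are assumptions made for this example, not the specific descriptors, clustering method, or similarity measures evaluated in the dissertation.

    # Minimal bag-of-visual-words retrieval sketch (illustrative assumptions:
    # SIFT descriptors, k-means vocabulary, cosine similarity on histograms).
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def extract_descriptors(image_paths):
        """Detect interest points and compute SIFT descriptors for each image."""
        sift = cv2.SIFT_create()
        per_image = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(img, None)
            per_image.append(desc if desc is not None
                             else np.empty((0, 128), np.float32))
        return per_image

    def build_vocabulary(per_image_descriptors, vocab_size=200):
        """Cluster all descriptors from the archive into a visual vocabulary."""
        all_desc = np.vstack([d for d in per_image_descriptors if len(d) > 0])
        return KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(all_desc)

    def to_histogram(descriptors, vocabulary):
        """Quantize an image's descriptors and L2-normalize the word histogram."""
        hist = np.zeros(vocabulary.n_clusters)
        if len(descriptors) > 0:
            for word in vocabulary.predict(descriptors):
                hist[word] += 1
        norm = np.linalg.norm(hist)
        return hist / norm if norm > 0 else hist

    def retrieve(query_hist, archive_hists, top_k=5):
        """Rank archive images by cosine similarity to the query histogram."""
        sims = np.array([float(np.dot(query_hist, h)) for h in archive_hists])
        return np.argsort(-sims)[:top_k]

The same word histograms can also serve as feature vectors for a supervised classifier, which is how a bag-of-visual-words representation is typically reused for image classification.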