Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

by

Ming Zhang

B.S., Tianjin University, 2007
M.S., Beijing University of Post & Telecommunication, 2010

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the Chester F. Carlson Center for Imaging Science, College of Science, Rochester Institute of Technology

May 1, 2014

Signature of the Author

Accepted by
Coordinator, M.S. Degree Program                              Date

CHESTER F. CARLSON CENTER FOR IMAGING SCIENCE
COLLEGE OF SCIENCE
ROCHESTER INSTITUTE OF TECHNOLOGY
ROCHESTER, NEW YORK, UNITED STATES OF AMERICA

CERTIFICATE OF APPROVAL

M.S. DEGREE THESIS

The M.S. Degree Thesis of Ming Zhang has been examined and approved by the thesis committee as satisfactory for the thesis requirement for the M.S. Degree in Imaging Science.

Dr. John Kerekes, Thesis Advisor
Dr. Carl Salvaggio, Committee Member
Dr. David Messinger, Committee Member
Date

Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

by

Ming Zhang

Submitted to the Chester F. Carlson Center for Imaging Science in partial fulfillment of the requirements for the Master of Science Degree at the Rochester Institute of Technology

Abstract

Because of the low-cost, efficient image collection process and the rich 3D and texture information present in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes holds promise for future commercial applications such as urban planning and first response. The methodology introduced in this thesis provides a feasible path toward fully automated 3D city modeling from oblique and nadir airborne imagery. The difficulty of matching 2D images with large disparity is avoided by first grouping the images and then applying 3D registration. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation, and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transformation matrix contains the parameters describing the required translation, rotation, and scale. The methodology presented in this thesis has been shown to perform well on test data, and its robustness is assessed by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result with a larger offset than that of the test data, owing to the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced here, point clouds extracted from different image groups can be combined to build a more complete point cloud, or used as a complement to existing point clouds extracted from other sources. This research both improves the state of the art of 3D city modeling and may inspire new ideas in related fields.
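To make the coarse-to-fine registration pipeline summarized in the abstract more concrete, the sketch below shows one common way such a pairwise alignment can be expressed in code. It is an illustrative sketch only, not the implementation used in this thesis: it assumes the open-source Open3D library, uses FPFH descriptors as a stand-in for the 3D features described later in the thesis, and the file names and parameter values (cloud_a.ply, cloud_b.ply, the voxel size) are placeholders.

# Illustrative sketch only (not the thesis implementation): coarse-to-fine
# registration of two point clouds using the open-source Open3D library.
# File names and parameter values below are placeholder assumptions.
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample, estimate normals, and compute FPFH descriptors
    # (one possible choice of 3D feature for the coarse alignment).
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

def register(source, target, voxel=1.0):
    src_down, src_fpfh = preprocess(source, voxel)
    tgt_down, tgt_fpfh = preprocess(target, voxel)

    # Initial (coarse) alignment from feature correspondences; with_scaling=True
    # also estimates a scale factor, since the two clouds come from
    # independent coordinate systems.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, 3 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(with_scaling=True), 3)

    # Refinement: iterative closest point, starting from the coarse transform.
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(with_scaling=True))
    # 4x4 matrix encoding the rotation, translation, and scale between the clouds.
    return fine.transformation

if __name__ == "__main__":
    a = o3d.io.read_point_cloud("cloud_a.ply")  # placeholder file names
    b = o3d.io.read_point_cloud("cloud_b.ply")
    print(register(a, b))

The returned 4x4 matrix plays the same role as the final transformation matrix described in the abstract; the thesis itself details its own feature extraction and refinement steps in Chapter 3.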
Acknowledgement

Foremost, I would like to express my sincere gratitude to my advisor, Prof. John Kerekes, for his continuous support of my M.S. study and research, and for his patience, motivation, enthusiasm, and immense knowledge. He helped me throughout the research and the writing of this thesis. Without his persistent guidance this thesis would not have been possible.

Besides my advisor, I would like to thank the rest of my thesis committee, Prof. Carl Salvaggio and Prof. David Messinger, for their insightful comments and hard questions, which encouraged me to make continuous progress throughout the research.

My sincere thanks also go to Nina Raqueno and Michael Richardson. Nina helped me access additional pre-processed Pictometry imagery as well as the metadata. Michael offered me the chance to better understand the research using simulated building models and multi-view images.

I also thank Stephen Schultz and Yandong Wang from Pictometry. They not only provided me with the chance to work on such an exciting project, but also enlightened me with ideas from an industry point of view.

I thank Prof. Harvey Rhody, Erin Ontiveros, David Nilosek, and Kate Salvaggio, who helped me with using and modifying the RIT 3D Extraction Workflow and with understanding epipolar geometry and photogrammetry. I also thank Jie Zhang and Ming Li, the other two students working on Pictometry data. The cooperation with them gave me a better understanding of the other parts of the overall 3D modeling effort; we discussed possible methods and solved the problems we encountered together.

Last but not least, I would like to thank Shaohui Sun and Lei Hu, who helped me through many of the difficulties I encountered in coding.

Contents

Abstract .......................................................................................... I
Acknowledgement ........................................................................... III
Contents .......................................................................................... i
List of Figures ................................................................................. iv
1 Introduction .................................................................................. 1
1.1 Motivation .................................................................................. 1
1.2 Objectives ................................................................................... 3
1.3 Layout of the Thesis ................................................................... 5
2 Background .................................................................................... 6
2.1 3D Modeling ................................................................................ 6
2.1.1 Categories ................................................................................ 6
2.1.2 Structure from Motion ............................................................. 7
2.1.3 Presentation of 3D Models ....................................................... 9
2.1.4 Applications ............................................................................. 10
2.2 3D Building Models .................................................................... 11
2.2.1 Unique Characteristics ............................................................ 11
2.2.2 Major Steps .............................................................................. 12
2.2.3 Current Research and Problems .............................................. 13
2.2.4 Oblique Imagery ...................................................................... 18
3 Methodology .................................................................................. 21
3.1 Data ............................................................................................. 21
3.1.1 Pictometry Imaging System ..................................................... 21
3.1.2 RIT Campus Images ................................................................. 23
3.1.3 Meta Data ................................................................................. 25
3.2 Proposed Method ........................................................................ 25
3.3 Algorithms and Implementation ................................................. 28
3.3.1 2D Feature Matching ................................................................ 28
3.3.2 3D Point Cloud Extraction ....................................................... 36
3.3.3 Cloud Refinement ..................................................................... 41
3.3.4 3D Feature Extraction .............................................................. 46
3.3.5 3D Registration ........................................................................ 55
4 Results and Discussion .................................................................. 61
4.1 Testing Data ................................................................................ 61
4.1.1 No Scale Difference .................................................................. 62
4.1.2 With Scale Difference ............................................................... 65
4.1.3 With Noise ................................................................................ 66
4.2 Pictometry Data .......................................................................... 68
4.2.1 Image Grouping ........................................................................ 68
4.2.2 Modification of the RIT 3D Workflow ..................................... 70
4.2.3 Original Point Clouds Extracted .............................................. 72