Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/20438

| Title: | IMPLEMENTATION OF PHOTOGRAMMETRIC POINT CLOUD FOR 3D GIS MODELLING |
| Authors: | Harshit |
| Issue Date: | Apr-2024 |
| Publisher: | IIT Roorkee |
| Abstract: | Our existence is confined to a world of three dimensions. Generating digital realities requires creating visual representations that change with the viewer's perspective, a very labor-intensive process. To enhance the quality of life, mitigate the effects of hazards or disasters, and simulate preventive measures against those events, it is imperative to develop a digital representation of specific areas of the Earth and its surroundings. Creating a model that is an exact replica of a complex physical reality has not yet been fully achieved, although recent advancements in photogrammetry and computer vision have opened many new avenues in 3D scene reconstruction and modelling. In photogrammetry, a three-dimensional (3D) digital model of an object or scene is created through the analysis and interpretation of photographic or imaging data. The methodology extracts geometric data and spatial correlations from two-dimensional photographs in order to produce a comprehensive three-dimensional depiction of the original subject. Various cameras or imaging devices are used to acquire multiple 2D photographs of the object or scene from different perspectives; acquisition techniques include aerial photography, terrestrial photography, and close-range imaging. Key features are then identified and matched across the photographs. These features include distinctive points, corners, or other recognizable parts that can be tracked across multiple photographs. From these matched points a comprehensive representation of the observed object can be generated; one such representation is the point cloud, which is created by triangulating the matched features from numerous photos.
These point clouds depict the spatial arrangement of points in three-dimensional space, constituting the fundamental framework of the object or scene. The representation is a rudimentary one, primarily delineating the geometric characteristics of the entity by sampling it at specific locations. Although points obtained via photogrammetry exhibit a certain degree of regularity, the points within a point cloud are generally not assumed to possess any specific structure. The efficient handling of such unorganized point clouds is a fundamental concern in all applications related to 3D modelling. The first part of the research conducted in this thesis deals with the creation of point clouds from images acquired by an unmanned aerial vehicle (UAV). A thorough assessment of Open-Source Software (OSS) for photogrammetric applications has been carried out, and the performance and usability of these tools were explored in various modelling scenarios through intensive experimentation. This part of the study addresses the need for a comprehensive grasp of the available resources. The primary objective was to advance the field of UAV photogrammetry and computer vision through the examination and assessment of four OSS algorithms, and to assess, understand, and optimize the developments made in recent years in open-source algorithms and photogrammetric point cloud generation techniques. The second objective was to enhance algorithms for unstructured point clouds by proposing locally designed geometric features that extract statistically significant information. These improvements were intended to increase the robustness, precision, completeness, and scalability of photogrammetric point cloud data. Additionally, the study investigated deep-learning-based semantic segmentation for extensive datasets.
The point cloud is frequently employed to produce a mesh, which serves as a three-dimensional depiction of the object. Texture mapping then projects the original images onto the created mesh, providing authentic color and texture data. The outcome is a three-dimensional digital representation that faithfully depicts the form and visual characteristics of the original object or scene. Developing such a modelling workflow enables the generation of accurate three-dimensional models based on pre-existing real objects or locations, thus facilitating study, documentation, and virtual exploration. A 3D Geographical Information System (GIS) model incorporates both the spatial and non-spatial elements of reality, providing a basis for the operation and communication of information among participants. Spatial aspects refer to attributes of shape, size, and position, mainly related to geometric properties. Non-spatial characteristics refer to semantic components, mostly linked to qualities such as name, color, and function. A 3D GIS model must be able to build a relationship between these two representations. The final objective introduced a comprehensive framework for modelling 3D information from extensive image sets. By combining photogrammetrically derived point clouds with Apple LiDAR data, this methodology generated a comprehensive and precise three-dimensional representation of the entire scene, marking significant progress in the field of comprehensive scene modelling. The process implemented a robust transformation strategy from IFC to CityGML using an Extract, Transform, Load (ETL) approach. This approach facilitated the creation of 3D GIS by incorporating OpenBIM components obtained from photogrammetric point cloud datasets.
The thesis introduces a novel approach for creating a three-dimensional GIS model from point clouds generated from a set of UAV images through photogrammetric computer vision. The resulting model enables the execution of both spatial and non-spatial queries and computations, and the creation of 2D and 3D visualizations. The modelling approach established in this study demonstrates efficacy in the management of individual structures; improvements in scalability and automation are necessary to build such models directly from raw data. As the industry progresses, there is growing interest in new tools and approaches that will enhance the integration of multi-sensor data for 3D GIS modelling. This advancement holds the potential for more effective and sustainable methods in the design and management of buildings and infrastructure, such as Digital Twins. |
| URI: | http://localhost:8081/jspui/handle/123456789/20438 |
| Research Supervisor/Guide: | Jain, Kamal and Zlatanova, Sisi |
| metadata.dc.type: | Thesis |
| Appears in Collections: | DOCTORAL THESES (Civil Engg) |
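The triangulation step described in the abstract, where matched feature points from two views are intersected to recover a 3D point, can be sketched with the linear DLT method in plain NumPy. This is a minimal illustration with hypothetical camera parameters, not the thesis's actual pipeline.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two views via the linear DLT method.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the matched feature in each image.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical pinhole cameras sharing intrinsics K; the second is
# shifted 1 unit along x to form a simple stereo baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point into pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Simulate a matched feature by projecting a known scene point, then
# recover it by triangulation (exact in the noise-free case).
X_true = np.array([0.3, -0.2, 5.0])
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

Real pipelines apply this per matched feature across many image pairs, with the projection matrices estimated by structure-from-motion rather than given.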
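The "locally designed geometric features" mentioned as the second objective are commonly derived from the eigenvalues of a neighborhood's covariance matrix (linearity, planarity, sphericity). The sketch below shows that standard construction; the exact features proposed in the thesis may differ.

```python
import numpy as np

def geometric_features(neighborhood):
    """Eigenvalue-based shape features of a local point neighborhood (Nx3)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # Eigenvalues sorted descending: l1 >= l2 >= l3 >= 0.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity": (l1 - l2) / l1,    # high for line-like neighborhoods
        "planarity": (l2 - l3) / l1,    # high for plane-like neighborhoods
        "sphericity": l3 / l1,          # high for volumetric scatter
    }

# A perfectly flat grid of points (e.g. sampled from a building facade)
# should score near 1.0 on planarity and near 0 on the other features.
xs, ys = np.meshgrid(np.linspace(-1, 1, 15), np.linspace(-1, 1, 15))
plane = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
feats = geometric_features(plane)
print(feats["planarity"] > 0.99)  # True
```

In practice such features are computed per point over k-nearest-neighbor or fixed-radius neighborhoods and fed to a classifier or segmentation network.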
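The abstract's requirement that a 3D GIS model link spatial properties (shape, size, position) with non-spatial semantics (name, function) and answer combined queries can be illustrated with a toy model. All feature names, attribute keys, and coordinates below are hypothetical.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Feature3D:
    """One 3D GIS feature: geometry (spatial) plus attributes (non-spatial)."""
    name: str
    centroid: tuple                      # (x, y, z) in a local metric CRS
    attributes: dict = field(default_factory=dict)

# A toy city model; attribute keys are illustrative only.
model = [
    Feature3D("Building A", (0.0, 0.0, 12.0), {"function": "residential", "storeys": 4}),
    Feature3D("Building B", (50.0, 10.0, 30.0), {"function": "office", "storeys": 10}),
    Feature3D("Building C", (20.0, 5.0, 9.0), {"function": "residential", "storeys": 3}),
]

def query(features, center, radius, **attrs):
    """Combined query: features within `radius` of `center` whose
    non-spatial attributes match every keyword filter."""
    return [
        f for f in features
        if math.dist(f.centroid, center) <= radius
        and all(f.attributes.get(k) == v for k, v in attrs.items())
    ]

# Spatial predicate (within 25 m of a point) combined with a semantic one.
hits = query(model, center=(0.0, 0.0, 10.0), radius=25.0, function="residential")
print([f.name for f in hits])  # ['Building A', 'Building C']
```

A production system would store such features in a spatially indexed database (e.g. CityGML in a 3D-enabled GIS) rather than a Python list, but the spatial/semantic coupling is the same.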
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 18910013_HARSHIT.pdf |  | 10.41 MB | Adobe PDF | View/Open |
