Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/19987

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Prabhakar, Dolonchapa | - |
| dc.date.accessioned | 2026-03-26T13:03:40Z | - |
| dc.date.available | 2026-03-26T13:03:40Z | - |
| dc.date.issued | 2024-10 | - |
| dc.identifier.uri | http://localhost:8081/jspui/handle/123456789/19987 | - |
| dc.guide | Garg, Pradeep Kumar | en_US |
| dc.description.abstract | This research focuses on the automated extraction of building footprints from very high-resolution (VHR) satellite/aerial images, utilizing state-of-the-art deep learning techniques. Extracting building footprints is a crucial task for various applications, including urban planning, disaster management, and environmental monitoring, particularly in highly urbanized environments where accurate spatial data is critical for decision-making. The primary objective of this study is to develop a robust and efficient framework for building detection and height estimation by leveraging convolutional neural networks (CNNs), with a specific focus on the U-Net and SegNet architectures. These models have proven successful in handling the complexities of high-resolution imagery and are well-suited for generalizing across diverse urban environments. The research was conducted using two prominent datasets: the ISPRS Toronto dataset and the Massachusetts Buildings dataset. Both datasets provide high-resolution imagery, which is essential for detecting fine details in building structures. Additionally, the ISPRS dataset includes Airborne Laser Scanning (ALS) data, offering valuable 3D information such as building heights and other structural details. This combination of high-resolution 2D imagery and 3D auxiliary data enhances the accuracy and comprehensiveness of building extraction, allowing for more precise height estimation. The methodology developed in this study follows a multi-step process, starting with the pre-processing of satellite/aerial imagery. Pre-processing tasks included georeferencing to align the images with real-world coordinates, mosaicking to merge multiple images into a single continuous dataset, digitizing the buildings of the study area, and image enhancement through histogram equalization to improve contrast and visibility. Color space transformations were then applied, converting the images from RGB to the CIELAB and CIELCh color spaces. These transformations allowed for better isolation of the lightness component, which is particularly useful in detecting and removing shadows. Shadows can obscure building features and reduce the accuracy of building detection; their removal is therefore crucial. The research applied Otsu's thresholding method to accurately identify shadowed areas and improve the clarity of building features. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IIT Roorkee | en_US |
| dc.title | Automated Building Extraction from Very High-Resolution Satellite Images Using Deep-Learning | en_US |
| dc.type | Thesis | en_US |
| Appears in Collections: | DOCTORAL THESES (Civil Engg) | |
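The shadow-removal step described in the abstract (Otsu's thresholding applied to a lightness channel) can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the synthetic lightness values, the `otsu_threshold` function, and the two-mode test data are assumptions for demonstration only.

```python
import random

def otsu_threshold(pixels):
    """Otsu's method: choose the 0-255 cut point that maximizes the
    between-class variance of the intensity histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = 0       # pixel count of the dark class (<= t)
    sum0 = 0     # intensity sum of the dark class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic lightness channel: a dark (shadowed) mode and a bright
# (sunlit) mode, mimicking the bimodal histogram Otsu's method expects.
random.seed(0)
shadow = [max(0, min(255, int(random.gauss(60, 10)))) for _ in range(5000)]
sunlit = [max(0, min(255, int(random.gauss(190, 15)))) for _ in range(5000)]
values = shadow + sunlit

t = otsu_threshold(values)
shadow_mask = [v < t for v in values]  # True where the pixel is in shadow
```

In practice the lightness values would come from the L* channel of a CIELAB-converted image rather than synthetic data; the threshold then separates shadowed pixels from sunlit ones so the shadow mask can be excluded before building detection.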
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 19910005_DOLONCHAPA PRABHAKAR.pdf | | 23.16 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
