Please use this identifier to cite or link to this item: http://localhost:8081/jspui/handle/123456789/19734
Full metadata record
DC Field | Value | Language
dc.contributor.author | Kumar, Rahul | -
dc.date.accessioned | 2026-03-17T10:42:23Z | -
dc.date.available | 2026-03-17T10:42:23Z | -
dc.date.issued | 2022-01 | -
dc.identifier.uri | http://localhost:8081/jspui/handle/123456789/19734 | -
dc.guide | Kaushik, Brajesh Kumar and Raman, Balasubramanian | en_US
dc.description.abstract | Haze particles absorb and scatter light traveling from an object point to the observer plane. This phenomenon not only reduces object visibility but also hampers the color fidelity and contrast of a scene. As a result, it directly impacts the performance of operations such as object detection and recognition, video surveillance, automotive driver assistance, long-range imaging, remote sensing, and endoscopic surgery. Image and video dehazing has therefore emerged as an area of great interest in recent years. Several haze removal studies have been presented previously; however, dedicated hardware architectures and implementations for image and video dehazing are rare. Real-time applications demand a high frame rate, a low memory footprint, and low-power onboard processing. For instance, different components of an advanced driver assistance system (ADAS), such as blind-spot detection, lane departure warning, and collision warning, require high-quality images at a high frame rate. Apart from a high frame rate, high resolution is another demanding feature in the current market, as some system-on-chip (SoC) based driver assistance systems are already capable of processing 3840 × 2160 pixel image frames at 60 fps. To meet these stringent requirements, specialized image and video dehazing hardware is essential. Effectively mapping an algorithm to dedicated hardware for real-time processing is non-trivial. Operations such as exponential evaluation, sorting, floating-point multiplication and division, full-image buffering, and data transfers between the processor and DRAM degrade the performance of edge devices. Previously, various approximation strategies have been employed to reduce the computational cost of prior-based dehazing methods, but these degrade image quality, sometimes to the extent that the restored image becomes unusable.
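For readers unfamiliar with prior-based dehazing, the standard atmospheric scattering model restores scene radiance J from a hazy observation I via I(x) = J(x)t(x) + A(1 - t(x)), where A is the airlight and t the transmission estimated from the dark channel. The following is a minimal illustrative sketch only (function names, the crude airlight estimate, and parameter defaults are assumptions; the thesis's actual pipeline adds Gaussian filtering, window tiling, and a pipelined hardware mapping):

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB channels, followed by a local minimum filter."""
    h, w, _ = img.shape
    dc = img.min(axis=2)                     # channel-wise minimum
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    for i in range(h):                        # naive local min filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    """Invert I(x) = J(x)t(x) + A(1 - t(x)) with DCP-based estimates of A and t."""
    A = img.reshape(-1, 3).max(axis=0)        # crude per-channel airlight estimate
    t = 1.0 - omega * dark_channel(img / A)   # transmission from the dark channel
    t = np.clip(t, t0, 1.0)[..., None]        # floor t to avoid noise amplification
    return (img - A) / t + A                  # recovered scene radiance J
```

The `t0` floor illustrates why nearly zero-transmission regions are hard: as t approaches zero, the division amplifies sensor noise, which motivates the NIR fusion approach described later.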
Taking cognizance of these facts, this research presents methods and architectures that are co-designed for better performance and image quality. In addition, techniques are discussed to address flickering in restored video frames and to fuse multispectral data for dehazing. A method based on dark channel prior (DCP), Gaussian filtering, and window tiling, together with its 10-stage pipelined very large-scale integration (VLSI) architecture, is presented in Chapter 1 for high-speed dehazing; it shows better performance and image quality than existing methods. Prior-based image dehazing methods are limited by their underlying assumptions and may not work well in unknown haze scenarios. Deep learning-based techniques for image dehazing have recently shown promising image restoration capabilities across a wide range of haze conditions. However, their computational cost and memory requirements are even higher than those of prior-based methods, and no hardware architecture of a deep neural network for image dehazing has been elaborated previously. Therefore, this thesis presents a hybrid data-driven approach combining the DCP with a convolutional neural network (CNN) that adapts to the data, improving restored image quality by 11% while requiring hardware resources comparable to prior-based methods. Furthermore, an in-depth bit-quantization analysis is carried out to reduce the parameter storage while achieving an optimal trade-off with image quality. If a single-image dehazing approach is extended to video processing in an intra-frame manner, it produces severe flickering artifacts in the restored video frames. This has forced designers to include a separate module to reduce flickering; however, such modules do not fully exploit the temporal information available in consecutive video frames. Therefore, a unified video dehazing approach is needed that exploits inter-frame information and can be deployed on constrained hardware.
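The bit-quantization analysis mentioned above can be illustrated with a symmetric uniform quantizer, the common starting point for reducing CNN parameter precision on hardware. This is a generic sketch, not the thesis's specific scheme (the function names and the 8-bit default are assumptions):

```python
import numpy as np

def quantize(w, bits=8):
    """Symmetric uniform quantization of a weight tensor to signed `bits`-bit codes."""
    qmax = 2 ** (bits - 1) - 1                # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax            # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from integer codes."""
    return q * scale
```

Sweeping `bits` downward and measuring the restored-image quality at each width is the essence of such an analysis: each bit removed halves parameter storage but at most doubles the worst-case rounding error (scale/2).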
This thesis presents an inter-frame fusion-based video dehazing method and architecture. To reduce hardware resources, distributed arithmetic is utilized to implement a multiplier-less CNN architecture. The method achieves a 55% reduction in flicker compared to the prior approach. Dehazing of nearly zero-transmission regions in an image is highly challenging; conventional visible-spectrum image dehazing methods are not suitable in this situation, as they only amplify noise. The haze effect diminishes as the wavelength of the imaging spectrum increases; thus, near-infrared (NIR) images create an opportunity for visible-spectrum image dehazing using the complementary information they contain. A few two-stage image fusion techniques for image dehazing were reported previously; however, unwanted artifacts were observed in the dehazed images. Therefore, multispectral data fusion has been investigated in this research, and a single-stage multimodal data fusion method is presented for color image dehazing that reduces image haze by up to 57% compared to existing methods. In addition, it is highly suitable for dedicated hardware implementation, and, for the first time, an RGB-NIR fusion VLSI architecture is presented for real-time on-chip processing. | en_US
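The multiplier-less principle behind distributed arithmetic can be sketched in a few lines: instead of multiplying, a lookup table of coefficient partial sums is addressed by one bit-slice of all inputs per cycle, and the results are accumulated with shifts and adds only. This toy model (unsigned inputs, software LUT; the name `da_dot` is an assumption) shows the behavior a hardware DA unit would implement, not the thesis's actual architecture:

```python
def da_dot(coeffs, xs, bits=8):
    """Dot product of integer coefficients and unsigned `bits`-bit inputs
    via distributed arithmetic: no multiplications, only LUT + shift-add."""
    n = len(coeffs)
    # LUT[b] = sum of coeffs[k] over the set bits k of address b (2^n entries)
    lut = [sum(c for k, c in enumerate(coeffs) if (b >> k) & 1)
           for b in range(1 << n)]
    acc = 0
    for i in range(bits):                                   # one bit-plane per "cycle"
        addr = sum(((x >> i) & 1) << k for k, x in enumerate(xs))
        acc += lut[addr] << i                               # shift-and-add only
    return acc
```

The LUT grows as 2^n in the number of taps, which is why DA suits the small fixed-coefficient kernels of a quantized CNN layer: the multipliers that dominate a conventional MAC array are replaced by memory and adders.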
dc.language.iso | en | en_US
dc.publisher | IIT Roorkee | en_US
dc.title | DESIGN AND ANALYSIS OF IMAGE AND VIDEO DEHAZING VLSI ARCHITECTURES | en_US
dc.type | Thesis | en_US
Appears in Collections:DOCTORAL THESES (E & C)

Files in This Item:
File | Description | Size | Format
RAHUL KUMAR 16911010.pdf | | 5.04 MB | Adobe PDF | View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.