Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/1850
Full metadata record
DC Field                 Value                                    Language
dc.contributor.author    Bhutada, G. G.                           -
dc.date.accessioned      2014-09-25T16:05:32Z                     -
dc.date.available        2014-09-25T16:05:32Z                     -
dc.date.issued           2010                                     -
dc.identifier            Ph.D                                     en_US
dc.identifier.uri        http://hdl.handle.net/123456789/1850     -
dc.guide                 Anand, R. S.                             -
dc.guide                 Saxena, S. C.                            -
dc.description.abstract

Vision is the most advanced of the five human senses and plays the most important role in human perception. Although the sensitivity of human vision is limited to the visible band, images produced by imaging modalities such as infrared (IR), ultraviolet (UV), X-ray, magnetic resonance imaging (MRI) and ultrasound imaging (USI) extend human perception into many areas of application. Neuroscientists identified early on that edge-processing neurons play a fundamental role in mammalian visual processing and perception. From this point of view, the edges of any visual scene or image are the prominent feature for human perception; excluding the edges, the remaining picture carries little information. Accordingly, many researchers have reported that simultaneously fulfilling the conflicting objectives of preserving edges and suppressing noise is a difficult task. In ultrasound medical images it becomes even more challenging due to the presence of speckle noise, which is multiplicative in nature. Thus, developing an edge-preserved image denoising approach that simultaneously fulfills the following three principal criteria is of prime importance:

1. The algorithm must be capable of suppressing the maximum noise from uniform areas.
2. True tissue information, including edges and other fine details, should be preserved and, if possible, enhanced.
3. The algorithm must be computationally efficient, stable and robust.

The main objective of the presented research work has been to design and develop effective algorithms that produce improved performance under the above-mentioned criteria of denoising.
To achieve these objectives, it is necessary not only to analyze and identify the better approach among the existing categories of denoising approaches, but also to further improve the performance of the identified approach, either by modifying the existing algorithm or by suggesting a new one. Accordingly, the entire research work has been planned and carried out in the following three steps:

1) Evaluating the latest existing approaches from the two major categories of image denoising, i.e. the spatial-domain category and the transform-domain category.
2) Based on this evaluation, proposing a new, suitable approach to enhance computational performance in terms of robustness, stability and efficiency.
3) Further enhancing the edge preservation and noise suppression of the computationally better approach developed in the second step by proposing a novel Remnant approach for image denoising.

For the present research work, among images from different imaging modalities, ultrasound B-scan images are considered because of their wide use in countries like India. This widespread choice stems from the modality's cost effectiveness, portability, acceptability and safety. However, images obtained from ultrasound imaging are of relatively poor quality, and their analysis is complex due to their data composition. Images obtained from USI systems also contain interference patterns called speckle. From a technical point of view, speckle is considered a multiplicative noise whose undesirable interference effect obscures fine details in the image, such as lesions with faint grey-value transitions and small structures. In addition to this multiplicative noise, US images sometimes also suffer from random additive noise. Because of these noises, diagnosis becomes a time-consuming task and is susceptible to errors depending on the observer's experience and expertise.
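The multiplicative speckle model mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's simulation code; the Gaussian multiplier and the `sigma` value are illustrative assumptions (real speckle statistics are modality-dependent, e.g. Rayleigh-distributed):

```python
import numpy as np

def add_speckle(image, sigma=0.1, rng=None):
    """Simulate multiplicative speckle: I_noisy = I * (1 + n), n ~ N(0, sigma^2).

    Hypothetical sketch: the Gaussian noise model and sigma are
    assumptions for illustration only.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, sigma, image.shape)
    return image * (1.0 + noise)
```

Because the noise multiplies the signal, its strength scales with local intensity, which is why additive-noise filters perform poorly on speckle.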
Apart from this, imprecise detection of boundaries may propagate errors into high-level image-processing tasks such as segmentation and feature extraction. Detecting and enhancing the boundaries between different cavities is therefore of great need in USI and is considered a challenging problem in this area. To overcome it, many researchers have devoted their work to effective noise suppression with preservation of the edges. Research in the area can be broadly divided into two categories, viz., the spatial-domain category and the transform-domain category.

In the spatial-domain category, many techniques have been proposed in the literature to address noise reduction in medical ultrasound images. An initial effort on speckle reduction by adaptive filtering was based on local image statistics. In these techniques, the speckle-reduction filter changes the amount of smoothing according to the ratio of local variance to local mean: smoothing is increased in homogeneous regions where speckle is fully developed, and reduced or even avoided elsewhere to preserve details. The most widely cited and applied filters in this speckle-reduction category are the Lee, Frost and Kuan filters. These spatial adaptive filters share a major limitation: their sensitivity depends on the size and shape of the filter window. If the window is too large compared to the scale of interest, over-smoothing occurs and edges are blurred; a small window decreases the smoothing capability of the filter and leaves the speckle intact. To overcome these difficulties, researchers proposed the concept of anisotropic diffusion, based on a partial differential equation (PDE). This concept is widely used for image despeckling with edge preservation and is popularly known as Speckle Reduction using Anisotropic Diffusion (SRAD).
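The local-statistics adaptive filtering described above can be sketched as a Lee-type filter. This is a simplified illustration, not the thesis's implementation; the gain formula `k = var/(var + noise_var)` and the user-supplied `noise_var` estimate are assumptions of this sketch:

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.01):
    """Local-statistics adaptive (Lee-type) filter: out = mean + k*(img - mean).

    Gain k = var/(var + noise_var) is ~0 in flat regions (strong smoothing)
    and ~1 near edges (detail preserved). 'noise_var' is an assumed,
    user-supplied noise-variance estimate.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    # stack every window offset to get per-pixel local mean and variance
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(win) for j in range(win)])
    mean = stack.mean(axis=0)
    var = stack.var(axis=0)
    k = var / (var + noise_var)
    return mean + k * (img - mean)
```

The window-size trade-off discussed above is visible here directly: a larger `win` strengthens `mean` smoothing but blurs structures smaller than the window.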
These methods rely on the instantaneous coefficient of variation (ICOV) and control the diffusion rate using an edge detector as a controller near the edges of regional structures. The PDE-based approaches not only preserve edges but also enhance them by inhibiting diffusion across the edges, while still allowing diffusion on either side of them. To control the inverse-diffusion effect surrounding the edges and to remove speckle more effectively in the SRAD approach, some researchers modified the diffusion coefficients, defining the modified SRAD (MSRAD).

In the transform-domain category, most research over the last two decades has been conducted in the wavelet transform (WT) domain. The primary properties of wavelet coefficients, sparsity and decomposition, make effective and simple implementation of thresholding ideas easy. Initially, a simple "wavelet shrinkage" methodology was proposed for classifying the wavelet coefficients of real-world noisy data, and it was later modified to increase the signal-to-noise ratio. Subsequent literature focused on developing ever-better thresholding functions in this domain. One comparative study of denoising by the various thresholding functions proposed for the wavelet-transform domain reported that Sure-Shrink and Bayes-Shrink give better results. A few researchers applied statistical approaches such as the Bayesian approach to image denoising, extended further by considering various noise models for the distribution of noisy wavelet coefficients, such as hidden Markov models and Gaussian, Rayleigh, Fisher-Tippett and Maxwell distributions. The dependency of these methods on a specific noise model decreases their flexibility.
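The classical shrinkage rules referred to above, soft and hard thresholding of wavelet coefficients, have standard closed forms and can be sketched as:

```python
import numpy as np

def soft_threshold(w, t):
    """Soft thresholding: shrink every coefficient toward zero by t;
    coefficients with |w| <= t are set to zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    """Hard thresholding: keep coefficients with |w| > t unchanged;
    zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)
```

Soft thresholding yields smoother reconstructions but biases large coefficients; hard thresholding is unbiased above the threshold but can leave ringing artifacts — which is why adaptive thresholding functions, discussed next, try to interpolate between the two.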
One of the most recent variants of the adaptive thresholding function outperforms thresholding methodologies such as soft, hard and garrote, and suppresses the noise regardless of the model of the distribution of the wavelet coefficients. This adaptive thresholding function must be optimized for better performance. For this purpose, it is observed from the literature that the Wavelet Transform based Thresholding Neural Network (WT-TNN) methodology has been used. In the present work, the results of WT-TNN have been thoroughly analyzed in the initial phase of the research, and its edge-preserved noise-suppression performance has been compared with diffusion-based approaches to assess the effectiveness of the two distinct categories. From the comparative performance of the approaches (i.e. SRAD and MSRAD from the spatial-domain category and WT-TNN from the transform-domain category), it is observed that WT-TNN yields better edge-preserved denoising, although computational robustness, stability and denoising speed are the major concerns and limitations of this approach. In addition, initialization of the parameters of the thresholding function is crucial for fast convergence.

Keeping these facts in view, further efforts have been made here to improve the computational performance of the WT-TNN approach with respect to robustness, stability and efficiency. This has been achieved by proposing the following two approaches. In the WT-TNN approach, the parameters of the thresholding function are optimized by a steepest-gradient-based least-mean-square algorithm in which the learning rates are kept constant; these constant learning rates have a profound effect on computational time. A modification of the methodology is therefore proposed that implements adaptive learning rates (ALR) in place of constant learning rates.
In this proposed approach, adaptive learning rates are realized by adaptively varying the learning step size (LSS). For learning the thresholding parameters, high values of LSS are used initially and later reduced adaptively in a logarithmic fashion whenever the response begins to deviate from the optimum value of the learning parameter. For assessing the optimum condition, at each stage of the learnt value the incremental values of the thresholding parameters are evaluated by the steepest-descent-gradient-based equations. Based on these incremental values, the learnt value is considered optimum if one of the following conditions is fulfilled: (i) the sign of the incremental values changes, or (ii) the ratio of the incremental value to the value of the parameter itself becomes less than the learning rate. With this LSS concept, the threshold value is learnt first and the same concept is then applied to learning the other thresholding parameters (here, k and m). In this way, a high rate of convergence is achieved from the start even when the initialized values of the thresholding parameters are not very close to the optimum.

The performance of WT-TNN with the proposed adaptive learning rate is computationally better for both types of noise, viz., Gaussian and speckle. For speckle-noise suppression in unsupervised mode, the results show a significant reduction in denoising time: the proposed approach requires 8-10 times less denoising time than the WT-TNN approach. Edge preservation has also improved with the proposed use of the Bior6.8 wavelet filter in place of the db8 wavelet filter. Although the initially developed ALR approach enhances computational performance, the algorithm remains sequentially iterative in nature.
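The two conditions above can be illustrated with a hypothetical one-dimensional descent. This is not the thesis's multi-parameter WT-TNN update; the decade factor `0.1` and the use of a fixed tolerance `tol` in condition (ii) (in place of the learning rate itself) are simplifications of this sketch:

```python
import numpy as np

def descend_adaptive(grad, x0, lss=1.0, tol=1e-4, max_iter=500):
    """1-D steepest descent with an adaptively shrinking learning step
    size (LSS). Condition (i): a sign change of the increment (overshoot)
    shrinks the LSS by a decade. Condition (ii): stop once the increment
    is negligible relative to the parameter itself (simplified here to a
    fixed tolerance). Hypothetical illustration only."""
    x, prev_inc = x0, None
    for _ in range(max_iter):
        inc = -lss * grad(x)
        if prev_inc is not None and inc * prev_inc < 0:
            lss *= 0.1                  # condition (i): overshoot -> shrink LSS
            inc = -lss * grad(x)        # recompute increment with reduced LSS
        if x != 0 and abs(inc / x) < tol:
            break                       # condition (ii): relative increment tiny
        x += inc
        prev_inc = inc
    return x
```

Starting with a large LSS gives fast initial progress even from a poor initialization, and the logarithmic reduction prevents oscillation around the optimum — the behaviour claimed for the ALR approach above.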
Because of this sequential nature, the dependency of denoising time on the noise level of the image cannot be overcome effectively. Therefore, a PSO-based random iterative approach has been proposed here. Being a random-search iterative approach, it does not require initialization of the thresholding parameters, which is crucial in WT-TNN for fast convergence. The approach has been further modified by considering only the real part of the thresholded coefficients in the denoising process, which further reduces the computational time. In this approach the algorithm is executed for a fixed number of iterations, resulting in more robust and stable computational performance at all noise levels in the images. In short, the two proposed approaches overcome all the limitations of the WT-TNN approach, computational robustness, stability and speed, thereby yielding a more generalized, computationally efficient wavelet-based approach.

Although the computational performance and other factors have been significantly improved by the two approaches above, there remains scope to improve the edge-preserved qualitative denoising performance of the wavelet-transform-based approach. It has been observed that wavelet-transform-based approaches yield better denoising in homogeneous regions, whereas they perform poorly in edgy regions owing to the poor directional sparsity of the wavelet coefficients along curves. These limitations are addressed by newer transforms such as the curvelet transform, which is known for the anisotropic features required in edge-preserved denoising applications. However, applying the discrete curvelet transform in a thresholding approach to image denoising tends to add extra (fuzzy) edges in the homogeneous regions of denoised images. The region-specific denoising capabilities of the wavelet and curvelet transforms imply great potential in exploiting each transform's features in the region where it performs best.
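A minimal global-best particle swarm optimization (PSO) sketch conveys why the approach runs for a fixed number of iterations regardless of noise level. This is not the thesis's optimizer: the inertia/cognitive/social weights, the search range, and the sphere-like objective are illustrative assumptions; in the thesis the objective would be the denoising cost as a function of the thresholding parameters:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=60, seed=0):
    """Minimal global-best PSO sketch. Weights and the [-5, 5] search
    range are illustrative assumptions, not the thesis's settings."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()         # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):                           # fixed iteration budget
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

The random initialization of the swarm replaces the careful parameter initialization WT-TNN needs, and the fixed iteration count decouples runtime from the image's noise level.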
This fact is explored in the proposed Remnant approach, which applies the wavelet and curvelet transforms separately and adaptively fuses the denoised images obtained from them. In this approach, denoised images are obtained by both transform-based approaches: the wavelet-transform-based WT-TNN approach and a curvelet-transform-based soft-thresholding approach. The different regions of the images, homogeneous (smooth) regions, non-homogeneous (edgy) regions, and regions that are neither homogeneous nor non-homogeneous, are identified by a variance approach. The edgy information that could not be retained by wavelet-based denoising is extracted back from its filtered remnant image denoised by the curvelet transform. This extracted image is used as edge structure information/image (ESI) for adaptively fusing the above-mentioned regions of the images denoised by the wavelet and curvelet transforms. The output of the proposed spatially adaptive fusion approach preserves the edgy information while removing the fuzzy edges introduced during curvelet-transform denoising. The edge keeping index (EKI) shows 3-10% more preservation of edges depending upon the image and its noise level.

In summary, the contributions of the research work are the following:

1) From the comparative evaluation of the SRAD and MSRAD approaches from the spatial domain and the WT-TNN approach from the transform domain, it has been identified that although WT-TNN yields better edge-preserved image denoising (EPID) performance, its major concerns and limitations are computational robustness, stability and denoising speed.
2) These identified limitations of the WT-TNN approach have been overcome by the two proposed alternative approaches, viz., the ALR-based and PSO-based approaches, resulting in improved computational performance. Edge preservation has also been improved by the proposed use of the Bior6.8 wavelet filter in the ALR- and PSO-based approaches.
3) In the final stage, edge-preserved denoising performance has been further improved by the proposed Remnant approach.
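The variance-based region classification and spatially adaptive fusion at the heart of the Remnant approach can be sketched as follows. This is a simplified stand-in, not the thesis's algorithm: the two variance thresholds `lo`/`hi`, the linear blend, and the choice to measure variance on the wavelet-denoised image are all hypothetical assumptions of this sketch:

```python
import numpy as np

def local_variance(img, win=5):
    """Per-pixel variance over a win x win neighbourhood (reflect-padded)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(win) for j in range(win)])
    return stack.var(axis=0)

def fuse_by_variance(wavelet_img, curvelet_img, lo=1e-4, hi=1e-2, win=5):
    """Spatially adaptive fusion sketch: local variance classifies each
    pixel as smooth (take the wavelet result), edgy (take the curvelet
    result), or in between (blend linearly). Thresholds lo/hi are
    hypothetical illustration values, not the thesis's."""
    var = local_variance(wavelet_img, win)
    alpha = np.clip((var - lo) / (hi - lo), 0.0, 1.0)  # 0 = smooth, 1 = edgy
    return (1.0 - alpha) * wavelet_img + alpha * curvelet_img
```

Low-variance (smooth) pixels inherit the wavelet result, which avoids the curvelet transform's fuzzy edges; high-variance (edgy) pixels inherit the curvelet result, recovering the edge structure the wavelet stage lost.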
dc.language.iso          en                                       en_US
dc.subject               ELECTRICAL ENGINEERING                   en_US
dc.subject               COMPUTATIONALLY EFFICIENT ALGORITHMS     en_US
dc.subject               EDGE PRESERVED DENOISING                 en_US
dc.subject               ULTRASOUND MEDICAL IMAGES                en_US
dc.title                 COMPUTATIONALLY EFFICIENT ALGORITHMS FOR EDGE PRESERVED DENOISING OF ULTRASOUND MEDICAL IMAGES    en_US
dc.type                  Doctoral Thesis                          en_US
dc.accession.number      G21373                                   en_US
Appears in Collections:DOCTORAL THESES (Electrical Engg)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.