Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/15782
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Singh, Sneha
dc.date.accessioned: 2024-09-20T12:45:39Z
dc.date.available: 2024-09-20T12:45:39Z
dc.date.issued: 2019-08
dc.identifier.uri: http://localhost:8081/xmlui/handle/123456789/15782
dc.guide: Anand, R.S.
dc.description.abstract [en_US]:

Medical imaging plays a vital role in modern medicine. With the advent of modern technology, there has been tremendous improvement in the capabilities of several medical imaging modalities, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) and ultrasound (US), and of functional imaging modalities such as single photon emission computed tomography (SPECT) and positron emission tomography (PET), which are extensively prescribed by clinicians and radiologists for diagnostic purposes. Diagnostic procedures based on the perception of medical images are usually performed subjectively, resting on how the clinicians understand and interpret the images. However, because clinicians and radiologists draw on several sources of medical images, a serious problem of information overload arises. Moreover, no single medical imaging modality can provide comprehensive and accurate information, especially in critical conditions such as brain hemorrhage, tumor, cancer, other nervous system disorders and accidental injuries. For example, anatomical imaging (CT/MR) provides morphological information about the human body but does not reflect functional status, whereas functional imaging (SPECT/PET) provides physiological information but does not reveal anatomical detail. Correlating one modality with another to extract significant diagnostic information therefore requires many years of experience, and the process is rigorous, costly, time consuming and prone to human error. In addition, the advanced imaging examinations prescribed by doctors, often multiple times, are very expensive, which places an extra financial and mental burden on the patient. There is therefore a need to develop effective multimodal medical image fusion (MIF) approaches that merge the features taken from each individual modality into a single composite fused image that carries significant clinical information and is suitable for effective diagnostic analysis. From this perspective, a medical image fusion algorithm should fulfil three principal criteria:

1. It must preserve the maximum diagnostic information from the input images, with perceptible visual quality and without introducing any spatial or spectral distortion.
2. The true tissue information, whether anatomical or functional, including edges and other diagnostic details, should be reflected properly.
3. It must be computationally efficient, stable and robust.

With this background, the main objective of the present research work is to design and develop effective fusion approaches that integrate all the complementary and contrasting information from different image datasets of the same organs and tissues, so that the fused image is more useful and acceptable to the human visual system and to machine perception. Accordingly, the entire research work has been planned and carried out under the following two major objectives:

1. A comparative evaluation of several existing fusion approaches has been carried out, and new efficient CT and MR image fusion approaches have been designed, developed and implemented to improve fusion performance by preserving the clinically relevant information present in the source CT and MR images with a higher contrast level and without introducing any artifacts.
2. Based on post-analysis of the existing and developed fusion approaches, new anatomical and functional (MR, SPECT, PET) image fusion approaches have been designed to reflect the anatomical details produced by the MR/CT images without disturbing the functional status of the tissue reflected in the SPECT/PET images.
To achieve the first objective in the initial phase of the work, three different fusion techniques are proposed, which also fulfil several sub-objectives. Based on the prominent features and advantages of the multiscale transformation techniques reported in the literature, the first proposed fusion approach is based on the nonsubsampled shearlet transform (NSST). In the proposed NSST-domain medical image fusion (NSST-MIF), anatomical (CT-MR) image fusion is performed in the NSST domain using a modified pulse coupled neural network (PCNN). The approach incorporates a regional energy (RE) based activity-level measure to fuse the low-frequency (LF) NSST coefficients and a novel sum-modified Laplacian (NSML) motivated PCNN to fuse the high-frequency (HF) NSST components, which together help to carry more of the informative content present in the source CT and MR images (a minimal sketch of these two activity measures is given below). The performance of the proposed approach is compared with eight different fusion approaches in which WT, NSCT and NSST are combined with different fusion rules, such as averaging for LF coefficient fusion and maximum or spatial-frequency-motivated PCNN based HF subband fusion. Their performance is analyzed and evaluated not only in terms of visual perception but also in terms of different performance measures, namely entropy (En), standard deviation (STD), mutual information (MI), spatial frequency (SF), image quality index (IQI) and the Xydeas edge index (XEI). The experimental results show that the proposed NSST-MIF approach fuses the CT and MR images without distorting the information and with a significant improvement in the detectability of the source images.

From the findings reported in the literature, it is also observed that the curvelet transform produces good results; however, it uses a parabolic scaling law to resolve two-dimensional singularities along C^2 curves. To represent diagnostic edge detail more efficiently, the second proposed fusion approach uses the discrete ripplet transform (DRT) with two additional parameters, which provides a new tight frame with a sparse representation for source images with discontinuities along C^d curves, where d = 2 corresponds to parabolic scaling, the same as curvelets, d = 3 to cubic scaling, and so forth. In the second proposed image fusion approach, named DRT-MIF, DRT is first applied to decompose each source image into one LF and several HF ripplet components, which are fused by a PCNN model motivated by NSML and a novel sum-modified spatial frequency (NMSF); this model captures the fine details present in the reference images and also helps to preserve redundant information. The PCNN model is applied to the LF and HF DRT coefficients based on firing times, with improved feeding inputs: NMSF for the HF components and NSML for the LF ripplet subband. These fusion rules capture the relevant differences and yield resultant images with high contrast and clarity.
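As a rough illustration of the two activity measures named above, the following Python sketch computes an RE map and an SML-style map of the kind used for the LF fusion rule and the PCNN feeding input. This is not the thesis code: the function names, the 3x3 box window and the choose-max LF rule are illustrative assumptions, and the exact NSML definition used in the thesis may differ.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def regional_energy(band, win=3):
        # RE activity measure: windowed sum of squared subband coefficients.
        return uniform_filter(band ** 2, size=win) * (win * win)

    def sum_modified_laplacian(band, win=3):
        # SML-style focus measure: absolute second differences along x and y,
        # accumulated over a local window (borders wrap via np.roll; fine for a sketch).
        ml = (np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0))
              + np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1)))
        return uniform_filter(ml, size=win) * (win * win)

    def fuse_lf_by_re(lf_a, lf_b, win=3):
        # Choose-max rule on regional energy for the LF subband.
        mask = regional_energy(lf_a, win) >= regional_energy(lf_b, win)
        return np.where(mask, lf_a, lf_b)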
The fused images are finally reconstructed by applying the inverse DRT. The results of the proposed DRT-MIF method are compared with wavelet transform (WT), nonsubsampled contourlet transform (NSCT) and NSST based PCNN fusion approaches, including the proposed NSST-MIF. The comparative results show that the proposed DRT-MIF approach provides fused images of better quality by preserving the edges and the important morphological information, and it yields higher values of En, MI, SF and XEI than NSST-MIF and the other existing fusion methods.

Analysis of the experimental results of the two fusion methods proposed so far shows that both the NSST and the DRT decomposition provide good fusion results: DRT captures higher-order singularities, whereas NSST overcomes the shift-variance problem and helps to avoid losing important information. Motivated by this, a cascaded medical image fusion (C-MIF) framework has been proposed for CT and MR images in the DRT and NSST domains. In the first-stage (DRT) decomposition, a PCNN model driven by the NSML and NMSF feeding inputs is utilized to fuse the LF and HF DRT coefficients, respectively. NSST decomposition is used in the second stage, where regional energy is computed to fuse the LF NSST approximation coefficients, while sum of absolute difference (SAD) and absolute maximum (AM) based fusion rules are applied to the HF NSST detail coefficients to provide a richer representation of the edge detail information with improved contrast (a sketch of these two HF rules follows below). The fusion performance of the proposed C-MIF approach is validated by extensive simulations performed on different CT-MR image datasets, with a detailed comparison against WT, dual-tree complex wavelet transform (DTCWT), NSCT, NSST and stationary wavelet transform (SWT) decomposition based and other fusion approaches. The comparative analysis shows that the C-MIF approach captures more informative content in the fused image, reflected in higher En and MI values, and retains the contrast and edge detail information, producing higher STD, SF and XEI values than the others and hence fused images with better visual quality.
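The two HF rules used in the second C-MIF stage can be sketched as follows. The AM rule is standard choose-max on coefficient magnitude; the abstract does not spell out a formula for SAD, so the local-deviation reading below is an assumption, as are the function names and the window size.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_hf_am(hf_a, hf_b):
        # Absolute-maximum (AM) rule: keep the coefficient of larger magnitude.
        return np.where(np.abs(hf_a) >= np.abs(hf_b), hf_a, hf_b)

    def fuse_hf_sad(hf_a, hf_b, win=3):
        # SAD-style rule (one plausible reading): prefer the source whose
        # coefficients deviate more from their local mean, i.e. the source
        # with stronger local edge activity.
        act_a = uniform_filter(np.abs(hf_a - uniform_filter(hf_a, size=win)), size=win)
        act_b = uniform_filter(np.abs(hf_b - uniform_filter(hf_b, size=win)), size=win)
        return np.where(act_a >= act_b, hf_a, hf_b)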
In order to achieve the second objective of the present work, four multimodal image fusion methods are proposed to fuse MR-SPECT and MR-PET images along with CT-MR images. All four MIF approaches are developed and implemented so as to provide robust and clean structural detail information without introducing any artifacts and without altering the functional information of the tissue reflected in the source images.

To fuse anatomical with functional images, an improved fusion approach has been proposed that uses the full set of features extracted by NSST together with an adaptive PCNN model (ADPCNN) to retain the desired contrast and detail information in the fused results. In the proposed approach (ADP-MIF), the ADPCNN model fuses the LF NSST coefficients using an adaptive linking-strength parameter based on local visibility and an NSML-motivated feeding input, which provides higher sensitivity and clarity in visual perception (a simplified PCNN iteration is sketched below). For fusing the HF NSST coefficients, a local log-Gabor energy (LLGE) based rule is used to extract optimal texture features with broad spectral information. The fusion performance is compared with existing fusion methods, and the results show that the proposed ADP-MIF approach reproduces the significant visual information, preserves the structural and spectral content, and provides a clear picture of the edge details available in the source images. Furthermore, the fusion performance of ADP-MIF is also compared with twenty-seven existing image fusion methods for CT-MR images; from these results it is concluded that ADP-MIF significantly improves the visual quality of the fused images by providing additional diagnostic information, especially for the fusion of anatomical with functional images.

Next, a hybrid multimodal medical image fusion (H-MIF) approach based on NSST and SWT decomposition has been proposed for anatomical-functional as well as anatomical-anatomical images. In this approach, SWT is applied only to the LF NSST subband, leaving the HF NSST components unchanged. The SWT decomposition produces further LF and HF SWT subbands. To fuse the LF SWT coefficients, an ADPCNN model motivated by an NSML based fusion rule is utilized, while an LLGE based rule is applied to the HF SWT coefficients to extract the salient features available in the source images and to retain the color and edge details without introducing any artifacts. Finally, AM and SAD based fusion rules are applied to the remaining HF NSST subbands to retain more edge detail information. The proposed H-MIF maintains the spatial and spectral details well, with sharp minute details reflected in higher En and IQI scores. The higher feature mutual information (FMI) values obtained for the proposed method indicate that the brighter minute features in the source images are preserved with appropriate consistency and localization. Similar performance is also observed for CT-MR image fusion, where H-MIF achieves good complementary information with more structural detail but suffers from contrast reduction in the fused images (a lower STD value), which may be acceptable given its ability to retain more diagnostic information.

To address the limitations of the state-of-the-art methodologies (chromatic imbalance, over-brightness, sensitivity to random noise, etc.), a unified multimodal fusion framework named SDL-MIF has been proposed using multiscale geometric analysis with sparse representation (SR) and guided filtering. The proposed SDL-MIF approach is based on sparse K-SVD dictionary learning and guided filtering in the NSST domain: an overcomplete dictionary is learned from a training set of medical images to capture their complex details, and the LF NSST subband is sparsely represented for better projection of visual features (luminance, contrast) without any spectral distortion. A dictionary learning (DL) based SR fusion rule is utilized to improve the comprehensive information in the LF NSST subband, while a guided filtering based rule is adopted to fuse the HF NSST subbands, extracting the salient features from the source images and reflecting the color and edge details properly in the fused outcomes without incorporating any artifacts.
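The PCNN sketch below is a simplified, generic formulation intended only to show the role the linking strength beta plays; it is not the ADPCNN of the thesis. The function name, the kernel, the iteration count and the decay constants are all assumed for illustration, and beta may be a scalar or a per-pixel array (the ADPCNN makes it adaptive, e.g. from local visibility).

    import numpy as np
    from scipy.signal import convolve2d

    def pcnn_firing_counts(feed, beta, iters=100, a_theta=0.2, v_theta=20.0, v_l=1.0):
        # feed: feeding input (e.g. an NSML map); beta: linking strength.
        w = np.array([[0.5, 1.0, 0.5],
                      [1.0, 0.0, 1.0],
                      [0.5, 1.0, 0.5]])
        y = np.zeros_like(feed)            # pulse output
        theta = np.ones_like(feed)         # dynamic threshold
        counts = np.zeros_like(feed)       # per-pixel firing totals
        for _ in range(iters):
            l = v_l * convolve2d(y, w, mode="same")         # linking from neighbours
            u = feed * (1.0 + beta * l)                     # internal activity
            y = (u > theta).astype(feed.dtype)              # fire where activity exceeds threshold
            theta = np.exp(-a_theta) * theta + v_theta * y  # decay, then reset fired neurons
            counts += y
        return counts

    # Coefficients are then taken from whichever source fires more often, e.g.:
    # fused = np.where(pcnn_firing_counts(f_a, b_a) >= pcnn_firing_counts(f_b, b_b), c_a, c_b)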
Extensive experiments are performed on MR-CT, MR-PET and MR-SPECT datasets to validate the proposed SDL-MIF method, and a detailed comparative analysis with the other available fusion methods is presented. The comparative results show that the proposed SDL-MIF method preserves the significant information of the multimodal input images, producing fused images of better visual quality with improved contrast.

In the last fusion approach, a feature-level multimodal image fusion framework (CNN-MIF) has been proposed using a two-scale ℓ1-ℓ0 hybrid layer decomposition with convolutional neural network (CNN) based feature mapping and structural patch clustering. In the proposed CNN-MIF approach, an ℓn-norm based two-scale hybrid layer decomposition is utilized to preserve the desired edges and intensity variations at each scale, separating the main structural information from the textural details in the spatial domain. A pre-trained CNN model followed by consistency verification is used to extract the prominent features from each decomposed base-layer component and to generate the pixel activity and fusion weight maps. For each output feature map, an RE based activity measure is computed and refined in the consistency-verification step to optimize the activity weight map used to merge the decomposed base layers. The two-scale detail layers are merged using a clustering based pre-learned multichannel dictionary with a saliency matching rule to map the structural details of the layers efficiently. Moreover, the color components associated with the two source images are combined using a pixel saliency measure, and finally all three components, i.e. the fused base layer, the fused detail layer and the color component, are merged to reconstruct the fused image (a toy two-scale decomposition and weight-map fusion is sketched after the abstract). The experimental results show that the proposed CNN-MIF approach preserves the layer information (structure, fine details, brightness and color), increases fusion accuracy, efficiently extracts complex structures, and maintains the spectral information without introducing any processing artifacts, achieving higher performance measures than the other fusion approaches.

For the purpose of implementing and evaluating the performance of the proposed methods discussed above, the multimodal CT, MR, SPECT and PET neurological images were acquired from the multimodal image database available at the Harvard whole brain atlas (http://www.med.harvard.edu/AANLIB/home.html).
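The following toy sketch conveys the two-scale base/detail idea behind CNN-MIF. It is a deliberate simplification: a Gaussian blur stands in for the ℓ1-ℓ0 optimisation and a gradient-based saliency map stands in for the CNN-derived weight map; both substitutions, and all names, are assumptions for illustration only.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def two_scale_decompose(img, sigma=5.0):
        # Base layer: smoothed structure; detail layer: residual texture/edges.
        base = gaussian_filter(img, sigma)
        return base, img - base

    def saliency_weight(img, sigma=5.0):
        # Smoothed gradient magnitude as a per-pixel saliency map.
        gy, gx = np.gradient(gaussian_filter(img, 1.0))
        return gaussian_filter(np.hypot(gx, gy), sigma) + 1e-12

    def fuse_two_scale(img_a, img_b):
        base_a, det_a = two_scale_decompose(img_a)
        base_b, det_b = two_scale_decompose(img_b)
        w_a, w_b = saliency_weight(img_a), saliency_weight(img_b)
        w = w_a / (w_a + w_b)                                 # normalised weight map
        base = w * base_a + (1.0 - w) * base_b                # weighted base-layer merge
        detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-abs detail merge
        return base + detail                                  # reconstruct fused image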
dc.description.sponsorship: INDIAN INSTITUTE OF TECHNOLOGY ROORKEE [en_US]
dc.language.iso: en [en_US]
dc.publisher: I I T ROORKEE [en_US]
dc.subject: Medical Imaging [en_US]
dc.subject: Computed Tomography [en_US]
dc.subject: Positron Emission Tomography (PET) [en_US]
dc.subject: Anatomical Imaging [en_US]
dc.title: FUSION OF MULTIMODAL NEUROLOGICAL IMAGES [en_US]
dc.type: Thesis [en_US]
Appears in Collections: DOCTORAL THESES (Electrical Engg)

Files in This Item:
File: G29532.pdf (11.41 MB, Adobe PDF)

