Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/14741
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Agarwal, Megha
dc.guide: Maheshwari, R.P.
dc.description.abstract: Content based image retrieval (CBIR) describes the contents of an image in terms of features and retrieves images accordingly. It resolves the basic problems of text based image retrieval by automatically extracting low level features from the visual contents of an image, such as color, texture, shape and spatial layout. After feature extraction, the next step is measuring the similarity among images based on these features. The performance of a CBIR system depends substantially on its low level visual features. A single visual feature captures only one perception, whereas multiple visual features can perceive an image from different perspectives. The aim of this research is to enhance retrieval performance by designing effective and efficient algorithms for individual visual features as well as for combinations of such features. This work proposes local, global, spatial and transform domain features for enhanced image retrieval performance.

The contributions towards improving the performance of image retrieval systems are summarized as follows. In the first developed technique, image information is extracted through local visual features. Local features are computed from small portions of an image, so they are capable of capturing minute variations present in the image. The local feature, viz. the histogram of oriented gradients (HOG), is computed on image blocks and therefore produces a huge set of feature vectors. This problem is handled by a vocabulary tree, which reduces the complexity of feature indexing. HOG performs better than the global feature Gabor wavelet transform (GWT) as well as the local feature scale invariant feature transform (SIFT). The next step is feature extraction through multiresolution and multiorientation techniques, since transform domain methods allow image information to be extracted at multiple resolutions and orientations.
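The block-wise HOG computation described above can be sketched as follows. This is an illustrative toy implementation, not the thesis code; the block size, bin count and test image are assumptions:

```python
import numpy as np

def block_hog(image, n_bins=8, block=8):
    """Toy histogram-of-oriented-gradients (HOG) descriptor computed
    per image block, as in the local-feature approach above. Bin counts
    are weighted by gradient magnitude; 8 bins and 8x8 blocks are
    illustrative assumptions."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = ang[r:r + block, c:c + block].ravel()
            m = mag[r:r + block, c:c + block].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-12))  # L2-normalise
    return np.array(feats)                     # one descriptor per block

img = np.tile(np.arange(16, dtype=float), (16, 1))  # horizontal intensity ramp
d = block_hog(img)
print(d.shape)  # (4, 8): 4 blocks, 8-bin histogram each
```

Because every block yields its own descriptor, a full database produces the "huge set of feature vectors" the abstract mentions, which is what motivates indexing them with a vocabulary tree.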
The Log Gabor filter (LGF) is proposed for feature extraction over three scales and four orientations, facilitating better analysis of the texture information present in the image. Mean and standard deviation are calculated from the transformed image to obtain texture statistics. LGF shows improved retrieval performance compared with the existing GWT.

Another novel multiresolution approach is based on the binary wavelet transform (BWT), a computationally efficient technique. It decomposes an image into a pyramidal structure of subimages at different resolutions corresponding to different scales, and it also provides directional information. BWT transformed images have the same number of gray levels as the original image, which motivates the design of the binary wavelet transform based histogram (BWTH) feature. BWTH extracts a histogram for each BWT decomposed subband. Color information is also integrated by computing BWTH on all three components of the RGB color space. The results of the BWTH based image retrieval system are compared with the color histogram, auto correlogram (AC), discrete wavelet transform and directional binary wavelet patterns (DBWP), and a significant improvement in retrieval performance is observed. It is also noted that BWTH does not consider the spatial relationship among neighboring transformed coefficients. To overcome this limitation, BWTH is further enhanced by integrating it with correlogram features. This combination outperforms the optimal quantized wavelet correlogram (OQWC), Gabor wavelet correlogram (GWC), AC and BWTH itself on various performance measures. Further, the à trous wavelet transform (AWT) is utilized to extract multiresolution information; all orientation information is available in a single subband of the à trous structure. In the first approach, the correlation among à trous wavelet coefficients is employed to analyze texture statistics.
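A minimal frequency-domain log-Gabor filter bank over three scales and four orientations, producing the mean and standard-deviation texture statistics described above, might look like this. All filter constants (centre frequencies, bandwidth, angular spread) are assumed values, not taken from the thesis:

```python
import numpy as np

def log_gabor_stats(image, n_scales=3, n_orients=4):
    """Mean/std texture statistics from a log-Gabor filter bank applied
    in the Fourier domain (3 scales x 4 orientations, as in the LGF
    descriptor). Filter constants are illustrative assumptions."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    theta = np.arctan2(fy, fx)
    F = np.fft.fft2(image)
    feats = []
    for s in range(n_scales):
        f0 = 0.25 / (2.0 ** s)               # assumed centre frequency per scale
        radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(0.65) ** 2))
        radial[0, 0] = 0.0                   # log-Gabor has no DC component
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            # wrapped angular distance to the filter orientation
            d = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(d ** 2) / (2 * (np.pi / n_orients / 1.5) ** 2))
            resp = np.abs(np.fft.ifft2(F * radial * angular))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)                   # 3 scales x 4 orients x 2 stats = 24

rng = np.random.default_rng(0)
v = log_gabor_stats(rng.random((32, 32)))
print(v.shape)  # (24,)
```

Concatenating the per-filter mean and standard deviation gives a compact 24-value texture signature per image, which is then compared across the database with a similarity measure.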
Thus, the à trous wavelet correlogram (AWC) is proposed. Orientation information is of great importance in texture detection, but it is lost in AWC. In the second approach, therefore, the à trous gradient structure descriptor (AGSD) is designed to retain orientation information. In AGSD, orientation information is acquired from à trous wavelet transformed images, and microstructures are used to compute the similarity among orientations within a neighborhood. The resulting microstructure image is used as a mask to select the corresponding à trous wavelet coefficients, and texture statistics are obtained by calculating the correlation of these mapped coefficients. The retrieval performance of AGSD is found to be superior to OQWC, the combination of the standard wavelet filter with the rotated wavelet filter correlogram (SWF+RWF correlogram), GWC, the combination of GWC with the evolutionary group algorithm (EGA), and the texton co-occurrence matrix (TCM).

In the subsequent chapter, a new spatial method is proposed for texture feature extraction with the help of Haar-like wavelet filters, whose simple structure helps to gather texture information about an image. From a set of Haar-like wavelet filters, filters with poorer responses are discarded and the dominant filter is selected, yielding a feature called the cooccurrence of Haar-like wavelet filters (CHLWF). Only the maximum filter edge response is considered, so less prominent directions of intensity variation are suppressed. The cooccurrence of the dominant filter captures intensity variations along the most prominent directions; thus, statistics of the dominant edges, which carry the major information present in the image, are used to compare images.
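The correlogram statistic underlying AWC and the BWTH+correlogram combination can be illustrated with a plain auto-correlogram over a quantised image. This sketch uses only the four axial neighbours at each distance (the full correlogram considers all pixels at a given distance), and the distance set is an assumption:

```python
import numpy as np

def autocorrelogram(q, levels, distances=(1, 3, 5)):
    """Level auto-correlogram: for each quantised level l and distance d,
    the probability that an axial neighbour at distance d of an l-valued
    pixel is also l-valued. This is the kind of spatial correlation
    statistic that AWC computes on à trous wavelet coefficients (sketch;
    the neighbourhood and distance set are simplifying assumptions)."""
    h, w = q.shape
    feat = []
    for d in distances:
        counts = np.zeros(levels)
        totals = np.zeros(levels)
        for dy, dx in ((d, 0), (-d, 0), (0, d), (0, -d)):
            # aligned views of each pixel and its neighbour at offset (dy, dx)
            a = q[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = q[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            for l in range(levels):
                mask = (a == l)
                totals[l] += mask.sum()
                counts[l] += (mask & (b == l)).sum()
        feat.append(counts / np.maximum(totals, 1))
    return np.concatenate(feat)              # levels * len(distances) values

q = np.zeros((8, 8), dtype=int)
q[:, 4:] = 1                                 # two homogeneous halves
f = autocorrelogram(q, levels=2, distances=(1,))
print(f.round(3))
```

For the two-half test image the correlation at distance 1 is high but below 1 for both levels, reflecting the mismatches along the boundary column; a plain histogram would miss this spatial structure entirely, which is exactly the weakness of BWTH that the correlogram integration addresses.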
The results of this approach are compared with various related works in the literature, including the cross correlogram (CC), OQWC, GWC, the SWF+RWF correlogram, the dual tree complex wavelet transform (DT-CWT), the dual tree rotated complex wavelet transform (DT-RCWT), DT-CWT + DT-RCWT and the Gabor wavelet transform (GWT), and the effectiveness of CHLWF is established.

Integrating color and intensity properties perceives an image from different perspectives and captures additional image information. Using this concept, the weight cooccurrence based integrated color and intensity matrix (WCICIM) algorithm is proposed. WCICIM features are combined with the integrated color and intensity cooccurrence matrix (ICICM) for the final feature construction. In WCICIM, suitable weights are assigned to each pixel according to its color and intensity contributions, and the correlations among color–color, color–intensity, intensity–color and intensity–intensity pairs are found from neighboring pixel variations in the weight matrices. These features improve performance compared with the motif cooccurrence matrix (MCM), ICICM, the color correlogram and the combination of block bit plane (BBP) with global color histogram (GCH) features. The performance of the proposed methods is tested on five distinct benchmark image databases (Corel 1000, Corel 2450, MIRFLICKR 25000, Brodatz and MIT VisTex). The results show progressively improved retrieval performance of the proposed algorithms in terms of various performance measures. [en_US]
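Retrieval comparisons like those above are typically scored with precision-style measures. A hypothetical precision-at-k helper (the abstract does not name its exact measures, so this is an assumed, standard formulation) could be:

```python
def precision_at_k(retrieved, relevant, k=10):
    """Standard CBIR performance measure: the fraction of the top-k
    retrieved images that belong to the query's category (relevant set).
    The function name and parameters are illustrative assumptions."""
    top = retrieved[:k]
    return sum(1 for img in top if img in relevant) / k

# toy run: 10 retrieved image ids, 6 of which are in the query category
retrieved = [3, 7, 1, 9, 4, 12, 15, 2, 8, 30]
relevant = {1, 2, 3, 4, 7, 9}
p = precision_at_k(retrieved, relevant, k=10)
print(p)  # 0.6
```

Averaging this score over all queries in a database (e.g. Corel 1000) gives a single number per descriptor, which is how competing features such as CHLWF, GWC and OQWC are ranked against each other.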
dc.description.sponsorship: Indian Institute of Technology Roorkee [en_US]
dc.publisher: Dept. of Electrical Engineering, IIT Roorkee [en_US]
dc.subject: Content Based Image Retrieval [en_US]
dc.subject: Retrieves Images Accordingly [en_US]
dc.subject: Spatial Layout [en_US]
Appears in Collections:DOCTORAL THESES (Electrical Engg)

Files in This Item:
File: Ph.D._Thesis_Megha_Agarwal.pdf | Size: 14.22 MB | Format: Adobe PDF | View/Open

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.