Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/12204
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lingamaiah, S. | -
dc.date.accessioned | 2014-11-30T06:11:12Z | -
dc.date.available | 2014-11-30T06:11:12Z | -
dc.date.issued | 2010 | -
dc.identifier | M.Tech | en_US
dc.identifier.uri | http://hdl.handle.net/123456789/12204 | -
dc.guide | Nigam, M. J. | -
dc.description.abstract | Images containing faces are essential to intelligent vision-based human-computer interaction, and research efforts in face processing include face recognition, face tracking, pose estimation, and expression recognition. Given a single image, the goal of face detection is to identify all image regions that contain a face, regardless of its position, orientation, and lighting conditions. This problem is challenging because faces are non-rigid and vary widely in size, shape, color, and texture. In this work, the faces in a given image are detected using a neural network and Gabor filter features. Numerous techniques have been developed to detect faces in a single image; they fall into four categories: knowledge-based methods, feature-invariant approaches, template-matching methods, and appearance-based methods. In appearance-based methods, models are learned from a set of training images that capture the variations of facial appearance, and these learned models are then used for detection. These methods are designed mainly for face detection. The dissertation work consists of two parts: image processing and neural networks. The given image is processed with Gabor filters to extract facial features using the Image Processing Toolbox, and the extracted features are applied to a feed-forward neural network built with the Neural Network Toolbox. Considering the desirable spatial-locality and orientation-selectivity characteristics of the Gabor filter, the filters were designed to extract facial features from the local image. The goal of facial feature extraction is to detect the presence and location of features such as the eyes, nose, nostrils, eyebrows, mouth, lips, and ears. The feature vector based on Gabor filters is used as the input to the classifier, which is a feed-forward neural network.
The given image is convolved with the Gabor filters by multiplying the image by the filters in the frequency domain. The feature extraction algorithm has two main steps: feature point localization and feature vector computation. Feature vectors are extracted from points with high information content on the face image; in most feature-based methods, the facial features are assumed to be the eyes, nose, and mouth. From the responses of the face image to the Gabor filters, peaks are found by searching the locations in a window W0 of size W x W. Feature vectors are generated at the feature points as a composition of Gabor wavelet transform coefficients. This architecture was implemented in Matlab in a graphical environment allowing face detection over a database. It has been evaluated using training and test data of 150 images containing faces and non-faces; on this test set good detection was obtained, as shown in Chapter 6. Face detection and recognition have many applications in a variety of fields, such as security systems, videoconferencing, and identification. The objective of this work is to implement a classifier based on neural networks and Gabor feature extraction for face detection; the ANN is used to classify face and non-face patterns. | en_US
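The pipeline the abstract describes (frequency-domain multiplication with a Gabor filter, then a W x W window search for response peaks) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the thesis's Matlab code: the function names, the real-part-only Gabor form, and all parameter values are assumptions.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def filter_via_fft(image, kernel):
    """Convolve by pointwise multiplication in the frequency domain
    (circular convolution; the kernel is zero-padded to the image size)."""
    fk = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * fk))

def local_peaks(response, w):
    """Feature point localization: keep pixels that are the maximum
    of their w x w neighborhood (a simple stand-in for the W x W search)."""
    h, wd = response.shape
    half = w // 2
    pts = []
    for i in range(half, h - half):
        for j in range(half, wd - half):
            window = response[i - half:i + half + 1, j - half:j + half + 1]
            if response[i, j] == window.max() and response[i, j] > 0:
                pts.append((i, j))
    return pts
```

In the full method, a feature vector would then be assembled at each detected point from the responses of a bank of such filters (several orientations and wavelengths) and passed to the feed-forward network.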
dc.language.iso | en | en_US
dc.subject | ELECTRONICS AND COMPUTER ENGINEERING | en_US
dc.subject | FACE DETECTION | en_US
dc.subject | DIMENSIONAL IMAGES | en_US
dc.subject | NEURAL NETWORKS | en_US
dc.title | FACE DETECTION FOR TWO DIMENSIONAL IMAGES USING GABOR FEATURE EXTRACTION AND NEURAL NETWORKS | en_US
dc.type | M.Tech Dissertation | en_US
dc.accession.number | G20117 | en_US
Appears in Collections:MASTERS' THESES (E & C)

Files in This Item:
File | Description | Size | Format
ECDG20117.pdf | | 6.28 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.