Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/17043
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gangwar, Neeraj | - |
dc.date.accessioned | 2025-06-24T15:01:50Z | - |
dc.date.available | 2025-06-24T15:01:50Z | - |
dc.date.issued | 2014-06 | - |
dc.identifier.uri | http://localhost:8081/jspui/handle/123456789/17043 | - |
dc.description.abstract | Sparse representation has attracted a great deal of attention in the past decade. Famous transforms such as the discrete Fourier transform, the wavelet transform and the singular value decomposition are used to sparsely represent signals. The aim of these transforms is to reveal certain structures of a signal and to represent these structures in a compact form. Therefore, sparse representation provides high performance in areas as diverse as image denoising, pattern classification and compression. All of these applications are concerned with a compact and high-fidelity representation of signals. In this thesis, we consider the classical face recognition problem. This application is more concerned with the semantic information of image signals. It is shown that a sparse representation based framework is a possible way to tackle this problem. We also propose a new approach for face classification which is based on task driven dictionary learning. | en_US |
dc.description.sponsorship | INDIAN INSTITUTE OF TECHNOLOGY ROORKEE | en_US |
dc.language.iso | en | en_US |
dc.publisher | IIT ROORKEE | en_US |
dc.subject | Sparse representation | en_US |
dc.subject | Classification | en_US |
dc.subject | Task driven dictionary learning | en_US |
dc.title | SPARSE REPRESENTATION FOR FACE RECOGNITION | en_US |
dc.type | Other | en_US |
Appears in Collections: MASTERS' THESES (E & C)
Files in This Item:
File | Description | Size | Format
---|---|---|---
G24097.pdf | | 8.92 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.