Please use this identifier to cite or link to this item:
http://hdl.handle.net/123456789/14409
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Goyal, Mudit | - |
dc.date.accessioned | 2019-05-21T10:37:23Z | - |
dc.date.available | 2019-05-21T10:37:23Z | - |
dc.date.issued | 2016-05 | - |
dc.identifier.uri | http://hdl.handle.net/123456789/14409 | - |
dc.description.abstract | In this report we propose a framework for recognition of dynamic sign language gestures from depth sequences. Two different sets of features are extracted for feature representation. The first is gradient local auto-correlation (GLAC) features computed from depth motion maps; to compensate for the temporal information lost in depth motion maps, the second set is HON4D (Histogram of Oriented 4D Normals) features. A new framework is proposed for fusing the features at the decision level using a classifier ensemble of three 2-layer feed-forward neural networks (an illustrative sketch of this fusion scheme follows the metadata record below). The proposed framework is tested on two datasets, MSRGesture3D and ISL3D; the ISL3D dataset was created by us and contains 12 dynamic Indian Sign Language gestures. The recognition accuracies achieved are 96.99% on the MSRGesture3D dataset and 81.38% on the ISL3D dataset. | en_US |
dc.description.sponsorship | Indian Institute of Technology, Roorkee. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Computer Science and Engineering, IITR. | en_US |
dc.subject | Sign Language | en_US |
dc.subject | Dynamic Gestures | en_US |
dc.subject | Depth Motion Maps | en_US |
dc.subject | HON4D (Histogram of Oriented 4D Normals) | en_US |
dc.subject | MSRGesture3D and ISL3D (datasets) | en_US |
dc.title | Sign Language Dynamic Gestures Recognition using Depth Data | en_US |
dc.type | Other | en_US |
Appears in Collections: | DOCTORAL THESES (E & C) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
G25971_mudit-D.pdf | | 2.12 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.