Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/15212
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sharma, Renuka | -
dc.date.accessioned | 2021-12-07T06:27:08Z | -
dc.date.available | 2021-12-07T06:27:08Z | -
dc.date.issued | 2018-05 | -
dc.identifier.uri | http://localhost:8081/xmlui/handle/123456789/15212 | -
dc.description.abstract | Action recognition seems an easy task for humans, but for computer systems it demands extensive processing to recognize patterns. Here we devise a new approach to action recognition for intelligent systems by fusing shallow and deep features extracted from the data. Shallow feature extraction first identifies the motion-salient pixels, thereby discarding irrelevant information, and then extracts improved trajectory features from them. For the deep features, we use a Convolutional Neural Network (CNN). Separate classifiers are trained on the deep and shallow features and then fused to yield an efficient classifier for action recognition. We use the HMDB-51 [1] video dataset, one of the most challenging datasets for action recognition; it comprises a variety of actions such as clap, run, walk, and box, collected from sources including YouTube, movies, and Google videos, under varying illumination, occlusion, camera angles, and poses. | en_US
dc.description.sponsorship | INDIAN INSTITUTE OF TECHNOLOGY, ROORKEE | en_US
dc.language.iso | en | en_US
dc.publisher | I I T ROORKEE | en_US
dc.subject | Action Recognition | en_US
dc.subject | Motion Saliency | en_US
dc.subject | Feature Extraction | en_US
dc.subject | Improved Trajectory | en_US
dc.title | ACTION RECOGNITION USING FEATURE FUSION | en_US
dc.type | Other | en_US
Appears in Collections:MASTERS' THESES (CSE)

Files in This Item:
File | Description | Size | Format
G27904.pdf | | 5.51 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
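The abstract describes fusing the outputs of two separate classifiers, one trained on shallow (improved-trajectory) features and one on deep (CNN) features. The thesis does not specify the fusion rule; a minimal sketch, assuming a hypothetical weighted average of per-class scores (late fusion) with an assumed weight parameter `w`:

```python
import numpy as np

def late_fusion(shallow_scores, deep_scores, w=0.5):
    """Fuse per-class scores from the shallow (trajectory-based) classifier
    and the deep (CNN-based) classifier by a weighted average.
    `w` is a hypothetical weight on the shallow stream, not taken from the thesis.
    Returns the predicted class index and the fused score vector."""
    shallow = np.asarray(shallow_scores, dtype=float)
    deep = np.asarray(deep_scores, dtype=float)
    fused = w * shallow + (1.0 - w) * deep
    return int(np.argmax(fused)), fused

# Toy example with 3 action classes (e.g. clap, run, walk):
label, fused = late_fusion([0.2, 0.5, 0.3], [0.1, 0.3, 0.6], w=0.4)
```

In this toy example the shallow classifier favors class 1 and the deep classifier favors class 2; with `w=0.4` the fused scores are `[0.14, 0.38, 0.48]`, so the fused prediction is class 2.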