DSpace Repository

ACTION RECOGNITION USING FEATURE FUSION


dc.contributor.author Sharma, Renuka
dc.date.accessioned 2021-12-07T06:27:08Z
dc.date.available 2021-12-07T06:27:08Z
dc.date.issued 2018-05
dc.identifier.uri http://localhost:8081/xmlui/handle/123456789/15212
dc.description.abstract Action recognition seems an easy task for humans, but for computer systems it demands substantial information processing to recognize patterns. Here we devise a new approach to action recognition for intelligent systems by fusing shallow and deep features extracted from the data. Shallow feature extraction first identifies the motion-salient pixels, eliminating unwanted information, and then extracts improved trajectory features from them. For the deep features, we use a Convolutional Neural Network (CNN). Separate classifiers are trained on the deep and the shallow features, and their outputs are fused to yield an efficient classifier for action recognition. We use the HMDB-51 [1] video dataset, one of the most challenging datasets for action recognition; it contains actions of various kinds (clap, run, walk, box, etc.) collected from sources such as YouTube, movies, and Google videos, under varying illumination, occlusion, camera angles, and poses. en_US
dc.description.sponsorship INDIAN INSTITUTE OF TECHNOLOGY, ROORKEE en_US
dc.language.iso en en_US
dc.publisher I I T ROORKEE en_US
dc.subject Action Recognition en_US
dc.subject Motion Saliency en_US
dc.subject Feature Extraction en_US
dc.subject Improved Trajectory en_US
dc.title ACTION RECOGNITION USING FEATURE FUSION en_US
dc.type Other en_US
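The abstract describes fusing the outputs of two separate classifiers (one on shallow trajectory features, one on deep CNN features). A minimal sketch of one common way to do this, score-level (late) fusion by weighted averaging of per-class probabilities, is shown below; the function name, the `weight` parameter, and the example scores are illustrative assumptions, not taken from the thesis.

```python
def fuse_scores(shallow_probs, deep_probs, weight=0.5):
    """Late fusion sketch (assumed scheme, not the thesis's exact method):
    weighted average of per-class probabilities from the shallow-feature
    and deep-feature classifiers. `weight` balances the two streams.
    Returns the predicted class index and the fused score vector."""
    fused = [weight * s + (1.0 - weight) * d
             for s, d in zip(shallow_probs, deep_probs)]
    # Predicted action = class with the highest fused score
    label = max(range(len(fused)), key=fused.__getitem__)
    return label, fused

# Example with 3 hypothetical action classes (e.g. clap, run, walk):
# the deep stream favors class 2, the shallow stream class 1.
label, fused = fuse_scores([0.2, 0.5, 0.3], [0.1, 0.3, 0.6], weight=0.4)
```

With `weight=0.4` the deep stream dominates, so the fused prediction follows the CNN classifier here; in practice the weight would be tuned on validation data.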

