Please use this identifier to cite or link to this item: http://localhost:8081/xmlui/handle/123456789/14412
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yadav, Ajay | -
dc.date.accessioned | 2019-05-22T04:45:49Z | -
dc.date.available | 2019-05-22T04:45:49Z | -
dc.date.issued | 2016-05-27 | -
dc.identifier.uri | http://hdl.handle.net/123456789/14412 | -
dc.description.abstract | Being able to detect and recognize human activities is essential for several applications, including personal assistive robotics. Many approaches have been proposed in the past, and they have normally relied on 2D data. Nowadays, with the availability of low-cost 3D cameras such as the Kinect, it is easier to carry out research on depth data; skeleton and depth data in particular make for a more reliable and accurate system. In this dissertation, a novel approach to detecting the activities performed by a human has been implemented. It involves extracting the frames from a given depth video and obtaining the human skeleton in each frame using a Kinect camera. Simple skeleton features, which are efficient and fast, are used to classify the activities with a multiclass SVM. This approach gives better accuracy than many approaches developed in the past. | en_US
dc.description.sponsorship | Indian Institute of Technology, Roorkee. | en_US
dc.language.iso | en | en_US
dc.publisher | Department of Computer Science and Engineering, IITR. | en_US
dc.subject | RGB-Depth Videos | en_US
dc.subject | Personal Assistive Robotics | en_US
dc.subject | 3D Cameras (Kinect Camera) | en_US
dc.subject | Skeleton and Depth Data | en_US
dc.title | HUMAN ACTION RECOGNITION USING RGB-DEPTH VIDEOS | en_US
dc.type | Other | en_US
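
The abstract above describes a pipeline of per-frame skeleton extraction from depth video, simple skeleton features, and multiclass SVM classification, but the record itself gives no implementation details. The following is a minimal illustrative sketch of that classification step, assuming pairwise joint distances as the skeleton feature and scikit-learn's SVC as the multiclass SVM; the feature choice, array shapes, and randomly generated data are assumptions for illustration only, not the thesis implementation.

    # Illustrative sketch only: classify per-frame skeleton features with a
    # multiclass SVM. The feature (pairwise joint distances) and all data below
    # are assumptions, not the method described in the thesis.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def pairwise_joint_distances(skeleton):
        """skeleton: (n_joints, 3) array of 3D joint positions from a depth camera.
        Returns the flattened upper triangle of the joint-to-joint distance matrix,
        a simple pose descriptor that is cheap to compute per frame."""
        diffs = skeleton[:, None, :] - skeleton[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        iu = np.triu_indices(skeleton.shape[0], k=1)
        return dists[iu]

    # Hypothetical data: 200 frames, 20 joints each, labelled with one of 4 activities.
    rng = np.random.default_rng(0)
    skeletons = rng.normal(size=(200, 20, 3))
    labels = rng.integers(0, 4, size=200)

    features = np.stack([pairwise_joint_distances(s) for s in skeletons])
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.25, random_state=0)

    # SVC handles the multiclass case internally (one-vs-one by default).
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

In practice the features would be computed from real Kinect skeleton frames rather than random arrays, and per-frame predictions could be aggregated (e.g. by majority vote) to label a whole video.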
Appears in Collections: DOCTORAL THESES (E & C)

Files in This Item:
File | Description | Size | Format
G25974-Ajay -D.pdf |  | 6.46 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.