DSpace Repository

CONTENT ANALYSIS OF VIDEOS USING ALIGNMENT TECHNIQUES


dc.contributor.author Jaiswal, Shubhangi
dc.date.accessioned 2021-12-07T06:41:23Z
dc.date.available 2021-12-07T06:41:23Z
dc.date.issued 2017-11
dc.identifier.uri http://localhost:8081/xmlui/handle/123456789/15215
dc.description.abstract In recent years, the amount of multimedia content on the Internet has grown rapidly with increasing Internet use. Users may want to go through a video in a top-down manner, i.e., browsing videos, or in a bottom-up manner, i.e., retrieving specific information from them. They may also want the summary or the highlights of a video. Since video is a major category of multimedia data on the web, users want to be able to interact with videos, which necessitates handling multimedia resources effectively. Lecture videos are a category of videos that particularly invites such interaction. This dissertation proposes an automatic method for aligning the scripts of lecture videos with their captions. Alignment is needed to extract time information from the captions and insert it into the scripts in order to build an index of the videos. No alignment work has previously been done in the lecture-video domain, and alignment methods proposed for other types of videos are not directly applicable, because different similarity techniques behave differently on different types of datasets. The proposed method uses transcripts of lecture videos, the SRT caption files available along with the videos, and caption files generated by YouTube's automatic caption generation feature. The captions and scripts are then aligned using a dynamic programming technique. The most important aspect of alignment is the similarity measure; the proposed work uses three similarity measures: cosine, Jaccard, and Dice. A comparative analysis of these measures is given in the dissertation. We also use WordNet, a large lexical database of English, for word-to-word similarity.
The experimental results compare the alignment accuracy of the various similarity techniques, and the alignment accuracy obtained with captions available along with the lecture videos versus captions generated by YouTube's auto caption generation feature. en_US
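The abstract describes aligning script sentences with caption lines via dynamic programming over a similarity measure. The sketch below illustrates the general idea, not the dissertation's actual implementation: token-set versions of the cosine, Jaccard, and Dice measures, plus a Needleman-Wunsch-style global alignment. The function names, gap penalty, and tokenization are illustrative assumptions; the dissertation additionally uses WordNet for word-to-word similarity, which is omitted here.

```python
# Illustrative sketch (not the dissertation's code) of caption-to-script
# alignment via dynamic programming. Similarity functions operate on
# token sets of a script sentence and a caption line.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def dice(a, b):
    ta, tb = tokens(a), tokens(b)
    return 2 * len(ta & tb) / (len(ta) + len(tb)) if ta or tb else 0.0

def cosine(a, b):
    # Set-based cosine: |A ∩ B| / (sqrt|A| * sqrt|B|).
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / (len(ta) ** 0.5 * len(tb) ** 0.5)

def align(script_sents, caption_lines, sim, gap=-0.2):
    # Needleman-Wunsch-style global alignment; the gap penalty is an
    # assumed value. Returns matched (script_index, caption_index) pairs.
    n, m = len(script_sents), len(caption_lines)
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = score[i - 1][j - 1] + sim(script_sents[i - 1],
                                              caption_lines[j - 1])
            score[i][j] = max(match,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Trace back to recover the aligned pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + sim(script_sents[i - 1],
                                                    caption_lines[j - 1]):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

Once caption lines are matched to script sentences this way, the timestamps carried by each SRT caption can be copied onto the corresponding script sentences to build the video index.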
dc.description.sponsorship INDIAN INSTITUTE OF TECHNOLOGY, ROORKEE en_US
dc.language.iso en en_US
dc.publisher IIT ROORKEE en_US
dc.subject Alignment Methods en_US
dc.subject Lecture Videos en_US
dc.subject Word Similarity en_US
dc.subject Captions Generated en_US
dc.title CONTENT ANALYSIS OF VIDEOS USING ALIGNMENT TECHNIQUES en_US
dc.type Other en_US
