|Title:||A STUDY OF VIDEO COMPRESSION USING MPEG STANDARD|
|Authors:||Agarwal, Vijayendra Mohan|
|Keywords:||ELECTRONICS AND COMPUTER ENGINEERING;VIDEO COMPRESSION;MPEG STANDARD;BLOCK-BASED DCT|
|Abstract:||In this thesis, video compression using the MPEG standard has been performed. MPEG stands for Moving Picture Experts Group. Like JPEG, MPEG refers both to a group and to the standard that the group has developed. The goal of the MPEG group is to send a high-quality picture and a stereo soundtrack through a 1.5 Mbps channel. The main processing steps of the MPEG algorithm for video compression are block-based motion compensation in interframe coding and the block-based DCT in intraframe coding. The block-based DCT is implemented first by using the DCT formula directly and then by using matrix factorization to improve the speed of operation. The DCT coefficients are quantized using a quantizer matrix whose step sizes are chosen so that the low-frequency DCT coefficients are quantized more accurately (small step size) and the high-frequency coefficients more coarsely (large step size). The DC coefficient of the DCT, which remains fairly constant throughout a frame, is coded differentially within a slice, using the DC value of the previous block as a predictor; this predictor is reset at the beginning of every slice. As a result of quantization, a significant proportion of the quantized coefficients are zero-valued, so these blocks of coefficients can be coded efficiently by a combination of run-length coding and modified Huffman coding. For the most frequent combinations of zero run-lengths and the non-zero coefficient values that follow them, variable-length codes are defined in the MPEG standard. Motion estimation/compensation is the key and most powerful feature of MPEG, used to eliminate temporal redundancy in the compression of video signals. Thus, in the motion-compensated coding scheme, the performance and efficiency of a real-time system depend on the accuracy and speed of the motion estimation.
There are many types of motion estimation algorithms, such as pel-recursive, block-matching, and feature-based approaches. In general, the block-matching algorithm (BMA) is dominant and better suited to a simple hardware realization because of its regularity and simplicity. The BMA partitions each image frame into a number of equal-sized blocks and finds a single motion vector for each block by searching for its peak correlation with an associated block in the previous frame. There are several methods of searching for a motion vector, which include the full-search BMA, the two-dimensional logarithmic search, the three-step search, and the conjugate direction search. In this work, motion vectors are first estimated by the full-search BMA and then by the conjugate direction search BMA, so as to decrease the number of computations required. The assumption commonly made in motion estimation is that the prediction error increases monotonically as the shift (i,j) moves away from the direction of minimum distortion. Other assumptions are: (a) objects move in translation in a plane parallel to the camera plane; (b) illumination is spatially and temporally uniform; (c) occlusion of one object by another, and uncovered background, are neglected. For this work, five frames have been taken as a test sequence. The first frame is coded in intraframe mode and successive frames are coded in predictive mode. For frames coded in intraframe mode, the whole frame is transmitted; for frames coded in interframe mode, only motion vectors need to be transmitted to the receiver for reconstruction, and these frames are predicted from the previously reconstructed frames. Performance measures, the mean square error (MSE) and the peak SNR (PSNR), have been evaluated for all the reconstructed frames.|
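The abstract describes computing the block-based DCT via matrix factorization and then quantizing with a frequency-dependent step-size matrix. A minimal NumPy sketch of that idea, assuming 8x8 blocks, is below; the quantizer matrix Q here is purely illustrative (linearly increasing step sizes), not the actual MPEG default intra quantizer table:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix C. The 2-D DCT of a block X is
    # then the product C @ X @ C.T, which is far cheaper than
    # evaluating the double-sum DCT formula per coefficient.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

# Illustrative quantizer matrix: small steps for low frequencies,
# large steps for high frequencies (NOT the MPEG default table).
Q = 8 + 4 * (np.arange(8).reshape(-1, 1) + np.arange(8).reshape(1, -1))

def dct_quantize(block, C, Q):
    coeffs = C @ block @ C.T      # 2-D DCT via two matrix products
    return np.round(coeffs / Q)   # coarser quantization at high frequencies
```

Because C is orthonormal, the inverse transform is simply `C.T @ (q * Q) @ C`, so the same matrix serves encoder and decoder.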
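The run-length stage mentioned in the abstract pairs each non-zero quantized coefficient with the count of zeros preceding it; MPEG then maps the most frequent (run, value) pairs to variable-length codes. A small sketch of the pairing step, assuming the coefficients have already been scanned into a 1-D (e.g. zigzag) order; the 'EOB' marker name is illustrative:

```python
def run_length_pairs(coeffs):
    # Encode a scanned list of quantized coefficients as
    # (zero_run, value) pairs, terminated by an end-of-block marker.
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    pairs.append(('EOB',))  # trailing zeros are implied by the marker
    return pairs
```

For example, the sequence 5, 0, 0, 3, 0 becomes (0, 5), (2, 3), EOB, so long zero runs cost almost nothing to represent.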
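The full-search BMA described above exhaustively tests every displacement in a search window and keeps the best match. A minimal sketch, assuming a sum-of-absolute-differences (SAD) matching criterion and a +/-4 pel window (both common choices, though the thesis does not state its exact parameters):

```python
import numpy as np

def full_search_bma(cur, ref, bx, by, bsize=8, search=4):
    # Exhaustive block matching: for the block of `cur` at (bx, by),
    # try every displacement (dx, dy) in the search window and return
    # the motion vector minimising the SAD against the reference frame.
    best, best_mv = np.inf, (0, 0)
    blk = cur[by:by + bsize, bx:bx + bsize]
    h, w = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block falls outside the frame
            sad = np.abs(blk - ref[y:y + bsize, x:x + bsize]).sum()
            if sad < best:
                best, best_mv = sad, (dx, dy)
    return best_mv
```

The cost is (2s+1)^2 SAD evaluations per block, which is exactly the expense that faster searches such as the conjugate direction search aim to reduce by testing only a few candidates along each axis.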
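The MSE and PSNR measures evaluated for the reconstructed frames can be sketched as follows, assuming 8-bit pixels (peak value 255):

```python
import numpy as np

def mse_psnr(orig, recon, peak=255.0):
    # MSE is the mean squared pixel difference between original and
    # reconstructed frames; PSNR expresses it on a logarithmic scale
    # relative to the peak pixel value: PSNR = 10 log10(peak^2 / MSE).
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    psnr = np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```

A perfect reconstruction gives MSE 0 (infinite PSNR); halving the MSE raises the PSNR by about 3 dB, which makes PSNR convenient for comparing the intraframe- and interframe-coded frames of the test sequence.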
|Research Supervisor/ Guide:||Mehra, D. K.|
|Appears in Collections:||MASTERS' DISSERTATIONS (E & C)|