An efficient video coding using phase-matched error from phase correlation information

Manoranjan Paul, Golam Sorwar

Research output: Book chapter/Published conference paper › Conference paper › peer-review

5 Citations (Scopus)
48 Downloads (Pure)

Abstract

The H.264 video coding standard exhibits high performance in terms of compression and image quality compared with other existing standards such as H.263 and MPEG-X. This improved performance is achieved mainly through enormous computation in multiple-mode motion estimation and compensation. Recent research has tried to reduce the computational time using predictive motion estimation, early zero-motion-vector detection, fast motion estimation, fast mode decision, etc.; these approaches reduce the computational time at the cost of degraded image quality. The phase correlation technique is used to find the shift between two pictures. In this paper we use phase correlation to obtain the motion information between the current and reference blocks, and we devise an algorithm to predict the motion estimation block size. Using phase correlation we are able to predict the motion estimation mode directly, instead of performing exhaustive motion estimation over all possible modes, and thus save a large amount of computational time. The experimental results show that around 50% of the motion estimation time can be saved without degrading the image quality.
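
The paper's actual mode-decision algorithm and phase-matched error computation are not reproduced on this page; purely as an illustrative sketch of the phase correlation step described in the abstract, the following Python/NumPy code estimates the displacement between a current block and its co-located reference block from the peak of the inverse FFT of the normalised cross-power spectrum. The function name phase_correlation_shift and the small stabilising constant are assumptions made here for illustration, not the authors' implementation.

import numpy as np

def phase_correlation_shift(current_block, reference_block):
    # Estimate the (dy, dx) shift between two equally sized blocks via
    # phase correlation: the inverse FFT of the normalised cross-power
    # spectrum peaks at the displacement between the blocks.
    F1 = np.fft.fft2(current_block.astype(np.float64))
    F2 = np.fft.fft2(reference_block.astype(np.float64))
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase information only
    correlation = np.real(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices beyond half the block size correspond to negative shifts.
    shift = tuple(p - n if p > n // 2 else p
                  for p, n in zip(peak, correlation.shape))
    return shift, correlation[peak]   # estimated shift and peak strength

For example, for a 16x16 block and a copy of it rolled by (2, -3) pixels, phase_correlation_shift(rolled, block) returns approximately ((2, -3), 1.0); a strong, well-localised peak suggests simple translational motion, which is the kind of cue the paper exploits to predict the motion estimation block size without an exhaustive mode search.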
Original language: English
Title of host publication: IEEE Workshop on Multimedia Signal Processing (MMSP)
Place of Publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 378-382
Number of pages: 5
ISBN (Electronic): 9781424422944
DOIs
Publication status: Published - 2008
Event: MMSP 2008: 10th Workshop - Cairns, Qld, Australia
Duration: 08 Oct 2008 - 10 Oct 2008

Workshop

Workshop: MMSP 2008: 10th Workshop
Country/Territory: Australia
Period: 08/10/08 - 10/10/08
