Video coding using the most common frame in scene

Manoranjan Paul, Lin Weisi, Tong Lau Chiew, Lee Bu-sung

Research output: Book chapter / Published conference paper › Conference paper

32 Citations (Scopus)


Motion estimation (ME) and motion compensation (MC) with variable block sizes, fractional-pel search, and multiple reference frames (MRFs) help the recent video coding standard H.264 to improve coding performance significantly over contemporary coding standards. MRFs achieve better coding performance in cases of repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. However, the index codes required for the reference frames, the computational time of ME and MC, and the memory buffer needed for pre-coded frames limit the number of reference frames used in practical applications. In typical video sequences, the immediately previous frame is selected as the reference frame in 68-92% of cases. In this paper, we propose a new video coding method using a single reference frame, the most common frame in scene (McFIS), generated by Gaussian-mixture-based dynamic background modelling. The McFIS is not only more effective than MRFs in terms of rate-distortion and computational time, but also more resilient to transmission channel errors. Experimental results show that the proposed coding scheme outperforms H.264 standard video coding with five reference frames by at least 0.5 dB while reducing computation time by 60%.
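The paper itself does not include code, but the idea of building a background reference frame from a per-pixel Gaussian mixture can be sketched roughly as follows. This is a minimal illustration in the Stauffer-Grimson style, not the authors' actual McFIS implementation; the function name `mcfis_background` and all parameter values (`k`, `alpha`, `init_var`, `match_sigma`) are assumptions for the sketch.

```python
import numpy as np

def mcfis_background(frames, k=3, alpha=0.05, init_var=36.0, match_sigma=2.5):
    """Build a background frame from a per-pixel Gaussian mixture model.

    Each pixel keeps up to k modes [weight, mean, variance]. A sample that
    falls within match_sigma standard deviations of a mode updates it;
    otherwise it spawns (or replaces) a mode. The mean of the most probable
    mode per pixel gives a McFIS-like "most common" background value.
    Illustrative sketch only, not the paper's exact algorithm or parameters.
    """
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    h, w = frames[0].shape
    bg = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            modes = []  # each mode: [weight, mean, variance]
            for f in frames:
                v = f[y, x]
                matched = False
                for m in modes:
                    if abs(v - m[1]) < match_sigma * np.sqrt(m[2]):
                        # pull the matched mode toward the new sample
                        m[0] += alpha * (1.0 - m[0])
                        m[1] += alpha * (v - m[1])
                        m[2] += alpha * ((v - m[1]) ** 2 - m[2])
                        matched = True
                        break
                if not matched:
                    if len(modes) < k:
                        modes.append([alpha, v, init_var])
                    else:
                        # replace the least probable mode
                        modes.sort(key=lambda m: m[0])
                        modes[0] = [alpha, v, init_var]
                # renormalise weights so they sum to one
                s = sum(m[0] for m in modes)
                for m in modes:
                    m[0] /= s
            # dominant (high weight, low variance) mode = background value
            bg[y, x] = max(modes, key=lambda m: m[0] / np.sqrt(m[2]))[1]
    return bg
```

For example, a pixel that is mostly 100 with occasional foreground flashes of 200 settles on a background value of 100, which is what makes such a frame a stable long-term reference compared to simply reusing the previous frame.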
Original language: English
Title of host publication: 2010 Conference Proceedings
Editors: Scott Douglas
Place of publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 4
Publication status: Published - 2010
Event: IEEE International Conference on Acoustics, Speech and Signal Processing - USA, New Zealand
Duration: 14 Mar 2010 - 19 Mar 2010


Conference: IEEE International Conference on Acoustics, Speech and Signal Processing
Country: New Zealand

