An efficient video coding technique using a novel non-parametric background model

Subrata Chakraborty, Manoranjan Paul, Manzur Murshed, Mortuza Ali

Research output: Book chapter/Published conference paper › Conference paper › peer-review

7 Citations (Scopus)
4 Downloads (Pure)

Abstract

A video coding technique that uses a background frame extracted from mixture of Gaussians (MoG) based background modeling provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and the requirement of prior knowledge of the video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally performs better in dynamic background scenarios than the MoG-based technique, without a priori knowledge of the video data distribution. In addition, the WNP technique significantly reduces the noise-related drawbacks of existing NP techniques, providing better-quality video coding with much lower computation time, as demonstrated through extensive comparative studies against the NP, MoG, and HEVC techniques.
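For illustration only, the sketch below shows the general idea behind weighted non-parametric background modeling as described in the abstract, not the paper's actual WNP algorithm: each pixel keeps a short history of intensity samples with per-sample weights, a weighted kernel density estimate scores how well a new intensity fits that history, and down-weighting noisy samples is what separates a weighted NP model from a plain one. The function names, the Gaussian kernel, the bandwidth, and the threshold are all assumptions made for this example.

```python
import numpy as np

def kde_background_probability(pixel, samples, weights, bandwidth=10.0):
    """Weighted kernel density estimate of a pixel intensity against its
    per-pixel history of background samples (Gaussian kernel).
    `samples` and `weights` are 1-D arrays of equal length."""
    diffs = (pixel - samples) / bandwidth
    kernel = np.exp(-0.5 * diffs ** 2)  # unnormalised Gaussian kernel values
    return np.sum(weights * kernel) / (np.sum(weights) * bandwidth * np.sqrt(2 * np.pi))

def classify_pixel(pixel, samples, weights, threshold=1e-3):
    """Label a pixel as background when its weighted KDE probability is high."""
    return kde_background_probability(pixel, samples, weights) > threshold

# Example: a pixel with a noisy but mostly stable history is still background.
history = np.array([120, 118, 122, 119, 121, 180], dtype=float)  # last value is noise
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.2])               # noisy sample down-weighted
print(classify_pixel(121.0, history, weights))   # True  -> background
print(classify_pixel(200.0, history, weights))   # False -> foreground
```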
Original language: English
Title of host publication: 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW)
Place of publication: USA
Publisher: IEEE
Pages: 1-6
Number of pages: 6
ISBN (Electronic): 9781479947171
DOIs
Publication status: Published - 2014
Event: IEEE International Conference on Multimedia and Expo - JinJiang Hotel, Chengdu, China
Duration: 14 Jul 2014 - 18 Jul 2014
https://web.archive.org/web/20140213085557/www.icme2014.org/ (Archived page)

Conference

Conference: IEEE International Conference on Multimedia and Expo
Country/Territory: China
City: Chengdu
Period: 14/07/14 - 18/07/14
Internet address: https://web.archive.org/web/20140213085557/www.icme2014.org/ (Archived page)
