Fast mode decision in the HEVC video coding standard by exploiting region with dominated motion and saliency features

Pallab Kanti Podder, Manoranjan Paul, Manzur Murshed

Research output: Contribution to journal › Article


Abstract

The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor H.264, at the cost of a multifold increase in computation time. This paper introduces a novel coding strategy that reduces the time complexity of the HEVC encoder by efficiently selecting appropriate block-partitioning modes based on human visual features. The proposed technique exploits a human visual attention modelling-based saliency feature and phase correlation-based motion features. These features are combined through a fusion process, using a content-based adaptive weighted cost function, to determine a region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates, aligned to the HEVC-recommended block-partitioning, to estimate a subset of inter-prediction modes. Rather than exhaustively exploring all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique reduces the computational time of the latest HEVC reference encoder by 38% while providing similar rate-distortion (RD) performance.
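The pattern-matching step described in the abstract can be sketched roughly as follows. This is an illustrative outline only, not the paper's implementation: the 8×8 template codebook, the fixed fusion weight, and the mean-based binarisation threshold are placeholder assumptions, since the paper's actual adaptive weights and templates are not given here.

```python
import numpy as np

# Hypothetical 8x8 binary-pattern codebook aligned to a few HEVC inter
# partition modes; the paper's real templates are not reproduced here.
CODEBOOK = {
    "PART_2Nx2N": np.ones((8, 8), dtype=np.uint8),   # whole block dominated
    "PART_2NxN":  np.vstack([np.ones((4, 8)), np.zeros((4, 8))]).astype(np.uint8),
    "PART_Nx2N":  np.hstack([np.ones((8, 4)), np.zeros((8, 4))]).astype(np.uint8),
    "SKIP":       np.zeros((8, 8), dtype=np.uint8),  # no dominant motion/saliency
}

def rdms_pattern(saliency, motion, w_saliency=0.5):
    """Fuse per-pixel saliency and motion maps with a weighted cost
    (here a fixed weight, standing in for the content-adaptive one) and
    binarise against the fused map's mean to get the RDMS pattern."""
    fused = w_saliency * saliency + (1.0 - w_saliency) * motion
    return (fused > fused.mean()).astype(np.uint8)

def select_modes(pattern, top_k=2):
    """Rank codebook templates by per-pixel agreement with the block's
    pattern and return the top-k candidate modes to motion-estimate,
    instead of exhaustively testing every mode."""
    scores = {m: int(np.sum(pattern == t)) for m, t in CODEBOOK.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

For example, a block whose top half is both salient and moving yields a pattern matching the horizontal-split template, so only `PART_2NxN` and its nearest competitors would be motion estimated.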
Original language: English
Pages (from-to): 1-22
Number of pages: 22
Journal: PLoS One
Volume: 11
Issue number: 3
DOIs
Publication status: Published - 2016

