Disocclusion filling for depth-based view synthesis with adaptive utilization of temporal correlations

Pan Gao, Tiantian Zhu, Manoranjan Paul

Research output: Contribution to journal › Article › peer-review

Abstract

Depth image-based rendering paves the way for the success of 3-D video. However, one issue that remains in 3-D video is how to fill disocclusion areas. To this end, a Gaussian mixture model (GMM) is commonly employed to generate the background, which is then used to fill the holes. Nevertheless, GMM usually performs poorly on sequences with substantial foreground reciprocation. In this paper, we aim to enhance synthesis performance. First, we propose an expectation-maximization-based GMM background generation method, in which the pixel mixture distribution is derived. Second, we propose a refined foreground depth correlation approach, which recovers the background frame by frame based on depth information. Finally, we adaptively choose the background pixels from these two methods for filling. Experimental results show that the proposed method outperforms existing non-deep-learning-based hole-filling methods by around 1.1 dB, and significantly surpasses a deep-learning-based alternative in terms of subjective quality.
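The abstract does not detail the EM-based GMM background generation. As an illustration only, the following is a minimal per-pixel sketch in Python: plain EM fits a small 1-D Gaussian mixture to one pixel's temporal intensity samples, and the mean of the heaviest-weight (most persistent) component is taken as the background estimate. The function names, the quantile initialization, and the two-component default are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def em_gmm_1d(samples, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture to a pixel's temporal samples via EM.

    Illustrative sketch only; quantile-based initialization is an assumption.
    """
    x = np.asarray(samples, dtype=float)
    # Initialise means at spread-out quantiles; shared variance, uniform weights.
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances from responsibilities.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * d**2).sum(axis=0) / n + 1e-6
    return w, mu, var

def background_value(samples, k=2):
    """Background estimate: mean of the component with the largest mixing weight."""
    w, mu, _ = em_gmm_1d(samples, k)
    return mu[np.argmax(w)]
```

In a background-subtraction setting, the dominant mixture component corresponds to the intensity the pixel shows most often over time, which is the stable background rather than a transient foreground object passing through.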

Original language: English
Article number: 103148
Pages (from-to): 1-14
Number of pages: 14
Journal: Journal of Visual Communication and Image Representation
Volume: 78
Early online date: 13 May 2021
DOIs
Publication status: Published - Jul 2021
