TY - JOUR
T1 - Disocclusion filling for depth-based view synthesis with adaptive utilization of temporal correlations
AU - Gao, Pan
AU - Zhu, Tiantian
AU - Paul, Manoranjan
N1 - Publisher Copyright:
© 2021 Elsevier Inc.
PY - 2021/7
Y1 - 2021/7
N2 - Depth image-based rendering paves the way to the success of 3-D video. However, one issue that still remains in 3-D video is how to fill the disocclusion areas. To this end, the Gaussian mixture model (GMM) is commonly employed to generate the background, which is then used to fill the holes. Nevertheless, GMM usually performs poorly on sequences with large reciprocating foreground motion. In this paper, we aim to enhance the view synthesis performance. First, we propose an expectation-maximization-based GMM background generation method, in which the pixel mixture distribution is derived. Second, we propose a refined foreground depth correlation approach, which recovers the background frame by frame based on depth information. Finally, we adaptively choose the background pixels from these two methods for filling. Experimental results show that the proposed method outperforms existing non-deep-learning-based hole-filling methods by around 1.1 dB, and significantly surpasses a deep-learning-based alternative in terms of subjective quality.
AB - Depth image-based rendering paves the way to the success of 3-D video. However, one issue that still remains in 3-D video is how to fill the disocclusion areas. To this end, the Gaussian mixture model (GMM) is commonly employed to generate the background, which is then used to fill the holes. Nevertheless, GMM usually performs poorly on sequences with large reciprocating foreground motion. In this paper, we aim to enhance the view synthesis performance. First, we propose an expectation-maximization-based GMM background generation method, in which the pixel mixture distribution is derived. Second, we propose a refined foreground depth correlation approach, which recovers the background frame by frame based on depth information. Finally, we adaptively choose the background pixels from these two methods for filling. Experimental results show that the proposed method outperforms existing non-deep-learning-based hole-filling methods by around 1.1 dB, and significantly surpasses a deep-learning-based alternative in terms of subjective quality.
KW - Adaptive hole-filling
KW - Depth-image-based rendering
KW - Expectation maximization
KW - Foreground depth correlation
KW - Gaussian mixture model
UR - http://www.scopus.com/inward/record.url?scp=85105849618&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85105849618&partnerID=8YFLogxK
U2 - 10.1016/j.jvcir.2021.103148
DO - 10.1016/j.jvcir.2021.103148
M3 - Article
AN - SCOPUS:85105849618
SN - 1047-3203
VL - 78
SP - 1
EP - 14
JO - Journal of Visual Communication and Image Representation
JF - Journal of Visual Communication and Image Representation
M1 - 103148
ER -