Adaptive weighting between warped and learned foregrounds for view synthesis

Research output: Book chapter/Published conference paper › Conference paper › peer-review

5 Citations (Scopus)

Abstract

In a free viewpoint video (FVV) system, a user is free to choose the viewing angle arbitrarily. To facilitate view synthesis, a number of adjacent views must be interpolated by exploiting the spatial and temporal redundancy among them. Existing synthesis techniques may suffer from poor rendering quality due to occluded background areas and integer rounding errors introduced by warping. Inpainting and background-update techniques are currently used to address these issues. Inpainting techniques normally suffer quality degradation due to the low spatial correlation in foreground/background boundary areas. The background-update technique improves quality, but it still degrades because of its dependency on background warping and spatial correlation. To address this problem, in this paper we learn the foreground and background at each pixel using a Gaussian mixture model (GMM) on the previous images in the interpolated view, and then use the number of Gaussian models to identify foreground and background pixels. A background pixel of the synthesized view is determined using the background model, while a foreground pixel is determined using the weighted contribution of the foreground model and the image warped between the two adjacent views. An adaptive weighting based on the number of models at each pixel is proposed in this paper. The experimental results reveal that the proposed approach provides a 0.60∼3.15 dB PSNR improvement of the synthesized view compared with three standard state-of-the-art methods.
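To make the per-pixel decision rule in the abstract concrete, below is a minimal sketch of the blending logic, assuming NumPy arrays for the warped view and the learned per-pixel model means. The function name `synthesize_view`, the `max_modes` parameter, and the specific weight formula (weight grows with the per-pixel Gaussian mode count) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def synthesize_view(warped, fg_model, bg_model, num_modes, max_modes=5):
    """Blend a warped intermediate view with learned per-pixel GMM models.

    warped    : HxW(x3) image warped from the two adjacent views
    fg_model  : learned foreground appearance per pixel (same shape)
    bg_model  : learned background appearance per pixel (same shape)
    num_modes : HxW count of Gaussian modes maintained at each pixel
    """
    # Pixels with a single stable mode are classified as background and
    # taken directly from the learned background model.
    is_bg = num_modes <= 1

    # Adaptive weight (illustrative rule, not the paper's exact one):
    # the more modes a pixel has accumulated, the more we lean on the
    # warped pixel relative to the learned foreground model.
    w = np.clip((num_modes - 1) / (max_modes - 1), 0.0, 1.0)
    if warped.ndim == 3:
        w = w[..., None]          # broadcast over color channels
        is_bg = is_bg[..., None]

    fg_blend = w * warped + (1.0 - w) * fg_model
    return np.where(is_bg, bg_model, fg_blend).astype(warped.dtype)

# Example with random stand-ins for real warped/learned images:
H, W = 4, 6
rng = np.random.default_rng(0)
out = synthesize_view(
    rng.random((H, W, 3)),       # warped intermediate view
    rng.random((H, W, 3)),       # learned foreground means
    rng.random((H, W, 3)),       # learned background means
    rng.integers(1, 6, (H, W)),  # per-pixel Gaussian mode counts
)
```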
Original language: English
Title of host publication: Proceedings of the IEEE International Conference on Multimedia and Expo Workshops (ICMEW) 2017
Place of publication: United States
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Pages: 49-54
Number of pages: 6
ISBN (Electronic): 9781538605608
ISBN (Print): 9781538605615 (Print on demand)
DOIs
Publication status: Published - 07 Sept 2017
Event: IEEE International Conference on Multimedia and Expo (ICME) 2017 - Harbour Grand Kowloon, Hong Kong, Hong Kong
Duration: 10 Jul 2017 - 14 Jul 2017
http://www.icme2017.org/ (Conference website)
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8014303 (Conference proceedings (ICME 2017))
http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=8014334 (Conference proceedings (ICMEW 2017))

Conference

Conference: IEEE International Conference on Multimedia and Expo (ICME) 2017
Abbreviated title: The New Media Experience
Country/Territory: Hong Kong
City: Hong Kong
Period: 10/07/17 - 14/07/17
Other: The IEEE International Conference on Multimedia & Expo (ICME) has been the flagship multimedia conference sponsored by four IEEE societies since 2000. It serves as a forum to promote the exchange of the latest advances in multimedia technologies, systems, and applications from both the research and development perspectives of the circuits and systems, communications, computer, and signal processing communities. ICME also features an exposition of multimedia products and prototypes. ICME 2017 is the 18th ICME conference. The main theme of 2017 is "The New Media Experience", enabling next-generation 3D/AR/VR experiences and applications, around which various sessions and events, in particular a Grand Challenge, will be organized. About 400 participants, mainly from Asia, Europe, and North America, will gather in Hong Kong to discuss and advance the latest developments in multimedia technologies and related fields. This year, the best contributions to the conference will be honoured with the 10k Best Paper Award, which promotes research advances in general multimedia-related areas: text, graphics, vision, image, video, audio, speech, and sensing data, and their mining, learning, processing, compression, communications, and rendering, and associated innovations and applications.
