Abstract

A dynamic point cloud (DPC) is a set of points irregularly sampled from the continuous surfaces of objects or scenes, comprising texture (i.e., colour) and geometry (i.e., coordinate) data. DPCs make it possible to closely reproduce real-world scenes and can significantly improve applications in training, safety, entertainment, and quality of life. However, to be more effective, more realistic, and successfully broadcast, dynamic point clouds require stronger compression because their data volume is massive compared to traditional video. Recently, MPEG finalized the Video-based Point Cloud Compression (V-PCC) standard, the latest method for compressing both the geometry and texture of dynamic point clouds, which has achieved the best rate-distortion performance for DPCs so far. However, V-PCC incurs a large computational cost due to expensive normal estimation and segmentation, sacrifices some points to limit the number of 2D patches, and cannot fill all of the space in the 2D frame, which reduces the efficiency of the subsequent video compression. The proposed method addresses these limitations with a novel cross-sectional approach that cuts each DPC frame into different sections according to the main view, shape, and size. This approach reduces expensive normal estimation and segmentation, retains more points, and uses more of the 2D frame, leading to higher compression than V-PCC. Experimental results on standard test sequences show that the proposed technique achieves better compression of both geometry and texture data than the latest V-PCC standard.
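The paper's exact cross-sectioning and patch-packing steps are not described in this record, so the short Python sketch below only illustrates the general idea of slicing a point cloud into fixed-thickness cross-sections along its dominant ("main view") axis. It is a minimal sketch under stated assumptions, not the authors' implementation; the function name slice_point_cloud and the thickness parameter are hypothetical.

# Illustrative sketch only: slice an (N, 3) point cloud into cross-sections
# along its axis of largest extent. Names and parameters are hypothetical;
# the paper's actual cross-sectioning criteria (main view, shape, size) and
# the subsequent 2D patch generation are not reproduced here.
import numpy as np

def slice_point_cloud(points: np.ndarray, thickness: float) -> list[np.ndarray]:
    """Split an (N, 3) array of XYZ coordinates into cross-sections."""
    extents = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(extents))               # dominant ("main view") axis
    coords = points[:, axis]
    # Assign each point to a slab of the given thickness along that axis.
    bins = np.floor((coords - coords.min()) / thickness).astype(int)
    return [points[bins == b] for b in np.unique(bins)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(10_000, 3))  # synthetic stand-in frame
    sections = slice_point_cloud(pts, thickness=5.0)
    print(f"{len(sections)} cross-sections, "
          f"largest has {max(len(s) for s in sections)} points")

In a V-PCC-style pipeline, each such section would then be projected and packed into a 2D frame for video coding; that step is omitted here.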
Original language: English
Title of host publication: Image and Video Technology
Subtitle of host publication: 10th Pacific-Rim Symposium, PSIVT 2022, Virtual Event, November 12–14, 2022, Proceedings
Editors: Han Wang, Wei Lin, Paul Manoranjan, Guobao Xiao, Kap Luk Chan, Xiaonan Wang, Guiju Ping, Haohe Jiang
Place of Publication: Switzerland
Publisher: Springer
Pages: 61-74
Number of pages: 14
ISBN (Electronic): 9783031264313
ISBN (Print): 9783031264306
DOIs
Publication status: Published - 2023
Event: 10th Pacific-Rim Symposium on Image and Video Technology: PSIVT 2022 - Online
Duration: 25 Nov 2022 – 28 Nov 2022
http://www.cis-ram.org/psivt2022/workshops.html (Conference website)
http://www.cis-ram.org/psivt2022/call_for_papers.html (Call for papers)
http://www.cis-ram.org/psivt2022/program.html (Program)
https://arc.nus.edu.sg/wordpress/wp-content/uploads/2022/05/PSIVT-c4p-revised-v8_1.pdf
https://link.springer.com/book/9783031264320 (Proceedings due for publication April 2023 - SpringerLink)

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 13763
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 10th Pacific-Rim Symposium on Image and Video Technology
Period: 25/11/22 – 28/11/22
Other: The Pacific-Rim Symposium on Image and Video Technology (PSIVT) is a premier biennial series of symposia that provides a forum for researchers and practitioners who are involved in, or contributing to, theoretical advances or practical implementations in image and video technology. The 10th Pacific-Rim Symposium on Image and Video Technology (PSIVT 2022) will be held online from 25th to 28th November 2022.

The PSIVT is a premier biennial series of symposia that aims to provide a forum for researchers and practitioners in the Pacific Rim and around the world who are involved in, or contributing to, theoretical advances or practical implementations in image and video technology. The PSIVT has been held nine times. It is a highly regarded conference that provides authors with useful feedback. Submissions are invited on significant, original, and previously unpublished research on all aspects of image and video technology. All papers receive mindful and rigorous reviews.
