Incomplete multi-view representation learning through anchor graph-based GCN and information bottleneck

Zhenjiao Liu, Xiao Wang, Xiaodi Huang, Guanlin Li, Ke Sun, Zhikui Chen

Research output: Book chapter/Published conference paper › Conference paper › peer-review

Abstract

Real-world data often contain incomplete views with varying degrees of missing information. While existing methods can learn representations from such data, effectively exploiting all incomplete-view data while remaining robust to different levels of completeness is still challenging. To address this problem, we propose a novel framework named IMRL-AGI, which combines an anchor graph-based Graph Convolutional Network (GCN) with the information bottleneck. Specifically, the framework first constructs an anchor graph to effectively capture the nonlinear relationships between instances. Next, an anchor graph-based GCN is designed to extract feature information from the various views. IMRL-AGI maximizes the mutual information between the common representation and the view representations produced by the anchor graph-based GCN, ensuring accurate extraction of view information. Furthermore, mutual-information minimization is applied to promote diversity and reduce redundancy in the multi-view representation. Extensive experiments on several real-world datasets demonstrate the superiority of IMRL-AGI.
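The record does not reproduce the paper's exact construction, but the two building blocks named in the abstract — an anchor graph over instances and GCN-style propagation on it — can be sketched roughly as follows. This is a minimal illustration assuming a common anchor-graph convention (S = ZΛ⁻¹Zᵀ with row-normalized sample-to-anchor similarities Z) and the standard symmetric GCN propagation rule; the function names, Gaussian kernel, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def anchor_graph(X, anchors, sigma=1.0):
    """Build an anchor-graph adjacency S = Z @ inv(Lambda) @ Z.T (illustrative)."""
    # Z[i, j]: Gaussian-kernel similarity of sample i to anchor j, row-normalized
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-d2 / (2 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)
    lam = Z.sum(axis=0)            # anchor "degrees" (diagonal of Lambda)
    S = (Z / lam) @ Z.T            # symmetric, rows sum to 1
    return S

def gcn_layer(S, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} S D^{-1/2} H W)."""
    d = S.sum(axis=1)
    inv_sqrt_d = 1.0 / np.sqrt(d + 1e-12)
    S_norm = S * inv_sqrt_d[:, None] * inv_sqrt_d[None, :]
    return np.maximum(S_norm @ H @ W, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                    # 8 samples, 4 features (one view)
anchors = X[rng.choice(8, 3, replace=False)]   # 3 anchors sampled from the data
S = anchor_graph(X, anchors)
W = rng.normal(size=(4, 2))                    # learnable weights in practice
H = gcn_layer(S, X, W)
print(H.shape)                                 # (8, 2)
```

Anchors keep the graph cheap: S is built from an n×m similarity matrix instead of a full n×n one, which is what makes the construction practical for larger multi-view datasets.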
Original language: English
Title of host publication: 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Proceedings
Subtitle of host publication: Proceedings
Publisher: IEEE
Pages: 7130-7134
Number of pages: 5
ISBN (Electronic): 9798350344851
ISBN (Print): 9798350344868
DOIs
Publication status: Published - Apr 2024
Event: 2024 IEEE International Conference on Acoustics, Speech and Signal Processing: ICASSP 2024 - COEX Center, Seoul, Korea, Republic of
Duration: 14 Apr 2024 - 19 Apr 2024
https://2024.ieeeicassp.org/
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10446813 (Paper)
https://ieeexplore.ieee.org/xpl/conhome/10445798/proceeding (Proceedings)

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
ISSN (Print): 1520-6149

Conference

Conference: 2024 IEEE International Conference on Acoustics, Speech and Signal Processing
Abbreviated title: Signal Processing: The Foundation for True Intelligence
Country/Territory: Korea, Republic of
City: Seoul
Period: 14/04/24 - 19/04/24
Other: On behalf of the Organizing Committee, it is our immense pleasure and honor to invite you to the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea, 14-19 April 2024, hosted by the IEEE Signal Processing Society.

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. It offers a comprehensive technical program presenting the latest developments in research and technology across the field, and it attracts thousands of professionals annually.

The venue (COEX) is located in the Gangnam District of Seoul, where the famous “Gangnam Style” music was born. It is also at the heart of technology, business, and culture, where many facets of Korea’s unique and rich culture (K-pop, K-drama, historical sites, palaces, museums, galleries, etc.) and delightful cuisine can be easily enjoyed under the warm spring cherry blossoms near the Han River.

So, mark your calendar and join us. We look forward to meeting you at this flagship conference and hope you will engage in sessions filled with valuable lectures and cutting-edge keynotes from world-renowned speakers, along with great opportunities to network with industry pioneers and leading researchers.
