Gaussian Processes Autoencoder for Dimensionality Reduction

Xinwei Jiang, Junbin Gao, Xia Hong, Zhihua Cai

Research output: Conference paper (book chapter)

10 Citations (Scopus)


Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it is used effectively for greedy pre-training of deep neural networks. Compared to Neural Networks (NNs), Gaussian Processes (GPs) have shown advantages in model inference, optimization and performance, and GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the new model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
Original language: English
Title of host publication: PAKDD 2014
Editors: V. S. Tseng
Place of publication: Cham Heidelberg
Number of pages: 12
ISBN (Print): 9783319066042
Publication status: Published - 2014
Event: Pacific-Asia Conference on Knowledge Discovery and Data Mining - Tainan, Taiwan
Duration: 13 May 2014 - 16 May 2014


Conference: Pacific-Asia Conference on Knowledge Discovery and Data Mining


Cite this

Jiang, X., Gao, J., Hong, X., & Cai, Z. (2014). Gaussian Processes Autoencoder for Dimensionality Reduction. In V. S. Tseng (Ed.), PAKDD 2014 (Vol. 8444, pp. 62-73). Springer.