Robust L1 Principal Component Analysis and Its Bayesian Variational Inference

Junbin Gao

Research output: Contribution to journal › Article

63 Citations (Scopus)
32 Downloads (Pure)

Abstract

We introduce a robust probabilistic L1-PCA model in which the conventional Gaussian distribution for the noise in the observed data is replaced by the Laplacian distribution (or L1 distribution). Due to the heavy tails of the L1 distribution, the proposed model is expected to be more robust against data outliers. In this letter, we demonstrate how a variational approximation scheme enables effective inference of key parameters in the probabilistic L1-PCA model. Because the L1 density can be expanded as a superposition of an infinite number of Gaussian densities, we express the L1-PCA model as a model marginalized over these superpositions. By doing so, tractable Bayesian inference can be achieved via a variational expectation-maximization-type algorithm.
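The "superposition of an infinite number of Gaussian densities" mentioned above is the standard scale-mixture-of-Gaussians representation of the Laplacian. A minimal statement of the identity, writing λ for the Laplacian scale (notation assumed here, not taken from the paper):

\frac{1}{2\lambda}\exp\!\left(-\frac{|x|}{\lambda}\right)
  = \int_0^{\infty} \mathcal{N}(x \mid 0, \beta)\,
    \frac{1}{2\lambda^2}\exp\!\left(-\frac{\beta}{2\lambda^2}\right)\, d\beta

That is, a zero-mean Laplacian variable is a zero-mean Gaussian whose variance β is itself exponentially distributed. Conditioned on a residual r, the posterior over β is generalized inverse Gaussian, and its inverse mean has the closed form E[1/β | r] = 1/(λ|r|), which is what makes expectation-maximization-type updates tractable.

The Python sketch below illustrates an EM-style estimator built on that identity. It is not the paper's full variational Bayesian treatment (which maintains distributions over the loadings as well); it keeps point estimates of the loadings W, the scores Z, and the mean μ, fixes the scale λ, and uses the closed-form weights E[1/β] above, so each M-step reduces to iteratively reweighted least squares. The function name l1_pca_em and all defaults are illustrative assumptions.

import numpy as np

def l1_pca_em(X, q, lam=1.0, n_iter=50, eps=1e-8):
    """EM-style fit of a PCA model with Laplacian (L1) observation noise.

    X: (N, d) data matrix; q: number of components; lam: fixed Laplacian
    scale. A sketch under the Gaussian scale-mixture view of the Laplacian;
    not the full variational Bayesian algorithm of the paper.
    """
    N, d = X.shape
    mu = np.median(X, axis=0)                      # robust initial location
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:q].T                                   # (d, q) initial loadings
    Z = (X - mu) @ W                               # (N, q) initial scores
    for _ in range(n_iter):
        R = X - Z @ W.T - mu                       # residuals, (N, d)
        # E-step: E[1/beta | r] = 1/(lam*|r|) from the GIG posterior
        B = 1.0 / (lam * np.abs(R) + eps)          # per-entry IRLS weights
        # M-step: weighted least-squares updates for scores and loadings
        for n in range(N):
            Wn = W * B[n][:, None]
            Z[n] = np.linalg.solve(W.T @ Wn + eps * np.eye(q),
                                   Wn.T @ (X[n] - mu))
        for i in range(d):
            Zi = Z * B[:, i][:, None]
            W[i] = np.linalg.solve(Z.T @ Zi + eps * np.eye(q),
                                   Zi.T @ (X[:, i] - mu[i]))
        mu = (B * (X - Z @ W.T)).sum(axis=0) / B.sum(axis=0)
    return W, Z, mu

On data contaminated by a few gross outliers, the subspace spanned by W under these updates typically drifts far less than the ordinary PCA subspace, because each residual's influence is down-weighted by 1/|r| instead of growing like r^2.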
Original language: English
Pages (from-to): 555-572
Number of pages: 18
Journal: Neural Computation
Volume: 20
Issue number: 2
DOIs: 10.1162/neco.2007.11-06-397
Publication status: Published - 2008

Fingerprint

Principal Component Analysis
Normal Distribution
Noise
Bayesian Inference

Cite this

@article{6293944050904a05b7cb337184e717fd,
title = "Robust L1 Principal Component Analysis and Its Bayesian Variational Inference",
abstract = "We introduce a robust probabilistic L1-PCA model in which the conventional gaussian distribution for the noise in the observed data was replaced by the Laplacian distribution (or L1 distribution). Due to the heavy tail characteristics of the L1 distribution, the proposed model is supposed to be more robust against data outliers. In this letter, we demonstrate how a variational approximation scheme enables effective inference of key parameters in the probabilistic L1-PCA model. As the L1 density can be expanded as a superposition of infinite number of gaussian densities, we express the L1-PCA model as a marginalized model over the superpositions. By doing so, a tractable Bayesian inference can be achieved based on the variational expectation-maximization-type algorithm.",
keywords = "Open access version available, Bayesian Inference, Dimensionality Reduction, L1 PCA",
author = "Junbin Gao",
note = "Imported on 12 Apr 2017 - DigiTool details were: Journal title (773t) = Neural Computation. ISSNs: 0899-7667;",
year = "2008",
doi = "10.1162/neco.2007.11-06-397",
language = "English",
volume = "20",
pages = "555--572",
journal = "Neural Computation",
issn = "0899-7667",
publisher = "MIT Press - Journals",
number = "2",

}

Robust L1 Principal Component Analysis and Its Bayesian Variational Inference. / Gao, Junbin.

In: Neural Computation, Vol. 20, No. 2, 2008, p. 555-572.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Robust L1 Principal Component Analysis and Its Bayesian Variational Inference

AU - Gao, Junbin

N1 - Imported on 12 Apr 2017 - DigiTool details were: Journal title (773t) = Neural Computation. ISSNs: 0899-7667;

PY - 2008

Y1 - 2008

N2 - We introduce a robust probabilistic L1-PCA model in which the conventional Gaussian distribution for the noise in the observed data is replaced by the Laplacian distribution (or L1 distribution). Due to the heavy tails of the L1 distribution, the proposed model is expected to be more robust against data outliers. In this letter, we demonstrate how a variational approximation scheme enables effective inference of key parameters in the probabilistic L1-PCA model. Because the L1 density can be expanded as a superposition of an infinite number of Gaussian densities, we express the L1-PCA model as a model marginalized over these superpositions. By doing so, tractable Bayesian inference can be achieved via a variational expectation-maximization-type algorithm.

AB - We introduce a robust probabilistic L1-PCA model in which the conventional Gaussian distribution for the noise in the observed data is replaced by the Laplacian distribution (or L1 distribution). Due to the heavy tails of the L1 distribution, the proposed model is expected to be more robust against data outliers. In this letter, we demonstrate how a variational approximation scheme enables effective inference of key parameters in the probabilistic L1-PCA model. Because the L1 density can be expanded as a superposition of an infinite number of Gaussian densities, we express the L1-PCA model as a model marginalized over these superpositions. By doing so, tractable Bayesian inference can be achieved via a variational expectation-maximization-type algorithm.

KW - Open access version available

KW - Bayesian Inference

KW - Dimensionality Reduction

KW - L1 PCA

U2 - 10.1162/neco.2007.11-06-397

DO - 10.1162/neco.2007.11-06-397

M3 - Article

VL - 20

SP - 555

EP - 572

JO - Neural Computation

JF - Neural Computation

SN - 0899-7667

IS - 2

ER -