The Gaussian process (GP) latent variable model (GPLVM) has the capability of learning a low-dimensional manifold from highly nonlinear data of high dimensionality. As an unsupervised dimensionality reduction (DR) algorithm, the GPLVM has been successfully applied in many areas. However, in its current setting, the GPLVM is unable to use label information, which is available for many tasks; therefore, researchers have proposed many kinds of extensions to the GPLVM in order to utilize extra information, among which the supervised GPLVM (SGPLVM) has shown better performance compared with other supervised extensions. However, the SGPLVM suffers from high computational complexity. Bearing in mind the issues of complexity and the need to incorporate additionally available information, in this paper, we propose a novel SGPLVM, called supervised latent linear GPLVM (SLLGPLVM). Our approach is motivated by both the SGPLVM and supervised probabilistic principal component analysis (SPPCA). The proposed SLLGPLVM can be viewed as an appropriate compromise between the SGPLVM and the SPPCA. Furthermore, it is also appropriate to interpret the SLLGPLVM as a semiparametric regression model for supervised DR, using the GP to model the unknown smooth link function. Complexity analysis and experiments show that the developed SLLGPLVM outperforms the SGPLVM not only in computational complexity but also in accuracy. We also compared the SLLGPLVM with two classical supervised classifiers, i.e., a GP classifier and a support vector machine, to illustrate the advantages of the proposed model.
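The general idea described above, a linear latent mapping combined with a GP-modeled smooth link function, can be illustrated with a minimal sketch. This is not the authors' SLLGPLVM: the linear projection here is plain PCA rather than a learned supervised projection, and the GP is a textbook RBF-kernel regressor; all data, kernel parameters, and the noise level are illustrative assumptions.

```python
# Hedged sketch: linear latent mapping + GP link function for supervised DR.
# NOT the SLLGPLVM itself -- PCA stands in for the learned projection, and a
# plain RBF-kernel GP regressor stands in for the smooth link function.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data generated from a 2-D latent space.
n, d_high, d_low = 100, 10, 2
Z = rng.normal(size=(n, d_low))                  # true latent coordinates
A = rng.normal(size=(d_low, d_high))
X = Z @ A + 0.05 * rng.normal(size=(n, d_high))  # observed high-dim data
y = np.sin(Z[:, 0]) + 0.1 * rng.normal(size=n)   # labels via a smooth link

# Step 1: linear latent mapping (PCA as a stand-in for the projection W).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt[:d_low].T                                 # d_high x d_low projection
Z_hat = Xc @ W                                   # estimated latent coordinates

# Step 2: GP regression from latent space to labels (RBF kernel).
def rbf_kernel(A_, B_, length_scale=1.0):
    sq_dists = ((A_[:, None, :] - B_[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / length_scale**2)

noise_var = 0.1
K = rbf_kernel(Z_hat, Z_hat) + noise_var * np.eye(n)
alpha = np.linalg.solve(K, y)                    # (K + sigma^2 I)^{-1} y
y_fit = rbf_kernel(Z_hat, Z_hat) @ alpha         # GP posterior mean on train set

rmse = float(np.sqrt(np.mean((y_fit - y) ** 2)))
print("train RMSE:", rmse)
```

Because the GP only ever sees the low-dimensional latent coordinates, the kernel matrix is built in the reduced space, which is the source of the complexity advantage the abstract mentions.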
Number of pages: 13
Journal: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Publication status: Published - Dec 2012