Learning Gradients with Gaussian Processes

Xinwei Jiang, Junbin Gao, Tianjiang Wang, Paul W. Kwan

Research output: Book chapter / published conference paper › Conference paper


Abstract

The problems of variable selection and inference of statistical dependence have been addressed by modeling them within the gradient learning framework based on the representer theorem. In this paper, we propose a new gradient learning algorithm in the Bayesian framework, called the Gaussian Processes Gradient Learning (GPGL) model, which achieves higher accuracy while returning credible intervals for the estimated gradients, something existing methods cannot provide. Simulation examples are used to verify the proposed algorithm, and its advantages are evident from the experimental results.
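As a rough illustration of the underlying idea only (not the GPGL algorithm proposed in the paper), the sketch below estimates a function's gradient, together with 95% credible intervals, from the derivative of a Gaussian-process posterior fitted to noisy samples. The kernel choice, hyperparameters, and function names are assumptions made for this example.

```python
# Hypothetical sketch: gradient estimation with credible intervals via the
# derivative of a Gaussian-process posterior. Illustrative only; this is NOT
# the GPGL algorithm of the paper, and all hyperparameters are assumptions.
import numpy as np

def rbf(x1, x2, sf2=1.0, ell=0.5):
    """Squared-exponential kernel k(x, x') = sf2 * exp(-(x - x')^2 / (2 ell^2))."""
    d = x1[:, None] - x2[None, :]
    return sf2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_gradient(x_train, y_train, x_test, sf2=1.0, ell=0.5, noise=1e-2):
    """Posterior mean and variance of f'(x_test) under a zero-mean GP prior."""
    K = rbf(x_train, x_train, sf2, ell) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)

    # d k(x*, x_i) / d x* for the RBF kernel: -(x* - x_i) / ell^2 * k(x*, x_i)
    d = x_test[:, None] - x_train[None, :]
    dK = -(d / ell**2) * rbf(x_test, x_train, sf2, ell)

    grad_mean = dK @ alpha
    # Prior variance of the derivative process, d^2 k / dx dx' at x = x', is sf2 / ell^2
    quad = np.einsum('ij,ij->i', dK, np.linalg.solve(K, dK.T).T)
    grad_var = sf2 / ell**2 - quad
    return grad_mean, grad_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 30)
    y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

    xs = np.linspace(0, 2 * np.pi, 5)
    mean, var = gp_gradient(x, y, xs)
    for xi, m, v in zip(xs, mean, var):
        lo, hi = m - 1.96 * np.sqrt(v), m + 1.96 * np.sqrt(v)
        print(f"x={xi:4.2f}  grad~{m:+.3f}  95% CI [{lo:+.3f}, {hi:+.3f}]  (true {np.cos(xi):+.3f})")
```

The credible intervals come from the posterior variance of the derivative process; the paper's GPGL model obtains its intervals within its own Bayesian formulation, which this toy example does not reproduce.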
Original language: English
Title of host publication: PAKDD 2010
Editors: Vikram Pudi
Place of Publication: Germany
Publisher: Springer
Pages: 113-124
Number of pages: 12
Volume: 6119
DOIs: 10.1007/978-3-642-13672-6_12
Publication status: Published - 2010
Event: Pacific-Asia Conference on Knowledge Discovery and Data Mining - Hyderabad, India
Duration: 21 Jun 2010 - 24 Jun 2010

Conference

Conference: Pacific-Asia Conference on Knowledge Discovery and Data Mining
Country: India
Period: 21/06/10 - 24/06/10


  • Cite this

    Jiang, X., Gao, J., Wang, T., & Kwan, P. W. (2010). Learning Gradients with Gaussian Processes. In V. Pudi (Ed.), PAKDD 2010 (Vol. 6119, pp. 113-124). Springer. https://doi.org/10.1007/978-3-642-13672-6_12