L-1 LASSO modeling and its Bayesian inference

Junbin Gao, Michael Antolovich, Paul W. Kwan

Research output: Book chapter / Published conference paper › Conference paper › peer-review

1 Citation (Scopus)

Abstract

A new iterative procedure for solving regression problems with the LASSO penalty [1] is proposed, based on generative Bayesian modeling and inference. The algorithm produces the anticipated parsimonious, or sparse, regression models that generalize well on unseen data. The proposed algorithm is robust, and no model hyperparameters need to be specified. A comparison is given with state-of-the-art methods for constructing sparse regression models, such as the relevance vector machine (RVM) and local regularization assisted orthogonal least squares (LROLS) regression.
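The LASSO penalty drives many regression coefficients exactly to zero, which is what yields the sparse models the abstract refers to. As a rough illustration of this sparsity effect (a minimal sketch using standard coordinate descent, not the Bayesian procedure proposed in the paper), the soft-thresholding step below is what produces exact zeros:

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator: the proximal map of the L1 penalty."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimize (1/2n)||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j / n
            z = X[:, j] @ X[:, j] / n
            # Small correlations are thresholded to exactly zero
            w[j] = soft_threshold(rho, lam) / z
    return w

# Toy data: only the first two of five features are informative.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
w = lasso_cd(X, y, lam=0.1)
```

On data like this the fitted vector `w` recovers the two informative coefficients (slightly shrunk toward zero by the penalty) while the three irrelevant ones are set exactly to zero; choosing the penalty weight `lam` is precisely the hyperparameter-tuning burden that the paper's Bayesian treatment aims to remove.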

Original language: English
Title of host publication: AI 2008
Subtitle of host publication: Advances in Artificial Intelligence - 21st Australasian Joint Conference on Artificial Intelligence, Proceedings
Pages: 318-324
Number of pages: 7
Publication status: Published - 01 Dec 2008
Event: 21st Australasian Joint Conference on Artificial Intelligence, AI 2008 - Auckland, New Zealand
Duration: 01 Dec 2008 to 05 Dec 2008

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5360 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 21st Australasian Joint Conference on Artificial Intelligence, AI 2008
Country/Territory: New Zealand
City: Auckland
Period: 01/12/08 to 05/12/08
