Feature selection using misclassification counts

Adil Bagirov, Andrew Yatsko, Andrew Stranieri, Herbert Jelinek

Research output: Conference paper (peer-reviewed)

Abstract

Reducing the dimensionality of the problem space, by detecting and removing variables that contribute little or nothing to classification, relieves both the computational load and the instance-acquisition effort, since otherwise all of the data attributes must be accessed each time. The approach to feature selection in this paper is based on the concept of coherent accumulation of data about class centers with respect to the coordinates of informative features. Features are ranked by the degree to which they exhibit random characteristics. The results are verified using the Nearest Neighbor classifier. This also helps to address feature irrelevance and redundancy, which ranking alone does not resolve. Additionally, feature ranking methods from independent sources are included for direct comparison.
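As a rough illustration of the idea in the abstract (not the paper's exact method, which accumulates data about class centers), the sketch below ranks each feature by its leave-one-out 1-NN misclassification count when used alone, then verifies the ranking with the same Nearest Neighbor classifier. All function names and the toy data are hypothetical.

```python
# Hedged sketch: per-feature leave-one-out 1-NN misclassification counts
# as a feature-ranking score, verified with a 1-NN classifier.
# This illustrates the general technique only; the paper's actual
# ranking is based on coherent accumulation about class centers.

def one_nn_errors(X, y, feats):
    """Leave-one-out 1-NN misclassification count using only `feats`."""
    errors = 0
    for i in range(len(X)):
        best_d, best_j = None, None
        for j in range(len(X)):
            if j == i:
                continue
            # Squared Euclidean distance restricted to the chosen features.
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if best_d is None or d < best_d:
                best_d, best_j = d, j
        if y[best_j] != y[i]:
            errors += 1
    return errors

def rank_features(X, y):
    """Rank features by ascending single-feature misclassification count."""
    n_feats = len(X[0])
    scores = [(one_nn_errors(X, y, [f]), f) for f in range(n_feats)]
    return [f for _, f in sorted(scores)]

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = [[0.0, 5.0], [0.1, 1.0], [0.2, 9.0], [5.0, 2.0], [5.1, 8.0], [5.2, 0.0]]
y = [0, 0, 0, 1, 1, 1]

ranking = rank_features(X, y)  # informative feature 0 ranks first
```

A ranking like this still leaves redundancy undetected (two copies of the same informative feature would both rank highly), which is why the abstract pairs ranking with Nearest Neighbor verification of the selected subset.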
Original languageEnglish
Title of host publicationProceedings of the Ninth Australasian Data Mining Conference (AusDM 11)
EditorsP. Kennedy
Place of PublicationSydney, Australia
PublisherAustralian Computer Society Inc
Pages51-62
Number of pages12
Volume121
Publication statusPublished - 2011
EventThe 9th Australasian Data Mining Conference: AusDM 2011 - University of Ballarat, Ballarat, Australia
Duration: 01 Dec 2011 - 02 Dec 2011
http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=16402&copyownerid=3591

Publication series

ISSN (Print)1445-1336

Conference

ConferenceThe 9th Australasian Data Mining Conference
Country/TerritoryAustralia
CityBallarat
Period01/12/11 - 02/12/11
