Abstract

In this paper we propose a novel attribute weight selection technique, called AWST, that automatically determines attribute weights for clustering. The main idea of AWST is to assign a weight to an attribute based on the ability of that attribute to cluster the records of a dataset; attributes with higher ability receive higher weights. We also propose a novel discretization approach within AWST to discretize the domain values of a numerical attribute. We compare the performance of AWST with three existing attribute weight selection techniques, namely SABC, WKM and EB, in terms of the Silhouette Coefficient on nine natural datasets obtained from the UCI machine learning repository. The experimental results show that AWST outperforms the existing techniques on all datasets. The computational complexities and execution times of the techniques are also presented in the paper; notably, AWST requires less execution time than many of the existing techniques used in this study.
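The abstract only outlines the idea of weighting attributes by how well each one clusters the records, so the following Python sketch is purely illustrative and is not the authors' AWST algorithm. It assumes a simple stand-in scoring rule: cluster on each attribute alone with k-means, score the result with the Silhouette Coefficient, and normalise the scores into weights. The dataset, the choice of k, and the shift used to keep weights non-negative are all assumptions made for the example.

```python
# Illustrative sketch only: NOT the AWST algorithm from the paper, just the
# general idea of scoring each attribute by its ability to cluster the records
# (here approximated with per-attribute k-means plus the Silhouette Coefficient).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score


def attribute_weights(X, n_clusters=3, random_state=0):
    """Weight each attribute in proportion to the clustering quality obtained
    when the records are clustered on that attribute alone."""
    scores = []
    for j in range(X.shape[1]):
        column = X[:, [j]]
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=random_state).fit_predict(column)
        # Silhouette Coefficient lies in [-1, 1]; shift so weights stay non-negative.
        scores.append(silhouette_score(column, labels) + 1.0)
    scores = np.asarray(scores)
    return scores / scores.sum()  # normalise so the weights sum to 1


X = load_iris().data
w = attribute_weights(X)
# Weighted clustering: scale each attribute by its weight before running k-means.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X * w)
print("attribute weights:", np.round(w, 3))
print("overall silhouette:", round(silhouette_score(X * w, labels), 3))
```

A usage note: the paper evaluates techniques with the Silhouette Coefficient on nine UCI datasets; the Iris dataset above merely stands in as a small, readily available example.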
Original language: English
Title of host publication: Proceedings of the 13th Australasian Data Mining Conference (AusDM 2015)
Editors: Md Zahidul Islam, Ling Chen, Kok-Leong Ong, Yangchang Zhao, Richi Nayak, Paul Kennedy
Place of publication: Australia
Publisher: CRPIT
Pages: 51-58
Number of pages: 8
Volume: 168
ISBN (Print): 9781921770180
Publication status: Published - 2015
Event: The 13th Australasian Data Mining Conference (AusDM 2015), University of Technology, Sydney, Australia
Duration: 08 Aug 2015 to 09 Aug 2015
https://web.archive.org/web/20150820140652/http://ausdm15.ausdm.org/

Conference

Conference: The 13th Australasian Data Mining Conference
Country/Territory: Australia
City: Sydney
Period: 08/08/15 to 09/08/15
Internet address: https://web.archive.org/web/20150820140652/http://ausdm15.ausdm.org/
