In this paper we propose a novel attribute weight selection technique, called AWST, that automatically determines attribute weights for clustering. The main idea of AWST is to assign a weight to an attribute based on the attribute's ability to cluster the records of a dataset: attributes with higher ability receive higher weights. We also propose a novel discretization approach within AWST to discretize the domain values of a numerical attribute. We compare the performance of AWST with three existing attribute weight selection techniques, namely SABC, WKM and EB, in terms of the Silhouette Coefficient on nine natural datasets obtained from the UCI machine learning repository. The experimental results show that AWST outperforms the existing techniques on all datasets. The computational complexities and execution times of the techniques are also presented; notably, AWST requires less execution time than many of the existing techniques used in this study.
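The Silhouette Coefficient used as the evaluation metric above can be sketched in plain Python. This is an illustrative implementation of the standard metric only, not the AWST technique from the paper, and the sample points and labels are made up for the example:

```python
from math import dist  # Euclidean distance (Python 3.8+)

def silhouette_coefficient(points, labels):
    """Mean Silhouette Coefficient over all points.

    For each point i: a(i) is the mean distance to the other points
    in its own cluster, b(i) is the lowest mean distance to the
    points of any other cluster, and s(i) = (b - a) / max(a, b).
    Points in singleton clusters score 0 by convention.
    """
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)

    scores = []
    for p, l in zip(points, labels):
        own = clusters[l]
        if len(own) == 1:
            scores.append(0.0)
            continue
        # Mean intra-cluster distance, excluding the point itself.
        a = sum(dist(p, q) for q in own if q is not p) / (len(own) - 1)
        # Mean distance to the nearest other cluster.
        b = min(sum(dist(p, q) for q in other) / len(other)
                for m, other in clusters.items() if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated clusters give a score close to 1.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = [0, 0, 0, 1, 1, 1]
print(silhouette_coefficient(points, labels))
```

Scores range from -1 (poor clustering) to 1 (dense, well-separated clusters), which is why the paper uses the mean score to compare the weighting techniques across datasets.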
Original language: English
Title of host publication: Proceedings of the 13th Australasian Data Mining Conference (AusDM 2015)
Editors: Md Zahidul Islam, Ling Chen, Kok-Leong Ong, Yangchang Zhao, Richi Nayak, Paul Kennedy
Place of Publication: Australia
Number of pages: 8
ISBN (Print): 9781921770180
Publication status: Published - 2015
Event: The 13th Australasian Data Mining Conference: AusDM 2015 - University of Technology, Sydney, Australia
Duration: 08 Aug 2015 – 09 Aug 2015


Conference: The 13th Australasian Data Mining Conference


