In this paper, we propose a new decision forest algorithm that builds a set of highly accurate decision trees by exploiting the strength of all non-class attributes available in a data set, unlike some existing algorithms that use only a subset of the non-class attributes. At the same time, to promote strong diversity, the proposed algorithm imposes penalties (disadvantageous weights) on the attributes that participated in the latest tree when generating subsequent trees. Moreover, several other weight-related concerns are addressed so that the trees generated by the proposed algorithm remain individually accurate while retaining strong diversity. To demonstrate the merit of the proposed algorithm, we carry out experiments on 20 well-known data sets publicly available from the UCI Machine Learning Repository. The experimental results indicate that the proposed algorithm generates more accurate and better-balanced decision forests than other prominent decision forest algorithms. Accordingly, the proposed algorithm is expected to be very effective in the domain of expert and intelligent systems.
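The penalty mechanism described above can be illustrated with a minimal sketch. This is not the paper's actual tree-induction procedure: real tree building is replaced by a placeholder that "uses" the top-k attributes by weight, and the `penalty` and `recovery` factors are illustrative assumptions, not values from the paper.

```python
def build_forest(attributes, n_trees, penalty=0.5, recovery=1.1):
    """Sketch of diversity via attribute-weight penalties (illustrative only)."""
    # every non-class attribute starts with an equal weight
    weights = {a: 1.0 for a in attributes}
    forest = []
    for _ in range(n_trees):
        # placeholder for tree induction: all attributes remain available,
        # but higher-weight attributes are preferred; here the top-k by
        # weight stand in for the attributes a tree actually uses
        k = max(1, len(attributes) // 2)
        used = sorted(attributes, key=lambda a: weights[a], reverse=True)[:k]
        forest.append(used)
        # penalize attributes that participated in the latest tree so that
        # subsequent trees favor different attributes (promotes diversity)
        for a in attributes:
            if a in used:
                weights[a] *= penalty
            else:
                # let unused attributes gradually recover their weight
                weights[a] = min(1.0, weights[a] * recovery)
    return forest

trees = build_forest(["a", "b", "c", "d"], n_trees=3)
# consecutive "trees" draw on different attribute subsets
print(trees)
```

Because the penalized attributes drop in weight, the second tree is forced onto the attributes the first tree left unused, which is the diversity effect the weighting scheme is meant to produce.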
Number of pages: 15
Journal: Expert Systems with Applications
Publication status: Published - 15 Dec 2017