We propose a new differentially private decision forest algorithm that minimizes both the number of queries required and the sensitivity of those queries. To do so, we build an ensemble of random decision trees that avoids querying the private data except to find the majority class label in the leaf nodes. Rather than using a count query to return the class counts, as the current state-of-the-art does, we use the Exponential Mechanism to output only the class label itself. This drastically reduces the sensitivity of the query, often by several orders of magnitude, which in turn reduces the amount of noise that must be added to preserve privacy. Our improved sensitivity is achieved by using 'smooth sensitivity', which takes into account the specific data used in the query rather than assuming the worst-case scenario. We also extend work done on the optimal depth of random decision trees to handle continuous features, not just discrete features. This, along with several other improvements, allows us to create a differentially private decision forest with substantially higher predictive power than the current state-of-the-art.
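To illustrate the core idea, the sketch below shows how the Exponential Mechanism can privately select a leaf's majority class label from its class counts. This is a minimal illustration, not the paper's exact algorithm: it uses the count of each label as its utility (global sensitivity 1) rather than the smooth-sensitivity refinement the paper describes, and the function and parameter names are our own.

```python
import math
import random

def exp_mech_label(class_counts, epsilon, sensitivity=1.0, rng=None):
    """Privately pick a class label via the Exponential Mechanism.

    Utility of each label is its count in the leaf node. Because adding or
    removing one record changes any count by at most 1, the sensitivity of
    this utility is 1 -- far lower than returning the noisy counts themselves.
    Illustrative sketch only; names and defaults are assumptions.
    """
    rng = rng or random.Random()
    labels = list(class_counts)
    # Subtract the maximum utility before exponentiating for numerical stability;
    # this rescaling does not change the selection probabilities.
    max_u = max(class_counts.values())
    weights = [
        math.exp(epsilon * (class_counts[lbl] - max_u) / (2.0 * sensitivity))
        for lbl in labels
    ]
    # Sample a label with probability proportional to its weight.
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for lbl, w in zip(labels, weights):
        acc += w
        if r <= acc:
            return lbl
    return labels[-1]
```

With a large privacy budget the mechanism almost always returns the true majority label; as epsilon shrinks, the output distribution flattens toward uniform, which is what provides the privacy guarantee.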
Original language: English
Pages (from-to): 16-31
Number of pages: 16
Journal: Expert Systems with Applications
Early online date: Feb 2017
Publication status: Published - 15 Jul 2017


Title: Differentially private random decision forests using smooth sensitivity