Abstract
A focus on novel, confirmatory, and statistically significant results by journals that publish experimental audit research may result in substantial bias in the literature. We explore one type of bias known as p-hacking: a practice in which researchers, knowingly or unknowingly, adjust their collection, analysis, and reporting of data and results until non-significant results become significant. Examining the experimental audit literature published in eight accounting and audit journals over the last three decades, we find an overabundance of p-values at or just below the conventional thresholds for statistical significance. This finding of too many “just significant” results indicates that some results published in the experimental audit literature are potentially a consequence of p-hacking. We discuss some potential remedies that, if adopted, may (to some extent) alleviate concerns regarding p-hacking and the publication of false positive results.
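The abstract's approach of looking for an overabundance of p-values just below a significance threshold can be illustrated with a simple "caliper"-style comparison. The sketch below is an assumption about the general technique, not the paper's actual procedure: it counts reported p-values in two equal-width bins on either side of 0.05 and applies an exact one-sided binomial test, since absent p-hacking the two bins should be roughly equally populated. The sample p-values are fabricated for illustration only.

```python
import math

def caliper_test(p_values, threshold=0.05, width=0.005):
    """Compare counts of p-values just below vs. just above `threshold`.

    Without p-hacking, p-values should fall roughly evenly into two
    equal-width bins straddling the threshold; a surplus just below it
    is a warning sign. Returns (n_below, n_above, one_sided_binomial_p).
    """
    below = sum(1 for p in p_values if threshold - width <= p < threshold)
    above = sum(1 for p in p_values if threshold < p <= threshold + width)
    n = below + above
    # Exact one-sided binomial test: P(X >= below) with X ~ Binomial(n, 0.5)
    tail = sum(math.comb(n, k) for k in range(below, n + 1)) / 2**n if n else 1.0
    return below, above, tail

# Fabricated p-values clustered just under 0.05 (illustration only):
sample = [0.046, 0.047, 0.048, 0.049, 0.049, 0.051, 0.044, 0.052]
print(caliper_test(sample))
```

With larger literatures, the same comparison gains power, which is why an excess of "just significant" results across many published studies can signal p-hacking even when no single study is suspect.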
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 119-131 |
| Number of pages | 13 |
| Journal | Behavioral Research in Accounting |
| Volume | 31 |
| Issue number | 1 |
| Early online date | 01 Jul 2018 |
| DOIs | |
| Publication status | Published - 01 Mar 2019 |