P-hacking in experimental audit research

Mohammad Jahanzeb Khan, Per Christen Tronnes

    Research output: Contribution to journal · Article · peer-review

    4 Citations (Scopus)

    Abstract

    A focus on novel, confirmatory, and statistically significant results by journals that publish experimental audit research may result in substantial bias in the literature. We explore one type of bias known as p-hacking: a practice where researchers, whether knowingly or unknowingly, adjust their collection, analysis, and reporting of data and results until non-significant results become significant. Examining the experimental audit literature published in eight accounting and audit journals over the last three decades, we find an overabundance of p-values at or just below the conventional thresholds for statistical significance. This excess of “just significant” results is an indication that some of the results published in the experimental audit literature are potentially a consequence of p-hacking. We discuss some potential remedies that, if adopted, may (to some extent) alleviate concerns regarding p-hacking and the publication of false positive results.
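    The pattern the abstract describes, too many reported p-values bunched just below the 0.05 threshold, can be illustrated with a simple caliper-style comparison. The sketch below is not the authors' procedure; the p-values, the 0.005 caliper width, and the use of a binomial test are illustrative assumptions only.

    ```python
    # Hypothetical caliper sketch: compare how many reported p-values fall
    # just below vs. just above the 0.05 threshold. Illustrative data only;
    # this is not the paper's actual method or sample.
    from scipy.stats import binomtest

    # Made-up collection of reported p-values from a hypothetical literature scan
    reported_p = [0.012, 0.049, 0.048, 0.047, 0.051, 0.046, 0.053, 0.044,
                  0.021, 0.049, 0.038, 0.052, 0.045, 0.049, 0.061, 0.048]

    caliper = 0.005  # assumed window width on each side of 0.05
    just_below = sum(1 for p in reported_p if 0.05 - caliper <= p < 0.05)
    just_above = sum(1 for p in reported_p if 0.05 <= p < 0.05 + caliper)

    # Absent p-hacking, p-values in a narrow window around the threshold should
    # fall roughly evenly on either side; a surplus just below 0.05 is the
    # "too many just-significant results" pattern the abstract refers to.
    result = binomtest(just_below, just_below + just_above, p=0.5,
                       alternative="greater")
    print(f"just below: {just_below}, just above: {just_above}, "
          f"binomial p-value: {result.pvalue:.3f}")
    ```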
    Original language: English
    Pages (from-to): 119-131
    Number of pages: 13
    Journal: Behavioral Research in Accounting
    Volume: 31
    Issue number: 1
    Early online date: 01 Jul 2018
    DOIs
    Publication status: Published - 01 Mar 2019
