The rapid development of artificial intelligence brings with it the increasing likelihood of ubiquitous interaction between humans and robots. A significant contribution to the study of human–robot interaction (HRI) comes from experimental studies, in which humans and robots interact under controlled conditions while researchers observe and measure the reactions of humans (and robots). Experiments have long been a central source of information about human interaction in the field of experimental social psychology, and such studies have yielded numerous major insights into the causes and outcomes of interaction. The methodology of experiments, however, including the demands placed upon human participants to behave in predictable ways and the impact of experimenters’ expectancies upon results, has been the focus of much critical analysis. We examined a sample of 100 high-impact HRI studies for evidence of potentially contaminating experimental artefacts and/or authors’ awareness of such factors. In our conclusions we highlight several methodological issues that appeared frequently in our sample and that may impede generalisation from laboratory experiments to real-world settings. Ultimately, we suggest that researchers may need to reformulate the methodologies used to study the unique features of HRI, and we offer a number of recommendations for researchers designing HRI experiments.