Abstract
Repeatability is the cornerstone of science, and it
is particularly important for systematic reviews. However, little is
known about how researchers' choice of database and search platform
influences the repeatability of systematic reviews. Here, we aim to
unveil how the computer environment and the location from which the
search was initiated influence hit results. We present a comparative analysis of
time-synchronized searches at different institutional locations around the
world and evaluate the consistency of hits obtained for each
search term on different search platforms. We found large variation among search platforms
and showed that PubMed and Scopus returned consistent results for
identical search strings from different locations. Google Scholar and
Web of Science's Core Collection varied substantially both in the number
of returned hits and in the list of individual articles depending on
the search location and computing environment. The inconsistency in Web of
Science results most likely stems from the different licensing
packages at different institutions. To maintain scientific integrity and consistency,
especially in systematic reviews, action is needed from both the
scientific community and scientific search platforms to increase search
consistency. Researchers are encouraged to report the search location
and the databases used for systematic reviews, and database providers
should make search algorithms transparent and revise access rules for
titles behind paywalls. Additional options for increasing the
repeatability and transparency of systematic reviews are storing both
search metadata and hit results in open repositories and using
Application Programming Interfaces (APIs) to retrieve standardized,
machine-readable search metadata.
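
As an illustration of that last recommendation, the sketch below shows one way standardized, machine-readable search metadata could be retrieved programmatically. It uses the NCBI E-utilities `esearch` endpoint for PubMed, which returns JSON; the endpoint is real, but the query term, the `pubmed_search_metadata` helper, and the choice of fields to archive are illustrative assumptions, not the workflow used in the article.

```python
"""Minimal sketch (not from the article): retrieving machine-readable
search metadata from PubMed via the NCBI E-utilities esearch endpoint.
The query term and the archived fields are illustrative assumptions."""
import json
import urllib.parse
import urllib.request

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_search_metadata(term: str, retmax: int = 100) -> dict:
    """Run a PubMed search and return the hit count plus matching PMIDs."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",  # request a machine-readable response
        "retmax": retmax,
    })
    with urllib.request.urlopen(f"{ESEARCH_URL}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return {
        "query": term,
        "count": int(result["count"]),   # total number of hits for the query
        "pmids": result["idlist"],       # identifiers of the returned records
        # how PubMed expanded the query internally, useful for repeatability
        "translated_query": result.get("querytranslation"),
    }


if __name__ == "__main__":
    meta = pubmed_search_metadata('"systematic review"[Title] AND repeatability')
    print(json.dumps(meta, indent=2))
```

The JSON output could be stored alongside the review protocol in an open repository, so that the exact query, its translation by the platform, and the resulting hit list are preserved independently of where or when the search was run.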
| Original language | English |
| --- | --- |
| Pages (from-to) | 14658-14668 |
| Number of pages | 11 |
| Journal | Ecology and Evolution |
| Volume | 11 |
| Issue number | 21 |
| Early online date | 14 Oct 2021 |
| DOIs | |
| Publication status | Published - Nov 2021 |