A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

Annette M O'Connor, Guy Tsafnat, James Thomas, Paul Glasziou, Stephen B Gilbert, Brian Hutton

Research output: Contribution to journal › Article › Research › peer-review

75 Citations (Scopus)
227 Downloads (Pure)

Abstract

BACKGROUND: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools.

DISCUSSION: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current lack of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, meeting this standard alone is unlikely to lead to widespread adoption. As with many technologies, it is important that reviewers see "others" in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many of the automation tools being developed for systematic reviews address classification problems. Therefore, the evidence that these tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., by reporting precision and recall against a human reviewer. However, the assessment of automation tools presents unique challenges for investigators and systematic reviewers, including the need to clarify which metrics matter to the systematic review community and the documentation requirements of reproducible software experiments.
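As a minimal illustration of the diagnostic-test framing mentioned above (not taken from the article), the sketch below computes precision and recall for a hypothetical citation-screening tool, treating a human reviewer's include/exclude decisions as the reference standard. All names and data are illustrative assumptions.

```python
# Minimal sketch: evaluating an automated screening tool against a human
# reviewer, analogous to a diagnostic test evaluation. Hypothetical data only.

def precision_recall(human_labels, tool_labels):
    """Return (precision, recall) with the human reviewer as the reference standard.

    Labels are booleans: True = include the citation, False = exclude it.
    """
    pairs = list(zip(human_labels, tool_labels))
    tp = sum(h and t for h, t in pairs)            # both include
    fp = sum((not h) and t for h, t in pairs)      # tool includes, human excludes
    fn = sum(h and (not t) for h, t in pairs)      # tool misses a human include
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

if __name__ == "__main__":
    # Hypothetical screening decisions for eight citations.
    human = [True, True, False, False, True, False, True, False]
    tool  = [True, False, False, False, True, True, True, False]
    p, r = precision_recall(human, tool)
    print(f"precision={p:.2f}, recall={r:.2f}")
```

In screening contexts, recall (the proportion of human-included citations the tool also flags) is often the critical metric, since missed relevant studies are more costly than extra records to screen.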

CONCLUSION: We discuss these adoption barriers with the goal of providing tool developers with guidance on how to design and report such evaluations, and of helping end users assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for assessing automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.

Original language: English
Article number: 143
Pages (from-to): 143
Journal: Systematic Reviews
Volume: 8
Issue number: 1
DOIs
Publication status: Published - 18 Jun 2019
