Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers

Research output: Contribution to journal › Article › Research › peer-review


Abstract

Background: Citation screening is time-consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening. Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection. Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9% rituximab, 40% dietary fibre, 67% aHUS, and 57% ECHO). The proportion of citations predicted as relevant, and therefore warranting further full text inspection (i.e. the precision of the prediction), ranged from 16% (aHUS) to 45% (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7%. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25% and increased the workload saving by 10% but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80%) but reduced the precision (6.8%) and increased the number of missed citations. Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.
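The performance metrics reported in the abstract can be sketched as follows. This is a minimal illustration of how precision, false negative rate, and workload saving are typically derived from a screening confusion matrix; the function name and all counts below are hypothetical and are not the study's data.

```python
# Sketch of the screening-performance metrics described in the abstract,
# computed from a hypothetical confusion matrix. All numbers below are
# illustrative, not results from the paper.

def screening_metrics(tp, fp, fn, tn, screened, total):
    """tp/fp/fn/tn summarise the tool's predictions on the citations it
    classified automatically; `screened` is the number the reviewer
    screened manually before prediction began; `total` is all citations."""
    precision = tp / (tp + fp)             # predicted-relevant that truly were relevant
    false_negative_rate = fn / (fn + tp)   # relevant citations missed by the predictions
    workload_saving = 1 - screened / total # citations the reviewer never had to read
    return precision, false_negative_rate, workload_saving

# Hypothetical example: 1000 citations, 400 screened manually, and the
# remaining 600 classified automatically (45 of them truly relevant).
p, fnr, ws = screening_metrics(tp=44, fp=150, fn=1, tn=405, screened=400, total=1000)
print(f"precision={p:.2f}, FNR={fnr:.3f}, workload saving={ws:.0%}")
```

Note that the three metrics trade off against each other, as the abstract's sensitivity analyses show: classifying more citations automatically raises the workload saving but risks more false negatives.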

Original language: English
Article number: 80
Journal: Systematic Reviews
Volume: 4
Issue number: 1
DOIs: 10.1186/s13643-015-0067-6
Publication status: Published - 15 Jun 2015


Cite this

@article{1adf7907a6344993823d9a866e2d1e63,
title = "Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers",
abstract = "Background: Citation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening. Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection. Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9{\%} rituximab, 40{\%} dietary fibre, 67{\%} aHUS, and 57{\%} ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16{\%} (aHUS) to 45{\%} (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7{\%}. 
Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25{\%} and increased the workload saving by 10{\%} but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80{\%}) but reduced the precision (6.8{\%}) and increased the number of missed citations. Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.",
author = "John Rathbone and Tammy Hoffmann and Paul Glasziou",
year = "2015",
month = "6",
day = "15",
doi = "10.1186/s13643-015-0067-6",
language = "English",
volume = "4",
journal = "Systematic Reviews",
issn = "2046-4053",
publisher = "BMC",
number = "1",

}

Faster title and abstract screening? Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers. / Rathbone, John; Hoffmann, Tammy; Glasziou, Paul.

In: Systematic Reviews, Vol. 4, No. 1, 80, 15.06.2015.


TY - JOUR

T1 - Faster title and abstract screening?

T2 - Evaluating Abstrackr, a semi-automated online screening program for systematic reviewers

AU - Rathbone, John

AU - Hoffmann, Tammy

AU - Glasziou, Paul

PY - 2015/6/15

Y1 - 2015/6/15

N2 - Background: Citation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening. Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection. Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9% rituximab, 40% dietary fibre, 67% aHUS, and 57% ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16% (aHUS) to 45% (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7%. 
Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25% and increased the workload saving by 10% but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80%) but reduced the precision (6.8%) and increased the number of missed citations. Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.

AB - Background: Citation screening is time consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening. Methods: Four systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions. Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection. Results: Of the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9% rituximab, 40% dietary fibre, 67% aHUS, and 57% ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16% (aHUS) to 45% (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7%. 
Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25% and increased the workload saving by 10% but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80%) but reduced the precision (6.8%) and increased the number of missed citations. Conclusions: Semi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.

UR - http://www.scopus.com/inward/record.url?scp=84939178127&partnerID=8YFLogxK

U2 - 10.1186/s13643-015-0067-6

DO - 10.1186/s13643-015-0067-6

M3 - Article

VL - 4

JO - Systematic Reviews

JF - Systematic Reviews

SN - 2046-4053

IS - 1

M1 - 80

ER -