Identifying studies for systematic reviews of diagnostic tests was difficult due to the poor sensitivity and precision of methodologic filters and the lack of information in the abstract

JA Doust*, E Pietrzak, S Sanders, PP Glasziou

*Corresponding author for this work

Research output: Contribution to journal › Article (peer-reviewed)

72 Citations (Scopus)


Background and Objectives: Methods to identify studies for systematic reviews of diagnostic accuracy are less well developed than for reviews of intervention studies. This study assessed (1) the sensitivity and precision of five published search strategies and (2) the reliability and accuracy of reviewers screening the results of the search strategy.

Methods: We compared the results of the search filters with the studies included in two systematic reviews, and assessed the interobserver reliability of two reviewers screening the list of articles generated by a search strategy.
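The comparison described above amounts to treating the review's included studies as the reference standard: a filter's sensitivity is the fraction of included studies it retrieves, and its precision is the fraction of retrieved records that are included studies. A minimal sketch of that arithmetic (the `filter_performance` function and the record identifiers are illustrative, not from the paper):

```python
def filter_performance(retrieved, included):
    """Return (sensitivity, precision) of a search filter,
    using the review's included studies as the reference standard.

    sensitivity = included studies retrieved / all included studies
    precision   = included studies retrieved / all records retrieved
    """
    retrieved = set(retrieved)
    included = set(included)
    hits = retrieved & included  # included studies the filter found
    return len(hits) / len(included), len(hits) / len(retrieved)

# Hypothetical example: the filter retrieves 4 records; the review
# includes 3 studies, 2 of which the filter found.
sens, prec = filter_performance(
    retrieved={101, 102, 103, 104},
    included={102, 103, 105},
)
print(f"sensitivity={sens:.2f}, precision={prec:.2f}")
```

A filter can thus reach 100% sensitivity simply by retrieving very broadly, at the cost of precision, which is why the study reports both measures.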

Results: In the first review, the search strategy published by van der Weijden had the greatest sensitivity; in the second, four search strategies had 100% sensitivity. There was "substantial" agreement between the two reviewers, but in the first review each reviewer working alone would have missed one paper eligible for inclusion in the review. Ascertainment intersection techniques indicate that it is unlikely that further papers were missed in the screening process.

Conclusion: Published search strategies may miss papers for reviews of diagnostic test accuracy. Papers are not easily identified as studies of diagnostic test accuracy, and the lack of information in the abstract makes it difficult to assess the eligibility for inclusion in a systematic review. © 2005 Elsevier Inc. All rights reserved.

Original language: English
Pages (from-to): 444-449
Number of pages: 6
Journal: Journal of Clinical Epidemiology
Issue number: 5
Publication status: Published - May 2005
Externally published: Yes
