Appraising the quality of randomized controlled trials: Inter-rater reliability for the OTseeker evidence database

L Tooth, S Bennett, A McCluskey, T Hoffmann, K McKenna, M Lovarini

Research output: Contribution to journal › Article › Research › peer-review

34 Citations (Scopus)

Abstract

Rationale and aims: 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising rating guidelines. Methods: In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using Kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results: Inter-rater reliability was 'good to excellent' in the first study (Kappas >= 0.53; ICCs >= 0.71). After revising the rating guidelines, reliability was equivalent to or higher than that previously obtained (Kappas >= 0.53; ICCs >= 0.89), except for the item 'groups similar at baseline', which still had moderate reliability (Kappa = 0.53). In study 2, the two PEDro scale items whose definitions had been revised, 'less than 15% dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (Kappas = 0.53), 'less than 15% dropout' (Kappas
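The inter-rater agreement statistic the abstract reports for individual PEDro items is Cohen's kappa: observed agreement corrected for the agreement two raters would reach by chance given their marginal rating frequencies. As a rough illustration only (the ratings below are invented, not data from the study), a minimal sketch for two raters scoring one yes/no PEDro item:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    pe = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical yes (1) / no (0) ratings for one PEDro item across 10 trials.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

Note how chance correction bites: the raters agree on 8 of 10 trials (0.80 raw agreement), yet kappa is only about 0.52, because both raters say "yes" often and much of that overlap is expected by chance.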

Original language: English
Pages (from-to): 547-555
Number of pages: 9
Journal: Journal of Evaluation in Clinical Practice
Volume: 11
Issue number: 6
DOIs: 10.1111/j.1365-2753.2005.00574.x
Publication status: Published - Dec 2005
Externally published: Yes

Cite this

@article{aca43a7faf9a4d1ab27ea18c04e41d03,
title = "Appraising the quality of randomized controlled trials: Inter-rater reliability for the OTseeker evidence database",
abstract = "Rationale and aims 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising rating guidelines. Methods In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using Kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results Inter-rater reliability was 'good to excellent' in the first study (Kappas >= 0.53; ICCs >= 0.71). After revising the rating guidelines, the reliability levels were equivalent or higher to those previously obtained (Kappas >= 0.53; ICCs >= 0.89), except for the item, 'groups similar at baseline', which still had moderate reliability (Kappa = 0.53). In study 2, two PEDro scale items, which had their definitions revised, 'less than 15{\%} dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (Kappas = 0.53), 'less than 15{\%} dropout' (Kappas",
author = "L Tooth and S Bennett and A McCluskey and T Hoffmann and K McKenna and M Lovarini",
year = "2005",
month = "12",
doi = "10.1111/j.1365-2753.2005.00574.x",
language = "English",
volume = "11",
pages = "547--555",
journal = "Journal of Evaluation in Clinical Practice",
issn = "1356-1294",
publisher = "Blackwell Publishing",
number = "6",
}

Appraising the quality of randomized controlled trials: Inter-rater reliability for the OTseeker evidence database. / Tooth, L; Bennett, S; McCluskey, A; Hoffmann, T; McKenna, K; Lovarini, M.

In: Journal of Evaluation in Clinical Practice, Vol. 11, No. 6, 12.2005, p. 547-555.


TY - JOUR

T1 - Appraising the quality of randomized controlled trials

T2 - Inter-rater reliability for the OTseeker evidence database

AU - Tooth, L

AU - Bennett, S

AU - McCluskey, A

AU - Hoffmann, T

AU - McKenna, K

AU - Lovarini, M

PY - 2005/12

Y1 - 2005/12

N2 - Rationale and aims 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising rating guidelines. Methods In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using Kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results Inter-rater reliability was 'good to excellent' in the first study (Kappas >= 0.53; ICCs >= 0.71). After revising the rating guidelines, the reliability levels were equivalent or higher to those previously obtained (Kappas >= 0.53; ICCs >= 0.89), except for the item, 'groups similar at baseline', which still had moderate reliability (Kappa = 0.53). In study 2, two PEDro scale items, which had their definitions revised, 'less than 15% dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (Kappas = 0.53), 'less than 15% dropout' (Kappas

AB - Rationale and aims 'OTseeker' is an online database of randomized controlled trials (RCTs) and systematic reviews relevant to occupational therapy. RCTs are critically appraised and rated for quality using the 'PEDro' scale. We aimed to investigate the inter-rater reliability of the PEDro scale before and after revising rating guidelines. Methods In study 1, five raters scored 100 RCTs using the original PEDro scale guidelines. In study 2, two raters scored 40 different RCTs using revised guidelines. All RCTs were randomly selected from the OTseeker database. Reliability was calculated using Kappa and intraclass correlation coefficients [ICC (model 2,1)]. Results Inter-rater reliability was 'good to excellent' in the first study (Kappas >= 0.53; ICCs >= 0.71). After revising the rating guidelines, the reliability levels were equivalent or higher to those previously obtained (Kappas >= 0.53; ICCs >= 0.89), except for the item, 'groups similar at baseline', which still had moderate reliability (Kappa = 0.53). In study 2, two PEDro scale items, which had their definitions revised, 'less than 15% dropout' and 'point measures and variability', showed higher reliability. In both studies, the PEDro items with the lowest reliability were 'groups similar at baseline' (Kappas = 0.53), 'less than 15% dropout' (Kappas

U2 - 10.1111/j.1365-2753.2005.00574.x

DO - 10.1111/j.1365-2753.2005.00574.x

M3 - Article

VL - 11

SP - 547

EP - 555

JO - Journal of Evaluation in Clinical Practice

JF - Journal of Evaluation in Clinical Practice

SN - 1356-1294

IS - 6

ER -