Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes

Justin Newton Scanlan, Natasha A. Lannin, Tammy Hoffmann, Mandy Stanley, Rachael Mcdonald

Research output: Contribution to journal › Article › Research › peer-review

Abstract

Background/aim: Scientific conferences provide a forum for clinicians, educators, students and researchers to share research findings. To be selected to present at a scientific conference, authors must submit a short abstract which is then rated on its scientific quality and professional merit and is accepted or rejected based on these ratings. Previous research has indicated that inter-rater variability can have a substantial impact on abstract selection decisions. For their 2015 conference, the Occupational Therapy Australia National Conference introduced a system to identify and adjust for inter-rater variability in the abstract ranking and selection process.

Method: Ratings for 1340 abstracts submitted for the 2015 and 2017 conferences were analysed using many-faceted Rasch analysis to identify and adjust for inter-rater variability. Analyses of the construct validity of the abstract rating instrument and rater consistency were completed. To quantify the influence of inter-rater variability on abstract selection decisions, comparisons were made between decisions made using Rasch-calibrated measure scores and decisions that would have been made based purely on raw average scores derived from the abstract ratings.

Results: Construct validity and measurement properties of the abstract rating tool were good to excellent (item fit MnSq scores ranged from 0.8 to 1.2; item reliability index = 1.0). Most raters (24 of 27, 89%) were consistent in their use of the rating instrument. When comparing abstract allocations under the two conditions, 25% of abstracts (n = 341) would have been allocated differently if inter-rater variability was not accounted for.

Conclusion: This study demonstrates that, even with a strong abstract rating instrument and a small rater pool, inter-rater variability still exerts a substantial influence on abstract selection decisions. It is recommended that all occupational therapy conferences internationally, and scientific conferences more generally, adopt systems to identify and adjust for the impact of inter-rater variability in abstract selection processes.
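The contrast the study draws between raw-average ranking and Rasch-calibrated ranking can be illustrated with a much simpler sketch. The code below is not the many-faceted Rasch model the authors used; it substitutes a crude linear rater-severity correction (subtracting each rater's mean deviation from the grand mean), and all rater names and scores are invented. It shows the mechanism at issue: when different abstracts are scored by different rater subsets, a harshly rated abstract can outrank a leniently rated one once severity is adjusted for.

```python
# Illustrative sketch only: a linear severity correction, NOT the
# many-faceted Rasch analysis used in the study. All data are invented.

def raw_average_scores(ratings):
    """Mean of each abstract's ratings, ignoring who gave them."""
    return {a: sum(rs.values()) / len(rs) for a, rs in ratings.items()}

def severity_adjusted_scores(ratings):
    """Subtract each rater's estimated severity (their mean deviation
    from the grand mean) from their scores before averaging."""
    all_scores = [s for rs in ratings.values() for s in rs.values()]
    grand_mean = sum(all_scores) / len(all_scores)
    by_rater = {}
    for rs in ratings.values():
        for rater, score in rs.items():
            by_rater.setdefault(rater, []).append(score)
    severity = {r: sum(v) / len(v) - grand_mean for r, v in by_rater.items()}
    return {a: sum(s - severity[r] for r, s in rs.items()) / len(rs)
            for a, rs in ratings.items()}

# Hypothetical data: abstract -> {rater: score out of 10}.
# Rater "A" is lenient, "C" is harsh; each abstract sees two raters.
ratings = {
    "abs1": {"A": 9, "B": 7},
    "abs2": {"B": 8, "C": 6},
    "abs3": {"A": 8, "C": 5},
}

raw = raw_average_scores(ratings)
adj = severity_adjusted_scores(ratings)
rank = lambda scores: sorted(scores, key=scores.get, reverse=True)

# Raw averages put abs1 first; after the severity correction abs2
# (rated by the harsh rater) moves ahead of abs1.
print("raw ranking:     ", rank(raw))
print("adjusted ranking:", rank(adj))
```

With a hard acceptance cut-off between first and second place, these two rankings would accept different abstracts, which is the kind of reallocation the study quantified at 25% of submissions.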

Original language: English
Pages (from-to): 54-62
Number of pages: 9
Journal: Australian Occupational Therapy Journal
Volume: 65
Issue number: 1
Early online date: 20 Dec 2017
DOI: 10.1111/1440-1630.12440
Publication status: Published - Feb 2018

Fingerprint

Occupational Therapy
Research
Reproducibility of Results
Research Personnel
Students

Cite this

Scanlan, Justin Newton ; Lannin, Natasha A. ; Hoffmann, Tammy ; Stanley, Mandy ; Mcdonald, Rachael. / Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes. In: Australian Occupational Therapy Journal. 2018 ; Vol. 65, No. 1. pp. 54-62.
@article{de6adee103394070b7b375bb9b09756e,
title = "Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes",
abstract = "Background/aim: Scientific conferences provide a forum for clinicians, educators, students and researchers to share research findings. To be selected to present at a scientific conference, authors must submit a short abstract which is then rated on its scientific quality and professional merit and is accepted or rejected based on these ratings. Previous research has indicated that inter-rater variability can have a substantial impact on abstract selection decisions. For their 2015 conference, the Occupational Therapy Australia National Conference introduced a system to identify and adjust for inter-rater variability in the abstract ranking and selection process. Method: Ratings for 1340 abstracts submitted for the 2015 and 2017 conferences were analysed using many-faceted Rasch analysis to identify and adjust for inter-rater variability. Analyses of the construct validity of the abstract rating instrument and rater consistency were completed. To quantify the influence of inter-rater variability of abstract selection decisions, comparisons were made between decisions made using Rasch-calibrated measure scores and decisions that would have been made based purely on raw average scores derived from the abstract ratings. Results: Construct validity and measurement properties of the abstract rating tool were good to excellent (item fit MnSq scores ranged from 0.8 to 1.2; item reliability index = 1.0). Most raters (24 of 27, 89{\%}) were consistent in their use of the rating instrument. When comparing abstract allocations under the two conditions, 25{\%} of abstracts (n = 341) would have been allocated differently if inter-rater variability was not accounted for. Conclusion: This study demonstrates that, even with a strong abstract rating instrument and a small rater pool, inter-rater variability still exerts a substantial influence on abstract selection decisions. 
It is recommended that all occupational therapy conferences internationally, and scientific conferences more generally, adopt systems to identify and adjust for the impact of inter-rater variability in abstract selection processes.",
author = "Scanlan, {Justin Newton} and Lannin, {Natasha A.} and Tammy Hoffmann and Mandy Stanley and Rachael Mcdonald",
year = "2018",
month = feb,
doi = "10.1111/1440-1630.12440",
language = "English",
volume = "65",
pages = "54--62",
journal = "Australian Occupational Therapy Journal",
issn = "0045-0766",
publisher = "Wiley-Blackwell",
number = "1",

}


TY - JOUR

T1 - Impact of adjusting for inter-rater variability in conference abstract ranking and selection processes

AU - Scanlan, Justin Newton

AU - Lannin, Natasha A.

AU - Hoffmann, Tammy

AU - Stanley, Mandy

AU - Mcdonald, Rachael

PY - 2018/2

Y1 - 2018/2

AB - Background/aim: Scientific conferences provide a forum for clinicians, educators, students and researchers to share research findings. To be selected to present at a scientific conference, authors must submit a short abstract which is then rated on its scientific quality and professional merit and is accepted or rejected based on these ratings. Previous research has indicated that inter-rater variability can have a substantial impact on abstract selection decisions. For their 2015 conference, the Occupational Therapy Australia National Conference introduced a system to identify and adjust for inter-rater variability in the abstract ranking and selection process. Method: Ratings for 1340 abstracts submitted for the 2015 and 2017 conferences were analysed using many-faceted Rasch analysis to identify and adjust for inter-rater variability. Analyses of the construct validity of the abstract rating instrument and rater consistency were completed. To quantify the influence of inter-rater variability of abstract selection decisions, comparisons were made between decisions made using Rasch-calibrated measure scores and decisions that would have been made based purely on raw average scores derived from the abstract ratings. Results: Construct validity and measurement properties of the abstract rating tool were good to excellent (item fit MnSq scores ranged from 0.8 to 1.2; item reliability index = 1.0). Most raters (24 of 27, 89%) were consistent in their use of the rating instrument. When comparing abstract allocations under the two conditions, 25% of abstracts (n = 341) would have been allocated differently if inter-rater variability was not accounted for. Conclusion: This study demonstrates that, even with a strong abstract rating instrument and a small rater pool, inter-rater variability still exerts a substantial influence on abstract selection decisions. 
It is recommended that all occupational therapy conferences internationally, and scientific conferences more generally, adopt systems to identify and adjust for the impact of inter-rater variability in abstract selection processes.

UR - http://www.scopus.com/inward/record.url?scp=85038447024&partnerID=8YFLogxK

U2 - 10.1111/1440-1630.12440

DO - 10.1111/1440-1630.12440

M3 - Article

VL - 65

SP - 54

EP - 62

JO - Australian Occupational Therapy Journal

JF - Australian Occupational Therapy Journal

SN - 0045-0766

IS - 1

ER -