Abstract
Global Assessments (GAs) are used in
Australian General Practice (GP) training during selection,
in-training reviews, and within summative assessment.
This project aimed to determine factors influencing
assessors in their assignment of a GA score.
A modified Delphi process was used with participants
recruited from GP Supervisor and Medical Educator groups
nationwide. Consensus information obtained via
questionnaires was fed back to the group for comment in
subsequent Delphi rounds.
Demographic items collected included educator role, level
of experience, and total number of doctors supervised.
Participants were asked where they had performed GAs,
and the factors they considered when making a GA.
Participants ranked these factors independently, and in
relation to training level of the doctor being observed, as
well as commenting on consensus rankings. Participants
rated their confidence in GAs as an accurate determinant
of GP competence. Participants were asked to identify
personal biases, and their approach to discrepancies in GA
scores.
Of the 28 participants engaging in four Delphi rounds,
most were female, aged over 40, and had roles as Medical
Educators. GAs were most commonly used in direct
observation of practice, formatively and summatively.
Clinical knowledge, conscious incompetence,
communication skills and help-seeking practices were
ranked highly in considering GA. There was good
agreement amongst participants regarding criteria
significance across the training continuum and the
robustness of GA. There was conflicting opinion about
what skills and factors can be learnt versus what should be
inherent characteristics of a doctor.
The factors contributing to a GA are broad and not limited
to assessment of knowledge and skills, but include the
non-clinical domains, namely communication,
professionalism and organisational skills. Participants'
trust in the validity of GA was strong, particularly when
multiple assessors were involved. Personal biases do exist,
and it is unknown at this stage whether or how assessors
overcome them when making a final judgment.
The strength of GA appears to be drawn from the breadth
of factors considered that go beyond ‘clinical’ checklists by
allowing for overall impressions and gut feeling, providing
a ‘rounded approach’ to competency.
Original language | English
---|---
Pages | 59
Number of pages | 1
Publication status | Published - 2018
Event | Association for Medical Education in Europe Conference (AMEE) 2018, Congress Center Basel, Basel, Switzerland. Duration: 25 Aug 2018 → 29 Aug 2018. https://amee.org/conferences/amee-past-conferences/amee-2018

Conference

Conference | Association for Medical Education in Europe Conference (AMEE) 2018
---|---
Abbreviated title | AMEE
Country/Territory | Switzerland
City | Basel
Period | 25/08/18 → 29/08/18
Internet address | https://amee.org/conferences/amee-past-conferences/amee-2018