This four-year longitudinal study investigated a range of assessment methods for evaluating learning outcomes associated with critical thinking, problem solving, written communication, and lifelong learning. Students from the Faculties of Arts and Science, and Engineering and Applied Science participated in the study. The measures included surveys, interviews, two standardized tests (the Collegiate Learning Assessment and the Critical Thinking Assessment Test), and program-wide rubrics (the VALUE rubrics) from the Association of American Colleges and Universities, used to score student work samples independently of course grading. Researchers worked with course instructors to align teaching, learning and assessment, and to investigate and evaluate the utility of the instruments used.
The results of the study quantified longitudinal achievement of student learning outcomes on three instruments, with incremental growth in skills demonstrated across the undergraduate programs studied. The high-level findings were:
Queen's students' skills in critical thinking, problem solving, and communication increased over the four years of their degree. The effects were detectable using the standardized tests (CLA+ d = .44 and CAT d = .65), but more evident using the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics, with first-year medians at Benchmark 1 and Milestone 2 improving to fourth-year medians at Milestone 3.
Queen's students demonstrated a higher level of skill in critical thinking than comparable students at most peer institutions participating in the CLA+ or CAT (for example, the Queen's fourth-year sample performed at the 87th percentile among institutions participating in the CLA+).
Student motivation was a significant concern for the standardized tests. Results from student focus groups suggested that for students to put effort into testing, instructors need to value the test, the content needs to be relevant, careful consideration should be given to scheduling, and the results should be made available to students.
Motivation is not a concern when scoring academic work using program-wide rubrics, but alignment of course assignments to rubric dimensions is critical.
Scoring student work with the VALUE rubrics cost approximately $20 less per student than administering the CLA+ or CAT tests.
Qualitative and quantitative feedback, provided through departmental reports and debriefs, prompted improvements to courses.
Work needs to continue to increase the adoption of effective practices in assessment.