Virtual and augmented reality text environments support self-directed multimodal reading

Kathy A. Mills*, Alinta Brown, Christian Moro

*Corresponding author for this work

Research output: Contribution to journal › Article › Research › peer-review


Abstract

Virtual reality (VR) and augmented reality (AR) are two of the fastest-growing technologies anticipated to peak this decade, yet there is a gap in research on the new multimodal affordances through which teachers can support students’ self-directed interactive reading. This research aimed to understand early adolescents’ experiences of reading multimodal texts containing written and spoken language and 3D imagery in VR and AR text environments. The study applied thematic coding and multimodal interaction analysis to examine how students engaged with reading 3D, multimodal, non-fiction VR and AR texts in educational games. The findings attend to the following reading practices exhibited by learners: (i) connecting new knowledge to the known, (ii) recalling information, (iii) resolving cognitive disequilibrium, (iv) using haptic interactivity to navigate text pathways, and (v) using selective attention to filter information. Students followed varied text pathways and demonstrated cognitive disequilibrium and its resolution when responding to multilayered attentional cues, encountering multiple cuing systems in VR and AR texts that use different navigation aids from those offered in books. Reading these multimodal texts involved attending to several modes while backgrounding others, afforded extensive haptic interactivity, and required selective attention in extended reality environments.
Original language: English
Pages (from-to): 1-20
Number of pages: 20
Journal: Interactive Learning Environments
DOIs
Publication status: Published - 26 Mar 2025
