We examined whether visual and auditory cues to affect in music are integrated preattentively, beyond conscious control. In Experiment 1, participants judged the affective valence of audio-visual recordings of sung intervals. Performers sang major and minor intervals, and each interval was synchronized with the facial expressions used to sing the same ‘happy’ or ‘sad’ interval (congruent condition) or a different interval (incongruent condition). Incongruent conditions thus paired audio and visual dimensions that conveyed conflicting affective connotations (e.g., positive audio, negative facial expression). In the single-task condition, participants judged the affective connotation of the audio-visual performances; in the dual-task condition, they made the same judgments while performing a secondary task. If conscious attention were needed to integrate visual and auditory cues, integration should be reduced in the dual-task condition. Participants were influenced by visual cues when making affective judgments, but the secondary task did not affect their judgments, suggesting that attentional resources were not involved in audio-visual integration. In Experiment 2, the same paradigm was used, except that participants were instructed to base affect ratings on auditory information alone (i.e., to ignore the facial expressions). Results corroborated those of Experiment 1, confirming that audio-visual integration of affective cues in music occurs preattentively.
Title of host publication: Proceedings of the Second International Conference on Music and Gesture: Royal Northern College of Music, Manchester (UK), 20-23 July 2006
Editors: Anthony Gritten, Elaine King
Place of publication: Hull
ISBN (print): 0955332907, 9780955332906
Publication status: Published - 2006