Decoding emotion in music and speech: A developmental perspective

William Thompson, E. Glenn Schellenberg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

Abstract

Background: In some instances, young children's musical knowledge seems relatively sophisticated. In other instances, their knowledge seems immature or completely absent. These apparent discrepancies are likely a consequence of the particular experimental methods used, and of whether listeners are asked to make emotional or cognitive judgments.

Aims: The goals were to chart the development of sensitivity to emotional cues in speech and music, and to compare such development with age-related changes in mental representations of structural aspects of familiar songs.

Method: Musically untrained adults and children 4, 5, and 8 years of age (ns = 40) were tested on eight different tasks. Four tasks involved judgments of emotion conveyed by music or by the musical aspects of speech. One task required listeners to identify whether emotionally unambiguous instrumental music sounded happy or sad. Two other tasks asked whether semantically neutral speech with unambiguous prosody sounded happy or sad, or angry or afraid. A fourth task required listeners to decode the prosody of speech with conflicting semantic cues (e.g., "all the kids at school make fun of me" spoken in a happy tone of voice). In the four other tasks, listeners judged whether a familiar melody ("Twinkle Twinkle" or "Old MacDonald") was performed correctly or incorrectly (i.e., with one tone mistuned by a semitone), or whether the melodies were harmonized correctly or incorrectly (e.g., the leading tone harmonized with a major III chord rather than V).

Results: Eight-year-olds performed at adult levels (over 90% correct) on each of the four emotion tasks. Performance of the younger children was affected negatively by the presence of semantically neutral words, and even more so by the presence of conflicting semantics. The younger children also found angry-afraid comparisons more difficult than happy-sad comparisons. On the harmony tasks, 4-year-olds performed at chance levels, and even 8-year-olds were below 70% correct. Age-related improvements on the melody tasks were more pronounced, but not nearly as rapid as improvements on the emotion tasks.

Conclusions: Development is relatively rapid for culture-general processes, such as decoding the emotional valence of temporal cues. Sensitivity to culture-specific musical cues develops at a slower rate.
Original language: English
Title of host publication: Abstracts - 9th International Conference on Music Perception and Cognition. 6th Triennial Conference of the European Society for the Cognitive Sciences of Music
Editors: Mario Baroni, Anna Rita Addessi, Roberto Caterina, Marco Costa
Publisher: Bononia University Press
Pages: 225-226
Number of pages: 2
ISBN (Electronic): 88-7395-155-4
Publication status: Published - Aug 2006
Externally published: Yes
