Music is scaled, while speech is not: A cross-cultural analysis
Abstract
Music is well known to be based on sets of discrete pitches that are combined to form melodies. In contrast, there is no evidence that speech is organized into stable tonal structures analogous to musical scales. In the current study, we developed a new computational method for measuring what we call the “scaledness” of an acoustic sample and applied it to three cross-cultural ethnographic corpora of speech, song, and/or instrumental music (n = 1,696 samples). The results confirmed the established notion that music is significantly more scaled than speech, but they also revealed some novel findings. First, highly prosodic speech – such as a mother talking to a baby – was no more scaled than regular speech, which contradicts intuitive notions that prosodic speech is more “tonal” than regular speech. Second, instrumental music was far more scaled than vocal music, in keeping with the observation that the voice is highly imprecise at pitch production. Finally, singing style had a significant impact on the scaledness of song, creating a spectrum from chanted styles to more melodious styles. Overall, the results reveal that speech shows minimal scaledness no matter how it is uttered, and that music’s scaledness varies widely depending on its manner of production.
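To make the idea of “scaledness” concrete, the sketch below shows one simple way such a measure could in principle be built: fold a pitch track into a single octave and ask how strongly it clusters around a small set of discrete pitch classes. This is a hypothetical illustration only, not the authors’ actual method; the function name, the histogram resolution, and the entropy-based score are all assumptions.

```python
# Hypothetical illustration of a "scaledness"-style measure (not the
# authors' method): a pitch track that sticks to a few discrete pitch
# classes yields a peaky octave-folded histogram (low entropy), while
# smoothly gliding speech-like pitch spreads out (high entropy).
import numpy as np

def scaledness_score(f0_hz, n_bins=120):
    """Return a 0-1 score: near 1 if pitches fall on a few discrete
    classes, near 0 if they spread uniformly across the octave."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    f0_hz = f0_hz[f0_hz > 0]                      # drop unvoiced frames
    cents = 1200.0 * np.log2(f0_hz / 440.0)       # pitch in cents re A4
    pitch_class = np.mod(cents, 1200.0)           # fold into one octave
    hist, _ = np.histogram(pitch_class, bins=n_bins, range=(0.0, 1200.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))             # low entropy = peaky histogram
    return 1.0 - entropy / np.log2(n_bins)        # normalise to [0, 1]

# Toy comparison: a "sung" track snapped to major-scale degrees (with
# slight vocal jitter) versus a "spoken" track drifting smoothly over
# the same range.
rng = np.random.default_rng(0)
scale = 440.0 * 2.0 ** (np.array([0, 2, 4, 5, 7, 9, 11]) / 12.0)
sung = rng.choice(scale, size=2000) * (1.0 + rng.normal(0.0, 0.003, 2000))
spoken = 180.0 * 2.0 ** rng.uniform(-0.5, 0.5, 2000)
print(scaledness_score(sung), scaledness_score(spoken))
```

Under these toy assumptions the discretized “sung” track scores markedly higher than the gliding “spoken” track, which is the qualitative contrast the abstract describes, though the actual measure and corpora analyzed in the study are of course far richer than this sketch.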