Tools Towards Expressive Interpretation
by Cynthia Tobey

Musical structure is not the only element upon which interpretation depends. Most obviously, interpretation depends on the emotional quality that a performer wishes, consciously or otherwise, to communicate. In an early statement of this idea, C. P. E. Bach (1753) moved away from the predominantly objectively determined Affektenlehre toward a more subjective form of emotional communication (Juslin, 1997). Other research (Bigand, Parncutt, & Lerdahl, 1996; Gabrielsson & Juslin, 1997) has returned to the question of exactly which combinations of performance parameters are used to communicate which emotions.

Interpretation also depends on the dramatic content of a piece of music, where drama can be understood to imply interactions between people or fictional characters. The psychological existence of such characters in music was demonstrated by Watt and Ash (1998). Clarke (1995) attempted to explain dramatic aspects of performance, citing and concurring with Shaffer's (1992) proposal that a musical performance is a narrative in which events are determined by the musical structure, and the personalities of the protagonists are created by the performer through expressive interpretation.

Yet another potentially important influence on musical interpretation may be the body of the musician itself: his or her brain, ears, lungs, lips, and fingers. Parncutt (1998) considered ways in which physical models could be introduced into musical performance. A possible starting point might be the model of temporal integration and integrated energy flux of Todd (1995), a physical model of expression based on the physiology of the auditory system. Music's apparent ability to imply movement, as expressed by a conductor's baton, is another example of a physical model.

Yet a major problem exists: these ideas regarding expression are hard to apply because of the difficulty of defining them and incorporating them systematically into teaching and practice. For the purposes of this article, musical structure is the focus of the relationship between accents and expression. Through this lens, we can consider elements such as variations in timing and dynamics, which make up significant aspects of musical expression.

Accents in Music

Broadly defined, an accent is an important event; one
that “attracts the attention of the listener” (Parncutt, 1998, 391). In both speech and music, it is essential for a clear understanding of the sound that the listener not only correctly decode the individual events (syllables or notes), but also get a feel for their importance in relation to one another. This leads to the process of understanding the underlying structure and meaning. An accent, therefore, is an integral part of musical communication.

For the purposes of this article we will define structural accents as those taken directly from the notated score, whereas performed accents are purposefully and effectively added to the score by a performer. Both have aspects associated with four primary attributes of sound: time, pitch, loudness, and timbre.

Two different explanations from the related music
research of Parncutt (1998) and Terhardt (1998) might be considered, both of which refer to the human auditory environment. First, spectral frequencies and the time intervals between sound events are normally not affected significantly by reflections off objects in the human environment. By contrast, spectral amplitudes (loudness and timbre) are often significantly affected by reflections. The ear seems to have adapted to this situation by becoming more sensitive to frequency and time, and less sensitive to amplitude.

Second, the human auditory environment abounds
with equally-spaced patterns of frequency (complex harmonic tones, such as voiced speech sounds) and time (signals such as the sound of footfalls and heartbeats) (Parncutt, 1994), but not with equally-spaced patterns of sound amplitude or timbre. As a result, pitch and time are the only two musical elements that are typically perceived in relation to other clearly defined musical entities (scales, for example).

To illustrate this point further, consider that the structural distance between the first and third beats of a 4/4 measure is the same as the distance between the second and fourth beats. Similarly, the structural distance between the notes C and G on the piano is exactly the same as the distance between D and A. However, the distance between the dynamics p and mp may not be directly compared with the distance between f and mf. Similarly, the difference in timbre between the flute and the oboe is not precisely comparable with that between the violin and the trumpet. While it seems relatively straightforward to construct musical forms based on loudness and timbre, “structures in pitch and time are more complicated,” according to Parncutt (1994, 151).

Performed accents may be classified in a similar way to prescribed accents. A performer may adjust the duration of an event by changing either its onset or offset time. According to Riemann (1884), performed accents produced by delaying or anticipating onset and offset times are referred to as agogic accents. Delaying the onset of a note relative to the meter heightens expectation, increasing the effect of the impact when the note finally arrives. Delaying the offset relative to the onset of the next note changes the articulation, making the note more legato. Both of these aspects of timing are important expressive strategies, a view that Sloboda's (1983) research also supports.
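The two agogic strategies described above can be sketched in code. The representation below — a note as an (onset, offset) pair in milliseconds, with function names invented for this illustration — is an assumption for the sketch, not anything from the article:

```python
# Hypothetical sketch of the two agogic strategies: delaying an onset,
# and delaying an offset. Times are integer milliseconds.

def delay_onset(note, delay_ms):
    """Delay the note's attack relative to the notated beat,
    heightening expectation before the note arrives."""
    onset, offset = note
    return (onset + delay_ms, offset)

def delay_offset(note, next_onset):
    """Hold the note until the next note begins,
    making the articulation more legato."""
    onset, _ = note
    return (onset, next_onset)

# Two notated quarter notes at 60 bpm (one beat = 1000 ms),
# each played slightly detached by default.
first = (0, 900)
second = (1000, 1900)

first = delay_offset(first, next_onset=second[0])  # (0, 1000): fully legato
second = delay_onset(second, 50)                   # (1050, 1900): agogic delay
```

Note the asymmetry: delaying an offset only changes articulation (the gap before the next note), while delaying an onset shifts the attack itself against the meter.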
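The earlier comparison of structural distances (beats one to three versus two to four, C–G versus D–A) can also be expressed numerically. The encoding below — MIDI note numbers for pitch, beat counts for time, an ordered list for dynamics — is an illustrative assumption, not part of the article:

```python
# Hypothetical illustration: pitch and time distances lie on common scales,
# whereas dynamics form only an ordered list of labels.

# Pitch as MIDI note numbers: distance is a count of semitones.
C4, D4, G4, A4 = 60, 62, 67, 69
assert G4 - C4 == A4 - D4 == 7   # C->G and D->A are both a perfect fifth

# Time as beat numbers in a 4/4 measure: distance is a count of beats.
assert (3 - 1) == (4 - 2)        # beats 1->3 and 2->4 span the same distance

# Dynamics have an order but no measured spacing: p->mp and mf->f are each
# one ordinal step, yet nothing says those steps are equal in loudness.
dynamics = ["pp", "p", "mp", "mf", "f", "ff"]
assert dynamics.index("mp") - dynamics.index("p") == \
       dynamics.index("f") - dynamics.index("mf")  # equal ranks only
```

The point is that subtraction is meaningful for the first two attributes but only rank-based for the third: the "distance" between dynamic markings has no defined unit.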

A musical tone may also be made expressive by manipulating its pitch (changing its intonation), loudness (giving it stress), or timbre (changing its coloration). Each of
