Student Presentations

Congratulations to the student members selected to present at the plenary session (July 3) of the Y5 ACTOR Workshop. Their abstracts appear below.


Abstracts

Erica Huynh

The role of timbre in source identification for atypically combined excitations and resonators of musical instruments

Timbre plays a crucial role in sound source recognition. Musical instrument sounds carry timbral information about two mechanical components: the excitation, which sets the resonator into vibration, and the resonator, which filters the resulting sound. Excitation–resonator interactions in the physical world are restrictive: strings can be bowed and struck but seldom blown. We used Modalys to combine three excitations (bowing, blowing, striking) with three resonators (string, air column, plate), simulating nine interactions. These interactions are either typical (e.g., bowed string) or atypical (e.g., blown plate).
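As a rough illustration of this 3 × 3 factorial design, here is a minimal Python sketch (not Modalys code; the typicality labels are an assumption based on the examples above and common instrument practice):

```python
from itertools import product

excitations = ["bowing", "blowing", "striking"]
resonators = ["string", "air column", "plate"]

# Combinations found on real instruments (an assumption reflecting
# common practice: strings are bowed/struck, air columns blown, etc.).
typical = {("bowing", "string"), ("striking", "string"),
           ("blowing", "air column"), ("striking", "plate")}

# Enumerate all nine excitation-resonator interactions.
for exc, res in product(excitations, resonators):
    label = "typical" if (exc, res) in typical else "atypical"
    print(f"{exc:9s} x {res:10s} -> {label}")
```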

Experiment 1 involved dissimilarity ratings of stimulus pairs. Experiment 2 entailed explicit categorization of the stimuli's excitations and resonators. Experiment 3 comprised three learning tasks that trained participants on the stimuli's excitation, resonator, or interaction categories; training involved trial and error with corrective feedback, followed by an explicit categorization task based on the trained categories.

Multidimensional scaling revealed a three-dimensional timbre space (Experiment 1). Dimension 1 showed a clear boundary between struck and continuous excitations, Dimension 2 isolated plates, and Dimension 3 further separated strings and air columns. Listeners accurately categorized the excitations and resonators of typical interactions (Experiments 2 & 3) but assimilated atypical interactions to typical ones (Experiment 2). This confusion was reduced after training (Experiment 3).
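For illustration, a minimal sketch of how such a timbre space could be recovered from dissimilarity ratings, using scikit-learn's MDS on invented data (the abstract does not specify the software or the actual ratings):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 9x9 dissimilarity matrix (values invented): one
# row/column per excitation-resonator stimulus, averaged over listeners.
rng = np.random.default_rng(0)
d = rng.uniform(0.2, 1.0, size=(9, 9))
dissim = (d + d.T) / 2          # symmetrize the ratings
np.fill_diagonal(dissim, 0.0)   # a stimulus is identical to itself

# MDS embeds the stimuli in a low-dimensional "timbre space" whose
# distances approximate the rated dissimilarities.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # shape (9, 3): one point per stimulus
print(coords.round(2))
```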

Categorical boundaries between excitations and resonators were thus already formed implicitly and only became explicit once training took place. These studies reveal that excitation and resonator properties can be processed independently. Furthermore, they highlight the role of timbre in an essential process of human behaviour: identifying a sound source.


Simon Jacobsen

Instrument identification and blend in virtual acoustic scenes: a case study of the Tristan prelude

Creating timbre by blending individual instruments is an effective and affective orchestration technique that critically shapes music perception. However, it remains largely unclear how listeners perceive musical instruments in complex sound mixtures presented under realistic acoustical conditions such as concert hall acoustics. In my PhD research project, Wagner's prelude to the opera Tristan und Isolde serves as a case study in which instrument identification performance acts as a proxy for quantifying the perceptual effects of musical and acoustical attributes in musical scene analysis.

Individual, mostly dry instrument tracks taken from OrchPlay will be used to render the complete sound mixture with realistic concert hall acoustics. The toolbox for acoustic scene creation and rendering (TASCAR) will be used to build a reverberant acoustic scenario of a concert hall, and a loudspeaker array installed in an anechoic chamber will provide surround-sound playback, creating a virtual acoustic presentation that places the listener within the scene. Using short excerpts from the score with varying musical (e.g., number of instruments, stratification) and acoustical (e.g., distance between instruments on stage, reverberation time) parameters, the instrument identification performance of human listeners will be quantified in psychoacoustic experiments in which individual target melodies have to be attributed to particular instruments within the mixture. Furthermore, the sound quality and blend properties of different acoustical configurations will be assessed.
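For illustration only, a minimal sketch of how identification performance might be tabulated per condition; the column names and values are hypothetical, as the abstract does not specify the analysis pipeline:

```python
import pandas as pd

# Hypothetical trial log: one row per response in the identification task.
trials = pd.DataFrame({
    "reverb_time_s": [1.2, 1.2, 2.4, 2.4, 1.2, 2.4],
    "n_instruments": [4, 8, 4, 8, 8, 4],
    "target":        ["oboe", "viola", "oboe", "horn", "viola", "horn"],
    "response":      ["oboe", "violin", "oboe", "horn", "viola", "oboe"],
})
trials["correct"] = trials["target"] == trials["response"]

# Proportion correct per acoustical/musical condition.
accuracy = (trials.groupby(["reverb_time_s", "n_instruments"])["correct"]
                  .mean())
print(accuracy)
```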

First results from the virtual acoustic scene will provide a benchmark for future experiments and modeling approaches to musical instrument identification, helping to uncover the acoustical attributes that shape musical scene analysis in realistic multi-source scenarios.


Rebecca Moranis

Choreographing Orchestration: A Novel Method for Analyzing Orchestration through Ballet

Lever du jour from Tableau III of Maurice Ravel's ballet Daphnis et Chloé (1912) exemplifies how orchestration can be used as a primary parameter to convey the image of daybreak (Millard 2021). Ravel uses orchestration-transforming techniques such as alteration, addition, and expansion to gradually depict the climactic moment of the sun appearing from behind the horizon, accompanied by singing birds (Soden 2020, 71).

Daphnis et Chloé was originally choreographed by Michel Fokine for the 1912 premiere in Paris. Several choreographers have reimagined the ballet, including Sir Frederick Ashton (1951, London), Jean-Christophe Maillot (2010, Monaco), and Benjamin Millepied (2014, Paris). That Ravel’s score and Longus’ original novel of the same name continue to be of interest to choreographers speaks to the compelling partnership between the score and the extramusical program (Goddard 1926).

In this presentation, I analyze Ravel's Lever du jour to describe and quantify how ballet choreography may emphasize or contradict the trajectory of the scene as established through orchestration. I use three analytical approaches: 1) score-based analysis describes pitch and register content as well as the introduction and elimination of instruments; 2) audio analysis accounts for sound intensity and timbral descriptors such as spectral centroid and spectral flux; and 3) choreographic analysis of performance videos adapts existing choreomusical notation systems (Leaman 2016) to transcribe and analyze the choreographies by Ashton, Maillot, and Millepied in relation to the orchestration. This presentation aims to begin a conversation about applying quantitative analytical methods to choreography as a visual representation of orchestration.
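As one possible realization of the audio-analysis step, a minimal sketch using librosa (an assumption; the abstract does not name the software, and the file path is a placeholder):

```python
import numpy as np
import librosa

# Load an excerpt (placeholder path; any mono audio file works).
y, sr = librosa.load("lever_du_jour_excerpt.wav", sr=None, mono=True)

# Sound intensity proxy: short-time RMS energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Timbral descriptor 1: spectral centroid ("brightness") per frame.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Timbral descriptor 2: spectral flux, i.e. frame-to-frame change
# in the magnitude spectrum, computed here from the STFT directly.
S = np.abs(librosa.stft(y))
flux = np.sqrt((np.diff(S, axis=1) ** 2).sum(axis=0))

print(rms.mean(), centroid.mean(), flux.mean())
```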


Jason Winikoff

The Voices of Ancestors: Vocal Timbre Descriptors in Zambian Luvale Makishi Masquerade

In Luvale (and related) communities of northwestern Zambia, ancestors live on as makishi: elaborate spirit masquerades. The dozens of makishi are differentiated by their appearance, personality, and actions. Many makishi speak, sing, or make noise, and proper vocal timbre is vital to this performance (Euba 1988). In this presentation, I analyze the timbre of makishi voices by investigating the ways in which they are described. Interviews conducted during extensive on-site field research demonstrate that there is a localized, insider vocabulary used to describe these timbres (Fales 2018; Feld et al. 2004; Wallmark 2018). I organize these words and phrases into Wallmark's (2019) seven groupings of timbre descriptors, revealing that timbre is often discussed in similar ways across cultures. Both my methodology and my focus on descriptors inherently present a listener-centered approach to vocal timbre that extends beyond the vocalizer (Eidsheim 2018, 2019). This not only expands the topic of timbre to include perception but also ensures that audience reception is considered a component of masquerade performance. Although performer agency allows for variation, makishi embodying the same ancestor are still required to adhere to a specific timbral archetype. Following Samples (2018), I argue that the presence of these group-specific descriptors demonstrates the archetype's perceived distinctiveness, and that listeners' ability to map these timbral qualia onto identity (Meizel 2013; Neal 2018) evinces cultural knowledge.


Linglan Zhu

Comparison of perceived and imagined instrumental blend

Timbral blend is fundamental to various musical activities for shaping sounds and musical intentions. Previous studies on blend perception have mostly focused on sounding blend, neglecting the imagined blend made possible by inner hearing. To investigate how imagined blend compares to the perception of heard blend, and whether musical background makes a difference, two groups of participants (musicians and non-musicians; 31 per group) were presented with pairs of short instrumental sounds in unison from 14 different instruments under two experimental conditions. In the first condition, the paired instruments were played sequentially, and participants were instructed to imagine them being played simultaneously and to rate their degree of blend. In the second condition, the pairs were played simultaneously, and participants rated the perceived degree of blend. Results showed significant interaction effects among the type of instrument pair, presentation condition, and musical background. Acoustic modeling and multidimensional scaling of the blend ratings showed both varying and invariant roles of different acoustic features across the two types of blend perception. Imagined blend appears to be more sensitive to differences in brightness and in the richness of high-frequency content between paired sounds. A follow-up experiment on the perception of dissimilarity between instruments, using the same stimuli, provided further evidence that evaluating imagined blend is strongly informed by judging the dissimilarity of the blended instruments. In practice, how the two types of blend differ results from complex interactions involving the specific instruments being blended and listeners' musical backgrounds.
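As a rough illustration of the kind of acoustic modeling described, a minimal sketch regressing blend ratings on the paired instruments' spectral-centroid difference as a brightness proxy (invented data; an illustrative choice, not the study's actual model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-pair data: |spectral centroid difference| in Hz
# between the two instruments, and mean blend ratings on a 0-1 scale.
centroid_diff = rng.uniform(0, 2000, size=40)
imagined = 0.8 - 0.00025 * centroid_diff + rng.normal(0, 0.05, 40)
perceived = 0.8 - 0.00012 * centroid_diff + rng.normal(0, 0.05, 40)

# If imagined blend is more sensitive to brightness differences, its
# slope against centroid difference should be steeper (more negative).
for name, ratings in [("imagined", imagined), ("perceived", perceived)]:
    res = stats.linregress(centroid_diff, ratings)
    print(f"{name}: slope={res.slope:.2e}, r={res.rvalue:.2f}")
```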
