Inducing Self-related Emotional Experience by Modulating the Speaking Voice (SEEMS)
Name of grant holder
Boris Kleber
Title
Associate Professor
Institution
Aarhus University
Amount
DKK 4,237,800
Year
2022
Type of grant
Semper Ardens: Accelerate
What?
It’s not what we say but how we say it that conveys our emotions, or the emotions we wish to transmit. Human culture has developed a highly sophisticated and unique way of communicating thoughts and feelings through words. Yet, underneath those words, we also express our emotions through other channels: our body language, facial expressions, gestures, or the sound of our voice. In fact, the emotional tone of our voice is a powerful, yet often neglected, means of sending and perceiving socially relevant information. It is superimposed on speech sounds and provides essential information about who we are and how we feel. As listeners, we notice and respond to other people’s emotional expressions immediately. Yet when we speak, we also hear and monitor ourselves, both consciously and unconsciously. This process is different from, and more complex than, listening to someone else’s voice, since we are both producer and perceiver. The goals of SEEMS are to elucidate the mechanisms by which we process the sound of our own voice and to investigate what kind of awareness people have of their own emotional expressions.
Why?
Research so far has focused on establishing neurocognitive and neurobiological models of the verbal channel of speech processing. Yet we still know little about the mechanisms that produce non-verbal emotional vocalizations, and about how the parallel feedback from one’s own voice can interact with how we feel. Using an innovative digital audio platform that can modify the emotional tone of people’s voices while they are talking, researchers have provided the first evidence that our emotional state changes with the emotional feedback we perceive from our voice, even when we are unaware that our voice sounds different. SEEMS sets out to elucidate the underlying mechanisms by describing how our own voice can change our brain dynamics and, consequently, our mood.
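Platforms of this kind typically work by transforming acoustic features of the voice, such as pitch, before it reaches the speaker’s ears. As a rough illustration only, and not a description of the project’s actual real-time platform, the following Python sketch shows how a recorded voice could be rendered slightly brighter (“happier”) or flatter (“sadder”) using the librosa library; the file names and semitone values are hypothetical.

```python
# Minimal offline sketch of an emotional-tone transformation (illustrative only;
# the real-time platform used in SEEMS is not described in this text).
import librosa
import soundfile as sf

def shift_emotional_tone(in_path: str, out_path: str, semitones: float) -> None:
    """Pitch-shift a recorded voice: a small upward shift tends to sound
    brighter ('happier'), a downward shift flatter ('sadder')."""
    y, sr = librosa.load(in_path, sr=None)  # keep the native sample rate
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    sf.write(out_path, y_shifted, sr)

# "voice.wav" and the semitone values are placeholder assumptions.
shift_emotional_tone("voice.wav", "voice_happy.wav", semitones=+0.5)
shift_emotional_tone("voice.wav", "voice_sad.wav", semitones=-0.5)
```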
How?
Self-perception theory holds that we draw conclusions about ourselves by interpreting our own behavior. Thus, people may listen to their own voice to learn how they are feeling. SEEMS will measure brain activity as participants read short stories aloud while simultaneously listening, through a headset, to their own altered voice, made to sound happier or sadder. We will first test whether changes in how participants feel correspond to the emotion portrayed in the altered feedback. We will then use artificial intelligence to reveal how the effects of voice feedback on emotional experience map onto changes in brain network activity. The results of this project will have a significant impact on our understanding of how we respond to our own emotional expressions, elucidating the relationship between the expression and experience of emotions and how the brain generates emotions.
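Mapping voice-feedback effects on emotional experience to brain network activity can be framed as a supervised learning problem. The sketch below is a minimal illustration under that assumption, using scikit-learn with randomly generated placeholder data; it is not the project’s actual analysis pipeline, and the feature set, model, and sample sizes are purely illustrative.

```python
# Minimal sketch: predict a change in self-reported mood from brain network
# features with cross-validated regression. All data are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 200, 50                 # e.g. trials x connectivity features (assumed)
X = rng.normal(size=(n_trials, n_features))    # placeholder brain network features
y = 0.5 * X[:, 0] + rng.normal(size=n_trials)  # placeholder mood-change scores

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```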