Since 2006, Stina Hasse has used practice-based research to explore the politics and aesthetics of vocal expression, both as grounded in bodily processes and as generated electronically and synthetically.
Below is a selection of projects that make this exploration audible: voices recorded in anechoic and reverberant chambers, generated with interactive sensor technologies, robots, and machine learning, and presented in surround-sound installations and museum events.
The research project Voice as a Matter of Design: a Framework for Novel Vocal Imaginaries critically explores the affective and sociocultural implications of human-machine configurations at a time when voice has become a matter of design.
Voice is both a matter of expression and of being heard, and connects deeply to feelings of intimacy, identity, sociality and performativity. Voices are complex, variable and diverse; the minute nuances of tone of voice influence the affective tonality and interpretation of what and how we are trying to communicate in a sociocultural context.
Current, primarily Western synthetic voice designs often present us with ‘natural’ and heteronormative vocal stereotypes that do not take vocal diversity into account. To facilitate a broader spectrum of vocal expressions, we explore vocal imaginaries that operate beyond normative vocal stereotypes and develop a framework for pluralistic synthetic voice design.
The research project is supported by the Independent Research Fund Denmark (DFF-Research Project 1, 2022–2025) and led by Associate Professor Jonas Fritsch and Assistant Professor Stina Marie Hasse Jørgensen.
What are the politics and aesthetics of synthesized voices? What happens to our listening experiences of voices when the voice is synthesized? What notions of identity, personhood and will do we ascribe to the synthesized voices talking to us?
Synthetic voices are artificial voices generated by algorithms in computers. Most of the synthesized voices we hear in our everyday life have one vocal identity.
In the project [multi’vocal] we question the aesthetic as well as representational modes of the synthesized voices and ask: since voices from machines are not limited to a single vocal identity, then why do most synthesized voices have only one gender, one age, and one accent?
The aim of [multi’vocal] is to bring forth reflections on how synthesized voices affect us and create new listening experiences and relationships between humans and machines.
The [multi’vocal] collective was initiated in 2015 and brings together these amazing people: Frederik Tollund Juutilainen, Mads Pelt, Alice Emily Baird, Nina Cecilie Højholdt, and Stina Hasse Jørgensen.
Voice as Function (2015) An exploration of speech synthesis in terms of personification, representation and translation. Below is a test of the speech synthesis used in Google Translate, reading an excerpt from the Ursonate by Kurt Schwitters (1922–1932).
Voice as Function was presented at a PhD seminar, Kunsthal Aarhus, 2015.
The Robot at the Museum The Robot at the Museum (2015–2016) investigated the listening relations we have to the synthetic voices of social robots in a museum context. What kind of vocal engagements does the robot create? Specifically, the project experimented with the voice of the humanoid robot NAO in the role of a museum tour guide at Medical Museion in Copenhagen, Denmark.
The project was made in collaboration with Oliver Alexander Tafdrup, PhD Fellow at Future Technology, Culture & Learning, Aarhus University and Stine Harrekilde, MA at Future Technology, Culture & Learning, Aarhus University. The project was realized in collaboration with Medical Museion, University of Copenhagen. Read, listen and see more about the project here.
Move/Bevæg Dig Move/Bevæg Dig (2011–2012) focuses on the listener’s interaction with and experience of the tone of voice and words as musical expressions, in relation to the body and the surrounding space. The hardware consists of a pair of headphones, an Arduino Nano and two ultrasonic distance sensors. The software is formed by a phase vocoder Max MSP patch, originally created by Dan Trueman, that transforms the data sent from the Arduino. The interactive sound installation is part of a research project investigating the potential of interactive sound installations.
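The core mapping in such a setup, from ultrasonic echo time to distance to a sound-processing parameter, can be sketched in a few lines. This is a hypothetical illustration, not the actual patch: the sensor model (an HC-SR04-style echo time in microseconds), the distance range, and the idea of mapping listener distance to a phase-vocoder time-stretch factor are all assumptions for the sake of the example.

```python
# Hypothetical sketch of a sensor-to-sound mapping as in Move/Bevæg Dig.
# Assumes an HC-SR04-style ultrasonic sensor reporting echo round-trip time,
# and a phase-vocoder time-stretch factor driven by listener distance.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(echo_us: float) -> float:
    """Convert a round-trip echo time (microseconds) to distance in cm.
    The sound travels to the listener and back, hence the division by 2."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def distance_to_stretch(distance_cm: float,
                        near_cm: float = 20.0,
                        far_cm: float = 200.0) -> float:
    """Map listener distance to a time-stretch factor: 1.0 (normal speed)
    up close, rising linearly to 4.0 (slowed, smeared) at the far edge.
    The range and curve are illustrative choices, not the original design."""
    clamped = max(near_cm, min(far_cm, distance_cm))
    t = (clamped - near_cm) / (far_cm - near_cm)
    return 1.0 + 3.0 * t

if __name__ == "__main__":
    for echo_us in (1200.0, 6000.0, 12000.0):
        d = echo_to_distance_cm(echo_us)
        print(f"echo {echo_us:.0f} us -> {d:.1f} cm "
              f"-> stretch {distance_to_stretch(d):.2f}")
```

In the installation itself, the Arduino would stream the raw sensor readings over serial to Max MSP, where the patch applies the vocoder transformation; the function above only stands in for the shape of that mapping.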
The Language Tree The Language Tree is a surround sound installation presenting a conversation between different languages on what wisdom is. The paralinguistics of the voices are investigated: how they whisper, scream, overlap, resonate and dissonate in ways that transform the meaning of the words into a cacophonous paralinguistic sonic display of the voices and of the recording technology used in the piece.
The Language Tree was commissioned by the Engerom Library, University of Copenhagen, in 2010. The excerpt shows the first part of the 30-minute installation.
I am Sitting in a Different Room
I am Sitting in a Different Room (2010) was recorded in 2008 in an anechoic chamber at the Danish Technical University, Department of Acoustics, Denmark.
Through a reappropriation of Alvin Lucier's I am Sitting in a Room (1970) in an anechoic chamber, new meanings of the work appear. The reappropriation investigates the process of playback and re-recording in the anechoic chamber and how this transforms the character of the voice: not through the space, but through the technology. This makes the recording device, normally almost inaudible, audible. Read more about I am Sitting in a Different Room in the article 'Doing It Wrong' by Douglas Repetto, published in Frontiers of Engineering. Read more about the whole project in the article 'I am Sitting in a Room - from a listener's perspective' in Body, Space & Technology Journal.