Think Tanks 2019

Overview of IPEM's Think Tanks 2019.


Time-evaluation model for live musical interaction with multiple performers
Paolo Laghetto, IPEM UGent

To assess the quality of a music performance, many aspects have to be taken into account: timing, pitch, loudness, articulation and other parameters, creating a complex environment of different variables to be evaluated. With this project we propose a new time-based model that focuses on note duration, excluding the other parameters from the analysis. The hocket rhythmic technique is particularly suitable here, since it creates a melody shared between two or more performers that requires mutual action. In this way we can create a dynamic data-analysis model that takes into account only the IOI (Inter-Onset Interval). With this approach it is possible to compute different types of errors to assess the quality of a performance, and also to extract features about the interaction between the performers.
Since the framework is also able to work in real time, it can compute the errors of the different performers while they are singing or playing. In the meantime, it can give personalised bio-feedback to each of the players, informing them about their timing. Moreover, the bio-feedback system can be used as a technique to improve the performers' timing precision, giving a reward or a penalty depending on how well they are performing. The error measurements can also help to uncover habitual timing mistakes that the performers tend to make.
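As a purely illustrative sketch of the IOI idea (not IPEM's actual implementation; the function name, onset times and target intervals below are invented for the example), timing errors can be computed by comparing observed inter-onset intervals with the intervals the score prescribes:

```python
# Hypothetical sketch of an IOI-based timing-error measure.
def ioi_errors(onsets, target_iois):
    """Compare observed inter-onset intervals (differences between
    successive note onsets, in seconds) with target intervals."""
    observed = [b - a for a, b in zip(onsets, onsets[1:])]
    # Positive error: the interval was too long (late onset);
    # negative error: too short (early onset).
    return [obs - tgt for obs, tgt in zip(observed, target_iois)]

# A performer aiming at a steady 0.5 s pulse:
errors = ioi_errors([0.00, 0.52, 1.01, 1.55], [0.5, 0.5, 0.5])
```

In a real-time setting, a measure like this could be updated on every detected onset, so that each performer's running error feeds the bio-feedback loop described above.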
I will go into more detail about the new time-based model during the presentation and, at the end, there will be a demonstration of the real-time system in the ASIL.

Paolo Laghetto was born in 1994 in Vicenza, a city near Venice, Italy. He is a master's student in Computer Engineering at the University of Padova, where he previously obtained a bachelor's degree in Information Engineering in 2017. To make the most of the opportunities the university offers, he decided to join the Erasmus programme and work on his master's thesis abroad. Since music has always been part of his life, with him playing the guitar in multiple bands, he decided to join the IPEM research centre. During his internship, which he started in March, he worked on the so-called Homeostasis project together with Jeska Buhmann, supervised by Marc Leman. The aim of the study, started in 2018, is to create a new model to assess music performance, taking into account the length of the notes the performers are playing.
He has been passionate about technology since he was young, but only in recent years did he start reviewing tech products on his YouTube channel, Willpitt. This led him to discover the artistic world of photography and video making, which still plays a big part in his life.

Friday, July 19th at 14.00 Room De Papegaai, De Krook Gent

Designing (With) Machine Learning for Interactive Music Devices
Hugo Scurto, IRCAM Paris

Machine learning for interactive music devices is often designed from a machine-centred perspective, typically focusing on a given computational task, such as automatic music generation, and borrowing methods from computer science to attain optimal machine performance. In this talk, I will present my doctoral work, which adopts a human-centred perspective to design (with) machine learning for interactive music devices, carried out in collaboration with IRCAM's Sound Music Movement Interaction group under the supervision of Frédéric Bevilacqua. We focused on four embodied musical activities: motion-sound mapping, sonic exploration, parameter space exploration, and collective mobile music-making. We borrowed four methods from the fields of design and human-computer interaction to develop a machine learning algorithm for each of these musical activities. We implemented the algorithms in prototype music devices that we applied in various interactive contexts, ranging from lab-based experiments to public workshops, exhibitions, and performances. Our observations allow us to compare different research approaches to the development of machine learning in interactive music devices, and provide a basis for a design approach to music devices that rethinks machine learning in terms of human goals and creative intentions.

Hugo Scurto is a PhD candidate in the Sound Music Movement Interaction team at IRCAM, supervised by Frédéric Bevilacqua. His research aims at designing new modes of interaction between humans and artificial intelligence in the context of music creation. More specifically, he focuses on the human exploration process at stake in creation, in order to develop a computational "co-exploration" system able to assist humans in their own creative process.
In parallel with his scientific work, he maintains an artistic practice centred around music and new media. He is interested in the aesthetic, disciplinary, and societal decompartmentalisation engendered by new uses of digital technology. He has created or co-created several pieces, performances and installations (most recently at GMEM in 2017), and actively takes part in educational activities as a member of Lutherie Urbaine.
He studied science at École normale supérieure Paris-Saclay and music at the Cité de la Musique de Marseille. He also received the ATIAM Master's degree in 2015, and was a Visiting Researcher at Goldsmiths, University of London in 2016.

Friday, June 21st at 13.00 Room De Papegaai, De Krook Gent

Andrea Vaghi

Sonification of body movements is one of the most important research areas at IPEM, aimed at finding new forms of expression and interaction between the different actors in the musical performance ecosystem (composers, performers, instruments, audience).

Inspired by the D-Jogger technology, which aims at synchronising music playback with the user's running gait, the idea behind this project is to implement a real-time algorithm for fine-tuned low-frequency detection of 1D body movements and to develop an effective strategy for 3D audio sonification, exploiting IPEM's expertise in the domain. The goal of the system is to provide meaningful auditory bio-feedback synchronised with the rhythm of the movement, with a particular focus on the use case of respiration. Specifically, a large portion of my time has been devoted to implementing a custom frequency-domain analysis technique involving a combination of Fourier and wavelet transforms, and to finding an efficient way to keep the auditory feedback synchronised with the breathing cycle. The latter was achieved by exploiting the Kuramoto model for coupled oscillators. During the presentation, I will explain in detail the main features of the system, the design choices and the theory behind them.
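To give a flavour of the Kuramoto idea (this is a minimal hypothetical sketch, not the project's actual implementation; the function name, coupling constant and time step are invented for illustration), a feedback oscillator can be pulled towards a measured breathing phase by one simple update rule:

```python
import math

def kuramoto_step(theta, omega, theta_breath, coupling, dt):
    """One Euler step of a single Kuramoto oscillator coupled to an
    external phase (here, the current phase of the breathing cycle)."""
    dtheta = omega + coupling * math.sin(theta_breath - theta)
    return (theta + dtheta * dt) % (2.0 * math.pi)

# With zero natural frequency, the oscillator's phase is gradually
# pulled towards the external phase over successive steps:
theta = 0.0
for _ in range(2000):
    theta = kuramoto_step(theta, omega=0.0, theta_breath=1.0,
                          coupling=1.0, dt=0.01)
```

In a real system the external phase would itself advance with each breath, and the coupling strength trades off lock-in speed against robustness to noisy phase estimates.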

Wednesday, May 29th at 10.00 Room De Papegaai, De Krook Gent

Children's Representational Strategies based on verbal versus bodily interactions with music: an intervention-based study
Sandra Fortuna

On the assumption that bodily engagement with music may affect children's graphic representations and their verbal explanations of them, a comparative study was conducted in which primary school children (n = 52; age 9-10) without any formal music education participated in a verbal-based versus a movement-based intervention. Before and after the interventions, as pre- and post-test, the children made a graphical representation of the music and provided a verbal explanation of their own graphic description. Data were collected, analysed and compared according to the categories of global versus differentiated notations, in which one or more sonic parameters of the music are described. A McNemar test revealed a significant increase in differentiated representations from pre-test to post-test among children involved in bodily musical interaction with a focus on the dynamic and temporal organisation of the piece. Further analysis and statistical tests on the verbal explanations revealed a significant change in semantic themes, time dimension, and the number of musical parameters identified by the children involved in body movement.

Wednesday, May 29th at 10.00 Room De Papegaai, De Krook Gent

Why people move to music: A dynamical systems account of dance
Benjamin G. Schultz, Basic & Applied NeuroDynamics Laboratory, Maastricht University

The human auditory system is closely linked to the motor system, and changes in certain acoustic features (e.g., sudden amplitude increases) can evoke strong motor responses, such as the startle response. From an evolutionary perspective, responding to salient acoustic features that "stand out" in an auditory scene is important because it allows humans to respond to potentially dangerous entities. It is possible this same response mechanism has led to dancing behaviour in humans; subtle changes in acoustic features that once signalled danger now evoke motor responses in patterned ways and become embodied, resulting in movement sequences such as dancing. I present results from two experiments that test the hypothesis that the motor system resonates with salient acoustic features and that these same features lead to overt movement. In the first experiment, we used surface electromyography to examine how various acoustic features evoke involuntary motor responses while listening to music without movement. We further tested whether these features were perceived as salient using continuous salience ratings. Results showed that acoustic features corresponding to amplitude, intensity, energy, and dissonance evoked involuntary motor responses and were also perceived as salient. These findings support the hypothesis that the motor system resonates with salient acoustic features. In the second experiment, we used motion capture to record movement to music and examine how overt movements correspond with these acoustic features. Three different relationships were identified: 1) foot movements corresponded to measures of beat onsets, 2) hand movements corresponded to changes in amplitude, intensity, and energy, and 3) head movements related to changes in dissonance. Overall, these results support the hypothesis that the motor system resonates with salient acoustic features and that these are the same features to which people move.
Findings are discussed from a dynamical systems perspective where forms of bottom-up perception-action relationships may be the basis for complex motor sequences like dancing.

FERARI: Feedback system for a more Engaging, Rewarding and Activating Rhythmic Interaction
Jorg de Winne - IPEM UGent

The project aims to find ways to improve our interaction with digital environments by stimulating the brain with rhythmic sound. The envisioned tool will make the interaction more activating, engaging and rewarding by automatically adapting the rhythmic sound based on signals measured directly from the brain. In a first step, the physiological signals, called biomarkers, that are best suited to measure how people perceive the activating, engaging and rewarding aspects of an interaction need to be identified. EEG will be used to measure the electrical activity of the brain, and fNIRS will be used to measure brain activity based on the amount of oxygen in the blood. In addition, the pupil will be studied to investigate whether the interaction is enjoyed or not. In a second step, the measurements need to be related to the details of the rhythmic sound. It then becomes possible to directly influence the signals that we measure in the brain by changing the rhythmic sound. A third step adds person-to-person interaction to the system. The goal of this step is to learn what makes this interaction activating, engaging and rewarding, and how it differs from the first step. This makes it possible to mimic human-human interaction in the envisioned tool. Finally, a proof of concept needs to be developed, supported by existing models, the results achieved and a small user experiment.

Friday, May 10th at 13.30 Room De Papegaai, De Krook Gent

A new tool for analysing signals in the time domain illustrated with cardio-respiratory examples
Leon Van Noorden - IPEM UGent

Traditionally, signal analysis has depended heavily on Fourier analysis, i.e. on a transformation to the frequency domain. However, in many situations it is advantageous to stay in the time domain; an example of this is the application of circular statistics. Tools for analysing signals in the time domain are developing rapidly in mathematics and the experimental sciences, and are becoming available in MATLAB. An example will be given of the disentanglement of heartbeat and respiration signals with the MATLAB function findsignal.
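As a hedged illustration of the circular-statistics idea mentioned above (not the talk's actual analysis; the function name and phase values are invented for the example), the mean resultant vector summarises how tightly events such as heartbeats cluster at a given phase of the respiratory cycle, entirely in the time domain:

```python
import cmath

def mean_resultant(phases):
    """Mean direction (radians) and resultant length R of circular
    data. R close to 1 means tight phase clustering; close to 0,
    no preferred phase (no coupling)."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return cmath.phase(z), abs(z)

# Heartbeats clustered around phase 0 of the breathing cycle:
direction, R = mean_resultant([-0.2, -0.1, 0.0, 0.1, 0.2])
```

Because every quantity is a phase or an average of unit vectors, no transformation of the whole signal to the frequency domain is needed.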

Friday, May 17th at 13.30 Room De Papegaai, De Krook Gent

Dr. Federica Bressan - Audio preservation, Interactive art, Technoculture

A critical overview of 10 years of research work between technology and culture: Audio preservation, Interactive art, Technoculture
Physiological computing applied to the arts is a new genre of scholarship and practice focusing on the senses, art, design, and new technologies. My current work (Fulbright grant, 2020) consists of building the first digital archive to store information about sensory mapping strategies and the technology (sensors) used. This archive ideally contains information about two parallel timelines, showing the co-evolution of technology and culture. This co-evolution is something I have become increasingly interested in, and it motivated me to start the podcast Technoculture in late 2018 (available on iTunes, Spotify, and all major podcast platforms). The work on physiological computing applied to the arts builds on my previous experience with modelling interaction in installation-art settings (Marie Curie grant, 2017-2019). During my presentation I will place these topics in the context of my academic work over the past 10 years, discussing its multidisciplinary aspects and potential future evolutions.

April 26th at 13.30 Room De Papegaai, De Krook Gent.

Prof. dr. Noemi Grinspun - Rhythm synchronization, cognitive functions and oscillatory activity during resting state

The aim of this research project is to understand how rhythmic synchronization abilities correlate with executive functions (mainly attention and cognitive flexibility), and how they correlate with the characteristics of oscillatory activity during the resting state.

Despite all the advances in the area, it is still unknown whether rhythmic abilities, such as adapting one's movement to a change in the beat of the music, correlate with oscillatory activity during the resting state, with cognitive flexibility, and with musical abilities measured with the Audiation Test (PMMA).

Dr. Noemí Grinspun is a professor and researcher at the Movement and Cognition Lab of the Faculty of Arts and Physical Education, Metropolitan University of Educational Sciences, Santiago, Chile, and a member of GIEM (Grupo de Investigación en Experiencia Musical). She holds a PhD in Biomedical Sciences (Neuroscience), an MA in Neuroscience, a music teaching degree (trombone, ensemble direction) and a BSc in Physical Therapy. Her research focuses on music and movement interaction during learning, as well as on rhythm synchronization abilities, executive functions (cognitive flexibility, attention) and neural dynamics. She has participated in several research projects on cognition, movement and neuroscience, and has been an invited speaker at conferences on music, neuroscience and cognition.

March 29th, 14.30h, Room De Papegaai, De Krook Gent.

Dr. Melissa Bremmer - Perspectives on a culturally diverse music curriculum

Widening learner diversity in music education is not easy and requires an active policy. One way to increase participation is to develop a culturally diverse music curriculum, which offers learners the opportunity to experience and express their personal musical heritage and to feel recognised and included. For music teachers, however, learning to bring traditional music into music education is a challenge. Not only do teachers have to learn the performative activities of traditional musics, they also have to grasp the transmission processes of these traditions. In this lecture, the views of Schippers & Campbell and of Brinner on the transmission of traditional musics will be presented, along with how they connect to an embodied-cognition approach to music. Furthermore, the dilemmas teachers are confronted with when teaching traditional musics will be discussed, as well as the hidden messages a music curriculum can convey when it does, or does not, represent culturally diverse music activities.

Panel Members: Prof. dr. Kris Rutten, dr. An De Bisschop

March 1st, 13.30h, Room De Blauwe Vogel, De Krook Gent.


Prof. Dr. Sylvie Nozaradan - The neuroscience of musical entrainment: insights from EEG frequency-tagging

Entrainment to music is a culturally widespread activity with increasingly recognized pro-social and therapeutic effects. Music powerfully compels us to move to the musical rhythm, showcasing the remarkable ability of humans to perceive and produce rhythmic inputs. There is a wave of current research exploring the neural bases of this rhythmic entrainment in both human and non-human animals, in evolutionary terms and in development. One way to investigate these neural dynamics is frequency-tagging, an approach recently developed to capture the neural processing of musical rhythm with surface or intracerebral electroencephalography (EEG).
Recent experiments conducted with healthy and brain-damaged adults, as well as with infants, exposed to either simplified rhythmic patterns or naturalistic music, will be presented. The results show that, although the auditory system has a remarkable ability to synchronize to rhythmic input, the neural network responding to rhythm shapes the rhythmic input by amplifying specific frequencies. This selective shaping seems to correlate with perception and with the individual ability to move in time with musical rhythms. These results may lead to a new understanding of the neural bases of rhythmic entrainment.
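To make the frequency-tagging readout concrete, here is a hedged, self-contained sketch (synthetic signal, invented function name and parameters, not real EEG analysis code): the neural response at a "tagged" frequency is read out as the amplitude of a single discrete-Fourier-transform bin of the recorded signal.

```python
import cmath
import math

def amplitude_at(signal, fs, freq):
    """Amplitude of `signal` (sampled at fs Hz) at `freq` Hz,
    computed via a single-bin discrete Fourier transform."""
    z = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
            for k, x in enumerate(signal))
    return 2.0 * abs(z) / len(signal)

# Synthetic 10 s "EEG" trace containing a 2.4 Hz rhythmic component:
fs = 100.0
sig = [math.sin(2 * math.pi * 2.4 * k / fs) for k in range(1000)]
# amplitude_at(sig, fs, 2.4) is large; amplitude_at(sig, fs, 5.0) is near zero.
```

In real EEG analyses the same readout would be taken from spectra averaged across trials and electrodes, so that the tagged frequency stands out against the noise in neighbouring bins.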

Sylvie Nozaradan, MD PhD, is an Associate Professor at the Institute of Neuroscience of UCLouvain, Belgium since September 2018. The same year, she was awarded a Starting Grant from the European Research Council to develop her research on rhythm and the brain. Previously, she received an ARC Discovery Early Career Researcher Award from the Australian Research Council to develop her research independently for three years at the MARCS Institute, Western Sydney University (Australia). She has a double PhD degree in neuroscience from UCLouvain and the BRAMS, Montreal (Canada), for her work on the neural entrainment to musical rhythm. She has a dual background in music (Master in piano, Conservatoire Royal de Bruxelles, Belgium) and science (medical doctor, UCLouvain).

Tim Duerinck and Tim Vets

Tim Duerinck presents his ongoing research on new materials for soundboards of the violin and cello. This research is supported by IPEM, Materials Science (UGent) and the department of instrument making (School of Arts Gent). Together, we take a look at the research methods and the preliminary results from both mechanical/acoustical measurements and tests with listeners and musicians. Can listeners and musicians tell the difference between an instrument made from wood and one made from another material? Is there a link between what we can measure and how people perceive aspects related to the quality of violins? What does the future hold, and perhaps most importantly: how should we investigate further?

Tim Vets will provide a status overview of his PhD work at IPEM, as well as of his collaborations with Hans Heymans and Roeland Van Noten, touching upon interaction design, ergonomics and organology for a musical composition toolchain.
He will also discuss how these activities are contextualised within his artistic research, as well as the underlying motivations and the trajectory that led to his current search for a unifying narrative within contrasting artistic practices.

January 17th, 13.30. Room De Papegaai, De Krook Ghent.