Think Tanks 2019

Overview of IPEM's Think Tanks 2019

Federico Avanzini - Music, informatics, and society: current research at the LIM


In this talk I will present some recent and current research activities at the Lab of Music Informatics (LIM), University of Milano. The focus will be on three application domains with a strong societal impact. The first one is preservation and exploitation of musical and acoustic cultural heritage: digitization, restoration, and access to music archives as well as musical artifacts. The second one is assistive technologies: current projects at LIM are mainly concerned with sonification approaches to support navigation of people with visual impairments, and musical instruments designed for people with motor impairments. The third application domain is technology-augmented music education: activities at LIM focus on embodied learning approaches to aid the teaching of music harmony, music genres, and so on.


Federico Avanzini is an Associate Professor at the University of Milano. He received his piano diploma from the Conservatory of Milano in 1995 and his Ph.D. in computer science from the University of Padova in 2002. His research interests are in Sound and Music Computing (SMC) and mainly concern algorithms for sound synthesis and processing, non-speech sound in human-computer interfaces, and multimodal interaction. Prof. Avanzini has been a key researcher and principal investigator in several national and international research projects. He has authored more than 150 publications in peer-reviewed international journals and conferences, and has chaired and served on several program and editorial committees. He is currently Associate Editor for the international journal Acta Acustica united with Acustica, Conference Coordinator on the International SMC Board, and President of the Italian Music Informatics Association.

Friday October 29th at 10.30PM, Room De Papegaai, De Krook Gent

Jorg De Winne ∣ Is visual attention supported by auditory rhythm, and how is this reflected in several biomarkers? An update on current progress.


In May I presented the general overview of the FERARI project as it was defined in my FWO application. Although the FWO proposal was rejected, the core idea of the project is still on track. Because the general framework was presented before the summer, I will zoom in on the progress made in the first work package, where we investigate whether visual attention is supported or improved by auditory rhythm and how this is reflected in several biomarkers. This identification of biomarkers will help to select those that best reflect the engaging, rewarding and activating aspects of interaction. Based on the literature, discussion, practical considerations and the preliminary results of pilots with both the EEG and fNIRS equipment, several changes were made to the experiment.
I will also briefly update you on one or two interesting side tracks I have been working on, which will probably be explained in more depth during one of the coming Think Tanks.


Friday October 18th at 1.30PM, Room De Papegaai, De Krook Gent

 

Mattia Rosso ∣ Coordination dynamics in rhythmic interactions


The aim of my research is to investigate the cerebral organization underlying joint action in rhythmic interactions. Here I propose a new paradigm meant to disclose a layout of the preferred coordination patterns in dyads engaged in a rhythmic task, and a model to describe the organizational principles at a millisecond scale (Heggli, 2019) underlying the emergent patterns. Simultaneous electroencephalography (EEG) recording will be implemented in the setup, followed by a MoCap version of the same paradigm to investigate continuous full-body movements and introduce the spatial dimension in the experimental design.
The first stage should return a profile of the dyads' intrinsic dynamics at the behavioral and neurophysiological level in a healthy population, tracking their evolution over time and their recurrence structure.
The second stage would consist of the development of music-based biofeedback strategies to intervene on the intrinsic dynamics, assessing the reshaping due to competing and cooperative learning processes (Kelso, 1995).
The third stage will extend investigation and intervention to the clinical population.
The end goal is to develop music-based biofeedback interactive scenarios grounded in this knowledge, to improve social skills and well-being in elderly patients.

Friday October 18th at 1.30PM, Room De Papegaai, De Krook Gent

Lousin Moumdjian ∣ Identifying attentional components and gait dynamic correlates when entraining steps to different tempi in persons with MS with cognitive impairments and in cerebellar patients, and its application in an individualized biofeedback intervention to target multimodal functional training.


This is work in progress! It is a first attempt to put forth the project I would use for an FWO application. The proposal is based on the sensorimotor synchronization work that I am currently doing in my PhD project, where we conceptualized a difference in the cognitive processes needed to continue intended entrainment past a certain motor threshold marked by the +4% tempo. In this project, I would like to extend this conceptual proposal into an experimental one, to identify these changes [attention], and additionally add a component of gait dynamics. Once these are identified, my aim is to move towards a larger-scale randomised controlled trial, to investigate entrainment to tailor-made biofeedback applications (based on the networks identified in the first part) to assess coordination, balance and gait dynamics in two neurological groups: a) Multiple Sclerosis patients with cognitive impairments [attention, information processing and executive functioning], as these impairments present differently than in dementia and Alzheimer's disease, and b) cerebellar patients [across different pathologies, e.g. MS cerebellar patients, patients with cerebellar ataxia due to stroke lesions, or pure cerebellar lesions such as SCA6].

Friday October 18th at 1.30PM, Room De Papegaai, De Krook Gent

Renee Veldkamp
Protocol: Learning strategies for improving dual task performance in persons with Multiple Sclerosis.

Dual tasking, such as walking while talking, is a common everyday act. However, simultaneous performance of a motor (e.g. walking) and a cognitive (e.g. talking) task may be difficult and lead to a worsening of performance on one or both tasks, called cognitive-motor interference (CMI). This CMI is usually investigated with a dual-task paradigm in which a motor task and a cognitive task are performed separately and concurrently. The dual-task cost (DTC) quantifies this interference and is the percentage change of dual-task performance compared with single-task performance. In persons with Multiple Sclerosis (pwMS), difficulties with dual tasking during walking have been related to higher risks of falls and lower quality of life, highlighting the importance of rehabilitation strategies targeting CMI.
Training might improve dual-task performance via automatization of an individual task, thereby reducing attentional requirements. The more automatized a motor skill is, the more robust performance will be under dual-task conditions. Implicit learning strategies are thought to rely more strongly on automatic processes than explicit learning strategies. A prediction is therefore that implicit learning results in superior dual-task performance of the learned task, compared to explicit motor learning. It is presently not known whether implicit learning will occur in pwMS in a stepping task, and whether a more implicit strategy results in more robust performance of the learned task under dual-task conditions than a more explicit learning strategy. Therefore, our primary aim is to examine the effects of implicit and explicit learning strategies for a target-directed stepping task on performance of the task in a cognitive-motor dual-task condition in pwMS and healthy controls (HC). The study will have a cross-sectional design with persons with MS and HC divided over two learning conditions, i.e. an explicit and an implicit one. The goal-directed stepping task will be a reaction time task in which participants are instructed to "step as fast and as accurately as possible on the tile that lights up". The tiles will light up in a fixed repeating sequence that can be learned in either an implicit or an explicit manner, according to the serial reaction time task paradigm previously used in the literature. The experiment will be divided over three days. Descriptive and clinical measures will be collected on day 1, together with a familiarization session with the apparatus of the goal-directed stepping task and the dual tasks. The experimental task, consisting of the learning paradigm with the goal-directed stepping task, and delayed retention tests of the dual tasks will be performed on day 2 and day 3, respectively.
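As an illustration of the dual-task cost mentioned above, a minimal sketch with hypothetical walking-speed values (note that sign conventions for the DTC vary across studies):

```python
def dual_task_cost(single, dual):
    """Dual-task cost (DTC): percentage change of dual-task performance
    relative to single-task performance. For measures where higher is
    better (e.g. walking speed), a negative DTC means performance
    worsened under dual-task conditions."""
    return (dual - single) / single * 100.0

# Hypothetical example: walking speed drops from 1.25 m/s (single task)
# to 1.00 m/s (walking while talking).
print(f"DTC = {dual_task_cost(1.25, 1.00):.1f}%")  # → DTC = -20.0%
```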

Renee Veldkamp is currently pursuing a PhD on cognitive-motor interference in persons with Multiple Sclerosis. She holds a Master in Human Movement Sciences (Motor Function and Cognition in Healthy Ageing), obtained at the University of Groningen. She has taught at VU University Amsterdam, was test leader of "Peiling Bewegingsonderwijs" at the University of Groningen, and held several jobs providing household care for the elderly.

Friday September 20th at 1PM, Room De Papegaai, De Krook Gent

Mieke Goetschalckx
Sensorimotor synchronization ability of walking and running to metronome sounds and music within motor learning perspective in children with Developmental Coordination Disorder

Given the difficulties in motor learning, motor planning, timing, predictive control and sensorimotor coupling seen in Developmental Coordination Disorder (DCD), and the need for a specific task-oriented intervention that addresses the possible underlying mechanisms of these deficits, we want to bridge the current knowledge gap concerning the potential difference in auditory-motor synchronization ability during walking and running between children diagnosed with DCD and typically developing individuals. The aim is to investigate the effect of synchronizing to a metronome beat compared to music on walking and running in children diagnosed with DCD. Finally, we want to examine the possible added effect of combining individualized auditory stimuli with conventional Neuromotor Task Training (NTT), compared to NTT alone, on motor coordination and motor learning of walking and/or running in individuals with DCD. First, a systematic review will be conducted of methodologies for the assessment of synchronization ability and of intervention protocols that encompass auditory-based interventions and their effects on the variability of spatiotemporal motor control of gait and/or running parameters in pediatric populations. Secondly, an observational study will investigate whether synchronization of steps to pulses in auditory stimuli (music compared to metronomes) at different tempi, ranging from 0% to 16% in increments of 4%, is possible in pediatric populations with and without DCD. Lastly, a pilot intervention will examine the possible added effect of including music/metronome cues in conventional NTT targeting gait, compared to NTT alone, on motor coordination and motor learning of walking and running in individuals with DCD.

Mieke Goetschalckx is currently pursuing a PhD on sensorimotor synchronization ability in children with Developmental Coordination Disorder. She holds a Master in Rehabilitation Sciences and Physiotherapy, obtained at UHasselt. During her master's she obtained extracurricular credits in Biomechanics and Motion Analysis, and in Pediatrics. She has worked as a gymnastics trainer and as an academic assistant, and provided childcare and activities for preschoolers to adolescents organised by Grabbelpas.

Friday September 20th at 1PM, Room De Papegaai, De Krook Gent

Aleksandra Michalko
The Coupling of Action and Perception: Can Embodied Cognition Override Enculturation Effects on Beat and Meter Perception?

Studies of rhythmic development have shown that 6-month-old North American infants react to isochronous and non-isochronous meter with equal facility, whereas 12-month-old infants are already biased toward the rhythms of their environment and have difficulty discriminating non-isochronous metrical contexts. These findings reveal a transition from a culture-general to a culture-specific domain of musical rhythmic patterns very early in life. At the same time, the body of neurological research on healthy and impaired patients suggests that the perception of pulse and meter involves integration across widespread auditory and motor-related brain regions. Empirical studies in the domain of embodied music cognition suggest that auditory-motor interactions are reciprocal, such that movement can not only affect meter perception in infants and adults but also help to disambiguate musical stimuli and facilitate meter and beat perception. This study will explore whether the cultural bias of Western adult listeners on meter and beat perception can be overridden by action-based effects and enable complex metric perception based on the complex rhythms encountered in Balkan folk music (3 + 2 + 2 or 2 + 2 + 3). The experiment will consist of three phases and will combine the embodied and neurological approaches used in music cognition studies.


Aleksandra Michalko is currently doing an internship at IPEM. She holds a BA in Linguistics and Literature and MAs in Music Performance and in Music Studies: Cognitive and Computational Musicology. She has investigated the influence of figurative language on music performance and the correlations between semantic labelling of musical timbre and the acoustic properties of flute sounds. Her current project focuses on the relationship between embodied music cognition and beat and meter perception.

Wednesday September 11th at 10PM, Room De Papegaai, De Krook Gent

Alessandro Dell'Anna
Does musical interaction in a jazz duet modulate peripersonal space?


Peripersonal space, the space within reach, has been widely studied in the last twenty years, with a focus on its plasticity following the use of tools and, more recently, social interactions. Ensemble music is a sophisticated joint action that has typically been explored in its temporal rather than spatial dimension, even within embodied approaches. We therefore devised a new paradigm in which two musicians could perform a jazz standard either in a cooperative (correct harmony) or in a non-cooperative (incorrect harmony) condition, under the hypothesis that their peripersonal spaces would be modulated by the interaction. We exploited a well-established audio-tactile integration task to measure this space, asking the subjects to respond to stimuli delivered at two different distances. In particular, in accordance with the relevant literature, we predicted that a cooperative interaction would extend the peripersonal space of the musicians towards their partner. Surprisingly, we obtained the complementary result, that is, a suppression of the peripersonal space after the non-cooperative condition, interpretable as a removal of the partner and a breakdown of the musicians' "mutual incorporation". Subjective reports, and correlations between these reports and reaction times, are consistent with this interpretation. Finally, an overall better multisensory integration competence was found in musicians compared to a sample of non-musicians tested in the same task.

After a PhD in Philosophy of (Cognitive) Science (Genoa, 2004) on enactive approaches to visual perception, and ten years of teaching in Turin high schools while playing the saxophone there, I am currently completing my second PhD on the behavioural and brain bases of embodied music interactions, thanks to a co-tutelle agreement between the Universities of Turin (Neuroscience) and Ghent (Art Science).

Wednesday September 11th at 10PM, Room De Papegaai, De Krook Gent

Time-evaluation model for live musical interaction with multiple performers
Paolo Laghetto, IPEM UGent

To assess the quality of a music performance, many aspects have to be taken into account: timing, pitch, loudness, articulation and other parameters, creating a complex environment of different variables to be evaluated. With this project we propose a new time-based model that focuses on note duration, excluding the other parameters from the analysis. The hocket rhythmic technique is particularly suitable here, since it creates a melody shared between two or more performers, requiring mutual action. In this way we can create a dynamic data-analysis model that takes into account only the IOI (inter-onset interval). With this approach it is possible to compute different types of errors to assess the quality of a performance and also to extract features about the interaction between the performers.
Since the framework is also able to work in real time, it can compute the errors of the different performers while they are singing or playing. In the meantime, it can also give personalised biofeedback to each of the players, informing them about their timing. Moreover, the biofeedback system can be used as a technique to improve the timing precision of the performers, giving a reward or penalty depending on how well they are performing. The error measurements can also help to discover systematic timing errors that the performers habitually make.
I will go into more detail about the new time-based model during the presentation and, at the end, there will be a demonstration of the real-time system in the ASIL.
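To illustrate the kind of IOI-based error measure the abstract describes, a minimal sketch; the onset times and the 120 BPM target are hypothetical, not taken from the actual system:

```python
def inter_onset_intervals(onsets):
    """Inter-onset intervals (IOIs) from a list of note-onset times (s)."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def timing_errors(onsets, target_ioi):
    """Signed deviation of each produced IOI from the target IOI (s).
    Positive values mean the interval was too long (played late)."""
    return [ioi - target_ioi for ioi in inter_onset_intervals(onsets)]

# Hypothetical onsets of an interleaved (hocket-style) melody at
# 120 BPM, i.e. a target IOI of 0.5 s between successive notes.
onsets = [0.00, 0.52, 0.98, 1.51, 2.00]
errors = timing_errors(onsets, target_ioi=0.5)
print([round(e, 2) for e in errors])  # → [0.02, -0.04, 0.03, -0.01]
```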

Paolo Laghetto was born in 1994 in Vicenza, a city near Venice, Italy. He is a master's student in Computer Engineering at the University of Padova, where he previously obtained a bachelor's degree in Information Engineering in 2017. To make the most of the opportunities the university offers, he joined the Erasmus programme to work on his master's thesis abroad. Since music has always been a part of his life, playing the guitar in multiple bands, he decided to join the IPEM research center. During his internship, which he started in March, he worked on the so-called Homeostasis project along with Jeska Buhmann, supervised by Marc Leman. The aim of the study, started in 2018, is to create a new model to assess music performance, taking into account the length of the notes the performers are playing.
Since he was young he has been passionate about technology, and in recent years he started reviewing tech products on his YouTube channel, Willpitt. This led him to discover the artistic world of photography and video making, which still plays a big part in his life.

Friday, July 19th at 14.00 Room De Papegaai, De Krook Gent

Designing (With) Machine Learning for Interactive Music Devices
Hugo Scurto, IRCAM Paris

Machine learning for interactive music devices is often designed from a machine-centred perspective, typically focusing on a given computational task, such as automatic music generation, and borrowing methods from computer science to attain optimal machine performance. In this talk, I will present my doctoral work, adopting a human-centred perspective to design (with) machine learning for interactive music devices, which I did in collaboration with IRCAM's Sound Music Movement Interaction group, under the supervision of Frédéric Bevilacqua. We focused on four embodied musical activities: motion-sound mapping, sonic exploration, parameter space exploration, and collective mobile music-making. We borrowed four methods from the fields of design and human-computer interaction to develop four machine learning algorithms, one for each of these musical activities. We implemented the algorithms in prototype music devices that we applied in various interactive contexts, ranging from lab-based experiments to public workshops, exhibitions, and performances. Our observations allowed us to compare different research approaches to the development of machine learning in interactive music devices, and provide a basis for considering a design approach to music devices that would rethink machine learning in terms of human goals and creative intentions.

Hugo Scurto is a PhD candidate in the Sound Music Movement Interaction team at IRCAM, supervised by Frédéric Bevilacqua. His research aims at designing new modes of interaction between humans and artificial intelligence in the context of music creation. More specifically, he focuses on the human exploration process at stake in creation, in order to develop a computational "co-exploration" system able to assist humans in their own creative process.
In parallel to his scientific work, he maintains an artistic practice centered around music and new media. He is interested in the aesthetic, disciplinary, and societal decompartmentalisation engendered by new uses of digital technology. He has created or co-created several pieces, performances and installations (most recently at GMEM in 2017), and actively takes part in educational actions as a member of Lutherie Urbaine.
He trained in science at École normale supérieure Paris-Saclay and in music at the Cité de la Musique de Marseille. He received the "ATIAM" master's degree in 2015 and was a Visiting Researcher at Goldsmiths, University of London in 2016.

Friday, June 21st at 13.00 Room De Papegaai, De Krook Gent

A new frequency-domain approach for synchronized sonification and biofeedback of real-time periodic body movements
Andrea Vaghi

Sonification of body movements is one of the most important research areas at IPEM, with the aim of finding new forms of expression and interaction between the different actors in the musical performance ecosystem (composers, performers, instruments, audience).

Inspired by the D-Jogger technology, which aims at synchronizing music playback with the user's running gait, the idea behind this project is to implement a real-time algorithm for fine-tuned low-frequency detection of 1D body movements and to develop an effective strategy for 3D audio sonification, exploiting IPEM's expertise in the domain. The goal of the system is to provide a meaningful auditory biofeedback signal synchronized with the rhythm of the movement, with a particular focus on the use case of respiration. Specifically, a large portion of my time has been devoted to implementing a custom frequency-domain analysis technique involving a combination of Fourier and wavelet transforms, and to finding an efficient way to keep the auditory feedback synchronized with the breathing cycle. The latter was achieved by exploiting the Kuramoto model for coupled oscillators. During the presentation, I will explain in detail the main features of the system, the design choices, and the theory behind them.
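As a rough illustration of the Kuramoto mechanism mentioned above, a minimal two-oscillator sketch in which a playback phase is pulled toward an idealized breathing rhythm; the frequencies and coupling strength are hypothetical, not the system's actual parameters:

```python
import math

def kuramoto_step(theta_music, theta_breath, omega_music, omega_breath,
                  coupling, dt):
    """One Euler step of a Kuramoto-style phase oscillator: the 'music'
    oscillator is pulled toward the breathing phase, while the breathing
    oscillator runs freely as an external rhythm."""
    theta_music += (omega_music
                    + coupling * math.sin(theta_breath - theta_music)) * dt
    theta_breath += omega_breath * dt
    return theta_music, theta_breath

# Breathing at 0.25 Hz; playback oscillator nominally at 0.27 Hz.
omega_b = 2 * math.pi * 0.25
omega_m = 2 * math.pi * 0.27
theta_m, theta_b = 0.0, 1.0          # start out of phase
for _ in range(int(60 / 0.01)):      # simulate 60 s at dt = 0.01 s
    theta_m, theta_b = kuramoto_step(theta_m, theta_b, omega_m, omega_b,
                                     coupling=1.0, dt=0.01)
# Wrapped phase difference: the oscillators phase-lock because the
# coupling (1.0) exceeds the frequency detuning (~0.126 rad/s).
phase_diff = math.atan2(math.sin(theta_m - theta_b),
                        math.cos(theta_m - theta_b))
print(f"steady-state phase difference: {phase_diff:.2f} rad")
```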

Wednesday, May 29th at 10.00 Room De Papegaai, De Krook Gent

Children's Representational Strategies based on verbal versus bodily interactions with music: an intervention-based study
Sandra Fortuna


On the assumption that bodily engagement with music may affect children's graphic representations and their verbal explanations, a comparative study was conducted in which primary school children (n = 52; age 9-10) without any formal music education participated in a verbal-based vs. movement-based intervention. Before and after the interventions, as pre- and post-test, the children produced a graphical representation of the music and provided a verbal explanation of their own graphic description. Data were collected, analysed and compared according to the categories of global versus differentiated notations, in which one or more sonic musical parameters are described. A McNemar test revealed a significant increase in differentiated representations from pre-test to post-test among children involved in bodily music interaction with a focus on the dynamic and temporal organization of the piece. Further analysis and statistical tests on the verbal explanations revealed a significant change in semantic themes, time dimension, and the number of music parameters gathered by the children involved in body movement.
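For readers unfamiliar with the McNemar test used in this design, a minimal sketch of its exact (binomial) form on hypothetical counts, not the study's actual data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test on the discordant pairs of a
    paired 2x2 table: b = children who switched from global to
    differentiated notation, c = the reverse. Returns the two-sided
    p-value under the null that both switch directions are equally
    likely."""
    n, k = b + c, min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical: 14 children moved to differentiated notation after
# the intervention, 2 moved the other way.
print(f"p = {mcnemar_exact(14, 2):.4f}")  # → p = 0.0042
```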

Wednesday, May 29th at 10.00 Room De Papegaai, De Krook Gent


Why people move to music: A dynamical systems account of dance
Benjamin G. Schultz, Basic & Applied NeuroDynamics Laboratory, Maastricht University

The human auditory system is closely linked to the motor system, and changes in certain acoustic features (e.g., sudden amplitude increases) can evoke strong motor responses, such as the startle response. From an evolutionary perspective, responding to salient acoustic features that "stand out" in an auditory scene is important because it allows humans to respond to potentially dangerous entities. It is possible that this same response mechanism has led to dancing behaviour in humans: subtle changes in acoustic features that once signalled danger now evoke motor responses in patterned ways and become embodied, resulting in movement sequences such as dancing. I present results from two experiments that test the hypothesis that the motor system resonates with salient acoustic features and that these same features lead to overt movement. In the first experiment, we examined how various acoustic features evoke involuntary motor responses while listening to music without movement, using surface electromyography. We further tested whether these features were perceived as salient using continuous salience ratings. Results showed that acoustic features corresponding to amplitude, intensity, energy, and dissonance evoked involuntary motor responses and were also perceived as salient. These findings support the hypothesis that the motor system resonates with salient acoustic features. In the second experiment, we used motion capture to record movement to music and examine how overt movements correspond with these acoustic features. Three different relationships were identified: 1) foot movements corresponded to measures of beat onsets, 2) hand movements corresponded to changes in amplitude, intensity, and energy, and 3) head movements related to changes in dissonance. Overall, these results support the hypothesis that the motor system resonates with salient acoustic features and that these are the same features to which people move.
Findings are discussed from a dynamical systems perspective where forms of bottom-up perception-action relationships may be the basis for complex motor sequences like dancing.

FERARI: Feedback system for a more Engaging, Rewarding and Activating Rhythmic Interaction
Jorg de Winne - IPEM UGent


The project aims to find possible ways to improve our interaction with digital environments by stimulating the brain with a rhythmic sound. The envisioned tool will make the interaction more activating, engaging and rewarding by automatically adapting the rhythmic sound based on signals measured directly from the brain. In a first step, the physiological signals, called biomarkers, that are best suited to measure how people perceive the activating, engaging and rewarding aspects of interaction need to be identified. EEG will be used to measure the electrical activity in the brain, and fNIRS will be used to measure brain activity based on the amount of oxygen in the blood. In addition, the pupil will be studied to investigate whether the interaction is enjoyed or not. In a second step, the measurements need to be related to the details of the rhythmic sound. It is then possible to directly influence the signals that we measure in the brain by changing the rhythmic sound. A third step adds human-human interaction to the system. The goal of this step is to learn what makes this interaction activating, engaging and rewarding, and how it differs from the first step. This makes it possible to mimic human-human interaction in the envisioned tool. Finally, a proof of concept needs to be developed, supported by existing models, the achieved results and a small user experiment.

Friday, May 10th at 13.30 Room De Papegaai, De Krook Gent


A new tool for analysing signals in the time domain illustrated with cardio-respiratory examples
Leon Van Noorden - IPEM UGent


Traditionally, signal analysis depends heavily on Fourier analysis, i.e. on a transformation to the frequency domain. However, in many situations it would be an advantage to stay in the time domain. An example of this is the application of circular statistics. Tools for analysing signals in the time domain are developing rapidly in mathematics and the experimental sciences, and are becoming available in Matlab. An example will be given of the disentanglement of heartbeat and respiration signals with the Matlab function findsignal.
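As a small illustration of the circular statistics mentioned above (sketched here in Python rather than Matlab), the circular mean and resultant vector length of a set of hypothetical heartbeat phases within the breathing cycle:

```python
import cmath

def circular_mean_and_r(phases):
    """Circular mean direction and resultant vector length R of a set
    of phase angles (radians). R close to 1 indicates tight phase
    clustering, e.g. heartbeats locked to a fixed point of the
    respiratory cycle; R close to 0 indicates no phase coupling."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return cmath.phase(z), abs(z)

# Hypothetical heartbeat phases within the breathing cycle (radians).
phases = [0.1, 0.3, 0.2, 6.2, 0.0, 0.15]
mean_dir, r = circular_mean_and_r(phases)
print(f"mean direction = {mean_dir:.2f} rad, R = {r:.2f}")
```

Note that a naive arithmetic mean of these angles would be badly distorted by the value 6.2 rad, which is in fact close to 0; working on the unit circle avoids this wrap-around problem.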

Friday, May 17th at 13.30 Room De Papegaai, De Krook Gent

 
Dr. Federica Bressan - Audio preservation, Interactive art, Technoculture

A critical overview of 10 years of research work between technology and culture: Audio preservation, Interactive art, Technoculture
Physiological computing applied to the arts is a new genre of scholarship and practice focusing on the senses, art, design, and new technologies. My current work (Fulbright grant, 2020) consists of building the first digital archive to store information about sensory mapping strategies and the technology (sensors) used. This archive ideally contains information about two parallel timelines, showing the co-evolution of technology and culture. This co-evolution is something I have become increasingly interested in, and it motivated me to start the podcast Technoculture in late 2018 (available on iTunes, Spotify, and all major podcast platforms). The work on physiological computing applied to the arts builds on my previous experience with modeling interaction in installation art settings (Marie Curie grant, 2017-2019). During my presentation I will put these topics in the context of my academic work over the past 10 years, discussing its multidisciplinary aspects and potential future evolutions.

April 26th at 13.30 Room De Papegaai, De Krook Gent.

Prof. dr. Noemi Grinspun - Rhythm synchronization, cognitive functions and oscillatory activity during resting state

The aim of this research project is to understand how rhythmic synchronization abilities are correlated with executive functions (mainly attention and cognitive flexibility), and how they correlate with the characteristics of oscillatory activity during resting state.

Despite all the advances in the area, it is still unknown whether rhythmic abilities, such as adapting one's movements to a change in the beat of the music, are correlated with oscillatory activity during the resting state, with cognitive flexibility, and with musical abilities measured with the Audiation Test (PMMA).

Dr. Noemí Grinspun is a professor and researcher at the Movement and Cognition Lab of the Faculty of Arts and Physical Education, Metropolitan University of Educational Sciences, Santiago, Chile, and a member of GIEM (Grupo de Investigación en Experiencia Musical). She holds a PhD in Biomedical Sciences (Neuroscience), an MA in Neuroscience, a Music Teacher degree (trombone, ensemble direction) and a BSc in Physical Therapy. Her research focuses on music and movement interaction during learning, as well as on rhythm synchronisation abilities, executive functions (cognitive flexibility, attention) and neural dynamics. She has participated in several research projects on cognition, movement and neuroscience, and has been an invited speaker at conferences on music, neuroscience and cognition.

March 29th, 14.30h, Room De Papegaai, De Krook Gent.

Dr. Melissa Bremmer - Perspectives on a culturally diverse music curriculum

Widening learner diversity in music education is not easy and requires an active policy. One way to increase participation is the development of a culturally diverse music curriculum, which offers learners the opportunity to experience and express their personal musical heritage and to feel recognized and included. For music teachers, however, learning to bring traditional music into music education is a challenge. Not only do teachers have to learn the performative activities of traditional musics, they also have to grasp the transmission processes of these traditions. In this lecture, the views of Schipper & Campbell and of Brinner on the transmission of traditional musics will be presented, along with how they connect to an embodied cognition approach to music. Furthermore, the dilemmas teachers are confronted with when teaching traditional musics will be discussed, as well as the hidden messages a music curriculum can convey when it does, or does not, represent culturally diverse music activities.

Panel Members: Prof. dr. Kris Rutten, dr. An De Bisschop

About Melissa Bremmer: https://www.ahk.nl/lectoraten/educatie/melissa-bremmer/

March 1st, 13.30h, Room De Blauwe Vogel, De Krook Gent.


Prof. Dr. Sylvie Nozaradan - The neuroscience of musical entrainment: insights from EEG frequency-tagging

Entrainment to music is a culturally widespread activity with increasingly recognized pro-social and therapeutic effects. Music powerfully compels us to move to the musical rhythm, showcasing the remarkable ability of humans to perceive and produce rhythmic inputs. There is a wave of current research exploring the neural bases of this rhythmic entrainment in both human and non-human animals, in evolutionary terms and in development. One way to investigate these neural dynamics is frequency-tagging, an approach recently developed to capture the neural processing of musical rhythm with surface or intracerebral electroencephalography (EEG).
Recent experiments conducted with healthy and brain-damaged adults, as well as infants, exposed to either simplified rhythmic patterns or naturalistic music, will be presented. The results show that, although the auditory system has a remarkable ability to synchronize to rhythmic input, the neural network responding to rhythm shapes that input by amplifying specific frequencies. This selective shaping seems to correlate with perception and with individual ability to move in time with musical rhythms. Together, these results may lead to a new understanding of the neural bases of rhythmic entrainment.


Sylvie Nozaradan, MD PhD, is an Associate Professor at the Institute of Neuroscience of UCLouvain, Belgium since September 2018. The same year, she was awarded a Starting Grant from the European Research Council to develop her research on rhythm and the brain. Previously, she received an ARC Discovery Early Career Researcher Award from the Australian Research Council to develop her research independently for three years at the MARCS Institute, Western Sydney University (Australia). She has a double PhD degree in neuroscience from UCLouvain and the BRAMS, Montreal (Canada), for her work on the neural entrainment to musical rhythm. She has a dual background in music (Master in piano, Conservatoire Royal de Bruxelles, Belgium) and science (medical doctor, UCLouvain).


Tim Duerinck and Tim Vets

Tim Duerinck presents his ongoing research on new materials for the soundboards of the violin and cello. This research is supported by IPEM, Material Science (UGent), and the department of instrument making (School of Arts Gent). Together, we will look at the research methods and preliminary results from both mechanical/acoustical measurements and tests with listeners and musicians. Can listeners and musicians tell the difference between an instrument made from wood and one made from another material? Is there a link between what we can measure and how people perceive aspects related to the quality of violins? What does the future hold, and perhaps most importantly: how should we investigate further? See also https://youtu.be/xtSoUKvDGTY

Tim Vets will provide a status overview of his PhD work at IPEM, as well as his collaborations with Hans Heymans and Roeland Van Noten, touching upon interaction design, ergonomics, and organology for a musical composition toolchain.
He will also discuss how these activities are contextualized within his artistic research, along with the underlying motivations and the trajectory that led to the current search for a unifying narrative within contrasting artistic practices.


January 17th, 13.30h, Room De Papegaai, De Krook Gent.