SPEAC: Sensitive Processing of Artistic Content

Project Description:

The SPEAC project is about music and qualitative/emotive descriptions of music, with a focus on quantitative and quasi-deterministic modelling of musical expressive and emotional content. Our main contribution will consist in the development of emotion-sensitive audio agents, that is, computational units that enable dynamic emotional interpretation of acoustic stimuli and open new ways for processing musical expressive content, with applications in audio-data mining, music therapy, the audio-visual field, and brain science.

Motivated by research in psychology, we search for cues at the audio-structural level that are relevant to, or indicative of, emotional agitation. Three categories are distinguished: (i) cues at the unconscious level, that is, uncontrolled and subliminal effects such as blood pressure, skin conductance, heart rate, ...; (ii) cues at the sub-conscious level, such as unattended expressive movements evoked by acoustic stimuli; and (iii) cues at the semantic level, which are mostly reactions requiring cognitive participation of the listener.

Once relevant cues are found, they will be used as descriptors of the emotional state of our subjects. An important research objective is to develop continuous quantitative models that imitate the interaction process between music and listener in quasi real time.
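One way to read such a continuous, quasi real-time model is as a frame-by-frame mapping from surface audio features to coordinates in an emotion space, smoothed over time to imitate the listener's gradually evolving response. The sketch below is only an illustration of that idea; the feature definitions, the weight matrix, and the smoothing constant are all hypothetical and not the project's actual model.

```python
import numpy as np

def frame_features(frame):
    """Toy surface features for one audio frame: RMS energy and a crude
    onset-like measure (both hypothetical stand-ins for real descriptors)."""
    rms = np.sqrt(np.mean(frame ** 2))
    flux = np.mean(np.abs(np.diff(frame)))
    return np.array([rms, flux])

def emotion_trajectory(signal, frame_len=1024, alpha=0.2):
    """Map each frame's features to a 2-D emotion point (e.g. activity,
    dominance) and smooth the trajectory with an exponential moving
    average, imitating a quasi real-time listener model."""
    W = np.array([[1.5, 0.0],   # hypothetical feature-to-emotion weights
                  [0.0, 2.0]])
    state = np.zeros(2)
    trajectory = []
    n_frames = len(signal) // frame_len
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        target = W @ frame_features(frame)
        state = (1 - alpha) * state + alpha * target  # smoothed update
        trajectory.append(state.copy())
    return np.array(trajectory)
```

For an 8192-sample test signal and the default frame length, this yields an 8-point trajectory in the two-dimensional emotion space, one point per frame.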


SPEAC grew out of close collaboration with the Laboratorio di Informatica Musicale (Infomus Lab) of Prof. A. Camurri (http://infomus.dist.unige.it/).


· Using the technique of bipolar semantic differentials, we investigated a large, pre-defined semantic emotional vector space. Statistical analysis revealed three dominant factors, “activity”, “dominance” and “arousal”, as the basic dimensions of this space. Multivariate linear regression revealed significant correlations between certain structural surface audio features and these factors: activity was strongly correlated with onset characteristics of the acoustic signal, while the roughness of the sound accounted for most of the perceived “dominance”. Details of the experiments and computations can be found in Leman et al. (2003a, 2003b).
· Collaborative research with the Infomus Lab is expected to yield new results in April 2003.
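The analysis pipeline described above, factoring semantic-differential ratings and then regressing the factors on surface audio features, can be sketched as follows. The data here are synthetic and the two feature columns (standing in for, e.g., onset rate and roughness) are hypothetical; the actual experiments and computations are reported in Leman et al. (2003a, 2003b).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 60 musical excerpts rated on 10 bipolar
# semantic-differential scales (the real rating data are in the papers).
n_items, n_scales = 60, 10
ratings = rng.normal(size=(n_items, n_scales))

# Factor extraction via PCA: centre the ratings and take the three
# strongest components as candidate basic dimensions of the space.
X = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factors = U[:, :3] * s[:3]                 # factor scores per excerpt
explained = s[:3] ** 2 / np.sum(s ** 2)    # variance explained per factor

# Multivariate linear regression: predict each factor from surface
# audio features (here two hypothetical columns of feature values).
features = rng.normal(size=(n_items, 2))
A = np.column_stack([features, np.ones(n_items)])   # add an intercept
coef, *_ = np.linalg.lstsq(A, factors, rcond=None)

print(factors.shape, coef.shape)  # (60, 3) (3, 3)
```

The significance tests and the semantic labelling of the factors are, of course, where the empirical work lies; the sketch only shows the mechanical shape of the computation.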


Leman, M., Vermeulen, V., De Voogdt, L., Taelman, J., & Moelants, D. (in preparation). Correlations between perceived emotive/affective qualities and auditory features of polyphonic music.

Leman, M., Vermeulen, V., De Voogdt, L., Taelman, J., Moelants, D., & Lesaffre, M. (2003). Correlation of gestural audio cues and perceived expressive qualities. Paper submitted to the Gesture Workshop, Genova, Italy.

Promotor: Prof. Dr. M. Leman
Researchers: L. De Voogdt (Musicology)
Dr. V. Vermeulen (Mathematics) 
Financial Support:

This project is supported by the Special Research Fund (BOF) of Ghent University.