S2S²: Sound to Sense, Sense to Sound

Project Description:

Website: www.s2s2.org

Nowadays, a wide variety of techniques can be used to generate and analyze sounds. However, urgent requirements (coming from the world of ubiquitous, mobile, and pervasive technologies and from mixed reality in general) raise some fundamental yet unanswered questions:

  • how to synthesize sounds that are perceptually adequate in a given situation (or context)?
  • how to synthesize sound for direct manipulation or other forms of control?
  • how to analyze sound to extract information that is genuinely meaningful?
  • how to model and communicate sound embedded in multimodal content in multisensory experiences?
  • how to model sound in context-aware environments?

The core research challenge emerging from the scenario depicted above is that sound and sense remain two separate domains, and methods are lacking to bridge them with two-way paths: from Sound to Sense, and from Sense to Sound.

The Coordination Action S2S² has been conceived to prepare the scientific grounds on which to build the next generation of research on sound and its perceptual/cognitive reflexes. So far, a number of fast-moving sciences, ranging from signal processing to experimental psychology, from acoustics to cognitive musicology, have each tapped the S2S² arena here or there. What is still missing is an integrated, multidisciplinary, and multidirectional approach. Only by coordinating the actions of the most active contributors in the different subfields of the S2S² arena can we hope to elicit fresh ideas and new paradigms. The potential impact on society is considerable, as a number of mass-application technologies are already stagnating because of the existing gap between sound and sense. To name a few: sound/music information retrieval and data mining (whose importance exceeds P2P exchange technologies), virtual and augmented environments, expressive multimodal communication, and intelligent navigation.

Consortium

The project is coordinated by the Media Innovation Unit - Firenze Tecnologia and has the following partners:

  • CSC-DEI, Università di Padova, Padova, Italy
  • DI-VIPS, Università di Verona, Verona, Italy
  • DIST, Università di Genova, Genova, Italy
  • Helsinki University of Technology, Helsinki, Finland
  • PECA-DEC, Ecole Normale Supérieure, Paris, France
  • IPEM, Ghent University, Ghent, Belgium
  • KTH, Kungl Tekniska Högskolan, Stockholm, Sweden
  • LEAD, Université de Dijon, Dijon, France
  • UPF, Universitat Pompeu Fabra, Barcelona, Spain
  • ÖFAI - Austrian Research Institute for Artificial Intelligence, Wien, Austria

Promotors: Prof. Dr. M. Leman
Researchers: Frederik Styns, Frank Desmet
Financial Support:

European Commission