MAMI: Musical Audio-Mining ("Query by Humming")

Project Description

MAMI is a data-mining project for audio recognition that investigates ways to search an audio archive as easily as one can search a text archive.
The project starts from the observation that, given the current state of the art in telematics, the technological orientation of music culture, and the music industry's interest in selling musical commodities and services via the Internet, there is a strong need for advanced tools that support new ways of dealing with musical audio content and its associated processing. Current technology makes it possible to retrieve music from a database using new content-based methods. Feature extraction over a wide range of sound characteristics opens up multiple ways of querying the data: not only text queries, but also music-based query techniques such as query-by-humming or query by specification of a list of musical variables.
A main characteristic of the MAMI project is its focus on music as an audio signal. This covers all kinds of music, from electro-acoustic music to ethnic and world music.
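
To make the query-by-humming idea concrete, here is a minimal sketch of such a pipeline, assuming librosa for pitch tracking and a naive interval-based matcher; none of this reflects the actual MAMI implementation.

    # Sketch of a query-by-humming pipeline. librosa, pyin pitch tracking
    # and the truncation-based matcher are illustrative assumptions, not
    # the methods used in the MAMI project.
    import numpy as np
    import librosa

    def pitch_contour(path):
        """Extract a semitone-quantised pitch contour from a recording."""
        y, sr = librosa.load(path)
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz('C2'),
                                     fmax=librosa.note_to_hz('C6'), sr=sr)
        return np.round(librosa.hz_to_midi(f0[voiced]))  # voiced frames only

    def interval_profile(contour):
        """Successive pitch intervals: invariant to the key the user hums in."""
        return np.diff(contour)

    def rank_matches(query_path, database):
        """Rank database melodies by distance between interval profiles."""
        q = interval_profile(pitch_contour(query_path))
        scores = []
        for title, contour in database.items():
            c = interval_profile(contour)
            n = min(len(q), len(c))  # crude truncation alignment
            scores.append((float(np.linalg.norm(q[:n] - c[:n])), title))
        return sorted(scores)

A real system would replace the truncation alignment with something robust to tempo and timing differences, such as dynamic time warping.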

Project aims

  • Develop a background epistemology for audio mining based on auditory modelling and perception theory.
  • Work out methodologies, techniques and software tools for content-based musical audio mining, taking into account all kinds of music.
  • Develop an integrated system for audio description using different levels of representation.
  • Work towards a practical application that demonstrates usefulness by means of the "query-by-humming" paradigm.
  • Allow users to retrieve a musical piece either by humming or playing it, or by describing it on the basis of its sound characteristics.
  • Set up representational structures in compliance with MPEG-7, the Multimedia Content Description Interface standard (a small illustration follows this list).
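
As an illustration of the last aim, the snippet below builds a tiny MPEG-7-style content description with Python's standard library. The element names are placeholders in the spirit of MPEG-7 and are not validated against the real schema.

    # Illustrative only: an MPEG-7-style description sketch. Element names
    # are assumed placeholders, not taken from the actual MPEG-7 schema.
    import xml.etree.ElementTree as ET

    root = ET.Element('Mpeg7')
    desc = ET.SubElement(root, 'Description')
    segment = ET.SubElement(desc, 'AudioSegment')
    ET.SubElement(segment, 'MediaTime').text = '0-30s'      # placeholder span
    pitch = ET.SubElement(segment, 'AudioDescriptor',
                          {'type': 'PitchContour'})         # placeholder type
    pitch.text = '60 62 64 60'                              # placeholder MIDI pitches

    print(ET.tostring(root, encoding='unicode'))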

Description levels

The MAMI research project describes music at several levels of representation (a short sketch of the waveform-to-frame step follows the list):

  • Waveform representation.
  • Frame-based representations.
  • Parameter-based representations.
  • Event-based representations.
  • Gesture-based representations.
  • Concept-based representations.
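
As a small illustration of moving from the waveform level to the frame-based level, the sketch below computes per-frame features; librosa and the particular features chosen (MFCCs, spectral centroid) are assumptions for illustration, not the project's actual feature set.

    # Sketch: from a waveform representation to a frame-based one.
    # librosa and these features are illustrative assumptions.
    import librosa

    y, sr = librosa.load('piece.wav')                    # waveform level
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # frame-based level
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

    print(mfcc.shape)      # (13, n_frames): one feature vector per frame
    print(centroid.shape)  # (1, n_frames)

The higher levels (parameter-, event-, gesture- and concept-based) would presumably be derived from such frame-based features by further abstraction.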

Funding

The MAMI project started in October 2001 and is funded by the IWT (Instituut voor de aanmoediging van Innovatie door Wetenschap en Technologie in Vlaanderen, the Flemish Institute for the Promotion of Innovation by Science and Technology).

Consortium

The MAMI project has an interdisciplinary basis grounded in fields such as signal processing, musicology and applied mathematics. The research therefore takes place at several research laboratories of Ghent University. The methods being developed relate to the representation of musical audio, feature extraction, pattern matching, the application of fuzzy logic, causal networks, and databases.

  • The Institute for Psychoacoustics and Electronic Music (IPEM), Department of Musicology.
  • The Electronics and Information Systems Department (ELIS).
  • The Department of Applied Mathematics, Biometrics and Process Control (KERMIT).
  • The Department of Applied Mathematics and Computer Science (TWI).

Contact

For more information on the MAMI project, or for collaboration proposals, contact us at the following email address: MAMI.info@ipem.UGent.be

Promotors:

  • Prof. Dr. Marc Leman (IPEM)
  • Prof. Dr. ir. Jean-Pierre Martens (ELIS)
  • Prof. Dr. Bernard De Baets (KERMIT)
  • Prof. Dr. Hans De Meyer (TWI)

Researchers:

  • Micheline Lesaffre (IPEM)
  • Koen Tanghe (IPEM)
  • Tom De Mulder (ELIS)
  • Dirk Van Steelant (KERMIT, left the project in October 2004)
  • Sven Degroeve (KERMIT, replaced Dirk in January 2005)
  • Gaëtan Martens (TWI)