Context-based sound event recognition

Title: Context-based sound event recognition
Period: 11/2005 - 10/2010
Status: Completed
Dissertation: Yes
Research number: OND1316457
Data supplier: Website BCN

Abstract

Various real-world environments could benefit from a robotic agent that alerts the inhabitants to important events or helps with specific person- and environment-related tasks. Consider, for example, the apartment of an elderly person, where an intelligent agent takes over certain tasks of care personnel and acts as an advanced alarm system: whenever something out of the ordinary happens (a kettle left boiling, a glass breaking, or more critical events such as a person falling), the agent responds appropriately, for instance by giving a sign to the elderly person or by alerting care personnel.

The iCat, a user-interface robot with the appearance of a cat, will be used as a first experimentation platform for this agent. The iCat is capable of showing emotions, which allows it to communicate with its user in a natural way, and because of its natural appearance its use can be extended with social and entertainment functions. Through different modalities the iCat will acquire a realistic situational awareness that enables it to respond appropriately. The Boon Companion project as a whole aims at collecting the knowledge necessary for the implementation of such intelligent companions.

The current subproject focuses on the component that deals with sound recognition. We believe that robust and cognitively realistic situational awareness requires methods of sound analysis that are compatible with human perception of the salience and relevance of sounds in the home environment. A thorough understanding of sounds, their physical nature, and their temporal evolution will make it possible to classify and recognize arbitrary sounds. We will develop an architecture that allows new sound classes to be added to the system without requiring an undue amount of machine learning and without a substantial increase in system complexity as the number of sound classes grows. Apart from tangible deliverables in the form of working models and systems, the project will yield fundamental insights into auditory cognition: sound taxonomies, sound classification, and cognitive architectures for multimodal selective attention.
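The goal of adding new sound classes without retraining the whole system can be illustrated with a minimal nearest-prototype sketch. This is only an assumption-laden illustration, not the project's actual architecture: the class name PrototypeSoundClassifier, the crude log-spectrum features, and the synthetic "kettle" and "glass" signals below are all hypothetical stand-ins.

```python
import numpy as np


class PrototypeSoundClassifier:
    """Nearest-prototype classifier: each sound class is stored as the mean
    of its example feature vectors, so registering a new class adds one
    prototype and leaves the existing classes untouched (illustrative only)."""

    def __init__(self):
        self.prototypes = {}  # class label -> mean feature vector

    @staticmethod
    def features(signal, frame=1024):
        """Very simple spectral features: the average log-magnitude spectrum
        over fixed-length frames (a stand-in for real auditory features)."""
        n = (len(signal) // frame) * frame
        frames = signal[:n].reshape(-1, frame)
        spectra = np.abs(np.fft.rfft(frames, axis=1))
        return np.log1p(spectra).mean(axis=0)

    def add_class(self, label, examples):
        """Register a new sound class from a list of raw example signals."""
        feats = np.stack([self.features(x) for x in examples])
        self.prototypes[label] = feats.mean(axis=0)

    def classify(self, signal):
        """Return the label whose prototype is closest to the input signal."""
        f = self.features(signal)
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - f))


# Toy usage with synthetic signals standing in for recorded household sounds.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
kettle = [np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
          for _ in range(5)]
glass = [np.sin(2 * np.pi * 3000 * t) * np.exp(-5 * t)
         + 0.1 * rng.standard_normal(t.size) for _ in range(5)]

clf = PrototypeSoundClassifier()
clf.add_class("kettle boiling", kettle)
clf.add_class("glass breaking", glass)  # new classes are added incrementally
print(clf.classify(kettle[0]))
```

In such a prototype-based design, the cost of adding a class is one feature vector rather than a full retraining pass, which is one simple way to keep system complexity from growing substantially with the number of sound classes.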

Related people

Supervisor: Prof.dr. L.R.B. Schomaker
Doctoral/PhD student: Dr. M.E. Niessen

Classification

D12300 Electromagnetism, optics, acoustics
D16600 Artificial intelligence, expert systems
D21100 Bioinformatics, biomathematics, biomechanics
