Room acoustics and transmission effects, conventionally referred to as "convolutional noise", are a longstanding and unsolved problem in Automatic Speech Recognition (ASR). As a result, ASR systems can only function reliably when the input speech is recorded with a microphone that captures predominantly direct sound, such as a close-talking microphone. This project aims to solve this problem and thereby enable the development and commercialization of much more robust ASR systems, in which microphone positioning matters far less and close-talking or directional microphones are no longer required for reliable speech recognition. This may be enabling technology for a new generation of intelligent interfaces in domains such as command-and-control, domotics, computer interfacing, automatic transcription of meetings, etc.

Although ignored in traditional ASR systems, considerable cognitive processing is required to extend the operating domain of even a very simple keyword-spotting system to an environment in which, acoustically, anything can happen at any time. This project therefore also searches for the kind of knowledge from cognitive science (especially linguistics and psycholinguistics) that is essential for making the transition from a closed operating domain to open domains.
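The term "convolutional noise" reflects the standard signal model in which the distant-microphone signal is the linear convolution of the clean speech with the room impulse response (RIR). The following minimal sketch, assuming NumPy and using an illustrative function name and a toy synthetic RIR (not part of this project's materials), shows why reverberant speech is modeled this way:

```python
import numpy as np

def simulate_room(clean: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Apply a room impulse response to a clean speech signal.

    Reverberant (distant-microphone) speech is modeled as the linear
    convolution of the clean signal with the RIR, which is why this
    kind of distortion is called "convolutional noise".
    """
    return np.convolve(clean, rir)

# Toy example: a unit impulse as "speech" and a two-tap RIR
# (direct path plus one delayed, attenuated reflection).
clean = np.zeros(8)
clean[0] = 1.0
rir = np.array([1.0, 0.0, 0.5])  # reflection 2 samples later, half amplitude

reverberant = simulate_room(clean, rir)
# The impulse is smeared in time: the direct sound at sample 0 is
# followed by the reflection at sample 2.
```

Because the distortion multiplies the speech spectrum rather than adding to it, it cannot be removed by the additive-noise techniques common in ASR front ends, which is one reason the problem remains unsolved.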