The sub-programme Models for human and automatic speech recognition combines three closely connected research lines. The first is directed towards improving automatic speech recognition systems, guided by insights from human speech recognition. The second applies insights from speech technology to the development of comprehensive computational models of human speech recognition. The third applies the insights obtained from modelling automatic and human speech recognition to multimodal human-machine interaction, computer-assisted language learning and augmented communication.

The main focus of this sub-programme is the development of novel models of automatic and human speech recognition. We are building improved models for automatic speech recognition by integrating knowledge about essential aspects of human speech processing. At the same time, we are applying successful techniques from the field of automatic speech recognition to develop comprehensive computational models of human speech recognition. We plan to continue and extend our research in modelling pronunciation variation and to further enhance our existing procedures for creating automatic phonetic transcriptions. We are also continuing our research in the field of robust automatic speech recognition, informed by the theory of active perception. A new line of research addresses the contribution that non-verbal, mainly prosodic, information in the speech signal can make to the recognition and interpretation of the verbal message. The long-term goal of this research is a novel architecture for speech recognition that can serve both as an operational recognition engine and as a realistic model of human speech recognition.

Application-directed research is concerned with multimodal human-machine interaction, computer-assisted language learning and augmented communication.
In multimodal interaction the emphasis is on combining speech and pen input in services designed for use with small mobile terminals. In addition, we intend to investigate dialogue models based on layered perception-action loops. Language learning applications address two different groups of users: learners of Dutch as a second language and users with communicative disorders. For both groups we are designing and testing computer-assisted systems for training and testing oral proficiency. The research aimed at alleviating communication impairments is conducted in close collaboration with the Sint Maartenskliniek, in the framework of the Knowledge Centre for Language and Speech Technology in Rehabilitation. Although the research in this sub-programme is concerned primarily with Dutch as the object language, we will also use English benchmark corpora when and where these are available, to facilitate comparison with results obtained in the international community. Not surprisingly, the Spoken Dutch Corpus and its emerging extensions play an important role in this sub-programme.