User-Interface-Enabling Algorithms 

Speech is one of the key modalities for human-machine interaction. Specific applications such as simple dialogue systems for access to highly structured databases are well established, and various companies already provide such systems. However, more sophisticated applications often suffer from a lack of robustness. We develop new algorithms for robust speech recognition. To that end, we have set up a living-room environment in our lab that allows us to test newly developed methods under realistic conditions. Other application scenarios involve mobile devices such as smartphones. Our ambition is to integrate speech with other modalities to make human-machine interaction as natural as possible.

Statistical Spoken Language Processing 

Spoken language systems, such as dialogue systems or systems for spoken document retrieval, rely on models of human language. Traditionally, in the speech community, these models capture only relatively short-range dependencies (trigrams). Our goal is to derive language models that go beyond trigrams and capture statistical dependencies at the sentence level and beyond. This requires close interaction with, and integration of, linguistic knowledge. To test the newly developed models, we investigate applications such as advanced information retrieval and extraction.
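
To make the trigram notion concrete, the following is a minimal sketch of a maximum-likelihood trigram language model, which estimates the probability of a word from its two predecessors. This is an illustration only, not the group's actual system; the function names and the toy corpus are invented for the example.

```python
from collections import defaultdict

def train_trigram_model(sentences):
    """Count trigrams and their two-word histories in tokenized sentences.

    Each sentence is padded with <s> <s> at the start and </s> at the end,
    so every word has a complete two-word history.
    Returns (trigram_counts, history_counts).
    """
    tri = defaultdict(int)
    hist = defaultdict(int)
    for sent in sentences:
        tokens = ["<s>", "<s>"] + sent + ["</s>"]
        for i in range(2, len(tokens)):
            history = (tokens[i - 2], tokens[i - 1])
            tri[history + (tokens[i],)] += 1
            hist[history] += 1
    return tri, hist

def trigram_prob(tri, hist, w1, w2, w3):
    """Maximum-likelihood estimate of P(w3 | w1, w2)."""
    if hist[(w1, w2)] == 0:
        return 0.0
    return tri[(w1, w2, w3)] / hist[(w1, w2)]

# Toy corpus (illustrative).
corpus = [
    ["the", "user", "asks", "a", "question"],
    ["the", "user", "asks", "again"],
]
tri, hist = train_trigram_model(corpus)
print(trigram_prob(tri, hist, "the", "user", "asks"))  # 1.0
print(trigram_prob(tri, hist, "user", "asks", "a"))    # 0.5
```

The limitation the text refers to is visible here: the model conditions on only two preceding words, so any dependency spanning a whole sentence (agreement, topic, discourse structure) is invisible to it; in practice such models also need smoothing to handle unseen trigrams.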