Block course
Time & Location: kick-off meeting in MS Teams; presentation meetings tentatively in the last 2-3 weeks of September or the first 2-3 weeks of October
Teacher: Dr Volha Petukhova
*** Announcements ***
Registration CLOSED
Kick-off: 19.04.2021 at 11am in MS Teams
Kick-off & Introduction slides: see TEAMS Class Material
Suitable for: CoLi, CS and CuK
Organization:
We plan to hold a first planning meeting early in the semester. For the actual seminar (times and papers will be decided via a Doodle poll), each participant will give a 30-minute talk followed by a 10-minute discussion (participation in the discussions will also be graded). After the talk, the presenter prepares a short report of about 10 pages and hands it in for grading.
Grading: 40% based on the talk, 40% based on the report, 20% based on participation in discussions.
Term paper:
Topics:
Situated interaction;
Understanding and generation of multimodal human dialogue behavior;
Social signals/affective computing;
Multimodal dialogue modelling;
Multimodal dialogue systems & applications
* Each talk will be based on a research paper.
Cognition: cognitive states, affective states and cognitive agents
1. Barsalou, L. W. (2015). Situated conceptualization: Theory and application. In Perceptual and Emotional Embodiment: Foundations of Embodied Cognition. East Sussex: Psychology Press.
2. Zhang, T., Hasegawa-Johnson, M., & Levinson, S. E. (2006). Cognitive state classification in a spoken tutorial dialogue system. Speech communication, 48(6), 616-632.
3. Sims, S. D., & Conati, C. (2020, October). A neural architecture for detecting user confusion in eye-tracking data. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 15-23).
4. Cumbal, R., Lopes, J., & Engwall, O. (2020, October). Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 625-629).
5. D’Mello, S., Craig, S., Fike, K., & Graesser, A. (2009, July). Responding to learners’ cognitive-affective states with supportive and shakeup dialogues. In International Conference on Human-Computer Interaction (pp. 595-604). Springer, Berlin, Heidelberg.
Multimodality: multimodal expressions, annotations and tools
6. Kucherenko, T., Jonell, P., van Waveren, S., Henter, G. E., Alexandersson, S., Leite, I., & Kjellström, H. (2020, October). Gesticulator: A framework for semantically-aware speech-driven gesture generation. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 242-250).
7. Lin, V., Girard, J. M., Sayette, M. A., & Morency, L. P. (2020, October). Toward Multimodal Modeling of Emotional Expressiveness. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 548-557).
8. Blomsma, P. A., Vaitonyte, J., Alimardani, M., & Louwerse, M. M. (2020, October). Spontaneous Facial Behavior Revolves Around Neutral Facial Displays. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
9. Fitrianie, S., Bruijnes, M., Richards, D., Bönsch, A., & Brinkman, W. P. (2020, October). The 19 Unifying Questionnaire Constructs of Artificial Social Agents: An IVA Community Analysis. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
10. Jonell, P., Kucherenko, T., Henter, G. E., & Beskow, J. (2020, October). Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
11. Feng, D., & Marsella, S. (2020, October). An Improvisational Approach to Acquire Social Interactions. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
12. Baltrušaitis, T., Robinson, P., & Morency, L. P. (2016, March). Openface: an open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1-10). IEEE.
Multimodal fusion, dialogue modelling and management
13. Bao, C., Fountas, Z., Olugbade, T., & Bianchi-Berthouze, N. (2020, October). Multimodal Data Fusion based on the Global Workspace Theory. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 414-422).
14. Hirano, Y., Okada, S., Nishimoto, H., & Komatani, K. (2019, October). Multitask Prediction of Exchange-level Annotations for Multimodal Dialogue Systems. In 2019 International Conference on Multimodal Interaction (pp. 85-94).
15. Pecune, F., & Marsella, S. (2020, October). A framework to co-optimize task and social dialogue policies using Reinforcement Learning. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
16. Johnson, E., & Gratch, J. (2020, October). The Impact of Implicit Information Exchange in Human-agent Negotiations. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
Multimodal dialogue systems & applications
17. Kawasaki, M., Yamashita, N., Lee, Y. C., & Nohara, K. (2020, October). Assessing Users’ Mental Status from their Journaling Behavior through Chatbots. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
18. Zalake, M., Tavassoli, F., Griffin, L., Krieger, J., & Lok, B. (2019, July). Internet-based Tailored Virtual Human Health Intervention to Promote Colorectal Cancer Screening: Design Guidelines from Two User Studies. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 73-80).
19. Tavabi, L., Stefanov, K., Nasihati Gilani, S., Traum, D., & Soleymani, M. (2019, October). Multimodal Learning for Identifying Opportunities for Empathetic Responses. In 2019 International Conference on Multimodal Interaction (pp. 95-104).
20. Hoegen, R., Aneja, D., McDuff, D., & Czerwinski, M. (2019, July). An end-to-end conversational style matching agent. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 111-118).
21. Steinert, L., Putze, F., Küster, D., & Schultz, T. (2020, October). Towards Engagement Recognition of People with Dementia in Care Settings. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 558-565).
22. Bickmore, T., Rubin, A., & Simon, S. (2020, October). Substance Use Screening using Virtual Agents: Towards Automated Screening, Brief Intervention, and Referral to Treatment (SBIRT). In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-7).
For any questions, please send an email to:
v.petukhova@lsv.uni-saarland.de
Use subject tag: [MDS_2021]