Block course
Time & Location: kick-off meeting in April-May; presentation meetings indicatively in the last 2-3 weeks of September or the first 2-3 weeks of October
Teacher: Dr Volha Petukhova
*** Announcements ***
Meetings take place on 21.09, 22.09, 26.09 and 27.09 in C 7.4, AQUARIUM (Top Floor)
Registration CLOSED
Kick-off & Introduction slides: PDF
Suitable for: CoLi, CS and CuK
Organization:
We plan to hold a first planning meeting early in the semester; the schedule and paper assignments for the actual seminar will be decided via a Doodle poll. In the seminar, each participant gives a 30-minute talk followed by a 10-minute discussion (participation in discussions will also be graded). After the talk, the presenter prepares a short report of about 10 pages and hands it in for grading.
Grading: 40% based on the talk, 40% on the report, and 20% on participation in discussions.
Term paper:
Topics:
Situated interaction;
Understanding and generation of multimodal human dialogue behavior;
Social signals/affective computing;
Multimodal dialogue modelling;
Multimodal dialogue systems & applications
* Each talk will be based on a research paper.
Cognition: cognitive states, affective states and cognitive agents
1. Barsalou, Lawrence W. “Situated conceptualization: theory and application.” Perceptual and Emotional Embodiment: Foundations of Embodied Cognition. Psychology Press: East Sussex (2015) PDF
2. Zhang, T., Hasegawa-Johnson, M., & Levinson, S. E. (2006). Cognitive state classification in a spoken tutorial dialogue system. Speech Communication, 48(6), 616-632.
3. Sims, S. D., & Conati, C. (2020, October). A neural architecture for detecting user confusion in eye-tracking data. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 15-23).
4. Cumbal, R., Lopes, J., & Engwall, O. (2020, October). Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 625-629).
5. D’Mello, S., Craig, S., Fike, K., & Graesser, A. (2009, July). Responding to learners’ cognitive-affective states with supportive and shakeup dialogues. In International Conference on Human-Computer Interaction (pp. 595-604). Springer, Berlin, Heidelberg.
Multimodality: multimodal expressions, annotations and tools
6. Dudzik, B., Columbus, S., Hrkalovic, T. M., Balliet, D., & Hung, H. (2021, October). Recognizing Perceived Interdependence in Face-to-Face Negotiations through Multimodal Analysis of Nonverbal Behavior. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 121-130).
7. Lin, V., Girard, J. M., Sayette, M. A., & Morency, L. P. (2020, October). Toward Multimodal Modeling of Emotional Expressiveness. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 548-557).
8. Blomsma, P. A., Vaitonyte, J., Alimardani, M., & Louwerse, M. M. (2020, October). Spontaneous Facial Behavior Revolves Around Neutral Facial Displays. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
9. Huang, J., Lin, Z., Yang, Z., & Liu, W. (2021, October). Temporal Graph Convolutional Network for Multimodal Sentiment Analysis. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 239-247).
10. Jonell, P., Kucherenko, T., Henter, G. E., & Beskow, J. (2020, October). Let’s face it: Probabilistic multi-modal interlocutor-aware generation of facial gestures in dyadic settings. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
11. Feng, D., & Marsella, S. (2020, October). An Improvisational Approach to Acquire Social Interactions. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
12. Baltrušaitis, T., Robinson, P., & Morency, L. P. (2016, March). OpenFace: an open source facial behavior analysis toolkit. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1-10). IEEE.
Multimodal fusion, dialogue modelling and management
13. Hirano, Y., Okada, S., & Komatani, K. (2021, October). Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal Dialogue Systems. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 141-149).
14. Hirano, Y., Okada, S., Nishimoto, H., & Komatani, K. (2019, October). Multitask Prediction of Exchange-level Annotations for Multimodal Dialogue Systems. In 2019 International Conference on Multimodal Interaction (pp. 85-94).
15. Pecune, F., & Marsella, S. (2020, October). A framework to co-optimize task and social dialogue policies using Reinforcement Learning. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
16. Han, W., Chen, H., Gelbukh, A., Zadeh, A., Morency, L. P., & Poria, S. (2021, October). Bi-bimodal modality fusion for correlation-controlled multimodal sentiment analysis. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 6-15).
17. Johnson, E., & Gratch, J. (2020, October). The Impact of Implicit Information Exchange in Human-agent Negotiations. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
Multimodal dialogue systems & applications
18. Kawasaki, M., Yamashita, N., Lee, Y. C., & Nohara, K. (2020, October). Assessing Users’ Mental Status from their Journaling Behavior through Chatbots. In Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents (pp. 1-8).
19. Tavabi, L., Stefanov, K., Nasihati Gilani, S., Traum, D., & Soleymani, M. (2019, October). Multimodal Learning for Identifying Opportunities for Empathetic Responses. In 2019 International Conference on Multimodal Interaction (pp. 95-104).
20. Pantazopoulos, G., Bruyere, J., Nikandrou, M., Boissier, T., Hemanthage, S., Sachish, B. K., … & Lemon, O. (2021, October). ViCA: Combining visual, social, and task-oriented conversational AI in a healthcare setting. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 71-79).
21. Steinert, L., Putze, F., Küster, D., & Schultz, T. (2020, October). Towards Engagement Recognition of People with Dementia in Care Settings. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 558-565).
22. Speer, S., Hamner, E., Tasota, M., Zito, L., & Byrne-Houser, S. K. (2021, October). MindfulNest: Strengthening Emotion Regulation with Tangible User Interfaces. In Proceedings of the 2021 International Conference on Multimodal Interaction (pp. 103-111).
For any questions, please send an email to:
v.petukhova@lsv.uni-saarland.de
Use subject tag: [MDS_2022]