Learning from Limited Data in NLP (Seminar, WiSe 2022/23)

Time & Location: Mondays, 14:00 – 16:00, B3.1, Seminar Room 1.15.
First session: TBA
Teachers: Marius Mosbach, Dawei Zhu
Suitable for: Master's students in CS, LST, and related programs
Places: 12
Registration: TBA

Description:
While deep-learning-based Natural Language Processing (NLP) has made great progress over the past decade, one of the major bottlenecks in training deep neural networks (DNNs) is that they require substantial amounts of labeled training data. This can make deploying NLP models in real-world applications challenging, as data creation can be costly, time-consuming, or labor-intensive.

In recent years, there has been increasing interest in building NLP models that require less labeled data, and significant progress has been made. Recent advances in this field enable efficient learning from just a handful of labeled examples (few-shot learning). In addition, large-scale pre-trained language models can often achieve non-trivial performance on unseen NLP tasks without any task-specific training data (zero-shot learning).
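
To make the idea of zero-shot learning more concrete, the following minimal sketch (in Python, assuming the Hugging Face transformers library is installed; the model name, example sentence, and candidate labels are chosen purely for illustration) shows how a pre-trained language model can classify text into labels it was never explicitly fine-tuned on:

    from transformers import pipeline

    # Load a pre-trained NLI model as a zero-shot classifier
    # (the model choice here is an assumption for illustration).
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Classify a sentence into candidate labels the model has not been fine-tuned on.
    result = classifier(
        "The new graphics card delivers excellent performance for its price.",
        candidate_labels=["technology", "sports", "politics"],
    )
    print(result["labels"][0])  # prints the most likely label, e.g. "technology"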

This seminar aims to provide a broad and up-to-date overview of recent progress on zero- and few-shot learning in NLP. In particular, we will study recent papers to understand the challenges of learning from limited data and how pre-trained language models can be leveraged to make efficient learning in low-resource settings possible.

Suggested Papers:
TBA