Deep learning is the predominant machine learning paradigm in natural language processing (NLP). This approach has not only brought huge performance improvements across a large variety of NLP tasks; it has also made it much easier to integrate real-world knowledge and visual information into NLP systems. However, a big problem of deep learning is its need for massive amounts of training data. Therefore, in some of the topics you will have to look into methods that can cope with this issue, e.g. by automatically creating noisy training data.
Lecturer: Dietrich Klakow
Location: to be announced
Time: block course during the spring break of 2022
Application for participation (CoLi/LST/LCT only!): you can register here for the waiting list. However, the chances of getting a slot are tiny.
Exam date: there is a nominal exam date of December 15th in order to create a HISPOS registration deadline. Please keep that in mind and register in HISPOS in time.
List of Topics:
- Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference
- Few-Shot Named Entity Recognition: A Comprehensive Study
- Example-Based Named Entity Recognition
- Adaptive Subspaces for Few-Shot Learning
- A Theory of Self-Supervised Framework for Few-Shot Learning
- A Sample Complexity Separation between Non-Convex and Convex Meta-Learning
- On the Validity of Modeling SGD with Stochastic Differential Equations (SDEs)
- How Important is the Train-Validation Split in Meta-Learning?
- When MAML Can Adapt Fast and How to Assist When It Cannot
- Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
- Neural Architecture Search without Training
- What is the State of Neural Network Pruning?
- Pruning neural networks without any data by iteratively conserving synaptic flow
- Expressivity of Deep Neural Networks
The list is still to be ordered and links are still to be added; however, each title is a unique search string that will take you to the corresponding paper.