Deep learning is the predominant machine learning paradigm in natural language processing (NLP). Not only has this approach brought large performance improvements across a wide variety of NLP tasks, it has also made it much easier to integrate real-world knowledge and visual information into NLP systems. A major drawback of deep learning, however, is its need for massive amounts of training data. In several of the topics you will therefore look into methods that can cope with this issue, e.g. by automatically creating noisy training data (a brief code sketch of this idea follows the topic list).
Lecturer: Dietrich Klakow
Location: to be announced
Time: block course during the spring break 2020
Application for participation: closed
Exam date: there is a nominal exam date of December 2nd in order to create a HISPOS registration deadline. Please keep that in mind and register in HISPOS in time.
List of Topics:
- Learning to Reweight Examples for Robust Deep Learning by Mengye Ren, Wenyuan Zeng, Bin Yang and Raquel Urtasun
- Learning from noisy labels with distillation by Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo and Li-Jia Li
- Learning with noisy labels by Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep K. Ravikumar and Ambuj Tewari
- Generalized cross entropy loss for training deep neural networks with noisy labels by Zhilu Zhang and Mert Sabuncu
- Pre-Learning Environment Representations for Data-Efficient Neural Instruction Following by David Gaddy and Dan Klein
- Joint concept learning and semantic parsing from natural language explanations by Shashank Srivastava, Igor Labutov and Tom Mitchell
- Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning by Abhishek Das, Satwik Kottur, Jose M. F. Moura, Stefan Lee and Dhruv Batra
- Visual coreference resolution in visual dialog using neural module networks by Satwik Kottur, Jose M. F. Moura, Devi Parikh, Dhruv Batra and Marcus Rohrbach
- Active and semi-supervised learning in ASR: Benefits on the acoustic and language models by Thomas Drugman, Janne Pylkkonen and Reinhard Kneser
- Transfer learning for speech recognition on a budget by Julius Kunze, Louis Kirsch, Ilia Kurenkov, Andreas Krug, Jens Johannsmeier and Sebastian Stober
- Transfer learning for named-entity recognition with neural networks by Ji Young Lee, Franck Dernoncourt and Peter Szolovits
- Semi-supervised sequence modeling with cross-view training by Kevin Clark, Minh-Thang Luong, Christopher D. Manning and Quoc V. Le
- A Survey on Transfer Learning by Sinno Jialin Pan and Qiang Yang
- Taskonomy: Disentangling Task Transfer Learning by Amir R. Zamir, Alexander Sax, William Shen, Leonidas Guibas, Jitendra Malik and Silvio Savarese
- Deep Transfer Learning with Joint Adaptation Networks by Mingsheng Long, Han Zhu, Jianmin Wang and Michael I. Jordan
- Learning Transferable Features with Deep Adaptation Networks by Mingsheng Long, Yue Cao, Jianmin Wang and Michael I. Jordan
- Optimal Bayesian Transfer Learning by Alireza Karbalayghareh, Xiaoning Qian and Edward R. Dougherty
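To make the idea of automatically created noisy training data concrete, below is a minimal sketch of distant supervision for named-entity recognition, one common way to generate noisy labels without manual annotation. The gazetteer, example sentence, and function name are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of distant supervision: match a small entity
# dictionary (gazetteer) against raw text to produce noisy
# NER-style labels automatically instead of annotating by hand.
# The gazetteer and sentence are toy examples (assumptions).

GAZETTEER = {"klakow": "PER", "saarbruecken": "LOC", "hispos": "ORG"}

def noisy_labels(sentence):
    """Tag each token with a gazetteer label if one matches, else 'O'."""
    return [(tok, GAZETTEER.get(tok.lower().strip(".,"), "O"))
            for tok in sentence.split()]

print(noisy_labels("Dietrich Klakow teaches in Saarbruecken"))
# -> [('Dietrich', 'O'), ('Klakow', 'PER'), ('teaches', 'O'),
#     ('in', 'O'), ('Saarbruecken', 'LOC')]
```

Labels produced this way are systematically noisy (ambiguous or out-of-gazetteer tokens are mislabeled, as "Dietrich" is here), which is exactly the setting addressed by the noisy-label papers in the list.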