My research is concerned with representation learning and multi-modal machine learning in the field of natural language processing (NLP).
In particular, I’m interested in how we can learn better representations of natural language by leveraging world knowledge.
Further, I am interested in adversarial machine learning and its applications to interpreting and understanding machine learning models.
Building C7 1 Room 0.11
Saarland University, Saarland Informatics Campus
You can also contact me by email: mmosbach at lsv dot uni-saarland dot de
Publications & Preprints
Below you will find some of my recent publications and preprints. You can also check my Google Scholar profile.
- Logit Pairing Methods Can Fool Gradient-Based Attacks
NeurIPS 2018 Workshop on Security in Machine Learning, December 2018, Montreal, Canada
Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow
- Adversarial Initialization – When your network performs the way I want
arXiv preprint arXiv:1902.03020
Kathrin Grosse, Thomas A. Trost, Marius Mosbach, Michael Backes, and Dietrich Klakow
- incom.py – A Toolbox for Calculating Linguistic Distances and Asymmetries between Related Languages
RANLP 2019, September 2019, Varna, Bulgaria
Marius Mosbach, Irina Stenger, Tania Avgustinova and Dietrich Klakow
- Some steps towards the generation of diachronic WordNets
NoDaLiDa 2019, October 2019, Turku, Finland
Yuri Bizzoni, Marius Mosbach, Dietrich Klakow and Stefania Degaetano-Ortlieb
I’m currently involved in supervising the following students:
- Daria Pylypenko (Impact of real-world knowledge on the decodability of natural language commands in the navigation environment – Master Thesis)
- Anilkumar Erapanakoppal Swamy (Analyzing the generalization ability of Transformers for solving combinatorial optimization problems – Master Thesis)
- Sven Stauden (Robustness of Transfer Learning Approaches in NLP – Master Thesis)