Deep learning is the predominant machine learning paradigm in natural language processing (NLP). This approach has not only given huge performance improvements across a large variety of natural language processing tasks, but has also been extended to other data modalities such as graphs.
The topic of this edition of the seminar still needs to be decided.
Lecturer: Dietrich Klakow
Location: t.b.d.
Time: block course in the spring break 2024; however, preparations start earlier. Here is the specific timeline:
Closing topic doodle: tbd
Kick-Off: some time tbd (doodle)
One-page outline: tbd
Draft presentation: tbd
Practice talks and final talks will be during the spring break. Time/date will be decided during the kick-off.
Application for participation: see the CS seminar system (for CS, DSAI, VC, ES, …); for CoLI, LST, LCT use this registration system.
HISPOS registration deadline: 31.1.25
Grading (tentative):
- 5% one-page talk outline
- 10% draft presentation
- 10% practice talk
- 35% final talk
- 5% contributions to discussion during final talk of fellow participants
- 35% report
List of Topics (tentative):
- Transformer for Graphs: An Overview from Architecture Perspective
- A generalization of transformer networks to graphs
- GraphiT: Encoding graph structure in transformers
- Do transformers really perform badly for graph representation?
- Structure-aware transformer for graph representation learning
- Recipe for a general, powerful, scalable graph transformer
- Exphormer: Sparse transformers for graphs
- Relational attention: Generalizing transformers for graph-structured tasks
- Graph-based molecular representation learning
- Geometric deep learning on molecular representations
- MG-BERT: leveraging unsupervised atomic representation learning for molecular property prediction
- TopoFormer: Multiscale Topology-enabled Structure-to-Sequence Transformer for Protein-Ligand Interaction Predictions
Thanks to Anna Karnysheva for providing her paper list.