Neural Methods in NLP
|Master||SS-CL, SS-TAC||8 LP|
Please note: The course cannot take place in the second week of lectures and therefore starts in the first week of lectures, Thu 19 April.
|Time and location||Thu, 11:15–12:45, INF 325 / SR 24 (SR)|
- Statistical Methods for CL
- preferably: Neural Networks: Architectures and Applications for NLP
- Reading and active participation
- Presentation, completion of smaller assignments
- Term paper, project, or additional work in the seminar
In order to decide which type of neural architecture to apply for a specific NLP task,
we need to understand both neural learning techniques and the nature of the NLP tasks
that we want to address, with their specific characteristics and challenges.
In this seminar we will study how different neural network architectures and learning methods are tailored to diverse NLP tasks, so that the resulting systems capture the underlying linguistic phenomena, address task-specific challenges, and achieve good performance.
Besides traditional NLP tasks, such as sequence labeling, syntactic and semantic parsing, relation extraction, or coreference resolution, we will also look at more complex NLU tasks and applications, such as reading comprehension and inference, natural language generation in specific styles, and multimodal NLP tasks, e.g., visual QA.
The literature will be selected to offer a broad spectrum of techniques. Special topics will include multilingual methods, structured prediction and graph-based modeling, how to deal with sparse data (e.g., via multi-task learning), and how to inspect and interpret the learned neural models.
A list of reading materials is provided here.
You can see some of our investigations in this blog.