
Hallucinations, Faithfulness, and Factuality in Natural Language Generation
Course Description
Degree program | Module code | Credits |
---|---|---|
BA-2010 [100% / 75%] | CS-CL | 6 LP |
BA-2010 [50%] | BS-CL | 6 LP |
BA-2010 [25%] | BS-AC, BS-FL | 4 LP |
BA-2010 | AS-CL | 8 LP |
Master | SS-CL, SS-TAC | 8 LP |
Lecturer | Julius Steen |
Course type | Seminar |
Language | English |
First session | 18.04.2023 |
Time and place | Tuesday, 13:15-14:45, INF 306 / SR 18 |
Commitment deadline | tbd. |
Prerequisites
- Completion of Programming I and Introduction to Computational Linguistics or similar introductory courses
- Mathematical Foundations of Computational Linguistics and Statistics (or equivalent) are strongly recommended
Assessment
- Active Participation
- Presentation
- Second presentation or implementation project
Content
While modern natural language generation (NLG) systems have made impressive progress in recent years, a persistent issue is that they tend to generate output that is not supported by their input. This phenomenon is often referred to as hallucination and has been observed across many popular NLP tasks, such as dialogue generation, summarization, and machine translation. Since hallucinations reduce the trustworthiness of NLG models, and thus their applicability in real-world scenarios, much effort has gone into understanding and mitigating this problem.
In this seminar, we will look at possible causes of hallucinations, how they can be mitigated both during inference and during training, and how automatic metrics can be designed to detect them.
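To give a concrete flavor of the metrics topic: a common family of faithfulness metrics (e.g., SummaC) scores a generated text by how strongly it is entailed by its source, using a natural language inference (NLI) model. Below is a minimal sketch of this idea, assuming the Hugging Face `transformers` library and the public `roberta-large-mnli` checkpoint; the `faithfulness_score` helper is illustrative and not part of the course materials.

```python
# Minimal sketch of an entailment-based faithfulness metric.
# Assumption: `transformers` is installed and the public
# `roberta-large-mnli` checkpoint is available.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def faithfulness_score(source: str, generated: str) -> float:
    """Probability that the source entails the generated text.

    A low entailment probability is one signal that the output may
    contain hallucinated content unsupported by the input.
    """
    # top_k=None returns scores for all three NLI labels
    # (CONTRADICTION, NEUTRAL, ENTAILMENT); keep the entailment score.
    scores = nli({"text": source, "text_pair": generated}, top_k=None)
    return next(s["score"] for s in scores if s["label"] == "ENTAILMENT")

source = "The meeting was moved from Monday to Wednesday afternoon."
summary = "The meeting now takes place on Friday."
print(f"entailment probability: {faithfulness_score(source, summary):.3f}")
```

Metrics in the literature additionally handle long documents, typically by splitting source and output into sentence pairs and aggregating the pairwise scores; this sketch only scores a single pair.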
Seminar Schedule
Date | Session | Materials |
---|---|---|