Hallucinations, Faithfulness, and Factuality in Natural Language Generation
|BA-2010 [25%]|BS-AC, BS-FL|4 LP|
|Master|SS-CL, SS-TAC|8 LP|
|Time and location|Tuesday, 13:15-14:45, INF 306 / SR 18|
- Completion of Programming I and Introduction to Computational Linguistics, or similar introductory courses
- Mathematical Foundations of Computational Linguistics and Statistics (or equivalent) is strongly recommended
- Active participation
- Seminar presentation or implementation project
While modern natural language generation (NLG) systems have made remarkable progress in recent years, a persistent issue with such systems is that they tend to generate output that is not supported by their input. This phenomenon is often referred to as hallucination and has been observed across many popular NLP tasks, such as dialog generation, summarization, and machine translation. Since hallucinations reduce the trustworthiness of NLG models, and thus their applicability in real-world scenarios, much effort has been devoted to understanding and mitigating this problem.
In this seminar, we will examine possible causes of hallucinations, how they can be mitigated both during training and during inference, and how automatic metrics can be designed to detect them.