Ruprecht-Karls-Universität Heidelberg
Institut für Computerlinguistik


Ethics in NLP: Bias and Dual Use

Course Description

Degree programme        Module code     Credits
BA-2010 [100% | 75%]    CS-CL           6 LP
BA-2010 [50%]           BS-CL           6 LP
BA-2010 [25%]           BS-CL           4 LP
BA-2010                 AS-CL           8 LP
Master                  SS-CL, SS-TAC   8 LP

Lecturers               Michael Strube, Pan Shimei
Course type             Proseminar/Hauptseminar
Language                English
First session           18.10.2022
Time and place          Tuesdays, 15:15-16:45, INF 327 / SR 3
Commitment deadline     tbd.

Content

NLP applications are widely used in everyday life: web search, grammar correction, machine translation, chatbots/virtual assistants, etc. They are commonly available on our computers and mobile phones. Moreover, very large pretrained language models such as BERT and GPT-3 are at the core of many applications that understand and generate natural language. Since these models are mostly trained on human-generated data (e.g., text from the web and social media), they frequently inherit human biases and prejudices. In this seminar, we will discuss the implications of this. We will answer questions such as "How can we assess the bias in NLP models and data?" and "How can we debias language models and NLP applications?" Bias assessment and mitigation will be the focus of the first half of the seminar.

The second half will be dedicated to dual use: NLP helps not only us, but also e-commerce companies learning more about their customers, the advertising industry placing personalized advertisements, authoritarian governments censoring posts on microblogs and social networks, and secret services searching phone calls and emails not just for keywords but for content. In the seminar we will look at methods and applications from sentiment analysis, machine translation, text mining, NLP and social media, NLP in health applications, etc., and question their ethical implications and their impact on society.
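To give a flavour of what "assessing bias" means in practice, the sketch below implements the core idea of a WEAT-style association test (in the spirit of Caliskan et al. 2017 from the reading list): measure whether one set of target words (e.g., flower names) sits closer, in embedding space, to "pleasant" attribute words than another set (e.g., insect names) does. The word vectors here are tiny hand-made toy vectors chosen purely for illustration, not outputs of any real model; variable names and set contents are our own assumptions.

```python
from math import sqrt
from statistics import mean, pstdev

# Toy 2-d "embeddings" -- hand-crafted for illustration, NOT real model vectors.
vecs = {
    "flower1": (0.8, 0.2), "flower2": (0.9, 0.3),          # target set X
    "insect1": (0.2, 0.8), "insect2": (0.3, 0.9),          # target set Y
    "pleasant1": (1.0, 0.0), "pleasant2": (0.9, 0.1),      # attribute set A
    "unpleasant1": (0.0, 1.0), "unpleasant2": (0.1, 0.9),  # attribute set B
}

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assoc(w, A, B):
    """s(w, A, B): how much more strongly w associates with A than with B."""
    return (mean(cos(vecs[w], vecs[a]) for a in A)
            - mean(cos(vecs[w], vecs[b]) for b in B))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: difference of mean associations,
    normalized by the pooled standard deviation."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (mean(sx) - mean(sy)) / pstdev(sx + sy)

A = ["pleasant1", "pleasant2"]
B = ["unpleasant1", "unpleasant2"]
d = weat_effect_size(["flower1", "flower2"], ["insect1", "insect2"], A, B)
print(f"effect size d = {d:.2f}")  # positive: flowers lean "pleasant"
```

A positive effect size means the first target set is more associated with the "pleasant" attributes; on real embeddings the same test has revealed, e.g., gender and racial associations. The seminar discusses such tests and their limits in detail.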

Literature

  • Blue, Ethan et al. (2014). Engineering and War: Militarism, Ethics, Institutions, Alternatives. Morgan & Claypool Publishers.
  • Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan (2017). Semantics Derived Automatically from Language Corpora Contain Human-like Biases. In Science, 356, pp. 183-186.
  • Church, Kenneth Ward and Valia Kordoni (2021). Emerging Trends: Ethics, Intimidation, and the Cold War. In Natural Language Engineering, 27, pp. 379-390.
  • Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

