Bias in NLP: Intersection of AI and Society
AI and NLP technologies are increasingly being used to make consequential decisions, such as determining whether someone is hired, offered a loan, or granted parole. Unfortunately, a wide range of recent discoveries has revealed biased AI/NLP systems that exhibit prejudice against certain groups of people, such as women or people of color. There is growing concern that vulnerable groups in our society could be harmed by biased AI/NLP systems. In this talk, I will present some of our recent research on assessing and mitigating bias in AI/NLP systems. I will also emphasize the importance of interdisciplinary collaboration on this research topic, which lies at the intersection of AI and society; an algorithmic solution by itself is often insufficient to achieve its intended societal goals.