Director, Undergraduate Studies for Data Science; Associate Professor, Informatics and Computing, Indiana University
College of Arts & Sciences Associate Dean for Diversity and Inclusion; Professor, English, Indiana University
Graduate Research Assistant, Center for Research on Learning and Technology, Indiana University
Director, Data to Insight Center; Michael A and Laurie Burns McRobbie Bicentennial Professor of Computer Engineering, Indiana University
Coded Bias: Untangling the web of racially biased artificial intelligence
In this interactive Q&A hosted by the Indiana University Center of Excellence for Women & Technology, three data science experts sit down with an associate dean for diversity and inclusion to discuss the problem of bias in facial recognition software.
The film explores the implications of MIT Media Lab researcher Joy Buolamwini's startling discovery that racial bias is written into the code of facial recognition algorithms.
Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence (AI) increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, AI is not neutral, and women are leading the charge to ensure our civil rights are protected.
Image courtesy of Shalini Kantayya