PhD Dissertation Defense: Yanchen Wang
“Using Different Modes of Correction to Improve Fairness”
As the prevalence of artificial intelligence continues to grow, algorithmic decisions increasingly affect people's daily lives, e.g., credit approval, hiring, criminal justice, and student grading. Given the potential impact of algorithmic decision-making, discussions have emerged about the fairness of decisions made by machines, and researchers have proposed different strategies for improving fairness in machine learning models. Over the last decade, considerable research has gone into developing fairness metrics to measure algorithmic bias and into bias mitigation algorithms to improve fairness. However, these approaches do not work equally well across different types of prediction tasks. In this dissertation, we explore some of these tasks and identify which type of strategy best improves model fairness for each task. We first look at the state-of-the-art large language models that power chatbots and generative AI. Because most fairness metrics are designed for classification tasks, they are ill-suited to assessing fairness in chatbots and generative AI models; we therefore propose a values-based auditing framework to measure potential bias in chatbots. We then switch directions and focus on strategies for improving our understanding of model fairness for binary prediction tasks. We take a holistic approach that examines fairness issues from different perspectives: the quality of the input data, the availability of sensitive attribute values for evaluating fairness, and the need to improve fairness in neural models with multivariate sensitive attributes. All of these contributions are initial steps toward understanding how to better use different modes of correction to improve machine learning fairness when dealing with machine learning tasks in different environments of data and model availability.
Committee members:
Lisa Singh (adviser)
Sarah Bargal
Elissa Redmiles
Nathan Schneider
Carole Roan Gresenz (Dept. of Health Management and Policy)