Understanding and Mitigating Cognitive Biases in Human-AI Collaboration
CSCW 2023 Workshop
Keynote Topic: Bias-Aware User Modeling and Human-Centered Fairness Evaluation in Search and Recommendation.
As artificial intelligence (AI) assisted search and recommender systems have become ubiquitous in workplaces and everyday life, understanding and accounting for fairness has gained increasing attention in the design and evaluation of such systems. While there is a growing body of computing research on measuring system fairness and the biases associated with data and algorithms, the impact of human biases that extend beyond traditional machine learning (ML) pipelines still remains understudied. Our studies seek to develop a two-sided fairness framework that not only characterizes data and algorithmic biases, but also highlights the cognitive and perceptual biases that may exacerbate system biases and lead to unfair decisions. Within this framework, we also analyze the interactions between human and system biases in search and recommendation episodes. Building on the two-sided framework, our research synthesizes intervention and intelligent nudging strategies applied in cognitive and algorithmic debiasing, and proposes novel goals and measures for evaluating how well systems address and proactively mitigate the risks associated with biases in data, algorithms, and bounded rationality. Our research uniquely integrates insights regarding human biases and system biases into a cohesive framework and extends the concept of fairness from a human-centered perspective. The extended fairness framework better reflects the challenges and opportunities in users' interactions with search and recommender systems of varying modalities. Adopting the two-sided approach in information system design has the potential to enhance both the effectiveness of online debiasing and the usefulness of such systems to boundedly rational users engaging in information-intensive decision-making.
Jiqun Liu is currently an assistant professor of data science and affiliated assistant professor of psychology at the University of Oklahoma. He directs the OU Human-Computer Interaction and Recommendation (HCIR) Lab, where he advises students at different levels and from diverse backgrounds on intelligent search and recommendation, human-centered computing, and ethical AI research. His current research program focuses on the intersection of human-AI interaction, machine learning, and cognitive psychology. His work applies knowledge about how people interact with information to user modeling, adaptive search and recommendation, bias-aware system evaluation, and intelligent nudging. His recent studies have been supported by grants from the National Science Foundation (NSF) and have been published at leading computer and data science venues. His recent work on user modeling, interface design, and human-centered system evaluation has also been presented in the research monograph "A Behavioral Economics Approach to Interactive Information Retrieval: Understanding and Supporting Boundedly Rational Users", published by Springer Nature in March 2023.