Background

In recent years, AI has come to match or outperform humans on many tasks, and we have seen a rapid uptake of AI systems deployed to complement and support human decision-makers in critical domains: judges use algorithmic risk assessments to inform criminal sentences, doctors rely on machine learning models to diagnose patients, and online media platforms adopt recommendation systems to present users with relevant content. However, human decision-makers are often affected by cognitive biases -- systematic deviations from rational judgment that, as Tversky and Kahneman showed, arise when people rely on mental shortcuts, or heuristics, to make faster but less deliberate decisions. Cognitive biases distort our thinking in ways we are often unaware of and can negatively influence decision outcomes. For example, confirmation bias can affect how users interpret and seek information online, anchoring bias can induce unfair judicial decisions when multiple pieces of evidence are presented, and the Dunning-Kruger effect can hinder appropriate reliance on AI systems.

AI systems can trigger and even amplify cognitive biases in their users. Personalised recommendation systems, for example, optimise content recommendations around users' preferences and cater predominantly to what users already like; as a result, such systems risk reinforcing confirmation bias and the echo chamber effect. Moreover, AI explanations can exacerbate cognitive biases and thereby compromise AI-assisted decision-making, distorting users' trust in, reliance on, and interpretation of AI outputs. Cognitive biases can also degrade the quality of ground-truth data and thereby influence downstream applications and the outcomes of AI systems: recommendation systems pick up not only users' preferences but also their confirmation bias, expressed through selective information consumption, and in turn deliver content that further amplifies those biases. A recent example is ChatGPT, which has been shown to exhibit many of the biases humans possess, such as framing bias and overconfidence bias. With AI systems and cognitive biases forming an interplay that shapes human decision-making, it is therefore crucial to understand how cognitive biases manifest themselves and how their effects can be mitigated.

In this workshop, we aim to bring together researchers, practitioners, and designers to jointly develop a better understanding of cognitive biases and of solutions that mitigate the problems arising from them. We will focus on cognitive biases in the context of human-AI collaboration, where AI systems act as supporting tools for human decision-makers.