Robust and Explainable Measurement of AI Political Bias

Name of applicant

Dustin Wright

Title

Postdoctoral Fellow

Institution

University of Cambridge

Amount

DKK 1,695,290

Year

2025

Type of grant

Internationalisation Fellowships

What?

This project asks: how politically biased is artificial intelligence (AI), how does this bias manifest in AI systems, and how does it impact real people?

Why?

From asking questions of ChatGPT to encountering AI-generated content online, people interact with AI systems every day. If these systems exhibit persuasive political bias, for example by strongly favoring particular political issues, they can worsen other societal problems, such as polarisation. Identifying these biases, their causes, and their impacts will help us develop and use AI responsibly.

How?

This project will develop a new method for uncovering AI political bias that is more robust and explainable than previous methods. It will examine which arguments AI systems present for and against different political issues in order to measure the underlying biases hidden behind those arguments. It will then analyse the data the systems are trained on in order to explain the presence of any bias.