Algorithmic Explainability for Everyday Citizens

Name of applicant

Niels van Berkel


Institution

Aalborg University


Amount

DKK 4,186,913



Type of grant

Semper Ardens: Accelerate


This project investigates how best to help citizens engage with Artificial Intelligence-driven systems in complex decision-making scenarios. Despite the ever-increasing impact of AI on our daily lives, everyday citizens are insufficiently equipped to understand and interact with algorithmically driven decision-making. In this project, we seek to understand the needs of the general public in assessing AI-driven decision-making. Based on these identified needs, we will develop an interactive application and evaluate it in the context of governmental services. Governmental services offer a highly compelling and relevant case, as governments are increasingly looking for ways to embed AI into public administration.


The growing role of AI in society calls for more extensive involvement of the public in evaluating and informing AI development. Existing research has primarily focused on enabling AI experts to better understand how their systems behave, for example by developing intelligible models that allow experts to visually inspect AI decision-making results. However, these efforts require an extensive technical understanding of AI development and are therefore unsuitable for use by the general public. Rather than relying merely on the existing perspectives of AI developers, this project will identify and develop the methods and tools necessary to allow everyday citizens to engage in discussions regarding algorithm-driven decision-making.


A key objective of this project is to contribute to AI explainability theory through a conceptual model of the requirements everyday citizens have when inspecting decision-making algorithms. We will synthesise existing literature and products, conduct interviews and interactive workshops, and carry out empirical evaluations. The development of our web-based solution will go hand in hand with scientific pilot evaluations that inform design decisions. We will study the effect of our tool in providing useful and understandable explanations by assessing study participants' understanding across multiple studies in the domain of public administration.
