What
Generally, the project asks how the widespread inclusion of machine-learning algorithms in society will affect decision making. On the one hand, there is reason to expect that in many data-heavy contexts such algorithms will soon be more reliable and accurate than even human experts. Yet, on the other hand, such algorithms often work as opaque "black boxes": they reliably and accurately correlate specific inputs with specific outputs, but we have no idea why or how they do it. So we face a serious trade-off: by aligning our decision making with algorithms, we increase accuracy and reliability, but because of their opacity, we risk sacrificing informed decision making in the process. The project investigates the theoretical and practical implications of this trade-off.

Why
The trade-off between algorithmic accuracy and transparency has recently gained much attention in both philosophy and AI. The current project is important scientifically because it promises to make these two largely insulated research strands speak to each other. To illustrate, consider the rapidly growing field of 'Explainable AI', which, as a research area, aims to develop systems that can 'explain' the internal workings of black-box algorithms. Many such systems are currently being developed, but none has so far gained serious traction, in part because it is unclear which explanations we want from algorithms, and in part because it is unclear which explanations they can offer. By consulting the vast philosophical literature on explanation, we hope to make progress on these issues.

How
While the core research in the project is non-empirical, the research group will have the expertise to extract and explain the technical AI literature on machine-learning algorithms. By combining this technical knowledge with the conceptual rigour and interpretative potential of philosophical thinking, we can critically assess how, and to what extent, the inclusion of complex algorithms will affect decision making in various societal contexts. Although it is not the aim of the project to prescribe how we ought to navigate the trade-off between algorithmic accuracy and transparency, there is hope that our work can help facilitate such normative considerations.

SSR
The project promises to give us a much richer understanding of the complex relationship between transparency and accuracy in algorithmic decision making. The hope is that this richer understanding will inform not only choices of technological solutions in AI, but also our grasp of the impact that algorithms can have on core human values, such as trust, in concrete societal contexts. Algorithms are by now found in all parts of society, from analyses of customer behavior to diagnostic tools in healthcare. A project devoted to a careful analysis of the type of decision making that algorithms promote is thus important in most social contexts.