My current work is exploring the epistemological and ethical aspects of AI, particularly as it is applied to help us manage uncertainty, filter information, and support decision making. I am especially interested in the epistemology and ethics of recommendation, information retrieval and decision support systems. I am also interested in the epistemology of de se beliefs, or beliefs about oneself, and how they feature in uncertain reasoning.
A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys)
(Yashar Deldjoo, Zhankui He, Julian McAuley, Anton Korikov, Scott Sanner, Arnau Ramisa, René Vidal, Maheswaran Sathiamoorthy, Atoosa Kasirzadeh, and Silvia Milano) In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). Association for Computing Machinery, New York, NY, USA, 6448–6458. https://doi.org/10.1145/3637528.3671474
Traditional recommender systems typically use user-item rating histories as their main data source. However, deep generative models now have the capability to model and sample from complex data distributions, including user-item interactions, text, images, and videos, enabling novel recommendation tasks. This comprehensive, multidisciplinary survey connects key advancements in RS using Generative Models (Gen-RecSys), covering: interaction-driven generative models; the use of large language models (LLMs) and textual data for natural language recommendation; and the integration of multimodal models for generating and processing images and videos in RS. Our work highlights necessary paradigms for evaluating the impact and harm of Gen-RecSys and identifies open challenges. This survey accompanies a tutorial presented at ACM KDD '24, with supporting materials provided at: https://encr.pw/vDhLq.
Advanced AI assistants that act on our behalf may not be ethically or legally feasible
(with S. Nyholm) Nature Machine Intelligence (2024). https://doi.org/10.1038/s42256-024-00877-9
Algorithmic profiling as a source of hermeneutical injustice (with C. Prunkl), Philosophical Studies (2024). https://doi.org/10.1007/s11098-023-02095-2
It is a well-established fact that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. In particular, we show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences.
Large language models challenge the future of higher education (with J. McGrane and S. Leonelli), Nature Machine Intelligence (2023). https://doi.org/10.1038/s42256-023-00644-2
Epistemic fragmentation poses a threat to the governance of online targeting (with B. Mittelstadt, S. Wachter and C. Russell), Nature Machine Intelligence 3, 466-472 (2021). https://doi.org/10.1038/s42256-021-00358-3
Online targeting isolates individual consumers, causing what we call epistemic fragmentation. This phenomenon amplifies the harms of advertising and inflicts structural damage to the public forum. The two natural strategies to tackle the problem of regulating online targeted advertising, increasing consumer awareness and extending proactive monitoring, fail because even sophisticated individual consumers are vulnerable in isolation, and the contextual knowledge needed for effective proactive monitoring remains largely inaccessible to platforms and external regulators. The limitations of both consumer awareness and of proactive monitoring strategies can be attributed to their failure to address epistemic fragmentation. We call attention to a third possibility that we call a civic model of governance for online targeted advertising, which overcomes this problem, and describe four possible pathways to implement this model.
Recommender Systems and their Ethical Challenges (with M. Taddeo and L. Floridi), AI & Society (2020). [Available Open Access. Preprints on PhilPapers and SSRN]
This article presents the first systematic analysis of the ethical challenges posed by recommender systems. Through a literature review, the article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: current user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
Ethical Aspects of Multi-stakeholder Recommendation Systems (with M. Taddeo and L. Floridi), The Information Society (2021). [Preprint available on PhilPapers and SSRN]
This article analyses the ethical aspects of multi-stakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multi-stakeholder recommendation. We then consider three questions: who are the stakeholders in a RS? How are their interests taken into account when formulating a recommendation? And, what is the scientific paradigm underlying RSs? Our main finding is that multi-stakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multi-stakeholder approach offers the resources to understand the normative social dimension of RSs.
Rational updating at the crossroads (with A. Perea), Economics and Philosophy (2023). [Open Access - PhilPapers]
In this paper we explore the absentminded driver problem using two different scenarios. In the first scenario we assume that the driver is capable of reasoning about his degree of absentmindedness before he hits the highway. This leads to a Savage-style model where the states are mutually exclusive and act-state independence is in place. In the second scenario, we employ centred possibilities, modelling the states (i.e. the events about which the driver is uncertain) as the possible final destinations indexed by a time period. The optimal probability we find for continuing at an exit differs from almost all papers in the literature. In this scenario, act-state independence is violated, but states are mutually exclusive and the driver arrives at his optimal choice probability via Bayesian updating. We show that our solution is the only one guaranteeing immunity from sure loss via a Dutch strategy, and that, despite initial appearances, it is time consistent. This raises important implications for the rationality of commitment.
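For readers unfamiliar with the setup, the classic version of the problem (due to Piccione and Rubinstein) can be sketched as follows; the payoffs below are the standard ones from that literature, used here purely for illustration, not the specific analysis offered in the paper.

```latex
% Classic absentminded driver: the driver passes two
% indistinguishable exits; exiting at the first yields payoff 0,
% exiting at the second yields 4, and continuing to the end yields 1.
% Under a plan to continue with probability p at either exit, the
% ex ante expected payoff is
\[
  V(p) = (1-p)\cdot 0 + p(1-p)\cdot 4 + p^{2}\cdot 1 = 4p - 3p^{2},
\]
% which is maximised where
\[
  V'(p) = 4 - 6p = 0 \quad\Longrightarrow\quad p^{*} = \tfrac{2}{3}.
\]
```

The puzzle arises because the probability the driver would compute upon reconsidering at an intersection need not agree with this ex ante plan, which is the tension the two scenarios in the paper address.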
Bayesian Beauty, Erkenntnis (2022) [Winner of the 2018 biennial LSE Philosophy Popper Prize] [published Open Access - PhilPapers]
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for standard Bayesian principles of Conditionalisation and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to the consensus, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the solution satisfies all the standard Bayesian principles, including Conditionalisation and Reflection, and is immune from Dutch Books. Moreover, the solution does not make any appeal to the Restricted Principle of Indifference that is generally accepted in the literature on self-locating uncertainty, which, I argue, is incompatible with the principles of Bayesian reasoning.
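For reference, the two Bayesian principles named in the abstract have standard formulations; the paper's contribution concerns showing that, once the problem is correctly represented, the solution satisfies both.

```latex
% Conditionalisation: upon learning evidence E (and nothing more),
% new credences are old credences conditioned on E:
\[
  P_{\mathrm{new}}(A) = P_{\mathrm{old}}(A \mid E),
  \qquad \text{provided } P_{\mathrm{old}}(E) > 0 .
\]
% Reflection (van Fraassen): current credence defers to one's
% anticipated future credence:
\[
  P_{t}\bigl(A \mid P_{t'}(A) = x\bigr) = x, \qquad t' > t .
\]
```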
Recommended! In: Living with AI: Moral Challenges. Edited by David Edmonds. Oxford: Oxford University Press. (Forthcoming)
Inform, educate, entertain... and recommend? Exploring the use and ethics of recommendation systems in public service media, co-authored with Elliot Jones, Catherine Miller, and with substantial contributions from Andrew Strait. Ada Lovelace Institute (November 2022).
This report explores the ethics of recommendation systems as used in public service media organisations. These independent organisations have a mission to inform, educate and entertain the public, and are often funded by and accountable to the public. We make nine recommendations for future research, experimentation and collaboration between public service media organisations, academics, funders and regulators.
AI Reflections in 2021 (with Buckner, C., Miikkulainen, R., Forrest, S., et al.), Nature Machine Intelligence 4, 5–10 (2022). Available Open Access: https://doi.org/10.1038/s42256-021-00435-7
Review of The Non-Identity Problem and the Ethics of Future People by D. Boonin (OUP, 2014). Economics and Philosophy, Vol. 32, Issue 2, July 2016. [journal] [penultimate draft]
The 2019 Yearbook of the Digital Ethics Lab (edited by Christopher Burr and Silvia Milano) [Springer]
This edited volume presents an overview of cutting-edge research areas within digital ethics as defined by the Digital Ethics Lab of the University of Oxford. It identifies new challenges and opportunities that will be influential in setting the research agenda in the field. The yearbook presents research on the following topics: conceptual metaphor theory, cybersecurity governance, cyber conflicts, anthropomorphism in AI, digital technologies for mental healthcare, data ethics in the asylum process, AI’s legitimacy and democratic deficit, the digital afterlife industry, automatic prayer bots, foresight analysis, and the future of AI.
Abstract: What kind of thing do you believe when you believe that you are in a certain place, that it is a certain time, and that you are a certain individual? What happens if you get lost, or lose track of the time? Can you ever be unsure of your own identity? These are the kinds of questions considered in my thesis. Beliefs about where, when, and who you are are known in the literature as de se, or self-locating, beliefs. This thesis examines how we can represent de se beliefs, and how we can reason probabilistically about de se uncertainty.