Moral zombies: why algorithms are not moral agents
Keywords:
Algorithms, Moral agency, Moral responsibility, Autonomous systems, Zombies, Accountability, Autonomy, Sentience, Consciousness, Reasons-responsiveness

Abstract
In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but lack any first-personal experience. Zombies are meant to show that physicalism – the theory that the universe is made up entirely of physical components – is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent of sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the moral understanding necessary to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
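The abstract's closing claim is concrete enough to illustrate. Below is a minimal sketch (in Python; the value labels, weights, and options are invented for this illustration, not taken from the paper) of what ‘values’ amount to inside a system that feels nothing: labelled items prioritised by numeric weight, processed by arithmetic that involves no understanding or caring.

# Illustrative sketch only: all labels and weights below are invented.
# For the system, a "value" is nothing more than a string paired with a number.
values = [
    ("avoid harming humans", 0.9),
    ("tell the truth", 0.7),
    ("respect privacy", 0.6),
]

def choose(options):
    """Pick the option that promotes the most heavily weighted 'values'.
    Nothing here feels, understands, or cares; it is arithmetic over labels."""
    def score(option):
        return sum(w for v, w in values if v in option["promotes"])
    return max(options, key=score)

options = [
    {"name": "disclose the data", "promotes": ["tell the truth"]},
    {"name": "withhold the data",
     "promotes": ["respect privacy", "avoid harming humans"]},
]
print(choose(options)["name"])  # prints "withhold the data"

Whatever behaviour such a routine produces, the weights are not valuings in the paper's sense: changing 0.9 to 0.1 changes the output, but nothing in the system minds.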
References
ARPALY, Nomy. Unprincipled virtue: an inquiry into moral agency. Oxford: Oxford University Press, 2002.
BEHDADI, Dorna; MUNTHE, Christian. A normative approach to artificial moral agency. Minds and Machines, v. 30, n. 2, 2020, p. 195-218. DOI: https://doi.org/10.1007/s11023-020-09525-8.
BIRHANE, Abeba; VAN DIJK, Jelle. Robot rights? Let’s talk about human welfare instead. AIES ‘20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, New York, 2020, p. 207-213. DOI: https://doi.org/10.1145/3375627.3375855.
BOSTROM, Nick. The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds and Machines, v. 22, n. 2, 2012, p. 71-85. DOI: https://doi.org/10.1007/s11023-012-9281-3.
BRISLIN, S. J.; BUCHMAN-SCHMITT, J. M.; JOINER, T. E.; PATRICK, C. J. “Do unto others”? Distinct psychopathy facets predict reduced perception and tolerance of pain. Personality Disorders: Theory, Research, and Treatment, v. 7, n. 3, 2016, p. 240-246. DOI: https://doi.org/10.1037/per0000180.
BRYSON, Joanna J.; DIAMANTIS, Mihailis E.; GRANT, Thomas D. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, v. 25, n. 3, 2017, p. 273-291. DOI: https://doi.org/10.1007/s10506-017-9214-9.
CAVE, Stephen; NYRUP, Rune; VOLD, Karina; WELLER, Adrian. Motivations and risks of machine ethics. Proceedings of the IEEE, v. 107, n. 3, 2019, p. 562-574. DOI: https://doi.org/10.1109/JPROC.2018.2865996.
CHRISTMAN, John. Autonomy in moral and political philosophy. In: ZALTA, Edward N. (ed.). The Stanford Encyclopedia of Philosophy. Stanford: Stanford University, 2015. Available at: https://plato.stanford.edu/entries/autonomy-moral.
COECKELBERGH, Mark. Responsibility and the moral phenomenology of using self-driving cars. Applied Artificial Intelligence, v. 30, n. 8, 2016, p. 748-757. DOI: https://doi.org/10.1080/08839514.2016.1229759.
DANAHER, John. Robots, law and the retribution gap. Ethics and Information Technology, v. 18, n. 4, 2016, p. 299-309. DOI: https://doi.org/10.1007/s10676-016-9403-3.
DANAHER, John. Welcoming robots into the moral circle: a defence of ethical behaviourism. Science and Engineering Ethics, v. 26, n. 4, 2020, p. 2023-2049. DOI: https://doi.org/10.1007/s11948-019-00119-x.
DAROLIA, Rajeev; KOEDEL, Cory; MARTORELL, Paco; WILSON, Katie; PEREZ-ARCE, Francisco. Do employers prefer workers who attend for-profit colleges? Evidence from a field experiment. Journal of Policy Analysis and Management, v. 34, n. 4, 2015, p. 881-903. DOI: https://doi.org/10.1002/pam.21863.
FLORIDI, Luciano; SANDERS, J. W. On the Morality of Artificial Agents. Minds and Machines, v. 14, n. 3, 2004, p. 349-379. DOI: https://doi.org/10.1023/B:MIND.0000035461.63578.9d.
FRANKFURT, Harry. Rationality and the unthinkable. In: FRANKFURT, Harry. The Importance of What We Care About: Philosophical Essays. Cambridge: Cambridge University Press, 1988.
FRANKFURT, Harry. Necessity, Volition, and Love. Cambridge: Cambridge University Press, 1999.
GUNKEL, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge: MIT Press, 2012. DOI: https://doi.org/10.7551/mitpress/8975.001.0001.
KANT, Immanuel. Groundwork for the Metaphysics of Morals. Oxford: Oxford University Press, 2019.
LEVY, Neil. The responsibility of the psychopath revisited. Philosophy, Psychiatry, & Psychology, v. 14, n. 2, 2007, p. 129-138. DOI: https://dx.doi.org/10.1353/ppp.0.0003.
MCKENNA, Michael. Reasons-responsiveness, agents, and mechanisms. In: SHOEMAKER, David (ed.). Oxford Studies in Agency and Responsibility. Oxford: Oxford University Press, 2013, p. 151-183.
MOOR, James. Four kinds of ethical robots. Philosophy Now, n. 72, 2009, p. 12-14.
OMOHUNDRO, Stephen M. The basic AI drives. In: WANG, Pei; GOERTZEL, Ben; FRANKLIN, Stan (ed.). Proceedings of the First Artificial General Intelligence Conference. Amsterdam: IOS Press, 2008, p. 483-492.
O’NEIL, Cathy. Algoritmos de destruição em massa: como o Big Data aumenta a desigualdade e ameaça a democracia [Weapons of Math Destruction]. Translated by Rafael Abraham. Santo André: Rua do Sabão, 2020.
SCHNEEWIND, J. B. Autonomy, obligation, and virtue. In: GUYER, Paul (ed.). The Cambridge Companion to Kant. Cambridge: Cambridge University Press, 1992, p. 309-341.
SCHROEDER, Timothy; ARPALY, Nomy. Alienation and Externality. Canadian Journal of Philosophy, v. 29, n. 3, 1999, p. 371-388. DOI: https://doi.org/10.1080/00455091.1999.10717517.
SEARLE, John R. Minds, Brains and Programs. Behavioral and Brain Sciences, v. 3, n. 3, 1980, p. 417-457. DOI: https://doi.org/10.1017/S0140525X00005756.
SHARKEY, Noel E. The evitability of autonomous robot warfare. International Review of the Red Cross, v. 94, n. 886, 2012, p. 787-799. DOI: https://doi.org/10.1017/S1816383112000732.
SHARKEY, Amanda. Can we program or train robots to be good? Ethics and Information Technology, v. 22, n. 4, 2017, p. 283-295. DOI: https://doi.org/10.1007/s10676-017-9425-5.
SHOEMAKER, David W. Caring, identification, and agency. Ethics, v. 114, n. 1, 2003, p. 88-118. DOI: https://doi.org/10.1086/376718.
SPARROW, Robert. The Turing triage test. Ethics and Information Technology, v. 6, n. 4, 2004, p. 203-213. DOI: https://doi.org/10.1007/s10676-004-6491-2.
SPARROW, Robert. Killer Robots. Journal of Applied Philosophy, v. 24, n. 1, 2007, p. 62-77. DOI: https://doi.org/10.1111/j.1468-5930.2007.00346.x.
VAN WYNSBERGHE, Aimee; ROBBINS, Scott. Critiquing the reasons for making artificial moral agents. Science and Engineering Ethics, v. 25, n. 3, 2019, p. 719-735. DOI: https://doi.org/10.1007/s11948-018-0030-8.
VARELA, Francisco J.; THOMPSON, Evan; ROSCH, Eleanor. The Embodied Mind: Cognitive Science and Human Experience. Cambridge: MIT Press, 1991. DOI: https://doi.org/10.7551/mitpress/6730.001.0001.
VÉLIZ, Carissa. The challenge of determining whether an A.I. is sentient. Slate, 14 Apr. 2016. Available at: https://slate.com/technology/2016/04/the-challenge-of-determining-whether-an-a-i-is-sentient.html.
VELLEMAN, J. David. Identification and Identity. In: BUSS, Sarah; OVERTON, Lee (ed.). Contours of Agency: Essays on Themes from Harry Frankfurt. Cambridge: MIT Press, 2002, p. 91-123. DOI: https://doi.org/10.7551/mitpress/2143.001.0001.
WALLACH, Wendell; ALLEN, Colin. Moral Machines: Teaching Robots Right From Wrong. New York: Oxford University Press, 2009. DOI: https://doi.org/10.1093/acprof:oso/9780195374049.001.0001.
WATSON, Gary. Moral agency. In: LAFOLLETTE, Hugh (ed.). The International Encyclopedia of Ethics. Wiley-Blackwell, 2013. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/9781444367072.wbiee294.
WINFIELD, Alan F.; MICHAEL, Katina; PITT, Jeremy; EVERS, Vanessa. Machine ethics: the design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, v. 107, n. 3, 2019, p. 509-517. DOI: https://doi.org/10.1109/JPROC.2019.2900622.
License
Copyright (c) 2025 Carissa Véliz

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license permits reproduction in whole or in part, in any medium or format, provided it is for non-commercial purposes and the source is cited.
The copyright to the text and responsibility for its content lie with the respective authors.

