Robot rights?

Let's talk about human welfare instead

Authors

Abeba Birhane
Jelle van Dijk

Keywords:

Robot rights, AI ethics, Personification, Human welfare

Abstract

The debate over “robot rights”, and the related questions about robot responsibility, evokes some of the most polarized positions in AI ethics. While some advocate that robots should have rights just as human beings do, others argue, in stark opposition, that robots do not deserve rights because they are objects that should be our slaves. Grounded in post-Cartesian philosophy, we argue not merely for denying robots “rights”, but, first and foremost, for denying that robots, as artifacts that emerge from and mediate human beings, could even be granted rights. Once we understand robots as mediators of human beings, we can see that the “robot rights” debate is focused on First World problems, at the expense of urgent ethical concerns such as machine bias, machine-elicited exploitation of human labor, and the erosion of privacy, all of which impact society's least privileged individuals. We conclude that, if the human being is our starting point and human welfare is our primary concern, the negative impacts that emerge from machinic systems, as well as the lack of accountability on the part of the people who design, sell, and deploy such machines, remain the most urgent ethical discussion about AI.

References

ANGWIN, Julia; LARSON, Jeff; MATTU, Surya; KIRCHNER, Lauren. Machine bias. ProPublica, May 23, 2016. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

ARENDT, Hannah. The human condition. Chicago: University of Chicago Press, 1958.

ASARO, Peter M. What should we want from a robot ethic? International Review of Information Ethics, v. 6, n. 12, p. 9-16, 2006.

BAINBRIDGE, Lisanne. Ironies of automation. In: JOHANNSEN, G.; RIJNSDORP, J.E. (ed.). Analysis, design and evaluation of man-machine systems. Oxford: Pergamon, 1983, p. 129-135.

BAXTER, Gordon D.; ROOKSBY, John; WANG, Yuanzhi; KHAJEH-HOSSEINI, Ali. The ironies of automation: still going strong at 30? ECCE '12: Proceedings of the 30th European Conference on Cognitive Ergonomics, 2012, p. 65-71. DOI: https://doi.org/10.1145/2448136.2448149.

BENJAMIN, Ruha. Race after technology: abolitionist tools for the New Jim Code. Cambridge: Polity, 2019.

BIRHANE, Abeba. Descartes was wrong: ‘A person is a person through other persons’. Aeon, Apr. 7, 2017. Available at: https://aeon.co/ideas/descartes-was-wrong-a-person-is-a-person-through-other-persons.

BRANNIGAN, Augustine. Stanley Milgram's obedience experiments: A report card 50 years later. Society, v. 50, n. 6, p. 623-628, 2013. DOI: https://doi.org/10.1007/s12115-013-9724-3.

BROOKS, Rodney. Will robots demand equal rights? Time, Jun. 19, 2000.

BRYSON, Joanna J. Robots should be slaves. In: WILKS, Y. (ed.). Close engagements with artificial companions: key social, psychological, ethical and design issues. Amsterdam: John Benjamins, 2010, p. 63-74.

CHENEY-LIPPOLD, John. We are data: algorithms and the making of our digital selves. New York: NYU Press, 2018.

CHURCHLAND, Paul M. Matter and consciousness. Cambridge: MIT, 2013.

CLARK, Andy. Being there: putting brain, body, and world together again. Cambridge: MIT, 1998.

COECKELBERGH, Mark. Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, v. 12, n. 3, p. 209-221, 2010. DOI: https://doi.org/10.1007/s10676-010-9235-5.

COECKELBERGH, Mark. Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Science and Engineering Ethics, v. 26, n. 4, p. 1-18, 2019. DOI: https://doi.org/10.1007/s11948-019-00146-8.

DENNETT, Daniel C. The intentional stance. Cambridge: MIT, 1987.

DI PAOLO, Ezequiel A.; CUFFARI, Elena Clare; DE JAEGHER, Hanne. Linguistic bodies: The continuity between life and language. Cambridge: MIT, 2018.

DREYFUS, Hubert L.; DREYFUS, Stuart E. The ethical implications of the five-stage skill-acquisition model. Bulletin of Science, Technology & Society, v. 24, n. 3, p. 251-264, 2004. DOI: https://doi.org/10.1177/0270467604265023.

EUBANKS, Virginia. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin's Press, 2018.

FERRYMAN, Kadija; PITCAN, Mikaela. Fairness in precision medicine. Data & Society, 2018. Available at: https://datasociety.net/library/fairness-in-precision-medicine.

GOLONKA, Sabrina; WILSON, Andrew D. Gibson’s ecological approach. Avant: Trends in Interdisciplinary Studies, v. 3, n. 2, p. 40-53, 2012.

GUNKEL, David J. The rights of machines: Caring for robotic care-givers. In: VAN RYSEWYK, S.P.; PONTIER, M. (ed.). Machine Medical Ethics. Cham: Springer, 2015, p. 151-166.

GUNKEL, David J. Robot rights. Cambridge: MIT, 2018.

INTRONA, Lucas; NISSENBAUM, Helen. Defining the web: The politics of search engines. Computer, v. 33, n. 1, p. 54-62, 2000. DOI: http://dx.doi.org/10.1109/2.816269.

KEYES, Os. The misgendering machines: Trans/HCI implications of automatic gender recognition. Proceedings of the ACM on Human-Computer Interaction, v. 2, n. 88, p. 1-22, 2018. DOI: https://doi.org/10.1145/3274357.

KURZWEIL, Ray. The singularity is near: When humans transcend biology. New York: Penguin, 2005.

LAMBRECHT, Anja; TUCKER, Catherine. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, v. 65, n. 7, p. 2947-3448, 2019. DOI: https://doi.org/10.1287/mnsc.2018.3093.

LAVE, Jean. Cognition in practice: Mind, mathematics and culture in everyday life. Cambridge: Cambridge University Press, 1988.

MCQUILLAN, Dan. Data science as machinic neoplatonism. Philosophy and Technology, v. 31, n. 2, p. 253-272, 2018. DOI: https://doi.org/10.1007/s13347-017-0273-3.

OBERMEYER, Ziad; MULLAINATHAN, Sendhil. Dissecting racial bias in an algorithm that guides health decisions for 70 million people. Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, p. 89. DOI: https://doi.org/10.1145/3287560.3287593.

O’NEIL, Cathy. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Broadway Books, 2016.

PITT pauses testing of Starship robots due to safety concerns. The Pitt News, Oct. 21, 2019. Available at: https://pittnews.com/article/151679/news/pitt-pauses-testing-of-starship-robots-due-to-safety-concerns.

RICHARDSON, Rashida; SCHULTZ, Jason; CRAWFORD, Kate. Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, v. 94, n. 15, p. 15-55, 2019. Available at: https://nyulawreview.org/online-features/dirty-data-bad-predictions-how-civil-rights-violations-impact-police-data-predictive-policing-systems-and-justice.

RUSHKOFF, Douglas. Team human. New York: W. W. Norton & Company, 2019.

SCHMIEG, Sebastian; LORUSSO, Silvio. Five Years of Captured Captchas. 2017. Available at: http://five.yearsofcapturedcapt.ch/as.

SCHUTZ, Alfred; LUCKMANN, Thomas. The structures of the life-world. Vol. 1. Evanston: Northwestern University Press, 1973.

STONE, Christopher D. Should trees have standing? Toward legal rights for natural objects. Southern California Law Review, n. 45, p. 450-501, 1972.

STRAUCH, Barry. Ironies of automation: still unresolved after all these years. IEEE Transactions on Human-Machine Systems, v. 48, n. 5, p. 419-433, 2017. DOI: https://doi.org/10.1109/THMS.2017.2732506.

SUCHMAN, Lucy A. Human-machine reconfigurations: Plans and situated actions. Cambridge: Cambridge University Press, 2007.

TUBARO, Paola; CASILLI, Antonio A. Micro-work, artificial intelligence and the automotive industry. Journal of Industrial and Business Economics, v. 46, n. 3, p. 333-345, 2019. DOI: https://doi.org/10.1007/s40812-019-00121-1.

VAN DIJK, Jelle. Designing for embodied being-in-the-world: A critical analysis of the concept of embodiment in the design of hybrids. Multimodal Technologies and Interaction, v. 2, n. 1, p. 1-21, 2018. DOI: https://doi.org/10.3390/mti2010007.

VERBEEK, Peter-Paul. De daadkracht der dingen: over techniek, filosofie en vormgeving. Meppel: Boom, 2000.

WHITBY, Blay. Sometimes it’s hard to be a robot: A call for action on the ethics of abusing artificial agents. Interacting with Computers, v. 20, n. 3, p. 326-333, 2008. DOI: https://doi.org/10.1016/j.intcom.2008.02.002.

WILSON, Benjamin; HOFFMAN, Judy; MORGENSTERN, Jamie. Predictive inequity in object detection. arXiv, Feb. 21, 2019. DOI: https://doi.org/10.48550/arXiv.1902.11097.

ZUBOFF, Shoshana. The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile, 2019.

Published

24/10/2025

How to Cite

BIRHANE, Abeba; DIJK, Jelle van. Direitos dos robôs? em vez disso, falemos sobre bem-estar humano. Revista Conexões: Novas Tecnologias, Sociedade e Direito, Sorocaba, v. 1, n. 1, p. 1–16, 2025. Available at: https://conexoes.fadi.br/revista/article/view/3. Accessed: Oct. 26, 2025.

Issue

Section

Articles