Artificial instinct: Lem's robots as a model case for AI

Source document: Pro-Fil 22 (2021), Special issue, pp. 92–102
  • ISSN 1212-9097 (online)
Type: Article


In the seventy years since AI became a field of study, the theoretical work of philosophers has played an increasingly important role in understanding many aspects of the AI project: the metaphysics of mind and the kinds of systems that can or cannot implement it, the epistemology of objectivity and algorithmic bias, the ethics of automation, drones, and specific implementations of AI, and analyses of AI embedded in social contexts. Serious scholarship in AI ethics sometimes quotes Asimov's speculative laws of robotics as if they were genuine proposals, yet Lem remains historically undervalued as a theorist who used fiction as his vehicle. Here, I argue that Lem's fiction (in particular his fiction about robots) is an overlooked but highly nuanced philosophy of AI, and that we should recognize the lessons he tried to offer us, which focus on human and social failures rather than technological breakdowns. Stories like "How the World Was Saved" and "Upside Down Evolution" ask serious philosophical questions about the metaphysics and ethics of AI, and offer insightful answers that deserve more attention. Highlighting work from The Cyberiad and the stories in Mortal Engines in particular, I argue that the time has never been more appropriate to attend to his philosophy, given the widespread technological and social failures brought about by the quest for artificial intelligence. In service of this argument, I discuss some of the history of and philosophical debates around AI over recent decades, as well as contemporary events that illustrate Lem's strongest critiques of the human side of AI.
[1] Abid, A. – Farooqi, M. – Zou, J. (2021): Large language models associate Muslims with violence, Nature Machine Intelligence 3, 461–463. | DOI 10.1038/s42256-021-00359-2

[2] Anderson, S. L. (2016): Asimov's 'Three Laws of Robotics' and machine metaethics, in Schneider, S. (ed.) Science Fiction and Philosophy: From Time Travel to Superintelligence, Wiley Blackwell, 290–307.

[3] Anderson, M. R. (2017): After 75 years, Isaac Asimov's three laws of robotics need updating, The Conversation [online] 2017-03-17, [accessed 2021-09-27].

[4] Asimov, I. (1950): I, Robot, Gnome Press.

[5] Asimov, I. (1986): Robot Dreams, Ace Books.

[6] BBC News [online] (2021): Margaret Mitchell: Google fires AI ethics founder, 2021-02-20, [accessed 2021-09-27].

[7] BBC News [online] (2020): Timnit Gebru: Google staff rally behind fired AI researcher, 2020-12-20, [accessed 2021-09-27].

[8] Bender, E. et al. (2021): On the dangers of stochastic parrots: Can language models be too big?, FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 2021-03-03, 610–623.

[9] Birhane, A. – van Dijk, J. (2020): Robot Rights? Let's Talk About Human Welfare Instead, AIES '20: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020-02-07, 207–213.

[10] Birhane, A. et al. (2021): The Values Encoded in Machine Learning Research, arXiv preprint arXiv:2106.15590 [cs.LG].

[11] Birhane, A. (2021): The impossibility of automating ambiguity, Artificial Life 27, 44–61. | DOI 10.1162/artl_a_00336

[12] Boden, M. et al. (2017): Principles of robotics: regulating robots in the real world, Connection Science 29(2), 124–129. | DOI 10.1080/09540091.2016.1271400

[13] Bostrom, N. (2014): Superintelligence, Oxford University Press.

[14] Damasio, A. (1994): Descartes' Error: Emotion, Reason, and the Human Brain, Quill/HarperCollins.

[15] Deng, B. (2015): Machine ethics: The robot's dilemma, Nature 523, 24–26. | DOI 10.1038/523024a

[16] Dennett, D. (1984): Can machines think?, in Shafto, M. G. (ed.) How We Know, Harper and Row.

[17] Dietrich, E. et al. (2021a): Great Philosophical Objections to Artificial Intelligence: The History and Legacy of the AI Wars, Bloomsbury Academic.

[18] Dietrich, E. et al. (2021b): The AI Wars, 1950–2000, and their consequences, Journal of Artificial Intelligence and Consciousness.

[19] D'Onfro, J. (2019): Google scraps its AI ethics board less than two weeks after launch in the wake of employee protest, Forbes [online] 2019-04-04, [accessed 2021-09-27].

[20] Flowers, J. (2019): Rethinking algorithmic bias through phenomenology and pragmatism, Computer Ethics – Philosophical Enquiry (CEPE) Proceedings.

[21] Johnson, M. (1993): Moral Imagination: The Implications of Cognitive Science for Ethics, University of Chicago Press.

[22] Kandel, M. (1992): Introduction, in Lem, S. Mortal Engines, Harvest/Harcourt Brace Jovanovich, i–vii.

[23] Kuipers, B. (2016): We need ethical robots. Asimov's laws are a good place to start, General Electric [online] 2016-06-27, [accessed 2021-09-27].

[24] Lem, S. (1974): The Cyberiad, translated by Kandel, M., Avon Books.

[25] Lem, S. (1977/1992): Mortal Engines, translated by Kandel, M., The Seabury Press/Harvest-Harcourt Brace Jovanovich.

[26] Lem, S. (1986): One Human Minute, translated by Leach, C., Harvest-Harcourt Brace & Co.

[27] Lin, P. (2018): Would deviant sex robots violate Asimov's law of robotics?, Forbes [online] 2018-10-15, [accessed 2021-09-27].

[28] Lin, P. – Abney, K. – Bekey, G. (eds.) (2012): Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press.

[29] Lin, P. – Jenkins, R. – Abney, K. (eds.) (2017): Robot Ethics 2.0: from autonomous cars to artificial intelligence, Oxford University Press.

[30] Lohr, S. (2021): What ever happened to IBM's Watson?, The New York Times [online] 2021-07-16, [accessed 2021-09-27].

[31] McGrath, J. – Gupta, A. (2018): Writing a Moral Code: Algorithms for Ethical Reasoning by Humans and Machines, Religions 9(8), 240. | DOI 10.3390/rel9080240

[32] Newell, A. – Simon, H. (1976): Computer science as empirical inquiry: Symbols and search, Communications of the ACM 19(3), 113–126. | DOI 10.1145/360018.360022

[33] Olson, P. (2021): Much 'artificial intelligence' is still people behind a screen, Bloomberg [online] 2021-10-13, [accessed 2021-10-13].

[34] Sawyer, R. (2007): Robot Ethics, Science 318(5853), 1037. | DOI 10.1126/science.1151606

[35] Shamir, L. (2020): A case against the STEM rush, Inside Higher Ed [online] 2020-02-03, [accessed 2021-09-29].

[36] Sullins, J. P. (2015): Applied professional ethics for the reluctant roboticist, Proceedings of the Emerging Policy and Ethics of Human-Robot Interaction Workshop at HRI [online].

[37] Sullins, J. P. (2016): Artificial Phronesis and the social robot, in Seibt, J. – Nørskov, M. – Schack Anderson, S. (eds.) What Social Robots Can and Should Do, IOS Press, 37–39.

[38] Thompson, E. (2001): Empathy and consciousness, Journal of Consciousness Studies 8(5), 1–32.

[39] Torrance, S. (2008): Ethics and consciousness in artificial agents, AI & Society 22, 495–521. | DOI 10.1007/s00146-007-0091-8

[40] Torresen, J. (2018): A review of future and ethical perspectives of robotics and AI, Frontiers in Robotics and AI [online] 2018-01-15, [accessed 2021-09-28].

[41] Turing, A. (1950): Computing machinery and intelligence, Mind LIX(236), 433–460. | DOI 10.1093/mind/LIX.236.433

[42] Vallor, S. (2016): Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford University Press.

[43] Van Dang, C. et al. (2018): Application of modified Asimov's laws to the agent of home service robot using state, operator, and result (Soar), International Journal of Advanced Robotic Systems 15(3).

[44] Wallach, W. – Allen, C. (2009): Moral Machines: Teaching Robots Right from Wrong, Oxford University Press.

[45] Weizenbaum, J. (1966): ELIZA: A Computer Program for the Study of Natural Language Communication Between Man and Machine, Communications of the ACM 9(1), 36–45.

[46] Wittgenstein, L. (1953): Philosophical Investigations, Macmillan.

[47] Zakaria, F. (2015): Why America's obsession with STEM education is dangerous, Washington Post [online] 2015-03-26, [accessed 2021-09-28].