Contents
186 found (showing entries 1–50)
  1. AI Alignment vs. AI Ethical Treatment: Ten Challenges. Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
    1 citation
  2. Can AI systems have free will? Christian List - manuscript
    While there has been much discussion of whether AI systems could function as moral agents or acquire sentience, there has been very little discussion of whether AI systems could have free will. I sketch a framework for thinking about this question, inspired by Daniel Dennett’s work. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, (...)
  3. Is simulation a substitute for experimentation? Isabelle Peschard - manuscript
    It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim (...)
    2 citations
  4. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
    1 citation
  5. Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the paper I (...)
    2 citations
  6. AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self-Respect. Jan-Willem van der Rijt, Dimitri Coelho Mollo & Bram Vaassen - manuscript
    This paper investigates how human interactions with AI-powered chatbots may offend human dignity. Current chatbots, driven by large language models (LLMs), mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphise chatbots—indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings’ behaviour toward chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second-personal, relational account of dignity, we argue (...)
  7. Sims and Vulnerability: On the Ethics of Creating Emulated Minds. Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  8. The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    31 citations
  9. Understanding Artificial Agency. Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
    12 citations
  10. (1 other version) Walking Through the Turing Wall. Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  11. From AI to Octopi and Back. AI Systems as Responsive and Contested Scaffolds. Giacomo Figà-Talamanca - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    In this paper, I argue against the view that existing AI systems can be deemed agents comparably to human beings or other organisms. I especially focus on the criteria of interactivity, autonomy, and adaptivity, provided by the seminal work of Luciano Floridi and José Sanders to determine whether an artificial system can be considered an agent. I argue that the tentacles of octopuses also fit those criteria. However, I argue that octopuses’ tentacles cannot be attributed agency because their behavior can (...)
  12. Digital Necrolatry: Thanabots and the Prohibition of Post-Mortem AI Simulations. Demetrius Floudas - forthcoming - Submissions to the EU AI Office's Plenary Drafting the Code of Practice for General-Purpose Artificial Intelligence.
    The emergence of Thanabots—artificial intelligence systems designed to simulate deceased individuals—presents unprecedented challenges at the intersection of artificial intelligence, legal rights, and societal configuration. This short policy recommendations report examines the legal, social and psychological implications of these posthumous simulations and argues for their prohibition on ethical, sociological, and legal grounds.
  13. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    2 citations
  14. Consciousness Makes Things Matter. Andrew Y. Lee - forthcoming - Philosophers' Imprint.
    This paper argues that phenomenal consciousness is what makes an entity a welfare subject. I develop a variety of motivations for this view, and then defend it from objections concerning death, non-conscious entities that have interests (such as plants), and conscious entities that necessarily have welfare level zero. I also explain how my theory of welfare subjects relates to experientialist and anti-experientialist theories of welfare goods.
    4 citations
  15. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots. Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Leonard Dung, Guido Löhr & Aliya Rumana, Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  16. (1 other version) AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand, Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    3 citations
  17. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot, Quality Time 18. International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists face difficulty answering questions like: “Do humanoid robots have the same rights as humans, are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be under question because, in the near future, scientists (arguably the most rational people) will (...)
  18. Reasons to Respond to AI Emotional Expressions. Rodrigo Díaz & Jonas Blatter - 2025 - American Philosophical Quarterly 62 (1):87-102.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  19. I Contain Multitudes: A Typology of Digital Doppelgängers. William D’Alessandro, Trenton W. Ford & Michael Yankoski - 2025 - American Journal of Bioethics 25 (2):132-134.
    Iglesias et al. (2025) argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”—that is, an AI system desig...
  20. AI wellbeing. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2025 - Asian Journal of Philosophy 4 (1):1-22.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    10 citations
  21. Raising an AI Teenager. Catherine Stinson - 2025 - In David Friedell, The Philosophy of Ted Chiang. Palgrave MacMillan.
  22. The Point of Blaming AI Systems. Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
    2 citations
  23. The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI. Jonathan Birch - 2024 - Oxford: Oxford University Press.
    5 citations
  24. The Ethics of Automating Therapy. Jake Burley, James J. Hughes, Alec Stubbs & Nir Eisikovits - 2024 - IEET White Papers.
    The mental health crisis and loneliness epidemic have sparked a growing interest in leveraging artificial intelligence (AI) and chatbots as a potential solution. This report examines the benefits and risks of incorporating chatbots in mental health treatment. AI is used for mental health diagnosis and treatment decision-making and to train therapists on virtual patients. Chatbots are employed as always-available intermediaries with therapists, flagging symptoms for human intervention. But chatbots are also sold as stand-alone virtual therapists or as friends and lovers. (...)
  25. Dubito Ergo Sum: Exploring AI Ethics. Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  26. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
    1 citation
  27. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    3 citations
  28. Intersubstrate Welfare Comparisons: Important, Difficult, and Potentially Tractable. Bob Fischer & Jeff Sebo - 2024 - Utilitas 36 (1):50-63.
    In the future, when we compare the welfare of a being of one substrate (say, a human) with the welfare of another (say, an artificial intelligence system), we will be making an intersubstrate welfare comparison. In this paper, we argue that intersubstrate welfare comparisons are important, difficult, and potentially tractable. The world might soon contain a vast number of sentient or otherwise significant beings of different substrates, and moral agents will need to be able to compare their welfare levels. However, (...)
    2 citations
  29. Computers will not acquire general intelligence, but may still rule the world. Ragnar Fjelland - 2024 - Cosmos+Taxis 12 (5+6):58-68.
    Jobst Landgrebe’s and Barry Smith’s book Why Machines Will Never Rule the World argues that artificial general intelligence (AGI) will never be realized. Drawing on theories of complexity, they argue that it is not only technically but mathematically impossible to realize AGI. The book is the result of cooperation between a philosopher and a mathematician. In addition to a thorough treatment of mathematical modelling of complex systems, the book addresses many fundamental philosophical questions. The authors show that philosophy is still (...)
  30. The Hazards of Putting Ethics on Autopilot. Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  31. A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  32. Authenticity in algorithm-aided decision-making. Brett Karlan - 2024 - Synthese 204 (93):1-25.
    I identify an undertheorized problem with decisions we make with the aid of algorithms: the problem of inauthenticity. When we make decisions with the aid of algorithms, we can make ones that go against our commitments and values in a normatively important way. In this paper, I present a framework for algorithm-aided decision-making that can lead to inauthenticity. I then construct a taxonomy of the features of the decision environment that make such outcomes likely, and I discuss three possible solutions (...)
  33. The Perfect Politician. Theodore M. Lechterman - 2024 - In David Edmonds, AI Morality. Oxford: Oxford University Press USA.
    Ideas for integrating AI into politics are now emerging and advancing at an accelerating pace. This chapter highlights a few different varieties and shows how they reflect different assumptions about the value of democracy. We cannot make informed decisions about which, if any, proposals to pursue without further reflection on what makes democracy valuable and how current conditions fail to fully realize it. Recent advances in political philosophy provide some guidance but leave important questions open. If AI advances to a state (...)
  34. Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections (...)
    1 citation
  35. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression. Martin Miragoli - 2024 - Episteme: A Journal of Social Epistemology:1-19.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  36. Decolonial AI as Disenclosure. Warmhold Jan Thomas Mollema - 2024 - Open Journal of Social Sciences 12 (2):574-603.
    The development and deployment of machine learning and artificial intelligence (AI) engender “AI colonialism”, a term that conceptually overlaps with “data colonialism”, as a form of injustice. AI colonialism is in need of decolonization for three reasons. Politically, because it enforces digital capitalism’s hegemony. Ecologically, as it negatively impacts the environment and intensifies the extraction of natural resources and consumption of energy. Epistemically, since the social systems within which AI is embedded reinforce Western universalism by imposing Western colonial values on (...)
    1 citation
  37. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
    2 citations
  38. Artificial Intelligence and an Anthropological Ethics of Work: Implications on the Social Teaching of the Church. Justin Nnaemeka Onyeukaziri - 2024 - Religions 15 (5):623.
    It is the contention of this paper that ethics of work ought to be anthropological, and artificial intelligence (AI) research and development, which is the focus of work today, should be anthropological, that is, human-centered. This paper discusses the philosophical and theological implications of the development of AI research on the intrinsic nature of work and the nature of the human person. AI research and the implications of its development and advancement, being a relatively new phenomenon, have not been comprehensively (...)
  39. Should the use of adaptive machine learning systems in medicine be classified as research? Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
    17 citations
  40. Mistakes in the moral mathematics of existential risk. David Thorstad - 2024 - Ethics 135 (1):122-150.
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I use this discussion to draw four positive lessons for the study of existential risk. (...)
    2 citations
  41. Consilience and AI as technological prostheses. Jeffrey B. White - 2024 - AI and Society 39 (5):1-3.
    Edward Wilson wrote in Consilience that “Human history can be viewed through the lens of ecology as the accumulation of environmental prostheses” (1999, p. 316), with technologies mediating our collective habitation of the Earth and its complex, interdependent ecosystems. Wilson emphasized the defining characteristic of complex systems, that they undergo transformations which are irreversible. His view is now standard, and his central point bears repeated emphasis today: natural systems can be broken, species—including us—can disappear, ecosystems can fail, and technological prostheses (...)
  42. Artificial consciousness: a perspective from the free energy principle. Wanja Wiese - 2024 - Philosophical Studies 181:1947–1970.
    Does the assumption of a weak form of computational functionalism, according to which the right form of neural computation is sufficient for consciousness, entail that a digital computational simulation of such neural computations is conscious? Or must this computational simulation be implemented in the right way, in order to replicate consciousness? From the perspective of Karl Friston’s free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not (...)
    5 citations
  43. On the Moral Status of Artificial Cognition to Natural Cognition. Jianhua Xie - 2024 - Journal of Human Cognition 8 (2):17-28.
    Artificial Cognition (AC) has provoked a great deal of controversy in recent years. Concerns over its development have revolved around the questions of whether or not a moral status may be ascribed to AC and, if so, how could it be characterized? This paper provides an analysis of consciousness as a means to query the moral status of AC. This method suggests that the question of moral status of artificial cognition depends upon the level of development of consciousness achieved. As (...)
  44. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  45. What is a subliminal technique? An ethical perspective on AI-driven influence.Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE ETHICS-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases (...)
  46. Artificial Consciousness Is Morally Irrelevant.Bruce P. Blackshaw - 2023 - American Journal of Bioethics Neuroscience 14 (2):72-74.
    It is widely agreed that possession of consciousness contributes to an entity’s moral status, even if it is not necessary for moral status (Levy and Savulescu 2009). An entity is considered to have...
  47. Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  48. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  49. Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  50. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)