Results for 'Moral Status of Artificial Systems'

998 found
  1. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a (...) patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered to be moral agents. I raise conceptual and epistemic issues with regard to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Nevertheless, irrespective of this failure, I also argue that the entire project is misdirected in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably be moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this “standard account”, I move to elaborate on other, more plausible, conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states. Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds. In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.
  2. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and (...)
  3. Consciousness, Machines, and Moral Status.Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 (...)
  4. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible (...)
  5. The moral status of conscious subjects.Joshua Shepherd - forthcoming - In Stephen Clarke, Hazem Zohny & Julian Savulescu (eds.), Rethinking Moral Status.
    The chief themes of this discussion are as follows. First, we need a theory of the grounds of moral status that could guide practical considerations regarding how to treat the wide range of potentially conscious entities with which we are acquainted – injured humans, cerebral organoids, chimeras, artificially intelligent machines, and non-human animals. I offer an account of phenomenal value that focuses on the structure and sophistication of phenomenally conscious states at a time and over time in the (...)
  6. Machine Intentionality, the Moral Status of Machines, and the Composition Problem.David Leech Anderson - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 312-333.
    According to the most popular theories of intentionality, a family of theories we will refer to as “functional intentionality,” a machine can have genuine intentional states so long as it has functionally characterizable mental states that are causally hooked up to the world in the right way. This paper considers a detailed description of a robot that seems to meet the conditions of functional intentionality, but which falls victim to what I call “the composition problem.” One obvious way to escape (...)
  7. Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While (...)
  8. Presumptuous aim attribution, conformity, and the ethics of artificial social cognition.Owen C. King - 2020 - Ethics and Information Technology 22 (1):25-37.
    Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines (...)
  9. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...) error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  10. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues (...)
  11. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and (...)
  12. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the (...)
  13. Towards broadening the perspective on lethal autonomous weapon systems ethics and regulations.Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho - 2020 - In Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho (eds.), Rio Seminar on Autonomous Weapons Systems. Brasília: Alexandre de Gusmão Foundation. pp. 133-158.
    Our reflections on LAWS issues are the result of the work of our research group on AI and ethics at the Informatics Center in partnership with the Information Science Department, both from the Federal University of Pernambuco, Brazil. In particular, our propositions and provocations are tied to Bianca Ximenes’s ongoing doctoral thesis, advised by Prof. Geber Ramalho, from the area of computer science, and co-advised by Prof. Diego Salcedo, from the humanities. Our research group is interested in answering two tricky (...)
  14. Artificial consciousness: A perspective from the free energy principle.Wanja Wiese - manuscript
    Could a sufficiently detailed computer simulation of consciousness replicate consciousness? In other words, is performing the right computations sufficient for artificial consciousness? Or will there remain a difference between simulating and being a conscious system, because the right computations must be implemented in the right way? From the perspective of Karl Friston's free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated (...)
  15. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
  16. What You Are and Its Affects on Moral Status: Godman's Epistemology and Morality of Human Kinds, Gunkel's Robot Rights, and Schneider on Artificial You.Lantz Fleming Miller - 2021 - Human Rights Review 22 (4):525-531.
    Thanks to mounting discussion about projected technologies’ possibly altering the species mentally and physically, philosophical investigation of what human beings are proceeds robustly. Many thinkers contend that whatever we are has little to do with how we should behave. Yet, tampering with what the human being is may tread upon human rights to be whatever one is. Rights given in widely recognized documents such as the U.N. Declaration of the Rights of Indigenous Peoples assume what humans are and need depends (...)
  17. The Minimal Cognitive Grid: A Tool to Rank the Explanatory Status of Cognitive Artificial Systems.Antonio Lieto - 2022 - Proceedings of AISC 2022.
  18. Non-Human Moral Status: Problems with Phenomenal Consciousness.Joshua Shepherd - 2023 - American Journal of Bioethics Neuroscience 14 (2):148-157.
    Consciousness-based approaches to non-human moral status maintain that consciousness is necessary for (some degree or level of) moral status. While these approaches are intuitive to many, in this paper I argue that the judgment that consciousness is necessary for moral status is not secure enough to guide policy regarding non-humans, that policies responsive to the moral status of non-humans should take seriously the possibility that psychological features independent of consciousness are sufficient for (...)
  19. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality (...)
  20. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, (...)
  21. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):472-478.
    Henry Shevlin’s paper, “How could we know when a robot was a moral patient?”, argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is (...)
  22. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to conflict with genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant described the human tendency to evaluate our ‘natural necessity’ through ‘happiness’ as the (...)
  23. Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2012 - In Peter Adamson (ed.), Stanford Encyclopedia of Philosophy. Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., (...)
  24. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion.Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body (...)
  25. The Neural Correlates of Consciousness.Jorge Morales & Hakwan Lau - 2020 - In Uriah Kriegel (ed.), The Oxford Handbook of the Philosophy of Consciousness. Oxford: Oxford University Press. pp. 233-260.
    In this chapter, we discuss a selection of current views of the neural correlates of consciousness (NCC). We focus on the different predictions they make, in particular with respect to the role of prefrontal cortex (PFC) during visual experiences, which is an area of critical interest and some source of contention. Our discussion of these views focuses on the level of functional anatomy, rather than at the neuronal circuitry level. We take this approach because we currently understand more about experimental (...)
  26. Moral significance of phenomenal consciousness.Neil Levy & Julian Savulescu - 2009 - Progress in Brain Research.
    Recent work in neuroimaging suggests that some patients diagnosed as being in the persistent vegetative state are actually conscious. In this paper, we critically examine this new evidence. We argue that though it remains open to alternative interpretations, it strongly suggests the presence of consciousness in some patients. However, we argue that its ethical significance is less than many people seem to think. There are several different kinds of consciousness, and though all kinds of consciousness have some ethical significance, different (...)
  27. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics and an ongoing debate between human rights and rule of law, moral philosophers, legal and political scientists are facing difficulties answering questions like, “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be in question because, in the near future, the scientists (considerably the most rational (...)
  28. Autonomous Weapons Systems and the Moral Equality of Combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 3 (6).
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, (...)
  29. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
  30. Moral Agency in Artificial Intelligence (Robots).Saleh Gorbanian - 2020 - Ethical Reflections 1 (1):11-32.
    Growing technological advances in intelligent artifacts and bitter experiences of the past have emphasized the need to use and operate ethics in this field. Accordingly, it is vital to discuss the ethical integrity of having intelligent artifacts. Concerning the method of gathering materials, the current study uses library and documentary research followed by attribution style. Moreover, descriptive analysis is employed in order to analyze data. Explaining and criticizing the opposing views in this field and reviewing the related literature, it is (...)
  31. Disentangling Human Nature from Moral Status: Lessons for and from Philip K. Dick.James Okapal - 2023 - Journal of Science Fiction and Philosophy 6.
    A common interpretation of Philip K. Dick’s texts _Do Androids Dream of Electric Sheep?_ and _We Can Build You_ is that they attempt to answer the question “What does it mean to be human?” Unfortunately, these interpretations fail to deal with the fact that the term “human” has both metaphysical and moral connotations. Metaphysical meanings associated with theories of human nature and moral meanings associated with theories of moral status are thus blurred in the novels (...)
  32. The Measurement Problem of Consciousness.Heather Browning & Walter Veit - 2020 - Philosophical Topics 48 (1):85-108.
    This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence of and properties of consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans and here we argue that there are particular problems in application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first (...)
  33. Artificial Beings Worthy of Moral Consideration in Virtual Environments: An Analysis of Ethical Viability.Stefano Gualeni - 2020 - Journal of Virtual Worlds Research 13 (1).
    This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the (...)
  34. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence.Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates (...)
  35. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems.Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the (...)
  36. Sustained Representation of Perspectival Shape.Jorge Morales, Axel Bax & Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences of the United States of America 117 (26):14873–14882.
    Arguably the most foundational principle in perception research is that our experience of the world goes beyond the retinal image; we perceive the distal environment itself, not the proximal stimulation it causes. Shape may be the paradigm case of such “unconscious inference”: When a coin is rotated in depth, we infer the circular object it truly is, discarding the perspectival ellipse projected on our eyes. But is this really the fate of such perspectival shapes? Or does a tilted coin retain (...)
  37. Distributed cognition and distributed morality: Agency, artifacts and systems.Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in (...)
  38. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for (...)
  39. Philosophy and Theory of Artificial Intelligence 2021.Vincent C. Müller (ed.) - 2022 - Berlin: Springer.
    This book gathers contributions from the fourth edition of the Conference on "Philosophy and Theory of Artificial Intelligence" (PT-AI), held on 27-28th of September 2021 at Chalmers University of Technology, in Gothenburg, Sweden. It covers topics at the interface between philosophy, cognitive science, ethics and computing. It discusses advanced theories fostering the understanding of human cognition, human autonomy, dignity and morality, and the development of corresponding artificial cognitive structures, analyzing important aspects of the relationship between humans and AI (...)
  40. Amoral, im/moral and dis/loyal: Children’s moral status in child welfare. Zlatana Knezevic - 2017 - Childhood 24 (4):470-484.
    This article is a discursive examination of children’s status as knowledgeable moral agents within the Swedish child welfare system and in the widely used assessment framework BBIC. Departing from Fricker’s concept of epistemic injustice, three discursive positions of children’s moral status are identified: amoral, im/moral and dis/loyal. The findings show the undoubtedly moral child as largely missing and children’s agency as diminished, deviant or rendered ambiguous. Epistemic injustice applies particularly to disadvantaged children with difficult (...)
    2 citations
  41. Artificial Consciousness Is Morally Irrelevant. Bruce P. Blackshaw - 2023 - American Journal of Bioethics Neuroscience 14 (2):72-74.
    It is widely agreed that possession of consciousness contributes to an entity’s moral status, even if it is not necessary for moral status (Levy and Savulescu 2009). An entity is considered to have...
    1 citation
  42. Is There a Trade-Off Between Human Autonomy and the ‘Autonomy’ of AI Systems? C. Prunkl - 2022 - In Conference on Philosophy and Theory of Artificial Intelligence. Springer International Publishing. pp. 67-71.
    Autonomy is often considered a core value of Western society that is deeply entrenched in moral, legal, and political practices. The development and deployment of artificial intelligence (AI) systems to perform a wide variety of tasks has raised new questions about how AI may affect human autonomy. Numerous guidelines on the responsible development of AI now emphasise the need for human autonomy to be protected. In some cases, this need is linked to the emergence of increasingly ‘autonomous’ (...)
  43. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligence. Alireza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (1):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered as morally responsible entities and as having special rights. Recently, the contemporary philosophy of mind philosopher, Eric Schwitzgebel, has tried to defend the possibility of equal rights of AIs and human beings (in an imaginary future), by designing a new argument (2015). In this paper, after an introduction, the author reviews and analyzes the (...)
  44. Xinxue (The Philosophy of Mind) System. Cheng Gong - manuscript
    Xinxue (the philosophy of mind) was founded by the ancient Chinese philosopher Wang Yangming of the Ming Dynasty and has a history of over 700 years. Its ideas have deeply influenced East Asian countries such as China, Japan, and Korea in the field of social philosophy, and even indirectly promoted Japan's Meiji Restoration movement. Scholars from all over the world have conducted numerous studies and explorations of it, but overall there is a lack of systematic exploration and research. This article (...)
  45. Artificial moral experts: asking for ethical advice to artificial intelligent assistants. Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this (...)
  46. Should the State Prohibit the Production of Artificial Persons? Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  47. The moral status of micro-inequities: In favour of institutional solutions. Samantha Brennan - manuscript
    This chapter is about micro-inequities and their connection to the problem of implicit bias. It begins by defining micro-inequities, goes on to discuss what makes them wrong and what solutions might be appropriate given the institutional context in which they occur.
    7 citations
  48. The Moral Status of Animals: Degrees of Moral Status and the Interest-Based Approach. Zorana Todorovic - 2021 - Philosophy and Society 2 (32):282–295.
    This paper addresses the issue of the moral status of non-human animals, or the question whether sentient animals are morally considerable. The arguments for and against the moral status of animals are discussed, above all the argument from marginal cases. It is argued that sentient animals have moral status based on their having interests in their experiential well-being, but that there are degrees of moral status. Two interest-based approaches are presented and discussed: (...)
    3 citations
  49. Materialism and the Moral Status of Animals. Jonathan Birch - 2022 - Philosophical Quarterly 72 (4):795-815.
    Consciousness has an important role in ethics: when a being consciously experiences the frustration or satisfaction of its interests, those interests deserve higher moral priority than those of a behaviourally similar but non-conscious being. I consider the relationship between this ethical role and an a posteriori (or “type-B”) materialist solution to the mind-body problem. It is hard to avoid the conclusion that, if type-B materialism is correct, then the reference of the concept of phenomenal consciousness is radically indeterminate between (...)
    9 citations
  50. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential (...)
    2 citations
1 — 50 / 998