References
  • Trust and antitrust. Annette Baier - 1986 - Ethics 96 (2):231-260.
  • Principles alone cannot guarantee ethical AI. Brent Mittelstadt - 2019 - Nature Machine Intelligence 1 (11):501-507.
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • The role of trust in knowledge. John Hardwig - 1991 - Journal of Philosophy 88 (12):693-708.
    Most traditional epistemologists see trust and knowledge as deeply antithetical: we cannot know by trusting in the opinions of others; knowledge must be based on evidence, not mere trust. I argue that this is badly mistaken. Modern knowers cannot be independent and self-reliant. In most disciplines, those who do not trust cannot know. Trust is thus often more epistemically basic than empirical evidence or logical argument, for the evidence and the argument are available only through trust. Finally, since the reliability (...)
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • GPT-3: its nature, scope, limits, and consequences. Luciano Floridi & Massimo Chiriatti - 2020 - Minds and Machines 30 (4):681-694.
    In this commentary, we discuss the nature of reversible and irreversible questions, that is, questions that may enable one to identify the nature of the source of their answers. We then introduce GPT-3, a third-generation, autoregressive language model that uses deep learning to produce human-like texts, and use the previous distinction to analyse it. We expand the analysis to present three tests based on mathematical, semantic, and ethical questions and show that GPT-3 is not designed to pass any of them. (...)
  • The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment. Jonathan Haidt - 2001 - Psychological Review 108 (4):814-834.
    Research on moral judgment has been dominated by rationalist models, in which moral judgment is thought to be caused by moral reasoning. The author gives 4 reasons for considering the hypothesis that moral reasoning does not cause moral judgment; rather, moral reasoning is usually a post hoc construction, generated after a judgment has been reached. The social intuitionist model is presented as an alternative to rationalist models. The model is a social model in that it deemphasizes the private reasoning done (...)
  • Persons, situations, and virtue ethics. John M. Doris - 1998 - Noûs 32 (4):504-530.
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Autonomous Cars: In Favor of a Mandatory Ethics Setting. Jan Gogoll & Julian F. Müller - 2017 - Science and Engineering Ethics 23 (3):681-700.
    The recent progress in the development of autonomous cars has seen ethical questions come to the forefront. In particular, life and death decisions regarding the behavior of self-driving cars in trolley dilemma situations are attracting widespread interest in the recent debate. In this essay we want to ask whether we should implement a mandatory ethics setting for the whole of society or, whether every driver should have the choice to select his own personal ethics setting. While the consensus view seems (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Alexander Hevelke & Julian Nida-Rümelin - 2015 - Science and Engineering Ethics 21 (3):619-630.
    A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement (...)
  • Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society 36 (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • Virtue ethics and situationist personality psychology. Maria Merritt - 2000 - Ethical Theory and Moral Practice 3 (4):365-383.
    In this paper I examine and reply to a deflationary challenge brought against virtue ethics. The challenge comes from critics who are impressed by recent psychological evidence suggesting that much of what we take to be virtuous conduct is in fact elicited by narrowly specific social settings, as opposed to being the manifestation of robust individual character. In answer to the challenge, I suggest a conception of virtue that openly acknowledges the likelihood of its deep, ongoing dependence upon particular social (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • In search of the moral status of AI: why sentience is a strong argument. Martin Gibert & Dominic Martin - 2021 - AI and Society:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  • What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Toward the ethical robot. James Gips - 1994 - In Kenneth M. Ford, Clark N. Glymour & Patrick J. Hayes (eds.), Android Epistemology. MIT Press. pp. 243-252.
  • Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society 35 (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Why robots should not be treated like animals. Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  • From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible? Sven Nyholm & Lily Frank - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. MIT Press. pp. 219-244.
    Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about objectifying attitudes. And mutual love between humans and sex-robots, if possible, could also help to make this sex more valuable. But is mutual love between humans and robots possible, or even conceivable? We discuss (...)
  • AI assisted ethics. Amitai Etzioni & Oren Etzioni - 2016 - Ethics and Information Technology 18 (2):149-156.
    The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instruments face is how to ensure that these instruments will not engage in unethical conduct. The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.
  • Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering. Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
  • What should we want from a robot ethic? Peter M. Asaro - 2006 - International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • Morality Play: A Model for Developing Games of Moral Expertise. Dan Staines, Paul Formosa & Malcolm Ryan - 2019 - Games and Culture 14 (4):410-429.
    According to cognitive psychologists, moral decision-making is a dual-process phenomenon involving two types of cognitive processes: explicit reasoning and implicit intuition. Moral development involves training and integrating both types of cognitive processes through a mix of instruction, practice, and reflection. Serious games are an ideal platform for this kind of moral training, as they provide safe spaces for exploring difficult moral problems and practicing the skills necessary to resolve them. In this article, we present Morality Play, a model for the (...)
  • Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
    In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. Hence, although we may (...)
  • Artificial agents among us: Should we recognize them as agents proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore (...)
  • Can Robotic AI Systems Be Virtuous and Why Does This Matter? Mihaela Constantinescu & Roger Crisp - 2022 - International Journal of Social Robotics 14 (6):1547-1557.
  • Ethical disagreement, ethical objectivism and moral indeterminacy. Russ Shafer-Landau - 1994 - Philosophy and Phenomenological Research 54 (2):331-344.
  • “Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems. Heather M. Roff & David Danks - 2018 - Journal of Military Ethics 17 (1):2-20.
    Autonomous weapons systems pose many challenges in complex battlefield environments. Previous discussions of them have largely focused on technological or policy issues. In contrast, we focus here on the challenge of trust in an AWS. One type of human trust depends only on judgments about the predictability or reliability of the trustee, and so is suitable for all manner of artifacts. However, AWSs that are worthy of the descriptor “autonomous” will not exhibit the required strong predictability in the complex, changing (...)
  • Moral agency, self-consciousness, and practical wisdom. Shaun Gallagher - 2007 - Journal of Consciousness Studies 14 (5-6):199-223.
    This paper argues that self-consciousness and moral agency depend crucially on both embodied and social aspects of human existence, and that the capacity for practical wisdom, phronesis, is central to moral personhood. The nature of practical wisdom is elucidated by drawing on rival analyses of expertise. Although ethical expertise and practical wisdom differ importantly, they are alike in that we can acquire them only in interaction with other persons and through habituation. The analysis of moral agency and practical wisdom is (...)
  • Autonomous Reboot: Kant, the categorical imperative, and contemporary challenges for machine ethicists. Jeffrey White - 2022 - AI and Society 37 (2):661-673.
    Ryan Tonkens has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. (...)
  • Four Kinds of Ethical Robots. James Moor - 2009 - Philosophy Now 72:12-14.
  • What Sparks Ethical Decision Making? The Interplay Between Moral Intuition and Moral Reasoning: Lessons from the Scholastic Doctrine. Lamberto Zollo, Massimiliano Matteo Pellegrini & Cristiano Ciappei - 2017 - Journal of Business Ethics 145 (4):681-700.
    Recent theories on cognitive science have stressed the significance of moral intuition as a counter to and complementary part of moral reasoning in decision making. Thus, the aim of this paper is to create an integrated framework that can account for both intuitive and reflective cognitive processes, in order to explore the antecedents of ethical decision making. To do that, we build on Scholasticism, an important medieval school of thought from which descends the main pillars of the modern Catholic social (...)
  • Artificial wisdom: a philosophical framework. Cheng-Hung Tsai - 2020 - AI and Society 35:937-944.
    Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is (...)
  • Ethical Decision-Making: A Case for the Triple Font Theory. Surendra Arjoon - 2007 - Journal of Business Ethics 71 (4):395-410.
    This paper discusses the philosophical argument and the application of the Triple Font Theory for moral evaluation of human acts and attempts to integrate the conceptual components of major moral theories into a systematic internally consistent decision-making model that is theoretically driven. The paper incorporates concepts such as formal and material cooperation and the Principle of Double Effect into the theoretical framework. It also advances the thesis that virtue theory ought to be included in any adequate justification of morality and (...)
  • Future progress in artificial intelligence: A poll among experts. Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science (...)
  • A minimalist model of the artificial autonomous moral agent (AAMA). Ioan Muntean & Don Howard - 2016 - In 2016 AAAI Spring Symposium Series. AAAI Press.
    This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical behavior of humans) and a set of dispositional traits to learn how to classify different actions (given background knowledge) as morally right, wrong, (...)
  • This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative (...)