  • Can AI Weapons Make Ethical Decisions? Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  • Moral Status and Intelligent Robots. John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    The Southern Journal of Philosophy, Volume 60, Issue 1, Page 88-117, March 2022.
  • Interdependence as the key for an ethical artificial autonomy. Filippo Pianca & Vieri Giuliano Santucci - forthcoming - AI and Society:1-15.
    Currently, the autonomy of artificial systems, robotic systems in particular, is certainly one of the most debated issues, both from the perspective of technological development and from that of its social impact and ethical repercussions. While theoretical considerations often focus on scenarios far beyond what can be concretely hypothesized from the current state of the art, the term "autonomy" is still used in a vague or overly general way. This reduces the possibilities for a precise analysis of such an important issue, thus leading (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    Over the current and previous decades, the field of AI ethics has been receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various other organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • Responses to Catastrophic AGI Risk: A Survey. Kaj Sotala & Roman V. Yampolskiy - 2015 - Physica Scripta 90.
    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to (...)
  • Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence. Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits make for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and, similarly, machine consciousness. N. Katherine Hayles's novel conception of nonconscious cognition in (...)
  • What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Joel Parthemore & Blay Whitby - 2013 - International Journal of Machine Consciousness 5 (2):105-129.
    In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of (...)
  • Implementations in Machine Ethics: A Survey. Suzanne Tolmeijer, Markus Kneer, Cristina Sarasua, Markus Christen & Abraham Bernstein - 2020 - ACM Computing Surveys 53 (6):1-38.
    Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxonomy to analyze machine ethics implementations with respect to their object (ethical theories), as well as their nontechnical and technical aspects. Second, an exhaustive selection and description of relevant (...)
  • O papel das emoções no processo de tomada de decisão moral diante de conflitos bioéticos [The role of emotions in the process of moral decision-making in the face of bioethical conflicts]. Caroline Izidoro Marim - 2020 - Veritas – Revista de Filosofia da PUCRS 65 (2):e36830.
    This article aims to show the crucial role of emotions in moral decision-making and their contribution to the resolution of bioethical conflicts. Contrary to the rationalist thesis, moral decision-making demands collaboration between reason and emotion or, in the terms used in metaethics, between cognition and emotion. Through an analysis of Antônio Damásio's theory and the theses of feminist moral philosophers such as Kathryn Pyne Addelson, among others, we intend to refute the recurrent conservative bioethical theses whose authority (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • The humanness of artificial non-normative personalities. Kevin B. Clark - 2017 - Behavioral and Brain Sciences 40:e259.
    Technoscientific ambitions for perfecting human-like machines, by advancing state-of-the-art neuromorphic architectures and cognitive computing, may end in ironic regret without pondering the humanness of fallible artificial non-normative personalities. Self-organizing artificial personalities individualize machine performance and identity through fuzzy conscientiousness, emotionality, extraversion/introversion, and other traits, rendering insights into technology-assisted human evolution, robot ethology/pedagogy, and best practices against unwanted autonomous machine behavior.
  • Consciousness and ethics: Artificially conscious moral agents. Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (1):177-192.
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
  • People are averse to machines making moral decisions. Yochanan E. Bigman & Kurt Gray - 2018 - Cognition 181 (C):21-34.
  • Building Moral Robots: Ethical Pitfalls and Challenges. John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots. Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that will interact with, respect, and embody human ethical values. However, the conceptual and practical problems of building such systems have not yet been resolved, and they pose a significant challenge for computational modeling. It seems that the lack of success in constructing such robots is due, ceteris paribus, to the conceptual and algorithmic limitations of the current design of ethical robots. This paper proposes a new approach for developing ethical capacities in (...)
  • Global workspace theory, Shanahan, and LIDA. Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (2):327-337.
  • Digital life, a theory of minds, and mapping human and machine cultural universals. Kevin B. Clark - 2020 - Behavioral and Brain Sciences 43:e98.
    Emerging cybertechnologies, such as social digibots, bend epistemological conventions of life and culture already complicated by human and animal relationships. Virtually-augmented niches of machines and organic life promise new free-energy-governed selection of intelligent digital life. These provocative eco-evolutionary contexts demand a theory of (natural and artificial) minds to characterize and validate the immersive social phenomena universally-shaping cultural affordances.
  • A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans. Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it means to be an (...)
  • Discourse analysis of academic debate of ethics for AGI. Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence (AGI), defined as machine intelligence with competence equal to or greater than that of humans, is a greatly anticipated technology with non-trivial existential risks. To date, social scientists have dedicated little effort to the ethics of AGI or of AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI: applied and/or 'basic' scientific disciplines, henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines, henceforth referred to as PADs (...)