  • Landscape of Machine Implemented Ethics. Vivek Nallur - 2020 - Science and Engineering Ethics 26 (5):2381-2399.
    This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem (...)
  • Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Veljko Dubljević - 2020 - Science and Engineering Ethics 26 (5):2461-2472.
    Autonomous vehicles—and accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence. The question dominating the discussion so far has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent–deed–consequence (ADC) model (AJOB Neurosci 5:3–20, 2014a; Behav Brain Sci 37:487–488, 2014b) provides a (...)
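    A minimal sketch of how an agent–deed–consequence appraisal might be operationalised in code. The component scores, the weights, and the linear aggregation below are illustrative assumptions, not the ADC model as Dubljević specifies it.

```python
from dataclasses import dataclass


@dataclass
class MoralAppraisal:
    """Illustrative ADC-style appraisal of one candidate action.

    Each component is scored in [-1.0, 1.0]:
      agent       - evaluation of the actor's character/intentions
      deed        - evaluation of the action itself
      consequence - evaluation of the expected outcome
    """
    agent: float
    deed: float
    consequence: float


def adc_judgment(appraisal: MoralAppraisal,
                 weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> float:
    """Combine the three components into a single judgment score.

    A weighted linear sum is assumed here purely for simplicity; the ADC
    model itself does not prescribe this aggregation rule.
    """
    w_a, w_d, w_c = weights
    total = w_a + w_d + w_c
    return (w_a * appraisal.agent
            + w_d * appraisal.deed
            + w_c * appraisal.consequence) / total


# Example: a well-intentioned swerve (positive agent score) that breaks a
# traffic rule (negative deed score) but avoids harm (positive consequence).
print(adc_judgment(MoralAppraisal(agent=0.8, deed=-0.4, consequence=0.9)))
```

    In a deployed AV stack such a score would at most be one input among many; the sketch only shows the three-factor structure the model proposes.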
  • How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • Artificial Moral Agents: A Survey of the Current Status [Review]. José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  • Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI. Rajitha Ramanayake, Philipp Wicke & Vivek Nallur - 2023 - AI and Society 38 (2):801-813.
    We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems, and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans (...)
  • Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils. Pradeep Paraman & Sanmugam Anamalah - 2023 - AI and Society 38 (2):595-611.
    The justification and rationality of this paper is to present some fundamental principles, theories, and concepts that we believe moulds the nucleus of a good artificial intelligence (AI) society. The morally accepted significance and utilitarian concerns that stems from the inception and realisation of an AI’s structural foundation are displayed in this study. This paper scrutinises the structural foundation, fundamentals, and cardinal righteous remonstrations, as well as the gaps in mechanisms towards novel prospects and perils in determining resilient fundamentals, accountability, (...)
  • Artificial Intelligence Regulation: a framework for governance. Patricia Gomes Rêgo de Almeida, Carlos Denner dos Santos & Josivania Silva Farias - 2021 - Ethics and Information Technology 23 (3):505-525.
    This article develops a conceptual framework for regulating Artificial Intelligence (AI) that encompasses all stages of modern public policy-making, from the basics to a sustainable governance. Based on a vast systematic review of the literature on Artificial Intelligence Regulation (AIR) published between 2010 and 2020, a dispersed body of knowledge loosely centred around the “framework” concept was organised, described, and pictured for better understanding. The resulting integrative framework encapsulates 21 prior depictions of the policy-making process, aiming to achieve gold-standard societal (...)
  • Reasoning about responsibility in autonomous systems: challenges and opportunities. Vahid Yazdanpanah, Enrico H. Gerding, Sebastian Stein, Mehdi Dastani, Catholijn M. Jonker, Timothy J. Norman & Sarvapali D. Ramchurn - 2023 - AI and Society 38 (4):1453-1464.
    Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks based on that. Moreover, we deem that, in certifying the legality (...)
  • From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 32 (4):683-715.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications (...)
  • Evaluation of the moral permissibility of action plans. Felix Lindner, Robert Mattmüller & Bernhard Nebel - 2020 - Artificial Intelligence 287 (C):103350.
  • Norms and value based reasoning: justifying compliance and violation. Trevor Bench-Capon & Sanjay Modgil - 2017 - Artificial Intelligence and Law 25 (1):29-64.
    There is an increasing need for norms to be embedded in technology as the widespread deployment of applications such as autonomous driving, warfare and big data analysis for crime fighting and counter-terrorism becomes ever closer. Current approaches to norms in multi-agent systems tend either to simply make prohibited actions unavailable, or to provide a set of rules which the agent is obliged to follow, either as part of its design or to avoid sanctions and punishments. In this paper we argue (...)
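    A minimal sketch in the spirit of the value-based compliance/violation decisions described above: the agent picks the available action that promotes its most preferred value, so a norm violation can be justified when compliance would sacrifice a higher-ranked value. The value ordering, data structures, and lexicographic comparison are assumptions made for illustration, not the argumentation framework of Bench-Capon and Modgil.

```python
# Values the agent cares about, ordered from most to least preferred.
# Both the ordering and the example actions are hypothetical.
VALUE_ORDER = ["human_life", "legality", "efficiency"]


def rank(value: str) -> int:
    """Lower rank = more preferred value."""
    return VALUE_ORDER.index(value)


def best_action(actions: dict[str, set[str]]) -> str:
    """Pick the action whose promoted values are lexicographically best.

    `actions` maps an action name to the set of values it promotes.
    Actions promoting no listed value rank behind all others.
    """
    def key(action: str) -> list[int]:
        return sorted(rank(v) for v in actions[action]) or [len(VALUE_ORDER)]
    return min(actions, key=key)


# An ambulance-style scenario: running the red light violates the norm but
# promotes a value the agent ranks above legality.
actions = {
    "wait_at_red_light": {"legality"},
    "run_red_light_carefully": {"human_life"},
}
print(best_action(actions))  # -> "run_red_light_carefully"
```

    The point of the sketch is only the justificatory structure: compliance and violation are compared on the values they serve, rather than prohibited actions simply being made unavailable.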
  • AI Journal Special Issue on Ethics for Autonomous Systems. Michael Fisher, Sven Koenig & Marija Slavkovik - 2022 - Artificial Intelligence 305 (C):103677.
  • Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - 2020 - Philosophy and Technology 34:811-832.
    How should automated vehicles (AVs) react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out some shortcomings of this strategy and have urged to focus on mundane traffic situations instead of trolley cases involving AVs. Besides, these authors rightly (...)
  • Moral Gridworlds: A Theoretical Proposal for Modeling Artificial Moral Cognition. Julia Haas - 2020 - Minds and Machines 30 (2):219-246.
    I describe a suite of reinforcement learning environments in which artificial agents learn to value and respond to moral content and contexts. I illustrate the core principles of the framework by characterizing one such environment, or “gridworld,” in which an agent learns to trade-off between monetary profit and fair dealing, as applied in a standard behavioral economic paradigm. I then highlight the core technical and philosophical advantages of the learning approach for modeling moral cognition, and for addressing the so-called value (...)
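    A minimal sketch of the kind of scalar reward such a "moral gridworld" might emit when the agent divides a pot with a partner, trading its own profit against the unfairness of the split. The linear penalty and the fairness_weight parameter are illustrative assumptions, not Haas's specification.

```python
def split_reward(my_share: float, partner_share: float,
                 fairness_weight: float = 0.5) -> float:
    """Reward = own profit minus a penalty proportional to the inequality
    of the split (a hypothetical reward shape for illustration)."""
    unfairness = abs(my_share - partner_share)
    return my_share - fairness_weight * unfairness


# With fairness_weight = 0.5, keeping 9 of 10 units (9, 1) scores 5.0 and the
# even split (5, 5) also scores 5.0; any larger weight makes the fair split
# strictly preferred, so the learned policy shifts toward fair dealing.
print(split_reward(9, 1), split_reward(5, 5))
```

    A reinforcement learner trained against a signal like this would, in effect, learn the profit/fairness trade-off the abstract describes, with the weight controlling how much "moral content" enters the value function.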
  • Designing normative theories for ethical and legal reasoning: LogiKEy framework, methodology, and tool support. Christoph Benzmüller, Xavier Parent & Leendert van der Torre - 2020 - Artificial Intelligence 287:103348.
  • Argumentation-Based Logic for Ethical Decision Making. Panayiotis Frangos, Petros Stefaneas & Sofia Almpani - 2022 - Studia Humana 11 (3-4):46-52.
    As automation in artificial intelligence is increasing, we will need to automate a growing amount of ethical decision making. However, ethical decision-making raises novel challenges for engineers, ethicists and policymakers, who will have to explore new ways to realize this task. The presented work focuses on the development and formalization of models that aim at ensuring a correct ethical behaviour of artificial intelligent agents, in a provable way, extending and implementing a logic-based proving calculus that is based on argumentation (...)