  • Explainable AI in the military domain. Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  • Design for values and conceptual engineering. Herman Veluwenkamp & Jeroen van den Hoven - 2023 - Ethics and Information Technology 25 (1):1-12.
    Politicians and engineers are increasingly realizing that values are important in the development of technological artefacts. What is often overlooked is that different conceptualizations of these abstract values lead to different design-requirements. For example, designing social media platforms for deliberative democracy sets us up for technical work on completely different types of architectures and mechanisms than designing for so-called liquid or direct forms of democracy. Thinking about Democracy is not enough, we need to design for the proper conceptualization of these (...)
  • Reasons for Meaningful Human Control. Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term coined in the political and legal debate on autonomous weapon systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • How to Do Things with Information Online. A Conceptual Framework for Evaluating Social Networking Platforms as Epistemic Environments. Lavinia Marin - 2022 - Philosophy and Technology 35 (77).
    This paper proposes a conceptual framework for evaluating how social networking platforms fare as epistemic environments for human users. I begin by proposing a situated concept of epistemic agency as fundamental for evaluating epistemic environments. Next, I show that algorithmic personalisation of information makes social networking platforms problematic for users’ epistemic agency because these platforms do not allow users to adapt their behaviour sufficiently. Using the tracing principle inspired by the ethics of self-driving cars, I operationalise it here and identify (...)
  • (1 other version) Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Reflection machines: increasing meaningful human control over Decision Support Systems. W. F. G. Haselager, H. K. Schraffenberger, R. J. M. van Eerdt & N. A. J. Cornelissen - 2022 - Ethics and Information Technology 24 (2).
    Rapid developments in Artificial Intelligence are leading to an increasing human reliance on machine decision making. Even in collaborative efforts with Decision Support Systems (DSSs), where a human expert is expected to make the final decisions, it can be hard to keep the expert actively involved throughout the decision process. DSSs suggest their own solutions and thus invite passive decision making. To keep humans actively ‘on’ the decision-making loop and counter overreliance on machines, we propose a ‘reflection machine’ (RM). This (...)
  • Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems. Steven Umbrello - 2021 - Dissertation, Consortium Fino
    The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations and from the design requirements that govern AWS engineering and subsequently the tracking (...)
  • Self-Driving Vehicles—an Ethical Overview. Sven Ove Hansson, Matts-Åke Belin & Björn Lundgren - 2021 - Philosophy and Technology 34 (4):1383-1408.
    The introduction of self-driving vehicles gives rise to a large number of ethical issues that go beyond the common, extremely narrow, focus on improbable dilemma-like scenarios. This article provides a broad overview of realistic ethical issues related to self-driving vehicles. Some of the major topics covered are as follows: Strong opinions for and against driverless cars may give rise to severe social and political conflicts. A low tolerance for accidents caused by driverless vehicles may delay the introduction of driverless systems (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Coupling levels of abstraction in understanding meaningful human control of autonomous weapons: a two-tiered approach. Steven Umbrello - 2021 - Ethics and Information Technology 23 (3):455-464.
    The international debate on the ethics and legality of autonomous weapon systems (AWS), along with the call for a ban, primarily focus on the nebulous concept of fully autonomous AWS. These are AWS capable of target selection and engagement absent human supervision or control. This paper argues that such a conception of autonomy is divorced from both military planning and decision-making operations; it also ignores the design requirements that govern AWS engineering and the subsequent tracking and tracing of moral responsibility. (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Moral Dilemmas, Amoral Obligations, and Responsible Innovation; Two-Dimensional “Human Control” Over “Autonomous” Socio-Technical Systems. Keyvan Alasti - forthcoming - Ethics, Policy and Environment.
    In some cases, the term ‘Responsible Innovation’ has been considered a type of ethical solution to the Collingridge predicament in control of technology development. In this article, I claim that two different approaches to responsible innovation (i.e. Van den Hoven’s innovation-based approach and Owen’s social-based approach) can be considered as two different dimensions that, while being conflicting, dialectically interact and thus can be useful for solving the problem of Collingridge. For this purpose, I argue that the first approach that resorts (...)
  • Autonomous weapon systems and responsibility gaps: a taxonomy. Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon (...)
  • Accountability and Control Over Autonomous Weapon Systems: A Framework for Comprehensive Human Oversight. Ilse Verdiesen, Filippo Santoni de Sio & Virginia Dignum - 2020 - Minds and Machines 31 (1):137-163.
    Accountability and responsibility are key concepts in the academic and societal debate on Autonomous Weapon Systems, but these notions are often used as high-level overarching constructs and are not operationalised to be useful in practice. “Meaningful Human Control” is often mentioned as a requirement for the deployment of Autonomous Weapon Systems, but a common definition of what this notion means in practice, and a clear understanding of its relation with responsibility and accountability is also lacking. In this paper, we present (...)
  • Impactful Conceptual Engineering: Designing Technological Artefacts Ethically. Herman Veluwenkamp - forthcoming - Ethical Theory and Moral Practice:1-16.
    Conceptual engineering is the design, evaluation and implementation of concepts. Despite its popularity, some have argued that the methodology is not worthwhile, because the implementation of new concepts is both inscrutable and beyond our control. In the recent literature we see different responses to this worry. Some have argued that it is for political reasons just as well that implementation is such a difficult task, while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation. In (...)
  • A metaphysical account of agency for technology governance. Sadjad Soltanzadeh - forthcoming - AI and Society:1-12.
    The way in which agency is conceptualised has implications for understanding human–machine interactions and the governance of technology, especially artificial intelligence (AI) systems. Traditionally, agency is conceptualised as a capacity, defined by intrinsic properties, such as cognitive or volitional facilities. I argue that the capacity-based account of agency is inadequate to explain the dynamics of human–machine interactions and guide technology governance. Instead, I propose to conceptualise agency as impact. Agents as impactful entities can be identified at different levels: from the (...)
  • The European Commission report on ethics of connected and automated vehicles and the future of ethics of transportation. Filippo Santoni de Sio - 2021 - Ethics and Information Technology 23 (4):713-726.
    The paper has two goals. The first is presenting the main results of the recent report Ethics of Connected and Automated Vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility written by the Horizon 2020 European Commission Expert Group to advise on specific ethical issues raised by driverless mobility, of which the author of this paper has been member and rapporteur. The second is presenting some broader ethical and philosophical implications of these recommendations, and using these to contribute to (...)
  • Human–machine coordination in mixed traffic as a problem of Meaningful Human Control. Giulio Mecacci, Simeon C. Calvert & Filippo Santoni de Sio - 2023 - AI and Society 38 (3):1151-1166.
    The urban traffic environment is characterized by the presence of a highly differentiated pool of users, including vulnerable ones. This makes vehicle automation particularly difficult to implement, as a safe coordination among those users is hard to achieve in such an open scenario. Different strategies have been proposed to address these coordination issues, but all of them have been found to be costly for they negatively affect a range of human values (e.g. safety, democracy, accountability…). In this paper, we claim (...)
  • Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue. Mahmut Alpertunga Kara - forthcoming - Medicine, Health Care and Philosophy:1-13.
    Artificial intelligence-based clinical decision support systems have a potential to improve clinical practice, but they may have a negative impact on the physician-patient dialogue, because of the control problem. Physician-patient dialogue depends on human qualities such as compassion, trust, and empathy, which are shared by both parties. These qualities are necessary for the parties to reach a shared understanding -the merging of horizons- about clinical decisions. The patient attends the clinical encounter not only with a malfunctioning body, but also with (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems. Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes & Miranda van Hooff - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):380-389.
    Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop,” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach. Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Autonomous Military Systems: collective responsibility and distributed burdens. Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)