  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, in which the manufacturer/operator of the machine is in principle no longer capable of predicting the machine's future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • Mapping Value Sensitive Design onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283-296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  • Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and, with the use of machine learning techniques, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers have (...)
  • The Borg–eye and the We–I. The production of a collective living body through wearable computers. Nicola Liberati - 2020 - AI and Society 35 (1):39-49.
    The aim of this work is to analyze the constitution of a new collective subject through wearable computers. Wearable computers are emerging technologies that are expected to become pervasive in the near future. They are devices designed to be on us every single moment of our life and to capture every experience we have. Therefore, we need to be prepared for such intrusive devices and to analyze the potential effects they will have on us and our society. Thanks to (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • Eight grand challenges for value sensitive design from the 2016 Lorentz workshop. Batya Friedman, Maaike Harbers, David G. Hendry, Jeroen van den Hoven, Catholijn Jonker & Nick Logler - 2018 - Ethics and Information Technology 23 (1):5-16.
    In this article, we report on eight grand challenges for value sensitive design, which were developed at a one-week workshop, Value Sensitive Design: Charting the Next Decade, Lorentz Center, Leiden, The Netherlands, November 14–18, 2016. A grand challenge is a substantial problem, opportunity, or question that motivates sustained research and design activity. The eight grand challenges are: Accounting for Power, Evaluating Value Sensitive Design, Framing and Prioritizing Values, Professional and Industry Appropriation, Tech Policy, Values and Human Emotions, Value Sensitive Design (...)
  • Philosophy of Technology after the Empirical Turn. Philip Brey - 2010 - Techné: Research in Philosophy and Technology 14 (1):36-48.
    What are the strengths and weaknesses of contemporary philosophy of technology, and how may the field be developed and improved in the future? That is the question I will address in this paper. I will argue that in the past twenty-five years, philosophy of technology has entered a new era. This era has arrived with new and distinct issues and approaches that differ from those that came before it. Many of the new developments have been for the good. Yet, I (...)
  • A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Dmytro Mykhailov & Nicola Liberati - 2023 - Foundations of Science 28 (3):841-857.
    This paper aims to highlight the life of computer technologies to understand what kind of ‘technological intentionality’ is present in computers based upon the phenomenological elements constituting the objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by the digital (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • Algorithmic bias and the Value Sensitive Design approach. Judith Simon, Pak-Hang Wong & Gernot Rieder - 2020 - Internet Policy Review 9 (4).
    Recently, amid growing awareness that computer algorithms are not neutral tools but can cause harm by reproducing and amplifying bias, attempts to detect and prevent such biases have intensified. An approach that has received considerable attention in this regard is the Value Sensitive Design (VSD) methodology, which aims to contribute to both the critical analysis of (dis)values in existing technologies and the construction of novel technologies that account for specific desired values. This article provides a brief overview of the key (...)
  • Algorithmic Iteration for Computational Intelligence. Giuseppe Primiero - 2017 - Minds and Machines 27 (3):521-543.
    Machine awareness is a disputed research topic, in some circles considered a crucial step in realising Artificial General Intelligence. Understanding what it is, under which conditions such a feature could arise, and how it can be controlled is still a matter of speculation. A more concrete object of theoretical analysis is algorithmic iteration for computational intelligence, intended as the theoretical and practical ability of algorithms to design other algorithms for actions aimed at solving well-specified tasks. We know this ability is already (...)
  • A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine (...)
  • The Problem of Original Agency. Don Berkich - 2017 - Southwest Philosophy Review 33 (1):75-82.
    The problem of original intentionality—wherein computational states have at most derived intentionality, but intelligence presupposes original intentionality—has been disputed at some length in the philosophical literature by Searle, Dennett, Dretske, Block, and many others. Largely absent from these discussions is the problem of original agency: Robots and the computational states upon which they depend have at most derived agency. That is, a robot’s agency is wholly inherited from its designer’s original agency. Yet intelligence presupposes original agency at least as much (...)