  • Three philosophical perspectives on the relation between technology and society, and how they affect the current debate about artificial intelligence. Ibo van de Poel - 2020 - Human Affairs 30 (4):499-511.
    Three philosophical perspectives on the relation between technology and society are distinguished and discussed: 1) technology as an autonomous force that determines society; 2) technology as a human construct that can be shaped by human values, and 3) a co-evolutionary perspective on technology and society where neither of them determines the other. The historical evolution of the three perspectives is discussed and it is argued that all three are still present in current debates about technological change and how it may (...)
  • Are we done with (Wordy) manifestos? Towards an introverted digital humanism. Giacomo Pezzano - 2024 - Journal of Responsible Technology 17 (C):100078.
  • Robotrust and Legal Responsibility. Ugo Pagallo - 2010 - Knowledge, Technology & Policy 23 (3):367-379.
    The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems both in the fields of contractual and extra-contractual liability in that robots negotiate, enter into contracts, establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new (...)
  • Killers, fridges, and slaves: a legal journey in robotics. [REVIEW] Ugo Pagallo - 2011 - AI and Society 26 (4):347-354.
    This paper adopts a legal perspective to counter some exaggerations of today’s debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relative strong consensus among lawyers about some key notions as, say, agency and liability in the current use of robots. However, dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, time has come for lawyers to acknowledge that (...)
  • Cracking down on autonomy: three challenges to design in IT Law. [REVIEW] U. Pagallo - 2012 - Ethics and Information Technology 14 (4):319-328.
    The paper examines how technology challenges conventional borders of national legal systems, as shown by cases that scholars address as a part of their everyday work in the fields of information technology (IT)-Law, i.e., computer crimes, data protection, digital copyright, and so forth. Information on the internet has in fact a ubiquitous nature that transcends political borders and questions the notion of the law as made of commands enforced through physical sanctions. Whereas many of today’s impasses on jurisdiction, international conflicts (...)
  • Smart soldiers: towards a more ethical warfare. Femi Richard Omotoyinbo - 2023 - AI and Society 38 (4):1485-1491.
    It is a truism that, due to human weaknesses, human soldiers have yet to have sufficiently ethical warfare. It is arguable that the likelihood of human soldiers to breach the Principle of Non-Combatant Immunity, for example, is higher in contrast to smart soldiers who are emotionally inept. Hence, this paper examines the possibility that the integration of ethics into smart soldiers will help address moral challenges in modern warfare. The approach is to develop and employ smart soldiers that are enhanced with ethical (...)
  • Automated cars meet human drivers: responsible human-robot coordination and the ethics of mixed traffic. Sven Nyholm & Jilles Smids - 2020 - Ethics and Information Technology 22 (4):335-344.
    In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, (...)
  • Ethical regulations on robotics in Europe. Michael Nagenborg, Rafael Capurro, Jutta Weber & Christoph Pingel - 2008 - AI and Society 22 (3):349-366.
    There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations which may be applied. We will focus on ethical issues with regard to “responsibility and autonomous robots”, “machines as a replacement for humans”, and “tele-presence”. Furthermore, we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues nor of regulations in the field of (...)
  • Philosophical Inquiry into Computer Intentionality: Machine Learning and Value Sensitive Design. Dmytro Mykhailov - 2023 - Human Affairs 33 (1):115-127.
    Intelligent algorithms together with various machine learning techniques hold a dominant position among major challenges for contemporary value sensitive design. Self-learning capabilities of current AI applications blur the causal link between programmer and computer behavior. This creates a vital challenge for the design, development and implementation of digital technologies nowadays. This paper seeks to provide an account of this challenge. The main question that shapes the current analysis is the following: What conceptual tools can be developed within the value sensitive (...)
  • A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics. Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice of medicine (...)
  • A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Dmytro Mykhailov & Nicola Liberati - 2023 - Foundations of Science 28 (3):841-857.
    This paper aims to highlight the life of computer technologies to understand what kind of ‘technological intentionality’ is present in computers based upon the phenomenological elements constituting the objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by the digital (...)
  • On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with (...)
  • Information Gain and Approaching True Belief. Jonas Clausen Mork - 2015 - Erkenntnis 80 (1):77-96.
    Recent years have seen a renewed interest in the philosophical study of information. In this paper a two-part analysis of information gain—objective and subjective—in the context of doxastic change is presented and discussed. Objective information gain is analyzed in terms of doxastic movement towards true belief, while subjective information gain is analyzed as an agent’s expectation value of her objective information gain for a given doxastic change. The resulting expression for subjective information gain turns out to be a familiar one (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579-587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • “An Eye Turned into a Weapon”: a Philosophical Investigation of Remote Controlled, Automated, and Autonomous Drone Warfare. Oliver Müller - 2020 - Philosophy and Technology 34 (4):875-896.
    Military drones combine surveillance technology with missile equipment in a far-reaching way. In this article, I argue that military drones could and should be an object of philosophical investigation, referring in particular to Chamayou’s theory of the drone; Chamayou also coined the term “an eye turned into a weapon.” Focusing on issues of human self-understanding, agency, and alterity, I examine the intricate human-technology relations in the context of designing and deploying military drones. For that purpose, I am drawing on the (...)
  • Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268.
    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)
  • A Softwaremodule for an Ethical Elder Care Robot. Design and Implementation. Catrin Misselhorn - 2019 - Ethics in Progress 10 (2):68-81.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This is particularly true of artificial systems that are used in geriatric care environments. The goal of this article is to describe how one can approach the design of an elder care robot which is capable of moral decision-making and moral learning. A conceptual design for the development of such a system is provided and the steps that are necessary to (...)
  • Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Catrin Misselhorn - 2020 - Artificial Intelligence 278 (C):103179.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This gave rise to the development of artificial morality, an emerging field in artificial intelligence which explores whether and how artificial systems can be furnished with moral capacities. This will have a deep impact on our lives. Yet, the methodological foundations of artificial morality are still sketchy and often far off from possible applications. One important area of application of artificial (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Who Should Decide How Machines Make Morally Laden Decisions? Dominic Martin - 2017 - Science and Engineering Ethics 23 (4):951-967.
    Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, (...)
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence. Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Should criminal law protect love relation with robots? Kamil Mamak - forthcoming - AI and Society:1-10.
    Whether or not we call a love-like relationship with robots true love, some people may feel and claim that, for them, it is a sufficient substitute for a love relationship. The love relationship between humans has a special place in our social life. On the grounds of both morality and law, our significant other can expect special treatment. It is understandable that, precisely because of this kind of relationship, we save our significant other instead of others or will not testify against (...)
  • Humans, Neanderthals, robots and rights. Kamil Mamak - 2022 - Ethics and Information Technology 24 (3):1-9.
    Robots are becoming more visible parts of our life, a situation which prompts questions about their place in our society. One group of issues that is widely discussed is connected with robots’ moral and legal status as well as their potential rights. The question of granting robots rights is polarizing. Some positions accept the possibility of granting them human rights whereas others reject the notion that robots can be considered potential rights holders. In this paper, I claim that robots will (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots. Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • Computationally rational agents can be moral agents. Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • Open problems in the philosophy of information. Luciano Floridi - 2004 - Metaphilosophy 35 (4):554-582.
    The philosophy of information (PI) is a new area of research with its own field of investigation and methodology. This article, based on the Herbert A. Simon Lecture of Computing and Philosophy I gave at Carnegie Mellon University in 2001, analyses the eighteen principal open problems in PI. Section 1 introduces the analysis by outlining Herbert Simon's approach to PI. Section 2 discusses some methodological considerations about what counts as a good philosophical problem. The discussion centers on Hilbert's famous analysis (...)
  • Moral dilemmas in self-driving cars. Chiara Lucifora, Giorgio Mario Grasso, Pietro Perconti & Alessio Plebe - 2020 - Rivista Internazionale di Filosofia e Psicologia 11 (2):238-250.
    Autonomous driving systems promise important changes for the future of transport, primarily through the reduction of road accidents. However, ethical concerns, and in particular two central issues, will be key to their successful development. First, situations of risk that involve inevitable harm to passengers and/or bystanders, in which some individuals must be sacrificed for the benefit of others. Second, the identification of responsible parties and liabilities in the event of an accident. Our work addresses the first of these ethical problems. We are (...)
  • Transparency as design publicity: explaining and justifying inscrutable algorithms. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2020 - Ethics and Information Technology 23 (3):253-263.
    In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency, that consists in (...)
  • The Future Impact of Artificial Intelligence on Humans and Human Rights. Steven Livingston & Mathias Risse - 2019 - Ethics and International Affairs 33 (2):141-158.
  • Problems with “Friendly AI”. Oliver Li - 2021 - Ethics and Information Technology 23 (3):543-550.
    On virtue ethical grounds, Barbro Fröding and Martin Peterson recently recommended that near-future AIs should be developed as ‘Friendly AI’. AI in social interaction with humans should be programmed such that they mimic aspects of human friendship. While it is a reasonable goal to implement AI systems interacting with humans as Friendly AI, I identify four issues that need to be addressed concerning Friendly AI with Fröding’s and Peterson’s understanding of Friendly AI as a starting point. In a first step, (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective. Francesca Lagioia & Giovanni Sartor - 2020 - Philosophy and Technology 33 (3):433-465.
    Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper includes three different contributions. The first contribution is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, possessing the cognitive capacities needed for responsibility. The second contribution is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an online (...)
  • Disagreements Over Analogies. Oliver Laas - 2017 - Metaphilosophy 48 (1-2):153-182.
    This essay presents a dialogical framework for treating philosophical disagreements as persuasion dialogues with analogical argumentation, with the aim of recasting philosophical disputes as disagreements over analogies. This has two benefits: it allows us to temporarily bypass conflicting metaphysical intuitions by focusing on paradigmatic examples, similarities, and the plausibility of conclusions for or against a given point of view; and it can reveal new avenues of argumentation regarding a given issue. This approach to philosophical disagreements is illustrated by studying the (...)
  • On the moral permissibility of robot apologies. Makoto Kureha - forthcoming - AI and Society:1-11.
    Robots that incorporate the function of apologizing have emerged in recent years. This paper examines the moral permissibility of making robots apologize. First, I characterize the nature of apology based on analyses conducted in multiple scholarly domains. Next, I present a prima facie argument that robot apologies are not permissible because they may harm human societies by inducing the misattribution of responsibility. Subsequently, I respond to a possible response to the prima facie objection based on the interpretation that attributing responsibility (...)
  • Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots. Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that are expected to interact with, respect, and embody human ethical values. However, the conceptual and practical problems of building such systems have not yet been resolved, posing a significant challenge for computational modeling. It seems that the lack of success in constructing such robots, ceteris paribus, is due to the conceptual and algorithmic limitations of the current design of ethical robots. This paper proposes a new approach for developing ethical capacities in (...)
  • Who Gets to Choose? On the Socio-algorithmic Construction of Choice. Dan M. Kotliar - 2021 - Science, Technology, and Human Values 46 (2):346-375.
    This article deals with choice-inducing algorithms––algorithms that are explicitly designed to affect people’s choices. Based on an ethnographic account of three Israeli data analytics companies, I explore how algorithms are being designed to drive people into choice-making and examine their co-constitution by an assemblage of specifically positioned human and nonhuman agents. I show that the functioning, logic, and even ethics of choice-inducing algorithms are deeply influenced by the epistemologies, meaning systems, and practices of the individuals who devise and use them (...)
  • Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions. Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently (...)
  • Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • AI, agency and responsibility: the VW fraud case and beyond. Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647.
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as (...)
  • Can we wrong a robot? Nancy S. Jecker - 2023 - AI and Society 38 (1):259-268.
    With the development of increasingly sophisticated sociable robots, robot-human relationships are being transformed. Not only can sociable robots furnish emotional support and companionship for humans, humans can also form relationships with robots that they value highly. It is natural to ask: do robots that stand in close relationships with us have any moral standing over and above their purely instrumental value as means to human ends? We might ask our question this way, ‘Are there ways we can act towards robots (...)
  • Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Decentered ethics in the machine era and guidance for AI regulation.Christian Hugo Hoffmann & Benjamin Hahn - 2020 - AI and Society 35 (3):635-644.
    Recent advancements in AI have prompted a large number of AI ethics guidelines published by governments and nonprofits. While many of these papers propose concrete or seemingly applicable ideas, few philosophically sound proposals are made. In particular, we observe that the line of questioning has often not been examined critically and that underlying conceptual problems have not always been dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be by first (...)
    Bookmark   6 citations  
  • Responsible AI Through Conceptual Engineering.Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
    Bookmark   8 citations  
  • Who Needs Stories if You Can Get the Data? ISPs in the Era of Big Number Crunching.Mireille Hildebrandt - 2011 - Philosophy and Technology 24 (4):371-390.
    Special issue article. Published in Philosophy & Technology, Volume 24, Number 4, pages 371–390. DOI: 10.1007/s13347-011-0041-8. Author affiliation: Institute of Computer and Information Sciences (ICIS), Radboud University Nijmegen, the Netherlands.
    Bookmark   7 citations  
  • Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and their realization is an unlikely prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    Bookmark   15 citations  
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use.Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
    Bookmark   1 citation  
  • Object‐Oriented Ontology and the Other of We in Anthropocentric Posthumanism.Yogi Hale Hendlin - 2023 - Zygon 58 (2):315-339.
    The object-oriented ontology group of philosophies, and certain strands of posthumanism, overlook important ethical and biological differences, which make a difference. These allied intellectual movements, which have at times found broad popular appeal, attempt to weird life as a rebellion to the forced melting of lifeforms through the artefacts of capitalist realism. They truck, however, in a recursive solipsism resulting in ontological flattening, overlooking that things only show up to us according to our attunement to them. Ecology and biology tend (...)
    Bookmark   2 citations