References
  • Responsible AI Through Conceptual Engineering.Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Who Needs Stories if You Can Get the Data? ISPs in the Era of Big Number Crunching.Mireille Hildebrandt - 2011 - Philosophy and Technology 24 (4):371-390.
    DOI: 10.1007/s13347-011-0041-8.
  • Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use.Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical (...)
  • Object‐Oriented Ontology and the Other of We in Anthropocentric Posthumanism.Yogi Hale Hendlin - 2023 - Zygon 58 (2):315-339.
    The object-oriented ontology group of philosophies, and certain strands of posthumanism, overlook important ethical and biological differences, which make a difference. These allied intellectual movements, which have at times found broad popular appeal, attempt to weird life as a rebellion to the forced melting of lifeforms through the artefacts of capitalist realism. They truck, however, in a recursive solipsism resulting in ontological flattening, overlooking that things only show up to us according to our attunement to them. Ecology and biology tend (...)
  • Distributed cognition and distributed morality: Agency, artifacts and systems.Richard Heersmink - 2017 - Science and Engineering Ethics 23 (2):431-448.
    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. (...)
  • The Three Pillars of Autonomous Weapon Systems. Steven Umbrello (2022). Designed for Death: Controlling Killer Robots. Budapest: Trivent Publishing. [REVIEW]Stephen Harwood - 2023 - Journal of Responsible Technology 14 (C):100062.
  • Beyond the skin bag: On the moral responsibility of extended agencies.F. Allan Hanson - 2009 - Ethics and Information Technology 11 (1):91-99.
    The growing prominence of computers in contemporary life, often seemingly with minds of their own, invites rethinking the question of moral responsibility. If the moral responsibility for an act lies with the subject that carried it out, it follows that different concepts of the subject generate different views of moral responsibility. Some recent theorists have argued that actions are produced by composite, fluid subjects understood as extended agencies (cyborgs, actor networks). This view of the subject contrasts with methodological individualism: the (...)
  • The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW]F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • Ethical Reflections on Artificial Intelligence.Brian Patrick Green - 2018 - Scientia et Fides 6 (2):9-31.
    Artificial Intelligence technology presents a multitude of ethical concerns, many of which are being actively considered by organizations ranging from small groups in civil society to large corporations and governments. However, it also presents ethical concerns which are not being actively considered. This paper presents a broad overview of twelve topics in ethics in AI, including function, transparency, evil use, good use, bias, unemployment, socio-economic inequality, moral automation and human de-skilling, robot consciousness and rights, dependency, social-psychological effects, and spiritual effects. (...)
  • On Corporate Virtue.Aditi Gowri - 2007 - Journal of Business Ethics 70 (4):391-400.
    This paper considers the question of virtues appropriate to a corporate actor's moral character. A model of corporate appetites is developed by analogy with animal appetites; and the pursuit of initially virtuous corporate tendencies to an extreme degree is shown to be morally perilous. The author thus refutes a previous argument which suggested that (1) corporate virtues, unlike human virtues, need not be located on an Aristotelian mean between opposite undesirable extremes because (2) corporations do not have appetites; and (3) (...)
  • What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Review of Artificial Intelligence: Reflections in Philosophy, Theology and the Social Sciences by Benedikt P. Göcke and Astrid Rosenthal-von der Pütten. [REVIEW]John-Stewart Gordon - 2021 - AI and Society 36 (2):655-659.
  • Moral Status and Intelligent Robots.John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
  • Building Moral Robots: Ethical Pitfalls and Challenges.John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Moral control and ownership in AI systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
  • In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  • Lethal Autonomous Weapon Systems and Responsibility Gaps.Anne Gerdes - 2018 - Philosophy Study 8 (5).
  • IT-ethical issues in sci-fi film within the timeline of the Ethicomp conference series.Anne Gerdes - 2015 - Journal of Information, Communication and Ethics in Society 13 (3/4):314-325.
    Purpose: This paper aims to explore human-technology relations through the lens of sci-fi movies within the life cycle of the ETHICOMP conference series. Here, different perspectives on artificial intelligent agents, primarily in the shape of robots, but also including other kinds of intelligent systems, are explored. Hence, IT-ethical issues related to humans' interactions with social robots and artificial intelligent agents are illustrated with reference to: Alex Proyas’ I, Robot; James Cameron’s Terminator; and the Wachowski brothers’ Matrix. All three movies (...)
  • Modelling ethical rules of lying with answer set programming.Jean-Gabriel Ganascia - 2007 - Ethics and Information Technology 9 (1):39-47.
    There has been considerable discussion in the past about the assumptions and basis of different ethical rules. For instance, it is commonplace to say that ethical rules are default rules, which means that they tolerate exceptions. Some authors argue that morality can only be grounded in particular cases while others defend the existence of general principles related to ethical rules. Our purpose here is not to justify either position, but to try to model general ethical rules with artificial intelligence formalisms (...)
  • Robots as moral environments.Tomislav Furlanis, Takayuki Kanda & Dražen Brščić - forthcoming - AI and Society:1-19.
    In this philosophical exploration, we investigate the concept of robotic moral environment interaction. The common view understands moral interaction to occur between agents endowed with ethical and interactive capacities. However, recent developments in moral philosophy argue that moral interaction also occurs in relation to the environment. Here conditions and situations of the environment contribute to human moral cognition and the formation of our moral experiences. Based on this philosophical position, we imagine robots interacting as moral environments—a novel conceptualization of human–robot (...)
  • Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) (...)
  • Towards the Epistemology of the Internet of Things: Techno-Epistemology and Ethical Considerations Through the Prism of Trust.Ori Freiman - 2014 - International Review of Information Ethics 22:6-22.
    This paper discusses the epistemology of the Internet of Things [IoT] by focusing on the topic of trust. It presents various frameworks of trust, and argues that the ethical framework of trust is what constitutes our responsibility to reveal desired norms and standards and embed them in other frameworks of trust. The first section briefly presents the IoT and scrutinizes the scarce philosophical work that has been done on this subject so far. The second section suggests that the field of (...)
  • Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?Lily Frank & Sven Nyholm - 2017 - Artificial Intelligence and Law 25 (3):305-323.
    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three (...)
  • Argumentation-Based Logic for Ethical Decision Making.Panayiotis Frangos, Petros Stefaneas & Sofia Almpani - 2022 - Studia Humana 11 (3-4):46-52.
    As automation in artificial intelligence is increasing, we will need to automate a growing amount of ethical decision making. However, ethical decision-making raises novel challenges for engineers, ethicists and policymakers, who will have to explore new ways to realize this task. The presented work focuses on the development and formalization of models that aim at ensuring a correct ethical behaviour of artificial intelligent agents, in a provable way, extending and implementing a logic-based proving calculus that is based on argumentation (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • What the near future of artificial intelligence could be.Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather (...)
  • The method of levels of abstraction.Luciano Floridi - 2008 - Minds and Machines 18 (3):303-329.
    The use of “levels of abstraction” in philosophical analysis (levelism) has recently come under attack. In this paper, I argue that a refined version of epistemological levelism should be retained as a fundamental method, called the method of levels of abstraction. After a brief introduction, in section “Some Definitions and Preliminary Examples” the nature and applicability of the epistemological method of levels of abstraction is clarified. In section “A Classic Application of the Method of Abstraction”, the philosophical fruitfulness of the new (...)
  • The ontological interpretation of informational privacy.Luciano Floridi - 2005 - Ethics and Information Technology 7 (4):185-200.
    The paper outlines a new interpretation of informational privacy and of its moral value. The main theses defended are: (a) informational privacy is a function of the ontological friction in the infosphere, that is, of the forces that oppose the information flow within the space of information; (b) digital ICTs (information and communication technologies) affect the ontological friction by changing the nature of the infosphere (re-ontologization); (c) digital ICTs can therefore both decrease and protect informational privacy but, most importantly, they (...)
  • Network ethics: information and business ethics in a networked society.Luciano Floridi - 2009 - Journal of Business Ethics 90 (S4):649-659.
    This article brings together two research fields in applied ethics, namely information ethics and business ethics, which deal with the ethical impact of information and communication technologies but that, so far, have remained largely independent. Its goal is to articulate and defend an informational approach to the conceptual foundation of business ethics, by using ideas and methods developed in information ethics, in view of the convergence of the two fields in an increasingly networked society.
  • Information ethics, its nature and scope.Luciano Floridi - 2006 - Acm Sigcas Computers and Society 36 (2):21-36.
    In recent years, “Information Ethics” (IE) has come to mean different things to different researchers working in a variety of disciplines, including computer ethics, business ethics, medical ethics, computer science, the philosophy of information, social epistemology and library and information science. Using an ontocentric approach, this paper seeks to define the parameters of IE and thereby increase our understanding of the moral challenges associated with Information Communication Technologies.
  • Four challenges for a theory of informational privacy.Luciano Floridi - 2006 - Ethics and Information Technology 8 (3):109-119.
    In this article, I summarise the ontological theory of informational privacy (an approach based on information ethics) and then discuss four types of interesting challenges confronting any theory of informational privacy: (1) parochial ontologies and non-Western approaches to informational privacy; (2) individualism and the anthropology of informational privacy; (3) the scope and limits of informational privacy; and (4) public, passive and active informational privacy. I argue that the ontological theory of informational privacy can cope with such challenges fairly successfully. In (...)
  • Distributed morality in an information society.Luciano Floridi - 2013 - Science and Engineering Ethics 19 (3):727-743.
    The phenomenon of distributed knowledge is well-known in epistemic logic. In this paper, a similar phenomenon in ethics, somewhat neglected so far, is investigated, namely distributed morality. The article explains the nature of distributed morality, as a feature of moral agency, and explores the implications of its occurrence in advanced information societies. In the course of the analysis, the concept of infraethics is introduced, in order to refer to the ensemble of moral enablers, which, although morally neutral per se, can (...)
  • Consciousness, agents and the knowledge game.Luciano Floridi - 2005 - Minds and Machines 15 (3):415-444.
    This paper has three goals. The first is to introduce the “knowledge game”, a new, simple and yet powerful tool for analysing some intriguing philosophical questions. The second is to apply the knowledge game as an informative test to discriminate between conscious (human) and conscious-less agents (zombies and robots), depending on which version of the game they can win. And the third is to use a version of the knowledge game to provide an answer to Dretske’s question “how do you (...)
  • Artificial intelligence's new frontier: Artificial companions and the fourth revolution.Luciano Floridi - 2008 - Metaphilosophy 39 (4-5):651-655.
    In this article I argue that the best way to understand the information turn is in terms of a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe. We are not immobile, at the centre of the universe (Copernicus); we are not unnaturally distinct and different from the rest of the animal world (Darwin); and we are far from being entirely transparent to ourselves (Freud). We are now slowly accepting the idea that (...)
  • AI as Agency Without Intelligence: on ChatGPT, Large Language Models, and Other Generative Models.Luciano Floridi - 2023 - Philosophy and Technology 36 (1):1-7.
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions.Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • Animals and Technoscientific Developments: Getting Out of Invisibility.Arianna Ferrari - 2015 - NanoEthics 9 (1):5-10.
    Animals and Technoscience. The essays in the section “Animals in technoscientific developments” have been collected from the submissions to the 3rd European Conference of Critical Animal Studies that I organized in Karlsruhe on 28–30 November 2013. The aim of the conference was to stimulate critical scholars to engage on the multifaceted relationships between animals and technosciences, an under-researched topic. Technoscience has become an important concept in the current debate on the epistemic and normative changes taking place in how scientific and technological research (...)
  • Algorithmic memory and the right to be forgotten on the web.Elena Esposito - 2017 - Big Data and Society 4 (1).
    The debate on the right to be forgotten on Google involves the relationship between human information processing and digital processing by algorithms. The specificity of digital memory is not so much its often discussed inability to forget. What distinguishes digital memory is, instead, its ability to process information without understanding. Algorithms only work with data without remembering or forgetting. Merely calculating, algorithms manage to produce significant results not because they operate in an intelligent way, but because they “parasitically” exploit the (...)
  • Mathematics, ethics and purism: an application of MacIntyre’s virtue theory.Paul Ernest - 2020 - Synthese 199 (1-2):3137-3167.
    A traditional problem of ethics in mathematics is the denial of social responsibility. Pure mathematics is viewed as neutral and value free, and therefore free of ethical responsibility. Applications of mathematics are seen as employing a neutral set of tools which, of themselves, are free from social responsibility. However, mathematicians are convinced they know what constitutes good mathematics. Furthermore many pure mathematicians are committed to purism, the ideology that values purity above applications in mathematics, and some historical reasons for this (...)
  • The Actionless Agent: An Account of Human-CAI Relationships.Charles E. Binkley & Bryan Pilkington - 2023 - American Journal of Bioethics 23 (5):25-27.
    We applaud Sedlakova and Trachsel’s work and their description of conversational artificial intelligence (CAI) as possessing a hybrid nature with features of both a tool and an agent (Sedlakova and...
  • What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents.Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypothesis; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation is (...)
  • How to cross boundaries in the information society: vulnerability, responsiveness, and accountability.Massimo Durante - 2013 - Acm Sigcas Computers and Society 43 (1):9-21.
    The paper examines how the current evolution and growth of ICTs enables a greater number of individuals to communicate and interact with each other on a larger scale: this phenomenon enables people to cross the conventional boundaries set up across modernity. The presence of diverse barriers does not however disappear, and we therefore still experience cultural, political, legal and moral boundaries in the globalised Information Society. The paper suggests that the issue of boundaries is to be understood, primarily, in philosophical (...)
  • A guide to the Floridi keys: Luciano Floridi: The philosophy of information. Oxford: Oxford University Press, 2011, xx+405pp, £37.50 HB.J. Michael Dunn - 2012 - Metascience 22 (1):93-98.
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm.Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge on human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • Moral Mechanisms.David Davenport - 2014 - Philosophy and Technology 27 (1):47-60.
    As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if only humans (...)
  • Imagining a non-biological machine as a legal person.David J. Calverley - 2008 - AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  • The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction.Laura Crompton - 2021 - Journal of Responsible Technology 7:100013.
    AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps within both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise (...)