  • Thinking with things: An embodied enactive account of mind–technology interaction. Anco Peeters - 2019 - Dissertation, University of Wollongong
    Technological artefacts have, in recent years, invited increasingly intimate ways of interaction. But surprisingly little attention has been devoted to how such interactions, such as those with wearable devices or household robots, shape our minds, cognitive capacities, and moral character. In this thesis, I develop an embodied, enactive account of mind–technology interaction that takes the reciprocal influence of artefacts on minds seriously. First, I examine how recent developments in philosophy of technology can inform the phenomenology of mind–technology interaction as seen through an (...)
  • Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is beset by conflict and confusion rather than fruitful synergies. The aim of this paper is to explore ways to (...)
  • Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to do (...)
  • Mapping Value Sensitive Design onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283–296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2).
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • Thinking unwise: a relational u-turn. Nicholas Barrow - 2023 - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  • Human, Technology and Architecture: The Change of AI-Robot Technology and the Industry of Architectural Service. 변순용 - 2017 - Environmental Philosophy 24:77-93.
  • A metaphysical account of agency for technology governance. Sadjad Soltanzadeh - forthcoming - AI and Society:1-12.
    The way in which agency is conceptualised has implications for understanding human–machine interactions and the governance of technology, especially artificial intelligence (AI) systems. Traditionally, agency is conceptualised as a capacity, defined by intrinsic properties, such as cognitive or volitional facilities. I argue that the capacity-based account of agency is inadequate to explain the dynamics of human–machine interactions and guide technology governance. Instead, I propose to conceptualise agency as impact. Agents as impactful entities can be identified at different levels: from the (...)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm. Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge to human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • Moral Mechanisms. David Davenport - 2014 - Philosophy and Technology 27 (1):47-60.
    As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if only humans (...)
  • Imagining a non-biological machine as a legal person. David J. Calverley - 2008 - AI and Society 22 (4):523-537.
    As non-biological machines come to be designed in ways which exhibit characteristics comparable to human mental states, the manner in which the law treats these entities will become increasingly important both to designers and to society at large. The direct question will become whether, given certain attributes, a non-biological machine could ever be viewed as a legal person. In order to begin to understand the ramifications of this question, this paper starts by exploring the distinction between the related concepts of (...)
  • The decision-point-dilemma: Yet another problem of responsibility in human-AI interaction. Laura Crompton - 2021 - Journal of Responsible Technology 7:100013.
    AI as decision support supposedly helps human agents make ‘better’ decisions more efficiently. However, research shows that it can, sometimes greatly, influence the decisions of its human users. While there has been a fair amount of research on intended AI influence, there seem to be great gaps within both theoretical and practical studies concerning unintended AI influence. In this paper I aim to address some of these gaps, and hope to shed some light on the ethical and moral concerns that arise (...)
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • A principle-based approach to AI: the case for European Union and Italy. Francesco Corea, Fabio Fossa, Andrea Loreggia, Stefano Quintarelli & Salvatore Sapienza - 2023 - AI and Society 38 (2):521-535.
    As Artificial Intelligence (AI) becomes increasingly pervasive in our everyday life, new questions arise about its ethical and social impacts. Such issues concern all stakeholders involved in or committed to the design, implementation, deployment, and use of the technology. The present document addresses these concerns by introducing and discussing a set of practical obligations and recommendations for the development of applications and systems based on AI techniques. With this work we hope to contribute to spreading awareness on the (...)
  • Automatic decision-making and reliability in robotic systems: some implications in the case of robot weapons. Roberto Cordeschi - 2013 - AI and Society 28 (4):431-441.
    In this article, I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology that has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have been the object of increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with these issues, (...)
  • Bots, Social Capital, and the Need for Civility. Miles C. Coleman - 2018 - Journal of Media Ethics 33 (3):120-132.
    Politicians, hate groups, counterpublics of science, and even socially-minded critics use bots to pad their numbers, spread information, and engage in social critique. This article pursues the ethics of bots beyond the automated or not question that dominates the literature and offers the concept of bot civility. Machinic and social bot strategies are discussed with regard for the manufacture of social capital—bot incivility. The analysis suggests that bots, which do not trick persons into thinking they are human, are not necessarily (...)
  • Why Care About Robots? Empathy, Moral Standing, and the Language of Suffering. Mark Coeckelbergh - 2018 - Kairos 20 (1):141-158.
    This paper tries to understand the phenomenon that humans are able to empathize with robots and the intuition that there might be something wrong with “abusing” robots by discussing the question regarding the moral standing of robots. After a review of some relevant work in empirical psychology and a discussion of the ethics of empathizing with robots, a philosophical argument concerning the moral standing of robots is made that questions distant and uncritical moral reasoning about entities’ properties and that recommends (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW] Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • The Moral Standing of Machines: Towards a Relational and Non-Cartesian Moral Hermeneutics. Mark Coeckelbergh - 2014 - Philosophy and Technology 27 (1):61-77.
    Should we give moral standing to machines? In this paper, I explore the implications of a relational approach to moral standing for thinking about machines, in particular autonomous, intelligent robots. I show how my version of this approach, which focuses on moral relations and on the conditions of possibility of moral status ascription, provides a way to take critical distance from what I call the “standard” approach to thinking about moral status and moral standing, which is based on properties. It (...)
  • Should We Treat Teddy Bear 2.0 as a Kantian Dog? Four Arguments for the Indirect Moral Standing of Personal Social Robots, with Implications for Thinking About Animals and Humans. [REVIEW] Mark Coeckelbergh - 2020 - Minds and Machines 31 (3):337-360.
    The use of autonomous and intelligent personal social robots raises questions concerning their moral standing. Moving away from the discussion about direct moral standing and exploring the normative implications of a relational approach to moral standing, this paper offers four arguments that justify giving indirect moral standing to robots under specific conditions based on some of the ways humans—as social, feeling, playing, and doubting beings—relate to them. The analogy of “the Kantian dog” is used to assist reasoning about this. The (...)
  • Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • Distributive justice and co-operation in a world of humans and non-humans: A contractarian argument for drawing non-humans into the sphere of justice. Mark Coeckelbergh - 2009 - Res Publica 15 (1):67-84.
    Various arguments have been provided for drawing non-humans such as animals and artificial agents into the sphere of moral consideration. In this paper, I argue for a shift from an ontological to a social-philosophical approach: instead of asking what an entity is, we should try to conceptually grasp the quasi-social dimension of relations between non-humans and humans. This allows me to reconsider the problem of justice, in particular distributive justice. Engaging with the work of Rawls, I show that an (...)
  • Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
    Can we trust robots? Responding to the literature on trust and e-trust, this paper asks if the question of trust is applicable to robots, discusses different approaches to trust, and analyses some preconditions for trust. In the course of the paper a phenomenological-social approach to trust is articulated, which provides a way of thinking about trust that puts less emphasis on individual choice and control than the contractarian-individualist approach. In addition, the argument is made that while robots are neither human (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • The agency of the forum: Mechanisms for algorithmic accountability through the lens of agency. Florian Cech - 2021 - Journal of Responsible Technology 7:100015.
  • Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. [REVIEW] Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-faceted notion which is hard to define comprehensively. However, the (...)
  • Towards an ontological foundation of information ethics. Rafael Capurro - 2006 - Ethics and Information Technology 8 (4):175-186.
    The paper presents, firstly, a brief review of the long history of information ethics beginning with the Greek concept of parrhesia or freedom of speech as analyzed by Michel Foucault. The recent concept of information ethics is related particularly to problems which arose in the last century with the development of computer technology and the internet. A broader concept of information ethics as dealing with the digital reconstruction of all possible phenomena leads to questions relating to digital ontology. Following Heidegger’s conception of the relation between ontology and metaphysics, (...)
  • Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
  • On Floridi’s metaphysical foundation of information ecology. Rafael Capurro - 2008 - Ethics and Information Technology 10 (2-3):167-173.
    The paper presents a critical appraisal of Floridi’s metaphysical foundation of information ecology. It highlights some of the issues raised by Floridi with regard to the axiological status of the objects in the “infosphere,” the moral status of artificial agents, and Floridi’s foundation of information ethics as information ecology. I further criticise the ontological conception of value as a first order category. I suggest that a weakening of Floridi’s demiurgic information ecology is needed in order not to forget the limitations (...)
  • Enculturating Algorithms. Rafael Capurro - 2019 - NanoEthics 13 (2):131-137.
    The paper deals with the difference between who and what we are in order to take an ethical perspective on algorithms and their regulation. The present casting of ourselves as homo digitalis implies the possibility of projecting who we are as social beings sharing a world, into the digital medium, thereby engendering what can be called digital whoness, or a digital reification of ourselves. A main ethical challenge for the evolving digital age consists in unveiling this ethical difference, particularly when (...)
  • Hiperética artificial: crítica a la colonización algorítmica de lo moral. Patrici Calvo - 2022 - Revista de Filosofía (Madrid):1-21.
    This study aims to reflect critically on the possibility of a datafied, hyperconnected, and algorithmized approach to the clarification, grounding, and application of morality: artificial hyperethics. To this end, ethics is presented as a practical form of knowledge which, concerned with the rationalization of free behaviour, has found in the dialogue among those affected the criterion of morality from which both knowledge and behaviour can be criticized. The study then delves into ethification, the attempt to establish processes of transformation of (...)
  • Floridi’s Fourth Revolution and the Demise of Ethics. Michael Byron - 2010 - Knowledge, Technology & Policy 23 (1-2):135-147.
    Luciano Floridi has proposed that we are on the cusp of a fourth revolution in human self-understanding. The information revolution with its prospect of digitally enhancing human beings opens the door to engineering human nature. Floridi has emphasized the importance of making this transition as ethically smooth as possible. He is quite right to worry about ethics after the fourth revolution. The coming revolution, if it unfolds as he envisions, spells the demise of traditional ethical theorizing.
  • Philosophy in the information age. Terrell Ward Bynum - 2010 - Metaphilosophy 41 (3):420-442.
    In the past, major scientific and technological revolutions, like the Copernican Revolution and the Industrial Revolution, have had profound effects, not only upon society in general, but also upon Philosophy. Today's Information Revolution is no exception. Already it has had significant impacts upon our understanding of human nature, the nature of society, even the nature of the universe. Given these developments, this essay considers some of the philosophical contributions of two "philosophers of the Information Age"—Norbert Wiener and Luciano Floridi—with (...)
  • Flourishing ethics. Terrell Ward Bynum - 2006 - Ethics and Information Technology 8 (4):157-173.
    This essay describes a new ethical theory that has begun to coalesce from the works of several scholars in the international computer ethics community. I call the new theory ‘Flourishing Ethics’ because of its Aristotelian roots, though it also includes ideas suggestive of Taoism and Buddhism. In spite of its roots in ancient ethical theories, Flourishing Ethics is informed and grounded by recent scientific insights into the nature of living things, human nature and the fundamental nature of the universe – (...)
  • Toward an Epistemology of ISP Secondary Liability. Dan L. Burk - 2011 - Philosophy and Technology 24 (4):437-454.
    At common law, contributory infringement for copyright infringement requires "knowledge" of the infringing activity by a direct infringer before secondary liability can attach. In the USA, the "safe harbor" provisions of the Digital Millennium Copyright Act, which shield Internet Service Providers from secondary copyright liability, are concomitantly available only to ISPs that lack the common law knowledge prerequisites for such liability. But this leads to the question of when a juridical corporate entity can be said to have "knowledge" under the (...)
  • An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
  • Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to other non-human entities. (...)
  • Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. Kieran M. Brayford - forthcoming - AI and Society:1-9.
    Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry in warzones seems likely, given the operational and ethical advantages such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger by calling into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological (...)
  • Value Sensitive Design for autonomous weapon systems – a primer. Christine Boshuijzen-van Burken - 2023 - Ethics and Information Technology 25 (1):1-14.
    Value Sensitive Design (VSD) is a design methodology developed by Batya Friedman and Peter Kahn (2003) that brings moral deliberation into an early stage of a design process. It assumes that technology is not value neutral, nor is its value-ladenness confined solely to how it is used. This paper adds to emerging literature on VSD for autonomous weapons systems development and discusses extant literature on values in autonomous systems development in general and in autonomous weapons development in particular. I identify (...)
  • Introduction: Digital Technologies and Human Decision-Making. Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):793-797.
  • Artificial Intelligence and Autonomy: On the Ethical Dimension of Recommender Systems. Sofia Bonicalzi, Mario De Caro & Benedetta Giovanola - 2023 - Topoi 42 (3):819-832.
    Feasting on a plethora of social media platforms, news aggregators, and online marketplaces, recommender systems (RSs) are spreading pervasively throughout our daily online activities. Over the years, a host of ethical issues have been associated with the diffusion of RSs and the tracking and monitoring of users’ data. Here, we focus on the impact RSs may have on personal autonomy as the most elusive among the often-cited sources of grievance and public outcry. On the grounds of a philosophically nuanced notion (...)