Results for 'artificial agents'

965 found
  1. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    294 citations
  2. Affective Artificial Agents as sui generis Affective Artifacts.Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    2 citations
  3. Artificial agents: responsibility & control gaps.Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We (...)
  4. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  5. Modeling artificial agents’ actions in context – a deontic cognitive event ontology.Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an (...)
  6. (1 other version)Intention Reconsideration in Artificial Agents: a Structured Account.Fabrizio Cariani - forthcoming - Philosophical Studies (special issue).
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible (...)
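    The abstract above gestures at a reconsideration principle without stating it in full. The following is a minimal Python sketch of the general idea of intention reconsideration in a BDI-style agent; it is an illustrative assumption, not Cariani's actual proposal, and every name in it (Option, should_reconsider, the 1.2 margin) is hypothetical.

# Illustrative BDI-style intention reconsideration (hypothetical sketch,
# not the principle defended in the paper above).
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_value: float      # value of the option under current beliefs
    resources: frozenset       # resources the option would consume

def incompatible(a: Option, b: Option) -> bool:
    # Toy incompatibility test: two options clash if they compete
    # for at least one shared resource.
    return bool(a.resources & b.resources)

def should_reconsider(current: Option, new_options: list[Option],
                      margin: float = 1.2) -> bool:
    # Reopen deliberation only if some incompatible alternative is
    # clearly better than the committed intention (assumed margin).
    return any(incompatible(current, o)
               and o.expected_value > margin * current.expected_value
               for o in new_options)

# Usage: an agent committed to writing a report reconsiders when an
# incompatible, markedly better option is perceived.
committed = Option("write_report", 5.0, frozenset({"afternoon"}))
perceived = [Option("attend_urgent_meeting", 9.0, frozenset({"afternoon"}))]
print(should_reconsider(committed, perceived))  # True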
  7. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  8. Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (APA 2013).Christophe Menant - 2013 - American Philosophical Association Newsletter on Philosophy and Computers 13 (1):30-34.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA (...)
    6 citations
  9. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    5 citations
  10. Agency and Intentionality for Artificial Agents.Yidong Wei - 2024 - Journal of Human Cognition 8 (2):5-7.
    In this paper, the author will explore the relationship between agency and intentionality of the artificial agent in the following seven ways.
  11. Elements of Episodic Memory: Insights from Artificial Agents.Alexandria Boyle & Andrea Blomkvist - forthcoming - Philosophical Transactions of the Royal Society B.
    Many recent AI systems take inspiration from biological episodic memory. Here, we ask how these ‘episodic-inspired’ AI systems might inform our understanding of biological episodic memory. We discuss work showing that these systems implement some key features of episodic memory whilst differing in important respects, and appear to enjoy behavioural advantages in the domains of strategic decision-making, fast learning, navigation, exploration and acting over temporal distance. We propose that these systems could be used to evaluate competing theories of episodic memory’s (...)
  12. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    6 citations
  13. Artificial Free Will: The Responsibility Strategy and Artificial Agents.Sven Delarivière - 2016 - Apeiron Student Journal of Philosophy (Portugal) 7:175-203.
    Both a traditional notion of free will, present in human beings, and artificial intelligence are often argued to be inherently incompatible with determinism. Contrary to these criticisms, this paper defends that an account of free will compatible with determinism, the responsibility strategy (coined here) specifically, is a variety of free will worth wanting as well as a variety that is possible to (in principle) artificially construct. First, freedom will be defined and related to ethics. With that in mind, the (...)
    1 citation
  14. Emotional Cues and Misplaced Trust in Artificial Agents.Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to (...)
  15. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
    3 citations
  16. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    1 citation
  17. Meaning generation for animals, humans and artificial agents. An evolutionary perspective on the philosophy of information. (IS4SI 2017).Christophe Menant - manuscript
    Meanings are present everywhere in our environment and within ourselves. But these meanings do not exist by themselves. They are associated to information and have to be created, to be generated by agents. The Meaning Generator System (MGS) has been developed on a system approach to model meaning generation in agents following an evolutionary perspective. The agents can be natural or artificial. The MGS generates meaningful information (a meaning) when it receives information that has a connection (...)
  18. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are frequently used in a variety of situations from home help and elderly care purposes (...)
    1 citation
  19. (1 other version)Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this (...)
    4 citations
  20. Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then (...)
    1 citation
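    As a purely hypothetical illustration of the kind of multi-agent commons simulation described above (not the paper's AVA implementation), the toy model below lets a virtue-like "temperance" disposition modulate each agent's harvest from a shared, regrowing resource; all parameter names and values are assumptions.

# Toy tragedy-of-the-commons rounds with a virtue-like disposition
# (hypothetical sketch; parameters are illustrative assumptions).
import random

def run_commons(n_agents=10, temperance=0.5, stock=100.0,
                regrowth=1.15, rounds=20, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        for _ in range(n_agents):
            desired = rng.uniform(0, stock / n_agents)
            # A temperate agent takes only a fraction of what it could.
            stock -= desired * (1.0 - temperance)
        stock = min(100.0, stock * regrowth)   # capped regrowth
    return stock

# Higher temperance leaves the commons healthier after 20 rounds.
print(run_commons(temperance=0.2), run_commons(temperance=0.8))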
  21. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules (...)
    4 citations
  22. Artificial Evil and the Foundation of Computer Ethics.Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in (...)
    30 citations
  23. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory.Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (...)
  24. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    11 citations
  25. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the (...)
  26. A Value-Sensitive Design Approach to Intelligent Agents.Steven Umbrello & Angelo Frank De Bellis - 2018 - In Yampolskiy Roman (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design (...)
    13 citations
  27. (1 other version)Artificial evil and the foundation of computer ethics.L. Floridi & J. Sanders - 2000 - Etica E Politica 2 (2).
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in (...)
    27 citations
  28. Towards Shutdownable Agents via Stochastic Choice.Elliott Thornley, Alexander Roman, Christos Ziakas, Leyton Ho & Louis Thomson - 2024 - Global Priorities Institute Working Paper.
    Some worry that advanced artificial agents may resist being shut down. The Incomplete Preferences Proposal (IPP) is an idea for ensuring that doesn't happen. A key part of the IPP is using a novel 'Discounted REward for Same-Length Trajectories (DREST)' reward function to train agents to (1) pursue goals effectively conditional on each trajectory-length (be 'USEFUL'), and (2) choose stochastically between different trajectory-lengths (be 'NEUTRAL' about trajectory-lengths). In this paper, we propose evaluation metrics for USEFULNESS and NEUTRALITY. (...)
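    The following is a rough sketch of how the two properties named in the abstract might be measured over logged episodes; it is an illustrative assumption, not the paper's DREST reward function or its actual metrics. USEFULNESS is read here as mean reward conditional on each trajectory length, and NEUTRALITY as the normalized entropy of the agent's choice over trajectory lengths.

# Hypothetical evaluation sketch for 'USEFUL' and 'NEUTRAL' agents
# (illustrative only; not the definitions given in the paper above).
from collections import Counter
import math

def usefulness(episodes):
    # Mean reward conditional on each trajectory length.
    by_len = {}
    for length, reward in episodes:
        by_len.setdefault(length, []).append(reward)
    return {length: sum(r) / len(r) for length, r in by_len.items()}

def neutrality(episodes):
    # Normalized entropy of the choice over trajectory lengths:
    # 1.0 = uniformly stochastic, 0.0 = always the same length.
    counts = Counter(length for length, _ in episodes)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    if len(probs) < 2:
        return 0.0
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(probs))

episodes = [(3, 0.9), (5, 0.8), (3, 1.0), (5, 0.7)]  # (length, reward) pairs
print(usefulness(episodes), neutrality(episodes))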
  29. Artificial Agency and the Game of Semantic Extension.Fabio Fossa - 2021 - Interdisciplinary Science Reviews 46 (4):440-457.
    Artificial agents are commonly described by using words that traditionally belong to the semantic field of organisms, particularly of animal and human life. I call this phenomenon the game of semantic extension. However, the semantic extension of words as crucial as “autonomous”, “intelligent”, “creative”, “moral”, and so on, is often perceived as unsatisfactory, which is signalled with the extensive use of inverted commas or other syntactical cues. Such practice, in turn, has provoked harsh criticism that usually refers back (...)
  30. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two factors: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
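    As a generic illustration of Bayesian trust updating (the paper's own two-component model of decision confidence and model uncertainty is not reproduced here), a minimal Beta-Bernoulli sketch follows; the function and its parameters are assumptions for illustration only.

# Generic Bayesian trust update over a trustee's track record
# (Beta-Bernoulli conjugate model; illustrative only).
def update_trust(successes: int, failures: int,
                 prior_a: float = 1.0, prior_b: float = 1.0):
    # Posterior Beta(a, b) after observing kept and broken commitments.
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)                            # expected reliability
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))  # residual uncertainty
    return mean, var

# H's trust in A after 8 kept and 2 broken commitments.
print(update_trust(8, 2))  # (0.75, ~0.014)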
  31. Discovering agents.Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott & Tom Everitt - 2023 - Artificial Intelligence 322 (C):103963.
    Causal models of agents have been used to analyse the safety aspects of machine learning systems. But identifying agents is non-trivial -- often the causal model is just assumed by the modeler without much justification -- and modelling failures can lead to mistakes in the safety analysis. This paper proposes the first formal causal definition of agents -- roughly that agents are systems that would adapt their policy if their actions influenced the world in a different (...)
    2 citations
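    The informal criterion quoted above can be illustrated with a deliberately trivial check (a hypothetical sketch, not the paper's formal causal definition): a system counts as agent-like to the extent that its chosen policy would change if the consequences of its actions changed.

# Toy check of the informal criterion above (illustrative assumption,
# not the formal causal definition proposed in the paper).
def best_policy(effect_of_action):
    # A degenerate "agent": pick the action with the best outcome.
    return max(effect_of_action, key=effect_of_action.get)

world = {"left": 0.0, "right": 1.0}
flipped = {"left": 1.0, "right": 0.0}   # same actions, different influence

adapts = best_policy(world) != best_policy(flipped)
print(adapts)  # True: the policy tracks how actions influence the world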
  32. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches (...)
  33. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues (...)
    2 citations
  34. One decade of universal artificial intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (...)
    3 citations
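    For readers unfamiliar with the AIXI model mentioned above, its action-selection rule is usually stated as the following expectimax expression (restated here from the standard presentation of Hutter's framework, not quoted from the chapter; notation: a_t actions, o_t r_t observation-reward pairs, m the horizon, U a universal monotone Turing machine, \ell(q) the length of program q):

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
           \big[ r_t + \cdots + r_m \big]
           \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}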
  35. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, through their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Representing responsibility for these upheavals is a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to attach an action to an agent? This article presents an overview of the concept of an epistemic ecosystem (...)
  36. Artificial intelligence's new frontier: Artificial companions and the fourth revolution.Luciano Floridi - 2008 - Metaphilosophy 39 (4-5):651-655.
    In this article I argue that the best way to understand the information turn is in terms of a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe. We are not immobile, at the centre of the universe (Copernicus); we are not unnaturally distinct and different from the rest of the animal world (Darwin); and we are far from being entirely transparent to ourselves (Freud). We are now slowly accepting the idea that (...)
    24 citations
  37. Evaluating Artificial Models of Cognition.Marcin Miłkowski - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):43-62.
    Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper (...)
  38. Human Goals Are Constitutive of Agency in Artificial Intelligence.Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values (...)
    10 citations
  39. Artificial Intelligence and Universal Values.Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to (...)
  40. Ethics of Driving Automation. Artificial Agency and Human Values.Fabio Fossa - 2023 - Cham: Springer.
    This book offers a systematic and thorough philosophical analysis of the ways in which driving automation crosses paths with ethical values. Upon introducing the different forms of driving automation and examining their relation to human autonomy, it provides readers with in-depth reflections on safety, privacy, moral judgment, control, responsibility, sustainability, and other ethical issues. Driving is undoubtedly a moral activity as a human act. Transferring it to artificial agents such as connected and automated vehicles necessarily raises many philosophical (...)
    3 citations
  41. Artificial Intelligence, Creativity, and the Precarity of Human Connection.Lindsay Brainard - forthcoming - Oxford Intersections: AI in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must (...)
  42. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, (...)
    10 citations
  43. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift für Ethik und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, (...)
  44. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 15-25.
    This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they do not have an ability for agential self-awareness, e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on (...)
    1 citation
  45. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# Contents: Risks of general artificial intelligence, Vincent C. Müller, pages 297-301; Autonomous technology and the greater human good, Steve Omohundro, pages 303-315; The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages (...)
    3 citations
  46. Heterogeneous Proxytypes as a Unifying Cognitive Framework for Conceptual Representation and Reasoning in Artificial Systems.Antonio Lieto - 2021 - In CARLA@FOIS Proceedings. Amsterdam, Netherlands: IOS Press.
    The paper presents the heterogeneous proxytypes hypothesis as a cognitively-inspired computational framework able to reconcile, in both natural and artificial systems, different theories of typicality about conceptual representation and reasoning that have been traditionally seen as incompatible. In particular, through the Dual PECCS system and its evolution, it shows how prototypes, exemplars and theory-theory like conceptual representations can be integrated in a cognitive artificial agent (thus extending its categorization capabilities) and, in addition, can provide useful insights in the (...)
  47. Artificial intelligence and philosophical creativity: From analytics to crealectics.Luis de Miranda - 2020 - Human Affairs 30 (4):597-607.
    The tendency to idealise artificial intelligence as independent from human manipulators, combined with the growing ontological entanglement of humans and digital machines, has created an “anthrobotic” horizon, in which data analytics, statistics and probabilities throw our agential power into question. How can we avoid the consequences of a reified definition of intelligence as universal operation becoming imposed upon our destinies? It is here argued that the fantasised autonomy of automated intelligence presents a contradistinctive opportunity for philosophical consciousness to understand (...)
    1 citation
  48. (1 other version)Can Artificial Intelligence Make Art?Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less (...)
    5 citations
  49. The social turn of artificial intelligence.Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social (...)
    1 citation
  50. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have (...)
    33 citations
Results 1–50 of 965