Results for 'intelligent agents'

997 found
  1. A Value-Sensitive Design Approach to Intelligent Agents.Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperative of adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently (...)
    13 citations
  2. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    2 citations
  3. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents.Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refers to machines capable of moral reasoning, judgment, and (...)
    2 citations
  4. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? (...)
    3 citations
  5. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the (...)
  6. Measuring the intelligence of an idealized mechanical knowing agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to (...)
    3 citations
  7. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these (...)
    35 citations
  8. Mandevillian Intelligence: From Individual Vice to Collective Virtue.Paul Smart - 2018 - In J. Adam Carter, Andy Clark, Jesper Kallestrup, Spyridon Orestis Palermos & Duncan Pritchard (eds.), Socially-Extended Knowledge. Oxford University Press. pp. 253–274.
    Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive shortcomings, limitations and biases play a positive functional role in yielding various forms of collective cognitive success. When this idea is transposed to the epistemological domain, mandevillian intelligence emerges as the idea that individual forms of intellectual vice may, on occasion, support the epistemic performance of some form of multi-agent ensemble, such as a socio-epistemic system, a collective doxastic agent, or an epistemic group agent. As a specific (...)
    2 citations
  9. Intelligent capacities in artificial systems.Atoosa Kasirzadeh & Victoria McGeer - 2023 - In William A. Bauer & Anna Marmodoro (eds.), Artificial Dispositions: Investigating Ethical and Metaphysical Issues. Bloomsbury.
    This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on (...)
  10. Reward-Punishment Symmetric Universal Intelligence.Samuel Allen Alexander & Marcus Hutter - 2021 - In AGI.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
    1 citation
  11. The intelligent use of space.David Kirsh - 1995 - Artificial Intelligence 73 (1--2):31-68.
    The objective of this essay is to provide the beginning of a principled classification of some of the ways space is intelligently used. Studies of planning have typically focused on the temporal ordering of action, leaving as unaddressed questions of where to lay down instruments, ingredients, work-in-progress, and the like. But, in having a body, we are spatially located creatures: we must always be facing some direction, have only certain objects in view, be within reach of certain others. How we (...)
    134 citations
  12. Emergent Agent Causation.Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  13. Social Machinery and Intelligence.Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. (...)
  14. Natural Intelligence and Anthropic Reasoning.Predrag Slijepcevic - 2020 - Biosemiotics 13 (tba):1-23.
    This paper aims to justify the concept of natural intelligence in the biosemiotic context. I will argue that the process of life is (i) a cognitive/semiotic process and (ii) that organisms, from bacteria to animals, are cognitive or semiotic agents. To justify these arguments, the neural-type intelligence represented by the form of reasoning known as anthropic reasoning will be compared and contrasted with types of intelligence explicated by four disciplines of biology – relational biology, evolutionary epistemology, biosemiotics and the (...)
    1 citation
  15. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
    2 citations
  16. Artificial intelligence and philosophical creativity: From analytics to crealectics.Luis de Miranda - 2020 - Human Affairs 30 (4):597-607.
    The tendency to idealise artificial intelligence as independent from human manipulators, combined with the growing ontological entanglement of humans and digital machines, has created an “anthrobotic” horizon, in which data analytics, statistics and probabilities throw our agential power into question. How can we avoid the consequences of a reified definition of intelligence as universal operation becoming imposed upon our destinies? It is here argued that the fantasised autonomy of automated intelligence presents a contradistinctive opportunity for philosophical consciousness to understand itself (...)
    1 citation
  17. A narrative review of the active ingredients in psychotherapy delivered by conversational agents.Arthur Herbener, Michal Klincewicz & Malene Flensborg Damholdt - 2024 - Computers in Human Behavior Reports 14.
    The present narrative review seeks to unravel where we are now, and where we need to go to delineate the active ingredients in psychotherapy delivered by conversational agents (e.g., chatbots). While psychotherapy delivered by conversational agents has shown promising effectiveness for depression, anxiety, and psychological distress across several randomized controlled trials, little emphasis has been placed on the therapeutic processes in these interventions. The theoretical framework of this narrative review is grounded in prominent perspectives on the active ingredients (...)
  18. Intelligibility and the Guise of the Good.Paul Boswell - 2018 - Journal of Ethics and Social Philosophy 13 (1):1-31.
    According to the Guise of the Good, an agent only does for a reason what she sees as good. One of the main motivations for the view is its apparent ability to explain why action for a reason must be intelligible to its agent, for on this view, an action is intelligible just in case it seems good. This motivation has come under criticism in recent years. Most notably, Kieran Setiya has argued that merely seeing one’s action as good does (...)
    7 citations
  19. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure.Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey (...)
    1 citation
  20. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - - - The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh - pages 317-342 - - (...)
    3 citations
  21. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25.
    This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they do not have an ability for agential self-awareness, e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific (...)
    1 citation
  22. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights (...)
    21 citations
  23. Legg-Hutter universal intelligence implies classical music is better than pop music for intellectual training.Samuel Alexander - 2019 - The Reasoner 13 (11):71-72.
    In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop (...)
  24. Intelligence, race, and psychological testing.Mark Alfano, Latasha Holden & Andrew Conway - 2016 - In Naomi Zack (ed.), The Oxford Handbook of Philosophy and Race.
    This chapter has two main goals: to update philosophers on the state of the art in the scientific psychology of intelligence, and to explain and evaluate challenges to the measurement invariance of intelligence tests. First, we provide a brief history of the scientific psychology of intelligence. Next, we discuss the metaphysics of intelligence in light of scientific studies in psychology and neuroimaging. Finally, we turn to recent skeptical developments related to measurement invariance. These have largely focused on attributability: Where do (...)
    1 citation
  25. Agent-Based Computational Economics: Overview and Brief History.Leigh Tesfatsion - 2023 - In Ragupathy Venkatachalam (ed.), Artificial Intelligence, Learning, and Computation in Economics and Finance. Cham: Springer. pp. 41-58.
    Scientists and engineers seek to understand how real-world systems work and could work better. Any modeling method devised for such purposes must simplify reality. Ideally, however, the modeling method should be flexible as well as logically rigorous; it should permit model simplifications to be appropriately tailored for the specific purpose at hand. Flexibility and logical rigor have been the two key goals motivating the development of Agent-based Computational Economics (ACE), a completely agent-based modeling method characterized by seven specific modeling principles. (...)
  26. Artificial intelligence's new frontier: Artificial companions and the fourth revolution.Luciano Floridi - 2008 - Metaphilosophy 39 (4-5):651-655.
    Abstract: In this article I argue that the best way to understand the information turn is in terms of a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe. We are not immobile, at the centre of the universe (Copernicus); we are not unnaturally distinct and different from the rest of the animal world (Darwin); and we are far from being entirely transparent to ourselves (Freud). We are now slowly accepting the idea that (...)
    24 citations
  27. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of (...)
    286 citations
  28. One decade of universal artificial intelligence.Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67--88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in the book (Hutter, 2005), an exciting, sound, and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, (...)
    3 citations
  29. Intelligence ethics and non-coercive interrogation.Michael Skerker - 2007 - Defense Intelligence Journal 16 (1):61-76.
    This paper will address the moral implications of non-coercive interrogations in intelligence contexts. U.S. Army and CIA interrogation manuals define non-coercive interrogation as interrogation which avoids the use of physical pressure, relying instead on oral gambits. These methods, including some that involve deceit and emotional manipulation, would be mostly familiar to viewers of TV police dramas. As I see it, there are two questions that need be answered relevant to this subject. First, under what circumstances, if any, may a state (...)
  30. Rethinking Human and Machine Intelligence through Determinism.Jae Jeong Lee - manuscript
    This paper proposes a metaphysical framework for distinguishing between human and machine intelligence. It posits two identical deterministic worlds -- one comprising a human agent and the other a machine agent. These agents exhibit different information processing mechanisms despite their apparent sameness in a causal sense. Providing a conceptual modeling of their difference, this paper resolves what it calls “the vantage point problem” – namely, how to justify an omniscient perspective through which a determinist asserts determinism from within the (...)
  31. The social turn of artificial intelligence.Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social (...)
  32. The rise of artificial intelligence and the crisis of moral passivity.Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue firstly that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
    4 citations
  33. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive (...)
    10 citations
  34. The Agent Intellect in Aquinas: A Metaphysical Condition of Possibility of Human Understanding as Receptive of Objective Content.Andres Ayala - 2018 - Dissertation, University of St. Michael's College
    The following is an interpretation of Aquinas’ agent intellect focusing on Summa Theologiae I, qq. 75-89, and proposing that the agent intellect is a metaphysical rather than a formal a priori of human understanding. A formal a priori is responsible for the intelligibility as content of the object of human understanding and is related to Kant’s epistemological views; whereas a metaphysical a priori is responsible for intelligibility as mode of being of this same object. We can find in Aquinas’ text (...)
    1 citation
  35. Can Artificial Intelligence Make Art?Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interactions.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less (...)
    4 citations
  36. Agents, mechanisms, and other minds.Douglas C. Long - 1979 - In Donald F. Gustafson & Bangs L. Tapscott (eds.), Body, Mind And Method. Dordrecht: Reidel. pp. 129--148.
    One of the goals of physiologists who study the detailed physical, chemical,and neurological mechanisms operating within the human body is to understand the intricate causal processes which underlie human abilities and activities. It is doubtless premature to predict that they will eventually be able to explain the behaviour of a particular human being as we might now explain the behaviour of a pendulum clock or even the invisible changes occurring within the hardware of a modern electronic computer. Nonetheless, it seems (...)
  37. Fundamental Issues of Artificial Intelligence.Vincent C. Müller (ed.) - 2016 - Cham: Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues (...)
    7 citations
  38. Artificial Intelligence, Robots, and Philosophy.Masahiro Morioka, Shin-Ichiro Inaba, Makoto Kureha, István Zoltán Zárdai, Minao Kukita, Shimpei Okamoto, Yuko Murakami & Rossa Ó Muireartaigh - 2023 - Journal of Philosophy of Life.
    This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and (...)
  39. Dynamic Cognition Applied to Value Learning in Artificial Intelligence.Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance isn't made with prudence, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of (...)
  40. Ethics of Artificial Intelligence.Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  41. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    1 citation
  42. Theory of Cooperative-Competitive Intelligence: Principles, Research Directions, and Applications.Robert Hristovski & Natàlia Balagué - 2020 - Frontiers in Psychology 11.
    We present a theory of cooperative-competitive intelligence (CCI), its measures, research program, and applications that stem from it. Within the framework of this theory, satisficing sub-optimal behavior is any behavior that does not promote a decrease in the prospective control of the functional action diversity/unpredictability (D/U) potential of the agent or team. This potential is defined as the entropy measure in multiple, context-dependent dimensions. We define the satisficing interval of behaviors as CCI. In order to manifest itself at individual or (...)
  43. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual users of the artificial agents — or whether they should be moral proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Who we think an artificial agent is a moral proxy for determines from which agential perspective the choice problems artificial agents will be faced with should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we, rather, consider the expected aggregate effects of the many choices made by all the artificial agents of a particular type all at once? This paper looks at how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the moral proxy problem to be both practically relevant and difficult.
  44. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? (...)
  45. Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, (...)
  46. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem.Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can (...)
  47. Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details (...)
  48. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, through their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Representing responsibility for these upheavals poses a considerable challenge: how can this concept account for its object in complex systems where it is difficult to trace an action back to an agent? This article presents an overview of the concept of the epistemic ecosystem (...)
  49. Body Schema in Autonomous Agents.Zachariah A. Neemeh & Christian Kronsted - 2021 - Journal of Artificial Intelligence and Consciousness 1 (8):113-145.
    A body schema is an agent's model of its own body that enables it to act on affordances in the environment. This paper presents a body schema system for the Learning Intelligent Decision Agent (LIDA) cognitive architecture. LIDA is a conceptual and computational implementation of Global Workspace Theory, also integrating other theories from neuroscience and psychology. This paper contends that the ‘body schema’ should be split into three separate functions based on the functional role of consciousness in Global Workspace (...)
  50. Noncognitivism and agent-centered norms.Alisabeth Ayars & Gideon Rosen - 2021 - Philosophical Studies 179 (4):1019-1038.
    This paper takes up a neglected problem for metaethical noncognitivism: the characterization of the acceptance states for agent-centered normative theories like Rational Egoism. If Egoism is a coherent view, the non-cognitivist needs a coherent acceptance state for it. This can be provided, as Dreier and Gibbard have shown. But those accounts fail when generalized, assigning the same acceptance state to normative theories that are clearly distinct, or assigning no acceptance state to theories that look to be intelligible. The paper makes (...)