Results for 'Intelligent agents'

960 found
  1. A Value-Sensitive Design Approach to Intelligent Agents. Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently (...)
    13 citations
  2. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    5 citations
  3. A Case for Machine Ethics in Modeling Human-Level Intelligent Agents. Robert James M. Boyles - 2018 - Kritike 12 (1):182–200.
    This paper focuses on the research field of machine ethics and how it relates to a technological singularity—a hypothesized, futuristic event where artificial machines will have greater-than-human-level intelligence. One problem related to the singularity centers on the issue of whether human values and norms would survive such an event. To somehow ensure this, a number of artificial intelligence researchers have opted to focus on the development of artificial moral agents, which refer to machines capable of moral reasoning, judgment, and (...)
    2 citations
  4. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents. Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? (...)
    3 citations
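    For reference, the Legg-Hutter measure this entry and several entries below (e.g. 26, 27, 37) build on is standardly written as a simplicity-weighted aggregate of expected rewards across all computable environments. A minimal statement, with notation assumed here rather than taken from the entry:
    \[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}, \]
    where E is the class of computable, reward-summable environments, K(\mu) is the Kolmogorov complexity of \mu relative to a fixed universal Turing machine, and V^{\pi}_{\mu} is the expected total reward agent \pi obtains in \mu. The question raised above is how to compare two such agents directly, rather than how to compute this number for one of them.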
  5. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these (...)
    39 citations
  6. Measuring the intelligence of an idealized mechanical knowing agent. Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to (...)
    3 citations
  7. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the (...)
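    The mixture operation described in entry 7 can be stated compactly in the notation sketched under entry 4 (weights assumed non-negative and summing to one). The weighted mixture \pi_w of agents \pi_1, \ldots, \pi_n satisfies, in every environment \mu,
    \[ V^{\pi_w}_{\mu} \;=\; \sum_{i} w_i \, V^{\pi_i}_{\mu}, \qquad \text{hence} \qquad \Upsilon(\pi_w) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi_w}_{\mu} \;=\; \sum_{i} w_i \, \Upsilon(\pi_i), \]
    so under any simplicity-weighted intelligence score of this form, the mixture's intelligence is the corresponding weighted average of the component agents' intelligences. This is a sketch of the property the abstract states, not the paper's own derivation.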
  8. The social turn of artificial intelligence. Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social (...)
    1 citation
  9. The intelligent use of space. David Kirsh - 1995 - Artificial Intelligence 73 (1-2):31-68.
    The objective of this essay is to provide the beginning of a principled classification of some of the ways space is intelligently used. Studies of planning have typically focused on the temporal ordering of action, leaving unaddressed questions of where to lay down instruments, ingredients, work-in-progress, and the like. But, in having a body, we are spatially located creatures: we must always be facing some direction, have only certain objects in view, be within reach of certain others. How we (...)
    137 citations
  10. Social Machinery and Intelligence. Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. (...)
  11. One decade of universal artificial intelligence. Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67-88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, (...)
    3 citations
  12. Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - (...)
    3 citations
  13. Artificial intelligence and philosophical creativity: From analytics to crealectics. Luis de Miranda - 2020 - Human Affairs 30 (4):597-607.
    The tendency to idealise artificial intelligence as independent from human manipulators, combined with the growing ontological entanglement of humans and digital machines, has created an “anthrobotic” horizon, in which data analytics, statistics and probabilities throw our agential power into question. How can we avoid the consequences of a reified definition of intelligence as universal operation becoming imposed upon our destinies? It is here argued that the fantasised autonomy of automated intelligence presents a contradistinctive opportunity for philosophical consciousness to understand itself (...)
    1 citation
  14. Emergent Agent Causation. Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
    1 citation
  15. Discovering agents. Zachary Kenton, Ramana Kumar, Sebastian Farquhar, Jonathan Richens, Matt MacDermott & Tom Everitt - 2023 - Artificial Intelligence 322 (C):103963.
    Causal models of agents have been used to analyse the safety aspects of machine learning systems. But identifying agents is non-trivial -- often the causal model is just assumed by the modeler without much justification -- and modelling failures can lead to mistakes in the safety analysis. This paper proposes the first formal causal definition of agents -- roughly that agents are systems that would adapt their policy if their actions influenced the world in a different (...)
    2 citations
  16. Artificial Intelligence, Creativity, and the Precarity of Human Connection. Lindsay Brainard - forthcoming - Oxford Intersections: AI in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  17. Accountability in Artificial Intelligence: What It Is and How It Works. Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
    12 citations
  18. Natural intelligence and anthropic reasoning. Predrag Slijepcevic - 2020 - Biosemiotics 13 (tba):1-23.
    This paper aims to justify the concept of natural intelligence in the biosemiotic context. I will argue that the process of life is (i) a cognitive/semiotic process and (ii) that organisms, from bacteria to animals, are cognitive or semiotic agents. To justify these arguments, the neural-type intelligence represented by the form of reasoning known as anthropic reasoning will be compared and contrasted with types of intelligence explicated by four disciplines of biology – relational biology, evolutionary epistemology, biosemiotics and the (...)
    2 citations
  19. Artificial Intelligence and Universal Values. Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is supposed to (...)
  20. The Agent Intellect in Aquinas: A Metaphysical Condition of Possibility of Human Understanding as Receptive of Objective Content. Andres Ayala - 2018 - Dissertation, University of St. Michael's College
    The following is an interpretation of Aquinas’ agent intellect focusing on Summa Theologiae I, qq. 75-89, and proposing that the agent intellect is a metaphysical rather than a formal a priori of human understanding. A formal a priori is responsible for the intelligibility as content of the object of human understanding and is related to Kant’s epistemological views; whereas a metaphysical a priori is responsible for intelligibility as mode of being of this same object. We can find in Aquinas’ text (...)
    1 citation
  21. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights (...)
    33 citations
  22. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of (...)
    294 citations
  23. Theory of Cooperative-Competitive Intelligence: Principles, Research Directions, and Applications. Robert Hristovski & Natàlia Balagué - 2020 - Frontiers in Psychology 11.
    We present a theory of cooperative-competitive intelligence (CCI), its measures, research program, and applications that stem from it. Within the framework of this theory, satisficing sub-optimal behavior is any behavior that does not promote a decrease in the prospective control of the functional action diversity/unpredictability (D/U) potential of the agent or team. This potential is defined as the entropy measure in multiple, context-dependent dimensions. We define the satisficing interval of behaviors as CCI. In order to manifest itself at individual or (...)
    1 citation
  24. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to (...)
    1 citation
  25. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive (...)
    11 citations
  26. Legg-Hutter universal intelligence implies classical music is better than pop music for intellectual training. Samuel Alexander - 2019 - The Reasoner 13 (11):71-72.
    In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop (...)
  27. Reward-Punishment Symmetric Universal Intelligence. Samuel Allen Alexander & Marcus Hutter - 2021 - In Samuel Allen Alexander & Marcus Hutter (eds.), AGI.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
    2 citations
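    A rough sketch of why a symmetry of this kind forces reward-ignoring agents to intelligence 0 (the assumptions here are illustrative, not the paper's own proof): suppose every environment \mu has a mirror environment \bar{\mu} that issues the same observations but negated rewards, with K(\bar{\mu}) = K(\mu). An agent \pi that ignores rewards behaves identically in \mu and \bar{\mu}, so V^{\pi}_{\bar{\mu}} = -V^{\pi}_{\mu}, and the paired contributions to the measure cancel:
    \[ \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu} \;=\; \tfrac{1}{2} \sum_{\mu \in E} 2^{-K(\mu)} \left( V^{\pi}_{\mu} + V^{\pi}_{\bar{\mu}} \right) \;=\; 0. \]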
  28. Intelligent capacities in artificial systems. Atoosa Kasirzadeh & Victoria McGeer - 2023 - In William A. Bauer & Anna Marmodoro (eds.), Artificial Dispositions: Investigating Ethical and Metaphysical Issues. New York: Bloomsbury.
    This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on (...)
  29. Intelligence, race, and psychological testing. Mark Alfano, Latasha Holden & Andrew Conway - 2017 - In Naomi Zack (ed.), The Oxford Handbook of Philosophy and Race. New York, USA: Oxford University Press USA.
    This chapter has two main goals: to update philosophers on the state of the art in the scientific psychology of intelligence, and to explain and evaluate challenges to the measurement invariance of intelligence tests. First, we provide a brief history of the scientific psychology of intelligence. Next, we discuss the metaphysics of intelligence in light of scientific studies in psychology and neuroimaging. Finally, we turn to recent skeptical developments related to measurement invariance. These have largely focused on attributability: Where do (...)
    2 citations
  30. Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents (AVAs) in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details (...)
    1 citation
  31. Noncognitivism and agent-centered norms. Alisabeth Ayars & Gideon Rosen - 2021 - Philosophical Studies 179 (4):1019-1038.
    This paper takes up a neglected problem for metaethical noncognitivism: the characterization of the acceptance states for agent-centered normative theories like Rational Egoism. If Egoism is a coherent view, the non-cognitivist needs a coherent acceptance state for it. This can be provided, as Dreier and Gibbard have shown. But those accounts fail when generalized, assigning the same acceptance state to normative theories that are clearly distinct, or assigning no acceptance state to theories that look to be intelligible. The paper makes (...)
    2 citations
  32. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual users of the artificial agents — or whether they should be moral proxies for high-level agents — e.g. designers, distributors or regulators, that is, those who can potentially control the choice behaviour of many artificial agents at once. Who we think an artificial agent is a moral proxy for determines from which agential perspective the choice problems artificial agents will be faced with should be framed: should we frame them like the individual choice scenarios previously faced by individual human agents? Or should we, rather, consider the expected aggregate effects of the many choices made by all the artificial agents of a particular type all at once? This paper looks at how artificial agents should be designed to make risky choices, and argues that the question of risky choice by artificial agents shows the moral proxy problem to be both practically relevant and difficult.
    1 citation
  33. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness. Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 15-25.
    This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they do not have an ability for agential self-awareness, e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific (...)
    1 citation
  34. Agencéité et responsabilité des agents artificiels. Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, through their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Making sense of responsibility for these upheavals poses a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to tie an action to an agent? This article presents an overview of the concept of an epistemic ecosystem (...)
  35. Intelligibility and the Guise of the Good. Paul Boswell - 2018 - Journal of Ethics and Social Philosophy 13 (1):1-31.
    According to the Guise of the Good, an agent only does for a reason what she sees as good. One of the main motivations for the view is its apparent ability to explain why action for a reason must be intelligible to its agent, for on this view, an action is intelligible just in case it seems good. This motivation has come under criticism in recent years. Most notably, Kieran Setiya has argued that merely seeing one’s action as good does (...)
    7 citations
  36. Fundamental Issues of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - Cham: Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues (...)
    7 citations
  37. Explicit Legg-Hutter intelligence calculations which suggest non-Archimedean intelligence. Samuel Allen Alexander & Arthur Paul Pedersen - forthcoming - Lecture Notes in Computer Science.
    Are the real numbers rich enough to measure intelligence? We generalize a result of Alexander and Hutter about the so-called Legg-Hutter intelligence measures of reinforcement learning agents. Using the generalized result, we exhibit a paradox: in one particular version of the Legg-Hutter intelligence measure, certain agents all have intelligence 0, even though in a certain sense some of them outperform others. We show that this paradox disappears if we vary the Legg-Hutter intelligence measure to be hyperreal-valued rather than (...)
  38. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
    1 citation
  39. Agent-Based Computational Economics: Overview and Brief History. Leigh Tesfatsion - 2023 - In Ragupathy Venkatachalam (ed.), Artificial Intelligence, Learning, and Computation in Economics and Finance. Cham: Springer. pp. 41-58.
    Scientists and engineers seek to understand how real-world systems work and could work better. Any modeling method devised for such purposes must simplify reality. Ideally, however, the modeling method should be flexible as well as logically rigorous; it should permit model simplifications to be appropriately tailored for the specific purpose at hand. Flexibility and logical rigor have been the two key goals motivating the development of Agent-based Computational Economics (ACE), a completely agent-based modeling method characterized by seven specific modeling principles. (...)
  40. The rise of artificial intelligence and the crisis of moral passivity. Berman Chan - 2020 - AI and Society 35 (4):991-993.
    Set aside fanciful doomsday speculations about AI. Even lower-level AIs, while otherwise friendly and providing us a universal basic income, would be able to do all our jobs. Also, we would over-rely upon AI assistants even in our personal lives. Thus, John Danaher argues that a human crisis of moral passivity would result. However, I argue, firstly, that if AIs are posited to lack the potential to become unfriendly, they may not be intelligent enough to replace us in all (...)
    5 citations
  41. Intelligence ethics and non-coercive interrogation. Michael Skerker - 2007 - Defense Intelligence Journal 16 (1):61-76.
    This paper will address the moral implications of non-coercive interrogations in intelligence contexts. U.S. Army and CIA interrogation manuals define non-coercive interrogation as interrogation which avoids the use of physical pressure, relying instead on oral gambits. These methods, including some that involve deceit and emotional manipulation, would be mostly familiar to viewers of TV police dramas. As I see it, there are two questions that need to be answered relevant to this subject. First, under what circumstances, if any, may a state (...)
  42. Digital Homunculi: Reimagining Democracy Research with Generative Agents. Petr Špecián - manuscript
    The pace of technological change continues to outstrip the evolution of democratic institutions, creating an urgent need for innovative approaches to democratic reform. However, the experimentation bottleneck - characterized by slow speed, high costs, limited scalability, and ethical risks - has long hindered progress in democracy research. This paper proposes a novel solution: employing generative artificial intelligence (GenAI) to create synthetic data through the simulation of digital homunculi, GenAI-powered entities designed to mimic human behavior in social contexts. By enabling rapid, (...)
  43. Updating the Frame Problem for Artificial Intelligence Research. Lisa Miracchi - 2020 - Journal of Artificial Intelligence and Consciousness 7 (2):217-230.
    The Frame Problem is the problem of how one can design a machine to use information so as to behave competently, with respect to the kinds of tasks a genuinely intelligent agent can reliably, effectively perform. I will argue that the way the Frame Problem is standardly interpreted, and so the strategies considered for attempting to solve it, must be updated. We must replace overly simplistic and reductionist assumptions with more sophisticated and plausible ones. In particular, the standard interpretation (...)
  44. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey (...)
    1 citation
  45. Unusual coincidences, statistics and an intelligent influence. Sergei Chekanov - manuscript
    This paper argues that unusual coincidences, particularly those involving historical events, can be viewed as design patterns, suggesting an intelligent influence over the course of events. A compelling case examined in detail using probability theory concerns the presidencies of Abraham Lincoln (1809–1865) and John F. Kennedy (1917–1963). This and other coincidences involving historical figures disfavor the materialistic perspective and point to the presence of an intelligent agent acting on a global scale, beyond the arrow of time, influencing human (...)
  46. (1 other version) Agents, mechanisms, and other minds. Douglas C. Long - 1979 - In Agents, Mechanisms, and Other Minds. Dordrecht, Holland: Reidel. pp. 129-148.
    One of the goals of physiologists who study the detailed physical, chemical, and neurological mechanisms operating within the human body is to understand the intricate causal processes which underlie human abilities and activities. It is doubtless premature to predict that they will eventually be able to explain the behaviour of a particular human being as we might now explain the behaviour of a pendulum clock or even the invisible changes occurring within the hardware of a modern electronic computer. Nonetheless, it seems (...)
  47. The Gap between Intelligence and Mind. Bowen Xu, Xinyi Zhan & Quansheng Ren - manuscript
    The feeling brings the "Hard Problem" to the philosophy of mind. Does subjective feeling have a non-negligible impact on intelligence? If so, can feeling be realized in Artificial Intelligence (AI)? To discuss these problems, we have to figure out what the feeling means, by giving a clear definition. In this paper, we primarily give some mainstream perspectives on the topic of the mind, especially the topic of the feeling (or qualia, subjective experience, etc.). Then, a definition of the feeling (...)
  48. (1 other version) Can Artificial Intelligence Make Art? Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human painting as art to roughly the same extent. However, people are much less (...)
    5 citations
  49. Artificial intelligence's new frontier: Artificial companions and the fourth revolution. Luciano Floridi - 2008 - Metaphilosophy 39 (4-5):651-655.
    In this article I argue that the best way to understand the information turn is in terms of a fourth revolution in the long process of reassessing humanity's fundamental nature and role in the universe. We are not immobile, at the centre of the universe (Copernicus); we are not unnaturally distinct and different from the rest of the animal world (Darwin); and we are far from being entirely transparent to ourselves (Freud). We are now slowly accepting the idea that (...)
    24 citations
  50. Artificial Intelligence, Robots, and Philosophy. Masahiro Morioka, Shin-Ichiro Inaba, Makoto Kureha, István Zoltán Zárdai, Minao Kukita, Shimpei Okamoto, Yuko Murakami & Rossa Ó Muireartaigh - 2023 - Journal of Philosophy of Life.
    This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and (...)
Showing results 1-50 of 960