Results for 'Artificial social agents'

955 found
  1. A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and (...)
  2. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
    11 citations
  3. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This demand for moral machines stems from changes in everyday practice, where artificial systems are frequently used in a variety of situations, from home help and elderly care purposes (...)
    1 citation
  4. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal (...) Contract Theory (SCT). We analyze whether, as the theory postulates, agents seek to escape a brutish "state of nature" by surrendering rights to an absolute sovereign in exchange for order and security. Our experiments unveil an alignment: Initially, agents engage in unrestrained conflict, mirroring Hobbes's depiction of the state of nature. However, as the simulation progresses, social contracts emerge, leading to the authorization of an absolute sovereign and the establishment of a peaceful commonwealth founded on mutual cooperation. This congruence between our LLM agent society's evolutionary trajectory and Hobbes's theoretical account indicates LLMs' capability to model intricate social dynamics and potentially replicate forces that shape human societies. By enabling such insights into group behavior and emergent societal phenomena, LLM-driven multi-agent simulations, while unable to simulate all the nuances of human behavior, may hold potential for advancing our understanding of social structures, group dynamics, and complex human systems.
  5. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    294 citations
  6. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional lives are no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    2 citations
  7. The social turn of artificial intelligence. Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous (...)
    1 citation
  8. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We (...)
  9. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that (...)
  10. The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to (...)
    1 citation
  11. Social Machinery and Intelligence. Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class (...)
  12. One decade of universal artificial intelligence. Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67-88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a super-intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (...)
    3 citations
  13. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even (...)
    32 citations
  14. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made (...)
    1 citation
  15. The Prisoner’s versus Pardoner’s Dilemmas: A Juxtaposition of Two Strategic Decision-Game Theoretic Approaches in Social Sciences. Saad Malook - 2024 - Journal of Social and Organizational Matters 3 (3):52-74.
    This article introduces a strategic decision-game theoretic approach, the Pardoner’s Dilemma, and juxtaposes it with the Prisoner’s Dilemma. Game theory has emerged as a significant approach in the twentieth century for explaining strategic decision-making in numerous arenas, including economics, business, politics, ethics, international relations, biology, law, and war studies. ‘Game theory’ explains how and why players/actors/agents cooperate or conflict to procure their self-interests in a social world. Life is a game, and human, corporate, and artificial intelligent (...) are players who play different games to maximise utility or minimise disutility. The Prisoner’s Dilemma is a promising game-theoretic approach that explains strategic decision-making in zero-sum and non-zero-sum games. ‘Strategic decision-making’ means that the outcome does not depend upon the actions of a single player but upon those of all players. There are numerous essential game strategies, including tossing, negotiation, bargaining, balloting, competition, chance, power, and arbitration. Although the Prisoner’s Dilemma is a good game-theoretic approach, it does not allow players to use the key game strategies. In contrast, the Pardoner’s Dilemma is a game-theoretic approach that not only explains zero-sum and non-zero-sum games but also allows the players to use different game strategies, such as negotiation, bargaining, tossing, chance, balloting, competition, arbitration, and power. The article develops and defends the Pardoner’s Dilemma as an alternative to the Prisoner’s Dilemma, and claims that it is a more promising approach in the decision-game theoretic framework. By introducing the Pardoner’s Dilemma, the article enhances the scope of decision/game theory in the social sciences.
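The strategic structure of the Prisoner's Dilemma discussed in this entry can be made concrete with a short sketch. The payoff values below are the standard textbook ones, not taken from Malook's article; the code simply verifies that mutual defection is the unique Nash equilibrium even though mutual cooperation pays both players more.

```python
from itertools import product

# Standard illustrative Prisoner's Dilemma payoffs (row player, column player);
# these numbers are textbook defaults, not drawn from the article.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator is exploited
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(opponent_move: str, player: int) -> str:
    """Return the move maximising this player's payoff against a fixed opponent move."""
    def payoff(move: str) -> int:
        pair = (move, opponent_move) if player == 0 else (opponent_move, move)
        return PAYOFFS[pair][player]
    return max("CD", key=payoff)

def nash_equilibria():
    """Profiles where each player's move is a best response to the other's."""
    return [
        (a, b) for a, b in product("CD", repeat=2)
        if best_response(b, 0) == a and best_response(a, 1) == b
    ]

print(nash_equilibria())  # [('D', 'D')]: defection dominates, although (C, C) pays both more
```

Because defection is a dominant strategy for each player, the unique equilibrium is mutual defection; that the equilibrium outcome is worse for both players than mutual cooperation is the dilemma the entries above refer to.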
  16. The Logic of the Method of Agent-Based Simulation in the Social Sciences: Empirical and Intentional Adequacy of Computer Programs. Nuno David, Jaime Sichman & Helder Coelho - 2005 - Journal of Artificial Societies and Social Simulation 8 (4).
    The classical theory of computation does not represent an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective that is able to conciliate the formal and empirical logic of program verification in computer science, with the interpretative and multiparadigmatic logic of the social sciences. We attempt to evaluate whether social simulation implies an additional perspective about the way one can understand the concepts of program and (...)
    2 citations
  17. The Structure and Logic of Interdisciplinary Research in Agent-Based Social Simulation. Nuno David, Maria Marietto, Jaime Sichman & Helder Coelho - 2004 - Journal of Artificial Societies and Social Simulation 7 (3).
    This article reports an exploratory survey of the structure of interdisciplinary research in Agent-Based Social Simulation. One hundred and ninety-six researchers participated in the survey, completing an on-line questionnaire. The questionnaire had three distinct sections: a classification of research domains, a classification of models, and an inquiry into software requirements for designing simulation platforms. The survey results allowed us to disambiguate the variety of scientific goals and modus operandi of researchers with a reasonable level of detail, and to (...)
    3 citations
  18. Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values (...)
    10 citations
  19. Logic and Social Cognition: The Facts Matter, and So Do Computational Models. Rineke Verbrugge - 2009 - Journal of Philosophical Logic 38 (6):649-680.
    This article takes off from Johan van Benthem’s ruminations on the interface between logic and cognitive science in his position paper “Logic and reasoning: Do the facts matter?”. When trying to answer Van Benthem’s question whether logic can be fruitfully combined with psychological experiments, this article focuses on a specific domain of reasoning, namely higher-order social cognition, including attributions such as “Bob knows that Alice knows that he wrote a novel under pseudonym”. For intelligent interaction, it is important that (...)
    14 citations
  20. Agent-Based Computational Economics: Overview and Brief History. Leigh Tesfatsion - 2023 - In Ragupathy Venkatachalam (ed.), Artificial Intelligence, Learning, and Computation in Economics and Finance. Cham: Springer. pp. 41-58.
    Scientists and engineers seek to understand how real-world systems work and could work better. Any modeling method devised for such purposes must simplify reality. Ideally, however, the modeling method should be flexible as well as logically rigorous; it should permit model simplifications to be appropriately tailored for the specific purpose at hand. Flexibility and logical rigor have been the two key goals motivating the development of Agent-based Computational Economics (ACE), a completely agent-based modeling method characterized by seven specific modeling principles. (...)
  21. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be (...)
    7 citations
  22. What decision theory provides the best procedure for identifying the best action available to a given artificially intelligent system? Samuel A. Barnett - 2018 - Dissertation, University of Oxford
    Decision theory has had a long-standing history in the behavioural and social sciences as a tool for constructing good approximations of human behaviour. Yet as artificially intelligent systems (AIs) grow in intellectual capacity and eventually outpace humans, decision theory becomes evermore important as a model of AI behaviour. What sort of decision procedure might an AI employ? In this work, I propose that policy-based causal decision theory (PCDT), which places a primacy on the decision-relevance of predictors and simulations of (...)
  23. Digital Homunculi: Reimagining Democracy Research with Generative Agents. Petr Špecián - manuscript
    The pace of technological change continues to outstrip the evolution of democratic institutions, creating an urgent need for innovative approaches to democratic reform. However, the experimentation bottleneck - characterized by slow speed, high costs, limited scalability, and ethical risks - has long hindered progress in democracy research. This paper proposes a novel solution: employing generative artificial intelligence (GenAI) to create synthetic data through the simulation of digital homunculi, GenAI-powered entities designed to mimic human behavior in social contexts. By (...)
  24. Social AI and The Equation of Wittgenstein’s Language User With Calvino’s Literature Machine. Warmhold Jan Thomas Mollema - 2024 - International Review of Literary Studies 6 (1):39-55.
    Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains a functionalist (...)
  25. Toward a social theory of Human-AI Co-creation: Bringing techno-social reproduction and situated cognition together with the following seven premises. Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    This article synthesizes current theoretical attempts to understand human-machine interactions and introduces seven premises for understanding our emerging dynamics with increasingly competent, pervasive, and instantly accessible algorithms. The hope is that these seven premises can build toward a social theory of human-AI co-creation. The focus on human-AI co-creation is intended to emphasize two factors. First is the fact that our machine learning systems are socialized. Second is the coevolving nature of the human mind and AI systems as smart devices form (...)
  26. The fetish of artificial intelligence. In response to Iason Gabriel’s “Towards a Theory of Justice for Artificial Intelligence”. Albert Efimov - forthcoming - Philosophy Science.
    The article presents grounds for defining the fetish of artificial intelligence (AI). It highlights the fundamental differences between AI and all previous technological innovations, relating primarily to AI's intrusion into the human cognitive sphere and to fundamentally new, uncontrolled consequences for society. Convincing arguments are presented that the leaders of the globalist project are the main beneficiaries of the AI fetish. This is clearly manifested in the works of philosophers close to big technology corporations and their mega-projects. It (...)
  27. The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
    34 citations
  28. Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
    35 citations
  29. Computer Models of Constitutive Social Practices. Richard Evans - 2013 - In Vincent Müller (ed.), Philosophy and Theory of Artificial Intelligence. Springer. pp. 389-409.
    Research in multi-agent systems typically assumes a regulative model of social practice. This model starts with agents who are already capable of acting autonomously to further their individual ends. A social practice, according to this view, is a way of achieving coordination between multiple agents by restricting the set of actions available. For example, in a world containing cars but no driving regulations, agents are free to drive on either side of the road. To prevent (...)
  30. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions (...)
    38 citations
  31. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
    5 citations
  32. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model (...)
    7 citations
  33. Distributed responsibility in human–machine interactions. Anna Strasser - 2021 - AI and Ethics.
    Artificial agents have become increasingly prevalent in human social life. In light of the diversity of new human–machine interactions, we face renewed questions about the distribution of moral responsibility. Besides positions denying the mere possibility of attributing moral responsibility to artificial systems, recent approaches discuss the circumstances under which artificial agents may qualify as moral agents. This paper revisits the discussion of how responsibility might be distributed between artificial agents and human (...)
    3 citations
  34. Polarization and Belief Dynamics in the Black and White Communities: An Agent-Based Network Model from the Data. Patrick Grim, Stephen B. Thomas, Stephen Fisher, Christopher Reade, Daniel J. Singer, Mary A. Garza, Craig S. Fryer & Jamie Chatman - 2012 - In Christoph Adami, David M. Bryson, Charles Offria & Robert T. Pennock (eds.), Artificial Life 13. MIT Press.
    Public health care interventions—regarding vaccination, obesity, and HIV, for example—standardly take the form of information dissemination across a community. But information networks can vary importantly between different ethnic communities, as can levels of trust in information from different sources. We use data from the Greater Pittsburgh Random Household Health Survey to construct models of information networks for White and Black communities--models which reflect the degree of information contact between individuals, with degrees of trust in information from various sources correlated with (...)
    1 citation
  35. Validation and Verification in Social Simulation: Patterns and Clarification of Terminology. Nuno David - 2009 - In Flaminio Squazzoni (ed.), Epistemological Aspects of Computer Simulation in the Social Sciences: EPOS 2006, Revised Selected and Invited Papers. Lecture Notes in Artificial Intelligence 5466:117-129.
    The terms ‘verification’ and ‘validation’ are widely used in science, both in the natural and the social sciences. They are extensively used in simulation, often associated with the need to evaluate models in different stages of the simulation development process. Frequently, terminological ambiguities arise when researchers conflate, along the simulation development process, the technical meanings of both terms with other meanings found in the philosophy of science and the social sciences. This article considers the problem of verification and (...)
    1 citation
  36. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  37. (1 other version) Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this (...)
    4 citations
  38. Robot morali? Considerazioni filosofiche sulla machine ethics [Moral robots? Philosophical considerations on machine ethics]. Fabio Fossa - 2020 - Sistemi Intelligenti 2020 (2):425-444.
    The purpose of this essay is to determine the domain of validity of the notions developed in Machine Ethics [ME]. To this aim, I analyse the epistemological and methodological presuppositions that lie at the root of this technological project. On this basis, I then try to develop the theoretical means to identify and deconstruct improper applications of these notions to objects that do not belong to the same epistemic context, focusing in particular on the extent to which ME is supposed (...)
    Download  
     
    Export citation  
     
    Bookmark  
  39. Artificial virtuous agents in a multi-agent tragedy of the commons.Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  40. Introducing the Argumentation Framework within Agent-Based Models to Better Simulate Agents’ Cognition in Opinion Dynamics: Application to Vegetarian Diet Diffusion.Patrick Taillandier, Nicolas Salliou & Rallou Thomopoulos - 2021 - Journal of Artificial Societies and Social Simulation 24 (2).
    This paper introduces a generic agent-based model simulating the exchange and the diffusion of pro and con arguments. It is applied to the case of the diffusion of vegetarian diets in the context of a potential emergence of a second nutrition transition. To this day, agent-based simulation has been extensively used to study opinion dynamics. However, the vast majority of existing models have been limited to extremely abstract and simplified representations of the diffusion process. These simplifications impair the realism of (...)
    Download  
     
    Export citation  
     
    Bookmark  
  41. Responsibility gaps and the reactive attitudes.Fabio Tollon - 2022 - AI and Ethics 1 (1).
    Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of (...)
    Download  
     
    Export citation  
     
    Bookmark   2 citations  
  42. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  43. Współzależność analizy etycznej i etyki [The Interdependence of Ethical Analysis and Ethics].John Ladd - 1973 - Etyka 11:139-158.
    AI designers endeavour to improve ‘autonomy’ in artificial intelligent devices, as recent developments show. This chapter firstly argues against attributing metaphysical attitudes to AI and, simultaneously, in favor of improving autonomous AI which has been enabled to respect autonomy in human agents. This seems to be the only responsible way of making further advances in the field of autonomous social AI. Let us examine what is meant by claims such as designing our artificial alter egos and (...)
    Download  
     
    Export citation  
     
    Bookmark  
  44. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    Download  
     
    Export citation  
     
    Bookmark   4 citations  
  45. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches (...)
    Download  
     
    Export citation  
     
    Bookmark  
  46. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules (...)
    Download  
     
    Export citation  
     
    Bookmark   4 citations  
  47. Developing a Trusted Human-AI Network for Humanitarian Benefit.Susannah Kate Devitt, Jason Scholz, Timo Schless & Larry Lewis - forthcoming - Journal of Digital War:TBD.
    Humans and artificial intelligences (AI) will increasingly participate digitally and physically in conflicts yet there is a lack of trusted communications across agents and platforms. For example, humans in disasters and conflict already use messaging and social media to share information, however, international humanitarian relief organisations treat this information as unverifiable and untrustworthy. AI may reduce the ‘fog-of-war’ and improve outcomes, however current AI implementations are often brittle, have a narrow scope of application and wide ethical risks. (...)
    Download  
     
    Export citation  
     
    Bookmark  
  48. Presumptuous aim attribution, conformity, and the ethics of artificial social cognition.Owen C. King - 2020 - Ethics and Information Technology 22 (1):25-37.
    Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines the ethics (...)
    Download  
     
    Export citation  
     
    Bookmark   3 citations  
  49. The only wrong cell is the dead one: On the enactive approach to normativity.Manuel Heras-Escribano, Jason Noble & Manuel De Pinedo García - 2013 - In Pietro Liò et al. (eds.), Advances in Artificial Life (ECAL 2013). pp. 665-670.
    In this paper we challenge the notion of ‘normativity’ used by some enactive approaches to cognition. We define some varieties of enactivism and their assumptions and make explicit the reasoning behind the co-emergence of individuality and normativity. Then we argue that appealing to dispositions for explaining some living processes can be more illuminating than claiming that all such processes are normative. For this purpose, we will present some considerations, inspired by Wittgenstein, regarding norm-establishing and norm-following and show that attributions of (...)
    Download  
     
    Export citation  
     
    Bookmark   1 citation  
  50. Synthetic Socio-Technical Systems: Poiêsis as Meaning Making.Piercosma Bisconti, Andrew McIntyre & Federica Russo - 2024 - Philosophy and Technology 37 (3):1-19.
    With the recent renewed interest in AI, the field has made substantial advancements, particularly in generative systems. Increased computational power and the availability of very large datasets has enabled systems such as ChatGPT to effectively replicate aspects of human social interactions, such as verbal communication, thus bringing about profound changes in society. In this paper, we explain that the arrival of generative AI systems marks a shift from ‘interacting through’ to ‘interacting with’ technologies and calls for a reconceptualization of (...)
    Download  
     
    Export citation  
     
    Bookmark  
1 — 50 / 955