Results for 'trust, metacognition, Bayes, confidence, artificial agents'

999 found
  1. A Metacognitive Approach to Trust and a Case Study: Artificial Agency. Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two factors: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
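    A rough illustration of the abstract's two Bayesian quantities (not Muntean's actual formalism): decision confidence obtained by Bayesian model averaging, with uncertainty over candidate models represented as a posterior distribution. All model names and probabilities below are hypothetical.

```python
# Illustrative sketch only: confidence in a decision to trust under model
# uncertainty, via Bayesian model averaging. Model names and numbers are
# hypothetical, not taken from the paper.
models = {
    # P(evidence | A succeeds), P(evidence | A fails), prior P(model)
    "optimistic":  {"p_e_success": 0.9, "p_e_failure": 0.4, "prior": 0.5},
    "pessimistic": {"p_e_success": 0.6, "p_e_failure": 0.5, "prior": 0.5},
}
p_success_prior = 0.5  # trustor H's prior that trustee A performs the action

def confidence_given_model(m):
    """(i) Posterior P(success | evidence, model), by Bayes' theorem."""
    num = m["p_e_success"] * p_success_prior
    return num / (num + m["p_e_failure"] * (1 - p_success_prior))

def model_posterior():
    """(ii) Posterior over candidate models, given the same evidence."""
    marginal = {k: m["p_e_success"] * p_success_prior
                   + m["p_e_failure"] * (1 - p_success_prior)
                for k, m in models.items()}
    z = sum(models[k]["prior"] * marginal[k] for k in models)
    return {k: models[k]["prior"] * marginal[k] / z for k in models}

post = model_posterior()
confidence = sum(post[k] * confidence_given_model(models[k]) for k in models)
print(post)        # model uncertainty after seeing the evidence
print(confidence)  # overall confidence in the decision to trust A
```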
  2. Trusting the (ro)botic other: By assumption? Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines in (...)
  3. Extremists are more confident. Nora Heinzelmann & Viet Tran - 2022 - Erkenntnis (5).
    Metacognitive mental states are mental states about mental states. For example, I may be uncertain whether my belief is correct. In social discourse, an interlocutor’s metacognitive certainty may constitute evidence about the reliability of their testimony. For example, if a speaker is certain that their belief is correct, then we may take this as evidence in favour of their belief, or its content. This paper argues that, if metacognitive certainty is genuine evidence, then it is disproportionate evidence for extreme beliefs. (...)
    2 citations
  4. Thinking Fast and Slow in AI: the Role of Metacognition. Marianna Bergamaschi Ganapini - manuscript
    Multiple authors; please see the attached paper. AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. We argue that a better study of the mechanisms that allow humans to have these capabilities can help (...)
  5. What Confidence Should We Have in GRADE? Brian Baigrie & Mathew Mercuri - 2018 - Journal of Evaluation in Clinical Practice 24:1240-1246.
    Rationale, Aims, and Objectives: Confidence (or belief) that a therapy is effective is essential to practicing clinical medicine. GRADE, a popular framework for developing clinical recommendations, provides a means for assigning how much confidence one should have in a therapy's effect estimate. One's level of confidence (or “degree of belief”) can also be modelled using Bayes theorem. In this paper, we look through both a GRADE and Bayesian lens to examine how one determines confidence in the effect estimate. Methods: Philosophical (...)
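    As a purely illustrative companion to the abstract's Bayesian lens, here is a minimal application of Bayes' theorem updating a 'degree of belief' that a therapy is effective after one positive trial result; the probabilities are invented for the example and do not correspond to GRADE's actual rating rules.

```python
# Hypothetical example of modelling a "degree of belief" with Bayes' theorem.
# All numbers are invented for illustration; they are not GRADE ratings.
prior_effective = 0.30       # initial belief that the therapy works
p_pos_if_effective = 0.80    # P(positive trial | therapy effective)
p_pos_if_ineffective = 0.10  # P(positive trial | therapy ineffective)

posterior = (p_pos_if_effective * prior_effective) / (
    p_pos_if_effective * prior_effective
    + p_pos_if_ineffective * (1 - prior_effective)
)
print(round(posterior, 2))   # ~0.77: updated confidence in the effect estimate
```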
  6. Publishing, Belief, and Self-Trust. Alexandra Plakias - 2023 - Episteme 20 (3):632-646.
    This paper offers a defense of ‘publishing without belief’ (PWB) – the view that authors are not required to believe what they publish. I address objections to the view ranging from outright denial and advocacy of a belief norm for publication, to a modified version that allows for some cases of PWB but not others. I reject these modifications. In doing so, I offer both an alternative story about the motivations for PWB and a diagnosis of the disagreement over its (...)
    1 citation
  7. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents (...)
  8. Developing a Trusted Human-AI Network for Humanitarian Benefit. Susannah Kate Devitt, Jason Scholz, Timo Schless & Larry Lewis - forthcoming - Journal of Digital War: TBD.
    Humans and artificial intelligences (AI) will increasingly participate digitally and physically in conflicts, yet there is a lack of trusted communications across agents and platforms. For example, humans in disasters and conflict already use messaging and social media to share information; however, international humanitarian relief organisations treat this information as unverifiable and untrustworthy. AI may reduce the ‘fog-of-war’ and improve outcomes, but current AI implementations are often brittle, have a narrow scope of application, and carry wide ethical risks. Meanwhile, (...)
  9. Opinion dynamics and bounded confidence: models, analysis and simulation. Rainer Hegselmann & Ulrich Krause - 2002 - Journal of Artificial Societies and Social Simulation 5 (3).
    When does opinion formation within an interacting group lead to consensus, polarization or fragmentation? The article investigates various models for the dynamics of continuous opinions by analytical methods as well as by computer simulations. Section 2 develops within a unified framework the classical model of consensus formation, the variant of this model due to Friedkin and Johnsen, a time-dependent version and a nonlinear version with bounded confidence of the agents. Section 3 presents for all these models major analytical results. (...)
    1 citation
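    For readers unfamiliar with the bounded-confidence model the abstract refers to, here is a minimal sketch of the Hegselmann-Krause update rule, in which each agent averages only those opinions within its confidence bound; parameter values are arbitrary illustrations, not taken from the article.

```python
import random

# Minimal sketch of the Hegselmann-Krause bounded-confidence update:
# each agent moves to the mean of all opinions within eps of its own.
def hk_step(opinions, eps):
    new = []
    for x in opinions:
        neighbours = [y for y in opinions if abs(y - x) <= eps]
        new.append(sum(neighbours) / len(neighbours))
    return new

random.seed(0)
opinions = [random.random() for _ in range(50)]  # continuous opinions in [0, 1]
for _ in range(30):
    opinions = hk_step(opinions, eps=0.15)

# A small eps typically leaves several opinion clusters (fragmentation or
# polarization); a large eps typically yields consensus.
print(sorted({round(x, 3) for x in opinions}))
```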
  10. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection. Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed by analyses of how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  11. Polarization and Belief Dynamics in the Black and White Communities: An Agent-Based Network Model from the Data. Patrick Grim, Stephen B. Thomas, Stephen Fisher, Christopher Reade, Daniel J. Singer, Mary A. Garza, Craig S. Fryer & Jamie Chatman - 2012 - In Christoph Adami, David M. Bryson, Charles Ofria & Robert T. Pennock (eds.), Artificial Life 13. MIT Press.
    Public health care interventions—regarding vaccination, obesity, and HIV, for example—standardly take the form of information dissemination across a community. But information networks can vary importantly between different ethnic communities, as can levels of trust in information from different sources. We use data from the Greater Pittsburgh Random Household Health Survey to construct models of information networks for White and Black communities--models which reflect the degree of information contact between individuals, with degrees of trust in information from various sources correlated with (...)
    1 citation
  12. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    293 citations
  13. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi.
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    2 citations
  14. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  15. Modeling artificial agents’ actions in context – a deontic cognitive event ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an (...)
  16. Nou zeg, waar bemoei je je mee [roughly: “Hey, what business is it of yours?”]. Jan Bransen - 2011 - Algemeen Nederlands Tijdschrift voor Wijsbegeerte 103 (1):4.
    This paper investigates the possibilities for ordinary people to establish moral authority in a subclass of everyday scenarios in the public domain that are characterised by an underdetermination of the obtaining norms and regulations. The paper offers a strategy based on hospitality to challenge the all too common practice of ignoring one’s responsibility as a moral agent and hiding in one’s shell, hoping that others (police power!) will solve one’s problem. The paper begins with a description of a (...)
  17. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  18. Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents (APA 2013). Christophe Menant - 2013 - American Philosophical Association Newsletter on Philosophy and Computers 13 (1):30-34.
    The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question “can machines think?” We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into “can AAs generate meanings like humans do?” We correspondingly present the TT, the CRA (...)
    6 citations
  19. Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. Oxford University Press. pp. 322-337.
    Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might (...)
    6 citations
  20. Vice Epistemology of Believers in Pseudoscience. Filip Tvrdý - 2021 - Filozofia 76 (10):735-751.
    The demarcation of pseudoscience has been one of the most important philosophical tasks since the 1960s. During the 1980s, an atmosphere of defeatism started to spread among philosophers of science, some of them claimed the failure of the demarcation project. I defend that the more auspicious approach to the problem might be through the intellectual character of epistemic agents, i.e., from the point of view of vice epistemology. Unfortunately, common lists of undesirable character features are usually based on a (...)
    1 citation
  21. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    5 citations
  22. (Social) Metacognition and (Self-)Trust. Kourken Michaelian - 2012 - Review of Philosophy and Psychology 3 (4):481-514.
    What entitles you to rely on information received from others? What entitles you to rely on information retrieved from your own memory? Intuitively, you are entitled simply to trust yourself, while you should monitor others for signs of untrustworthiness. This article makes a case for inverting the intuitive view, arguing that metacognitive monitoring of oneself is fundamental to the reliability of memory, while monitoring of others does not play a significant role in ensuring the reliability of testimony.
    6 citations
  23. Artificial Free Will: The Responsibility Strategy and Artificial Agents. Sven Delarivière - 2016 - Apeiron Student Journal of Philosophy (Portugal) 7:175-203.
    Both a traditional notion of free will, present in human beings, and artificial intelligence are often argued to be inherently incompatible with determinism. Contrary to these criticisms, this paper argues that an account of free will compatible with determinism, specifically the responsibility strategy (a term coined here), is a variety of free will worth wanting, and one that it is possible, in principle, to construct artificially. First, freedom will be defined and related to ethics. With that in mind, the (...)
    1 citation
  24. Mad Speculation and Absolute Inhumanism: Lovecraft, Ligotti, and the Weirding of Philosophy. Ben Woodard - 2011 - Continent 1 (1):3-13.
    I want to propose, as a trajectory into the philosophically weird, an absurd theoretical claim and pursue it, or perhaps more accurately, construct it as I point to it, collecting the ground work behind me like the Perpetual Train from China Miéville's Iron Council which puts down track as it moves, reclaiming it along the way. The strange trajectory is the following: Kant's critical philosophy and much of continental philosophy which has followed, (...)
    4 citations
  25. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    3 citations
  26. Detecting Health Problems Related to Addiction of Video Game Playing Using an Expert System. Samy S. Abu Naser & Mohran H. Al-Bayed - 2016 - World Wide Journal of Multidisciplinary Research and Development 2 (9):7-12.
    Today, almost everyone’s normal life can include a normal rate of playing computer or video games; but what about excessive or compulsive use of video games that impacts our lives? Kids who spend a lot of time playing video games will likely have trouble paying attention to their school lessons. In this paper, we introduce an expert system to help users in getting the correct diagnosis of the health problem of video game addictions (...)
    2 citations
  27. Trust and Confidence: A Dilemma for Epistemic Entitlement Theory. Matthew Jope - 2023 - Erkenntnis 88 (7):2807-2826.
    In this paper I argue that entitlement theorists face a dilemma, the upshot of which is that entitlement theory is either unmotivated or incoherent. I begin with the question of how confident one should be in a proposition on the basis of an entitlement to trust, distinguishing between strong views that warrant certainty and weak views that warrant less than certainty. Strong views face the problem that they are incompatible with the ineliminable epistemic risk that is a feature of the (...)
  28. Trusting artificial intelligence in cybersecurity is a double-edged sword. Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global (...)
    19 citations
  29. Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
    1 citation
  30. The Conditions of the Question: What Is Philosophy? Gilles Deleuze, Daniel W. Smith & Arnold I. Davidson - 1991 - Critical Inquiry 17 (3):471-478.
    Perhaps the question “What is philosophy?” can only be posed late in life, when old age has come, and with it the time to speak in concrete terms. It is a question one poses when one no longer has anything to ask for, but its consequences can be considerable. One was asking the question before, one never ceased asking it, but it was too artificial, too abstract; one expounded and dominated the question, more than being grabbed by it. There (...)
    3 citations
  31. Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
    7 citations
  32. Sensitive to Reasons: Moral Intuition and the Dual Process Challenge to Ethics. Dario Cecchini - 2022 - Dissertation.
    This dissertation is a contribution to the field of empirically informed metaethics, which combines the rigorous conceptual clarity of traditional metaethics with a careful review of empirical evidence. More specifically, this work stands at the intersection of moral psychology, moral epistemology, and philosophy of action. The study comprises six chapters on three distinct (although related) topics. Each chapter is structured as an independent paper and addresses a specific open question in the literature. The first part concerns the psychological features and (...)
  33. Algorithm exploitation: humans are keen to exploit benevolent AI. Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis (...)
    3 citations
  34. Elements of Episodic Memory: Insights from Artificial Agents. Alexandria Boyle & Andrea Blomkvist - forthcoming - Philosophical Transactions of the Royal Society B.
    Many recent AI systems take inspiration from biological episodic memory. Here, we ask how these ‘episodic-inspired’ AI systems might inform our understanding of biological episodic memory. We discuss work showing that these systems implement some key features of episodic memory whilst differing in important respects, and appear to enjoy behavioural advantages in the domains of strategic decision-making, fast learning, navigation, exploration and acting over temporal distance. We propose that these systems could be used to evaluate competing theories of episodic memory’s (...)
  35. Autonomy, understanding, and moral disagreement. C. Thi Nguyen - 2010 - Philosophical Topics 38 (2):111-129.
    Should the existence of moral disagreement reduce one’s confidence in one’s moral judgments? Many have claimed that it should not. They claim that we should be morally self-sufficient: that one’s moral judgment and moral confidence ought to be determined entirely by one’s own reasoning. Others’ moral beliefs ought not impact one’s own in any way. I claim that moral self-sufficiency is wrong. Moral self-sufficiency ignores the degree to which moral judgment is a fallible cognitive process like all the rest. In this (...)
    9 citations
  36. Meaning generation for animals, humans and artificial agents. An evolutionary perspective on the philosophy of information. (IS4SI 2017). Christophe Menant - manuscript
    Meanings are present everywhere in our environment and within ourselves. But these meanings do not exist by themselves. They are associated with information and have to be created, to be generated by agents. The Meaning Generator System (MGS) has been developed using a systems approach to model meaning generation in agents following an evolutionary perspective. The agents can be natural or artificial. The MGS generates meaningful information (a meaning) when it receives information that has a connection (...)
  37. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    1 citation
  38. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
    3 citations
  39. Institutional Trust in Medicine in the Age of Artificial Intelligence. Michał Klincewicz - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the functional role (...)
  40. Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this (...)
    4 citations
  41. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety. Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  42. Institutional Trust in Medicine in the Age of Artificial Intelligence. Michał Klincewicz - 2023 - In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the functional role (...)
  43. Artificial morality: Making of the artificial moral agents. Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes (...)
    1 citation
  44. Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then (...)
    1 citation
  45. Metacognition and Endorsement. Kourken Michaelian - 2012 - Mind and Language 27 (3):284-307.
    Real agents rely, when forming their beliefs, on imperfect informational sources (sources which deliver, even under normal conditions of operation, both accurate and inaccurate information). They therefore face the ‘endorsement problem’: how can beliefs produced by endorsing information received from imperfect sources be formed in an epistemically acceptable manner? Focussing on the case of episodic memory and drawing on empirical work on metamemory, this article argues that metacognition likely plays a crucial role in explaining how agents solve the (...)
    23 citations
  46. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop (...)
    11 citations
  47. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (...)
  48. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules (...)
    4 citations
  49. Close calls and the confident agent: Free will, deliberation, and alternative possibilities. Eddy Nahmias - 2006 - Philosophical Studies 131 (3):627-667.
    Two intuitions lie at the heart of our conception of free will. One intuition locates free will in our ability to deliberate effectively and control our actions accordingly: the ‘Deliberation and Control’ (DC) condition. The other intuition is that free will requires the existence of alternative possibilities for choice: the AP condition. These intuitions seem to conflict when, for instance, we deliberate well to decide what to do, and we do not want it to be possible to act in some (...)
    8 citations
  50. Artificial Evil and the Foundation of Computer Ethics. Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in (...)
    30 citations
1 — 50 / 999