
Citations of:

Can Artificial Entities Assert?

In Sanford C. Goldberg (ed.), The Oxford Handbook of Assertion. Oxford University Press. pp. 415-436 (2018)

  • Social epistemology. Alvin I. Goldman - 2001 - Stanford Encyclopedia of Philosophy.
    Social epistemology is the study of the social dimensions of knowledge or information. There is little consensus, however, on what the term "knowledge" comprehends, what is the scope of the "social", or what the style or purpose of the study should be. According to some writers, social epistemology should retain the same general mission as classical epistemology, revamped in the recognition that classical epistemology was too individualistic. According to other writers, social epistemology should be a more radical departure from classical (...)
  • Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • People, posts, and platforms: reducing the spread of online toxicity by contextualizing content and setting norms. Isaac Record & Boaz Miller - 2022 - Asian Journal of Philosophy 1 (2):1-19.
    We present a novel model of individual people, online posts, and media platforms to explain the online spread of epistemically toxic content such as fake news and suggest possible responses. We argue that a combination of technical features, such as the algorithmically curated feed structure, and social features, such as the absence of stable social-epistemic norms of posting and sharing in social media, is largely responsible for the unchecked spread of epistemically toxic content online. Sharing constitutes a distinctive communicative act, (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • Trust and Distributed Epistemic Labor. Boaz Miller & Ori Freiman - 2019 - In Judith Simon (ed.), The Routledge Handbook of Trust and Philosophy. Routledge. pp. 341-353.
    This chapter explores properties that bind individuals, knowledge, and communities together. Section 1 introduces Hardwig’s argument from trust in others’ testimonies as entailing that trust is the glue that binds individuals into communities. Section 2 asks “what grounds trust?” by exploring assessment of collaborators’ explanatory responsiveness, formal indicators such as affiliation and credibility, appreciation of peers’ tacit knowledge, game-theoretical considerations, and the role that the moral character of peers, social biases, and social values play in grounding trust. Section 3 deals with establishing (...)
  • Chatting with Bots: AI, Speech-Acts, and the Edge of Assertion. Iwan Williams & Tim Bayne - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper addresses the question of whether large language model-powered chatbots are capable of assertion. According to what we call the Thesis of Chatbot Assertion (TCA), chatbots are the kinds of things that can assert, and at least some of the output produced by current-generation chatbots qualifies as assertion. We provide some motivation for TCA, arguing that it ought to be taken seriously and not simply dismissed. We also review recent objections to TCA, arguing that these objections are weighty. We (...)
  • Public artifacts and the epistemology of collective material testimony. Quill R. Kukla - 2022 - Philosophical Issues 32 (1):233-252.
  • What Might Machines Mean? Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):323-338.
    This essay addresses the question whether artificial speakers can perform speech acts in the technical sense of that term common in the philosophy of language. We here argue that under certain conditions artificial speakers can perform speech acts so understood. After explaining some of the issues at stake in these questions, we elucidate a relatively uncontroversial way in which machines can communicate, namely through what we call verbal signaling. But verbal signaling is not sufficient for the performance of a speech (...)
  • Fictionalism about Chatbots. Fintan Mallory - 2023 - Ergo: An Open Access Journal of Philosophy 10.
    According to widely accepted views in metasemantics, the outputs of chatbots and other artificial text generators should be meaningless. They aren’t produced with communicative intentions and the systems producing them are not following linguistic conventions. Nevertheless, chatbots have assumed roles in customer service and healthcare, they are spreading information and disinformation and, in some cases, it may be more rational to trust the outputs of bots than those of our fellow human beings. To account for the epistemic role of chatbots (...)
  • Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - 2024 - Episteme 21 (3):1031-1047.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
  • “Google Told Me So!” On the Bent Testimony of Search Engine Algorithms. Devesh Narayanan & David De Cremer - 2022 - Philosophy and Technology 35 (2):1-19.
    Search engines are important contemporary sources of information and contribute to shaping our beliefs about the world. Each time they are consulted, various algorithms filter and order content to show us relevant results for the inputted search query. Because these search engines are frequently and widely consulted, it is necessary to have a clear understanding of the distinctively epistemic role that these algorithms play in the background of our online experiences. To aid in such understanding, this paper argues that search (...)