References
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs): generative artificial intelligence (AI) systems trained on massive datasets of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Theorem proving in artificial neural networks: new frontiers in mathematical AI. Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer-assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem-proving software has limited ability to provide new proofs. Importantly, it is not able to discriminate interesting theorems and proofs from trivial ones. For computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning results (...)
  • Trust, Explainability and AI. Sam Baron - 2025 - Philosophy and Technology 38 (4):1-23.
    There has been a surge of interest in explainable artificial intelligence (XAI). It is commonly claimed that explainability is necessary for trust in AI, and that this is why we need it. In this paper, I argue that for some notions of trust it is plausible that explainability is indeed a necessary condition, but that these kinds of trust are not appropriate for AI. For notions of trust that are appropriate for AI, explainability is not a necessary condition. I thus (...)
  • Morality First? Nathaniel Sharadin - forthcoming - AI and Society:1-13.
    The Morality First strategy for developing AI systems that can represent and respond to human values aims to first develop systems that can represent and respond to moral values. I argue that Morality First and other X-First views are unmotivated. Moreover, according to some widely accepted philosophical views about value, these strategies are positively distorting. The natural alternative, according to which no domain of value comes “first”, introduces a new set of challenges and highlights an important but otherwise obscured problem (...)
  • Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Hao Wang - 2022 - Philosophy and Technology 35 (3):1-25.
    Automated algorithms are silently making crucial decisions about our lives, but most of the time we have little understanding of how they work. To counter this hidden influence, there have been increasing calls for algorithmic transparency. Much ink has been spilled over the informational account of algorithmic transparency—about how much information should be revealed about the inner workings of an algorithm. But few studies question the power structure beneath the informational disclosure of the algorithm. As a result, the information disclosure (...)
  • Cultivating Moral Attention: a Virtue-Oriented Approach to Responsible Data Science in Healthcare. Emanuele Ratti & Mark Graves - 2021 - Philosophy and Technology 34 (4):1819-1846.
    In the past few years, the ethical ramifications of AI technologies have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance with widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing from microethics and the virtue theory tradition, (...)
  • Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – depending on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, and (...)
  • Algorithms Don’t Have A Future: On the Relation of Judgement and Calculation. Daniel Stader - 2024 - Philosophy and Technology 37 (1):1-29.
    This paper is about the opposition of judgement and calculation. This opposition has been a traditional anchor of critiques concerned with the rise of AI decision making over human judgement. Contrary to these approaches, it is argued that human judgement is not and cannot be replaced by calculation, but that it is human judgement that contextualises computational structures and gives them meaning and purpose. The article focuses on the epistemic structure of algorithms and artificial neural networks to find that they (...)
  • Modeling AI Trust for 2050: perspectives from media and info-communication experts. Katalin Feher, Lilla Vicsek & Mark Deuze - 2024 - AI and Society 39 (6):2933-2946.
    The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys probing their definitions of, and projections about, AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate (...)
  • Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems. Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes & Miranda van Hooff - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):380-389.
    Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop,” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible (...)
  • On trusting chatbots. P. D. Magnus - forthcoming - Episteme.
    This paper focuses on the epistemic situation one faces when using a Large Language Model based chatbot like ChatGPT: When reading the output of the chatbot, how should one decide whether or not to believe it? By surveying strategies we use with other, more familiar sources of information, I argue that chatbots present a novel challenge. This makes the question of how one could trust a chatbot especially vexing.
  • Steering Representations—Towards a Critical Understanding of Digital Twins. Paulan Korenhof, Vincent Blok & Sanneke Kloppenburg - 2021 - Philosophy and Technology 34 (4):1751-1773.
    Digital Twins are conceptualised in the academic technical discourse as real-time realistic digital representations of physical entities. Originating from product engineering, the Digital Twin quickly advanced into other fields, including the life sciences and earth sciences. Digital Twins are seen by the tech sector as a promising new tool for efficiency and optimisation, while governmental agencies see them as a fruitful means for improving decision-making to meet sustainability goals. A striking example of the latter is the European Commission, which wishes (...)
  • Trust and Trustworthiness in AI. Juan Manuel Durán & Giorgia Pozzi - 2025 - Philosophy and Technology 38 (1):1-31.
    Achieving trustworthy AI is increasingly considered an essential desideratum to integrate AI systems into sensitive societal fields, such as criminal justice, finance, medicine, and healthcare, among others. For this reason, it is important to spell out clearly its characteristics, merits, and shortcomings. This article is the first survey in the specialized literature that maps out the philosophical landscape surrounding trust and trustworthiness in AI. To achieve our goals, we proceed as follows. We start by discussing philosophical positions on trust and (...)
  • Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background: The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods: The question is approached through conceptual considerations based on the potentials that (...)
  • Explainable AI and stakes in medicine: A user study. Sam Baron, Andrew James Latham & Somogy Varga - 2025 - Artificial Intelligence 340 (C):104282.
    The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI systems in (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  • Artificial Intelligence, Discrimination, Fairness, and Other Moral Concerns. Re’em Segev - 2024 - Minds and Machines 34 (4):1-22.
    Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers (...)
  • Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment brought about by expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems, are examined closely. This article attempts to “open the black boxes” of the “black (...)
  • Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Brandon Ferlito, Seppe Segers, Michiel De Proost & Heidi Mertes - 2024 - Science and Engineering Ethics 30 (4):1-14.
    Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a patient(s), (...)
  • It cannot be right if it was written by AI: on lawyers’ preferences of documents perceived as authored by an LLM vs a human. Jakub Harasta, Tereza Novotná & Jaromir Savelka - forthcoming - Artificial Intelligence and Law:1-38.
    Large Language Models (LLMs) enable a future in which certain types of legal documents may be generated automatically. This has great potential to streamline legal processes, lower the cost of legal services, and dramatically increase access to justice. While many researchers focus on proposing and evaluating LLM-based applications supporting tasks in the legal domain, there is a notable lack of investigations into how legal professionals perceive content if they believe an LLM has generated it. Yet, this is a critical (...)
  • Social trust and public digitalization. Kees van Kersbergen & Gert Tinggaard Svendsen - forthcoming - AI and Society:1-12.
    Modern democratic states are increasingly adopting new information and communication technologies to enhance the efficiency and quality of public administration, public policy and services. However, there is substantial variation in the extent to which countries are successful in pursuing such public digitalization. This paper zooms in on the role of social trust as a possible account for the observed empirical pattern in the range and scope of public digitalization across countries. Our argument is that high social trust makes it easier (...)
  • The Ethics of Digital Touch. Nicholas Barrow & Patrick Haggard - manuscript
    This paper aims to outline the foundations for an ethics of digital touch. Digital touch refers to hardware and software technologies, often collectively referred to as ‘haptics’, that provide somatic sensations including touch and kinaesthesis, either as a stand-alone interface to users, or as part of a wider immersive experience. Digital touch has particular promise in application areas such as communication, affective computing, medicine, and education. However, as with all emerging technologies, potential value needs to be considered against potential risk. (...)