  • Toward an empathy-based trust in human-otheroid relations. Abootaleb Safdari - forthcoming - AI and Society:1-16.
    The primary aim of this paper is twofold: firstly, to argue that we can enter into a relation of trust with robots and AI systems (automata); and secondly, to provide a comprehensive description of the underlying mechanisms responsible for this relation of trust. To achieve these objectives, the paper first undertakes a critical examination of the main arguments opposing the concept of a trust-based relation with automata. Showing that these arguments face significant challenges that render them untenable, it thereby prepares the (...)
  • Simulacra as Conscious Exotica. Murray Shanahan - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy.
    The advent of conversational agents with increasingly human-like behaviour throws old philosophical questions into new light. Does it, or could it, ever make sense to speak of AI agents built out of generative language models in terms of consciousness, given that they are ‘mere’ simulacra of human behaviour, and that what they do can be seen as ‘merely’ role play? Drawing on the later writings of Wittgenstein, this paper attempts to tackle this question while avoiding the pitfalls of dualistic thinking.
  • Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context. Sarah Bouhouita-Guermech & Hazar Haidar - 2024 - Asian Bioethics Review 16 (3):315-344.
    The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, (...)
  • Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
  • Misinformation, Content Moderation, and Epistemology: Protecting Knowledge. Keith Raymond Harris - 2024 - Routledge.
    This book argues that misinformation poses a multi-faceted threat to knowledge, while arguing that some forms of content moderation risk exacerbating these threats. It proposes alternative forms of content moderation that aim to address this complexity while enhancing human epistemic agency. The proliferation of fake news, false conspiracy theories, and other forms of misinformation on the internet and especially social media is widely recognized as a threat to individual knowledge and, consequently, to collective deliberation and democracy itself. This book argues (...)
  • AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  • KI:Text: Diskurse über KI-Textgeneratoren [AI:Text: Discourses on AI Text Generators]. Gerhard Schreiber & Lukas Ohly (eds.) - 2024 - De Gruyter.
    If artificial intelligence (AI) can generate texts, what does that say about what a text is? How do texts written by humans differ from texts generated by means of AI? What expectations, fears, and hopes do the sciences harbour when AI-generated texts, whose authorship and originality can no longer be clearly defined, are received and gain recognition in their discourses? How does working with sources change, and what consequences follow for the criteria of scholarly textual work and for the understanding of science as a whole? What opportunities, limits and (...)
  • Modeling AI Trust for 2050: perspectives from media and info-communication experts. Katalin Feher, Lilla Vicsek & Mark Deuze - 2024 - AI and Society 39 (6):2933-2946.
    The study explores the future of AI-driven media and info-communication as envisioned by experts from all world regions, defining relevant terminology and expectations for 2050. Participants engaged in a 4-week series of surveys, questioning their definitions and projections about AI for the field of media and communication. Their expectations predict universal access to democratically available, automated, personalized and unbiased information determined by trusted narratives, recolonization of information technology and the demystification of the media process. These experts, as technology ambassadors, advocate (...)
  • The Intersection of Bernard Lonergan’s Critical Realism, the Common Good, and Artificial Intelligence in Modern Religious Practices. Steven Umbrello - 2023 - Religions 14 (12):1536.
    Artificial intelligence (AI) profoundly influences a number of societal structures today, including religious dynamics. Using Bernard Lonergan’s critical realism as a lens, this article investigates the intersections of AI and religious traditions in their shared pursuit of the common good. Beginning with Lonergan’s principle that humans construct their understanding through cognitive processes, we examine how AI-mediated realities align with or challenge traditional religious tenets. By delving into specific cases, we spotlight AI’s role in reshaping religious symbols, rituals, and even creating (...)
  • Trust criteria for artificial intelligence in health: normative and epistemic considerations. Kristin Kostick-Quenet, Benjamin H. Lang, Jared Smith, Meghan Hurley & Jennifer Blumenthal-Barby - 2024 - Journal of Medical Ethics 50 (8):544-551.
    Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how variability in trust (...)
  • Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - 2024 - AI and Society 39 (6):2691-2702.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the (...)
  • The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development. Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored the dynamics (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence of ignoring (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)
  • Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems. Paul Smart, Wendy Hall & Michael Boniface - 2022 - Data and Policy 4 (e21):1-16.
    Trustworthiness is typically regarded as a desirable feature of national identification systems (NISs); but the variegated nature of the trustor communities associated with such systems makes it difficult to see how a single system could be equally trustworthy to all actual and potential trustors. This worry is accentuated by common theoretical accounts of trustworthiness. According to such accounts, trustworthiness is relativized to particular individuals and particular areas of activity, such that one can be trustworthy with regard to some individuals in (...)
  • AI employment decision-making: integrating the equal opportunity merit principle and explainable AI. Gary K. Y. Chan - forthcoming - AI and Society:1-12.
    Artificial intelligence tools used in employment decision-making cut across the multiple stages of job advertisements, shortlisting, interviews and hiring, and actual and potential bias can arise in each of these stages. One major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by placing greater emphasis on enhancing the understanding of (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • From the Ground Truth Up: Doing AI Ethics from Practice to Principles. James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing protections for (...)
  • Can Artificial Intelligence Make Art? Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human painting as art to roughly the same extent. However, people are much less (...)
  • Represent me: please! Towards an ethics of digital twins in medicine. Matthias Braun - 2021 - Journal of Medical Ethics 47 (6):394-400.
    Simulations are used in very different contexts and for very different purposes. An emerging development is the possibility of using simulations to obtain a more or less representative reproduction of organs or even entire persons. Such simulations are framed and discussed using the term ‘digital twin’. This paper unpacks and scrutinises the current use of such digital twins in medicine and the ideas embedded in this practice. First, the paper maps the different types of digital twins. A special focus is (...)
  • A Leap of Faith: Is There a Formula for “Trustworthy” AI? Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust in AI (...)
  • Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration. Lee Hadlington, Maria Karanika-Murray, Jane Slater, Jens Binder, Sarah Gardner & Sarah Knight - forthcoming - AI and Society:1-14.
    There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This study is (...)
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  • Trustworthy artificial intelligence. Mona Simion & Christoph Kelp - 2020 - Asian Journal of Philosophy 2 (1):1-12.
    This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, (...)
  • Analysis of Beliefs Acquired from a Conversational AI: Instruments-based Beliefs, Testimony-based Beliefs, and Technology-based Beliefs. Ori Freiman - 2024 - Episteme 21 (3):1031-1047.
    Speaking with conversational AIs, technologies whose interfaces enable human-like interaction based on natural language, has become a common phenomenon. During these interactions, people form their beliefs due to the say-so of conversational AIs. In this paper, I consider, and then reject, the concepts of testimony-based beliefs and instrument-based beliefs as suitable for analysis of beliefs acquired from these technologies. I argue that the concept of instrument-based beliefs acknowledges the non-human agency of the source of the belief. However, the analysis focuses (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • On the Contribution of Neuroethics to the Ethics and Regulation of Artificial Intelligence. Michele Farisco, Kathinka Evers & Arleen Salles - 2022 - Neuroethics 15 (1):1-12.
    Contemporary ethical analysis of Artificial Intelligence is growing rapidly. One of its most recognizable outcomes is the publication of a number of ethics guidelines that, intended to guide governmental policy, address issues raised by AI design, development, and implementation and generally present a set of recommendations. Here we propose two things: first, regarding content, since some of the applied issues raised by AI are related to fundamental questions about topics like intelligence, consciousness, and the ontological and ethical status of humans, (...)
  • Justifying Our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - 2024 - Science and Engineering Ethics 30 (6):1-21.
    We address an open problem in the philosophy of artificial intelligence (AI): how to justify the epistemic attitudes we have towards the trustworthiness of AI systems. The problem is important, as providing reasons to believe that AI systems are worthy of trust is key to appropriately rely on these systems in human-AI interactions. In our approach, we consider the trustworthiness of an AI as a time-relative, composite property of the system with two distinct facets. One is the actual trustworthiness of (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Charles Rathkopf & Bert Heinrichs - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):333-345.
    Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing (...)
  • The application of chatbot on Vietnamese migrant workers’ right protection in the implementation of new generation free trade agreements (FTAs). Quoc Nguyen Phan, Chin-Chin Tseng, Thu Thi Hoai Le & Thi Bich Ngoc Nguyen - 2023 - AI and Society 38 (4):1771-1783.
    The accession and implementation of new generation free trade agreements bring numerous opportunities as well as challenges to Viet Nam, regarding trade, labor and investment. The increasing number of workers abroad puts a pressure on Vietnamese government to support them in new working cultures and environments. The application of chatbot, which has been known to help certain vulnerable groups such as patients, women and migrants could be one of the tools to support Vietnamese migrant workers by providing immediate information, network (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The drawback of the substantial opportunities AI systems and applications provide in marketing are ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Organisational responses to the ethical issues of artificial intelligence. Bernd Carsten Stahl, Josephina Antoniou, Mark Ryan, Kevin Macnish & Tilimbe Jiya - 2022 - AI and Society 37 (1):23-37.
    The ethics of artificial intelligence is a widely discussed topic. There are numerous initiatives that aim to develop the principles and guidance to ensure that the development, deployment and use of AI are ethically acceptable. What is generally unclear is how organisations that make use of AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills the gap in our current (...)
  • No Agent in the Machine: Being Trustworthy and Responsible about AI. Niël Henk Conradie & Saskia K. Nagel - 2024 - Philosophy and Technology 37 (2):1-24.
    Many recent AI policies have been structured under labels that follow a particular trend: national or international guidelines, policies or regulations, such as the EU’s and USA’s ‘Trustworthy AI’ and China’s and India’s adoption of ‘Responsible AI’, use a label that follows the recipe of [agentially loaded notion + ‘AI’]. A result of this branding, even if implicit, is to encourage the application by laypeople of these agentially loaded notions to the AI technologies themselves. Yet, these notions are appropriate only (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Machine and human agents in moral dilemmas: automation–autonomic and EEG effect. Federico Cassioli, Laura Angioletti & Michela Balconi - 2024 - AI and Society 39 (6):2677-2689.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart (...)
  • The Ethics of AI Ethics. A Constructive Critique. Jan-Christoph Heilinger - 2022 - Philosophy and Technology 35 (3):1-20.
    The paper presents an ethical analysis and constructive critique of the current practice of AI ethics. It identifies conceptual, substantive and procedural challenges and it outlines strategies to address them. The strategies include countering the hype and understanding AI as ubiquitous infrastructure, including neglected issues of ethics and justice such as structural background injustices into the scope of AI ethics, and making the procedures and fora of AI ethics more inclusive and better informed with regard to philosophical ethics. These measures (...)
  • Tech Ethics Through Trust Auditing. Matthew Grellette - 2022 - Science and Engineering Ethics 28 (3):1-15.
    The public’s trust in the technology sector is waning and, in response, technology companies and state governments have started to champion “tech ethics”. That is, they have pledged to design, develop, distribute, and employ new technologies in an ethical manner. In this paper, I observe that tech ethics is already subject to a widespread pathology in that technology companies, the primary executors of tech ethics, are incentivized to pursue it half-heartedly or even disingenuously. Next, I highlight two emerging strategies which (...)
  • Incorporation, Transparency and Cognitive Extension: Why the Distinction Between Embedded and Extended Might Be More Important to Ethics Than to Metaphysics. Mirko Farina & Andrea Lavazza - 2022 - Philosophy and Technology 35 (1):1-21.
    We begin by introducing our readers to the Extended Mind Thesis and briefly discuss a series of arguments in its favour. We continue by showing how such a theory can be resisted and go on to demonstrate that a more conservative account of cognition can be developed. We acknowledge a stalemate between these two different accounts of cognition and notice a couple of issues that we argue have prevented further progress in the field. To overcome the stalemate, we propose to (...)
  • Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, at (...)
  • Creating God. Andie Rothenhäusler - 2024 - In Gerhard Schreiber & Lukas Ohly (eds.), KI:Text: Diskurse über KI-Textgeneratoren. De Gruyter. pp. 183-198.
  • Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI. Fei Song - 2023 - Asian Journal of Philosophy 2 (2):1-8.
    Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account of trustworthiness and then apply this account to various instances of AI. By doing so, they explain in what way any AI can be considered trustworthy, as per the general account. Simion and Kelp reject any account of trustworthiness that relies on assumptions of agency that are too anthropocentric, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary condition for (...)
  • Trustworthy AI: a plea for modest anthropocentrism. Rune Nyrup - 2023 - Asian Journal of Philosophy 2 (2):1-10.
    Simion and Kelp defend a non-anthropocentric account of trustworthy AI, based on the idea that the obligations of AI systems should be sourced in purely functional norms. In this commentary, I highlight some pressing counterexamples to their account, involving AI systems that reliably fulfil their functions but are untrustworthy because those functions are antagonistic to the interests of the trustor. Instead, I outline an alternative account, based on the idea that AI systems should not be considered primarily as tools but (...)
  • Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the masterplan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
  • Towards trustworthy medical AI ecosystems – a proposal for supporting responsible innovation practices in AI-based medical innovation. Christian Herzog, Sabrina Blank & Bernd Carsten Stahl - forthcoming - AI and Society:1-21.
    In this article, we explore questions about the culture of trustworthy artificial intelligence (AI) through the lens of ecosystems. We draw on the European Commission’s Guidelines for Trustworthy AI and its philosophical underpinnings. Based on the latter, the trustworthiness of an AI ecosystem can be conceived of as being grounded by both the so-called rational-choice and motivation-attributing accounts—i.e., trusting is rational because solution providers deliver expected services reliably, while trust also involves resigning control by attributing one’s motivation, and hence, goals, (...)
  • Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D. Christian Reuter, Thea Riebe & Stefka Schmid - 2022 - Science and Engineering Ethics 28 (2):1-23.
    Artificial Intelligence (AI) seems to be impacting all industry sectors, while becoming a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attention from security and ethics scholars. With the publication of the ethical guideline Trustworthy AI by the European Union (EU), normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of reference for responsible research (...)
  • La libertad como el punto de encuentro para la construcción de la confianza en las relaciones humanas [Freedom as the meeting point for the construction of trust in human relationships]. Carlos Vargas-González & Iván-Darío Toro-Jaramillo - 2021 - Isegoría 65:09-09.
    This paper proposes freedom as the condition of possibility for the construction of trust in human relationships. The methodology used is a review of the scientific literature of the most recent moral and political philosophy. As a result of the dialogue between different positions, it is discovered that freedom, despite being present in the act of trust, is forgotten in the discussion around trust, a forgetfulness that has as its main causes the assumption that trust is natural and the confusion (...)