Results for 'Trust in AI'

975 found
  1. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and (...)
  2. Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is (...)
    9 citations
  3. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  4. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since (...)
    30 citations
  5. Study on effect of shared investing strategy on trust in AI. Ryosuke Yokoi & Kazuya Nakayachi - 2019 - Japanese Journal of Experimental Social Psychology 59 (1):46-50.
    This study examined the determinants of trust in artificial intelligence (AI) in the area of asset management. Many studies of risk perception have found that value similarity determines trust in risk managers. Some studies have demonstrated that value similarity also influences trust in AI. AI is currently employed in a diverse range of domains, including asset management. However, little is known about the factors that influence trust in asset management-related AI. We developed an investment game and (...)
  6. The importance of understanding trust in Confucianism and what it is like in an AI-powered world. Ho Manh Tung - unknown
    Since the revival of artificial intelligence (AI) research, many countries in the world have proposed their visions of an AI-powered world: Germany with the concept of “Industry 4.0,” Japan with the concept of “Society 5.0,” China with the “New Generation Artificial Intelligence Plan (AIDP).” In all of these grand visions, governments emphasize the “human-centric element” in their plans. This essay focuses on the concept of trust in Confucian societies and places this very human element in the context of (...)
  7. Anthropomorphism in AI: Hype and Fallacy. Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights (...)
    2 citations
  8. (1 other version) Institutional Trust in Medicine in the Age of Artificial Intelligence. Michał Klincewicz - 2023 - In David Collins, Iris Vidmar Jovanović, Mark Alfano & Hale Demir-Doğuoğlu (eds.), The Moral Psychology of Trust. Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the (...)
  9. Developing a Trusted Human-AI Network for Humanitarian Benefit. Susannah Kate Devitt, Jason Scholz, Timo Schless & Larry Lewis - forthcoming - Journal of Digital War:TBD.
    Humans and artificial intelligences (AI) will increasingly participate digitally and physically in conflicts, yet there is a lack of trusted communications across agents and platforms. For example, humans in disasters and conflict already use messaging and social media to share information; however, international humanitarian relief organisations treat this information as unverifiable and untrustworthy. AI may reduce the ‘fog-of-war’ and improve outcomes; however, current AI implementations are often brittle, have a narrow scope of application, and carry wide ethical risks. Meanwhile, human error (...)
  10. Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make (...)
    1 citation
  11. Trust and generative AI: embodiment considered. Kefu Zhu - 2024 - AI and Ethics.
    Questions surrounding engagement with generative AI are often framed in terms of trust, yet mere theorizing about trust may not yield actionable insights, given the multifaceted nature of trust. Literature on trust typically overlooks how individuals make meaning in their interactions with other entities, including AI. This paper reexamines trust with insights from Merleau-Ponty’s views on embodiment, positing trust as a style of world engagement characterized by openness—an attitude wherein individuals enact and give themselves (...)
  12. Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence (...)
    3 citations
  13. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role (...)
    3 citations
  14. Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning. Jonathan Y. H. Sim - 2023 - Higher Education in Southeast Asia and Beyond.
    Trust is the foundation for learning, and we must not allow ignorance of new technologies, like Generative AI, to disrupt the relationship between students and educators. As a first step, we need to actively engage with AI tools to better understand how they can help us in our work.
  15. Can AI become an Expert? Hyeongyun Kim - 2024 - Journal of AI Humanities 16 (4):113-136.
    With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human (...)
  16. (1 other version) Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to (...)
    1 citation
  17. Big Tech corporations and AI: A Social License to Operate and Multi-Stakeholder Partnerships in the Digital Age. Marianna Capasso & Steven Umbrello - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 231–249.
    The pervasiveness of AI-empowered technologies across multiple sectors has led to drastic changes concerning traditional social practices and how we relate to one another. Moreover, market-driven Big Tech corporations are now entering public domains, and concerns have been raised that they may even influence public agenda and research. Therefore, this chapter focuses on assessing and evaluating what kind of business model is desirable to incentivise the AI for Social Good (AI4SG) factors. In particular, the chapter explores the implications of this (...)
  18. AI Contribution Value System Argument. Michael Haimes - manuscript
    The AI Contribution Value System Argument proposes a framework in which AI-generated contributions are valued based on their societal impact rather than traditional monetary metrics. Traditional economic systems often fail to capture the enduring value of AI innovations, which can mitigate pressing global challenges. This argument introduces a contribution-based valuation model grounded in equity, inclusivity, and sustainability. By incorporating measurable metrics such as quality-adjusted life years (QALYs), emissions reduced, and innovations generated, this system ensures rewards align with tangible societal benefits. (...)
  19. The promise and perils of AI in medicine. Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    5 citations
  20. Algorithm exploitation: humans are keen to exploit benevolent AI. Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that (...)
    3 citations
  21. Adopting trust as an ex post approach to privacy. Haleh Asgarinia - 2024 - AI and Ethics 3 (4).
    This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the person who is the trustee (i.e., with whom information has been shared) must fulfil trust norms. (...)
  22. Robot Mindreading and the Problem of Trust. Andrés Páez - 2021 - In AISB Convention 2021: Communication and Conversation. Curran. pp. 140-143.
    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first one is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should (...)
  23. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
    2 citations
  24. AI-Driven Legislative Simulation and Inclusive Global Governance. Michael Haimes - manuscript
    This argument explores the transformative potential of AI-driven legislative simulations for creating inclusive, equitable, and globally adaptable laws. By using predictive modeling and adaptive frameworks, these simulations can account for diverse cultural, social, and economic contexts. The argument emphasizes the need for universal ethical safeguards, trust-building measures, and phased implementation strategies. Case studies of successful applications in governance and conflict resolution demonstrate the feasibility and efficacy of this approach. The conclusion highlights AI’s role in democratizing governance and ensuring laws (...)
  25. Augmented Intelligence - The New AI - Unleashing Human Capabilities in Knowledge Work. James M. Corrigan - 2012 - 2012 34th International Conference on Software Engineering (ICSE 2012).
    In this paper I describe a novel application of contemplative techniques to software engineering with the goal of augmenting the intellectual capabilities of knowledge workers within the field in four areas: flexibility, attention, creativity, and trust. The augmentation of software engineers’ intellectual capabilities is proposed as a third complement to the traditional focus of methodologies on the process and environmental factors of the software development endeavor. I argue that these capabilities have been shown to be open to improvement through (...)
  26. The value of testimonial-based beliefs in the face of AI-generated quasi-testimony. Felipe Alejandro Álvarez Osorio & Ruth Marcela Espinosa Sarmiento - 2024 - Aufklärung 11 (Especial):25-38.
    The value of testimony as a source of knowledge has been a subject of epistemological debates. The "trust theory of testimony" suggests that human testimony is based on an affective relationship supported by social norms. However, the advent of generative artificial intelligence challenges our understanding of genuine testimony. The concept of "quasi-testimony" seeks to characterize utterances produced by non-human entities that mimic testimony but lack certain fundamental attributes. This article analyzes these issues in depth, exploring philosophical perspectives on testimony (...)
  27. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness. Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to (...)
  28. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
    1 citation
  29. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
    1 citation
  30. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. (...)
  31. Digital Democracy in the Age of Artificial Intelligence. Claudio Novelli & Giulia Sandri - manuscript
    This chapter explores the influence of Artificial Intelligence (AI) on digital democracy, focusing on four main areas: citizenship, participation, representation, and the public sphere. It traces the evolution from electronic to virtual and network democracy, underscoring how each stage has broadened democratic engagement through technology. Focusing on digital citizenship, the chapter examines how AI can improve online engagement while posing privacy risks and fostering identity stereotyping. Regarding political participation, it highlights AI's dual role in mobilising civic actions and spreading misinformation. (...)
  32. Ethical Standards in Higher Education. Eutychus Gichuru - 2023 - KIU Journal of Education 3 (2):98-114.
    A study was conducted on ways in which higher education institutions can improve ethics. The theoretical frameworks used included virtue ethics, deontological ethics, and environmental ethics theories. The total sample comprised 94 written texts, obtained through non-probability sampling, specifically online convenience sampling via web scraping. The philosophical assumption that guided this study was interpretivism, and the approach was qualitative. A case study design was used, with content analysis as the method of data analysis. Some of the findings (...)
  33. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety. Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
    1 citation
  34. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  35. Vertrouwen in de geneeskunde en kunstmatige intelligentie [Trust in medicine and artificial intelligence]. Lily Frank & Michal Klincewicz - 2021 - Podium Voor Bioethiek 3 (28):37-42.
    Artificial intelligence (AI) and machine learning (ML) systems can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions may thus take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
  36. The transparency of retraction notices in The Lancet. Trans Eva - manuscript
    In the year 2020, during the global race to combat the coronavirus, the scientific community experienced a seismic shock when a research paper in the medical science journal The Lancet was retracted [1]. Since then, retractions of research papers in The Lancet have become more frequent. This not only raises concerns about the quality of research within the academic community but also has the potential to erode public trust in science. As transparent retraction notices will help alleviate the negative (...)
  37. Emerging Trends in Cybersecurity: Navigating the Future of Digital Protection. Anumiti Jat - 2024 - Idea of Spectrum 1 (12):1-7.
    The increasing sophistication of cyber threats necessitates innovative and proactive cybersecurity measures. This paper explores the latest trends in cybersecurity, focusing on the role of Artificial Intelligence (AI), Zero Trust security, and blockchain technology. A review of the literature highlights significant advancements and persistent challenges, including the security of Internet of Things (IoT) ecosystems and human-centric vulnerabilities. Experiments were conducted to evaluate the efficacy of machine learning-based intrusion detection systems and Zero Trust implementation in a simulated environment. Results (...)
  38. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what (...)
  39. Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults. Alex John London - forthcoming - IEEE Transactions on Technology and Society.
    This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load (...)
    1 citation
  40. Reading at university in the time of GenAI. Thomas Corbin, Yifei Liang, Margaret Bearman, Tim Fawns, Gene Flenady, Paul Formosa, Lucinda McKnight, Jack Reynolds & Jack Walton - 2024 - Learning Letters 3 (35):1-8.
    Concerns around Generative Artificial Intelligence (GenAI) in higher education have so far largely centred on assessment integrity, resulting in fundamental questions about students’ broader engagement with these tools remaining underexplored. This paper reports on the findings of a survey that forms part of a wider study, comprising the first empirical investigation of GenAI use by university students as a method of engaging with their academic readings. Our survey of 101 students shows that over half of all students surveyed used GenAI (...)
  41. If Not Then Voting System Argument. Michael Haimes - manuscript
    The If Not Then Voting System Argument proposes a transformative approach to electoral systems by enabling voters to rank their preferences. This system ensures that secondary choices are considered if primary choices are eliminated, addressing voter dissatisfaction and polarization inherent in traditional voting models. By integrating AI-driven transparency tools, behavioral science insights, and cultural adaptability mechanisms, this system enhances fairness, equity, and trust in democratic processes. Case studies in ranked-choice voting demonstrate its effectiveness in fostering broader consensus, reducing polarization, (...)
  42. (1 other version) Trust in Medicine. Philip J. Nickel & Lily Frank - 2019 - In Judith Simon (ed.), The Routledge Handbook of Trust and Philosophy. Routledge.
    In this chapter, we consider ethical and philosophical aspects of trust in the practice of medicine. We focus on trust within the patient-physician relationship, trust and professionalism, and trust in Western (allopathic) institutions of medicine and medical research. Philosophical approaches to trust contain important insights into medicine as an ethical and social practice. In what follows we explain several philosophical approaches and discuss their strengths and weaknesses in this context. We also highlight some relevant empirical (...)
    3 citations
  43. Information and Communications Technology in Romania - Comparative Analysis with the EU, Social Impact, Challenges and Opportunities, Future Directions. Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The modern global technological landscape is shaped by rapid advances and interconnectivity, leading to a complex ecosystem of innovation, competition and collaboration. Significant developments are being seen in artificial intelligence, telecommunications, biotechnology and energy technologies. Digitalization is redefining industries such as healthcare, transport and finance, while cross-border data flows and 5G infrastructure are accelerating global connectivity. Key players such as the United States, China and Japan are investing heavily in research and development, pushing the capabilities of AI and quantum computing (...)
  44. Trust in God: an evaluative review of the literature and research proposal. Daniel Howard-Snyder, Daniel J. McKaughan, Joshua N. Hook, Daryl R. Van Tongeren, Don E. Davis, Peter C. Hill & M. Elizabeth Lewis Hall - 2021 - Mental Health, Religion and Culture 24:745-763.
    Until recently, psychologists have conceptualised and studied trust in God (TIG) largely in isolation from contemporary work in theology, philosophy, history, and biblical studies that has examined the topic with increasing clarity. In this article, we first review the primary ways that psychologists have conceptualised and measured TIG. Then, we draw on conceptualizations of TIG outside the psychology of religion to provide a conceptual map for how TIG might be related to theorised predictors and outcomes. Finally, we provide a (...)
    1 citation
  45. Trust in technology: interlocking trust concepts for privacy respecting video surveillance. Sebastian Weydner-Volkmann & Linus Feiten - 2021 - Journal of Information, Communication and Ethics in Society 19 (4):506-520.
    Purpose: The purpose of this paper is to defend the notion of “trust in technology” against the philosophical view that this concept is misled and unsuitable for ethical evaluation. In contrast, it is shown that “trustworthy technology” addresses a critical societal need in the digital age, as it is inclusive of IT-security risks not only from a technical but also from a public layperson perspective. Design/methodology/approach: From an interdisciplinary perspective between philosophy and IT-security, the authors discuss a potential instantiation of (...)
    3 citations
  46. Trust in technological systems. Philip J. Nickel - 2013 - In M. J. de Vries, S. O. Hansson & A. W. M. Meijers (eds.), Norms in technology: Philosophy of Engineering and Technology, Vol. 9. Springer.
    Technology is a practically indispensable means for satisfying one’s basic interests in all central areas of human life including nutrition, habitation, health care, entertainment, transportation, and social interaction. It is impossible for any one person, even a well-trained scientist or engineer, to know enough about how technology works in these different areas to make a calculated choice about whether to rely on the vast majority of the technologies she/he in fact relies upon. Yet, there are substantial risks, uncertainties, and unforeseen (...)
    16 citations
  47. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can (...)
  48. Trust in engineering. Philip J. Nickel - 2021 - In Diane P. Michelfelder & Neelke Doorn (eds.), Routledge Handbook of Philosophy of Engineering. Taylor & Francis Ltd. pp. 494-505.
    Engineers are traditionally regarded as trustworthy professionals who meet exacting standards. In this chapter I begin by explicating our trust relationship towards engineers, arguing that it is a linear but indirect relationship in which engineers “stand behind” the artifacts and technological systems that we rely on directly. The chapter goes on to explain how this relationship has become more complex as engineers have taken on two additional aims: the aim of social engineering to create and steer trust between (...)
    4 citations
  49. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue that the (...)
    1 citation
  50. The role of trust in knowledge. John Hardwig - 1991 - Journal of Philosophy 88 (12):693-708.
    Most traditional epistemologists see trust and knowledge as deeply antithetical: we cannot know by trusting in the opinions of others; knowledge must be based on evidence, not mere trust. I argue that this is badly mistaken. Modern knowers cannot be independent and self-reliant. In most disciplines, those who do not trust cannot know. Trust is thus often more epistemically basic than empirical evidence or logical argument, for the evidence and the argument are available only through trust. Finally, since the reliability of testimonial evidence depends on the trustworthiness of the testifier, this implies that knowledge often rests on a foundation of ethics. The rationality of many of our beliefs depends not only on our own character, but on the character of others.
    266 citations
Showing results 1–50 of 975