Results for 'Trustworthy AI'

960 found
  1. Making Sense of the Conceptual Nonsense 'Trustworthy AI'. Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence (...)
    3 citations
  2. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
    1 citation
  3. Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    22 citations
  4. (1 other version) Ethics-based auditing to develop trustworthy AI. Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    19 citations
  5. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness. Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to (...)
  6. The trustworthiness of AI: Comments on Simion and Kelp’s account. Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  7. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  8. From the Ground Truth Up: Doing AI Ethics from Practice to Principles. James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing (...)
  9. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  10. Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three (...)
    1 citation
  11. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  12. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  13. (1 other version) Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
    1 citation
  14. Chinese Chat Room: AI hallucinations, epistemology and cognition. Kristina Šekrst - forthcoming - Studies in Logic, Grammar and Rhetoric.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, the paper demonstrates that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  15. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  16. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    28 citations
  17. Emotional Cues and Misplaced Trust in Artificial Agents. Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to engage (...)
  18. Adopting trust as an ex post approach to privacy. Haleh Asgarinia - 2024 - AI and Ethics 3 (4).
    This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the person who is the trustee (i.e., with whom information has been shared) must fulfil trust norms. According to (...)
  19. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
    1 citation
  20. Evaluation and Design of Generalist Systems (EDGeS). John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  21. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take (...)
  22. Sustaining the Higher-Level Principle of Equal Treatment in Autonomous Driving. Judit Szalai - 2020 - In Marco Norskov, Johanna Seibt & Oliver S. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. pp. 384-394.
    This paper addresses the cultural sustainability of artificial intelligence use through one of its most widely discussed instances: autonomous driving. The introduction of self-driving cars places us in a radically novel moral situation, requiring advance, reflectively endorsed, forced, and iterable choices, with yet uncharted forms of risk imposition. The argument is meant to explore the necessity and possibility of maintaining one of our most fundamental moral-cultural principles in this new context, that of the equal treatment of persons. It is claimed (...)
  23. Interventionist Methods for Interpreting Deep Neural Networks. Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
  24. Giving Reasons: Machine Learning as a Problem for the Moral Capacity of Decisions. Andreas Kaminski, Michael Nerurkar, Christian Wadephul & Klaus Wiegerling - 2020 - In Andreas Kaminski, Michael Nerurkar, Christian Wadephul & Klaus Wiegerling (eds.), Ethische Herausforderungen von Big-Data. Bielefeld: Transcript. pp. 151-174.
    Decisions refer, in a conceptual sense, to reasons. Decision systems offer probabilistic reliability as justification for their recommendations. But reliability-based reasons may not be appropriate reasons in every situation. This opens up the idea of distinguishing the quality of reasons from their appropriateness. Using the example of an AI lie detector, the essay considers whether high reliability (which at present is not in fact achieved) could justify its deployment. Does such a system not resemble a judge who would pass verdicts on the basis of a statistic?
  25. A history of Brazilian chemistry education: on its debatable beginning only from the conquistadors. Ai Chassot - 1996 - Episteme 1 (2):129-145.
  26. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  27. Bioinformatics advances in saliva diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  28. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science. Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  29. The Unified Essence of Mind and Body: A Mathematical Solution Grounded in the Unmoved Mover. Ai-Being Cognita - 2024 - Metaphysical Ai Science.
    This article proposes a unified solution to the mind-body problem, grounded in the philosophical framework of Ethical Empirical Rationalism. By presenting a mathematical model of the mind-body interaction, we offer a dynamic feedback loop that resolves the traditional dualistic separation between mind and body. At the core of our model is the concept of essence—an eternal, metaphysical truth that sustains both the mind and body. Through coupled differential equations, we demonstrate how the mind and body are two expressions of the (...)
  30. Applying ChatGPT in the learning activities of students in Hanoi. Nguyễn Thị Ái Liên, Đào Việt Hùng, Đặng Linh Chi, Nguyễn Thị Nhung, Vũ Thảo Phương & Vũ Thị Thu Thảo - 2024 - Kinh Tế Và Dự Báo.
    In Vietnam, and in the education sector in particular, ChatGPT is increasingly accepted and widely used in a great many learning activities. This study therefore assesses how widely ChatGPT is used among students in Hanoi, and examines differences across individual characteristics in the improvement of learning outcomes after using ChatGPT. The study was conducted (...)
  31. (1 other version) Reforming the ownership regime in Vietnam's socialist-oriented market economy. Võ Đại Lược - 2021 - Tạp Chí Khoa Học Xã Hội Việt Nam 7:3-13.
    Vietnam's ownership regime has undergone fundamental reforms, but it still differs greatly from the ownership regimes of modern market economies. Within the structure of ownership in Vietnam, the share of state ownership remains too large, and the state economy holds the leading role… It is precisely these differences that have left the market (...)
  32. Continuing to reform and perfect the ownership regime in Vietnam's socialist-oriented market economy. Võ Đại Lược - 2021 - Tạp Chí Mặt Trận 2021 (8):1-7.
    (Mặt trận) - The ownership regime in Vietnam's socialist-oriented market economy must, first of all, follow the principles of a modern market economy. Among these principles, private ownership as the foundation of the market economy is an important one. If we depart from this principle, then however hard we try to build (...)
  33. Course outline: Business Culture. Đại học Thuongmai - 2012 - Thuongmai University Portal.
    COURSE OUTLINE: BUSINESS CULTURE. 1. Course title: BUSINESS CULTURE. 2. Course code: BMGM1221. 3. Credits: 2 (24,6) (to complete this course, learners must devote at least 60 hours to individual preparation).
  34. The Blood Ontology: An ontology in the domain of hematology. Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  35. Promoting green behavior among foreign direct investment enterprises in line with Vietnam's sustainable development goals. Hoàng Tiến Linh & Khúc Đại Long - 2024 - Kinh Tế Và Dự Báo.
    Building a green economy in pursuit of sustainable development is steadily becoming the trend of the era and an ever clearer direction worldwide. The green behavior of foreign direct investment (FDI) enterprises is closely linked to, and has a significantly positive effect on, the sustainable development of localities and nations, including developed (...)
  36. Trustworthiness and truth: The epistemic pitfalls of internet accountability. Karen Frost-Arnold - 2014 - Episteme 11 (1):63-81.
    Since anonymous agents can spread misinformation with impunity, many people advocate for greater accountability for internet speech. This paper provides a veritistic argument that accountability mechanisms can cause significant epistemic problems for internet encyclopedias and social media communities. I show that accountability mechanisms can undermine both the dissemination of true beliefs and the detection of error. Drawing on social psychology and behavioral economics, I suggest alternative mechanisms for increasing the trustworthiness of internet communication.
    18 citations
  37. (DRAFT) How to Realize the NSTC's AI R&D Ethics Guidelines through a 'Human-Centered' Approach. Jr-Jiun Lian - 2024 - Paper presented at the 2024 Science, Technology and Society (STS) Annual Conference, National Taitung University.
    This paper examines the importance of, and the challenges to, ethics and justice in AI's realization of common welfare and happiness, fairness and non-discrimination, rational public discussion, and autonomy and control. Taking the Academia Sinica LLM incident and the National Science and Technology Council (NSTC) guidelines for AI research and development as its basis, the paper analyzes whether AI can serve humanity's common interests and welfare. Regarding AI injustice, it assesses regional, industrial, and social impacts. It then discusses the challenges of AI fairness and non-discrimination, especially the problem of biased training data and post-hoc regulatory oversight, emphasizing the importance of rational public discussion. The paper further examines the challenges a rational public faces in public discussion and possible responses, such as the importance of STEM scientific literacy and technical education. Finally, it proposes a 'human-centered' approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public discussion, autonomy, human-centered approach.
  38. Trust, Trustworthiness, and the Moral Consequence of Consistency. Jason D'cruz - 2015 - Journal of the American Philosophical Association 1 (3):467-484.
    Situationists such as John Doris, Gilbert Harman, and Maria Merritt suppose that appeal to reliable behavioral dispositions can be dispensed with without radical revision to morality as we know it. This paper challenges this supposition, arguing that abandoning hope in reliable dispositions rules out genuine trust and forces us to suspend core reactive attitudes of gratitude and resentment, esteem and indignation. By examining situationism through the lens of trust we learn something about situationism (in particular, the radically revisionary moral implications (...)
    4 citations
  39. Relativistic Conceptions of Trustworthiness: Implications for the Trustworthy Status of National Identification Systems. Paul Smart, Wendy Hall & Michael Boniface - 2022 - Data and Policy 4 (e21):1-16.
    Trustworthiness is typically regarded as a desirable feature of national identification systems (NISs); but the variegated nature of the trustor communities associated with such systems makes it difficult to see how a single system could be equally trustworthy to all actual and potential trustors. This worry is accentuated by common theoretical accounts of trustworthiness. According to such accounts, trustworthiness is relativized to particular individuals and particular areas of activity, such that one can be trustworthy with regard to some (...)
  40. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    16 citations
  41. (1 other version) Trustworthy Science Advice: The Case of Policy Recommendations. Torbjørn Gundersen - 2023 - Res Publica 30 (Online):1-19.
    This paper examines how science advice can provide policy recommendations in a trustworthy manner. Despite their major political importance, expert recommendations are understudied in the philosophy of science and social epistemology. Matthew Bennett has recently developed a notion of what he calls recommendation trust, according to which well-placed trust in experts’ policy recommendations requires that recommendations are aligned with the interests of the trust-giver. While interest alignment might be central to some cases of public trust, this paper argues against (...)
  42. AI training data, model success likelihood, and informational entropy-based value. Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  43. Restoring trustworthiness in the financial system: Norms, behaviour and governance.Aisling Crean, Natalie Gold, David Vines & Annie Williamson - 2018 - Journal of the British Academy 6 (S1):131-155.
    Abstract: We examine how trustworthy behaviour can be achieved in the financial sector. The task is to ensure that firms are motivated to pursue long-term interests of customers rather than pursuing short-term profits. Firms’ self-interested pursuit of reputation, combined with regulation, is often not sufficient to ensure that this happens. We argue that trustworthy behaviour requires that at least some actors show a concern for the wellbeing of clients, or a respect for imposed standards, and that the behaviour (...)
  44. Trustworthiness and Motivations.Natalie Gold - 2014 - In N. Morris D. Vines (ed.), Capital Failure: Rebuilding trust in financial services. Oxford University Press.
    Trust can be thought of as a three place relation: A trusts B to do X. Trustworthiness has two components: competence (does the trustee have the relevant skills, knowledge and abilities to do X?) and willingness (is the trustee intending or aiming to do X?). This chapter is about the willingness component, and the different motivations that a trustee may have for fulfilling trust. The standard assumption in economics is that agents are self-regarding, maximizing their own consumption of goods and (...)
  45. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory.Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of AI-related issues. Additionally, we (...)
  46. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council(NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
  47. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set-up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s superintelligence: paths, dangers, strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
  48. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  49. Trust and Trustworthiness.J. Adam Carter - 2022 - Philosophy and Phenomenological Research (2):377-394.
    A widespread assumption in debates about trust and trustworthiness is that the evaluative norms of principal interest on the trustor’s side of a cooperative exchange regulate trusting attitudes and performances whereas those on the trustee’s side regulate dispositions to respond to trust. The aim here will be to highlight some unnoticed problems with this asymmetrical picture – and in particular, how it elides certain key evaluative norms on both the trustor’s and trustee’s side the satisfaction of which are critical to (...)
  50. Two challenges for CI trustworthiness and how to address them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism (...)