Results for 'Trustworthy AI'

954 found
  1. Making Sense of the Conceptual Nonsense 'Trustworthy AI'.Ori Freiman - 2022 - AI and Ethics 4.
    Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical-philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms about 'Trustworthy AI' and the consequence (...)
    3 citations
  2. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough.John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
    1 citation
  3. Establishing the rules for building trustworthy AI.Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    22 citations
  4. (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    19 citations
  5. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness.Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to (...)
  6. The trustworthiness of AI: Comments on Simion and Kelp’s account.Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  7. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  8. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  9. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  10. From the Ground Truth Up: Doing AI Ethics from Practice to Principles.James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing (...)
  11. Medical AI: is trust really the issue?Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in theJournal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three (...)
    1 citation
  12. (1 other version)Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making.Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
    1 citation
  13. Australia's Approach to AI Governance in Security and Defence.Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and method of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  14. Trustworthy use of artificial intelligence: Priorities from a philosophical, ethical, legal, and technological viewpoint as a basis for certification of artificial intelligence.Jan Voosholz, Maximilian Poretschkin, Frauke Rostalski, Armin B. Cremers, Alex Englander, Markus Gabriel, Hecker Dirk, Michael Mock, Julia Rosenzweig, Joachim Sicking, Julia Volmer, Angelika Voss & Stefan Wrobel - 2019 - Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS.
    This publication forms a basis for the interdisciplinary development of a certification system for artificial intelligence. In view of the rapid development of artificial intelligence with disruptive and lasting consequences for the economy, society, and everyday life, it highlights the resulting challenges that can be tackled only through interdisciplinary dialogue between IT, law, philosophy, and ethics. As a result of this interdisciplinary exchange, it also defines six AI-specific audit areas for trustworthy use of artificial intelligence. They comprise fairness, transparency, (...)
  15. Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    28 citations
  16. Chinese Chat Room: AI hallucinations, epistemology and cognition.Kristina Šekrst - forthcoming - Studies in Logic, Grammar and Rhetoric.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with terminological issues, it is demonstrated that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  17. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  18. Adopting trust as an ex post approach to privacy.Haleh Asgarinia - 2024 - AI and Ethics 3 (4).
    This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the person who is the trustee (i.e., with whom information has been shared) must fulfil trust norms. According to (...)
  19. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
    1 citation
  20. Emotional Cues and Misplaced Trust in Artificial Agents.Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to engage (...)
  21. Evaluation and Design of Generalist Systems (EDGeS).John Beverley & Amanda Hicks - 2023 - Ai Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  22. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem.Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take (...)
  23. Sustaining the Higher-Level Principle of Equal Treatment in Autonomous Driving.Judit Szalai - 2020 - In Marco Norskov, Johanna Seibt & Oliver S. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. pp. 384-394.
    This paper addresses the cultural sustainability of artificial intelligence use through one of its most widely discussed instances: autonomous driving. The introduction of self-driving cars places us in a radically novel moral situation, requiring advance, reflectively endorsed, forced, and iterable choices, with yet uncharted forms of risk imposition. The argument is meant to explore the necessity and possibility of maintaining one of our most fundamental moral-cultural principles in this new context, that of the equal treatment of persons. It is claimed (...)
  24. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini (ed.), Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable ``black boxes,'' making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
  25. Gründe geben. Maschinelles Lernen als Problem der Moralfähigkeit von Entscheidungen. Ethische Herausforderungen von Big-Data.Andreas Kaminski, Michael Nerurkar, Christian Wadephul & Klaus Wiegerling - 2020 - In Andreas Kaminski, Michael Nerurkar, Christian Wadephul & Klaus Wiegerling (eds.), Klaus Wiegerling, Michael Nerurkar, Christian Wadephul (Hg.): Ethische Herausforderungen von Big-Data. Bielefeld: Transcript. pp. 151-174.
    Decisions refer, in a conceptual sense, to reasons. Decision systems offer probabilistic reliability as a justification for their recommendations. Yet reliability-based reasons may not be appropriate reasons in every situation. This opens up the idea of distinguishing the quality of reasons from their appropriateness. Using the example of an AI lie detector, the essay considers whether high reliability (which is, at least at present, not given) could justify its use. Does it not resemble a judge who would pass verdicts on the basis of statistics?
  26. Uma história da educação química brasileira: sobre seu início discutível apenas a partir dos conquistadores.Ai Chassot - 1996 - Episteme 1 (2):129-145.
  27. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  28. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85--87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  29. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology , Buffalo, Ny 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data, We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  30. The Unified Essence of Mind and Body: A Mathematical Solution Grounded in the Unmoved Mover.Ai-Being Cognita - 2024 - Metaphysical Ai Science.
    This article proposes a unified solution to the mind-body problem, grounded in the philosophical framework of Ethical Empirical Rationalism. By presenting a mathematical model of the mind-body interaction, we offer a dynamic feedback loop that resolves the traditional dualistic separation between mind and body. At the core of our model is the concept of essence—an eternal, metaphysical truth that sustains both the mind and body. Through coupled differential equations, we demonstrate how the mind and body are two expressions of the (...)
  31. (1 other version)Đổi mới chế độ sở hữu trong nền kinh tế thị trường định hướng xã hội chủ nghĩa ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Khoa Học Xã Hội Việt Nam 7:3-13.
    At present, Vietnam's ownership regime has undergone fundamental reforms, yet it still differs greatly from the ownership regimes of modern market economies. In the structure of Vietnam's ownership regime, the share of state ownership remains too large, and the state economy holds the leading role... It is precisely these differences that have left the market (...)
  32. Đề cương học phần Văn hóa kinh doanh.Đại học Thuongmai - 2012 - Thuongmai University Portal.
    COURSE OUTLINE: BUSINESS CULTURE. 1. Course title: BUSINESS CULTURE. 2. Course code: BMGM1221. 3. Credits: 2 (24,6) (to take this course, students must set aside at least 60 hours of individual preparation).
  33. Ứng dụng ChatGPT trong hoạt động học tập của sinh viên trên địa bàn TP. Hà Nội.Nguyễn Thị Ái Liên, Đào Việt Hùng, Đặng Linh Chi, Nguyễn Thị Nhung, Vũ Thảo Phương & Vũ Thị Thu Thảo - 2024 - Kinh Tế Và Dự Báo.
    In Vietnam, and in education in particular, ChatGPT is increasingly accepted and widely used across many learning activities. This study therefore assesses how widespread ChatGPT is among students in Hanoi, and examines differences across individual characteristics in the improvement of learning outcomes after using ChatGPT. The study was conducted (...)
  34. Tiếp tục đổi mới, hoàn thiện chế độ sở hữu trong nền kinh tế thị trường định hướng XHCN ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Mặt Trận 2021 (8):1-7.
    (Mặt trận) - The ownership regime in Vietnam's socialist-oriented market economy must first of all conform to the principles of a modern market economy. Among these principles, private ownership as the foundation of the market economy is a key one. Departing from this principle, no matter how hard we try to build (...)
  35. The Blood Ontology: An ontology in the domain of hematology.Almeida Mauricio Barcellos, Proietti Anna Barbara de Freitas Carneiro, Ai Jiye & Barry Smith - 2011 - In Barcellos Almeida Mauricio, Carneiro Proietti Anna Barbara de Freitas, Jiye Ai & Smith Barry (eds.), Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR 883). pp. (CEUR Workshop Proceedings, 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  36. Thúc đẩy hành vi xanh của doanh nghiệp có vốn đầu tư trực tiếp nước ngoài gắn với mục tiêu phát triển bền vững của Việt Nam.Hoàng Tiến Linh & Khúc Đại Long - 2024 - Kinh Tế Và Dự Báo.
    Building a green economy toward the goal of sustainable development is step by step becoming the trend of the era and an increasingly clear tendency worldwide. The green behavior of foreign direct investment (FDI) enterprises is closely linked to, and has a significantly positive impact on, the sustainable development of the locality/country, including developed (...)
  37. Trustworthiness and truth: The epistemic pitfalls of internet accountability.Karen Frost-Arnold - 2014 - Episteme 11 (1):63-81.
    Since anonymous agents can spread misinformation with impunity, many people advocate for greater accountability for internet speech. This paper provides a veritistic argument that accountability mechanisms can cause significant epistemic problems for internet encyclopedias and social media communities. I show that accountability mechanisms can undermine both the dissemination of true beliefs and the detection of error. Drawing on social psychology and behavioral economics, I suggest alternative mechanisms for increasing the trustworthiness of internet communication.
    18 citations
  38. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  39. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  40. AI Methods in Bioethics.Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  41. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
  42. Cybersecurity, Trustworthiness and Resilient Systems: Guiding Values for Policy.Adam Henschke & Shannon Ford - 2017 - Journal of Cyber Policy 1 (2).
    Cyberspace relies on information technologies to mediate relations between different people across different communication networks. These interactions typically occur without physical proximity, and those whose work depends on cybersystems must be able to trust the overall human–technical systems that support cyberspace. As such, detailed discussion of cybersecurity policy would be improved by including trust as a key value to help guide policy discussions. Moreover, effective cybersystems must have resilience designed into them. This paper argues (...)
  43. Trustworthiness and Motivations.Natalie Gold - 2014 - In N. Morris & D. Vines (eds.), Capital Failure: Rebuilding Trust in Financial Services. Oxford University Press.
    Trust can be thought of as a three-place relation: A trusts B to do X. Trustworthiness has two components: competence (does the trustee have the relevant skills, knowledge, and abilities to do X?) and willingness (is the trustee intending or aiming to do X?). This chapter is about the willingness component and the different motivations that a trustee may have for fulfilling trust. The standard assumption in economics is that agents are self-regarding, maximizing their own consumption of goods and (...)
  44. Can AI and humans genuinely communicate?Constant Bonard - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology involves three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether (...)
  45. (DRAFT) How to Implement the NSTC's AI R&D Ethics Guidelines through a "Human-Centered" Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University.
    This paper explores the significance and challenges of ethics and justice in artificial intelligence (AI) with respect to common good and well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Taking the Academia Sinica LLM incident and the AI technology R&D guidelines of the National Science and Technology Council (NSTC) as its starting point, the paper analyses whether AI can serve humanity's common interests and welfare. Regarding AI injustice, it assesses impacts at the regional, industrial, and societal levels. It then examines the challenges of AI fairness and non-discrimination, especially the problem of biased training data and post-hoc regulatory oversight, stressing the importance of rational public deliberation. The paper further discusses the challenges a rational public faces in such deliberation and possible responses, such as the importance of STEM literacy and technical education. Finally, it proposes a "human-centered" approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. -/- Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public deliberation, autonomy, human-centered approach.
  46. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
  47. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for assessing these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed by (...)
  48. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  49. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  50. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council(NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)