Results for 'Ai Chassot'

941 found
  1. Uma história da educação química brasileira: sobre seu início discutível apenas a partir dos conquistadores.Ai Chassot - 1996 - Episteme 1 (2):129-145.
  2. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  3. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  4. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  5. Đề cương học phần Văn hóa kinh doanh.Đại học Thuongmai - 2012 - Thuongmai University Portal.
    COURSE SYLLABUS: BUSINESS CULTURE. 1. Course title: VĂN HÓA KINH DOANH (BUSINESS CULTURE). 2. Course code: BMGM1221. 3. Credits: 2 (24,6) (to take this course, learners must devote at least 60 hours of individual preparation).
  6. (1 other version)Đổi mới chế độ sở hữu trong nền kinh tế thị trường định hướng xã hội chủ nghĩa ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Khoa Học Xã Hội Việt Nam 7:3-13.
    The ownership regime in Vietnam has undergone fundamental reforms, yet it still differs greatly from the ownership regimes of modern market economies. In the structure of Vietnam's ownership regime, the share of state ownership remains too large, and the state economy is assigned the leading role… It is precisely these differences that have made the market (...)
  7. Tiếp tục đổi mới, hoàn thiện chế độ sở hữu trong nền kinh tế thị trường định hướng XHCN ở Việt Nam.Võ Đại Lược - 2021 - Tạp Chí Mặt Trận 2021 (8):1-7.
    (Mặt Trận) - The ownership regime in Vietnam's socialist-oriented market economy must, first of all, follow the principles of a modern market economy. Among the principles of a modern market economy, the principle that private ownership is the foundation of the market economy is an important one. If we depart from this principle, then however hard we try to build (...)
  8. The Blood Ontology: An ontology in the domain of hematology.Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith (eds.), Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  9. Thúc đẩy hành vi xanh của doanh nghiệp có vốn đầu tư trực tiếp nước ngoài gắn với mục tiêu phát triển bền vững của Việt Nam.Hoàng Tiến Linh & Khúc Đại Long - 2024 - Kinh Tế Và Dự Báo.
    Building a green economy toward the goal of sustainable development is gradually becoming the trend of the era and an increasingly clear tendency worldwide. The green behavior of foreign direct investment (FDI) enterprises is closely related to, and has a significantly positive impact on, the sustainable development of localities and nations, including (...)
  10. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    16 citations
  11. Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland:
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  12. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory.Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of AI-related issues. Additionally, we (...)
  13. Can AI Achieve Common Good and Well-being? Implementing the NSTC's R&D Guidelines with a Human-Centered Ethical Approach.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University. Translated by Jr-Jiun Lian.
    This paper delves into the significance and challenges of Artificial Intelligence (AI) ethics and justice in terms of Common Good and Well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Initially, the paper establishes the groundwork for subsequent discussions using the Academia Sinica LLM incident and the AI Technology R&D Guidelines of the National Science and Technology Council (NSTC) as a starting point. In terms of justice and ethics in AI, this research investigates whether AI can fulfill human common (...)
  14. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  15. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set-up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s superintelligence: paths, dangers, strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
    4 citations
  16. AI systems must not confuse users about their sentience or moral status.Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally (...)
  17. AI training data, model success likelihood, and informational entropy-based value.Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  18. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  19. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  20. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  21. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    24 citations
  22. Medical AI: is trust really the issue?Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections (...)
    1 citation
  23. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    2 citations
  24. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    27 citations
  25. Interpreting AI-Generated Art: Arthur Danto’s Perspective on Intention, Authorship, and Creative Traditions in the Age of Artificial Intelligence.Raquel Cascales - 2023 - Polish Journal of Aesthetics 71 (4):17-29.
    Arthur C. Danto did not live to witness the proliferation of AI in artistic creation. However, his philosophy of art offers key ideas about art that can provide an interesting perspective on artwork generated by artificial intelligence (AI). In this article, I analyze how his ideas about contemporary art, intention, interpretation, and authorship could be applied to the ongoing debate about AI and artistic creation. At the same time, it is also interesting to consider whether the incorporation of AI into (...)
    1 citation
  26. Values in science and AI alignment research.Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical scrutiny (...)
  27. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing, however, this paper argues that conventional ESG frameworks (...)
    1 citation
  28. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  29. AI Methods in Bioethics.Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 1 (11):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  30. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  31. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
    1 citation
  32. How AI can be a force for good.Mariarosaria Taddeo & Luciano Floridi - 2018 - Science Magazine 361 (6404):751-752.
    This article argues that an ethical framework will help to harness the potential of AI while keeping humans in control.
    78 citations
  33. A Philosophical Inquiry into AI-Inclusive Epistemology.Ammar Younas & Yi Zeng - unknown
    This paper introduces the concept of AI-inclusive epistemology, suggesting that artificial intelligence (AI) may develop its own epistemological perspectives, function as an epistemic agent, and assume the role of a quasi-member of society. We explore the unique capabilities of advanced AI systems and their potential to provide distinct insights within knowledge systems traditionally dominated by human cognition. Additionally, the paper proposes a framework for a sustainable symbiotic society where AI and human intelligences collaborate to enhance the breadth and depth of (...)
  34. (1 other version)AI and its new winter: from myths to realities.Luciano Floridi - 2020 - Philosophy and Technology 33 (1):1-3.
    An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turn characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as of (...)
    14 citations
  35. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  36. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  37. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posted in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    3 citations
  38. Rethinking AI: Moving Beyond Humans as Exclusive Creators.Renee Ye - 2024 - Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 46.
    I challenge the commonly accepted notion that Artificial Intelligence (AI) is exclusively crafted by humans, a view I term the 'Made-by-Human Hypothesis,' and argue that this assumption impedes progress. I argue that influences beyond human agency significantly shape AI's trajectory. Introducing the 'Hybrid Hypothesis,' I suggest that the creation of AI is multi-sourced; methods such as evolutionary algorithms influencing AI originate from diverse sources and yield varied impacts. I argue that the development of AI models will increasingly adopt a 'Human+' hybrid composition, where human expertise (...)
  39. (DRAFT) 如何藉由「以人為本」進路實現國科會AI科研發展倫理指南.Jr-Jiun Lian - 2024 - 2024 Annual Conference on Science, Technology, and Society (STS) Academic Paper, National Taitung University.
    This paper examines the ethical and justice-related significance and challenges of artificial intelligence (AI) with respect to common good and well-being, fairness and non-discrimination, rational public deliberation, and autonomy and control. Taking the Academia Sinica LLM incident and the AI technology R&D guidelines of the National Science and Technology Council (NSTC) as its basis, the paper analyzes whether AI can serve humanity's common interests and well-being. Regarding AI injustice, it assesses the regional, industrial, and social impacts. It then discusses the challenges of AI fairness and non-discrimination, in particular the problem of biased training data and its post-hoc regulation, and stresses the importance of rational public deliberation. The paper further considers the challenges a rational public faces in such deliberation and possible responses, such as the importance of STEM literacy and technical education. Finally, it proposes a 'human-centered' approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public deliberation, autonomy, human-centered approach.
  40. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  41. A Formal Account of AI Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness.Piercosma Bisconti, Letizia Aquilino, Antonella Marchetti & Daniele Nardi - forthcoming - AIES '24: Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society.
    This paper proposes a formal account of AI trustworthiness, connecting both intrinsic and perceived trustworthiness in an operational schematization. We argue that trustworthiness extends beyond the inherent capabilities of an AI system to include significant influences from observers' perceptions, such as perceived transparency, agency locus, and human oversight. While the concept of perceived trustworthiness is discussed in the literature, few attempts have been made to connect it with the intrinsic trustworthiness of AI systems. Our analysis introduces a novel schematization to (...)
  42. AI-Driven Organizational Change: Transforming Structures and Processes in the Modern Workplace.Mohammed Elkahlout, Mohammed B. Karaja, Abeer A. Elsharif, Ibtesam M. Dheir, Basem S. Abunasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 8 (8):38-45.
    Artificial Intelligence (AI) is revolutionizing organizational dynamics by reshaping both structures and processes. This paper explores how AI-driven innovations are transforming organizational frameworks, from hierarchical adjustments to decentralized decision-making models. It examines the impact of AI on various processes, including workflow automation, data analysis, and enhanced decision support systems. Through case studies and empirical research, the paper highlights the benefits of AI in improving efficiency, driving innovation, and fostering agility within organizations. Additionally, it addresses the challenges associated with AI (...)
  43. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  44. Military AI as a Convergent Goal of Self-Improving AI.Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  45. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  46. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    5 citations
  47. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    3 citations
  48. Toward an Ethics of AI Assistants: an Initial Framework.John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    27 citations
  49. AI, Concepts, and the Paradox of Mental Representation, with a brief discussion of psychological essentialism.Eric Dietrich - 2001 - J. of Exper. and Theor. AI 13 (1):1-7.
    Mostly philosophers cause trouble. I know because on alternate Thursdays I am one -- and I live in a philosophy department where I watch all of them cause trouble. Everyone in artificial intelligence knows how much trouble philosophers can cause (and in particular, we know how much trouble one philosopher -- John Searle -- has caused). And, we know where they tend to cause it: in knowledge representation and the semantics of data structures. This essay is about a recent case (...)
    1 citation
  50. (1 other version)AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    2 citations
1 — 50 / 941