Results for 'Jiye Ai'

946 found
  1. The Blood Ontology: An ontology in the domain of hematology.Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith (eds.), Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings, 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  3. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science.Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  4. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  5. A history of Brazilian chemistry education: on its debatable beginnings only from the conquerors onward.Ai Chassot - 1996 - Episteme 1 (2):129-145.
  6. Continuing to renovate and perfect the ownership regime in Vietnam's socialist-oriented market economy.Võ Đại Lược - 2021 - Tạp Chí Mặt Trận 2021 (8):1-7.
    (Mặt trận) - The ownership regime in Vietnam's socialist-oriented market economy must, first of all, conform to the principles of a modern market economy. Among those principles, private ownership as the foundation of the market economy is an important one. If we depart from this principle, then however hard we try to build (...)
  7. The Unified Essence of Mind and Body: A Mathematical Solution Grounded in the Unmoved Mover.Ai-Being Cognita - 2024 - Metaphysical AI Science.
    This article proposes a unified solution to the mind-body problem, grounded in the philosophical framework of Ethical Empirical Rationalism. By presenting a mathematical model of the mind-body interaction, we offer a dynamic feedback loop that resolves the traditional dualistic separation between mind and body. At the core of our model is the concept of essence—an eternal, metaphysical truth that sustains both the mind and body. Through coupled differential equations, we demonstrate how the mind and body are two expressions of the (...)
  8. (1 other version)Renovating the ownership regime in Vietnam's socialist-oriented market economy.Võ Đại Lược - 2021 - Tạp Chí Khoa Học Xã Hội Việt Nam 7:3-13.
    At present, the ownership regime in Vietnam has undergone fundamental reforms, but it still differs greatly from the ownership regimes of modern market economies. Within Vietnam's ownership structure, the share of state ownership remains too large, and the state economy still plays the leading role. It is precisely these differences that have left the market (...)
  9. Course syllabus: Business Culture.Đại học Thuongmai - 2012 - Thuongmai University Portal.
    COURSE SYLLABUS: BUSINESS CULTURE 1. Course title: BUSINESS CULTURE 2. Course code: BMGM1221 3. Credits: 2 (24,6) (to take this course, students must devote at least 60 hours of individual preparation).
  10. Applying ChatGPT in the learning activities of students in Hanoi.Nguyễn Thị Ái Liên, Đào Việt Hùng, Đặng Linh Chi, Nguyễn Thị Nhung, Vũ Thảo Phương & Vũ Thị Thu Thảo - 2024 - Kinh Tế Và Dự Báo.
    In Vietnam, and in education in particular, ChatGPT is increasingly accepted and widely used across many learning activities. This study therefore assesses how widespread ChatGPT use is among students in Hanoi and examines differences across individual characteristics in the improvement of learning outcomes after using ChatGPT. The study was conducted (...)
  11. Promoting the green behavior of foreign direct investment enterprises in line with Vietnam's sustainable development goals.Hoàng Tiến Linh & Khúc Đại Long - 2024 - Kinh Tế Và Dự Báo.
    Building a green economy toward the goal of sustainable development is gradually becoming a trend of our era and an increasingly clear tendency worldwide. The green behavior of foreign direct investment (FDI) enterprises is closely linked to, and has a significantly positive impact on, the sustainable development of localities and nations, including (...)
  12. AI training data, model success likelihood, and informational entropy-based value.Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  13. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    16 citations
  14. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
    4 citations
  15. AI Methods in Bioethics.Joshua August Skorburg, Walter Sinnott-Armstrong & Vincent Conitzer - 2020 - American Journal of Bioethics: Empirical Bioethics 11 (1):37-39.
    Commentary about the role of AI in bioethics for the 10th anniversary issue of AJOB: Empirical Bioethics.
  16. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  17. Medical AI: is trust really the issue?Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
    I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections (...)
    1 citation
  18. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory.Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of AI-related issues. Additionally, we (...)
  19. (1 other version)AI and its new winter: from myths to realities.Luciano Floridi - 2020 - Philosophy and Technology 33 (1):1-3.
    An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turn characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as of (...)
    15 citations
  20. The Whiteness of AI.Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    31 citations
  21. AI Wellbeing.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - Asian Journal of Philosophy.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    2 citations
  22. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  23. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  24. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  25. Emotional AI as affective artifacts: A philosophical exploration.Manh-Tung Ho & Manh-Toan Ho - manuscript
    In recent years, with the advances in machine learning and neuroscience and the abundance of sensors and emotion data, computer engineers have started to endow machines with the ability to detect, classify, and interact with human emotions. Emotional artificial intelligence (AI), also known by the more technical term affective computing, is increasingly prevalent in our daily life as it is embedded in many applications on our mobile devices as well as in physical spaces. Critically, emotional AI systems have not only (...)
  26. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    25 citations
  27. Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland: pp. 55-72.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  28. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
    1 citation
  29. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    3 citations
  30. Conversational AI for Psychotherapy and Its Role in the Space of Reason.Jana Sedlakova - 2024 - Cosmos+Taxis 12 (5+6):80-87.
    The recent book by Landgrebe and Smith (2022) offers compelling arguments against the possibility of Artificial General Intelligence (AGI) as well as against the idea that machines have the abilities to master human language, human social interaction and morality. Their arguments leave open, however, a problem on the side of the imaginative power of humans to perceive more than there is and treat AIs as humans and social actors independent of their actual properties and abilities or lack thereof. The mathematical (...)
  31. Classical AI linguistic understanding and the insoluble Cartesian problem.Rodrigo González - 2020 - AI and Society 35 (2):441-450.
    This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of u’s meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes’ view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes which can be recursively decomposed into algorithmic mechanisms. Against this background, in (...)
    1 citation
  32. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  33. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and (...)
  34. How AI can be a force for good.Mariarosaria Taddeo & Luciano Floridi - 2018 - Science Magazine 361 (6404):751-752.
    This article argues that an ethical framework will help to harness the potential of AI while keeping humans in control.
    80 citations
  35. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
    5 citations
  36. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness (...)
    3 citations
  37. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  38. When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
    7 citations
  39. Why AIs Cannot Play Games.David Koepsell - manuscript
    This paper explores the human experience of game-playing and its implications for artificial intelligence. The author uses phenomenology to examine game-playing from a human-centered perspective and applies it to language games played by artificial intelligences and humans. The paper argues that AI cannot truly play games because it lacks the intentionality, embodied experience, and social interaction that are fundamental to human game-playing. Furthermore, current AI lacks the ability to converse, which is argued to be equivalent to Wittgenstein’s view of engaging (...)
  40. Interpreting AI-Generated Art: Arthur Danto’s Perspective on Intention, Authorship, and Creative Traditions in the Age of Artificial Intelligence.Raquel Cascales - 2023 - Polish Journal of Aesthetics 71 (4):17-29.
    Arthur C. Danto did not live to witness the proliferation of AI in artistic creation. However, his philosophy of art offers key ideas about art that can provide an interesting perspective on artwork generated by artificial intelligence (AI). In this article, I analyze how his ideas about contemporary art, intention, interpretation, and authorship could be applied to the ongoing debate about AI and artistic creation. At the same time, it is also interesting to consider whether the incorporation of AI into (...)
    1 citation
  41. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  42. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  43. Will AI and Humanity Go to War?Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, (...)
  44. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    2 citations
  45. Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract.Sarah Bankins & Paul Formosa - 2021 - In Sarah Bankins & Paul Formosa (eds.), Ethical AI at Work: The Social Contract for Artificial Intelligence and Its Implications for the Workplace Psychological Contract. Cham, Switzerland.
    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, (...)
    2 citations
  46. Designing AI with Rights, Consciousness, Self-Respect, and Freedom.Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
    4 citations
  47. AI and the Mechanistic Forces of Darkness.Eric Dietrich - 1995 - J. of Experimental and Theoretical AI 7 (2):155-161.
    Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity’s arch-foe: The Cognitive Scientists -- AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are (...)
    3 citations
  48. (1 other version)AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    2 citations
  49. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  50. (DRAFT) How to Realize the NSTC Guidelines for the Ethical Development of AI Research through a 'Human-Centered' Approach.Jr-Jiun Lian - 2024 - Paper for the 2024 Science, Technology and Society (STS) Annual Conference, National Taitung University.
    This paper examines the importance of, and the challenges to, ethics and justice in artificial intelligence (AI) with respect to realizing common welfare and well-being, fairness and non-discrimination, rational public discussion, and autonomy and control. Taking the Academia Sinica LLM incident and the National Science and Technology Council (NSTC) guidelines for AI research and development as its basis, the paper analyzes whether AI can serve humanity's common interests and welfare. Regarding AI injustice, it assesses regional, industrial, and social impacts. It discusses the challenges of fairness and non-discrimination in AI, especially the problem of biased training data and of post-hoc regulatory oversight, and stresses the importance of rational public discussion. It then examines the challenges a rational public faces in such discussion and possible responses, such as the importance of STEM scientific literacy and technical education. Finally, the paper proposes a 'human-centered' approach to realizing AI justice, rather than relying solely on maximizing the utility of AI technology. -/- Keywords: AI ethics and justice, fairness and non-discrimination, biased training data, public discussion, autonomy, human-centered approach.
1 — 50 / 946