Results for 'AI limits'

957 found
  1. Can AI Help Us to Understand Belief? Sources, Advances, Limits, and Future Directions.Andrea Vestrucci, Sara Lumbreras & Lluis Oviedo - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):24-33.
    The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what (...)
    1 citation
  2. Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    28 citations
  3. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data, copyright issues, as well as ecological challenges which the technology industry has consistently downplayed over the years. -/- The editorial highlights the distinction between the current AI technology’s reliance on extensive pre-existing (...)
    1 citation
  4. Possibilities and Limitations of AI in Philosophical Inquiry Compared to Human Capabilities.Keita Tsuzuki - manuscript
    Traditionally, philosophy has been strictly a human domain, with wide applications in science and ethics. However, with the rapid advancement of natural language processing technologies like ChatGPT, the question of whether artificial intelligence can engage in philosophical thinking is becoming increasingly important. This work first clarifies the meaning of philosophy based on its historical background, then explores the possibility of AI engaging in philosophy. We conclude that AI has reached a stage where it can engage in philosophical inquiry. The study (...)
  5. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  6. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  7. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  8. Emergent Models for Moral AI Spirituality.Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  9. Will AI and Humanity Go to War?Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, (...)
  10. The Role of Sympathy in Critical Reasoning and the Limitations of Current Medical AI.Martina Favaretto & Kyle Stroh - forthcoming - Journal of Medicine and Philosophy.
    The recent developments of medical AI systems (MAIS) open up questions as to whether and to what extent MAIS can be modeled to include empathetic understanding, as well as what impact MAIS’ lack of empathetic understanding would have on its ability to perform the necessary critical analyses for reaching a diagnosis and recommending medical treatment. In this paper, we argue that current medical AI systems’ ability to empathize with patients is severely limited due to its lack of first-person experiences with (...)
  11. Can AI become an Expert?Hyeongyun Kim - 2024 - Journal of AI Humanities 16 (4):113-136.
    With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human (...)
  12. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  13. “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  14. Unpredictability of AI.Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
    3 citations
  15. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  16. Growing the image: Generative AI and the medium of gardening.Nick Young & Enrico Terrone - forthcoming - Philosophical Quarterly.
    In this paper, we argue that Midjourney—a generative AI program that transforms text prompts into images—should be understood not as an agent or a tool, but as a new type of artistic medium. We first examine the view of Midjourney as an agent, considering whether it could be seen as an artist or co-author. This perspective proves unsatisfactory, as Midjourney lacks intentionality and mental states. We then explore the notion of Midjourney as a tool, highlighting its unpredictability and the limited (...)
  17. How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    38 citations
  18. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  19. AlphaFold, AI and Ontologies.Barry Smith - 2024 - In Alexander D. Diehl, William D. Duncan & Yongqun "Oliver" He (eds.), ICBO 2022: International Conference on Biomedical Ontology. CEUR. pp. P1-3.
    This short paper seeks to throw light on the sense in which the prior knowledge used by AlphaFold is to be understood in ontological terms. The paper is a comment on the 2022 ICBO presentation by Jobst Landgrebe entitled “What AlphaFold teaches us about deep learning with prior knowledge”.
  20. AI-Driven Learning: Advances and Challenges in Intelligent Tutoring Systems.Amjad H. Alfarra, Lamis F. Amhan, Msbah J. Mosa, Mahmoud Ali Alajrami, Faten El Kahlout, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Applied Research (Ijaar) 8 (9):24-29.
    Abstract: The incorporation of Artificial Intelligence (AI) into educational technology has dramatically transformed learning through Intelligent Tutoring Systems (ITS). These systems utilize AI to offer personalized, adaptive instruction tailored to each student's needs, thereby improving learning outcomes and engagement. This paper examines the development and impact of ITS, focusing on AI technologies such as machine learning, natural language processing, and adaptive algorithms that drive their functionality. Through various case studies and applications, it illustrates how ITS have revolutionized traditional educational methods (...)
  21. Central limit theorem for the functional of jump Markov process.Nguyen Van Huu, Quan-Hoang Vuong & Tran Minh Ngoc - 2005 - In Nguyen Van Huu, Quan-Hoang Vuong & Tran Minh Ngoc (eds.), Báo cáo: Hội nghị toàn quốc lần thứ III “Xác suất - Thống kê: Nghiên cứu, ứng dụng và giảng dạy”. Ha Noi: Viện Toán học. pp. 34.
    Central limit theorem for the functional of jump Markov process. Nguyễn Văn Hữu, Vương Quân Hoàng and Trần Minh Ngọc. Report at the Third National Conference “Xác suất - Thống kê: Nghiên cứu, ứng dụng và giảng dạy” [Probability and Statistics: Research, Applications, and Teaching] (p. 34), Ba Vì, Hà Tây, 12-14 May 2005. Institute of Mathematics / University of Science / Vietnam National University, Hanoi.
  22. EI & AI In Leadership and How It Can Affect Future Leaders.Ramakrishnan Vivek & Oleksandr P. Krupskyi - 2024 - European Journal of Management Issues 32 (3):174-182.
    Purpose: The aim of this study is to examine how the integration of Emotional Intelligence (EI) and Artificial Intelligence (AI) in leadership can enhance leadership effectiveness and influence the development of future leaders. -/- Design / Method / Approach: The research employs a mixed-methods approach, combining qualitative and quantitative analyses. The study utilizes secondary data sources, including scholarly articles, industry reports, and empirical studies, to analyze the interaction between EI and AI in leadership settings. -/- Findings: The findings reveal that (...)
  23. The Future of AI: Stanisław Lem’s Philosophical Visions for AI and Cyber-Societies in Cyberiad.Roman Krzanowski & Pawel Polak - 2021 - Pro-Fil 22 (3):39-53.
    Looking into the future is always a risky endeavour, but one way to anticipate the possible future shape of AI-driven societies is to examine the visionary works of some sci-fi writers. Not all sci-fi works have such visionary quality, of course, but some of Stanisław Lem’s works certainly do. We refer here to Lem’s works that explore the frontiers of science and technology and those that describe imaginary societies of robots. We therefore examine Lem’s prose, with a focus on the (...)
  24. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  25. (1 other version)A unified framework of five principles for AI in society.Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
    76 citations
  26. What Are Lacking in Sora and V-JEPA’s World Models? -A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination.Jianqiu Zhang - unknown
    Sora from Open AI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence(AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on Kantian (...)
  27. The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the idea (...)
    2 citations
  28. Generative AI and the value changes and conflicts in its integration in Japanese educational system.Ngoc-Thang B. Le, Phuong-Thao Luu & Manh-Tung Ho - manuscript
    This paper critically examines Japan's approach toward the adoption of Generative AI such as ChatGPT in education via studying media discourse and guidelines at both the national as well as local levels. It highlights the lack of consideration for socio-cultural characteristics inherent in the Japanese educational systems, such as the notion of self, teachers’ work ethics, community-centric activities for the successful adoption of the technology. We reveal ChatGPT’s infusion is likely to further accelerate the shift away from traditional notion of (...)
  29. There is no general AI.Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  30. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  31. (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory of (...)
    27 citations
  32. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  33. Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think.Mir H. S. Quadri - 2024 - Arkinfo Notes.
    The rapid advancements in artificial intelligence, particularly in natural language processing, have brought to light a critical challenge, i.e., the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need for (...)
  34. The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development.Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored the dynamics (...)
  35. A definition, benchmark and database of AI for social good initiatives.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Nature Machine Intelligence 3:111–115.
    Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. (...)
    9 citations
  36. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  37. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  38. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests.Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  39. What cognitive research can do for AI: a case study.Antonio Lieto - 2020 - In AI*IA. Berlin: Springer. pp. 1-8.
    This paper presents a practical case study showing how, despite the nowadays limited collaboration between AI and Cognitive Science (CogSci), cognitive research can still have an important role in the development of novel AI technologies. After a brief historical introduction about the reasons of the divorce between AI and CogSci research agendas (happened in the mid’80s of the last century), we try to provide evidence of a renewed collaboration by showing a recent case study on a commonsense reasoning system, built (...)
  40. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本).Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to look beyond (...)
  41. A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degree to which A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an (...)
    1 citation
  42. The Heart of an AI: Agency, Moral Sense, and Friendship.Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as moral agents in the (...)
  43. Virtues for AI.Jakob Ohlhorst - manuscript
    Virtue theory is a natural approach towards the design of artificially intelligent systems, given that the design of artificial intelligence essentially aims at designing agents with excellent dispositions. This has led to a lively research programme to develop artificial virtues. However, this research programme has until now had a narrow focus on moral virtues in an Aristotelian mould. While Aristotelian moral virtue has played a foundational role for the field, it unduly constrains the possibilities of virtue theory for artificial intelligence. (...)
  44. The hard limit on human nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
    Bookmark   2 citations
  45. Thinking Fast and Slow in AI: the Role of Metacognition.Marianna Bergamaschi Ganapini - manuscript
    Multiple authors; please see the attached paper. AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. We argue that a better study of the mechanisms that allow humans to have these capabilities can help (...)
  46. Theorem proving in artificial neural networks: new frontiers in mathematical AI.Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer-assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem-proving software has limited functionality in terms of providing new proofs. Importantly, it is not able to discriminate interesting theorems and proofs from trivial ones. In order for computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning results (...)
  47. Wolpert, Chaitin et Wittgenstein sur l’impossibilité, l’incomplétude, le paradoxe menteur, le théisme, les limites du calcul, un principe d’incertitude mécanique non quantique et l’univers comme ordinateur, le théorème ultime dans Turing Machine Theory (révisé 2019).Michael Richard Starks - 2020 - In Bienvenue en Enfer sur Terre : Bébés, Changement climatique, Bitcoin, Cartels, Chine, Démocratie, Diversité, Dysgénique, Égalité, Pirates informatiques, Droits de l'homme, Islam, Libéralisme, Prospérité, Le Web, Chaos, Famine, Maladie, Violence, Intellige. Las Vegas, NV USA: Reality Press. pp. 185-189.
    I have read many recent discussions on the limits of computation and the universe as a computer, hoping to find some comments on the astonishing work of the polymath physicist and decision theorist David Wolpert, but I have not found a single citation, so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008; see arxiv dot org) on the limits of inference (computation) that are so general that they are independent of (...)
  48. Missing Circles: A Dignitarian Approach to Doughnut Economics Through AI Applications.Kostina Prifti - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 115-131.
    This contribution aims at providing a more concrete and accurate understanding of Doughnut economics, its model, and its ideas. In doing so, it provides a comprehensive description of the Doughnut and its connection with the Sustainable Development Goals. Then, it inquires into the philosophical background of Doughnut economics, elucidating its existential rationale, which relies on human dignity. Further, examples of four AI applications are used to showcase how the Doughnut model would address their use and the challenges that arise therefrom. From (...)
  49. همگرایی حریم خصوصی و شفافیت، محدودیت‌های طراحی هوش مصنوعی (Convergence of privacy and transparency, limitations of artificial intelligence design).Mohammad Ali Ashouri Kisomi - 2024 - Wisdom and Philosophy 20 (78):45-73.
    The aim of this research is to critique the approach that confines the resolution of the ethical challenges of artificial intelligence to design and technical fixes. Some researchers regard the ethical challenges of artificial intelligence as convergent and believe that, just as these challenges arose with the emergence of AI systems, they will be resolved through the systems' advancement and technical refinement. In discussions of AI ethics, topics such as privacy protection and transparency have received attention in most studies. In the present research (...)
  50. Menalar Skeptis Adopsi Artificial Intelegence (AI) di Indonesia: ‘Sebuah Tinjauan Filsafat Ilmu Komunikasi’.Felisianus Efrem Jelahut, Herman Yosep Utang, Yosep Emanuel Jelahut & Lasarus Jehamat - 2021 - Jurnal Filsafat Indonesia 4 (2):172-178.
    This research was conducted on the basis of research references from Microsoft Indonesia regarding the adoption of artificial intelligence in Indonesia, which found that 14% of employees and leaders of technology-based companies in Indonesia were still skeptical of the adoption of artificial intelligence. This study aims to provide a theoretical overview, from the point of view of the philosophy of communication science, in responding to considerations about the good and bad of the 'doubt' or skepticism of 14% (...)
1 — 50 / 957