Results for 'Explainable Artificial Intelligence'

998 results found
  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together (...)
  2. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    30 citations
  3. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of (...)
    16 citations
  4. Can Artificial Intelligence Make Art?Elzė Sigutė Mikalonytė & Markus Kneer - 2022 - ACM Transactions on Human-Robot Interaction.
    In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human painting as art to roughly the same extent. However, people are much less (...)
    4 citations
  5. Artificial Intelligence, Robots and the Ethics of the Future.Constantin Vica & Cristina Voinea - 2019 - Revue Roumaine de Philosophie 63 (2):223–234.
    The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into the history of (...)
  6. Can Artificial Intelligence Think Without the Unconscious?Derya Ölçener - 2020
    Today, humanity is trying to turn the artificial intelligence that it produces into natural intelligence. Although this effort is technologically exciting, it often raises ethical concerns. Therefore, the intellectual ability of artificial intelligence will always bring new questions. Although there have been significant developments in the consciousness of artificial intelligence, the issue of consciousness must be fully explained in order to complete this development. When consciousness is fully understood by human beings, the subject (...)
  7. Artificial Intelligence, Phenomenology, and the Molyneux Problem.Chris A. Kramer - 2023 - The Philosophy of Humor Yearbook 4 (1):225-226.
    This short article is a “conversation” in which an android, Mort, replies to Richard Marc Rubin’s android named Sol in “The Robot Sol Explains Laughter to His Android Brethren” (The Philosophy of Humor Yearbook, 2022). There Sol offers an explanation for how androids can laugh--largely a reaction to frustration and unmet expectations: “my account says that laughter is one of four ways of dealing with frustration, difficulties, and insults. It is a way of getting by. If you need to label (...)
  8. Trust in Medical Artificial Intelligence: A Discretionary Account.Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
    6 citations
  9. The Artificial Intelligence Explanatory Trade-Off on the Logic of Discovery in Chemistry.José Ferraz-Caetano - 2023 - Philosophies 8 (2):17.
    Explanation is a foundational goal in the exact sciences. Besides the contemporary considerations on ‘description’, ‘classification’, and ‘prediction’, we often see these terms in thriving applications of artificial intelligence (AI) in chemistry hypothesis generation. Going beyond describing ‘things in the world’, these applications can make accurate numerical property calculations from theoretical or topological descriptors. This association makes an interesting case for a logic of discovery in chemistry: are these induction-led ventures showing a shift in how chemists can problematize (...)
  10. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions, that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of (...)
    2 citations
  11. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem.Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents (...)
  12. The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development.Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored (...)
  13. The virtues of interpretable medical artificial intelligence.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend (...)
  14. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligence.Alireza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (1):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered as morally responsible entities and as having special rights. Recently, the contemporary philosophy of mind philosopher, Eric Schwitzgebel, has tried to defend the possibility of equal rights of AIs and human beings (in an imaginary future), by designing a new argument (2015). In this paper, after an introduction, the author reviews and analyzes the main argument (...)
  15. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  16. Against the opacity, and for a qualitative understanding, of artificially intelligent technologies.Mahdi Khalili - 2023 - AI and Ethics.
    This paper aims, first, to argue against using opaque AI technologies in decision making processes, and second to suggest that we need to possess a qualitative form of understanding about them. It first argues that opaque artificially intelligent technologies are suitable for users who remain indifferent to the understanding of decisions made by means of these technologies. According to virtue ethics, this implies that these technologies are not well-suited for those who care about realizing their moral capacity. The paper then (...)
  17. Unexplainability and Incomprehensibility of Artificial Intelligence.Roman Yampolskiy - manuscript
    Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want and frequently need to understand how decisions impacting them are made. Similarly it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions and for the decisions they could explain people would (...)
    1 citation
  18. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the (...)
    2 citations
  19. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions, represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progresses reached in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (both in cases of a successful or unsuccessful output). This is due to the fact that (...)
  20. A Cartesian critique of the artificial intelligence.Rajakishore Nath - 2010 - Philosophical Papers and Review 3 (2):27-33.
    This paper deals with the philosophical problems concerned with research in the field of artificial intelligence (AI), in particular with problems arising out of claims that AI exhibits ‘consciousness’, ‘thinking’ and other ‘inner’ processes and that they simulate human intelligence and cognitive processes in general. The argument is to show how Cartesian mind is non-mechanical. Descartes’ concept of ‘I think’ presupposes subjective experience, because it is ‘I’ who experiences the world. Likewise, Descartes’ notion of ‘I’ negates the (...)
  21. Moral Agency in Artificial Intelligence (Robots).Saleh Gorbanian - 2020 - Journal of Ethical Reflections 1 (1):11-32.
    Growing technological advances in intelligent artifacts and bitter experiences of the past have emphasized the need to use and operate ethics in this field. Accordingly, it is vital to discuss the ethical integrity of having intelligent artifacts. Concerning the method of gathering materials, the current study uses library and documentary research followed by attribution style. Moreover, descriptive analysis is employed in order to analyze data. Explaining and criticizing the opposing views in this field and reviewing the related literature, it is (...)
  22. Developing Artificial Human-Like Arithmetical Intelligence (and Why).Markus Pantsar - 2023 - Minds and Machines 33 (3):379-396.
    Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic, numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies (...)
    1 citation
  23. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    5 citations
  24. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that (...)
    4 citations
  25. Artificial Forms of Life.Sebastian Sunday Grève - 2023 - Philosophies 8 (5).
    The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical (...)
  26. Book: Cognitive Design for Artificial Minds.Antonio Lieto - 2021 - London, UK: Routledge, Taylor & Francis Ltd.
    Book Description (Blurb): Cognitive Design for Artificial Minds explains the crucial role that human cognition research plays in the design and realization of artificial intelligence systems, illustrating the steps necessary for the design of artificial models of cognition. It bridges the gap between the theoretical, experimental and technological issues addressed in the context of AI of cognitive inspiration and computational cognitive science. -/- Beginning with an overview of the historical, methodological and technical issues in the field (...)
    14 citations
  27. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice (...)
    1 citation
  28. Machine intelligence: a chimera.Mihai Nadin - 2019 - AI and Society 34 (2):215-242.
    The notion of computation has changed the world more than any previous expressions of knowledge. However, as know-how in its particular algorithmic embodiment, computation is closed to meaning. Therefore, computer-based data processing can only mimic life’s creative aspects, without being creative itself. AI’s current record of accomplishments shows that it automates tasks associated with intelligence, without being intelligent itself. Mistaking the abstract for the concrete has led to the religion of “everything is an output of computation”—even the humankind that (...)
    4 citations
  29. An Introduction to Artificial Psychology: Application of Fuzzy Set Theory and Deep Machine Learning in Psychological Research using R.Hojjatollah Farahani - 2023 - Springer Cham. Edited by Hojjatollah Farahani, Marija Blagojević, Parviz Azadfallah, Peter Watson, Forough Esrafilian & Sara Saljoughi.
    Artificial Psychology (AP) is a highly multidisciplinary field of study in psychology. AP tries to solve problems which occur when psychologists do research and need a robust analysis method. Conventional statistical approaches have deep rooted limitations. These approaches are excellent on paper but often fail to model the real world. Mind researchers have been trying to overcome this by simplifying the models being studied. This stance has not received much practical attention recently. Promoting and improving artificial intelligence (...)
  30. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (...)
    288 citations
  31. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI (...)
    1 citation
  32. Intelligent Behaviour.Dimitri Coelho Mollo - 2022 - Erkenntnis 89 (2):705-721.
    The notion of intelligence is relevant to several fields of research, including cognitive and comparative psychology, neuroscience, artificial intelligence, and philosophy, among others. However, there is little agreement within and across these fields on how to characterise and explain intelligence. I put forward a behavioural, operational characterisation of intelligence that can play an integrative role in the sciences of intelligence, as well as preserve the distinctive explanatory value of the notion, setting it apart from (...)
    2 citations
  33. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the (...)
  34. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    23 citations
  35. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of (...)
    2 citations
  36. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    42 citations
  37. Local explanations via necessity and sufficiency: unifying theory and practice.David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for (...)
    1 citation
  38. May Artificial Intelligence take health and sustainability on a honeymoon? Towards green technologies for multidimensional health and environmental justice.Cristian Moyano-Fernández, Jon Rueda, Janet Delgado & Txetxu Ausín - 2024 - Global Bioethics 35 (1).
    The application of Artificial Intelligence (AI) in healthcare and epidemiology undoubtedly has many benefits for the population. However, due to its environmental impact, the use of AI can produce social inequalities and long-term environmental damages that may not be thoroughly contemplated. In this paper, we propose to consider the impacts of AI applications in medical care from the One Health paradigm and long-term global health. From health and environmental justice, rather than settling for a short and fleeting green (...)
  39. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks (...)
  40. From Biological Synapses to "Intelligent" Robots.Birgitta Dresp-Langley - 2022 - Electronics 11:1-28.
    This selective review explores biologically inspired learning as a model for intelligent robot control and sensing technology on the basis of specific examples. Hebbian synaptic learning is discussed as a functionally relevant model for machine learning and intelligence, as explained on the basis of examples from the highly plastic biological neural networks of invertebrates and vertebrates. Its potential for adaptive learning and control without supervision, the generation of functional complexity, and control architectures based on self-organization is brought forward. Learning (...)
    1 citation
  41. Artificial consciousness: from impossibility to multiplicity.Chuanfei Chin - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 3-18.
    How has multiplicity superseded impossibility in philosophical challenges to artificial consciousness? I assess a trajectory in recent debates on artificial consciousness, in which metaphysical and explanatory challenges to the possibility of building conscious machines lead to epistemological concerns about the multiplicity underlying ‘what it is like’ to be a conscious creature or be in a conscious state. First, I analyse earlier challenges which claim that phenomenal consciousness cannot arise, or cannot be built, in machines. These are based on (...)
  42. Future progress in artificial intelligence: A survey of expert opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. (...)
  43. Trusting artificial intelligence in cybersecurity is a double-edged sword.Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a (...)
  44. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of (...)
  45. Artificial Intelligence for the Internal Democracy of Political Parties.Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri & Luciano Floridi - manuscript
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, rare updates, and significant demands on resources. To address these issues, the article suggests that specific data management and Machine Learning (ML) techniques, such as natural language processing (...)
  46. Beneficial Artificial Intelligence Coordination by means of a Value Sensitive Design Approach.Steven Umbrello - 2019 - Big Data and Cognitive Computing 3 (1):5.
    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values in to AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be (...)
  47. Human and Artificial Intelligence: A Critical Comparison.Thomas Fuchs - 2022 - In Rainer M. Holm-Hadulla, Joachim Funke & Michael Wink (eds.), Intelligence - Theories and Applications. Springer. pp. 249-259.
    Advances in artificial intelligence and robotics increasingly call into question the distinction between simulation and reality of the human person. On the one hand, they suggest a computeromorphic understanding of human intelligence, and on the other, an anthropomorphization of AI systems. In other words: We increasingly conceive of ourselves in the image of our machines, while conversely we elevate our machines to new subjects. So what distinguishes human intelligence from artificial intelligence? The essay sets (...)
  48. Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions.Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young (...)
  49. Social Robots and Society.Sven Nyholm, Cindy Friedman, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel (ed.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a (...)
  50. Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
1 — 50 / 998