  • On the individuation of complex computational models: Gilbert Simondon and the technicity of AI. Susana Aires - forthcoming - AI and Society:1-14.
    The proliferation of AI systems across all domains of life, as well as the complexification and opacity of algorithmic techniques, epitomised by the burgeoning field of Deep Learning (DL), calls for new methods in the Humanities for reflecting on the techno-human relation in a way that places the technical operation at its core. Grounded in the work of the philosopher of technology Gilbert Simondon, this paper puts forward individuation theory as a valuable approach to reflect on contemporary information technologies, offering (...)
  • Security and Privacy Protection in Developing Ethical AI: A Mixed-Methods Study from a Marketing Employee Perspective. Xuequn Wang, Xiaolin Lin & Bin Shao - forthcoming - Journal of Business Ethics:1-20.
    Despite chatbots’ increasing popularity, firms often fail to fully achieve their benefits because of their underutilization. We argue that ethical concerns dealing with chatbot-related privacy and security may prevent firms from developing a culture of embracing chatbot use and fully integrating chatbots into their workflows. Our research draws upon the stimulus-organism-response theory (SOR) and a study by Floridi et al. (Minds and Machines, 28:689–707, 2018) on the ethical artificial intelligence framework to investigate how chatbot affordances can foster employees’ positive (...)
  • Mapping the landscape of ethical considerations in explainable AI research. Luca Nannini, Marta Marchiori Manerba & Isacco Beretta - 2024 - Ethics and Information Technology 26 (3):1-22.
    With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution endeavors to scrutinize the relationship between XAI and ethical considerations. By systematically reviewing research papers mentioning ethical terms in XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often superficial (...)
  • Can large language models help solve the cost problem for the right to explanation? Lauritz Munch & Jens Christian Bjerring - forthcoming - Journal of Medical Ethics.
    By now a consensus has emerged that people, when subjected to high-stakes decisions through automated decision systems, have a moral right to have these decisions explained to them. However, furnishing such explanations can be costly. So the right to an explanation creates what we call the cost problem: providing subjects of automated decisions with appropriate explanations of the grounds of these decisions can be costly for the companies and organisations that use these automated decision systems. In this paper, we explore (...)
  • Unnatural Images: On AI-Generated Photographs. Amanda Wasielewski - 2024 - Critical Inquiry 51 (1):1-29.
    In artificial-intelligence (AI) and computer-vision research, photographic images are typically referred to as natural images. This means that images used or produced in this context are conceptualized within a binary as either natural or synthetic. Recent advances in creative AI technology, particularly generative adversarial networks and diffusion models, have afforded the ability to create photographic-seeming images, that is, synthetic images that appear natural, based on learnings from vast databases of digital photographs. Contemporary discussions of these images have thus far revolved (...)
  • AI, Radical Ignorance, and the Institutional Approach to Consent. Etye Steinberg - 2024 - Philosophy and Technology 37 (3):1-26.
    More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services’ Terms and Conditions clause. Current advances introduce the ability of AI to evolve and change its own modus operandi over time in such a way that we cannot know, at the moment of consent, what it is in the future to which we are now agreeing. Therefore, informed consent is impossible regarding certain kinds of AI. Call this (...)
  • Reliability and Interpretability in Science and Deep Learning. Luigi Scorzato - 2024 - Minds and Machines 34 (3):1-31.
    In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN (...)
  • Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach. Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is motivated (...)
  • Artificial intelligence in medical education: Typologies and ethical approaches. Agnieszka Pregowska & Mark Perkins - 2024 - Ethics and Bioethics (in Central Europe) 14 (1-2):96-113.
    Artificial Intelligence (AI) has an increasing role to play in medical education and has great potential to revolutionize health professional education systems overall. However, this is accompanied by substantial questions concerning technical and ethical risks which are of particular importance because the quality of medical education has a direct effect on physical and psychological health and wellbeing. This article establishes an overarching distinction of AI across two typological dimensions, functional and humanistic. As indispensable foundations, these are then related to medical (...)
  • Exploring explainable AI in the tax domain. Łukasz Górski, Błażej Kuźniacki, Marco Almada, Kamil Tyliński, Madalena Calvo, Pablo Matias Asnaghi, Luciano Almada, Hilario Iñiguez, Fernando Rubianes, Octavio Pera & Juan Ignacio Nigrelli - forthcoming - Artificial Intelligence and Law:1-29.
    This paper analyses whether current explainable AI (XAI) techniques can help to address taxpayer concerns about the use of AI in taxation. As tax authorities around the world increase their use of AI-based techniques, taxpayers are increasingly at a loss about whether and how the ensuing decisions follow the procedures required by law and respect their substantive rights. The use of XAI has been proposed as a response to this issue, but it is still an open question whether current XAI (...)
  • Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions. Fabio Paglieri - 2024 - Philosophy and Technology 37 (2):1-30.
    This paper discusses some societal implications of the most recent and publicly discussed application of advanced machine learning techniques: generative AI models, such as ChatGPT (text generation) and DALL-E (text-to-image generation). The aim is to shift attention away from conceptual disputes, e.g. regarding their level of intelligence and similarities/differences with human performance, to focus instead on practical problems, pertaining to the impact that these technologies might have (and already have) on human societies. After a preliminary clarification of how generative AI works (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Marcia L. Baptista, Kai Goebel & Elsa M. P. Henriques - 2022 - Artificial Intelligence 306:103667.
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • Toward a Psychology of Deep Reinforcement Learning Agents Using a Cognitive Architecture. Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler, Christian Lebiere, Peter Pirolli & Robert Thomson - 2022 - Topics in Cognitive Science 14 (4):756-779.
    We argue that cognitive models can provide a common ground between human users and deep reinforcement learning (Deep RL) algorithms for purposes of explainable artificial intelligence (AI). Casting both the human and learner as cognitive models provides common mechanisms to compare and understand their underlying decision-making processes. This common grounding allows us to identify divergences and explain the learner's behavior in human understandable terms. We present novel salience techniques that highlight the most relevant features in each model's decision-making, as well (...)
  • Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Erik Hermann - 2022 - Journal of Business Ethics 179 (1):43-61.
    Artificial intelligence is shaping strategy, activities, interactions, and relationships in business and specifically in marketing. The substantial opportunities that AI systems and applications provide in marketing come with a drawback: ethical controversies. Building on the literature on AI ethics, the authors systematically scrutinize the ethical challenges of deploying AI in marketing from a multi-stakeholder perspective. By revealing interdependencies and tensions between ethical principles, the authors shed light on the applicability of a purely principled, deontological approach to AI ethics in marketing. To (...)
  • Toward personalized XAI: A case study in intelligent tutoring systems. Cristina Conati, Oswald Barral, Vanessa Putnam & Lea Rieger - 2021 - Artificial Intelligence 298 (C):103503.
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model. Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black-box. eXplainable Artificial Intelligence (XAI) has become increasingly important to interpret machine learning models and to enhance trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In intrusion detection systems (IDS), the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. The previous studies (...)
  • Deference to Opaque Systems and Morally Exemplary Decisions. James Fritz - forthcoming - AI and Society:1-13.
    Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have moral (...)
  • Explaining AI through mechanistic interpretability. Lena Kästner & Barnaby Crook - 2024 - European Journal for Philosophy of Science 14 (4):1-25.
    Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems understandable through a divide-and-conquer strategy. However, this fails to illuminate how trained AI systems work as a whole. Precisely this kind of functional understanding is needed, though, to satisfy important societal desiderata such as safety. To remedy this situation, we argue, AI researchers should seek mechanistic interpretability, viz. apply coordinated discovery strategies familiar from the life sciences to uncover the functional organisation of complex AI systems. Additionally, theorists (...)
  • On the Scope of the Right to Explanation. James Fritz - forthcoming - AI and Ethics.
    As opaque algorithmic systems take up a larger and larger role in shaping our lives, calls for explainability in various algorithmic systems have increased. Many moral and political philosophers have sought to vindicate these calls for explainability by developing theories on which decision-subjects—that is, individuals affected by decisions—have a moral right to the explanation of the systems that affect them. Existing theories tend to suggest that the right to explanation arises solely in virtue of facts about how decision-subjects are affected (...)
  • Doing versus saying: responsible AI among large firms. Jacques Bughin - forthcoming - AI and Society:1-13.
    Responsible Artificial Intelligence (RAI) is a subset of the ethics associated with the use of artificial intelligence, which will only increase with the recent advent of new regulatory frameworks. However, while many firms have announced the establishment of AI governance rules, there is currently an important gap in understanding whether and why these announcements are being implemented or remain “decoupled” from operations. We assess how large global firms have so far implemented RAI, and the antecedents to RAI implementation across a (...)
  • Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context. Sarah Bouhouita-Guermech & Hazar Haidar - 2024 - Asian Bioethics Review 16 (3):315-344.
    The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, (...)
  • Explainable AI in the military domain. Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  • “Guess what I'm doing”: Extending legibility to sequential decision tasks. Miguel Faria, Francisco S. Melo & Ana Paiva - 2024 - Artificial Intelligence 330 (C):104107.
  • Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Ulrik Franke - 2024 - Philosophy and Technology 37 (1):1-6.
    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Explainable AI tools for legal reasoning about cases: A study on the European Court of Human Rights. Joe Collenette, Katie Atkinson & Trevor Bench-Capon - 2023 - Artificial Intelligence 317 (C):103861.
  • Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically on how organizations improve their customer experiences and internal processes by using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what is called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • On Explainable AI and Abductive Inference. Kyrylo Medianovskyi & Ahti-Veikko Pietarinen - 2022 - Philosophies 7 (2):35.
    Modern explainable AI methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning algorithms perform genuinely abductive inferences. (...)
  • Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
  • Boosting court judgment prediction and explanation using legal entities. Irene Benedetto, Alkis Koudounas, Lorenzo Vaiani, Eliana Pastor, Luca Cagliero, Francesco Tarasconi & Elena Baralis - forthcoming - Artificial Intelligence and Law:1-36.
    The automatic prediction of court case judgments using Deep Learning and Natural Language Processing is challenged by the variety of norms and regulations, the inherent complexity of the forensic language, and the length of legal judgments. Although state-of-the-art transformer-based architectures and Large Language Models (LLMs) are pre-trained on large-scale datasets, the underlying model reasoning is not transparent to the legal expert. This paper jointly addresses court judgment prediction and explanation by not only predicting the judgment but also providing legal experts (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  • Toward accountable human-centered AI: rationale and promising directions. Junaid Qadir, Mohammad Qamar Islam & Ala Al-Fuqaha - 2022 - Journal of Information, Communication and Ethics in Society 20 (2):329-342.
    Purpose: Along with the various beneficial uses of artificial intelligence, there are various unsavory concomitants including the inscrutability of AI tools, the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of exploitative surveillance capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and study how these technologies and their developers and users (...)
  • The role of empathy for artificial intelligence accountability. Ramya Srinivasan & Beatriz San Miguel González - 2022 - Journal of Responsible Technology 9 (C):100021.
  • Influencing laughter with AI-mediated communication. Gregory Mills, Eleni Gregoromichelaki, Chris Howes & Vladislav Maraev - 2021 - Interaction Studies 22 (3):416-463.
    Previous experimental findings support the hypothesis that laughter and positive emotions are contagious in face-to-face and mediated communication. To test this hypothesis, we describe four experiments in which participants communicate via a chat tool that artificially adds or removes laughter, without participants being aware of the manipulation. We found no evidence to support the contagion hypothesis. However, artificially exposing participants to more lols decreased participants’ use of hahas but led to more involvement and improved task-performance. Similarly, artificially exposing participants to (...)
  • Detecting and explaining unfairness in consumer contracts through memory networks. Federico Ruggeri, Francesca Lagioia, Marco Lippi & Paolo Torroni - 2021 - Artificial Intelligence and Law 30 (1):59-92.
    Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improve the (...)
  • Effective Human Oversight of AI-Based Systems: A Signal Detection Perspective on the Detection of Inaccurate and Unfair Outputs. Markus Langer, Kevin Baum & Nadine Schlicker - 2024 - Minds and Machines 35 (1):1-30.
    Legislation and ethical guidelines around the globe call for effective human oversight of AI-based systems in high-risk contexts – that is, oversight that reliably reduces the risks otherwise associated with the use of AI-based systems. Such risks may relate to the imperfect accuracy of systems (e.g., inaccurate classifications) or to ethical concerns (e.g., unfairness of outputs). Given the significant role that human oversight is expected to play in the operation of AI-based systems, it is crucial to better understand the conditions (...)
  • Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, I argue that (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science (related, notably, to the nature of scientific theories, the nature of unification, and the nature of causation) loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have an impact on philosophy of science, and vice versa. We suggest lines of (...)
  • G-LIME: Statistical learning for local interpretations of deep neural networks using global priors. Xuhong Li, Haoyi Xiong, Xingjian Li, Xiao Zhang, Ji Liu, Haiyan Jiang, Zeyu Chen & Dejing Dou - 2023 - Artificial Intelligence 314 (C):103823.
  • How Much Should You Care About Algorithmic Transparency as Manipulation? Ulrik Franke - 2022 - Philosophy and Technology 35 (4):1-7.
    Wang (Philosophy & Technology 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of (...)
  • Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable. Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans & Rhianne Jones - 2021 - Journal of Responsible Technology 7-8 (C):100017.
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1):1-11.
    Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature (...)
  • Moral control and ownership in AI systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more attention from the (...)
  • Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. Kristian González Barman, Nathan Wood & Pawel Pawlowski - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) such as ChatGPT present immense opportunities, but without proper training for users (and potentially oversight), they carry risks of misuse as well. We argue that current approaches focusing predominantly on transparency and explainability fall short in addressing the diverse needs and concerns of various user groups. We highlight the limitations of existing methodologies and propose a framework anchored on user-centric guidelines. In particular, we argue that LLM users should be given guidelines on what tasks LLMs can (...)