Citations

  • Unnatural Images: On AI-Generated Photographs. Amanda Wasielewski - 2024 - Critical Inquiry 51 (1):1-29.
    In artificial-intelligence (AI) and computer-vision research, photographic images are typically referred to as natural images. This means that images used or produced in this context are conceptualized within a binary as either natural or synthetic. Recent advances in creative AI technology, particularly generative adversarial networks and diffusion models, have afforded the ability to create photographic-seeming images, that is, synthetic images that appear natural, based on learnings from vast databases of digital photographs. Contemporary discussions of these images have thus far revolved (...)
  • Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. Kristian González Barman, Nathan Wood & Pawel Pawlowski - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) such as ChatGPT present immense opportunities, but without proper training for users (and potentially oversight), they carry risks of misuse as well. We argue that current approaches focusing predominantly on transparency and explainability fall short in addressing the diverse needs and concerns of various user groups. We highlight the limitations of existing methodologies and propose a framework anchored on user-centric guidelines. In particular, we argue that LLM users should be given guidelines on what tasks LLMs can (...)
  • Possibility of Scientific Explanation from Models Based on Artificial Neural Networks. Alejandro E. Rodríguez-Sánchez - 2024 - Revista Colombiana de Filosofía de la Ciencia 24 (48).
    In Artificial Intelligence, Artificial Neural Networks are very accurate models in tasks such as classification and regression in the study of natural phenomena, but they are considered “black boxes” because they do not allow direct explanation of what they address. This paper reviews the possibility of scientific explanation from these models and concludes that other efforts are required to understand their inner workings. This poses challenges to access scientific explanation through their use, since the nature of Artificial Neural Networks makes (...)
  • Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of “Responsibility” in Artificial Intelligence within the Healthcare Context. Sarah Bouhouita-Guermech & Hazar Haidar - 2024 - Asian Bioethics Review 16 (3):315-344.
    The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges prompt various studies proposing frameworks and guidelines to tackle these issues, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, (...)
  • Artificial intelligence in medical education: Typologies and ethical approaches. Agnieszka Pregowska & Mark Perkins - 2024 - Ethics and Bioethics (in Central Europe) 14 (1-2):96-113.
    Artificial Intelligence (AI) has an increasing role to play in medical education and has great potential to revolutionize health professional education systems overall. However, this is accompanied by substantial questions concerning technical and ethical risks which are of particular importance because the quality of medical education has a direct effect on physical and psychological health and wellbeing. This article establishes an overarching distinction of AI across two typological dimensions, functional and humanistic. As indispensable foundations, these are then related to medical (...)
  • Expropriated Minds: On Some Practical Problems of Generative AI, Beyond Our Cognitive Illusions. Fabio Paglieri - 2024 - Philosophy and Technology 37 (2):1-30.
    This paper discusses some societal implications of the most recent and publicly discussed application of advanced machine learning techniques: generative AI models, such as ChatGPT (text generation) and DALL-E (text-to-image generation). The aim is to shift attention away from conceptual disputes, e.g. regarding their level of intelligence and similarities/differences with human performance, to focus instead on practical problems, pertaining to the impact that these technologies might have (and already have) on human societies. After a preliminary clarification of how generative AI works (...)
  • On the nexus between code of business ethics, human resource supply chain management and corporate culture: evidence from MENA countries. Moh'd Anwer Al-Shboul - forthcoming - Journal of Information, Communication and Ethics in Society.
    Purpose: This paper aims to analyze the relationships between human resource supply chain management (HRSCM), corporate culture (CC) and the code of business ethics (CBE) in the MENA region. Design/methodology/approach: In this study, the author adopted a quantitative approach through an online Google Form survey for the data-gathering process. All questionnaires were distributed to the manufacturing and service firms that are listed in the Chambers of the Industries of Jordan, Saudi Arabia, Morocco and Egypt in the MENA region using a (...)
  • Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Ulrik Franke - 2024 - Philosophy and Technology 37 (1):1-6.
    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but (...)
  • Democratizing AI from a Sociotechnical Perspective. Merel Noorman & Tsjalling Swierstra - 2023 - Minds and Machines 33 (4):563-586.
    Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • The black box problem revisited. Real and imaginary challenges for automated legal decision making. Bartosz Brożek, Michał Furman, Marek Jakubiec & Bartłomiej Kucharzyk - 2024 - Artificial Intelligence and Law 32 (2):427-440.
    This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (2):339-351.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, related, notably, to the nature of scientific theories; the nature of unification; and of causation loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have impact on philosophy of science, and vice versa. We suggest lines of (...)
  • Explainable AI tools for legal reasoning about cases: A study on the European Court of Human Rights. Joe Collenette, Katie Atkinson & Trevor Bench-Capon - 2023 - Artificial Intelligence 317 (C):103861.
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are met. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Toward accountable human-centered AI: rationale and promising directions. Junaid Qadir, Mohammad Qamar Islam & Ala Al-Fuqaha - 2022 - Journal of Information, Communication and Ethics in Society 20 (2):329-342.
    Purpose: Along with the various beneficial uses of artificial intelligence, there are various unsavory concomitants including the inscrutability of AI tools, the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of exploitative surveillance capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and study how these technologies and their developers and users (...)
  • Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions. David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and based on what reasons it is achieved. There are consistent technical efforts for making systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • The winter, the summer and the summer dream of artificial intelligence in law: Presidential address to the 18th International Conference on Artificial Intelligence and Law. Enrico Francesconi - 2022 - Artificial Intelligence and Law 30 (2):147-161.
    This paper reflects my address as IAAIL president at ICAIL 2021. It aims to give my vision of the status of the AI and Law discipline, and possible future perspectives. In this respect, I go through different seasons of AI research: from the Winter of AI, namely a period of mistrust in AI, to the Summer of AI, namely the current period of great interest in the discipline with lots of expectations. One of the results of the first (...)
  • Embedded ethics: a proposal for integrating ethics into the development of medical AI. Alena Buyx, Sami Haddadin, Ruth Müller, Daniel Tigard, Amelia Fiske & Stuart McLennan - 2022 - BMC Medical Ethics 23 (1):1-10.
    The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this (...)
  • Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
  • Openness and privacy in born-digital archives: reflecting the role of AI development. Angeliki Tzouganatou - 2022 - AI and Society 37 (3):991-999.
    Galleries, libraries, archives and museums are striving to retain audience attention to issues related to cultural heritage, by implementing various novel opportunities for audience engagement through technological means online. Although born-digital assets for cultural heritage may have inundated the Internet in some areas, most of the time they are stored in “digital warehouses,” and the questions of the digital ecosystem’s sustainability, meaningful public participation and creative reuse of data still remain. Emerging technologies, such as artificial intelligence, are used to bring (...)
  • Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context. Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding role for (...)
  • AI under great uncertainty: implications and decision strategies for public policy. Maria Nordström - 2022 - AI and Society 37 (4):1703-1714.
    Decisions where there is not enough information for a well-informed decision due to unidentified consequences, options, or undetermined demarcation of the decision problem are called decisions under great uncertainty. This paper argues that public policy decisions on _how_ and _if_ to implement decision-making processes based on machine learning and AI for public use are such decisions. Decisions on public policy on AI are uncertain due to three features specific to the current landscape of AI, namely (i) the vagueness of the (...)
  • Achieving Operational Excellence Through Artificial Intelligence: Driving Forces and Barriers. Muhammad Usman Tariq, Marc Poulin & Abdullah A. Abonamah - 2021 - Frontiers in Psychology 12.
    This paper presents an in-depth literature review on the driving forces and barriers for achieving operational excellence through artificial intelligence. Artificial intelligence is a technological concept spanning operational management, philosophy, humanities, statistics, mathematics, computer sciences, and social sciences. AI refers to machines mimicking human behavior in terms of cognitive functions. The evolution of new technological procedures and advancements in producing intelligence for machines creates a positive impact on decisions, operations, strategies, and management incorporated in the production process of goods and (...)
  • Detecting and explaining unfairness in consumer contracts through memory networks. Federico Ruggeri, Francesca Lagioia, Marco Lippi & Paolo Torroni - 2021 - Artificial Intelligence and Law 30 (1):59-92.
    Recent work has demonstrated how data-driven AI methods can leverage consumer protection by supporting the automated analysis of legal documents. However, a shortcoming of data-driven approaches is poor explainability. We posit that in this domain useful explanations of classifier outcomes can be provided by resorting to legal rationales. We thus consider several configurations of memory-augmented neural networks where rationales are given a special role in the modeling of context knowledge. Our results show that rationales not only contribute to improve the (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Exploring explainable AI in the tax domain. Łukasz Górski, Błażej Kuźniacki, Marco Almada, Kamil Tyliński, Madalena Calvo, Pablo Matias Asnaghi, Luciano Almada, Hilario Iñiguez, Fernando Rubianes, Octavio Pera & Juan Ignacio Nigrelli - forthcoming - Artificial Intelligence and Law:1-29.
    This paper analyses whether current explainable AI (XAI) techniques can help to address taxpayer concerns about the use of AI in taxation. As tax authorities around the world increase their use of AI-based techniques, taxpayers are increasingly at a loss about whether and how the ensuing decisions follow the procedures required by law and respect their substantive rights. The use of XAI has been proposed as a response to this issue, but it is still an open question whether current XAI (...)
  • Explainable AI in the military domain. Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  • Boosting court judgment prediction and explanation using legal entities. Irene Benedetto, Alkis Koudounas, Lorenzo Vaiani, Eliana Pastor, Luca Cagliero, Francesco Tarasconi & Elena Baralis - forthcoming - Artificial Intelligence and Law:1-36.
    The automatic prediction of court case judgments using Deep Learning and Natural Language Processing is challenged by the variety of norms and regulations, the inherent complexity of the forensic language, and the length of legal judgments. Although state-of-the-art transformer-based architectures and Large Language Models (LLMs) are pre-trained on large-scale datasets, the underlying model reasoning is not transparent to the legal expert. This paper jointly addresses court judgment prediction and explanation by not only predicting the judgment but also providing legal experts (...)
  • “Guess what I'm doing”: Extending legibility to sequential decision tasks. Miguel Faria, Francisco S. Melo & Ana Paiva - 2024 - Artificial Intelligence 330 (C):104107.
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Is this a violation? Learning and understanding norm violations in online communities. Thiago Freitas dos Santos, Nardine Osman & Marco Schorlemmer - 2024 - Artificial Intelligence 327 (C):104058.
  • Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance. Davide Gentile, Birsen Donmez & Greg A. Jamieson - 2023 - Artificial Intelligence 321 (C):103945.
  • Attitudinal Tensions in the Joint Pursuit of Explainable and Trusted AI. Devesh Narayanan & Zhi Ming Tan - 2023 - Minds and Machines 33 (1):55-82.
    It is frequently demanded that AI-based Decision Support Tools (AI-DSTs) ought to be both explainable to, and trusted by, those who use them. The joint pursuit of these two principles is ordinarily believed to be uncontroversial. In fact, a common view is that AI systems should be made explainable so that they can be trusted, and in turn, accepted by decision-makers. However, the moral scope of these two principles extends far beyond this particular instrumental connection. This paper argues that if (...)
  • Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to organizations improving the customer experiences and their internal processes through using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what is called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
  • Subjectivity of Explainable Artificial Intelligence. Александр Николаевич Райков - 2022 - Russian Journal of Philosophical Sciences 65 (1):72-90.
    The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going (...)
  • Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps. Tobias Huber, Katharina Weitz, Elisabeth André & Ofra Amir - 2021 - Artificial Intelligence 301 (C):103571.
  • How Much Should You Care About Algorithmic Transparency as Manipulation? Ulrik Franke - 2022 - Philosophy and Technology 35 (4):1-7.
    Wang (_Philosophy & Technology_ 35, 2022) introduces a Foucauldian power account of algorithmic transparency. This short commentary explores when this power account is appropriate. It is first observed that the power account is a constructionist one, and that such accounts often come with both factual and evaluative claims. In an instance of Hume’s law, the evaluative claims do not follow from the factual claims, leaving open the question of how much constructionist commitment (Hacking, 1999) one should have. The concept of (...)
  • Thirty years of artificial intelligence and law: the third decade. Serena Villata, Michal Araszkiewicz, Kevin Ashley, Trevor Bench-Capon, L. Karl Branting, Jack G. Conrad & Adam Wyner - 2022 - Artificial Intelligence and Law 30 (4):561-591.
    The first issue of Artificial Intelligence and Law journal was published in 1992. This paper offers some commentaries on papers drawn from the Journal’s third decade. They indicate a major shift within Artificial Intelligence, both generally and in AI and Law: away from symbolic techniques to those based on Machine Learning approaches, especially those based on Natural Language texts rather than feature sets. Eight papers are discussed: two concern the management and use of documents available on the World Wide Web, (...)
  • An experiential account of a large-scale interdisciplinary data analysis of public engagement. Julian “Iñaki” Goñi, Claudio Fuentes & Maria Paz Raveau - 2023 - AI and Society 38 (2):581-593.
    This article presents our experience as a multidisciplinary team systematizing and analyzing the transcripts from a large-scale (1,775 conversations) series of conversations about Chile’s future. This project called “Tenemos Que Hablar de Chile” [We have to talk about Chile] gathered more than 8000 people from all municipalities, achieving gender, age, and educational parity. In this sense, this article takes an experiential approach to describe how certain interdisciplinary methodological decisions were made. We sought to apply analytical variables derived from social science (...)
  • AI models and the future of genomic research and medicine: True sons of knowledge? Harald König, Daniel Frank, Martina Baumann & Reinhard Heil - 2021 - Bioessays 43 (10):2100025.
    The increasing availability of large‐scale, complex data has made research into how human genomes determine physiology in health and disease, as well as its application to drug development and medicine, an attractive field for artificial intelligence (AI) approaches. Looking at recent developments, we explore how such approaches interconnect and may conflict with needs for and notions of causal knowledge in molecular genetics and genomic medicine. We provide reasons to suggest that—while capable of generating predictive knowledge at unprecedented pace and scale—if (...)
  • On Explainable AI and Abductive Inference. Kyrylo Medianovskyi & Ahti-Veikko Pietarinen - 2022 - Philosophies 7 (2):35.
    Modern explainable AI methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning algorithms perform genuinely abductive inferences. (...)
  • Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Doron Kliger, Tsvi Kuflik & Avital Shulner-Tal - 2022 - Ethics and Information Technology 24 (1).
    In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness about the need to explain their underlying decision-making process and resulting outcomes. Since oftentimes these systems are being considered as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measurements. (...)
  • Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable. Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans & Rhianne Jones - 2021 - Journal of Responsible Technology 7-8 (C):100017.
  • The role of empathy for artificial intelligence accountability. Ramya Srinivasan & Beatriz San Miguel González - 2022 - Journal of Responsible Technology 9 (C):100021.
  • A sociotechnical perspective for the future of AI: narratives, inequalities, and human control. Andreas Theodorou & Laura Sartori - 2022 - Ethics and Information Technology 24 (1):1-11.
    Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature (...)
  • Artificial intelligence in hospitals: providing a status quo of ethical considerations in academia to guide future research. Milad Mirbabaie, Lennart Hofeditz, Nicholas R. J. Frick & Stefan Stieglitz - 2022 - AI and Society 37 (4):1361-1382.
    The application of artificial intelligence (AI) in hospitals yields many advantages but also confronts healthcare with ethical questions and challenges. While various disciplines have conducted specific research on the ethical considerations of AI in hospitals, the literature still requires a holistic overview. By conducting a systematic discourse approach highlighted by expert interviews with healthcare specialists, we identified the status quo of interdisciplinary research in academia on ethical considerations and dimensions of AI in hospitals. We found 15 fundamental manuscripts by constructing (...)