  • A Teleological Approach to Information Systems Design. Mattia Fumagalli, Roberta Ferrario & Giancarlo Guizzardi - 2024 - Minds and Machines 34 (3):1-35.
    In recent years, the design and production of information systems have seen significant growth. However, these information artefacts often exhibit characteristics that compromise their reliability. This issue appears to stem from the neglect or underestimation of certain crucial aspects in the application of Information Systems Design (ISD). For example, it is frequently difficult to prove when one of these products does not work properly or works incorrectly (falsifiability), their usage is often left to subjective experience and somewhat arbitrary choices (anecdotes), (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Predicting inmates misconduct using the SHAP approach. Fábio M. Oliveira, Marcelo S. Balbino, Luis E. Zarate, Fawn Ngo, Ramakrishna Govindu, Anurag Agarwal & Cristiane N. Nobre - 2024 - Artificial Intelligence and Law 32 (2):369-395.
    Internal misconduct is a universal problem in prisons and affects the maintenance of social order. Consequently, correctional institutions often develop rehabilitation programs to reduce the likelihood of inmates committing internal offenses and criminal recidivism after release. Therefore, it is necessary to identify the profile of each offender, both for the appropriate indication of a rehabilitation program and the level of internal security to which he must be subjected. In this context, this work aims to discover the most significant characteristics in (...)
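To make the SHAP-style attribution named in the entry above concrete, here is a minimal sketch using the open-source shap and scikit-learn libraries on synthetic data; the feature names, model, and labels are illustrative assumptions, not the authors' dataset or pipeline.

```python
# Minimal sketch of SHAP feature attribution on a synthetic "inmate profile"
# dataset. Feature names, the model, and the labels are illustrative
# assumptions only, not the authors' data or pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "prior_offenses": rng.poisson(2, n),
    "sentence_years": rng.uniform(1, 20, n),
    "program_hours": rng.uniform(0, 100, n),
})
# Synthetic binary label loosely tied to two of the features, for demonstration only.
y = (0.3 * X["prior_offenses"] - 0.02 * X["program_hours"]
     + rng.normal(0, 0.5, n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features: a SHAP value
# is a feature's contribution to pushing that prediction away from the base rate.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):      # older SHAP versions: one array per class
    sv = sv[1]
elif sv.ndim == 3:            # newer versions: (samples, features, classes)
    sv = sv[:, :, 1]

# Global ranking: mean absolute SHAP value per feature.
importance = np.abs(sv).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```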
  • Going beyond the “common suspects”: to be presumed innocent in the era of algorithms, big data and artificial intelligence. Athina Sachoulidou - forthcoming - Artificial Intelligence and Law:1-54.
    This article explores the trend of increasing automation in law enforcement and criminal justice settings through three use cases: predictive policing, machine evidence and recidivism algorithms. The focus lies on artificial-intelligence-driven tools and technologies employed, whether at pre-investigation stages or within criminal proceedings, in order to decode human behaviour and facilitate decision-making as to whom to investigate, arrest, prosecute, and eventually punish. In this context, this article first underlines the existence of a persistent dilemma between the goal of increasing the (...)
  • Explainable AI tools for legal reasoning about cases: A study on the European Court of Human Rights. Joe Collenette, Katie Atkinson & Trevor Bench-Capon - 2023 - Artificial Intelligence 317 (C):103861.
  • Argumentative explanations for pattern-based text classifiers. Piyawat Lertvittayakumjorn & Francesca Toni - 2023 - Argument and Computation 14 (2):163-234.
    Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of models (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to (...)
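As a rough, hedged stand-in for the kind of interpretable text classifier discussed in the entry above (not the paper's pattern-based logistic regression, which uses mined patterns), the sketch below trains an ordinary bag-of-words logistic regression and reads an explanation off its weights; the toy documents and labels are assumptions.

```python
# Rough stand-in for an interpretable text classifier: logistic regression over
# bag-of-words features. This is NOT the paper's pattern-based logistic
# regression; it only illustrates reading an explanation off the weights of a
# linear text model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["great plot and acting", "dull and predictable plot",
        "acting was great", "predictable and dull"]
labels = [1, 0, 1, 0]                      # toy sentiment labels

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Explain one prediction: list the vocabulary features present in the document
# together with their learned coefficients (sign = direction, size = strength).
doc = "great but predictable plot"
x = vec.transform([doc])
vocab = vec.get_feature_names_out()
contributions = {vocab[i]: round(float(clf.coef_[0][i]), 3) for i in x.nonzero()[1]}
print("prediction:", int(clf.predict(x)[0]), "| contributions:", contributions)
```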
  • Subjectivity of Explainable Artificial Intelligence. Александр Николаевич Райков - 2022 - Russian Journal of Philosophical Sciences 65 (1):72-90.
    The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more complicated, going (...)
  • On the robustness of sparse counterfactual explanations to adverse perturbations. Marco Virgolin & Saverio Fracaros - 2023 - Artificial Intelligence 316 (C):103840.
  • Self-fulfilling Prophecy in Practical and Automated Prediction. Owen C. King & Mayli Mertens - 2023 - Ethical Theory and Moral Practice 26 (1):127-152.
    A self-fulfilling prophecy is, roughly, a prediction that brings about its own truth. Although true predictions are hard to fault, self-fulfilling prophecies are often regarded with suspicion. In this article, we vindicate this suspicion by explaining what self-fulfilling prophecies are and what is problematic about them, paying special attention to how their problems are exacerbated through automated prediction. Our descriptive account of self-fulfilling prophecies articulates the four elements that define them. Based on this account, we begin our critique by showing (...)
  • Logic Explained Networks. Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Marco Gori, Pietro Liò, Marco Maggini & Stefano Melacci - 2023 - Artificial Intelligence 314 (C):103822.
  • A framework for step-wise explaining how to solve constraint satisfaction problems. Bart Bogaerts, Emilio Gamba & Tias Guns - 2021 - Artificial Intelligence 300 (C):103550.
  • Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”. Esther Keymolen & Fleur Jongepier - 2022 - Ethics and Information Technology 24 (4):1-11.
    A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and individuals having a right (...)
  • Public procurement of artificial intelligence systems: new risks and future proofing. Merve Hickok - forthcoming - AI and Society:1-15.
    Public entities around the world are increasingly deploying artificial intelligence and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to the private sector's: increase efficiency and speed of transactions and lower the costs. However, public entities are first and foremost established to meet the needs of the members of society and protect the safety, fundamental rights, and wellbeing of those they serve. Currently AI systems (...)
  • Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression. Md Sabbir Ahmed, Md Tasin Tazwar, Haseen Khan, Swadhin Roy, Junaed Iqbal, Md Golam Rabiul Alam, Md Rafiul Hassan & Mohammad Mehedi Hassan - 2022 - Complexity 2022:1-20.
    The food security of more than half of the world’s population depends on rice production which is one of the key objectives of precision agriculture. The traditional rice almanac used astronomical and climate factors to estimate yield response. However, this research integrated meteorological, agro-chemical, and soil physiographic factors for yield response prediction. Besides, the impact of those factors on the production of three major rice ecotypes has also been studied in this research. Moreover, this study found a different set of (...)
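A minimal, hedged sketch of the two model families compared in the entry above, extreme gradient boosting and support vector regression, applied to synthetic stand-in features; the data, hyperparameters, and xgboost/scikit-learn usage are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch contrasting extreme gradient boosting (XGBoost) and support
# vector regression on a synthetic yield-style task. The features stand in for
# meteorological, agro-chemical, and soil variables; everything here is an
# illustrative assumption, not the study's data or tuning.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from xgboost import XGBRegressor  # assumes the xgboost package is installed

rng = np.random.default_rng(42)
n = 800
X = rng.normal(size=(n, 6))        # e.g. rainfall, temperature, N, P, K, soil pH
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + X[:, 2] * X[:, 3] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

xgb = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
xgb.fit(X_tr, y_tr)
svr = make_pipeline(StandardScaler(), SVR(C=10.0, gamma="scale")).fit(X_tr, y_tr)

# Out-of-sample fit of each model on the held-out split.
print("XGBoost R^2:", round(r2_score(y_te, xgb.predict(X_te)), 3))
print("SVR     R^2:", round(r2_score(y_te, svr.predict(X_te)), 3))
```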
  • AI Documentation: A path to accountability. Florian Königstorfer & Stefan Thalmann - 2022 - Journal of Responsible Technology 11:100043.
  • Defining Explanation and Explanatory Depth in XAI. Stefan Buijsman - 2022 - Minds and Machines 32 (3):563-584.
    Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions, holding that an explanation consists of a generalization that shows what happens in counterfactual cases. Furthermore, when it comes to explanatory depth this account holds that a generalization that has more abstract variables, is broader in (...)
  • Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Cognitive architectures for artificial intelligence ethics. Steve J. Bickley & Benno Torgler - 2023 - AI and Society 38 (2):501-519.
    As artificial intelligence (AI) thrives and propagates through modern life, a key question to ask is how to include humans in future AI? Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are met. Future AI will face many dilemmas and ethical issues unforeseen by their (...)
  • Applying AI for social good: Aligning academic journal ratings with the United Nations Sustainable Development Goals (SDGs). David Steingard, Marcello Balduccini & Akanksha Sinha - 2023 - AI and Society 38 (2):613-629.
    This paper offers three contributions to the burgeoning movements of AI for Social Good (AI4SG) and AI and the United Nations Sustainable Development Goals (SDGs). First, we introduce the SDG-Intense Evaluation framework (SDGIE) that aims to situate variegated automated/AI models in a larger ecosystem of computational approaches to advance the SDGs. To foster knowledge collaboration for solving complex social and environmental problems encompassed by the SDGs, the SDGIE framework details a benchmark structure of data-algorithm-output to effectively standardize AI approaches to (...)
  • How to Make AlphaGo’s Children Explainable. Woosuk Park - 2022 - Philosophies 7 (3):55.
    Under the rubric of understanding the problem of explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk have arrived at superhuman level, there seems to be no hope for understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some ways, both human and AI players would be less-than omniscient, if not ignorant, epistemic agents. Are we bound to have (...)
  • “Please understand we cannot provide further information”: evaluating content and transparency of GDPR-mandated AI disclosures. Alexander J. Wulf & Ognyan Seizov - 2024 - AI and Society 39 (1):235-256.
    The General Data Protection Regulation (GDPR) of the EU confirms the protection of personal data as a fundamental human right and affords data subjects more control over the way their personal information is processed, shared, and analyzed. However, where data are processed by artificial intelligence (AI) algorithms, asserting control and providing adequate explanations is a challenge. Due to massive increases in computing power and big data processing, modern AI algorithms are too complex and opaque to be understood by most data (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • How the Brunswikian Lens Model Illustrates the Relationship Between Physiological and Behavioral Signals and Psychological Emotional and Cognitive States. Judee K. Burgoon, Rebecca Xinran Wang, Xunyu Chen, Tina Saiying Ge & Bradley Dorn - 2022 - Frontiers in Psychology 12.
    Social relationships are constructed by and through the relational communication that people exchange. Relational messages are implicit nonverbal and verbal messages that signal how people regard one another and define their interpersonal relationships—equal or unequal, affectionate or hostile, inclusive or exclusive, similar or dissimilar, and so forth. Such signals can be measured automatically by the latest machine learning software tools and combined into meaningful factors that represent the socioemotional expressions that constitute relational messages between people. Relational messages operate continuously on (...)
  • Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable. Auste Simkute, Ewa Luger, Bronwyn Jones, Michael Evans & Rhianne Jones - 2021 - Journal of Responsible Technology 7-8 (C):100017.
  • Keeping the organization in the loop: a socio-technical extension of human-centered artificial intelligence. Thomas Herrmann & Sabine Pfeiffer - forthcoming - AI and Society:1-20.
    The human-centered AI approach posits a future in which the work done by humans and machines will become ever more interactive and integrated. This article takes human-centered AI one step further. It argues that the integration of human and machine intelligence is achievable only if human organizations—not just individual human workers—are kept “in the loop.” We support this argument with evidence of two case studies in the area of predictive maintenance, by which we show how organizational practices are needed and (...)
  • Explaining Machine Learning Decisions. John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben - 2021 - Minds and Machines 32 (1):77-109.
    The same method that creates adversarial examples to fool image-classifiers can be used to generate counterfactual explanations that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
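The shared machinery the entry above starts from, a small perturbation that flips a classifier's output, can be shown with a hedged toy sketch on a linear model; the data are assumptions, and whether the flipped point should be read as a counterfactual explanation or an adversarial example is, on the paper's account, a further question about the true label and the allowed perturbation size.

```python
# Toy sketch of the machinery shared by counterfactual explanations (CEs) and
# adversarial examples (AEs): a minimal perturbation that flips a classifier's
# prediction. For a linear model the closest boundary-crossing point has a
# closed form. The data here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2)) + np.array([[2.0, 0.0]] * 100 + [[-2.0, 0.0]] * 100)
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression().fit(X, y)

x = X[0]                                   # an instance drawn from class 1
w, b = clf.coef_[0], clf.intercept_[0]

# Project x onto the decision boundary w·x + b = 0, then step slightly past it.
signed = (w @ x + b) / (w @ w)
x_flip = x - 1.01 * signed * w

print("original prediction: ", clf.predict([x])[0])
print("perturbed prediction:", clf.predict([x_flip])[0])
print("L2 norm of perturbation:", round(float(np.linalg.norm(x_flip - x)), 3))
```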
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Influencing laughter with AI-mediated communication. Gregory Mills, Eleni Gregoromichelaki, Chris Howes & Vladislav Maraev - 2021 - Interaction Studies 22 (3):416-463.
    Previous experimental findings support the hypothesis that laughter and positive emotions are contagious in face-to-face and mediated communication. To test this hypothesis, we describe four experiments in which participants communicate via a chat tool that artificially adds or removes laughter, without participants being aware of the manipulation. We found no evidence to support the contagion hypothesis. However, artificially exposing participants to more lols decreased participants’ use of hahas but led to more involvement and improved task-performance. Similarly, artificially exposing participants to (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Doron Kliger, Tsvi Kuflik & Avital Shulner-Tal - 2022 - Ethics and Information Technology 24 (1).
    In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness about the need to explain their underlying decision-making process and resulting outcomes. Since oftentimes these systems are being considered as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests and measurements. (...)
  • Embedded ethics: a proposal for integrating ethics into the development of medical AI. Alena Buyx, Sami Haddadin, Ruth Müller, Daniel Tigard, Amelia Fiske & Stuart McLennan - 2022 - BMC Medical Ethics 23 (1):1-10.
    The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this (...)
  • Toward a Psychology of Deep Reinforcement Learning Agents Using a Cognitive Architecture. Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler, Christian Lebiere, Peter Pirolli & Robert Thomson - 2022 - Topics in Cognitive Science 14 (4):756-779.
    We argue that cognitive models can provide a common ground between human users and deep reinforcement learning (Deep RL) algorithms for purposes of explainable artificial intelligence (AI). Casting both the human and learner as cognitive models provides common mechanisms to compare and understand their underlying decision-making processes. This common grounding allows us to identify divergences and explain the learner's behavior in human understandable terms. We present novel salience techniques that highlight the most relevant features in each model's decision-making, as well (...)
  • Beyond explainability: justifiability and contestability of algorithmic decision systems. Clément Henin & Daniel Le Métayer - 2022 - AI and Society 37 (4):1397-1410.
    In this paper, we point out that explainability is useful but not sufficient to ensure the legitimacy of algorithmic decision systems. We argue that the key requirements for high-stakes decision systems should be justifiability and contestability. We highlight the conceptual differences between explanations and justifications, provide dual definitions of justifications and contestations, and suggest different ways to operationalize justifiability and contestability.
  • Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  • A top-level model of case-based argumentation for explanation: Formalisation and experiments. Henry Prakken & Rosa Ratsma - 2022 - Argument and Computation 13 (2):159-194.
    This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on AI & law research on argumentation with cases, which models how lawyers draw analogies to past cases and discuss their relevant similarities and differences in terms of relevant factors and dimensions in the problem domain. A case-based approach is natural since the input data of machine-learning applications can be seen as cases. While the (...)
  • Artificial Intelligence Regulation: a framework for governance. Patricia Gomes Rêgo de Almeida, Carlos Denner dos Santos & Josivania Silva Farias - 2021 - Ethics and Information Technology 23 (3):505-525.
    This article develops a conceptual framework for regulating Artificial Intelligence (AI) that encompasses all stages of modern public policy-making, from the basics to a sustainable governance. Based on a vast systematic review of the literature on Artificial Intelligence Regulation (AIR) published between 2010 and 2020, a dispersed body of knowledge loosely centred around the “framework” concept was organised, described, and pictured for better understanding. The resulting integrative framework encapsulates 21 prior depictions of the policy-making process, aiming to achieve gold-standard societal (...)
  • “That's (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems. Maria Riveiro & Serge Thill - 2021 - Artificial Intelligence 298:103507.
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Using Precision Public Health to Manage Climate Change: Opportunities, Challenges, and Health Justice. Walter G. Johnson - 2020 - Journal of Law, Medicine and Ethics 48 (4):681-693.
    Amid public health concerns over climate change, “precision public health” is emerging in next generation approaches to practice. These novel methods promise to augment public health operations by using ever larger and more robust health datasets combined with new tools for collecting and analyzing data. Precision strategies for protecting the public health could more effectively or efficiently address the systemic threats of climate change, but may also propagate or exacerbate health disparities for the populations most vulnerable in a changing climate. (...)
  • GLocalX - From Local to Global Explanations of Black Box AI Models. Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi & Fosca Giannotti - 2021 - Artificial Intelligence 294 (C):103457.
  • Evaluating XAI: A comparison of rule-based and example-based explanations. Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers & Mark Neerincx - 2021 - Artificial Intelligence 291 (C):103404.
  • Towards Transparency by Design for Artificial Intelligence. Heike Felzmann, Eduard Fosch-Villaronga, Christoph Lutz & Aurelia Tamò-Larrieux - 2020 - Science and Engineering Ethics 26 (6):3333-3361.
    In this article, we develop the concept of Transparency by Design that serves as practical guidance in helping promote the beneficial functions of transparency while mitigating its challenges in automated-decision making environments. With the rise of artificial intelligence and the ability of AI systems to make automated and self-learned decisions, a call for transparency of how such systems reach decisions has echoed within academic and policy circles. The term transparency, however, relates to multiple concepts, fulfills many functions, and holds different (...)
  • Artificial intelligence in medicine and the disclosure of risks. Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation (...)
  • Ethical challenges in argumentation and dialogue in a healthcare context. Mark Snaith, Rasmus Øjvind Nielsen, Sita Ramchandra Kotnis & Alison Pease - forthcoming - Argument and Computation:1-16.
    As the average age of the population increases, so too does the number of people living with chronic illnesses. With limited resources available, the development of dialogue-based e-health systems that provide justified general health advice offers a cost-effective solution to the management of chronic conditions. It is however imperative that such systems are responsible in their approach. We present in this paper two main challenges for the deployment of e-health systems that have a particular relevance to dialogue and argumentation: collecting (...)
  • ICAIL Doctoral Consortium, Montreal 2019. Michał Araszkiewicz, Ilaria Angela Amantea, Saurabh Chakravarty, Robert van Doesburg, Maria Dymitruk, Marie Garin, Leilani Gilpin, Daphne Odekerken & Seyedeh Sajedeh Salehi - 2020 - Artificial Intelligence and Law 28 (2):267-280.
    This is a report on the Doctoral Consortium co-located with the 17th International Conference on Artificial Intelligence and Law in Montreal.
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability. Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)