  • Algorithmic and human decision making: for a double standard of transparency. Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • What Is Justified Group Belief? Jennifer Lackey - 2016 - Philosophical Review 125 (3):341-396.
    This essay raises new objections to the two dominant approaches to understanding the justification of group beliefs—inflationary views, where groups are treated as entities that can float freely from the epistemic status of their members’ beliefs, and deflationary views, where justified group belief is understood as nothing more than the aggregation of the justified beliefs of the group's members. If this essay is right, we need to look in an altogether different place for an adequate account of justified group belief. (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Explaining Machine Learning Decisions. John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. in Minds and Machines 28:689–707, 2018). There (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to autonomous weapons systems (AWS) is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267:1-38.
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Experts: Which ones should you trust? Alvin I. Goldman - 2001 - Philosophy and Phenomenological Research 63 (1):85-110.
    Mainstream epistemology is a highly theoretical and abstract enterprise. Traditional epistemologists rarely present their deliberations as critical to the practical problems of life, unless one supposes—as Hume, for example, did not—that skeptical worries should trouble us in our everyday affairs. But some issues in epistemology are both theoretically interesting and practically quite pressing. That holds of the problem to be discussed here: how laypersons should evaluate the testimony of experts and decide which of two or more rival experts is most (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Epistemic dependence. John Hardwig - 1985 - Journal of Philosophy 82 (7):335-349.
    I find myself believing all sorts of things for which I do not possess evidence: that smoking cigarettes causes lung cancer, that my car keeps stalling because the carburetor needs to be rebuilt, that mass media threaten democracy, that slums cause emotional disorders, that my irregular heart beat is premature ventricular contraction, that students' grades are not correlated with success in the nonacademic world, that nuclear power plants are not safe (enough)... The list of things I believe, though (...)
  • Explaining Explanations in AI. Brent Mittelstadt - 2019 - Proceedings of FAT* 2019.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)