  • A Theory of Justice: Revised Edition. John Rawls - 1999 - Harvard University Press.
    First edition published in 1971.
  • Freedom and Resentment. Peter Strawson - 1962 - Proceedings of the British Academy 48:187-211.
    The doyen of living English philosophers, by these reflections, took hold of and changed the outlook of a good many other philosophers, if not quite enough. He did so, essentially, by assuming that talk of freedom and responsibility is talk not of facts or truths, in a certain sense, but of our attitudes. His more explicit concern was to look again at the question of whether determinism and freedom are consistent with one another -- by shifting attention to certain personal (...)
  • The ethics of algorithms: mapping the debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2):2053951716679679.
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • Virtue and Reason. John McDowell - 1979 - The Monist 62 (3):331-350.
    1. Presumably the point of, say, inculcating a moral outlook lies in a concern with how people live. It may seem that the very idea of a moral outlook makes room for, and requires, the existence of moral theory, conceived as a discipline which seeks to formulate acceptable principles of conduct. It is then natural to think of ethics as a branch of philosophy related to moral theory, so conceived, rather as the philosophy of science is related to science. On (...)
  • AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • Why Does Inequality Matter? Thomas Scanlon - 2017 - Oxford University Press.
    Inequality is widely regarded as morally objectionable: T. M. Scanlon investigates why it matters to us. He considers the nature and importance of equality of opportunity, whether the pursuit of greater equality involves objectionable interference with individual liberty, and whether the rich can be said to deserve their greater rewards.
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • The Right to Explanation. Kate Vredenburgh - 2021 - Journal of Political Philosophy 30 (2):209-229.
  • Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. Cameron Buckner - 2018 - Synthese (12):1-34.
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2021 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the (...)
  • Algorithmic bias: on the implicit biases of social technology. Gabbrielle Johnson - 2020 - Synthese 198 (10):9941-9961.
    Often machine learning programs inherit social patterns reflected in their training data without any directed effort by programmers to include such biases. Computer scientists call this algorithmic bias. This paper explores the relationship between machine bias and human cognitive bias. In it, I argue similarities between algorithmic and cognitive biases indicate a disconcerting sense in which sources of bias emerge out of seemingly innocuous patterns of information processing. The emergent nature of this bias obscures the existence of the bias itself, (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Animal Ethics in Context. Clare Palmer - 2010 - Columbia University Press.
    It is widely agreed that because animals feel pain we should not make them suffer gratuitously. Some ethical theories go even further: because of the capacities that they possess, animals have the right not to be harmed or killed. These views concern what not to do to animals, but we also face questions about when we should, and should not, assist animals that are hungry or distressed. Should we feed a starving stray kitten? And if so, does this commit us, (...)
  • Machine Ethics. Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  • Discrimination and Disrespect. Benjamin Eidelson - 2015 - Oxford: Oxford University Press.
    Hardly anyone disputes that discrimination can be a grave moral wrong. Yet this consensus masks fundamental disagreements about what makes something discrimination, as well as precisely why acts of discrimination are wrong. Benjamin Eidelson develops systematic answers to those two questions. He claims that discrimination is a form of differential treatment distinguished by its special connection to the differential ascription of some property to different people, and goes on to argue that what makes some cases of discrimination intrinsically wrongful is (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Explaining Machine Learning Decisions. John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • The risks of autonomous machines: from responsibility gaps to control gaps. Frank Hindriks & Herman Veluwenkamp - 2023 - Synthese 201 (1):1-17.
    Responsibility gaps concern the attribution of blame for harms caused by autonomous machines. The worry has been that, because they are artificial agents, it is impossible to attribute blame, even though doing so would be appropriate given the harms they cause. We argue that there are no responsibility gaps. The harms can be blameless. And if they are not, the blame that is appropriate is indirect and can be attributed to designers, engineers, software developers, manufacturers or regulators. The real problem (...)
  • Equalized Odds is a Requirement of Algorithmic Fairness. David Gray Grant - 2023 - Synthese 201 (3).
    Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • Statistical resentment, or: what’s wrong with acting, blaming, and believing on the basis of statistics alone. David Enoch & Levi Spectre - 2021 - Synthese 199 (3-4):5687-5718.
    Statistical evidence—say, that 95% of your co-workers badmouth each other—can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: The context of the problem of statistical evidence in legal theory; the epistemological context—with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a (...)
  • Failing to Treat Persons as Individuals. Erin Beeghly - 2018 - Ergo: An Open Access Journal of Philosophy 5.
    If someone says, “You’ve stereotyped me,” we hear the statement as an accusation. One way to interpret the accusation is as follows: you haven’t seen or treated me as an individual. In this essay, I interpret and evaluate a theory of wrongful stereotyping inspired by this thought, which I call the failure-to-individualize theory of wrongful stereotyping. According to this theory, stereotyping is wrong if and only if it involves failing to treat persons as individuals. I argue that the theory—however one (...)
  • Algorithms and Autonomy: The Ethics of Automated Decision Systems. Alan Rubel, Clinton Castro & Adam Pham - 2021 - Cambridge University Press.
    Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these (...)
  • Introspection. Eric Schwitzgebel - 2010 - Stanford Encyclopedia of Philosophy.
  • Explaining the Justificatory Asymmetry between Statistical and Individualized Evidence. Renee Bolinger - 2021 - In Jon Robson & Zachary Hoskins (eds.), The Social Epistemology of Legal Trials. Routledge. pp. 60-76.
    In some cases, there appears to be an asymmetry in the evidential value of statistical and more individualized evidence. For example, while I may accept that Alex is guilty based on eyewitness testimony that is 80% likely to be accurate, it does not seem permissible to do so based on the fact that 80% of a group that Alex is a member of are guilty. In this paper I suggest that rather than reflecting a deep defect in statistical evidence, this (...)
  • II—What’s Wrong with Paternalism: Autonomy, Belief, and Action. David Enoch - 2016 - Proceedings of the Aristotelian Society 116 (1):21-48.
    Several influential characterizations of paternalism or its distinctive wrongness emphasize a belief or judgement that it typically involves—namely, the judgement that the paternalized is likely to act irrationally, or some such. But it's not clear what about such a belief can be morally objectionable if it has the right epistemic credentials (if it is true, say, and is best supported by the evidence). In this paper, I elaborate on this point, placing it in the context of the relevant epistemological (...)
  • Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
  • Noncomparative justice. Joel Feinberg - 1974 - Philosophical Review 83 (3):297-338.
  • What's Wrong with Machine Bias. Clinton Castro - 2019 - Ergo: An Open Access Journal of Philosophy 6.
    Data-driven, decision-making technologies used in the justice system to inform decisions about bail, parole, and prison sentencing are biased against historically marginalized groups (Angwin, Larson, Mattu, & Kirchner 2016). But these technologies’ judgments—which reproduce patterns of wrongful discrimination embedded in the historical datasets that they are trained on—are well-evidenced. This presents a puzzle: how can we account for the wrong these judgments engender without also indicting morally permissible statistical inferences about persons? I motivate this puzzle and attempt an answer.
  • Thomson on privacy. Thomas Scanlon - 1975 - Philosophy and Public Affairs 4 (4):315-322.
  • Contingency inattention: against causal debunking in ethics. Regina Rini - 2020 - Philosophical Studies 177 (2):369-389.
    It is a philosophical truism that we must think of others as moral agents, not merely as causal or statistical objects. But why? I argue that this follows from the best resolution of an antinomy between our experience of morality as necessarily binding on the will and our knowledge that all moral beliefs originate in contingent histories. We can address this antinomy only by understanding moral deliberation via interpersonal relationships, which simultaneously vindicate and constrain morality’s bind on the will. This (...)
  • Neutrality, Publicity, and State Funding of the Arts. Harry Brighouse - 1995 - Philosophy and Public Affairs 24 (1):35-63.
  • In defense of procedural rights: A response to Wellman. David Enoch - 2018 - Legal Theory 24 (1):40-49.
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)