  • Beyond accuracy: Epistemic flaws with statistical generalizations. Jessie Munton - 2019 - Philosophical Issues 29 (1):228-240.
    What, if anything, is epistemically wrong with beliefs involving accurate statistical generalizations about demographic groups? This paper argues that there is a perfectly general, underappreciated epistemic flaw which affects both ethically charged and uncharged statistical generalizations. Though common to both, this flaw can also explain why demographic statistical generalizations give rise to the concerns they do. To identify this flaw, we need to distinguish between the accuracy and the projectability of statistical beliefs. Statistical beliefs are accompanied by an implicit representation (...)
  • Justice, health, and healthcare. Norman Daniels - 2001 - American Journal of Bioethics 1 (2):2-16.
    Healthcare (including public health) is special because it protects normal functioning, which in turn protects the range of opportunities open to individuals. I extend this account in two ways. First, since the distribution of goods other than healthcare affects population health and its distribution, I claim that Rawls's principles of justice describe a fair distribution of the social determinants of health, giving a partial account of when health inequalities are unjust. Second, I supplement a principled account of justice for health (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  • Epistemology of disagreement: The good news. David Christensen - 2007 - Philosophical Review 116 (2):187-217.
    How should one react when one has a belief, but knows that other people—who have roughly the same evidence as one has, and seem roughly as likely to react to it correctly—disagree? This paper argues that the disagreement of other competent inquirers often requires one to be much less confident in one’s opinions than one would otherwise be.
  • Just Machines. Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring- or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg (...)
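To make the two criteria named in Castro's abstract concrete, here is a minimal illustrative Python sketch (not from the paper; the data, names, and interpretation are hypothetical): classification parity is checked as equal accuracy across groups, and calibration as equal observed outcome rates among individuals who receive the same score.

```python
# Illustrative only: toy check of classification parity and calibration.
# "group" stands in for a protected attribute; all data are made up.
from collections import defaultdict

def accuracy_by_group(records):
    """Classification parity: accuracy should be (roughly) equal across groups."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, prediction, outcome in records:
        total[group] += 1
        correct[group] += int(prediction == outcome)
    return {g: correct[g] / total[g] for g in total}

def calibration_by_group(records):
    """Calibration: among people assigned the same score, the observed
    outcome rate should be (roughly) equal across groups."""
    positives, total = defaultdict(int), defaultdict(int)
    for group, score, outcome in records:
        total[(group, score)] += 1
        positives[(group, score)] += int(outcome)
    return {k: positives[k] / total[k] for k in total}

# Toy data: (group, binary prediction, actual outcome).
preds = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
         ("B", 1, 1), ("B", 0, 1), ("B", 0, 0)]
print(accuracy_by_group(preds))      # parity holds iff the values match

# Toy data: (group, risk score, actual outcome).
scores = [("A", 0.7, 1), ("A", 0.7, 1), ("B", 0.7, 1), ("B", 0.7, 0)]
print(calibration_by_group(scores))  # group B's 0.7-scored cases pan out only
                                     # half the time here: calibration fails
```

The impossibility results the abstract attributes to Kleinberg and colleagues show, roughly, that when base rates differ across groups, no non-trivial scoring system can satisfy calibration and the parity-style error-rate criteria at once, which is why these two criteria anchor opposing positions in the fairness debate.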
  • The Ideal of Shared Decision Making Between Physicians and Patients. Dan W. Brock - 1991 - Kennedy Institute of Ethics Journal 1 (1):28-47.
    In lieu of an abstract, here is a brief excerpt of the content: Shared treatment decision making, with its division of labor between physician and patient, is a common ideal in medical ethics for the physician-patient relationship. Most simply put, the physician's role is to use his or her training, knowledge, and experience to provide the patient with facts about the diagnosis and about the prognoses without treatment and with (...)
  • Just data? Solidarity and justice in data-driven medicine. Matthias Braun & Patrik Hummel - 2020 - Life Sciences, Society and Policy 16 (1):1-18.
    This paper argues that data-driven medicine gives rise to a particular normative challenge. Against the backdrop of a distinction between the good and the right, harnessing personal health data towards the development and refinement of data-driven medicine is to be welcomed from the perspective of the good. Enacting solidarity drives progress in research and clinical practice. At the same time, such acts of sharing could—especially considering current developments in big data and artificial intelligence—compromise the right by leading to injustices and (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • On Predicting Recidivism: Epistemic Risk, Tradeoffs, and Values in Machine Learning. Justin B. Biddle - 2022 - Canadian Journal of Philosophy 52 (3):321-341.
    Recent scholarship in philosophy of science and technology has shown that scientific and technological decision making are laden with values, including values of a social, political, and/or ethical character. This paper examines the role of value judgments in the design of machine-learning systems generally and in recidivism-prediction algorithms specifically. Drawing on work on inductive and epistemic risk, the paper argues that ML systems are value laden in ways similar to human decision making, because the development and design of ML systems (...)
  • Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Ramón Alvarado - 2022 - Bioethics 36 (2):121-133.
  • Algorithmic and human decision making: for a double standard of transparency. Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • The Right to Explanation. Kate Vredenburgh - 2022 - Journal of Political Philosophy 30 (2):209-229.
  • Epistemic injustice and data science technologies. John Symons & Ramón Alvarado - 2022 - Synthese 200 (2):1-26.
    Technologies that deploy data science methods are liable to result in epistemic harms involving the diminution of individuals with respect to their standing as knowers or their credibility as sources of testimony. Not all harms of this kind are unjust but when they are we ought to try to prevent or correct them. Epistemically unjust harms will typically intersect with other more familiar and well-studied kinds of harm that result from the design, development, and use of data science technologies. However, (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Towards a pragmatist dealing with algorithmic bias in medical machine learning. Georg Starke, Eva De Clercq & Bernice S. Elger - 2021 - Medicine, Health Care and Philosophy 24 (3):341-349.
    Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous (...)
  • Trustworthy artificial intelligence. Mona Simion & Christoph Kelp - 2023 - Asian Journal of Philosophy 2 (1):1-12.
    This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such as safety, justice, (...)
  • From hostile worlds to multiple spheres: towards a normative pragmatics of justice for the Googlization of health. Tamar Sharon - 2021 - Medicine, Health Care and Philosophy 24 (3):315-327.
    The datafication and digitalization of health and medicine has engendered a proliferation of new collaborations between public health institutions and data corporations like Google, Apple, Microsoft and Amazon. Critical perspectives on these new partnerships tend to frame them as an instance of market transgressions by tech giants into the sphere of health and medicine, in line with a “hostile worlds” doctrine that upholds that the borders between market and non-market spheres should be carefully policed. This article seeks to outline the (...)
  • Interpreting causality in the health sciences. Federica Russo & Jon Williamson - 2007 - International Studies in the Philosophy of Science 21 (2):157-170.
    We argue that the health sciences make causal claims on the basis of evidence both of physical mechanisms and of probabilistic dependencies. Consequently, an analysis of causality solely in terms of physical mechanisms or solely in terms of probabilistic relationships does not do justice to the causal claims of these sciences. Yet there seems to be a single relation of cause in these sciences - pluralism about causality will not do either. Instead, we maintain, the health sciences require a theory (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Stereotyping Patients. Katherine Puddifoot - 2019 - Journal of Social Philosophy 50 (1):69-90.
  • Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.
    Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...)
  • Trustworthy AI: a plea for modest anthropocentrism. Rune Nyrup - 2023 - Asian Journal of Philosophy 2 (2):1-10.
    Simion and Kelp defend a non-anthropocentric account of trustworthy AI, based on the idea that the obligations of AI systems should be sourced in purely functional norms. In this commentary, I highlight some pressing counterexamples to their account, involving AI systems that reliably fulfil their functions but are untrustworthy because those functions are antagonistic to the interests of the trustor. Instead, I outline an alternative account, based on the idea that AI systems should not be considered primarily as tools but (...)
  • The ethics of crashes with self‐driving cars: A roadmap, II. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12506.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with (...)
  • Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Shakir Mohamed, Marie-Therese Png & William Isaac - 2020 - Philosophy and Technology 33 (4):659-684.
    This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial intelligence is viewed as amongst the technological advances that will reshape modern societies and their relations. While the design and deployment of systems that continually adapt holds the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories (...)
  • Computer knows best? The need for value-flexibility in medical AI. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system creates (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Artificial intelligence in medicine and the disclosure of risks. Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation (...)
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Understanding the Problem of “Hype”: Exaggeration, Values, and Trust in Science. Kristen Intemann - 2022 - Canadian Journal of Philosophy 52 (3):279-294.
    Several science studies scholars report instances of scientific “hype,” or sensationalized exaggeration, in journal articles, institutional press releases, and science journalism in a variety of fields (e.g., Caulfield and Condit 2012). Yet, how “hype” is being conceived varies. I will argue that hype is best understood as a particular kind of exaggeration, one that explicitly or implicitly exaggerates various positive aspects of science in ways that undermine the goals of science communication in a particular context. This account also makes clear (...)
  • Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
  • On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2):209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false (...)
  • Trust, Distrust and Commitment. Katherine Hawley - 2014 - Noûs 48 (1):1-20.
    I outline a number of parallels between trust and distrust, emphasising the significance of situations in which both trust and distrust would be an imposition upon the (dis)trustee. I develop an account of both trust and distrust in terms of commitment, and argue that this enables us to understand the nature of trustworthiness.
  • On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • How competitors become collaborators—Bridging the gap(s) between machine learning algorithms and clinicians. Thomas Grote & Philipp Berens - 2022 - Bioethics 36 (2):134-142.
  • Beyond generalization: a theory of robustness in machine learning. Thomas Grote & Timo Freiesleben - 2023 - Synthese 202 (4):1-28.
    The term robustness is ubiquitous in modern Machine Learning (ML). However, its meaning varies depending on context and community. Researchers either focus on narrow technical definitions, such as adversarial robustness, natural distribution shifts, and performativity, or they simply leave open what exactly they mean by robustness. In this paper, we provide a conceptual analysis of the term robustness, with the aim of developing a common language that allows us to weave together different strands of robustness research. We define robustness as (...)
  • Experts: Which ones should you trust? Alvin I. Goldman - 2001 - Philosophy and Phenomenological Research 63 (1):85-110.
    Mainstream epistemology is a highly theoretical and abstract enterprise. Traditional epistemologists rarely present their deliberations as critical to the practical problems of life, unless one supposes—as Hume, for example, did not—that skeptical worries should trouble us in our everyday affairs. But some issues in epistemology are both theoretically interesting and practically quite pressing. That holds of the problem to be discussed here: how laypersons should evaluate the testimony of experts and decide which of two or more rival experts is most (...)
  • The Confounding Question of Confounding Causes in Randomized Trials. Jonathan Fuller - 2019 - British Journal for the Philosophy of Science 70 (3):901-926.
    It is sometimes thought that randomized study group allocation is uniquely proficient at producing comparison groups that are evenly balanced for all confounding causes. Philosophers have argued that in real randomized controlled trials this balance assumption typically fails. But is the balance assumption an important ideal? I run a thought experiment, the CONFOUND study, to answer this question. I then suggest a new account of causal inference in ideal and real comparative group studies that helps clarify the roles of confounding (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • Learning from Words: Testimony as a Source of Knowledge. [REVIEW] Jennifer Lackey - 2012 - Philosophy Now 88:44-45.
  • Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care? Alex John London - 2022 - Cell Reports Medicine 3 (5):100622.
    There is considerable enthusiasm about the prospect that artificial intelligence (AI) will help to improve the safety and efficacy of health services and the efficiency of health systems. To realize this potential, however, AI systems will have to overcome structural problems in the culture and practice of medicine and the organization of health systems that impact the data from which AI models are built, the environments into which they will be deployed, and the practices and incentives that structure their development. (...)
  • Randomized Controlled Trials in Medical AI. Konstantin Genin & Thomas Grote - 2021 - Philosophy of Medicine 2 (1).
    Various publications claim that medical AI systems perform as well, or better, than clinical experts. However, there have been very few controlled trials and the quality of existing studies has been called into question. There is growing concern that existing studies overestimate the clinical benefits of AI systems. This has led to calls for more, and higher-quality, randomized controlled trials of medical AI systems. While this is a welcome development, AI RCTs raise novel methodological challenges that have seen little discussion. We (...)