  • Deux enjeux philosophiques entourant la structure des recommandations issues du secteur public. Marc-Kevin Daoust & Victor Babin - 2023 - Dialogue 62 (3):413-429.
    One of the functions of public institutions in liberal democracies is to formulate recommendations for decision makers. Yet public institutions know that their recommendations will often be partly ignored by the decision maker. This situation of "partial compliance" with recommendations raises several philosophical problems for institutions. Drawing on an analysis of 570 recommendations taken from 40 Quebec public-sector documents and reports, we identify two issues surrounding the structure of recommendations issued (...)
  • Two Philosophical Issues Surrounding the Structure of Public-Policy Recommendations. Marc-Kevin Daoust & Victor Babin - 2023 - Dialogue 62 (3):431-446.
    One of the key responsibilities of public institutions in liberal democracies is to formulate recommendations for decision makers. However, public institutions realize that decision makers will often partly ignore their recommendations. This situation of “partial compliance” with recommendations raises a number of philosophical issues for institutions. Based on an analysis of 570 recommendations drawn from 40 Quebec public-sector documents and reports, we identify two issues surrounding the structure of public-policy recommendations.
  • Limits of the Numerical: The Abuses and Uses of Quantification, ed. C. Newfield, A. Alexandrova and S. John. University of Chicago Press, 2022, 317 pages. [REVIEW] Kate Vredenburgh - 2024 - Economics and Philosophy 40 (3):737-743.
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine. Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
  • Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and that, since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  • On the Scope of the Right to Explanation. James Fritz - forthcoming - AI and Ethics.
    As opaque algorithmic systems take up a larger and larger role in shaping our lives, calls for explainability in various algorithmic systems have increased. Many moral and political philosophers have sought to vindicate these calls for explainability by developing theories on which decision-subjects—that is, individuals affected by decisions—have a moral right to the explanation of the systems that affect them. Existing theories tend to suggest that the right to explanation arises solely in virtue of facts about how decision-subjects are affected (...)
  • The perfect technological storm: artificial intelligence and moral complacency. Marten H. L. Kaas - 2024 - Ethics and Information Technology 26 (3):1-12.
    Artificially intelligent machines are different in kind from all previous machines and tools. While many are used for relatively benign purposes, the types of artificially intelligent machines that we should care about, the ones that are worth focusing on, are the machines that purport to replace humans entirely and thereby engage in what Brian Cantwell Smith calls “judgment.” As impressive as artificially intelligent machines are, their abilities are still derived from humans and as such lack the sort of normative commitments (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • Logics and collaboration. Liz Sonenberg - 2023 - Logic Journal of the IGPL 31 (6):1024-1046.
    Since the early days of artificial intelligence (AI), many logics have been explored as tools for knowledge representation and reasoning. In the spirit of the Crossley Festschrift and recognizing John Crossley’s diverse interests and his legacy in both mathematical logic and computer science, I discuss examples from my own research that sit in the overlap of logic and AI, with a focus on supporting human–AI interactions.