  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • Are AI Systems Biased Against the Poor? A Machine Learning Analysis Using Word2Vec and GloVe Embeddings. Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the (...)
  • Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • How to Design AI for Social Good: Seven Essential Factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
  • The Explanation Game: A Formal Framework for Interpretable Machine Learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Conceptual Challenges for Interpretable Machine Learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833-862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • High Hopes for “Deep Medicine”? AI, Economics, and the Future of Care. Robert Sparrow & Joshua Hatherley - 2020 - Hastings Center Report 50 (1):14-17.
    In Deep Medicine, Eric Topol argues that the development of artificial intelligence (AI) for healthcare will lead to a dramatic shift in the culture and practice of medicine. Topol claims that, rather than replacing physicians, AI could function alongside of them in order to allow them to devote more of their time to face-to-face patient care. Unfortunately, these high hopes for AI-enhanced medicine fail to appreciate a number of factors that, we believe, suggest a radically different picture for the future (...)
  • When Doctors and AI Interact: On Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • How to Design a Governable Digital Health Ecosystem. Jessica Morley & Luciano Floridi - manuscript.
    It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that (...)
  • What the Near Future of Artificial Intelligence Could Be. Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather (...)
  • Accountability in the Machine Learning Pipeline: The Critical Role of Research Ethics Oversight. Melissa D. McCradden, James A. Anderson & Randi Zlotnik Shaul - 2020 - American Journal of Bioethics 20 (11):40-42.
    Char and colleagues provide a useful conceptual framework for the proactive identification of ethical issues arising throughout the lifecycle of machine learning applications in healthcare. Th...
  • What’s in the Box? Uncertain Accountability of Machine Learning Applications in Healthcare. Ma'n Zawati & Michael Lang - 2020 - American Journal of Bioethics 20 (11):37-40.
    Machine learning is an increasingly significant part of modern healthcare, transforming the way clinical decisions are made and health resources are managed. These developme...