Results for 'Machine Explainability'

999 found
  1. Organisms ≠ Machines.Daniel J. Nicholson - 2013 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44 (4):669-678.
    The machine conception of the organism (MCO) is one of the most pervasive notions in modern biology. However, it has not yet received much attention from philosophers of biology. The MCO has its origins in Cartesian natural philosophy, and it is based on the metaphorical redescription of the organism as a machine. In this paper I argue that although organisms and machines resemble each other in some basic respects, they are actually very different kinds of systems. I submit (...)
    48 citations
  2. Clinical applications of machine learning algorithms: beyond the black box.David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:l886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    16 citations
  3. Can machines be people? Reflections on the Turing triage test.Robert Sparrow - 2012 - In Patrick Lin, Keith Abney & George Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. pp. 301-315.
    In “The Turing Triage Test”, published in Ethics and Information Technology, I described a hypothetical scenario, modelled on the famous Turing Test for machine intelligence, which might serve as a means of testing whether or not machines had achieved the moral standing of people. In this paper, I: (1) explain why the Turing Triage Test is of vital interest in the context of contemporary debates about the ethics of AI; (2) address some issues that complexify the application of this test; (...)
    14 citations
  4. Machine intelligence: a chimera.Mihai Nadin - 2019 - AI and Society 34 (2):215-242.
    The notion of computation has changed the world more than any previous expressions of knowledge. However, as know-how in its particular algorithmic embodiment, computation is closed to meaning. Therefore, computer-based data processing can only mimic life’s creative aspects, without being creative itself. AI’s current record of accomplishments shows that it automates tasks associated with intelligence, without being intelligent itself. Mistaking the abstract for the concrete has led to the religion of “everything is an output of computation”—even the humankind that conceived (...)
    4 citations
  5. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
    30 citations
  6. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus (...)
    42 citations
  7. The Experience Machine.Ben Bramble - 2016 - Philosophy Compass 11 (3):136-145.
    In this paper, I reconstruct Robert Nozick's experience machine objection to hedonism about well-being. I then explain and briefly discuss the most important recent criticisms that have been made of it. Finally, I question the conventional wisdom that the experience machine, while it neatly disposes of hedonism, poses no problem for desire-based theories of well-being.
    12 citations
  8. What explains collective action: The impact of social capital, incentive structures and economic benefits.Engjell Skreli, Orjon Xhoxhi, Drini Imami & Klodjan Rama - 2023 - Journal of International Development 36:1-25.
    This study focuses on testing the power of reciprocity and leadership as collective action incentive structures and cooperation economic benefits in explaining collective action initiation in the context of a post-communist transition economy. The paper is based on a structured survey targeting Albanian export-oriented farmers. Different from most previous studies, this paper uses both regression analysis and a machine learning procedure, which is better suited for analysing non-linear relationships. The empirical findings are at odds with common sense that non-cooperation is (...)
  9. Do Machines Have Prima Facie Duties?Gary Comstock - 2015 - In Machine Medical Ethics. London: Springer. pp. 79-92.
    A properly programmed artificially intelligent agent may eventually have one duty, the duty to satisfice expected welfare. We explain this claim and defend it against objections.
    3 citations
  10. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to (...)
  11. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    23 citations
  12. The Experience Machine and the Experience Requirement.Jennifer Hawkins - 2015 - In Guy Fletcher (ed.), The Routledge Handbook of Philosophy of Well-Being. Routledge. pp. 355-365.
    In this article I explore various facets of Nozick’s famous thought experiment involving the experience machine. Nozick’s original target is hedonism—the view that the only intrinsic prudential value is pleasure. But the argument, if successful, undermines any experientialist theory, i.e. any theory that limits intrinsic prudential value to mental states. I first highlight problems arising from the way Nozick sets up the thought experiment. He asks us to imagine choosing whether or not to enter the machine and uses (...)
    10 citations
  13. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  14. The explanation game: a formal framework for interpretable machine learning.David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping (...)
    16 citations
  15. How Entropy Explains the Emergence of Consciousness: The Entropic Theory.Peter C. Lugten - 2024 - Journal of Neurobehavioral Sciences 11 (1):10-18.
    Background: Emergentism as an ontology of consciousness leaves unanswered the question as to its mechanism. Aim: I aim to solve the Body-Mind problem by explaining how conscious organisms emerged on an evolutionary basis at various times in accordance with an accepted scientific principle, through a mechanism that cannot be understood, in principle. Proposal: The reason for this cloak of secrecy is found in a seeming contradiction in the behaviour of information with respect to the first two laws of thermodynamics. Information, (...)
  16. What is Wrong with Machine Art? Autonomy, Spirituality, Consciousness, and Human Survival.Ioannis Trisokkas - 2020 - Humanities Bulletin 3 (2):9-26.
    There is a well-documented Pre-Reflective Hostility against Machine Art (PRHMA), exemplified by the sentiments of fear and anxiety. How can it be explained? The present paper attempts to find the answer to this question by surveying a considerable amount of research on machine art. It is found that explanations of PRHMA based on the (alleged) fact that machine art lacks an element that is (allegedly) found in human art (for example, autonomy) do not work. Such explanations cannot (...)
  17. Can machines have first-person properties?Mark F. Sharlow - manuscript
    One of the most important ongoing debates in the philosophy of mind is the debate over the reality of the first-person character of consciousness.[1] Philosophers on one side of this debate hold that some features of experience are accessible only from a first-person standpoint. Some members of this camp, notably Frank Jackson, have maintained that epiphenomenal properties play roles in consciousness [2]; others, notably John R. Searle, have rejected dualism and regarded mental phenomena as entirely biological.[3] In the opposite camp (...)
  18. Mind as Machine: The Influence of Mechanism on the Conceptual Foundations of the Computer Metaphor.Pavel Baryshnikov - 2022 - RUDN Journal of Philosophy 26 (4):755-769.
    This article will focus on the mechanistic origins of the computer metaphor, which forms the conceptual framework for the methodology of the cognitive sciences, some areas of artificial intelligence and the philosophy of mind. The connection between the history of computing technology, epistemology and the philosophy of mind is expressed through the metaphorical dictionaries of the philosophical discourse of a particular era. The conceptual clarification of this connection and the substantiation of the mechanistic components of the computer metaphor is the (...)
  19. Intuitive Biases in Judgements about Thought Experiments: The Experience Machine Revisited.Dan Weijers - 2013 - Philosophical Writings 41 (1):17-31.
    This paper is a warning that objections based on thought experiments can be misleading because they may elicit judgments that, unbeknownst to the judger, have been seriously skewed by psychological biases. The fact that most people choose not to plug in to the Experience Machine in Nozick’s (1974) famous thought experiment has long been used as a knock-down objection to hedonism because it is widely thought to show that real experiences are more important to us than pleasurable experiences. This (...)
    10 citations
  20. Do the Laws of Physics Forbid the Operation of Time Machines?John Earman, Chris Smeenk & Christian Wüthrich - 2009 - Synthese 169 (1):91-124.
    We address the question of whether it is possible to operate a time machine by manipulating matter and energy so as to manufacture closed timelike curves. This question has received a great deal of attention in the physics literature, with attempts to prove no-go theorems based on classical general relativity and various hybrid theories serving as steps along the way towards quantum gravity. Despite the effort put into these no-go theorems, there is no widely accepted definition of a (...)
    25 citations
  21. Inductive Risk, Understanding, and Opaque Machine Learning Models.Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal (...)
    6 citations
  22. On Reason and Spectral Machines: Robert Brandom and Bounded Posthumanism.David Roden - 2017 - In Rosi Braidotti & Rick Dolphijn (eds.), Philosophy After Nature. Lanham: Rowman & Littlefield International. pp. 99-119.
    I distinguish two theses regarding technological successors to current humans (posthumans): an anthropologically bounded posthumanism (ABP) and an anthropologically unbounded posthumanism (AUP). ABP proposes transcendental conditions on agency that can be held to constrain the scope for “weirdness” in the space of possible posthumans a priori. AUP, by contrast, leaves the nature of posthuman agency to be settled empirically (or technologically). Given AUP there are no “future proof” constraints on the strangeness of posthuman agents. In Posthuman Life I defended (...)
    1 citation
  23. Is the Cell Really a Machine?Daniel J. Nicholson - 2019 - Journal of Theoretical Biology 477:108–126.
    It has become customary to conceptualize the living cell as an intricate piece of machinery, different to a man-made machine only in terms of its superior complexity. This familiar understanding grounds the conviction that a cell's organization can be explained reductionistically, as well as the idea that its molecular pathways can be construed as deterministic circuits. The machine conception of the cell owes a great deal of its success to the methods traditionally used in molecular biology. However, the (...)
    15 citations
  24. Three Moral Themes of Leibniz's Spiritual Machine Between "New System" and "New Essays".Markku Roinila - 2023 - Le Present Est Plein de L’Avenir, Et Chargé du Passé: Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023.
    The advance of mechanism in science and philosophy in the 17th century created great interest in machines, or automata. Leibniz was no exception - in an early memoir, Drôle de pensée, he wrote admiringly about a machine that could walk on water, exhibited in Paris. The idea of automatic processing in general had a large role in his thought, as can be seen, for example, in his invention of the binary code and the so-called Calculemus!-model for solving controversies. (...)
  25. Applying mechanical philosophy to web science: The case of social machines.Paul R. Smart, Kieron O’Hara & Wendy Hall - 2021 - European Journal for Philosophy of Science 11 (3):1-29.
    Social machines are a prominent focus of attention for those who work in the field of Web and Internet science. Although a number of online systems have been described as social machines, there is, as yet, little consensus as to the precise meaning of the term “social machine.” This presents a problem for the scientific study of social machines, especially when it comes to the provision of a theoretical framework that directs, informs, and explicates the scientific and engineering activities (...)
  26. Deepfake detection by human crowds, machines, and machine-informed crowds.Matthew Groh, Ziv Epstein, Chaz Firestone & Rosalind Picard - 2022 - Proceedings of the National Academy of Sciences 119 (1):e2110013119.
    The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s (...)
    2 citations
  27. Epistemic virtues of harnessing rigorous machine learning systems in ethically sensitive domains.Thomas F. Burns - 2023 - Journal of Medical Ethics 49 (8):547-548.
    Some physicians, in their care of patients at risk of misusing opioids, use machine learning (ML)-based prediction drug monitoring programmes (PDMPs) to guide their decision making in the prescription of opioids. This can cause a conflict: a PDMP Score can indicate a patient is at a high risk of opioid abuse while a patient expressly reports oppositely. The prescriber is then left to balance the credibility and trust of the patient with the PDMP Score. Pozzi1 argues that a prescriber (...)
  28. An Introduction to Artificial Psychology: Application of Fuzzy Set Theory and Deep Machine Learning in Psychological Research Using R.Hojjatollah Farahani - 2023 - Springer Cham. Edited by Hojjatollah Farahani, Marija Blagojević, Parviz Azadfallah, Peter Watson, Forough Esrafilian & Sara Saljoughi.
    Artificial Psychology (AP) is a highly multidisciplinary field of study in psychology. AP tries to solve problems which occur when psychologists do research and need a robust analysis method. Conventional statistical approaches have deep-rooted limitations. These approaches are excellent on paper but often fail to model the real world. Mind researchers have been trying to overcome this by simplifying the models being studied. This stance has not received much practical attention recently. Promoting and improving artificial intelligence helps mind researchers (...)
  29. Interprétabilité et explicabilité pour l’apprentissage machine : entre modèles descriptifs, modèles prédictifs et modèles causaux. Une nécessaire clarification épistémologique.Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale En Intelligence Artificielle - CNIA 2019.
    The lack of explainability of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modelling of a phenomenon (...)
    1 citation
  30. The Use and Misuse of Counterfactuals in Ethical Machine Learning.Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    3 citations
  31. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine.Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The lack of explainability of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modelling of a (...)
  32. The Boundaries of Meaning: A Case Study in Neural Machine Translation.Yuri Balashov - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 66.
    The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a (...)
  33. Epistemic Normativity & Epistemic Autonomy: The True Belief Machine.Spencer Paulson - 2023 - Philosophical Studies 180 (8):2415-2433.
    Here I will re-purpose Nozick’s (1974) “Experience Machine” thought experiment against hedonism into an argument against Veritic Epistemic Consequentialism. According to VEC, the right action, epistemically speaking, is the one that results in at least as favorable a ratio of true to false belief as any other action available. A consequence of VEC is that it would be epistemically right to outsource all your cognitive endeavors to a matrix-like “True Belief Machine” that uploads true beliefs through artificial stimulation. (...)
  34. Against the Virtual: Kleinherenbrink’s Externality Thesis and Deleuze’s Machine Ontology.Ekin Erkan - 2020 - Cosmos and History 16 (1):492-599.
    Drawing from Arjen Kleinherenbrink's recent book, Against Continuity: Gilles Deleuze's Speculative Realism (2019), this paper undertakes a detailed review of Kleinherenbrink's fourfold "externality thesis" vis-à-vis Deleuze's machine ontology. Reading Deleuze as a philosopher of the actual, this paper renders Deleuzean syntheses as passive contemplations, pulling other (passive) entities into an (active) experience and designating relations as expressed through contraction. In addition to reviewing Kleinherenbrink's book (which argues that the machine ontology is a guiding current that emerges in Deleuze's (...)
  35. Explaining Perception: An Assessment of Current Ecological and Cognitivist Approaches.Christopher Albert Fields - 1985 - Dissertation, University of Colorado at Boulder
    Ecological realism and cognitivism are the two major current contenders in the field of cognitive perceptual theory. This thesis examines these theories, and the debate between them. It shows that the debate, as it exists in the literature, is inconclusive, primarily because of problems in the current formulations of the two contending theories. The most obvious difficulties in the two theories are removed, leaving reconstructed versions of both. The debate is then re-examined in the context of the reconstructed theories. It (...)
  36. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice (...)
    1 citation
  37. Connectionist models of mind: scales and the limits of machine imitation.Pavel Baryshnikov - 2020 - Philosophical Problems of IT and Cyberspace 2 (19):42-58.
    This paper is devoted to some generalizations of the explanatory potential of connectionist approaches to theoretical problems in the philosophy of mind. Both the strengths and the weaknesses of neural network models are considered. Connectionism has close methodological ties with modern neuroscience and neurophilosophy, and this fact strengthens its position in terms of empirical naturalistic approaches. At the same time, however, this direction inherits the weaknesses of the computational approach, and in this case the whole system of anticomputational critical arguments becomes applicable to the connectionist models (...)
  38. Cosmos is a (fatalistic) state machine: Objective theory (cosmos, objective reality, scientific image) vs. Subjective theory (consciousness, subjective reality, manifest image).Xiaoyang Yu - manuscript
    As soon as you believe an imagination to be nonfictional, this imagination becomes your ontological theory of the reality. Your ontological theory (of the reality) can describe a system as the reality. However, actually this system is only a theory/conceptual-space/imagination/visual-imagery of yours, not the actual reality (i.e., the thing-in-itself). An ontological theory (of the reality) actually only describes your (subjective/mental) imagination/visual-imagery/conceptual-space. An ontological theory of the reality, is being described as a situation model (SM). There is no way to prove/disprove (...)
  39. A Theory Explains Deep Learning.Kenneth Kijun Lee & Chase Kihwan Lee - manuscript
    This is our journal for developing Deduction Theory and studying Deep Learning and Artificial Intelligence. Deduction Theory is a Theory of Deducing the World's Relativity by Information Coupling and Asymmetry. We focus on information processing and see intelligence as an information structure that is relatively close to object-oriented, probability-oriented, unsupervised learning, relativity information processing and massive automated information processing. We see deep learning and machine learning as an attempt to make all types of information processing relatively close to probability information processing. We will (...)
  40. Features necessary for a self-conscious robot in the light of “Consciousness Explained” by Daniel Dennett.Jakub Grad - manuscript
    Self-consciousness relates to important themes, such as sentience and personhood, and is often the cornerstone of moral theories (Warren, 1997). However, not much attention is given to future moral creatures of the earth: robots. This may be due to the unsettled status of their experience, which is why I have chosen to find the necessary features of self-consciousness in them. Philosophy of mind is also my interest which I have developed since I rejected the idea of souls and could not (...)
  41. The nonhuman condition: Radical democracy through new materialist lenses.Hans Asenbaum, Amanda Machin, Jean-Paul Gagnon, Diana Leong, Melissa Orlie & James Louis Smith - 2023 - Contemporary Political Theory (Online first):584-615.
    Radical democratic thinking is becoming intrigued by the material situatedness of its political agents and by the role of nonhuman participants in political interaction. At stake here is the displacement of narrow anthropocentrism that currently guides democratic theory and practice, and its repositioning into what we call ‘the nonhuman condition’. This Critical Exchange explores the nonhuman condition. It asks: What are the implications of decentering the human subject via a new materialist reading of radical democracy? Does this reading dilute political (...)
    1 citation
  42. Two challenges for CI trustworthiness and how to address them.Kevin Baum, Eva Schmidt & Maximilian A. Köhl - 2017
    We argue that, to be trustworthy, Computational Intelligence (CI) has to do what it is entrusted to do for permissible reasons and to be able to give rationalizing explanations of its behavior which are accurate and graspable. We support this claim by drawing parallels with trustworthy human persons, and we show what difference this makes in a hypothetical CI hiring system. Finally, we point out two challenges for trustworthy CI and sketch a mechanism which could be (...)
  43. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make (...)
    1 citation
  44. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report (...)
    22 citations
  45. Should we be afraid of AI?Luciano Floridi - 2019 - Aeon Magazine.
    Machines seem to be getting smarter and smarter and much better at human jobs, yet true AI is utterly implausible. This article explains the reasons why this is the case.
    7 citations
  46. Kant on Descartes and the Brutes.Steve Naragon - 1990 - Kant Studien 81 (1):1-23.
    Despite Kant's belief in a universal causal determinism among phenomena and his rejection of any noumenal agency in brutes, he nevertheless rejected Descartes's hypothesis that brutes are machines. Explaining Kant's response to Descartes forms the basis for this discussion of the nature of consciousness and matter in Kant's system. Kant's numerous remarks on animal psychology-as found in his lecture notes and reflections on metaphysics and anthropology-suggest a theory of consciousness and self-consciousness at odds with that traditionally ascribed to him.
    25 citations
  47. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an (...)
  48. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
    2 citations
  49. Local explanations via necessity and sufficiency: unifying theory and practice.David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32:185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for computing explanatory factors (...)
    1 citation
  50. Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models.Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)
    1 citation
1 — 50 / 999