  • Tests of Animal Consciousness are Tests of Machine Consciousness. Leonard Dung - forthcoming - Erkenntnis.
    If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate, even if AI can be constructed ad hoc specifically to pass this test. This is (...)
  • The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
  • The Measurement Problem of Consciousness. Heather Browning & Walter Veit - 2020 - Philosophical Topics 48 (1):85-108.
    This paper addresses what we consider to be the most pressing challenge for the emerging science of consciousness: the measurement problem of consciousness. That is, by what methods can we determine the presence of, and properties of, consciousness? Most methods are currently developed through evaluation of the presence of consciousness in humans, and here we argue that there are particular problems in application of these methods to nonhuman cases—what we call the indicator validity problem and the extrapolation problem. The first (...)
  • Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed. Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The _epistemic objection_ derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a _criterion_ to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient while ethical questions regarding the possession of moral (...)
  • Lessons From the Quest for Artificial Consciousness: The Emergence Criterion, Insight-Oriented AI, and Imago Dei. Sara Lumbreras - 2022 - Zygon 57 (4):963-983.
    There are several lessons that can already be drawn from the current research programs on strong AI and building conscious machines, even if they arguably have not produced fruits yet. The first one is that functionalist approaches to consciousness do not account for the key importance of subjective experience and can be easily confounded by the way in which algorithms work and succeed. Authenticity and emergence are key concepts that can be useful in discerning valid approaches versus invalid ones and (...)
  • Hume’s Law as Another Philosophical Problem for Autonomous Weapons Systems. Robert James M. Boyles - 2021 - Journal of Military Ethics 20 (2):113-128.
    This article contends that certain types of Autonomous Weapons Systems (AWS) are susceptible to Hume’s Law. Hume’s Law highlights the seeming impossibility of deriving moral judgments, if not all evaluative ones, from purely factual premises. If autonomous weapons make use of factual data from their environments to carry out specific actions, then justifying their ethical decisions may prove to be intractable in light of the said problem. In this article, Hume’s original formulation of the no-ought-from-is thesis is evaluated in relation (...)