  • A Critical Notice on the Moral Grounding Question in David Chalmers’ Reality+. Anand Jayprakash Vaidya - 2023 - Sophia 62 (1):195-200.
    In this critical discussion, I evaluate David Chalmers’ position on the moral grounding question from his (2022) Reality+. The moral grounding question asks: in virtue of what does an entity x have moral standing? Chalmers argues for the claim that phenomenal consciousness is a necessary condition for moral standing. After a brief introduction to his book, I evaluate his position on the moral grounding question from the perspective of access consciousness as opposed to phenomenal consciousness, as well as the (...)
  • Can machines have emotions? Anand Jayprakash Vaidya - forthcoming - AI and Society:1-16.
    In this paper I articulate the question of whether machines can have emotions. I then reject a common argument for the claim that they cannot, an argument based on their lacking the capacity for feelings. The goal of this paper is not to decisively show that machines can have emotions, but to decisively show that the naïve argument for the conclusion that they cannot needs to be critically examined. I argue that machines that have artificial general intelligence can have emotions based (...)
  • Does artificial intelligence exhibit basic fundamental subjectivity? A neurophilosophical argument. Georg Northoff & Steven S. Gouveia - forthcoming - Phenomenology and the Cognitive Sciences:1-22.
    Does artificial intelligence (AI) exhibit consciousness or self? While this question is hotly debated, here we take a slightly different stance by focusing on those features that make both possible, namely a basic or fundamental subjectivity. Learning from humans and their brains, we first ask what we mean by subjectivity. Subjectivity is manifest in the perspectiveness and mineness of our experience which, ontologically, can be traced to a point of view. Adopting a non-reductive neurophilosophical strategy, we assume that the point (...)
  • Language and Intelligence. Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” (...)
  • Personhood and AI: Why large language models don’t understand us. Jacob Browning - 2024 - AI and Society 39 (5):2499-2506.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this (...)