
Citations of:

AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI

José Hernández-Orallo & Karina Vold, in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 507-513 (2019)

  • Reclaiming Control: Extended Mindreading and the Tracking of Digital Footprints. Uwe Peters - 2022 - Social Epistemology 36 (3):267-282.
    It is well known that on the Internet, computer algorithms track our website browsing, clicks, and search history to infer our preferences, interests, and goals. The nature of this algorithmic tracking remains unclear, however. Does it involve what many cognitive scientists and philosophers call ‘mindreading’, i.e., an epistemic capacity to attribute mental states to people to predict, explain, or influence their actions? Here I argue that it does. This is because humans are in a particular way embedded in the process (...)
  • Is a Subpersonal Virtue Epistemology Possible? Hadeel Naeem - 2023 - Philosophical Explorations 26 (3):350-367.
    Virtue reliabilists argue that an agent can only gain knowledge if she responsibly employs a reliable belief-forming process. This in turn demands that she is either aware that her process is reliable or is sensitive to her process’s reliability in some other way. According to a recent argument in the philosophy of mind, sometimes a cognitive mechanism (i.e. precision estimation) can ensure that a belief-forming process is only employed when it’s reliable. If this is correct, epistemic responsibility can sometimes be (...)
  • Twenty Years Beyond the Turing Test: Moving Beyond the Human Judges Too. José Hernández-Orallo - 2020 - Minds and Machines 30 (4):533-562.
    In the last 20 years the Turing test has been left further behind by new developments in artificial intelligence. At the same time, however, these developments have revived some key elements of the Turing test: imitation and adversarialness. On the one hand, many generative models, such as generative adversarial networks, build imitators under an adversarial setting that strongly resembles the Turing test. The term “Turing learning” has been used for this kind of setting. On the other hand, AI benchmarks are (...)
  • Varieties of Artifacts: Embodied, Perceptual, Cognitive, and Affective. Richard Heersmink - 2021 - Topics in Cognitive Science (4):1-24.
    The primary goal of this essay is to provide a comprehensive overview and analysis of the various relations between material artifacts and the embodied mind. A secondary goal of this essay is to identify some of the trends in the design and use of artifacts. First, based on their functional properties, I identify four categories of artifacts co-opted by the embodied mind, namely (1) embodied artifacts, (2) perceptual artifacts, (3) cognitive artifacts, and (4) affective artifacts. These categories can overlap and (...)
  • ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact. Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
  • AI and Ethics When Human Beings Collaborate With AI Agents. José J. Cañas - 2022 - Frontiers in Psychology 13.
    The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is (...)
  • AI Assistants and the Paradox of Internal Automaticity. William A. Bauer & Veljko Dubljević - 2019 - Neuroethics 13 (3):303-310.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)