  • Existence hacked: meaning, freedom, death, and intimacy in the age of AI. Florentina C. Andreescu - forthcoming - AI and Society:1-13.
    Everyday life is increasingly restructured by algorithms that participate in our affective and creative experiences not only as medium, but also as partners, co-creators, mentors, and figures of authority. Their agentic capacity is enabled by big data capitalism as well as by the newly acquired ability to generate meaning (text) and visuals (images, videos, holograms). AI technology engages with aspects of existence that constitute the core of what it means to be human. Promising transcendence of existential givens, it induces an (...)
  • Phenomenal transparency and the boundary of cognition. Julian Hauser & Hadeel Naeem - forthcoming - Phenomenology and the Cognitive Sciences:1-20.
    Phenomenal transparency was once widely believed to be necessary for cognitive extension. Recently, this claim has come under attack, with a new consensus coalescing around the idea that transparency is necessary neither for internal nor for extended cognitive processes. We take these recent critiques as an opportunity to refine the concept of transparency relevant for cognitive extension. In particular, we highlight that transparency concerns an agent’s employment of a resource – and that such employment is compatible with an agent consciously apprehending (...)
  • A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is that of large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  • Real Feeling and Fictional Time in Human-AI Interactions. Joel Krueger & Tom Roberts - 2024 - Topoi 43 (3).
    As technology improves, artificial systems are increasingly able to behave in human-like ways: holding a conversation; providing information, advice, and support; or taking on the role of therapist, teacher, or counsellor. This enhanced behavioural complexity, we argue, encourages deeper forms of affective engagement on the part of the human user, with the artificial agent helping to stabilise, subdue, prolong, or intensify a person’s emotional condition. Here, we defend a fictionalist account of human/AI interaction, according to which these encounters involve an (...)
  • Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts, and our affective and emotional life is no exception. In this article, we analyze one way in which AI-based technologies can affect it. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
  • Incels, autism, and hopelessness: affective incorporation of online interaction as a challenge for phenomenological psychopathology. Sanna K. Tirkkonen & Daniel Vespermann - 2023 - Frontiers in Psychology 14:1235929.
    Recent research has drawn attention to the prevalence of self-reported autism within online communities of involuntary celibates (incels). These studies suggest that some individuals with autism may be particularly vulnerable to the impact of incel forums and the hopelessness they generate. However, a more precise description of the experiential connection between inceldom, self-reported autism, and hopelessness has remained unarticulated. Therefore, this article combines empirical studies on the incel community with phenomenological and embodiment approaches to autism, hopelessness, and online affectivity. We (...)
  • The ethics of the extended mind: Mental privacy, manipulation and agency. Robert William Clowes, Paul R. Smart & Richard Heersmink - 2024 - In Jan-Hendrik Heinrichs, Birgit Beck & Orsolya Friedrich (eds.), Neuro-ProsthEthics: Ethical Implications of Applied Situated Cognition. Berlin, Germany: J. B. Metzler. pp. 13–35.
    According to proponents of the extended mind, bio-external resources, such as a notebook or a smartphone, are candidate parts of the cognitive and mental machinery that realises cognitive states and processes. The present chapter discusses three areas of ethical concern associated with the extended mind, namely mental privacy, mental manipulation, and agency. We also examine the ethics of the extended mind from the standpoint of three general normative frameworks, namely, consequentialism, deontology, and virtue ethics.
  • Transparency and its roles in realizing greener AI. Omoregie Charles Osifo - 2023 - Journal of Information, Communication and Ethics in Society 21 (2):202-218.
    Purpose: The purpose of this paper is to identify the key roles of transparency in making artificial intelligence (AI) greener (i.e. causing lower carbon dioxide emissions) during the design, development and manufacturing stages or processes of AI technologies (e.g. apps, systems, agents, tools, artifacts), and to use the “explicability requirement” as an essential value within the framework of transparency in supporting arguments for realizing greener AI. Design/methodology/approach: The approach of this paper is argumentative, supported by ideas from existing literature (...)
  • On human centered artificial intelligence. [REVIEW] Gloria Andrada - 2023 - Metascience.
  • Phenomenal transparency and the extended mind. Paul Smart, Gloria Andrada & Robert William Clowes - 2022 - Synthese 200 (4):1-25.
    Proponents of the extended mind have suggested that phenomenal transparency may be important to the way we evaluate putative cases of cognitive extension. In particular, it has been suggested that in order for a bio-external resource to count as part of the machinery of the mind, it must qualify as a form of transparent equipment or transparent technology. The present paper challenges this claim. It also challenges the idea that phenomenological properties can be used to settle disputes regarding the constitutional (...)
  • Entangled AI: artificial intelligence that serves the future. Alexandra Köves, Katalin Feher, Lilla Vicsek & Máté Fischer - forthcoming - AI and Society:1-12.
    While debate is heating up over the development of AI and its perceived impacts on human society, policymaking is struggling to catch up with the demand to exercise some regulatory control over its rapid advancement. This paper aims to introduce the concept of entangled AI, which emerged from participatory backcasting research with an AI expert panel. The concept of entanglement has been adapted from quantum physics to capture the envisioned form of artificial intelligence in which a strong interconnectedness between (...)
  • Should We Discourage AI Extension? Epistemic Responsibility and AI. Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
  • Embedding AI in society: ethics, policy, governance, and impacts. Michael Pflanzer, Veljko Dubljević, William A. Bauer, Darby Orcutt, George List & Munindar P. Singh - 2023 - AI and Society 38 (4):1267-1271.
  • Personal Autonomy and (Digital) Technology: An Enactive Sensorimotor Framework. Marta Pérez-Verdugo & Xabier E. Barandiaran - 2023 - Philosophy and Technology 36 (4):1-28.
    Many digital technologies, designed and controlled by intensive data-driven corporate platforms, have become ubiquitous in many of our daily activities. This has raised political and ethical concerns over how they might be threatening our personal autonomy. However, little philosophical attention has been paid to the specific role that their hyper-designed (sensorimotor) interfaces play in this regard. In this paper, we aim to offer a novel framework that can ground personal autonomy in sensorimotor interaction and, from there, directly address how (...)