  • Should We Discourage AI Extension? Epistemic Responsibility and AI. Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
  • On the Opacity of Deep Neural Networks. Anders Søgaard - 2023 - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philipp Schmidt & Sophie Loidolt - 2023 - Philosophy and Technology 36 (3):1-32.
    In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by humans. (...)
  • Manipulation, Algorithm Design, and the Multiple Dimensions of Autonomy. Reuben Sass - 2024 - Philosophy and Technology 37 (3):1-20.
    Much discussion of the ethics of algorithms has focused on harms to autonomy—especially harms stemming from manipulation. Nonetheless, although manipulation can often be harmful, we suggest that in certain contexts it may not impair autonomy. To fully assess the impact of algorithm design on autonomy, we argue for a need to move beyond a focus on manipulation towards a multidimensional account of autonomy itself. Drawing on the autonomy literature and recent data ethics, we propose a novel account which takes autonomy (...)
  • LLMs beyond the lab: the ethics and epistemics of real-world AI research. Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large-language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly attention. To (...)
  • Personal Autonomy and (Digital) Technology: An Enactive Sensorimotor Framework. Marta Pérez-Verdugo & Xabier E. Barandiaran - 2023 - Philosophy and Technology 36 (4):1-28.
    Many digital technologies, designed and controlled by intensive data-driven corporate platforms, have become ubiquitous for many of our daily activities. This has raised political and ethical concerns over how they might be threatening our personal autonomy. However, not much philosophical attention has been paid to the specific role that their hyper-designed (sensorimotor) interfaces play in this regard. In this paper, we aim to offer a novel framework that can ground personal autonomy on sensorimotor interaction and, from there, directly address how (...)
  • Should I use ChatGPT as an Academic Aid? Laura Gorrieri - 2025 - Philosophy and Technology 38 (1):1-5.
    Aylsworth and Castro’s recent paper, Should I Use ChatGPT to Write My Papers?, argues that students in the humanities have a moral obligation to refrain from using AI tools such as ChatGPT for writing assignments. Their claim is that writing is an autonomy-fostering activity, essential for intellectual growth and critical reflection, and that every agent has the moral duty to respect their own autonomy. While the authors raise significant ethical concerns, the paper lacks the identification of which specific features of (...)