References

  1. Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Thilo Hagendorff - 2024 - Minds and Machines 34 (4):1-27.
    The advent of generative artificial intelligence and its widespread adoption in society have engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, focusing especially on large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them (...)
  2. Plagiarism, Academic Ethics, and the Utilization of Generative AI in Academic Writing. Julian Koplin - 2023 - International Journal of Applied Philosophy 37 (2):17-40.
    In the wake of ChatGPT’s release, academics and journal editors have begun making important decisions about whether and how to integrate generative artificial intelligence (AI) into academic publishing. Some argue that AI outputs in scholarly works constitute plagiarism and so should be disallowed by academic journals. Others suggest that it is acceptable to integrate AI output into academic papers, provided that its contributions are transparently disclosed. Drawing on Taylor’s work on academic norms, this paper argues against both views. Unlike (...)
  3. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, since it promotes trust in an entity that cannot be trustworthy, but also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)