  • Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure. Paul Formosa, Sarah Bankins, Rita Matulionyte & Omid Ghasemi - forthcoming - AI and Society.
    The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainty about how different degrees of Generative AI assistance in the production of text impact assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel as part of a 3 (high, medium, (...)
  • Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. Thilo Hagendorff - 2024 - Minds and Machines 34 (4):1-27.
    The advent of generative artificial intelligence and the widespread adoption of it in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them (...)
  • Deficient epistemic virtues and prevalence of epistemic vices as precursors to transgressions in research misconduct. Bor Luen Tang - 2024 - Research Ethics 20 (2):272-287.
    Scientific research is supposed to acquire or generate knowledge, but such a purpose would be severely undermined by instances of research misconduct (RM) and questionable research practices (QRP). RM and QRP are often framed in terms of moral transgressions by individuals (bad apples) whose aberrant acts could be made conducive by shortcomings in regulatory measures of organizations or institutions (bad barrels). This notion presupposes, to an extent, that the erring parties know exactly what they are doing is wrong and morally (...)
  • (4 other versions) Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - Medicine, Health Care and Philosophy 26 (4):499-503.
    Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be (...)
  • (4 other versions) Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - American Journal of Bioethics Neuroscience 14 (4):337-340.
    The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects o...
  • (4 other versions) Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - Hastings Center Report 53 (5):3-6.
    Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be (...)
  • Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Mohammad Hosseini & David B. Resnik - forthcoming - Research Ethics.
    Journals and publishers are increasingly using artificial intelligence (AI) to screen submissions for potential misconduct, including plagiarism and data or image manipulation. While using AI can enhance the integrity of published manuscripts, it can also increase the risk of false/unsubstantiated allegations. Ambiguities related to journals’ and publishers’ responsibilities concerning fairness and transparency also raise ethical concerns. In this Topic Piece, we offer the following guidance: (1) All cases of suspected misconduct identified by AI tools should be carefully reviewed by humans (...)
  • (4 other versions) Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - American Journal of Bioethics 24 (3):5-8.
    The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects o...
  • The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing. David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI) large language models (LLMs), such as OpenAI’s ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...