  1. Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Mohammad Hosseini & David B. Resnik - forthcoming - Research Ethics.
    Journals and publishers are increasingly using artificial intelligence (AI) to screen submissions for potential misconduct, including plagiarism and data or image manipulation. While using AI can enhance the integrity of published manuscripts, it can also increase the risk of false/unsubstantiated allegations. Ambiguities related to journals’ and publishers’ responsibilities concerning fairness and transparency also raise ethical concerns. In this Topic Piece, we offer the following guidance: (1) All cases of suspected misconduct identified by AI tools should be carefully reviewed by humans (...)
  2. Deficient epistemic virtues and prevalence of epistemic vices as precursors to transgressions in research misconduct. Bor Luen Tang - 2024 - Research Ethics 20 (2):272-287.
    Scientific research is supposed to acquire or generate knowledge, but such a purpose would be severely undermined by instances of research misconduct (RM) and questionable research practices (QRP). RM and QRP are often framed in terms of moral transgressions by individuals (bad apples) whose aberrant acts could be made conducive by shortcomings in regulatory measures of organizations or institutions (bad barrels). This notion presupposes, to an extent, that the erring parties know exactly what they are doing is wrong and morally (...)
  3. Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - American Journal of Bioethics Neuroscience 14 (4):337-340.
    The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects o...
  4. Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - Hastings Center Report 53 (5):3-6.
    Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be (...)
  5. The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing. David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI) large language models (LLMs), such as OpenAI's ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...
  6. Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - American Journal of Bioethics 24 (3):5-8.
    The new generative artificial intelligence (AI) tools, and especially the large language models (LLMs) of which ChatGPT is the most prominent example, have the potential to transform many aspects o...
  7. Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - Medicine, Health Care and Philosophy 26 (4):499-503.
    Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be (...)