  • How can we assess whether to trust collectives of scientists? Elinor Clark - forthcoming - British Journal for the Philosophy of Science.
    A great many important decisions we make in life depend on scientific information that we are not in a position to assess. So it seems we must defer to experts. By now there are a variety of criteria on offer by which non-experts can judge the trustworthiness of a scientist responsible for producing or promulgating this information. But science is, for the most part, a collective not an individual enterprise. This paper explores which of the criteria for judging the trustworthiness (...)
  • “Technical” Contributors and Authorship Distribution in Health Science. Elise Smith - 2023 - Science and Engineering Ethics 29 (4):1-19.
    In health sciences, technical contributions may be undervalued and excluded in the author byline. In this paper, I demonstrate how authorship is a historical construct which perpetuates systemic injustices including technical undervaluation. I make use of Pierre Bourdieu’s conceptual work to demonstrate how the power dynamics at play in academia make it very challenging to change the habitual state or “habitus”. To counter this, I argue that we must reconceive technical contributions to not be a priori less important based on (...)
  • The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Mohammad Hosseini, David B. Resnik & Kristi Holmes - 2023 - Research Ethics 19 (4):449-465.
    In this article, we discuss ethical issues related to using and disclosing artificial intelligence (AI) tools, such as ChatGPT and other systems based on large language models (LLMs), to write or edit scholarly manuscripts. Some journals, such as Science, have banned the use of LLMs because of the ethical problems they raise concerning responsible authorship. We argue that this is not a reasonable response to the moral conundrums created by the use of LLMs because bans are unenforceable and would encourage (...)