  • Balancing AI and academic integrity: what are the positions of academic publishers and universities? Bashar Haruna Gulumbe, Shuaibu Muhammad Audu & Abubakar Muhammad Hashim - forthcoming - AI and Society:1-10.
    This paper navigates the relationship between the growing influence of Artificial Intelligence (AI) and the foundational principles of academic integrity. It offers an in-depth analysis of how key academic stakeholders—publishers and universities—are crafting strategies and guidelines to integrate AI into the sphere of scholarly work. These efforts are not merely reactionary but are part of a broader initiative to harness AI’s potential while maintaining ethical standards. The exploration reveals a diverse array of stances, reflecting the varied applications of AI in (...)
  • Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (2):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds, could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  • The second-order problem of other minds. Ori Friedman & Arber Tasimi - 2023 - Behavioral and Brain Sciences 46:e31.
    The target article proposes that people perceive social robots as depictions rather than as genuine social agents. We suggest that people might instead view social robots as social agents, albeit agents with more restricted capacities and moral rights than humans. We discuss why social robots, unlike other kinds of depictions, present a special challenge for testing the depiction hypothesis.
  • Moral Uncertainty and Our Relationships with Unknown Minds. John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  • How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI text generator. Martin Hinton & Jean H. M. Wagemans - 2023 - Argument and Computation 14 (1):59-74.
    In this paper, we use a pseudo-algorithmic procedure for assessing an AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) in evaluating the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific (...)
  • Do androids dream of informed consent? The need to understand the ethical implications of experimentation on simulated beings. Alexander Gariti - 2024 - Monash Bioethics Review 42 (2):260-278.
    Creating simulations of the world can be a valuable way to test new ideas, predict the future, and broaden our understanding of a given topic. Presumably, the more similar the simulation is to the real world, the more transferable the knowledge generated in the simulation will be and, therefore, the more useful. As such, there is an incentive to create more advanced and representative simulations of the real world. Simultaneously, there are ethical and practical limitations to what can be done (...)
  • “I Am Not Your Robot:” the metaphysical challenge of humanity’s AIS ownership. Tyler L. Jaynes - 2021 - AI and Society 37 (4):1689-1702.
    Despite the reality that self-learning artificial intelligence systems (SLAIS) are gaining in sophistication, humanity’s focus regarding SLAIS-human interactions is unnervingly centred upon transnational commercial sectors and, most generally, around issues of intellectual property law. But as SLAIS gain greater environmental interaction capabilities in digital spaces, or the ability to self-author code to drive their development as algorithmic models, a concern arises as to whether a system that displays a “deceptive” level of human-like engagement with users in our physical world ought (...)