Citations

  • Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers. Robert Sparrow - 2024 - AI and Society 39 (5):2439-2444.
    When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what (...)
  • Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  • Freedom, AI and God: why being dominated by a friendly super-AI might not be so bad. Morgan Luck - forthcoming - AI and Society:1-8.
    One response to the existential threat posed by a super-intelligent AI is to design it to be friendly to us. Some have argued that even if this were possible, the resulting AI would treat us as we do our pets. Sparrow (AI & Soc. https://doi.org/10.1007/s00146-023-01698-x, 2023) argues that this would be a bad outcome, for such an AI would dominate us—resulting in our freedom being diminished (Pettit in Just freedom: A moral compass for a complex world. WW Norton & Company, (...)
  • Concepts of Existential Catastrophe. Hilary Greaves - 2024 - The Monist 107 (2):109-129.
    The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind (...)
  • Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
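    (A minimal sketch of the agent loop described here appears after this list.)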
  • Catastrophic risk. H. Orri Stefánsson - 2020 - Philosophy Compass 15 (11):1-11.
    Catastrophic risk raises questions that are not only of practical importance but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. It also raises the question of how to respond rationally to such risks. The rational response arguably depends in part on the severity of the uncertainty: for instance, whether quantitative probabilistic information is available, whether only comparative likelihood information is available, or neither. Finally, catastrophic risk (...)
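    (A worked sketch of decision rules under these different informational conditions appears after this list.)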
  • The case for global governance of AI: arguments, counter-arguments, and challenges ahead. Mark Coeckelbergh - forthcoming - AI and Society:1-4.
    It is increasingly recognized that, as artificial intelligence becomes more powerful and pervasive in society and creates risks and ethical issues that cross borders, a global approach to the governance of these risks is needed. But why, exactly, do we need this, and what does it mean? In this Open Forum paper, the author argues for global governance of AI on moral grounds but also outlines the governance challenges that this project raises.
  • Three lines of defense against risks from AI. Jonas Schuett - forthcoming - AI and Society:1-15.
    Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management. The three lines of defense (3LoD) model, which is considered best practice in many industries, might offer a solution. It is a risk management framework that helps organizations to assign and coordinate risk management roles and responsibilities. In this article, I suggest ways in which AI companies could (...)
  • Utilitarianism, decision theory and eternity. Frank Arntzenius - 2014 - Philosophical Perspectives 28 (1):31-58.
  • A comment on the pursuit to align AI: we do not need value-aligned AI, we need AI that is risk-averse. Rebecca Raper - forthcoming - AI and Society:1-3.
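
To make the language-agent architecture described in the Goldstein & Kirk-Giannini abstract concrete, here is a minimal sketch in Python. It is not their implementation: `call_llm`, `LanguageAgent`, and the canned response are hypothetical stand-ins for a real chat-completion API and agent framework. The point is only the loop the abstract names — an LLM called repeatedly, with the goal and accumulated beliefs stored as human-readable text.

```python
# Hypothetical sketch of a language agent, assuming only the architecture the
# abstract describes: an LLM called in a loop, with goals and beliefs kept as
# human-readable text. call_llm stands in for any real chat-completion API.

def call_llm(prompt: str) -> str:
    """Placeholder LLM call; a real agent would query a language model here."""
    return f"next action given {len(prompt)} chars of context"

class LanguageAgent:
    def __init__(self, goal: str):
        self.goal = goal              # goal specified in natural language
        self.memory: list[str] = []   # beliefs/plans, human-readable by design

    def step(self) -> str:
        # Assemble the agent's state into a prompt and ask the LLM what to do.
        prompt = (
            f"Goal: {self.goal}\n"
            "Beliefs so far:\n" + "\n".join(self.memory) +
            "\nWhat is the next action?"
        )
        action = call_llm(prompt)
        self.memory.append(action)    # the record stays legible to humans
        return action

agent = LanguageAgent(goal="summarise the literature on AI existential risk")
for _ in range(3):
    print(agent.step())
```

Because the agent's state is plain text rather than opaque weights, its "beliefs" and "desires" can be inspected directly — the property the abstract connects to folk-psychological predictability.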
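
Similarly, the distinction in Stefánsson's abstract between responding to risk with and without probabilistic information can be illustrated with two standard decision rules: expected-value maximisation when probabilities are available, and maximin when they are not. The acts, outcomes, and numbers below are invented purely for illustration; the paper does not supply them.

```python
# Illustrative decision rules for two informational conditions: quantitative
# probabilities available (expected value) versus no probabilistic
# information at all (maximin). All payoffs here are made up.

acts = {
    "mitigate": {"catastrophe": -100, "no_catastrophe": -1},
    "ignore":   {"catastrophe": -1000, "no_catastrophe": 0},
}

def expected_value(act: str, probs: dict[str, float]) -> float:
    # Probability-weighted sum over outcomes, usable only when probs exist.
    return sum(probs[state] * value for state, value in acts[act].items())

def maximin(acts: dict) -> str:
    # With no probabilistic information, pick the act whose worst
    # outcome is least bad.
    return max(acts, key=lambda a: min(acts[a].values()))

probs = {"catastrophe": 0.01, "no_catastrophe": 0.99}
print(max(acts, key=lambda a: expected_value(a, probs)))  # given probabilities
print(maximin(acts))                                      # under ignorance
```

Here the two rules happen to agree; the philosophical interest lies in cases where they come apart, and in what rationality requires when even comparative likelihoods are unavailable.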