  • How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at stake (...)
  • How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  • The role of robotics and AI in technologically mediated human evolution: a constructive proposal. Jeffrey White - 2020 - AI and Society 35 (1):177-185.
    This paper proposes that existing computational modeling research programs may be combined into platforms for informing public policy. The main idea is that computational models at select levels of organization may be integrated in natural terms describing biological cognition, thereby normalizing a platform for predictive simulations able to account for both human and environmental costs associated with different action plans and institutional arrangements over short and long time spans while minimizing computational requirements. Building from established research programs, the (...)
  • Existential risks: a philosophical analysis. Phil Torres - 2023 - Inquiry: An Interdisciplinary Journal of Philosophy 66 (4):614-639.
    This paper examines and analyzes five definitions of ‘existential risk.’ It tentatively adopts a pluralistic approach according to which the definition that scholars employ should depend upon the particular context of use. More specifically, the notion that existential risks are ‘risks of human extinction or civilizational collapse’ is best when communicating with the public, whereas equating existential risks with a ‘significant loss of expected value’ may be the most effective definition for establishing existential risk studies as a legitimate field of (...)
  • Agency, qualia and life: connecting mind and body biologically. David Longinotti - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 43-56.
    Many believe that a suitably programmed computer could act for its own goals and experience feelings. I challenge this view and argue that agency, mental causation and qualia are all founded in the unique, homeostatic nature of living matter. The theory was formulated for coherence with the concept of an agent, neuroscientific data and laws of physics. By this method, I infer that a successful action is homeostatic for its agent and can be caused by a feeling - which does (...)
  • Explorative Nanophilosophy as Tecnoscienza: An Italian Perspective on the Role of Speculation in Nanoindustry. Steven Umbrello - 2019 - TECNOSCIENZA: Italian Journal of Science and Technology Studies 10 (1):71-88.
    There are two primary camps into which nanotechnology today can be categorized: normal nanotechnology and speculative nanotechnology. The birth of nanotechnology proper was conceived through discourses of speculative nanotechnology. However, current nanotechnology research has retreated from its speculative promises in favour of more attainable material products. Nonetheless, normal nanotechnology has leveraged the popular support and consequential funding it needs to conduct research and development (R&D) as a result of popular conceptions of speculative nanotechnology and its promises. Similarly, the scholarly literature (...)
  • Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
  • A Value-Sensitive Design Approach to Intelligent Agents. Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter presents a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a (...)
  • Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a severity and probability comparable to risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to (...)
  • International governance of advancing artificial intelligence. Nicholas Emery-Xu, Richard Jordan & Robert Trager - forthcoming - AI and Society:1-26.
    New technologies with military applications may demand new modes of governance. In this article, we develop a taxonomy of technology governance forms, outline their strengths, and red-team their weaknesses. In particular, we consider the challenges and opportunities posed by advancing artificial intelligence, which is likely to have substantial dual-use properties. We conclude that subnational governance, though prevalent and mitigating some risks, is insufficient when the individual rewards from societally harmful actions outweigh normative sanctions, as is likely to be the case (...)
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics. Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
  • The race for an artificial general intelligence: implications for public policy. Wim Naudé & Nicola Dimitri - 2020 - AI and Society 35 (2):367-379.
    An arms race for an artificial general intelligence would be detrimental to, and even pose an existential threat to, humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely (...)
  • On the promotion of safe and socially beneficial artificial intelligence. Seth D. Baum - 2017 - AI and Society 32 (4):543-551.
    This paper discusses means for promoting artificial intelligence that is designed to be safe and beneficial for society. The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard for social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints (...)
  • Dreyfus on the “Fringe”: information processing, intelligent activity, and the future of thinking machines. Jeffrey White - 2019 - AI and Society 34 (2):301-312.
    From his preliminary analysis in 1965, Hubert Dreyfus projected a future much different from those with which his contemporaries were practically concerned, tempering their optimism in realizing something like human intelligence through conventional methods. At that time, he advised that there was nothing “directly” to be done toward machines with human-like intelligence, and that practical research should aim at a symbiosis between human beings and computers, with computers doing what they do best: processing discrete symbols in formally structured problem domains. (...)