  • Human Extinction and AI: What We Can Learn from the Ultimate Threat. Andrea Lavazza & Murilo Vilaça - 2024 - Philosophy and Technology 37 (1):1-21.
    Human extinction is generally deemed undesirable, although some scholars view it as a potential solution to the problems of the Earth, since it would reduce the moral evil and suffering brought about by humans. We contend that humans collectively have absolute intrinsic value as sentient, conscious and rational entities, and that we should preserve them from extinction. However, severe threats, such as climate change and incurable viruses, might push humanity to the brink of extinction. Should that (...)
  • The future of urban models in the Big Data and AI era: a bibliometric analysis. Marion Maisonobe - 2022 - AI and Society 37 (1):177-194.
    This article examines how the Big Data and AI turn in urban management affects urban research dynamics. Increasing access to large datasets collected in real time could make certain mathematical models developed in research fields related to the management of urban systems obsolete. These ongoing evolutions are the subject of numerous works whose main angle of reflection is the future of cities rather than the transformations at work in the academic field. Our article proposes to grasp the scientific dynamics (...)
  • People Copy the Actions of Artificial Intelligence. Michal Klichowski - 2020 - Frontiers in Psychology 11.
  • Discourse analysis of academic debate of ethics for AGI. Ross Graham - 2022 - AI and Society 37 (4):1519-1532.
    Artificial general intelligence (AGI), defined as machine intelligence with competence equal to or greater than that of humans, is a greatly anticipated technology with non-trivial existential risks. To date, social scientists have dedicated little effort to the ethics of AGI or of AGI researchers. This paper employs inductive discourse analysis of the academic literature of two intellectual groups writing on the ethics of AGI: applied and/or 'basic' scientific disciplines, henceforth referred to as technicians (e.g., computer science, electrical engineering, physics), and philosophy-adjacent disciplines, henceforth referred to as PADs (...)
  • Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence. Elias G. Carayannis & John Draper - 2023 - AI and Society 38 (6):2679-2692.
    This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation-state to wage war for global domination, i.e., ASI-enabled warfare, or because the ASI wages war on its own behalf to establish global domination, i.e., ASI-directed warfare. Presently, few states declare war on or even wage war against each other, in part due to the 1945 UN Charter, which states that Member States should "refrain (...)
  • Automated decision-making and the problem of evil. Andrea Berber - forthcoming - AI and Society:1-10.
    This paper points to a dilemma humanity may face in light of AI advancements: whether to create a world with less evil or to preserve humans' status as moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities introduce new weaknesses into the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
  • COVID-19 and Singularity: Can the Philippines Survive Another Existential Threat? Robert James M. Boyles, Mark Anthony Dacela, Tyrone Renzo Evangelista & Jon Carlos Rodriguez - 2022 - Asia-Pacific Social Science Review 22 (2):181-195.
    In general, existential threats are those that may result in the extinction of the entire human species, or at least significantly endanger its living population. These threats include, but are not limited to, pandemics and the impacts of a technological singularity. As regards pandemics, significant work has already been done on how to mitigate, if not prevent, the aftereffects of this type of disaster. For one, certain problem areas on how to properly manage pandemic responses have already been identified, (...)
  • Robustness to fundamental uncertainty in AGI alignment. G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes, with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and choosing necessary assumptions (...)
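    The decision rule Worley describes can be made concrete with a toy expected-value calculation. The sketch below is illustrative, not code from the manuscript; the utility values are assumptions chosen only to show how a strongly negative failure pole makes false positives (pursuing a research program that unexpectedly fails) far costlier than false negatives (forgoing one that would have succeeded).

    ```python
    # Toy model of the false-positive/false-negative asymmetry under a bimodal
    # outcome distribution. All utilities are illustrative assumptions, NOT
    # values from the manuscript.

    U_SUCCESS = 1.0         # "total success" pole of the bimodal distribution
    U_CATASTROPHE = -100.0  # "existential, catastrophic failure" pole (assumed)
    U_STATUS_QUO = 0.0      # baseline utility of not pursuing a program

    def expected_value(p_sound: float, pursue: bool) -> float:
        """Expected utility of pursuing or ignoring one research program,
        where p_sound is the probability the program is actually sound."""
        if not pursue:
            # False-negative risk: we may forgo a success, never a catastrophe.
            return U_STATUS_QUO
        # False-positive risk: an unsound program lands on the catastrophic pole.
        return p_sound * U_SUCCESS + (1 - p_sound) * U_CATASTROPHE

    if __name__ == "__main__":
        for p in (0.50, 0.90, 0.99):
            print(f"P(sound)={p:.2f}  pursue EV={expected_value(p, True):8.2f}  "
                  f"ignore EV={expected_value(p, False):5.2f}")
    ```

    Under these assumed stakes, pursuing is negative in expectation even at 99% confidence that a program is sound (0.99 x 1.0 + 0.01 x -100.0 = -0.01), which is the shape of the argument for preferring false negatives to false positives.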
  • Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)