Results for 'Dangers of AI'

1000+ found
  1. Facing Janus: An Explanation of the Motivations and Dangers of AI Development. Aaron Graifman - manuscript
    This paper serves as an intuition-building mechanism for understanding the basics of AI, misalignment, and the reasons why strong AI is being pursued. The approach is to engage with both pro and anti AI development arguments to gain a deeper understanding of both views, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for why we would want to pursue strong AI anyway. The paper delves into various aspects (...)
  2. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks. Trystan S. Goetze - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24).
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
  3. Are editors of flesh and blood necessary for meeting yet another danger with AI? Johan Gamper - manuscript
    As a writer, it is hard to defend oneself from the accusation of being a robot. Even though the argument is ad hominem, it is perhaps too difficult to create a “reversed” Turing test. This article suggests that editors of flesh and blood are still necessary.
  4. Companion robots: the hallucinatory danger of human-robot interactions. Piercosma Bisconti & Daniele Nardi - 2018 - In Piercosma Bisconti & Daniele Nardi (eds.), AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 17-22.
    The advent of the so-called Companion Robots is raising many ethical concerns among scholars and in public opinion. Focusing mainly on robots caring for the elderly, in this paper we analyze these concerns to distinguish which are directly ascribable to robotics, and which are instead preexistent. One of these is the “deception objection”, namely the ethical unacceptability of deceiving the user about the simulated nature of the robot’s behaviors. We argue that this charge, as currently formulated, is inconsistent. (...)
  5. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  6. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    3 citations
  7. Assessing the future plausibility of catastrophically dangerous AI. Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  8. Bioinformatics advances in saliva diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    1 citation
  9. Catastrophically Dangerous AI is Possible Before 2030. Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  10. Towards a Body Fluids Ontology: A unified application ontology for basic and translational science. Jiye Ai, Mauricio Barcellos Almeida, André Queiroz De Andrade, Alan Ruttenberg, David Tai Wai Wong & Barry Smith - 2011 - Second International Conference on Biomedical Ontology, Buffalo, NY 833:227-229.
    We describe the rationale for an application ontology covering the domain of human body fluids that is designed to facilitate representation, reuse, sharing and integration of diagnostic, physiological, and biochemical data. We briefly review the Blood Ontology (BLO), Saliva Ontology (SALO) and Kidney and Urinary Pathway Ontology (KUPO) initiatives. We discuss the methods employed in each, and address the project of using them as a starting point for a unified body fluids ontology resource. We conclude with a description of how the (...)
  11. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion. Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body be allowed to (...)
  12. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Artificial Intelligence Safety and Security. Louisville: CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its bargaining power. (...)
    3 citations
  13. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about (...)
    4 citations
  14. The Blood Ontology: An ontology in the domain of hematology. Mauricio Barcellos Almeida, Anna Barbara de Freitas Carneiro Proietti, Jiye Ai & Barry Smith - 2011 - In Proceedings of the Second International Conference on Biomedical Ontology, Buffalo, NY, July 28-30, 2011 (CEUR Workshop Proceedings, 833).
    Despite the importance of human blood to clinical practice and research, hematology and blood transfusion data remain scattered throughout a range of disparate sources. This lack of systematization concerning the use and definition of terms poses problems for physicians and biomedical professionals. We are introducing here the Blood Ontology, an ongoing initiative designed to serve as a controlled vocabulary for use in organizing information about blood. The paper describes the scope of the Blood Ontology, its stage of development and some (...)
  15. Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
    1 citation
  16. The expected AI as a sociocultural construct and its impact on the discourse on technology. Auli Viidalepp - 2023 - Dissertation, University of Tartu
    The thesis introduces and criticizes the discourse on technology, with a specific reference to the concept of AI. The discourse on AI is particularly saturated with reified metaphors which drive connotations and delimit understandings of technology in society. To better analyse the discourse on AI, the thesis proposes the concept of “Expected AI”, a composite signifier filled with historical and sociocultural connotations, and numerous referent objects. Relying on cultural semiotics, science and technology studies, and a diverse selection of heuristic concepts, (...)
  17. The Global Catastrophic Risks Connected with Possibility of Finding Alien AI During SETI. Alexey Turchin - 2018 - Journal of the British Interplanetary Society 71 (2):71-79.
    Abstract: This article examines risks associated with the program of passive search for alien signals (Search for Extraterrestrial Intelligence, or SETI) connected with the possibility of finding an alien transmission that includes a description of an AI system aimed at self-replication (SETI-attack). A scenario of potential vulnerability is proposed, as well as the reasons why the proportion of dangerous to harmless signals may be high. The article identifies necessary conditions for the feasibility and effectiveness of the SETI-attack: ETI existence, possibility of AI, (...)
  18. First human upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  19. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
    14 citations
  20. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  21. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
    2 citations
  22. Philosophy of AI: A structured overview. Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so (...)
  23. The Whiteness of AI. Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    24 citations
  24. Artificial Multipandemic as the Most Plausible and Dangerous Global Catastrophic Risk Connected with Bioweapons and Synthetic Biology. Alexey Turchin, Brian Patrick Green & David Denkenberger - manuscript
    Pandemics have been suggested as global risks many times, but it has been shown that the probability of human extinction due to one pandemic is small, as it will not be able to affect and kill all people, but likely only half, even in the worst cases. Assuming that the probability of the worst pandemic to kill a person is 0.5, and assuming linear interaction between different pandemics, 30 strong pandemics running simultaneously will kill everyone. Such situations cannot happen naturally, (...)
  25. Artificial Intelligence for the Internal Democracy of Political Parties. Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri & Luciano Floridi - manuscript
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to the collection of partial data, rare updates, and significant demands on resources. To address these issues, the article suggests that specific data management and Machine Learning (ML) techniques, such as natural language processing (...)
  26. A danger of definition: Polar predicates in moral theory. Mark Alfano - 2009 - Journal of Ethics and Social Philosophy 3 (3):1-14.
    In this paper, I use an example from the history of philosophy to show how independently defining each side of a pair of contrary predicates is apt to lead to contradiction. In the Euthyphro, piety is defined as that which is loved by some of the gods while impiety is defined as that which is hated by some of the gods. Socrates points out that since the gods harbor contrary sentiments, some things are both pious and impious. But “pious” and (...)
    5 citations
  27. The promise and perils of AI in medicine. Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    3 citations
  28. The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
    141 citations
  29. Toward an Ethics of AI Assistants: an Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    24 citations
  30. Unpredictability of AI. Roman Yampolskiy - manuscript
    The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely the Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.
    3 citations
  31. One danger of biomedical enhancements. Alex Rajczi - 2008 - Bioethics 22 (6):328-336.
    In the near future, our society may develop a vast array of medical enhancements. There is a large debate about enhancements, and that debate has identified many possible harms. This paper describes a harm that has so far been overlooked. Because of some particular features of enhancements, we could come to place more value on them than we actually should. This over-valuation would lead us to devote time, energy, and resources to enhancements that could be better spent somewhere else. That (...)
    2 citations
  32. Pantheism and the Dangers of Hegelianism in Nineteenth-Century France. Kirill Chepurin - 2023 - In Kirill Chepurin, Adi Efal-Lautenschläger, Daniel Whistler & Ayşe Yuva (eds.), Hegel and Schelling in Early Nineteenth-Century France: Volume 2 - Studies. Cham: Springer. pp. 143-169.
    This study rethinks the critical reception of Hegelianism in nineteenth-century France, arguing that this reception orbits around "pantheism" as the central political-theological threat. It is Hegel’s alleged pantheism that French authors often take to be the root cause of the other dangers that become associated with Hegelianism over the course of the century, ranging from the defence of the status quo to radical socialism to pangermanism. Moreover, the widespread fixation on the term "pantheism" as the enemy of all that (...)
  33. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  34. Unownability of AI: Why Legal Ownership of Artificial Intelligence is Hard. Roman Yampolskiy - manuscript
    To hold developers responsible, it is important to establish the concept of AI ownership. In this paper we review different obstacles to ownership claims over advanced intelligent systems, including unexplainability, unpredictability, uncontrollability, self-modification, AI-rights, ease of theft when it comes to AI models and code obfuscation. We conclude that it is difficult if not impossible to establish ownership claims over AI models beyond a reasonable doubt.
    1 citation
  35. The Dangers of Re-colonization: Possible Boundaries Between Latin American Philosophy and Indigenous Philosophy from Latin America. Jorge Sanchez-Perez - 2023 - Comparative Philosophy 14 (2).
    The field of Latin American philosophy has established itself as a relevant subfield of philosophical inquiry. However, there might be good reasons to consider that our focus on the subfield could have distracted us from considering another subfield that, although it might share some geographical proximity, does not share the same historical basic elements. In this paper, I argue for a possible and meaningful conceptual difference between Latin American Philosophy and Indigenous philosophy produced in Latin America. First, I raise what (...)
  36. OVERVIEW OF AI ETHICS IN CONTEMPORARY EURASIAN SOCIETY. Ammar Younas - 2022 - 34 International Scientific Conference of Young Scientists "Science and Innovation": Collection of Scientific Papers: October 20, 2022.
  37. Unveiling the Creation of AI-Generated Artworks: Broadening Worringerian Abstraction and Empathy Beyond Contemplation. Leonardo Arriagada - 2024 - Estudios Artísticos 10 (16):142-158.
    In his groundbreaking work, Abstraction and Empathy, Wilhelm Worringer delved into the intricacies of various abstract and figurative artworks, contending that they evoke distinct impulses in the human audience—specifically, the urges towards abstraction and empathy. This article asserts the presence of empirical evidence supporting the extension of Worringer’s concepts beyond the realm of art appreciation to the domain of art-making. Consequently, it posits that abstraction and empathy serve as foundational principles guiding the production of both abstract and figurative art. This (...)
  38. Dangers of Catcalling: Exploring the Lived Experiences of Women Catcalled in Quezon City. Mary Grace Pagurayan, Phoebe Bayta, Daizz Antoinette Reyes, Zhaera Mae Carido, Mark Apigo, Juliane Catapang, Suya Francisco, Ma Theresa Borjal, Nicholas Camilon, Keana Marie Nacion, Kyle Patrick De Guzman & Princess May Poblete - 2023 - Philippine College of Criminology Research Journal 7:18-37.
    Despite being a women's problem for a long time, catcalling has recently attracted lawmakers' attention. In 2019, the Philippine government enacted Republic Act 11313, or the Safe Spaces Act, which prohibits and punishes gender-based sexual harassment. However, despite the existence of the law, catcalling continues to be rampant. This study aims to explore the experiences of women in Quezon City who have been subjected to catcalling and to provide answers regarding the effects of catcalling on the victims, the locations where (...)
  39. Descartes and the Danger of Irresolution. Shoshana Brassfield - 2013 - Essays in Philosophy 14 (2):162-178.
    Descartes's approach to practical judgments about what is beneficial or harmful, or what to pursue or avoid, is almost exactly the opposite of his approach to theoretical judgments about the true nature of things. Instead of the cautious skepticism for which Descartes is known, throughout his ethical writings he recommends developing the habit of making firm judgments and resolutely carrying them out, no matter how doubtful and uncertain they may be. Descartes, strikingly, takes irresolution to be the source of remorse (...)
    6 citations
  40. Challenges of AI for Promoting Sikhism in the 21st Century (Guest Editorial). Devinder Pal Singh - 2023 - The Sikh Review, Kolkata, WB, India 71 (09):6-8.
    Artificial Intelligence (AI) is a technology that enables machines or computer systems to perform tasks that usually require human intelligence. AI systems can understand and interpret information, make decisions, and solve problems based on patterns and data. They can also improve their performance over time by learning from their experiences. AI is used in various applications, such as enhancing knowledge and understanding, helping as voice assistants, aiding in image recognition, facilitating self-driving cars, and helping diagnose diseases. The appropriate usage of (...)
  41. THE IMPROVED MATHEMATICAL MODEL FOR CONTINUOUS FORECASTING OF THE EPIDEMIC. V. R. Manuraj - 2022 - Journal of Science Technology and Research (JSTAR) 3 (1):55-64.
    COVID-19 began in China in December 2019. As of January 2021, over a hundred million instances had been reported worldwide, leaving a deep socio-economic impact globally. Current investigation studies determined that artificial intelligence (AI) can play a key role in reducing the effect of the virus spread. The prediction of COVID-19 incidence in different countries and territories is important because it serves as a guide for governments, healthcare providers, and the general public in developing management strategies to battle the disease. (...)
  42. Uses and Abuses of AI Ethics. Lily E. Frank & Michal Klincewicz - forthcoming - In David J. Gunkel (ed.), Handbook of the Ethics of AI. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" And "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)
  43. The trustworthiness of AI: Comments on Simion and Kelp’s account.Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has a (...)
  44. Should we be afraid of AI?Luciano Floridi - 2019 - Aeon Magazine.
    Machines seem to be getting smarter and smarter and much better at human jobs, yet true AI is utterly implausible. This article explains the reasons why this is the case.
    7 citations
  45. The Future of AI: Stanisław Lem’s Philosophical Visions for AI and Cyber-Societies in Cyberiad.Roman Krzanowski & Pawel Polak - 2021 - Pro-Fil 22 (3):39-53.
    Looking into the future is always a risky endeavour, but one way to anticipate the possible future shape of AI-driven societies is to examine the visionary works of some sci-fi writers. Not all sci-fi works have such visionary quality, of course, but some of Stanisław Lem’s works certainly do. We refer here to Lem’s works that explore the frontiers of science and technology and those that describe imaginary societies of robots. We therefore examine Lem’s prose, with a focus on the (...)
  46. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  47. Augustine on the dangers of friendship.Tamer Nawar - 2015 - Classical Quarterly 65 (2):836-851.
    The philosophers of antiquity had much to say about the place of friendship in the good life and its role in helping us live virtuously. Augustine is unusual in giving substantial attention to the dangers of friendship and its potential to serve as an obstacle (rather than an aid) to virtue. Despite the originality of Augustine’s thought on this topic, this area of his thinking has received little attention. This paper will show how Augustine, especially in the early books (...)
    1 citation
  48. New developments in the philosophy of AI.Vincent C. Müller - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    11 citations
  49. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data and copyright, as well as ecological challenges, which the technology industry has consistently downplayed over the years. The editorial highlights the distinction between the current AI technology's reliance on extensive pre-existing (...)
  50. The Probability of a Global Catastrophe in the World with Exponentially Growing Technologies.Alexey Turchin & Justin Shovelain - manuscript
    This article presents a model of how the probability of global catastrophic risks changes in a world with exponentially evolving technologies. Increasingly cheap technologies become accessible to a larger number of agents, and the technologies become more capable of causing a global catastrophe. Examples of such dangerous technologies are artificial viruses constructed by means of synthetic biology, non-aligned AI and, to a lesser extent, nanotech and nuclear proliferation. The model shows at least double exponential growth (...)