Contents
76 found
Showing 1–50 of 76
  1. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  2. LLMs Can Never Be Ideally Rational. Simon Goldstein - manuscript
    LLMs have dramatically improved in capabilities in recent years. This raises the question of whether LLMs could become genuine agents with beliefs and desires. This paper demonstrates an in principle limit to LLM agency, based on their architecture. LLMs are next word predictors: given a string of text, they calculate the probability that various words can come next. LLMs produce outputs that reflect these probabilities. I show that next word predictors are exploitable. If LLMs are prompted to make probabilistic predictions (...)
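    A minimal sketch (an editorial illustration in Python, not a construction from Goldstein's paper) of one standard decision-theoretic sense of "exploitable": if a next-word predictor is prompted into reporting credences for two mutually exclusive and exhaustive outcomes that sum to more than 1, then buying a unit bet on each outcome at its own stated prices costs it more than the single payout it can receive, a guaranteed loss. The outcome names and credence values below are hypothetical.

    # Dutch-book sketch: incoherent credences guarantee a loss for the agent that holds them.
    # The numbers are hypothetical stand-ins for probabilities elicited from an LLM.
    model_credence = {"rain_tomorrow": 0.75, "no_rain_tomorrow": 0.5}  # sums to 1.25

    def guaranteed_loss(credences: dict[str, float], stake: float = 1.0) -> float:
        """The agent buys, at its own stated fair prices, a unit bet on each of two
        mutually exclusive and exhaustive outcomes. Exactly one bet pays out `stake`,
        so the agent loses the excess of total price over payout no matter what happens."""
        price_paid = sum(p * stake for p in credences.values())
        return price_paid - stake

    print(guaranteed_loss(model_credence))  # 0.25: a sure loss, whichever outcome occurs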
  3. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising way to AI safety: to send a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade “paperclip maximizer” that it is in (...)
  4. “Even an AI could do that”. Emanuele Arielli - forthcoming - Http://Manovich.Net/Index.Php/Projects/Artificial-Aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design", Lev Manovich and Emanuele Arielli. Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. You may be wondering how (...)
  5. AI-aesthetics and the artificial author. Emanuele Arielli - forthcoming - Proceedings of the European Society for Aesthetics.
    ABSTRACT. Consider this scenario: you discover that an artwork you greatly admire, or a captivating novel that deeply moved you, is in fact the product of artificial intelligence, not a human’s work. Would your aesthetic judgment shift? Would you perceive the work differently? If so, why? The advent of artificial intelligence (AI) in the realm of art has sparked numerous philosophical questions related to the authorship and artistic intent behind AI-generated works. This paper explores the debate between viewing AI as (...)
  6. Human Perception and The Artificial Gaze. Emanuele Arielli & Lev Manovich - forthcoming - In Emanuele Arielli & Lev Manovich (eds.), Artificial Aesthetics.
  7. Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  8. The fetish of artificial intelligence. In response to Iason Gabriel’s “Towards a Theory of Justice for Artificial Intelligence”. Albert Efimov - forthcoming - Philosophy Science.
    The article presents the grounds for defining the fetish of artificial intelligence (AI). The fundamental differences of AI from all previous technological innovations are highlighted, as primarily related to the introduction into the human cognitive sphere and fundamentally new uncontrolled consequences for society. Convincing arguments are presented that the leaders of the globalist project are the main beneficiaries of the AI fetish. This is clearly manifested in the works of philosophers close to big technology corporations and their mega-projects. It is (...)
  9. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability. Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  10. The Simulation Hypothesis, Social Knowledge, and a Meaningful Life. Grace Helton - forthcoming - Oxford Studies in Philosophy of Mind.
    (Draft of Feb 2023, see upcoming issue for Chalmers' reply) In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and, our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: The former claim helps forestall a concern that if objects in the simulation are (...)
    1 citation
  11. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  12. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  13. Artificial Intelligence: Arguments for Catastrophic Risk. Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    3 citations
  14. Can AI and humans genuinely communicate? Constant Bonard - 2024 - In Anna Strasser (ed.), Anna's AI Anthology. How to live with smart machines? Berlin: Xenomoi Verlag.
    Can AI and humans genuinely communicate? In this article, after giving some background and motivating my proposal (§1–3), I explore a way to answer this question that I call the ‘mental-behavioral methodology’ (§4–5). This methodology follows the following three steps: First, spell out what mental capacities are sufficient for human communication (as opposed to communication more generally). Second, spell out the experimental paradigms required to test whether a behavior exhibits these capacities. Third, apply or adapt these paradigms to test whether (...)
  15. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    2 citations
  16. Review of the panel discussion “Philosophical and Ethical Analysis of the Concepts of Death and Human Existence in the Context of Cybernetic Immortality”, Samara, 29.03.2024. Oleg Gurov - 2024 - Artificial Societies 19 (2).
    This publication constitutes a comprehensive account of the panel discussion entitled “Philosophical and Ethical Analysis of the Concepts of Death and Human Existence in the Context of Cybernetic Immortality” which transpired within the confines of the international scientific symposium “The Seventh Lemovsky Readings” held in Samara from March 28th to 30th, 2024. The aforementioned panel discussion, which congregated scores of erudite scholars representing preeminent research institutions across the Russian Federation, emerged as one of the cardinal events of the conference. Eminent (...)
  17. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes? Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  18. Chess AI does not know chess - The death of Type B strategy and its philosophical implications. Spyridon Kakos - 2024 - Harmonia Philosophica Articles.
    Playing chess is one of the first sectors of human thinking that were conquered by computers. From the historical win of Deep Blue against chess champion Garry Kasparov until today, computers have completely dominated the world of chess leaving no room for question as to who is the king in this sport. However, the better computers become in chess the more obvious their basic disadvantage becomes: Even though they can defeat any human in chess and play phenomenally great and intuitive (...)
  19. The marriage of astrology and AI: A model of alignment with human values and intentions. Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models have helped to quantify effective astrological information. As AI learning and intelligence grows, a major concern is with maintaining its alignment with human values and intentions. Astrology has (...)
  20. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
  21. Artificial Intelligence and an Anthropological Ethics of Work: Implications on the Social Teaching of the Church. Justin Nnaemeka Onyeukaziri - 2024 - Religions 15 (5):623.
    It is the contention of this paper that ethics of work ought to be anthropological, and artificial intelligence (AI) research and development, which is the focus of work today, should be anthropological, that is, human-centered. This paper discusses the philosophical and theological implications of the development of AI research on the intrinsic nature of work and the nature of the human person. AI research and the implications of its development and advancement, being a relatively new phenomenon, have not been comprehensively (...)
  22. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take (...)
  23. Are generics and negativity about social groups common on social media? A comparative analysis of Twitter (X) data. Uwe Peters & Ignacio Ojea Quintana - 2024 - Synthese 203 (6):1-22.
    Many philosophers hold that generics (i.e., unquantified generalizations) are pervasive in communication and that when they are about social groups, this may offend and polarize people because generics gloss over variations between individuals. Generics about social groups might be particularly common on Twitter (X). This remains unexplored, however. Using machine learning (ML) techniques, we therefore developed an automatic classifier for social generics, applied it to 1.1 million tweets about people, and analyzed the tweets. While it is often suggested that generics (...)
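    The paper's classifier is only described above in outline, so the following is a rough, hypothetical Python sketch of how a generics detector for short texts could be assembled from off-the-shelf components (TF-IDF features plus logistic regression). The labelled examples are invented; the study's actual model, features, and training data are not reproduced here.

    # Illustrative only: a tiny "generic vs. non-generic" sentence classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "Philosophers love abstract puzzles",             # generic: unquantified generalization
        "Nurses are compassionate",                       # generic
        "Some philosophers I met love abstract puzzles",  # non-generic: quantified
        "My nurse was compassionate yesterday",           # non-generic: about an individual
    ]
    labels = [1, 1, 0, 0]  # 1 = generic, 0 = non-generic

    classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(texts, labels)
    print(classifier.predict(["Teachers are underpaid"]))  # predicted label for an unseen sentence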
  24. Why Does AI Lie So Much? The Problem Is More Deep Rooted Than You Think. Mir H. S. Quadri - 2024 - Arkinfo Notes.
    The rapid advancements in artificial intelligence, particularly in natural language processing, have brought to light a critical challenge, i.e., the semantic grounding problem. This article explores the root causes of this issue, focusing on the limitations of connectionist models that dominate current AI research. By examining Noam Chomsky's theory of Universal Grammar and his critiques of connectionism, I highlight the fundamental differences between human language understanding and AI language generation. Introducing the concept of semantic grounding, I emphasise the need for (...)
  25. Intelligence, from Natural Origins to Artificial Frontiers - Human Intelligence vs. Artificial Intelligence. Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  26. On human centered artificial intelligence. [REVIEW] Gloria Andrada - 2023 - Metascience.
  27. What’s Stopping Us Achieving AGI? Albert Efimov - 2023 - Philosophy Now 3 (155):20-24.
    A. Efimov, D. Dubrovsky, and F. Matveev explore limitations on the development of AI presented by the need to understand language and be embodied.
  28. A MACRO-SHIFTED FUTURE: PREFERRED OR ACCIDENTALLY POSSIBLE IN THE CONTEXT OF ADVANCES IN ARTIFICIAL INTELLIGENCE SCIENCE AND TECHNOLOGY. Albert Efimov - 2023 - In Наука и феномен человека в эпоху цивилизационного Макросдвига. Moscow: pp. 748.
    This article is devoted to the topical aspects of the transformation of society, science, and man in the context of E. László’s work «Macroshift». The author offers his own attempt to consider the attributes of macroshift and then use these attributes to operationalize further analysis, highlighting three essential elements: the world has come to a situation of technological indistinguishability between the natural and the artificial, to machines that know everything about humans. Antiquity aspired to beauty and saw beauty in realistic (...)
  29. Explaining Go: Challenges in Achieving Explainability in AI Go Programs. Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  30. Chess and Antirealism. Samuel Kahn - 2023 - Asian Journal of Philosophy 2 (76):1-20.
    In this article, I make a novel argument for scientific antirealism. My argument is as follows: (1) the best human chess players would lose to the best computer chess programs; (2) if the best human chess players would lose to the best computer chess programs, then there is good reason to think that the best human chess players do not understand how to make winning moves; (3) if there is good reason to think that the best human chess players do (...)
  31. Humans in the meta-human era (Meta-philosophical analysis). Spyridon Kakos - 2023 - Harmonia Philosophica Papers.
    Humans are obsolete. In the post-ChatGPT era, artificial intelligence systems have replaced us in the last sectors of life that we thought were our personal kingdom. Yet, humans still have a place in this life. But they can find it only if they forget all those things that we believe make us unique. Only if we go back to doing nothing, can we truly be alive and meet our Self. Only if we stop thinking can we accept the Cosmos as (...)
  32. Action and Agency in Artificial Intelligence: A Philosophical Critique. Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and cognitive analysis is employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension employed in the language and science of artificial (...)
    1 citation
  33. Artificial Intelligence and Neuroscience Research: Theologico-Philosophical Implications for the Christian Notion of the Human Person. Justin Nnaemeka Onyeukaziri - 2023 - Maritain Studies/Etudes Maritainiennes 39:85-103.
    This paper explores the theological and philosophical implications of artificial intelligence (AI) and Neuroscience research on the Christian’s notion of the human person. The paschal mystery of Christ is the intuitive foundation of Christian anthropology. In the intellectual history of the Christianity, Platonism and Aristotelianism have been employed to articulate the Christian philosophical anthropology. The Aristotelian systematization has endured to this era. Since the modern period of the Western intellectual history, Aristotelianism has been supplanted by the positive sciences as the (...)
    1 citation
  34. AI-aesthetics and the Anthropocentric Myth of Creativity. Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
    2 citations
  35. Interdisciplinary Communication by Plausible Analogies: the Case of Buddhism and Artificial Intelligence. Michael Cooper - 2022 - Dissertation, University of South Florida
    Communicating interdisciplinary information is difficult, even when two fields are ostensibly discussing the same topic. In this work, I’ll discuss the capacity for analogical reasoning to provide a framework for developing novel judgments utilizing similarities in separate domains. I argue that analogies are best modeled after Paul Bartha’s By Parallel Reasoning, and that they can be used to create a Toulmin-style warrant that expresses a generalization. I argue that these comparisons provide insights into interdisciplinary research. In order to demonstrate this (...)
  36. Interprétabilité et explicabilité de phénomènes prédits par de l’apprentissage machine. Christophe Denis & Franck Varenne - 2022 - Revue Ouverte d'Intelligence Artificielle 3 (3-4):287-310.
    The lack of explainability of machine learning (ML) techniques raises operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application treated as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modeling of a phenomenon (...)
  37. Artificial Intelligence and the Notions of the “Natural” and the “Artificial.” Justin Nnaemeka Onyeukaziri - 2022 - Journal of Data Analysis 17 (4):101-116.
    This paper argues that to negate the ontological difference between the natural and the artificial, is not plausible; nor is the reduction of the natural to the artificial or vice versa possible. Except if one intends to empty the semantic content of the terms and notions: “natural” and “artificial.” Most philosophical discussions on Artificial Intelligence (AI) have always been in relation to the human person, especially as it relates to human intelligence, consciousness and/or mind in general. This paper, intends to (...)
  38. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
    1 citation
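    As a purely hypothetical illustration of the kind of test described above (can an efficiency score be predicted from proposal data?), one can fit a regression model to proposal features and measure its out-of-sample accuracy with cross-validation. The features, target, and model below are invented stand-ins in Python, not the data or method of Sikimić and Radovanović.

    # Illustrative only: is a synthetic "efficiency" score predictable from proposal features?
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_projects = 200
    X = rng.normal(size=(n_projects, 4))  # stand-ins for, e.g., team size, budget, duration, prior output
    y = 0.5 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n_projects)  # synthetic efficiency score

    scores = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                             X, y, cv=5, scoring="r2")
    print(scores.mean())  # out-of-sample R^2 well above 0 means the score is (partly) predictable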
  39. (1 other version) Walking Through The Turing Wall. Albert Efimov - 2021 - IFAC Papers Online 54 (13):215-220.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  40. From Specialized to Hyper-Specialized Labour: Future Labor Markets as Helmed by Advanced Computer Intelligence. Tyler Jaynes - 2021 - In Pritika Nehra (ed.), Loneliness and the Crisis of Work. Newcastle upon Tyne, UK: Cambridge Scholars Publishing. pp. 159-175.
    With the transition of the pandemic-gripped labor market en masse to remote capabilities to avert from a national or international economic meltdown, a concern arises that many job seekers simply cannot fit into the new roles being developed and implemented. Beyond the loss of on-site work, the market is unable to reverse the loss of many roles that are, and have been, taken over by artificial (computer) intelligence systems. The “business-as-usual” mentality that many have come to associate with pre-pandemic life (...)
    1 citation
  41. (1 other version) Kantian Notion of freedom and Autonomy of Artificial Agency. Manas Kumar Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom (especially the problem of the third antinomy and its resolution in the critique of pure reason); its significance in the contemporary debate on free-will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, invites an explanatory gap (...)
  42. Performance vs. competence in human–machine comparisons. Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences 41.
    Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world like we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, vs. when such failures (...)
    9 citations
  43. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
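    A small worked example of the verification asymmetry mentioned in the abstract above (solutions to NP problems can be checked in polynomial time even when finding them may be hard), using SAT as the standard case. The toy formula and certificate below are made up for illustration; this Python sketch is not drawn from the chapter.

    # Verifying an NP certificate in polynomial time, with SAT as the example.
    # Formula (x1 OR NOT x2) AND (x2 OR x3), encoded as clauses of (variable, required value) pairs.
    formula = [[(1, True), (2, False)], [(2, True), (3, True)]]

    def verify(assignment: dict[int, bool], cnf) -> bool:
        """A clause is satisfied if at least one of its literals gets the required value;
        checking every clause is linear in the number of literals."""
        return all(any(assignment[var] == val for var, val in clause) for clause in cnf)

    certificate = {1: True, 2: False, 3: True}
    print(verify(certificate, formula))  # True: the certificate is checked quickly,
                                         # though finding one may require exponential search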
  44. パラコンシステント、決定不能、ランダム、計算可能、不完全とはどういう意味ですか? 「ゴーデルの方法:決定不可能な世界への冒険」のレビュー (Godel's Way: exploits into an Undecidable World) by A. da Costa 160p (2012) (2019年のレビュー改訂). Michael Richard Starks - 2020 - In 地獄へようこそ : 赤ちゃん、気候変動、ビットコイン、カルテル、中国、民主主義、多様性、ディスジェニックス、平等、ハッカー、人権、イスラム教、自由主義、繁栄、ウェブ、カオス、飢餓、病気、暴力、人工知能、戦争. Las Vegas, NV USA: Reality Press. pp. 158-171.
    In "Godel's Way" three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these issues from the Wittgensteinian viewpoint that there are two basic questions with completely different solutions. There are the scientific or empirical questions, which are facts about the world that need to be investigated observationally, and philosophical questions about how language can be used intelligibly (which include certain questions in mathematics and logic), which need to be decided by looking at how we actually use words in particular contexts. When we get clear about which language game we are playing, these topics are seen to be ordinary scientific and mathematical questions like any others. Wittgenstein's insights have seldom been equaled and never surpassed, and they are as pertinent today as they were 80 years ago when he dictated the Blue and Brown Books. In spite of its failings (really a series of notes rather than a finished book), this is a unique source of the work of these three famous scholars who have been working at the bleeding edges of physics, mathematics and philosophy for over half a century. Da Costa and Doria are cited by Wolpert (see below or my articles, and my review of Wolpert and Yanofsky's "The Outer Limits of Reason") since they wrote on universal computation, and among his many accomplishments Da Costa is a pioneer of paraconsistency. Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book "The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle" 2nd ed (2019). Those interested in more of my writings may see "Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2019" 3rd ed (2019), "Suicidal Utopian Delusions in the 21st Century" 4th ed (2019), and others.
  45. Was bedeuten Parakonsistente, Unentscheidbar, Zufällig, Berechenbar und Unvollständige? Eine Rezension von „Godels Weg: Exploits in eine unentscheidbare Welt“ (Godels Way: Exploits into a unecidable world) von Gregory Chaitin, Francisco A Doria, Newton C.A. da Costa 160p (2012). Michael Richard Starks - 2020 - In Willkommen in der Hölle auf Erden: Babys, Klimawandel, Bitcoin, Kartelle, China, Demokratie, Vielfalt, Dysgenie, Gleichheit, Hacker, Menschenrechte, Islam, Liberalismus, Wohlstand, Internet, Chaos, Hunger, Krankheit, Gewalt, Künstliche Intelligenz, Krieg. Reality Press. pp. 171-185.
    In "Godel's Way" three renowned scientists discuss topics such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these questions from the Wittgensteinian view that there are two basic issues which have completely different solutions. There are the scientific or empirical questions, which are facts about the world that need to be investigated observationally, and philosophical questions about how language can be used intelligibly (which include certain questions in mathematics and logic), which have to be decided by looking at how we use words in particular (...)
  46. (1 other version) Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what (...)
  47. Wolpert, Chaitin and Wittgenstein 不可能性、不完全性、嘘つきパラドックス、無神論、計算の限界、非量子力学的不確実性原理、そしてコンピューターとしての宇宙-チューリング機械理論の究極の定理 (2019年改訂レビュー). Michael Richard Starks - 2020 - In 地獄へようこそ : 赤ちゃん、気候変動、ビットコイン、カルテル、中国、民主主義、多様性、ディスジェニックス、平等、ハッカー、人権、イスラム教、自由主義、繁栄、ウェブ、カオス、飢餓、病気、暴力、人工知能、戦争. Las Vegas, NV USA: Reality Press. pp. 173-177.
    I have read many recent discussions of the limits of computation and of the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008, see arxiv.org) on the limits of inference (computation) that are so general they are independent of the device doing the computation and even of the laws of physics, so they apply to computers, physics and human behavior. They make use of Cantor's diagonalization, the liar paradox and worldlines to provide what may be the ultimate theorem in Turing Machine Theory, and they offer insights into impossibility, incompleteness, the limits of computation and the universe as computer, in all possible universes and for all beings or mechanisms, generating, among other things, a non-quantum-mechanical uncertainty principle and a proof of monotheism. There are obvious connections to the classic work of Chaitin, Solomonoff, Kolmogorov and Wittgenstein and to the notion that no program (and thus no device) can generate a sequence (or device) with greater complexity than it possesses. One might say this body of work implies atheism, since there cannot be any entity more complex than the physical universe, and from the Wittgensteinian viewpoint "more complex" is meaningless (has no conditions of satisfaction, i.e., no truth-maker or test). Even a "God" (i.e., a "device" with limitless time/space and energy) can neither determine whether a given "number" is "random" nor find a certain way to show that a given "formula", "theorem", "sentence" or "device" (all of these being complex language games) is part of a particular "system". Those wishing a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book "The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle" 2nd ed (2019). Those interested in more of my writings may see "Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics" 4th ed (2019), and others.
  48. (1 other version) Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019). Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what (...)
  49. 인간이나 안드로이드가 지구를 파괴 할 것인가? — '마음 만드는 법'의 검토 (How to Create a Mind) Ray Kurzweil (2010). Michael Richard Starks - 2020 - In 지구상의 지옥에 오신 것을 환영합니다 : 아기, 기후 변화, 비트 코인, 카르텔, 중국, 민주주의, 다양성, 역학, 평등, 해커, 인권, 이슬람, 자유주의, 번영, 웹, 혼돈, 기아, 질병, 폭력, 인공 지능, 전쟁. Las Vegas, NV USA: Reality Press. pp. 172-186.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what those facts mean. The clear distinctions which Wittgenstein described some 80 years ago between scientific matters and their descriptions by various language games are rarely taken into consideration, and so (...)
  50. Wolpert, Chaitin et Wittgenstein sur l’impossibilité, l’incomplétude, le paradoxe menteur, le théisme, les limites du calcul, un principe d’incertitude mécanique non quantique et l’univers comme ordinateur, le théorème ultime dans Turing Machine Theory (révisé 2019). Michael Richard Starks - 2020 - In Bienvenue en Enfer sur Terre : Bébés, Changement climatique, Bitcoin, Cartels, Chine, Démocratie, Diversité, Dysgénique, Égalité, Pirates informatiques, Droits de l'homme, Islam, Libéralisme, Prospérité, Le Web, Chaos, Famine, Maladie, Violence, Intellige. Las Vegas, NV USA: Reality Press. pp. 185-189.
    I have read many recent discussions of the limits of computation and of the universe as computer, hoping to find some comments on the amazing work of polymath physicist and decision theorist David Wolpert, but have not found a single citation, and so I present this very brief summary. Wolpert proved some stunning impossibility or incompleteness theorems (1992 to 2008, see arxiv dot org) on the limits of inference (computation) that are so general they are independent of (...)