Contents
90 found
1 — 50 / 90
  1. Revised: From Color, to Consciousness, toward Strong AI. Xinyuan Gu - manuscript
    This article cohesively discusses three topics, namely color and its perception, the yet-to-be-solved hard problem of consciousness, and the theoretical possibility of strong AI. First, the article restores color to the physical world by giving cross-species evidence. Second, the article proposes a dual-field with function Q hypothesis (DFFQ) which might explain the ‘first-person point of view’ and so the hard problem of consciousness. Finally, the article discusses what DFFQ might bring to artificial intelligence and how it might allow strong (...)
  2. The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans. Griffin Pithie - manuscript
    The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI, along with how it is in the best interest of a superintelligent AI to work alongside humans rather than overpower them. Future papers dealing with the good-will theory will be published, discussing different talking points regarding possible or real objections to the theory.
  3. Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought. Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  4. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains. Matthieu Queloz - manuscript
    A key assumption fuelling optimism about the progress of Large Language Models (LLMs) in modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but cohesive, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and cohesiveness promise to facilitate progress (...)
  5. Probable General Intelligence algorithm. Anton Venglovskiy - manuscript
    This manuscript contains a description of a generalized and constructive formal model for the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  6. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  7. Explicit Legg-Hutter intelligence calculations which suggest non-Archimedean intelligence. Samuel Allen Alexander & Arthur Paul Pedersen - forthcoming - Lecture Notes in Computer Science.
    Are the real numbers rich enough to measure intelligence? We generalize a result of Alexander and Hutter about the so-called Legg-Hutter intelligence measures of reinforcement learning agents. Using the generalized result, we exhibit a paradox: in one particular version of the Legg-Hutter intelligence measure, certain agents all have intelligence 0, even though in a certain sense some of them outperform others. We show that this paradox disappears if we vary the Legg-Hutter intelligence measure to be hyperreal-valued rather than real-valued.
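    For orientation, a sketch of the standard real-valued Legg-Hutter measure (background I am supplying; the paper's generalization differs in details not shown here): a policy \pi is scored by a complexity-weighted sum of its expected total rewards over all computable environments,

        \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi ,

    where E is the class of computable environments, K(\mu) is the Kolmogorov complexity of \mu, and V_\mu^\pi is \pi's expected total reward in \mu. The paradox above concerns a version of this measure under which certain agents all score 0; letting \Upsilon take hyperreal rather than real values allows infinitesimal but nonzero differences to separate them.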
  8. “Even an AI could do that”. Emanuele Arielli - forthcoming - Http://Manovich.Net/Index.Php/Projects/Artificial-Aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design" by Lev Manovich and Emanuele Arielli. Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. You may be wondering how (...)
  9. Experience replay algorithms and the function of episodic memory. Alexandria Boyle - forthcoming - In Lynn Nadel & Sara Aronowitz (eds.), Space, Time, and Memory. Oxford University Press.
    Episodic memory is memory for past events. It’s characteristically associated with an experience of ‘mentally replaying’ one’s experiences in the mind’s eye. This biological phenomenon has inspired the development of several ‘experience replay’ algorithms in AI. In this chapter, I ask whether experience replay algorithms might shed light on a puzzle about episodic memory’s function: what does episodic memory contribute to the cognitive systems in which it is found? I argue that experience replay algorithms can serve as idealized models of (...)
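    The replay algorithms mentioned in the abstract store past transitions and re-present them to the learner in random mini-batches. A minimal sketch, assuming a generic reinforcement-learning setting (all names and parameters here are illustrative, not from the chapter):

        import random
        from collections import deque

        class ReplayBuffer:
            """Store transitions as they are experienced; replay them later."""
            def __init__(self, capacity=10_000):
                self.memory = deque(maxlen=capacity)  # oldest experiences are dropped first

            def store(self, state, action, reward, next_state, done):
                self.memory.append((state, action, reward, next_state, done))

            def sample(self, batch_size):
                # Uniform random replay; prioritized variants instead weight
                # transitions by how much can still be learned from them.
                return random.sample(self.memory, min(batch_size, len(self.memory)))

        buffer = ReplayBuffer()
        for t in range(100):                       # toy interaction loop
            buffer.store(t, t % 4, 1.0, t + 1, False)
        batch = buffer.sample(32)                  # 'replayed' experiences drive offline learning updates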
  10. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  11. The Simulation Hypothesis, Social Knowledge, and a Meaningful Life. Grace Helton - forthcoming - Oxford Studies in Philosophy of Mind.
    (Draft of Feb 2023, see upcoming issue for Chalmers' reply) In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and, our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: The former claim helps forestall a concern that if objects in the simulation are (...)
    1 citation
  12. Making AI Intelligible: Philosophical Foundations. By Herman Cappelen and Josh Dever. [REVIEW] Nikhil Mahant - forthcoming - Philosophical Quarterly.
    Linguistic outputs generated by modern machine-learning neural net AI systems seem to have the same contents—i.e., meaning, semantic value, etc.—as the corresponding human-generated utterances and texts. Building upon this essential premise, Herman Cappelen and Josh Dever's Making AI Intelligible sets for itself the task of addressing the question of how AI-generated outputs have the contents that they seem to have (henceforth, ‘the question of AI Content’). In pursuing this ambitious task, the book makes several high-level, framework observations about how a (...)
  13. Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    48 citations
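    One common concrete form of such simplified approximating models is a global surrogate: an interpretable model fitted to the black box's predictions rather than to the original labels. A minimal sketch, assuming scikit-learn and toy data (my illustration, not the paper's method):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        X = rng.random((500, 4))                        # toy inputs
        y = (X[:, 0] + X[:, 1] > 1).astype(int)         # toy ground truth
        black_box = RandomForestClassifier().fit(X, y)  # stands in for an opaque system

        surrogate = DecisionTreeClassifier(max_depth=3)
        surrogate.fit(X, black_box.predict(X))          # imitate the black box, not the data

        print(export_text(surrogate))                   # human-readable decision rules
        fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
        print(f"fidelity to black box: {fidelity:.1%}") # Box's maxim: wrong, but useful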
  14. Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  15. Transparencia, explicabilidad y confianza en los sistemas de aprendizaje automático. Andrés Páez - forthcoming - In Juan David Gutiérrez & Rubén Francisco Manrique (eds.), Más allá del algoritmo: oportunidades, retos y ética de la Inteligencia Artificial. Bogotá: Ediciones Uniandes.
    One of the ethical principles most frequently mentioned in guidelines for the development of artificial intelligence (AI) is algorithmic transparency. However, there is no standard definition of what a transparent algorithm is, nor is it evident why algorithmic opacity represents a challenge for the ethical development of AI. It is also often claimed that algorithmic transparency fosters trust in AI, but this claim is more an a priori assumption than a (...)
  16. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  17. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
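    As background on the sampling worry (a standard heuristic I am adding, not the paper's own analysis): to estimate a population proportion \hat{p} within margin of error e at a confidence level with z-score z, one needs roughly

        n \;\geq\; \frac{z^2 \, \hat{p}(1 - \hat{p})}{e^2} ;

    with \hat{p} = 0.5, z = 1.96 (95% confidence), and e = 0.1, this already requires n \geq 97, a useful benchmark when judging the sample sizes user studies report.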
  18. Chinese Chat Room: AI hallucinations, epistemology and cognition. Kristina Šekrst - forthcoming - Studies in Logic, Grammar and Rhetoric.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with these terminological issues, the paper demonstrates that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  19. Some resonances between Eastern thought and Integral Biomathics in the framework of the WLIMES formalism for modelling living systems. Plamen L. Simeonov & Andree C. Ehresmann - forthcoming - Progress in Biophysics and Molecular Biology 131 (Special).
    Forty-two years ago, Capra published “The Tao of Physics” (Capra, 1975). In this book (page 17) he writes: “The exploration of the atomic and subatomic world in the twentieth century has … necessitated a radical revision of many of our basic concepts” and that, unlike ‘classical’ physics, the sub-atomic and quantum “modern physics” shows resonances with Eastern thoughts and “leads us to a view of the world which is very similar to the views held by mystics of all ages and (...)
    1 citation
  20. Variable Value Alignment by Design; averting risks with robot religion. Jeffrey White - forthcoming - Embodied Intelligence 2023.
    One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, to realize ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different values as ideal, (...)
  21. Unveiling the Creation of AI-Generated Artworks: Broadening Worringerian Abstraction and Empathy Beyond Contemplation. Leonardo Arriagada - 2024 - Estudios Artísticos 10 (16):142-158.
    In his groundbreaking work, Abstraction and Empathy, Wilhelm Worringer delved into the intricacies of various abstract and figurative artworks, contending that they evoke distinct impulses in the human audience—specifically, the urges towards abstraction and empathy. This article asserts the presence of empirical evidence supporting the extension of Worringer’s concepts beyond the realm of art appreciation to the domain of art-making. Consequently, it posits that abstraction and empathy serve as foundational principles guiding the production of both abstract and figurative art. This (...)
  22. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact. Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
    4 citations
  23. Synthetic Media Detection, the Wheel, and the Burden of Proof. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (4):1-20.
    Deepfakes and other forms of synthetic media are widely regarded as serious threats to our knowledge of the world. Various technological responses to these threats have been proposed. The reactive approach proposes to use artificial intelligence to identify synthetic media. The proactive approach proposes to use blockchain and related technologies to create immutable records of verified media content. I argue that both approaches, but especially the reactive approach, are vulnerable to a problem analogous to the ancient problem of the criterion—a (...)
    1 citation
  24. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes? Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  25. (1 other version) Năm yếu tố tiền đề của tương tác giữa người và máy trong kỷ nguyên trí tuệ nhân tạo. Manh-Tung Ho & T. Hong-Kong Nguyen - 2024 - Tạp Chí Thông Tin Và Truyền Thông 4 (4/2024):84-91.
    This article introduces five foundational premises with the aim of raising awareness of the human-machine relationship in a context where technology increasingly reshapes everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; the (...)
  26. Understanding with Toy Surrogate Models in Machine Learning. Andrés Páez - 2024 - Minds and Machines 34 (4):45.
    In the natural and social sciences, it is common to use toy models—extremely simple and highly idealized representations—to understand complex phenomena. Some of the simple surrogate models used to understand opaque machine learning (ML) models, such as rule lists and sparse decision trees, bear some resemblance to scientific toy models. They allow non-experts to understand how an opaque ML model works globally via a much simpler model that highlights the most relevant features of the input space and their effect on (...)
  27. Inteligența, de la originile naturale la frontierele artificiale - Inteligența Umană vs. Inteligența Artificială. Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving along parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  28. Intelligence, from Natural Origins to Artificial Frontiers - Human Intelligence vs. Artificial Intelligence. Nicolae Sfetcu - 2024 - Bucharest, Romania: MultiMedia Publishing.
    The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving in parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged (...)
  29. Consilience and AI as technological prostheses. Jeffrey B. White - 2024 - AI and Society 39 (5):1-3.
    Edward Wilson wrote in Consilience that “Human history can be viewed through the lens of ecology as the accumulation of environmental prostheses” (1999, p. 316), with technologies mediating our collective habitation of the Earth and its complex, interdependent ecosystems. Wilson emphasized the defining characteristic of complex systems, that they undergo transformations which are irreversible. His view is now standard, and his central point bears repeated emphasis today: natural systems can be broken, species—including us—can disappear, ecosystems can fail, and technological prostheses (...)
  30. Universal Agent Mixtures and the Geometry of Intelligence. Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
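    In symbols (notation mine, reconstructed from the abstract): given agents \pi_1, \dots, \pi_n and weights w_i \geq 0 with \sum_i w_i = 1, the weighted mixture \pi_{\mathrm{mix}} is an agent satisfying, for every environment \mu,

        V_\mu^{\pi_{\mathrm{mix}}} = \sum_{i=1}^{n} w_i \, V_\mu^{\pi_i} .

    Consequently, any intelligence measure that is itself a weighted sum of per-environment values, \Upsilon(\pi) = \sum_\mu c_\mu V_\mu^\pi, is linear over such mixtures: \Upsilon(\pi_{\mathrm{mix}}) = \sum_i w_i \, \Upsilon(\pi_i).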
  31. Explaining Go: Challenges in Achieving Explainability in AI Go Programs. Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  32. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
    14 citations
  33. Provocări în inteligența artificială. Nicolae Sfetcu - 2023 - IT and C 2 (3):3-10.
    Artificial intelligence is a transformative field that has captured the attention of scientists, engineers, businesses, and governments around the world. As we advance further into the 21st century, several prominent trends have emerged in AI. Artificial intelligence and machine learning technology are used in most of the essential applications of the 2020s. Proposals for controlling the capabilities of artificial intelligence, also referred to more restrictively as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including (...)
  34. How far can we get in creating a digital replica of a philosopher? Anna Strasser, Eric Schwitzgebel & Matthew Crosby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy 2022. IOS PRESS. pp. 371-380.
    Can we build machines with which we can have interesting conversations? Observing the new optimism in AI regarding deep learning and new language models, we set ourselves an ambitious goal: we want to find out how far we can get in creating a digital replica of a philosopher. This project has two aims: one more technical, investigating how the best model can be built; the other, more philosophical, exploring the limits and risks that accompany the creation (...)
  35. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance. Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
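    The training-time penalty can be pictured as reward shaping. A hypothetical sketch based only on the abstract (mask_player, the toy policy, and the penalty value are my inventions):

        def mask_player(obs):
            # Hypothetical helper: the same observation with the pseudo-visible
            # player removed, as in the simulated 'blind' rollout.
            return {**obs, "player_pos": None}

        def shaped_reward(policy, obs, base_reward, penalty=1.0):
            action = policy(obs)                       # what the NPC actually does
            counterfactual = policy(mask_player(obs))  # what it would do if the player were invisible
            # Acting differently betrays that the NPC 'noticed' the player;
            # it pays the penalty unless the reaction is worth the tradeoff.
            return base_reward - penalty if action != counterfactual else base_reward

        npc = lambda obs: "chase" if obs["player_pos"] else "patrol"        # toy NPC policy
        print(shaped_reward(npc, {"player_pos": (3, 4)}, base_reward=0.0))  # -1.0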
  36. AI-aesthetics and the Anthropocentric Myth of Creativity. Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
    2 citations
  37. (1 other version) The Bias Dilemma: The Ethics of Algorithmic Bias in Natural-Language Processing. Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3).
    Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this dilemma provides for a (...)
  38. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
    2 citations
  39. A plea for integrated empirical and philosophical research on the impacts of feminized AI workers. Hannah Read, Javier Gomez-Lavin, Andrea Beltrama & Lisa Miracchi Titus - 2022 - Analysis 999 (1):89-97.
    Feminist philosophers have long emphasized the ways in which women’s oppression takes a variety of forms depending on complex combinations of factors. These include women’s objectification, dehumanization and unjust gendered divisions of labour caused in part by sexist ideologies regarding women’s social role. This paper argues that feminized artificial intelligence (feminized AI) poses new and important challenges to these perennial feminist philosophical issues. Despite the recent surge in theoretical and empirical attention paid to the ethics of AI in general, a (...)
  40. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
    2 citations
  41. Metaphysics, Meaning, and Morality: A Theological Reflection on A.I. Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):157-181.
    Theologians often reflect on the ethical uses and impacts of artificial intelligence, but when it comes to artificial intelligence techniques themselves, some have questioned whether much exists to discuss in the first place. If the significance of computational operations is attributed rather than intrinsic, what are we to say about them? Ancient thinkers—namely Augustine of Hippo (lived 354–430)—break the impasse, enabling us to draw forth the moral and metaphysical significance of current developments like the “deep neural networks” that are responsible (...)
    1 citation
  42. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems. Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the notion of the (...)
    1 citation
  43. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure. Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    1 citation
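    Rendered schematically (notation mine; the paper's definitions differ in detail): the first idea scores an agent \pi by the runtimes of the adversaries it defeats,

        \Upsilon(\pi) = \sup \{\, g(t_o) : \pi \text{ defeats opponent } o \,\},

    where t_o is the runtime function of opponent o and g maps runtime functions to growth rates; Hibbard's 2011 measure then corresponds to one particular (somewhat arbitrary) choice of g.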
  44. Measuring the intelligence of an idealized mechanical knowing agent. Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes (...)
    3 citations
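    Schematically (my rendering, assuming the definition continues as the truncated abstract suggests): the intelligence level of a knowing agent A is

        \|A\| = \sup \{\, \alpha : \alpha \text{ is a computable ordinal with a code } n \text{ such that } A \text{ knows } n \text{ codes a computable ordinal} \,\},

    a supremum that is bounded by the Church-Kleene ordinal \omega_1^{CK}, the least non-computable ordinal.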
  45. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence. Samuel Alexander - 2020 - CIFMA.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
    1 citation
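    In symbols (my rendering): writing E_A for the enumeration the AGI A would produce under the command quoted in the abstract, the proposal is the counterfactual biconditional

        A \text{ knows } \varphi \;\iff\; \varphi \in E_A \qquad (\varphi \text{ an EA-sentence}),

    identifying the agent's knowledge with the output of a single hypothetical enumeration rather than with any internal state.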
  46. Self-referential theories. Samuel A. Alexander - 2020 - Journal of Symbolic Logic 85 (4):1687-1716.
    We study the structure of families of theories in the language of arithmetic extended to allow these families to refer to one another and to themselves. If a theory contains schemata expressing its own truth and expressing a specific Turing index for itself, and contains some other mild axioms, then that theory is untrue. We exhibit some families of true self-referential theories that barely avoid this forbidden pattern.
    1 citation
  47. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
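    The classical property being generalized (standard background; the paper's generalized, non-numeric version is not shown): an ordered structure is Archimedean when

        \forall x, y > 0 \;\; \exists n \in \mathbb{N} : \; \underbrace{x + \dots + x}_{n \text{ times}} > y.

    The hyperreals fail this: no finite multiple of an infinitesimal \varepsilon exceeds 1. A task whose rewards are lexically ordered, where any amount of reward A outweighs every amount of reward B, is therefore non-Archimedean, and no real-valued reward signal can represent it faithfully.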
  48. (1 other version) CG-Art. Una discusión estética sobre la relación entre creatividad artística y computación. Leonardo Arriagada - 2020 - In Jorge Mauricio Molina Mejía, Pablo Valdivia Martin & René Alejandro Venegas Velásquez (eds.), Actas III Congreso Internacional de Lingüística Computacional y de Corpus - CILCC 2020. Universidad de Antioquía y University of Groningen. pp. 261-264.
    In the era of artificial intelligence (AI), many have asked whether a machine can create art. In this regard, the cognitive researcher Margaret Boden (2011) has defined a special type of art by relating the concepts of "creativity" and "computation". Thus, computer-generated art is art where "the artwork results from some computer program being left to run by itself, with minimal or zero interference from a human being" (p. 141). One of the (...)
  49. A Critical Reflection on Automated Science: Will Science Remain Human? Marta Bertolaso & Fabio Sterpetti (eds.) - 2020 - Cham: Springer.
    This book provides a critical reflection on automated science and addresses the question whether the computational tools we developed in last decades are changing the way we humans do science. More concretely: Can machines replace scientists in crucial aspects of scientific practice? The contributors to this book rethink and refine some of the main concepts by which science is understood, drawing a fascinating picture of the developments we expect over the next decades of human-machine co-evolution. The volume covers examples from (...)
  50. WG-A: A Framework for Exploring Analogical Generalization and Argumentation. Michael Cooper, Lindsey Fields, Marc Gabriel Badilla & John Licato - 2020 - CogSci 2020.
    Reasoning about analogical arguments is known to be subject to a variety of cognitive biases, as well as to a lack of clarity about which factors can be considered strengths or weaknesses of an analogical argument. This can make it difficult both to design empirical experiments to study how people reason about analogical arguments, and to develop scalable tutoring tools for teaching how to reason about and analyze analogical arguments. To address these concerns, we describe WG-A (Warrant Game — Analogy), a framework for people (...)