  • Analyzing Machine‐Learned Representations: A Natural Language Case Study. Ishita Dasgupta, Demi Guo, Samuel J. Gershman & Noah D. Goodman - 2020 - Cognitive Science 44 (12):e12925.
    As modern deep networks become more complex, and get closer to human‐like capabilities in certain domains, the question arises as to how the representations and decision rules they learn compare to the ones in humans. In this work, we study representations of sentences in one such artificial system for natural language processing. We first present a diagnostic test dataset to examine the degree of abstract composable structure represented. Analyzing performance on these diagnostic tests indicates a lack of systematicity in representations (...)
  • Editors’ Review and Introduction: Levels of Explanation in Cognitive Science: From Molecules to Culture. Matteo Colombo & Markus Knauff - 2020 - Topics in Cognitive Science 12 (4):1224-1240.
    Cognitive science began as a multidisciplinary endeavor to understand how the mind works. Since the beginning, cognitive scientists have been asking questions about the right methodologies and levels of explanation to pursue this goal, and make cognitive science a coherent science of the mind. Key questions include: Is there a privileged level of explanation in cognitive science? How do different levels of explanation fit together, or relate to one another? How should explanations at one level inform or constrain explanations at (...)
  • Empiricism in the foundations of cognition. Timothy Childers, Juraj Hvorecký & Ondrej Majer - 2023 - AI and Society 38 (1):67-87.
    This paper traces the empiricist program from early debates between nativism and behaviorism within philosophy, through debates about early connectionist approaches within the cognitive sciences, and up to their recent iterations within the domain of deep learning. We demonstrate how current debates on the nature of cognition via deep network architecture echo some of the core issues from the Chomsky/Quine debate and investigate the strength of support offered by these various lines of research to the empiricist standpoint. Referencing literature from (...)
  • Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. Cameron Buckner - 2018 - Synthese (12):1-34.
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to (...)
  • Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • The Emperor Is Naked: Replies to commentaries on the target article. Jelle Bruineberg, Krzysztof Dołęga, Joe Dewhurst & Manuel Baltieri - 2022 - Behavioral and Brain Sciences 45:e219.
    The 35 commentaries cover a wide range of topics and take many different stances on the issues explored by the target article. We have organised our response to the commentaries around three central questions: Are Friston blankets just Pearl blankets? What ontological and metaphysical commitments are implied by the use of Friston blankets? What kind of explanatory work are Friston blankets capable of? We conclude our reply with a short critical reflection on the indiscriminate use of both Markov blankets and (...)
  • Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems. Seth D. Baum - 2020 - Philosophy and Technology 34 (1):45-63.
    This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. The challenges include epistemic divides between disciplines, the massive bodies of (...)
  • Towards a Benchmark for Scientific Understanding in Humans and Machines. Kristian Gonzalez Barman, Sascha Caron, Tom Claassen & Henk de Regt - 2024 - Minds and Machines 34 (1):1-16.
    Scientific understanding is a fundamental goal of science. However, there is currently no good way to measure the scientific understanding of agents, whether these be humans or Artificial Intelligence systems. Without a clear benchmark, it is challenging to evaluate and compare different levels of scientific understanding. In this paper, we propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science. We adopt a behavioral conception of understanding, according to which genuine understanding should be recognized (...)
  • Noise induced hearing loss: Building an application using the ANGELIC methodology. Latifa Al-Abdulkarim, Katie Atkinson, Trevor Bench-Capon, Stuart Whittle, Rob Williams & Catriona Wolfenden - 2018 - Argument and Computation 10 (1):5-22.
  • Face Recognition Depends on Specialized Mechanisms Tuned to View‐Invariant Facial Features: Insights from Deep Neural Networks Optimized for Face or Object Recognition. Naphtali Abudarham, Idan Grosbard & Galit Yovel - 2021 - Cognitive Science 45 (9):e13031.
    Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain‐inspired algorithms that have recently reached human‐level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human‐like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us now to ask whether DCNNs rely on the same facial information and whether this human‐like representation depends on (...)
  • The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
  • From Reflex to Reflection: Two Tricks AI Could Learn from Us. Jean-Louis Dessalles - 2019 - Philosophies 4 (2):27.
    Deep learning and other similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch, without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here, in the (...)
  • Why Machines Will Never Rule the World: Artificial Intelligence without Fear. Jobst Landgrebe & Barry Smith - 2022 - Abingdon, England: Routledge.
    The book’s core argument is that an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim: Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer. In supporting their claim, the authors, Jobst Landgrebe and Barry Smith, marshal evidence (...)
  • Philosophy and theory of artificial intelligence 2017. Vincent C. Müller (ed.) - 2017 - Berlin: Springer.
    This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4 - 5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; (...)
  • The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • The Challenge of Modeling the Acquisition of Mathematical Concepts. Alberto Testolin - 2020 - Frontiers in Human Neuroscience 14.
  • The Philosophical Significance of Deep Learning [深層学習の哲学的意義]. Takayuki Suzuki - 2021 - Kagaku Tetsugaku 53 (2):151-167.
  • From Implausible Artificial Neurons to Idealized Cognitive Models: Rebooting Philosophy of Artificial Intelligence. Catherine Stinson - 2020 - Philosophy of Science 87 (4):590-611.
    There is a vast literature within philosophy of mind that focuses on artificial intelligence, but hardly mentions methodological questions. There is also a growing body of work in philosophy of science about modeling methodology that hardly mentions examples from cognitive science. Here these discussions are connected. Insights developed in the philosophy of science literature about the importance of idealization provide a way of understanding the neural implausibility of connectionist networks. Insights from neurocognitive science illuminate how relevant similarities between models and (...)
  • Moving beyond content‐specific computation in artificial neural networks. Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the two takes (...)
  • Time-Based Binding as a Solution to and a Limitation for Flexible Cognition. Mehdi Senoussi, Pieter Verbeke & Tom Verguts - 2022 - Frontiers in Psychology 12.
    Why can’t we keep as many items as we want in working memory? It has long been debated whether this resource limitation is a bug or instead a feature. We propose that the resource limitation is a consequence of a useful feature. Specifically, we propose that flexible cognition requires time-based binding, and time-based binding necessarily limits the number of memoranda that can be stored simultaneously. Time-based binding is most naturally instantiated via neural oscillations, for which there exists ample experimental evidence. (...)
  • Anthropomorphism in AI. Arleen Salles, Kathinka Evers & Michele Farisco - 2020 - American Journal of Bioethics Neuroscience 11 (2):88-95.
    AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized. The general public’s anthropomorphic attitudes and some of their ethical consequences have been widely discussed in (...)
  • Cognitive Systems, Predictive Processing, and the Self. Robert D. Rupert - 2021 - Review of Philosophy and Psychology 13 (4):947-972.
    This essay presents the conditional probability of co-contribution account of the individuation of cognitive systems (CPC) and argues that CPC provides an attractive basis for a theory of the cognitive self. The argument proceeds in a largely indirect way, by emphasizing empirical challenges faced by an approach that relies entirely on predictive processing (PP) mechanisms to ground a theory of the cognitive self. Given the challenges faced by PP-based approaches, we should prefer a theory of the cognitive self of the (...)
  • Privacy concerns in educational data mining and learning analytics. Isak Potgieter - 2020 - International Review of Information Ethics 28.
    Education at all levels is increasingly augmented and enhanced by data mining and analytics, catalysed by the growing prevalence of automated distance learning. With an unprecedented capacity to scale both horizontally and vertically, data mining and analytics are set to be a transformative part of the future of education. We reflect on the assumptions behind data mining and the potential consequences of learning analytics, with reference to an issue brief prepared for the U.S. Department of Education entitled Enhancing Teaching and (...)
  • The Unbearable Shallow Understanding of Deep Learning. Alessio Plebe & Giorgio Grasso - 2019 - Minds and Machines 29 (4):515-553.
    This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given for the ups and downs, which have characterized neural networks research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sieve out (...)
  • Deep learning and cognitive science. Pietro Perconti & Alessio Plebe - 2020 - Cognition 203:104365.
    In recent years, the family of algorithms collected under the term “deep learning” has revolutionized artificial intelligence, enabling machines to reach human-like performances in many complex cognitive tasks. Although deep learning models are grounded in the connectionist paradigm, their recent advances were basically developed with engineering goals in mind. Despite their applied focus, deep learning models eventually seem fruitful for cognitive purposes. This can be thought of as a kind of biological exaptation, where a physiological structure becomes applicable for a (...)
  • Alien Reasoning: Is a Major Change in Scientific Research Underway? Thomas Nickles - 2020 - Topoi 39 (4):901-914.
    Are we entering a major new phase of modern science, one in which our standard, human modes of reasoning and understanding, including heuristics, have decreasing value? The new methods challenge human intelligibility. The digital revolution inspires such claims, but they are not new. During several historical periods, scientific progress has challenged traditional concepts of reasoning and rationality, intelligence and intelligibility, explanation and knowledge. The increasing intelligence of machine learning and networking is a deliberately sought, somewhat alien intelligence. As such, it (...)
  • The philosophy of linguistics: Scientific underpinnings and methodological disputes. Ryan M. Nefdt - 2019 - Philosophy Compass 14 (12):e12636.
    This article surveys the philosophical literature on theoretical linguistics. The focus of the paper is centred around the major debates in the philosophy of linguistics, past and present, with specific relation to how they connect to the philosophy of science. Specific issues such as scientific realism in linguistics, the scientific status of grammars, the methodological underpinnings of formal semantics, and the integration of linguistics into the larger cognitive sciences form the crux of the discussion.
  • Are machines radically contextualist? Ryan M. Nefdt - 2023 - Mind and Language 38 (3):750-771.
    In this article, I describe a novel position on the semantics of artificial intelligence. I present a problem for the current artificial neural networks used in machine learning, specifically with relation to natural language tasks. I then propose that from a metasemantic level, meaning in machines can best be interpreted as radically contextualist. Finally, I consider what this might mean for human‐level semantic competence from a comparative perspective.
  • A Puzzle concerning Compositionality in Machines. Ryan M. Nefdt - 2020 - Minds and Machines 30 (1):47-75.
    This paper attempts to describe and address a specific puzzle related to compositionality in artificial networks such as Deep Neural Networks and machine learning in general. The puzzle identified here touches on a larger debate in Artificial Intelligence related to epistemic opacity but specifically focuses on computational applications of human level linguistic abilities or properties and a special difficulty with relation to these. Thus, the resulting issue is both general and unique. A partial solution is suggested.
  • Ethical Issues in Consent for the Reuse of Data in Health Data Platforms. Alex McKeown, Miranda Mourby, Paul Harrison, Sophie Walker, Mark Sheehan & Ilina Singh - 2021 - Science and Engineering Ethics 27 (1):1-21.
    Data platforms represent a new paradigm for carrying out health research. In the platform model, datasets are pooled for remote access and analysis, so novel insights for developing better stratified and/or personalised medicine approaches can be derived from their integration. If the integration of diverse datasets enables development of more accurate risk indicators, prognostic factors, or better treatments and interventions, this obviates the need for the sharing and reuse of data; and a platform-based approach is an appropriate model for facilitating (...)
  • Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  • Master and Slave: the Dialectic of Human-Artificial Intelligence Engagement. Tae Wan Kim, Fabrizio Maimone, Katherina Pattit, Alejo José Sison & Benito Teehankee - 2021 - Humanistic Management Journal 6 (3):355-371.
    The massive introduction of artificial intelligence has triggered significant societal concerns, ranging from “technological unemployment” to the dominance of algorithms in the workplace and in everyday life, among others. While AI is made by humans and is, therefore, dependent on the latter for its purpose, the increasing capabilities of AI to carry out productive activities for humans can lead the latter to an unwitting slavish existence. This has become evident, for example, in the area of social media use, where AI (...)
  • A philosophical view on singularity and strong AI. Christian Hugo Hoffmann - forthcoming - AI and Society:1-18.
    More intellectual modesty, but also conceptual clarity, is urgently needed in AI, perhaps more than in many other disciplines. AI research has been marked by hype and hubris since its early beginnings in the 1950s. For instance, the Nobel laureate Herbert Simon predicted after his participation in the Dartmouth workshop that “machines will be capable, within 20 years, of doing any work that a man can do”. And expectations in some circles remain high, even overblown, today. This paper addresses (...)
  • Turning biases into hypotheses through method: A logic of scientific discovery for machine learning. Maja Bak Herrie & Simon Aagaard Enni - 2021 - Big Data and Society 8 (1).
    Machine learning systems have shown great potential for performing or supporting inferential reasoning through analyzing large data sets, thereby potentially facilitating more informed decision-making. However, a hindrance to such use of ML systems is that the predictive models created through ML are often complex, opaque, and poorly understood, even if the programs “learning” the models are simple, transparent, and well understood. ML models become difficult to trust, since lay-people, specialists, and even researchers have difficulties gauging the reasonableness, correctness, and reliability (...)
  • 15 challenges for AI: or what AI (currently) can’t do. Thilo Hagendorff & Katharina Wezel - 2020 - AI and Society 35 (2):355-365.
    The current “AI Summer” is marked by scientific breakthroughs and economic successes in the fields of research, development, and application of systems with artificial intelligence. But, aside from the great hopes and promises associated with artificial intelligence, there are a number of challenges, shortcomings and even limitations of the technology. For one, these challenges arise from methodological and epistemological misconceptions about the capabilities of artificial intelligence. Secondly, they result from restrictions of the social context in which the development of applications (...)
  • Machine learning in human creativity: status and perspectives. Mirko Farina, Andrea Lavazza, Giuseppe Sartori & Witold Pedrycz - forthcoming - AI and Society:1-13.
    As we write this research paper, we notice an explosion in popularity of machine learning in numerous fields (ranging from governance, education, and management to criminal justice, fraud detection, and internet of things). In this contribution, rather than focusing on any of those fields, which have been well-reviewed already, we decided to concentrate on a series of more recent applications of deep learning models and technologies that have only recently gained significant traction in the relevant literature. These applications are concerned (...)
  • Abstraction, mimesis and the evolution of deep learning. Jon Eklöf, Thomas Hamelryck, Cadell Last, Alexander Grima & Ulrika Lundh Snis - forthcoming - AI and Society:1-9.
    Deep learning developers typically rely on deep learning software frameworks (DLSFs)—simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. New DLSFs progressively encapsulate mathematical, statistical and computational complexity. Such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). In this study, we quantify this increase in abstraction and discuss its implications. Analyzing publicly available code from Github, we found that (...)
  • Dubito Ergo Sum: Exploring AI Ethics. Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  • Autonomy and Machine Learning as Risk Factors at the Interface of Nuclear Weapons, Computers and People. S. M. Amadae & Shahar Avin - 2019 - In Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk: Euro-Atlantic Perspectives. Stockholm, Sweden: pp. 105-118.
    This article assesses how autonomy and machine learning impact the existential risk of nuclear war. It situates the problem of cyber security, which proceeds by stealth, within the larger context of nuclear deterrence, which is effective when it functions with transparency and credibility. Cyber vulnerabilities pose new weaknesses to the strategic stability provided by nuclear deterrence. This article offers best practices for the use of computer and information technologies integrated into nuclear weapons systems. Focusing on nuclear command and control, avoiding (...)
  • Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • Emergent Models for Moral AI Spirituality. Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. Using (...)
  • Computer Simulations, Machine Learning and the Laplacean Demon: Opacity in the Case of High Energy Physics. Florian J. Boge & Paul Grünke - forthcoming - In Andreas Kaminski, Michael Resch & Petra Gehring (eds.), The Science and Art of Simulation II.
    In this paper, we pursue three general aims: (I) We will define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein. (II) We identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP respectively, and ask whether, and if so how, they may be overcome. (III) We address the question of whether any kind of opacity, contingent (...)
  • Ontology and Cognitive Outcomes. David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James Llinas & Barry Smith - 2020 - Journal of Knowledge Structures and Systems 1 (1):3-22.
    The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors (...)
  • Updating the Frame Problem for Artificial Intelligence Research. Lisa Miracchi - 2020 - Journal of Artificial Intelligence and Consciousness 7 (2):217-230.
    The Frame Problem is the problem of how one can design a machine to use information so as to behave competently, with respect to the kinds of tasks a genuinely intelligent agent can reliably, effectively perform. I will argue that the way the Frame Problem is standardly interpreted, and so the strategies considered for attempting to solve it, must be updated. We must replace overly simplistic and reductionist assumptions with more sophisticated and plausible ones. In particular, the standard interpretation assumes (...)