Contents
58 found (showing 1 – 50)
  1. Can AI Abstract the Architecture of Mathematics?Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Nature Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing—feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had (...)
  2. Private memory confers no advantage.Samuel Allen Alexander - forthcoming - Cifma.
    Mathematicians and software developers use the word "function" very differently, and yet, sometimes, things that are in practice implemented using the software developer's "function", are mathematically formalized using the mathematician's "function". This mismatch can lead to inaccurate formalisms. We consider a special case of this meta-problem. Various kinds of agents might, in actual practice, make use of private memory, reading and writing to a memory-bank invisible to the ambient environment. In some sense, we humans do this when we silently subvocalize (...)
  3. Intention Reconsideration in Artificial Agents: a Structured Account.Fabrizio Cariani - forthcoming - Special Issue of Phil Studies.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible prospect is (...)
  4. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
  5. Cultural Bias in Explainable AI Research.Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  6. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  7. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - Aistats.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
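    A minimal sketch of the mixture property stated in the abstract, using hypothetical one-step agents and environments (illustrative only, not the paper's formal framework): an agent that plays component agent i with probability w_i has an expected reward equal to the weighted average of the components' expected rewards, by linearity of expectation.

      import random

      def expected_reward(agent, environment, trials=100_000):
          # Monte Carlo estimate of the agent's expected reward in the environment.
          return sum(environment(agent()) for _ in range(trials)) / trials

      def mixture(agents, weights):
          # Weighted mixture: act as component agent i with probability weights[i].
          def mixed_agent():
              return random.choices(agents, weights=weights, k=1)[0]()
          return mixed_agent

      agent_a = lambda: 0                                           # always plays action 0
      agent_b = lambda: 1                                           # always plays action 1
      environment = lambda action: action + random.gauss(0.0, 0.1)  # noisy reward

      agents, weights = [agent_a, agent_b], [0.25, 0.75]
      lhs = expected_reward(mixture(agents, weights), environment)
      rhs = sum(w * expected_reward(a, environment) for a, w in zip(agents, weights))
      print(round(lhs, 2), round(rhs, 2))                           # both approximately 0.75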
  8. The future won’t be pretty: The nature and value of ugly, AI-designed experiments.Michael T. Stuart - 2023 - In Milena Ivanova & Alice Murphy (eds.), The Aesthetics of Scientific Experiments. New York, NY: Routledge.
    Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude (...)
  9. Extended subdomains: a solution to a problem of Hernández-Orallo and Dowe.Samuel Allen Alexander - 2022 - In AGI.
    This is a paper about the general theory of measuring or estimating social intelligence via benchmarks. Hernández-Orallo and Dowe described a problem with certain proposed intelligence measures. The problem suggests that those intelligence measures might not accurately capture social intelligence. We argue that Hernández-Orallo and Dowe's problem is even more general than how they stated it, applying to many subdomains of AGI, not just the one subdomain in which they stated it. We then propose a solution. In our solution, instead (...)
  10. Extending Environments To Measure Self-Reflection In Reinforcement Learning.Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - 2022 - Journal of Artificial General Intelligence 13 (1).
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus weighted-average performance over the space of all suitably well-behaved extended environments could be considered a (...)
    2 citations
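    A toy sketch of an "extended environment" in the spirit of the abstract (hypothetical interface, not the paper's formal definition): the environment receives the agent itself and bases its reward on what the agent would do on a history it never actually experienced, so only agents that can model their own behaviour score well on average.

      def extended_environment_step(agent, actual_history):
          # Simulate the agent on a hypothetical (here: empty) history.
          hypothetical_action = agent([])
          actual_action = agent(actual_history)
          # Reward consistency between actual and hypothetical behaviour.
          return 1.0 if actual_action == hypothetical_action else 0.0

      constant_agent = lambda history: "wait"                 # ignores its history
      print(extended_environment_step(constant_agent, ["obs1", "obs2"]))  # 1.0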
  11. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance.Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
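    A small reward-shaping sketch of the reaction penalty described in the abstract (hypothetical observation format and names, not the authors' implementation): the NPC's own policy is re-run with pseudo-visible players masked out, and the NPC is penalised whenever seeing them changes its action.

      def shaped_reward(npc_policy, observation, base_reward, penalty=1.0):
          action_seen = npc_policy(observation)
          # Re-run the policy as if the pseudo-visible players were invisible.
          masked = {**observation, "pseudo_visible_players": ()}
          action_masked = npc_policy(masked)
          # Penalise any behavioural difference caused by seeing them.
          return base_reward - (penalty if action_seen != action_masked else 0.0)

    Trained against such a shaped reward, an NPC learns to ignore pseudo-visible players unless reacting to them is worth more than the penalty.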
  12. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics.Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
    1 citation
  13. Concern Across Scales: a biologically inspired embodied artificial intelligence.Matthew Sims - 2022 - Frontiers in Neurorobotics 1 (Bio A.I. - From Embodied Cogniti).
    Intelligence in current AI research is measured according to designer-assigned tasks that lack any relevance for an agent itself. As such, tasks and their evaluation reveal a lot more about our intelligence than the possible intelligence of agents that we design and evaluate. As a possible first step in remedying this, this article introduces the notion of “self-concern,” a property of a complex system that describes its tendency to bring about states that are compatible with its continued self-maintenance. Self-concern, as (...)
  14. Can reinforcement learning learn itself? A reply to 'Reward is enough'.Samuel Allen Alexander - 2021 - Cifma.
    In their paper 'Reward is enough', Silver et al conjecture that the creation of sufficiently good reinforcement learning (RL) agents is a path to artificial general intelligence (AGI). We consider one aspect of intelligence Silver et al did not consider in their paper, namely, that aspect of intelligence involved in designing RL agents. If that is within human reach, then it should also be within AGI's reach. This raises the question: is there an RL environment which incentivises RL agents to (...)
  15. Reward-Punishment Symmetric Universal Intelligence.Samuel Allen Alexander & Marcus Hutter - 2021 - In AGI.
    Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
    1 citation
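    For orientation, the underlying Legg-Hutter measure, and one hedged way to see why a reward-ignoring agent can land at intelligence 0 under a complexity symmetry (an illustrative reading, not the paper's construction):

      \[
        \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu},
      \]
      where $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ and $V^{\pi}_{\mu}$ is the agent's expected total reward, now allowed to be negative. If each $\mu$ is paired with a reward-negated mirror $\bar{\mu}$ satisfying $K(\bar{\mu}) = K(\mu)$, and a reward-ignoring $\pi$ behaves identically in both, so that $V^{\pi}_{\bar{\mu}} = -V^{\pi}_{\mu}$, the terms cancel pairwise and $\Upsilon(\pi) = 0$.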
  16. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure.Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    1 citation
  17. Making AI Meaningful Again.Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    15 citations
  18. AGI and the Knight-Darwin Law: why idealized AGI reproduction requires collaboration.Samuel Alexander - 2020 - Agi.
    Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and (...)
    3 citations
  19. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence.Samuel Alexander - 2020 - Cifma.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
    1 citation
  20. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI.Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
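    For reference, the classical Archimedean property that the paper generalizes, in its standard textbook form (the paper's generalized, non-numeric version is not reproduced here):

      \[
        \forall x, y \in \mathbb{R}_{>0}\;\; \exists n \in \mathbb{N}:\; n\,x > y.
      \]
      A reward structure containing infinitesimal or infinite elements violates this property, which is why real-valued rewards cannot encode it without distortion.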
  21. Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence.Albert Efimov - 2020 - Lecture Notes in Computer Science 12177.
    This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” (...)
    3 citations
  22. There is no general AI.Jobst Landgrebe & Barry Smith - 2020 - arXiv.
    The goal of creating Artificial General Intelligence (AGI) – or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence – has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability (...)
  23. Ontology and Cognitive Outcomes.David Limbaugh, Jobst Landgrebe, David Kasmier, Ronald Rudnicki, James Llinas & Barry Smith - 2020 - Journal of Knowledge Structures and Systems 1 (1): 3-22.
    The term ‘intelligence’ as used in this paper refers to items of knowledge collected for the sake of assessing and maintaining national security. The intelligence community (IC) of the United States (US) is a community of organizations that collaborate in collecting and processing intelligence for the US. The IC relies on human-machine-based analytic strategies that 1) access and integrate vast amounts of information from disparate sources, 2) continuously process this information, so that, 3) a maximally comprehensive understanding of world actors (...)
    3 citations
  24. Cosa significano Paraconsistente, Indecifrabile, Casuale, Calcolabile e Incompleto? Una recensione di Godel's Way: sfrutta in un mondo indecidibile (Godel's Way: Exploits into an Undecidable World) di Gregory Chaitin, Francisco A Doria, Newton C.A. da Costa 160p (2012) (rivisto 2019).Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 163-176.
    In 'Godel's Way' three eminent scientists discuss issues such as undecidability, incompleteness, randomness, computability and paraconsistency. I approach these problems from the Wittgensteinian viewpoint that there are two basic issues with completely different solutions. There are the scientific or empirical issues, which are facts about the world that must be investigated observationally, and philosophical issues about how language can be used intelligibly (which include certain questions in mathematics and logic), which must be decided (...)
  25. Gli ominoidi o gli androidi distruggeranno la Terra? Una recensione di Come Creare una Mente (How to Create a Mind) di Ray Kurzweil (2012) (recensione rivista nel 2019).Michael Richard Starks - 2020 - In Benvenuti all'inferno sulla Terra: Bambini, Cambiamenti climatici, Bitcoin, Cartelli, Cina, Democrazia, Diversità, Disgenetica, Uguaglianza, Pirati Informatici, Diritti umani, Islam, Liberalismo, Prosperità, Web, Caos, Fame, Malattia, Violenza, Intellige. Las Vegas, NV, USA: Reality Press. pp. 150-162.
    Some years ago I reached the point where I can usually tell from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters that wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interlarded with philosophical confusions about what (...)
  27. कैसे सात Sociopaths जो चीन शासन कर रहे हैं विश्व युद्ध तीन और तीन तरीके उन्हें रोकने के लिए How the Seven Sociopaths Who Rule China are Winning World War Three and Three Ways to Stop Them (2019).Michael Richard Starks - 2020 - In पृथ्वी पर नर्क में आपका स्वागत है: शिशुओं, जलवायु परिवर्तन, बिटकॉइन, कार्टेल, चीन, लोकतंत्र, विविधता, समानता, हैकर्स, मानव अधिकार, इस्लाम, उदारवाद, समृद्धि, वेब, अराजकता, भुखमरी, बीमारी, हिंसा, कृत्रिम बुद्धिमत्ता, युद्ध. Las Vegas, NV, USA: Reality Press. pp. 389-396.
    The first thing we must keep in mind is that when we say that China says this or China does that, we are not speaking of the Chinese people, but of the sociopaths who control the CCP (Chinese Communist Party), that is, the Seven Senile Sociopathic Serial Killers (SSSSK) of the Standing Committee of the CCP, or the 25 members of the Politburo. I recently watched some typical leftist fake news programs (pretty much the same in (...)
  28. The role of robotics and AI in technologically mediated human evolution: a constructive proposal.Jeffrey White - 2020 - AI and Society 35 (1):177-185.
    This paper proposes that existing computational modeling research programs may be combined into platforms for the information of public policy. The main idea is that computational models at select levels of organization may be integrated in natural terms describing biological cognition, thereby normalizing a platform for predictive simulations able to account for both human and environmental costs associated with different action plans and institutional arrangements over short and long time spans while minimizing computational requirements. Building from established research programs, the (...)
    1 citation
  29. Interprétabilité et explicabilité pour l’apprentissage machine : entre modèles descriptifs, modèles prédictifs et modèles causaux. Une nécessaire clarification épistémologique.Christophe Denis & Franck Varenne - 2019 - Actes de la Conférence Nationale En Intelligence Artificielle - CNIA 2019.
    The lack of explainability of machine learning (ML) techniques poses operational, legal and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, treated as a black box. The first step of the project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modelling of a physical phenomenon. (...)
    1 citation
  30. Present Scenario of Fog Computing and Hopes for Future Research.G. KSoni, B. Hiren Bhatt & P. Dhaval Patel - 2019 - International Journal of Computer Sciences and Engineering 7 (9).
    Forecasts suggest that billions of devices will be connected to the Internet by 2020. All these devices will produce a huge amount of data that will have to be handled rapidly and in a feasible manner. It will become a challenge for real-time applications to handle this data while accounting for security issues as well as time constraints. The main highlights of cloud computing are on-demand service and scalability; therefore the data generated from IoT devices are generally handled (...)
  31. In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI.Vincent C. Müller - 2018 - Medienkorrespondenz 20 (05.10.2018):5-15.
    Developments in artificial intelligence (AI) are exciting. But where is the journey headed? I present an analysis according to which exponential growth in computing speed and data has been the decisive factor in progress so far. I then explain under which assumptions this growth will continue to enable progress: 1) intelligence is one-dimensional and measurable, 2) cognitive science is not needed for AI, 3) computation is sufficient for cognition, 4) current techniques and architectures are sufficiently scalable, 5) Technological Readiness Levels (TRL) (...)
    1 citation
  32. Simple or complex bodies? Trade-offs in exploiting body morphology for control.Matej Hoffmann & Vincent C. Müller - 2017 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Representation of Reality: Humans, Other Living Organism and Intelligent Machines. Heidelberg: Springer. pp. 335-345.
    Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or ”soft” bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as ”morphological computation”. In this article, we briefly review different notions of computation by physical systems and (...)
    1 citation
  33. Why Build a Virtual Brain? Large-Scale Neural Simulations as Jump Start for Cognitive Computing.Matteo Colombo - 2016 - Journal of Experimental and Theoretical Artificial Intelligence.
    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates (...)
    3 citations
  34. From human to artificial cognition and back: New perspectives on cognitively inspired AI systems.Antonio Lieto & Daniele Radicioni - 2016 - Cognitive Systems Research 39 (c):1-3.
    We overview the main historical and technological elements characterising the rise, the fall and the recent renaissance of the cognitive approaches to Artificial Intelligence and provide some insights and suggestions about the future directions and challenges that, in our opinion, this discipline needs to face in the next years.
    2 citations
  35. An expert system for feeding problems in infants and children.Samy S. Abu Naser & Mariam W. Alawar - 2016 - International Journal of Medicine Research 1 (2):79--82.
    Many infants have significant food-related problems, such as spitting up, rejecting new foods, or refusing to eat at specific times. These issues are frequently ordinary and are not a sign that the baby is unwell. According to the National Institutes of Health, 25% of typically developing infants and 35% of babies with neurodevelopmental disabilities are affected by some sort of feeding problem. Some, for example refusing to eat specific foods or being overly finicky, are momentary and (...)
    17 citations
  36. On a Cognitive Model of Semiosis.Piotr Konderak - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):129-144.
    What is the class of possible semiotic systems? What kinds of systems could count as such systems? The human mind is naturally considered the prototypical semiotic system. Over years of research in semiotics the class has been broadened to include, for example, living systems like animals, or even plants. It is suggested in the literature on artificial intelligence that artificial agents are typical examples of symbol-processing entities. It also seems that semiotic processes are in fact cognitive processes. In consequence, it is (...)
    4 citations
  37. Evaluating Artificial Models of Cognition.Marcin Miłkowski - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1):43-62.
    Artificial models of cognition serve different purposes, and their use determines the way they should be evaluated. There are also models that do not represent any particular biological agents, and there is controversy as to how they should be assessed. At the same time, modelers do evaluate such models as better or worse. There is also a widespread tendency to call for publicly available standards of replicability and benchmarking for such models. In this paper, I argue that proper evaluation of models (...)
  38. Dealing with Concepts: from Cognitive Psychology to Knowledge Representation.Marcello Frixione & Antonio Lieto - 2013 - Frontiers of Psychological and Behevioural Science 2 (3):96-106.
    Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the issue of representing “non classical” concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unresolved. In this paper we review empirical evidence from cognitive psychology, according to which concept representation is not a unitary phenomenon. On this basis, we sketch some proposals for concept representation, taking into account suggestions from psychological research. In (...)
    2 citations
  39. A lesson from subjective computing: autonomous self-referentiality and social interaction as conditions for subjectivity.Patrick Grüneberg & Kenji Suzuki - 2013 - AISB Proceedings 2012:18-28.
    In this paper, we model a relational notion of subjectivity by means of two experiments in subjective computing. The goal is to determine to what extent a cognitive and social robot can be regarded to act subjectively. The system was implemented as a reinforcement learning agent with a coaching function. To analyze the robotic agent we used the method of levels of abstraction in order to analyze the agent at four levels of abstraction. At one level the agent is described (...)
  40. Gdzie jesteś, HAL?Jarek Gryz - 2013 - Przegląd Filozoficzny 22 (2):167-184.
    Artificial intelligence emerged as a research field over 60 years ago. After spectacular early successes, thinking machines were expected within a few years. That forecast did not come true at all. Not only has a thinking machine not yet been built, but there is no agreement among researchers on what would characterize such a machine, or even whether it is worth building one at all. In this article we trace the methodological debate that has accompanied artificial intelligence since its beginnings and try to determine the relation between artificial (...)
  41. Cognitive behavioural systems.Esposito Anna, Esposito Antonietta M., Hoffmann Rüdiger, Müller Vincent C. & Vinciarelli Alessandro (eds.) - 2012 - Springer.
    This book constitutes refereed proceedings of the COST 2102 International Training School on Cognitive Behavioural Systems held in Dresden, Germany, in February 2011. The 39 revised full papers presented were carefully reviewed and selected from various submissions. The volume presents new and original research results in the field of human-machine interaction inspired by cognitive behavioural human-human interaction features. The themes covered are on cognitive and computational social information processing, emotional and social believable Human-Computer Interaction (HCI) systems, behavioural and contextual analysis (...)
  42. Challenges for artificial cognitive systems.Antoni Gomila & Vincent C. Müller - 2012 - Journal of Cognitive Science 13 (4):452-469.
    The declared goal of this paper is to fill this gap: “... cognitive systems research needs questions or challenges that define progress. The challenges are not (yet more) predictions of the future, but a guideline to what are the aims and what would constitute progress.” – the quotation being from the project description of EUCogII, the project for the European Network for Cognitive Systems within which this formulation of the ‘challenges’ was originally developed (http://www.eucognition.org). So, we stick out our neck (...)
    2 citations
  43. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  44. The feeling body: Towards an enactive approach to emotion.Giovanna Colombetti & Evan Thompson - 2008 - In W. F. Overton, U. Mueller & J. Newman (eds.), Body in Mind, Mind in Body: Developmental Perspectives on Embodiment and Consciousness. Erlbaum.
    For many years emotion theory has been characterized by a dichotomy between the head and the body. In the golden years of cognitivism, during the nineteen-sixties and seventies, emotion theory focused on the cognitive antecedents of emotion, the so-called “appraisal processes.” Bodily events were seen largely as byproducts of cognition, and as too unspecific to contribute to the variety of emotion experience. Cognition was conceptualized as an abstract, intellectual, “heady” process separate from bodily events. Although current emotion theory has moved (...)
    51 citations
  45. Modelling Argument Recognition and Reconstruction.Joel Katzav & Chris Reed - 2008 - Journal of Pragmatics 40:155-172.
    A growing body of recent work in informal logic investigates the process of argumentation. Among other things, this work focuses on the ways in which individuals attempt to understand written or verbalised arguments in light of the fact that these are often presented in forms that are incomplete and unmarked. One of its aims is to develop general procedures for natural language argument recognition and reconstruction. Our aim here is to draw on this growing body of knowledge in informal logic (...)
  46. Decision theory, intelligent planning and counterfactuals.Michael John Shaffer - 2008 - Minds and Machines 19 (1):61-92.
    The ontology of decision theory has been subject to considerable debate in the past, and discussion of just how we ought to view decision problems has revealed more than one interesting problem, as well as suggested some novel modifications of classical decision theory. In this paper it will be argued that Bayesian, or evidential, decision-theoretic characterizations of decision situations fail to adequately account for knowledge concerning the causal connections between acts, states, and outcomes in decision situations, and so they are (...)
    4 citations
  47. Computationalism under attack.Roberto Cordeschi & Marcello Frixione - 2007 - In M. Marraffa, M. De Caro & F. Ferretti (eds.), Cartographies of the Mind: Philosophy and Psychology in Intersection. Springer.
    Since the early eighties, computationalism in the study of the mind has been “under attack” by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon’s Physical Symbol System Hypothesis and Jerry Fodor’s theory of Language of Thought, usually without taking into account the fact that such approaches are very different as to their methods and aims. Zenon (...)
    2 citations
  48. A Plea for Automated Language-to-Logical-Form Converters.Joseph S. Fulda - 2006 - RASK 24:87-102.
    This has been made available gratis by the publisher. This piece gives the raison d'être for the development of the converters mentioned in the title. Three reasons are given: one linguistic, one philosophical, and one practical. It is suggested that at least two independent converters are needed. This piece ties together the extended paper "Abstracts from Logical Form I/II" and the short piece providing the comprehensive theory alluded to in the abstract of that extended paper in "Pragmatics, Montague, (...)
    2 citations
  49. The Discovery of the Artificial: Behavior, Mind and Machines Before and Beyond Cybernetics.Roberto Cordeschi - 2002 - Kluwer Academic Publishers.
    Since the second half of the twentieth century, researchers in cybernetics and AI, neural nets and connectionism, Artificial Life and new robotics have endeavoured to build different machines that could simulate functions of living organisms, such as adaptation and development, problem solving and learning. In this book these research programs are discussed, particularly as regards the epistemological issues of behaviour modelling. One of the main novelties of this book is that certain projects involving the building of (...)
    24 citations
  50. Was Roboter nicht können. Die Roboterantwort als knapp misslungene Verteidigung der starken KI-These.Geert Keil - 1998 - In Andreas Engel & Peter Gold (eds.), Der Mensch in der Perspektive der Kognitionswissenschaften. Suhrkamp. pp. 98-131.
    Theorists of artificial intelligence, and their companions in the philosophy of mind, have responded in different ways to criticism of AI's original theoretical goal. One reaction has been to retract that goal in favour of pursuing smaller-scale projects. Another is the promotion of connectionist systems, whose decentralized mode of operation is supposed to simulate the neural networks of the human brain more faithfully. A further one is the so-called robot reply. The robot reply consists of two elements. It contains (a) the concession that the system behaviour of a (...)
    1 citation