Philosophy of Artificial Intelligence

Edited by Eric Dietrich (State University of New York at Binghamton)
Assistant editor: Michelle Thomas (University of Western Ontario)
Contents
2215 found; showing 1–50.
Material to categorize
  1. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of the ACM IUI:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up such (...)
  2. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
  3. The AI Ensoulment Hypothesis.Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” a (...)
  4. Asking Chatbase to learn about academic retractions.Aisdl Team - 2023 - Sm3D Science Portal.
    It is noteworthy that Chatbase has the capability to identify notable authors writing about the topic, including the co-founders of Retraction Watch, Ivan Oransky and Adam Marcus.
  5. Chatting with Chatbase over the rationality issue of the cost of science.Aisdl Team - 2023 - Sm3D Science Portal.
    In this article, we present the outcome of our first experiment with Chatbase, a chatbot built on ChatGPT’s functioning model(s). Our idea is to try instructing Chatbase to perform a reading, digesting, and summarizing task for a specifically formatted academic document.
  6. Explanation and the Right to Explanation.Elanor Taylor - forthcoming - Journal of the American Philosophical Association.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the (...)
  7. Lorenzo Magnani: Discoverability—the urgent need of an ecology of human creativity. [REVIEW] Jeffrey White - 2023 - AI and Society:1-2.
    Discoverability: the urgent need of an ecology of human creativity from the prolific Lorenzo Magnani is worthy of direct attention. The message may be of special interest to philosophers, ethicists and organizing scientists involved in the development of AI and related technologies which are increasingly directed at reinforcing conditions against which Magnani directly warns, namely the “overcomputationalization” of life marked by the gradual encroachment of technologically “locked strategies” into everyday decision-making until “freedom, responsibility, and ownership of our destinies” are ceded (...)
  8. On the Foundations of Computing. Computing as the Fourth Great Domain of Science. [REVIEW] Gordana Dodig-Crnkovic - 2023 - Global Philosophy 33 (1):1-12.
    This review essay analyzes the book by Giuseppe Primiero, On the foundations of computing. Oxford: Oxford University Press (ISBN 978-0-19-883564-6/hbk; 978-0-19-883565-3/pbk). xix, 296 p. (2020). It gives a critical view from the perspective of physical computing as a foundation of computing and argues that the neglected pillar of material computation (Stepney) should be brought centerstage and computing recognized as the fourth great domain of science (Denning).
  9. Deepfakes, Fake Barns, and Knowledge from Videos.Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given (...)
  10. Deep Learning Opacity in Scientific Discovery.Eamon Duede - forthcoming - Philosophy of Science.
    Philosophers have recently focused on critical, epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet, this is hard to square with the recent boom in optimism for AI in science alongside a flood of recent scientific breakthroughs driven by AI methods. In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a (...)
  11. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  12. Est-ce que Vous Compute?Arianna Falbo & Travis LaCroix - 2022 - Feminist Philosophy Quarterly 8 (3).
    Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appearance in response to a perceived change in our social environment. We defend the need to investigate cultural code-switching capacities in artificial intelligence systems. We explore a series of ethical and epistemic issues that arise when bringing cultural code-switching to bear on artificial intelligence. Building upon Dotson’s (2014) analysis of testimonial smothering, we discuss how emerging technologies in AI can give rise to epistemic oppression, and specifically, a (...)
  13. On the responsible subjects of self-driving cars under the SAE system: An improvement scheme.Hao Zhan, Dan Wan & Zhiwei Huang - 2020 - In 2020 IEEE International Symposium on Circuits and Systems (ISCAS). Seville, Spain: IEEE. pp. 1-5.
    The issue of how to identify the liability of subjects after a traffic accident takes place remains a puzzle regarding the SAE classification system. The SAE system is not good at dealing with the problem of responsibility evaluation; therefore, building a new classification system for self-driving cars from the perspective of the subject's liability is a possible way to solve this problem. This new system divides automated driving into three levels: i) assisted driving based on the will of drivers, ii) (...)
  14. A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degree to which A.I.s and machines are applied across different social contexts, the need to implement ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an (...)
  15. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau -
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  16. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  17. Link Uncertainty, Implementation, and ML Opacity: A Reply to Tamir and Shech.Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 341-345.
    This chapter responds to Michael Tamir and Elay Shech’s chapter “Understanding from Deep Learning Models in Context”.
  18. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25.
    This paper investigates the claim that artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific account (...)
  19. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  20. Gul A. Agha, Actors: A Model of Concurrent Computation in Distributed Systems. [REVIEW] Varol Akman - 1990 - AI Magazine 11 (4):92-93.
    This is a review of Gul A. Agha’s Actors: A Model of Concurrent Computation in Distributed Systems (The MIT Press, Cambridge, MA, 1987), a part of the MIT Press Series in Artificial Intelligence, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.
  21. Representing emotions in terms of object directedness.Varol Akman & Hakime G. Unsal - 1994 - Department of Computer Engineering Technical Reports, Bilkent University.
    A logical formalization of emotions is considered to be tricky because they appear to have no strict types, reasons, and consequences. On the other hand, such a formalization is crucial for commonsense reasoning. Here, the so-called "object directedness" of emotions is studied by using Helen Nissenbaum's influential ideas.
  22. Philosophy and Theory of Artificial Intelligence 2021.Vincent C. Müller (ed.) - 2022 - Berlin: Springer.
    This book gathers contributions from the fourth edition of the Conference on "Philosophy and Theory of Artificial Intelligence" (PT-AI), held on 27-28th of September 2021 at Chalmers University of Technology, in Gothenburg, Sweden. It covers topics at the interface between philosophy, cognitive science, ethics and computing. It discusses advanced theories fostering the understanding of human cognition, human autonomy, dignity and morality, and the development of corresponding artificial cognitive structures, analyzing important aspects of the relationship between humans and AI systems, including (...)
  23. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability.Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics common to (...)
  24. Editorial: On modes of participation.Ioannis Bardakos, Dalila Honorato, Claudia Jacques, Claudia Westermann & Primavera de Filippi - 2021 - Technoetic Arts 19 (3):221-225.
    In nature validation for physiological and emotional bonding becomes a mode for supporting social connectivity. Similarly, in the blockchain ecosystem, cryptographic validation becomes the substrate for all interactions. In the dialogue between human and artificial intelligence (AI) agents, between the real and the virtual, one can distinguish threads of physical or mental entanglements allowing different modes of participation. One could even suggest that in all types of realities there exist frameworks that are to some extent equivalent and act as validation (...)
  25. Why consciousness is non-algorithmic, and strong AI cannot come true.G. Hirase - manuscript
    I explain why consciousness is non-algorithmic and why strong AI cannot come true, reinforcing Penrose’s argument.
  26. Making Intelligence: Ethics, IQ, and ML Benchmarks.Borhane Blili-Hamelin & Leif Hancox-Li - manuscript
    The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ and ML benchmarks. Human intelligence and ML benchmarks share similarities in setting standards for describing, evaluating and comparing performance on tasks relevant to intelligence. This enables us (...)
  27. Can AI Mind Be Extended?Alice C. Helliwell - 2019 - Evental Aesthetics 8 (1):93-120.
    Andy Clark and David Chalmers’s theory of extended mind can be reevaluated in today’s world to include computational and Artificial Intelligence (AI) technology. This paper argues that AI can be an extension of human mind, and that if we agree that AI can have mind, it too can be extended. It goes on to explore the example of Ganbreeder, an image-making AI which utilizes human input to direct behavior. Ganbreeder represents one way in which AI extended mind could be achieved. (...)
  28. Artificial Intelligence and the Secret Ballot.Jakob Mainz, Jorn Sonderholm & Rasmus Uhrenfeldt - forthcoming - AI and Society.
    In this paper, we argue that because of the advent of Artificial Intelligence, the secret ballot is now much less effective at protecting voters from voting related instances of social ostracism and social punishment. If one has access to vast amounts of data about specific electors, then it is possible, at least with respect to a significant subset of electors, to infer with high levels of accuracy how they voted in a past election. Since the accuracy levels of Artificial Intelligence (...)
  29. Bold because humble, humble because bold. Yann LeCun's path.Giovanni Landi - 2022 - Www.Intelligenzaartificialecomefilosofia.Com.
    Some philosophical considerations over Yann LeCun’s position paper “A Path Towards Autonomous Machine Intelligence”.
  30. The future of condition based monitoring: risks of operator removal on complex platforms.Marie Oldfield, Murray McMonies & Ella Haig - 2022 - AI and Society 2:1-12.
    Complex systems are difficult to manage, operate and maintain. This is why we see teams of highly specialised engineers in industries such as aerospace, nuclear and subsurface. Condition based monitoring is also employed to maximise the efficiency of extensive maintenance programmes instead of using periodic maintenance. A level of automation is often required in such complex engineering platforms in order to effectively and safely manage them. Advances in Artificial Intelligence related technologies have offered greater levels of automation but this potentially (...)
  31. An Unconventional Look at AI: Why Today’s Machine Learning Systems are not Intelligent.Nancy Salay - 2020 - In LINKs: The Art of Linking, an Annual Transdisciplinary Review, Special Edition 1, Unconventional Computing. pp. 62-67.
    Machine learning systems (MLS) that model low-level processes are the cornerstones of current AI systems. These ‘indirect’ learners are good at classifying kinds that are distinguished solely by their manifest physical properties. But the more a kind is a function of spatio-temporally extended properties — words, situation-types, social norms — the less likely an MLS will be able to track it. Systems that can interact with objects at the individual level, on the other hand, and that can sustain this interaction, (...)
  32. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  33. A Dilemma for Solomonoff Prediction.Sven Neth - forthcoming - Philosophy of Science.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
  34. Reclaiming Control: Extended Mindreading and the Tracking of Digital Footprints.Uwe Peters - 2022 - Social Epistemology 36 (3):267-282.
    It is well known that on the Internet, computer algorithms track our website browsing, clicks, and search history to infer our preferences, interests, and goals. The nature of this algorithmic tracking remains unclear, however. Does it involve what many cognitive scientists and philosophers call ‘mindreading’, i.e., an epistemic capacity to attribute mental states to people to predict, explain, or influence their actions? Here I argue that it does. This is because humans are in a particular way embedded in the process (...)
  35. Development of Keyword Trend Prediction Models for Obesity Before and After the COVID-19 Pandemic Using RNN and LSTM: Analyzing the News Big Data of South Korea.Gayeong Eom & Haewon Byeon - 2022 - Frontiers in Public Health 10:894266.
    The Korea National Health and Nutrition Examination Survey (2020) reported that the prevalence of obesity (≥19 years old) was 31.4% in 2011, but it increased to 33.8% in 2019 and 38.3% in 2020, which confirmed that it increased rapidly after the outbreak of COVID-19. Obesity increases not only the risk of infection with COVID-19 but also severity and fatality rate after being infected with COVID-19 compared to people with normal weight or underweight. Therefore, identifying the difference in potential factors for (...)
  36. Putnam’s Problem of the Robot and Extended Minds.Jacob Berk - 2022 - Stance 15:88-99.
    In this paper, I consider Hilary Putnam’s argument for the prima facie acceptance of robotic consciousness as deserving the status of mind. I argue that such an extension of consciousness renders the category fundamentally unintelligible, and we should instead understand robots as integral products of an extended human consciousness. To this end, I propose a test from conceptual object permanence, which can be applied not just to robots, but to the innumerable artifacts of consciousness that texture our existences.
  37. Algorithmic and human decision making: for a double standard of transparency.Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
  38. Unownability of AI: Why Legal Ownership of Artificial Intelligence is Hard.Roman Yampolskiy - manuscript
    To hold developers responsible, it is important to establish the concept of AI ownership. In this paper we review different obstacles to ownership claims over advanced intelligent systems, including unexplainability, unpredictability, uncontrollability, self-modification, AI-rights, ease of theft when it comes to AI models and code obfuscation. We conclude that it is difficult if not impossible to establish ownership claims over AI models beyond a reasonable doubt.
  39. Certifiable AI.Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
  40. Examining the Intelligence in Artificial Intelligence.David Cycleback - 2020 - Center for Artifact Studies.
    The following looks at several problems and questions concerning our understanding of the word ‘intelligence’ and the phrase ‘artificial intelligence’ (AI), including: how to define these terms; whether intelligence can exist in AI; if artificial intelligence in AI is identifiable; and what (if any) kind of intelligence is important to AI.
  41. Are Propositional Attitudes Mental States?Umut Baysan - 2022 - Minds and Machines 32 (3):417-432.
    I present an argument that propositional attitudes are not mental states. In a nutshell, the argument is that if propositional attitudes are mental states, then only minded beings could have them; but there are reasons to think that some non-minded beings could bear propositional attitudes. To illustrate this, I appeal to cases of genuine group intentionality. I argue that these are cases in which some group entities bear propositional attitudes, but they are not subjects of mental states. Although propositional attitudes (...)
  42. Exploring RoBERTa's theory of mind through textual entailment.Michael Cohen - manuscript
    Within psychology, philosophy, and cognitive science, theory of mind refers to the cognitive ability to reason about the mental states of other people, thus recognizing them as having beliefs, knowledge, intentions and emotions of their own. In this project, we construct a natural language inference (NLI) dataset that tests the ability of a state-of-the-art language model, RoBERTa-large fine-tuned on the MNLI dataset, to make theory-of-mind inferences related to knowledge and belief. Experimental results suggest that the (...)
  43. Artificial Intelligence and Analytic Pragmatism / Umjetna inteligencija i analitički pragmatizam (Bosnian translation by Nijaz Ibrulj).Nijaz Ibrulj & Robert B. Brandom - 2022 - Sophos 1 (15):201-222.
    The text "Artificial Intelligence and Analytic Pragmatism" was translated from chapter 3 of Robert B. Brandom's book Between Saying and Doing: Towards an Analytic Pragmatism. Oxford University Press. pp. 69-92.
  44. Whispers and Shouts. The measurement of the human act.Fernando Flores Morador & Luis de Marcos Ortega (eds.) - 2021 - Alcalá de Henares, Madrid: Department of Computational Sciences, University of Alcalá.
    The 20th Century is the starting point for the most ambitious attempts to extrapolate human life into artificial systems. Norbert Wiener’s Cybernetics, Claude Shannon’s Information Theory, John von Neumann’s Cellular Automata, Universal Constructor to the Turing Test, Artificial Intelligence to Maturana and Varela’s Autopoietic Organization, all shared the goal of understanding in what sense humans resemble a machine. This scientific and technological movement has embraced all disciplines without exceptions, not only mathematics and physics but also biology, sociology, psychology, economics etc. (...)
  45. Combining Fast and Slow Thinking for Human-like and Efficient Navigation in Constrained Environments.Marianna Bergamaschi Ganapini, Murray Campbell, Francesco Fabiano, Lior Horesh, Jon Lenchner, Andrea Loreggia, Nicholas Mattei, Taher Rahgooy, Francesca Rossi, Biplav Srivastava & Brent Venable - manuscript
    In this paper, we propose a general architecture based on fast/slow solvers and a metacognitive component. We then present experimental results on the behavior of an instance of this architecture for AI systems that make decisions about navigating in a constrained environment. We show how combining the fast and slow decision modalities allows the system to evolve over time and gradually pass from slow to fast thinking with enough experience, and that this greatly helps in decision (...)
  46. The problem of artificial qualia.Wael Basille - 2021 - Dissertation, Sorbonne Université
    Is it possible to build a conscious machine, an artifact that has qualitative experiences such as feeling pain, seeing the redness of a flower, or enjoying the taste of coffee? What makes such experiences conscious is their phenomenal character: there is something it is like to have such experiences. In contemporary philosophy of mind, the question of the qualitative aspect of conscious experiences is often addressed in terms of qualia. In a pre-theoretical and intuitive sense, qualia refer to the phenomenal character (...)
  47. AI with Alien Content and Alien Metasemantics.Herman Cappelen & Joshua Dever - forthcoming - In Ernest Lepore (ed.), Oxford Handbook of Applied Philosophy of Language. OUP.
  48. There Is No Agency Without Attention.Paul Bello & Will Bridewell - 2017 - AI Magazine 38 (4):27-33.
    For decades, AI researchers have built agents capable of carrying out tasks that require human-level or human-like intelligence. During this time, questions of how these programs compare in kind to humans have surfaced and led to beneficial interdisciplinary discussions, but conceptual progress has been slower than technological progress. Within the past decade, the term agency has taken on new import as intelligent agents have become a noticeable part of our everyday lives. Research on autonomous vehicles and personal assistants (...)
  49. How Robots’ Unintentional Metacommunication Affects Human–Robot Interactions. A Systemic Approach.Piercosma Bisconti - 2021 - Minds and Machines 31 (4):487-504.
    In this paper, we theoretically address the relevance of unintentional and inconsistent interactional elements in human–robot interactions. We argue that elements failing, or poorly succeeding, to reproduce a humanlike interaction create significant consequences in human–robot relational patterns and may affect human–human relations. When considering social interactions as systems, the absence of a precise interactional element produces a general reshaping of the interactional pattern, eventually generating new types of interactional settings. As an instance of this dynamic, we study the absence of (...)
  50. Thinking Fast and Slow in AI: the Role of Metacognition.Marianna Bergamaschi Ganapini - manuscript
    AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. We argue that a better study of the mechanisms that allow humans to have these capabilities can help (...)