Philosophy of Artificial Intelligence

Edited by Eric Dietrich (State University of New York at Binghamton)
Assistant editor: Michelle Thomas (University of Western Ontario)
Contents
2535 found (entries 1–50 shown)
Material to categorize
  1. Sentience, Vulcans, and Zombies: The Value of Phenomenal Consciousness.Joshua Shepherd - forthcoming - AI and Society.
    Many think that a specific aspect of phenomenal consciousness – valenced or affective experience – is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper I consider the prospects for these views. I first consider the prospects (...)
  2. Consciousness Requires Mortal Computation.Johannes Kleiner - manuscript
    All organisms compute, though in vastly different ways. Whereas biological systems carry out mortal computation, contemporary AI systems and all previous general purpose computers carry out immortal computation. Here, we show that if Computational Functionalism holds true, consciousness requires mortal computation. This implies that none of the contemporary AI systems, and no AI system that runs on hardware of the type in use today, can be conscious.
  3. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety.Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - forthcoming - AI and Society:1-6.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  4. Disagreement & classification in comparative cognitive science.Alexandria Boyle - forthcoming - Noûs.
    Comparative cognitive science often involves asking questions like ‘Do nonhumans have C?’ where C is a capacity we take humans to have. These questions frequently generate unproductive disagreements, in which one party affirms and the other denies that nonhumans have the relevant capacity on the basis of the same evidence. I argue that these questions can be productively understood as questions about natural kinds: do nonhuman capacities fall into the same natural kinds as our own? Understanding such questions in this (...)
  5. A Deontic Logic for Programming Rightful Machines: Kant’s Normative Demand for Consistency in the Law.Ava Thomas Wright - 2023 - Logics for Ai and Law: Joint Proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence (Lingai) and the International Workshop on Logic, Ai and Law (Lail).
    In this paper, I set out some basic elements of a deontic logic with an implementation appropriate for handling conflicting legal obligations for purposes of programming autonomous machine agents. Kantian justice demands that the prescriptive system of enforceable public laws be consistent, yet statutes or case holdings may often describe legal obligations that contradict; moreover, even fundamental constitutional rights may come into conflict. I argue that a deontic logic of the law should not try to work around such conflicts but, (...)
  6. On Political Theory and Large Language Models.Emma Rodman - forthcoming - Political Theory.
    Political theory as a discipline has long been skeptical of computational methods. In this paper, I argue that it is time for theory to make a perspectival shift on these methods. Specifically, we should consider integrating recently developed generative large language models like GPT-4 as tools to support our creative work as theorists. Ultimately, I suggest that political theorists should embrace this technology as a method of supporting our capacity for creativity—but that we should do so in a way that (...)
  7. Online consent: how much do we need to know?Bartek Chomanski & Lode Lauwaert - forthcoming - AI and Society.
    This paper argues, against the prevailing view, that consent to privacy policies that regular internet users usually give is largely unproblematic from the moral point of view. To substantiate this claim, we rely on the idea of the right not to know (RNTK), as developed by bioethicists. Defenders of the RNTK in bioethical literature on informed consent claim that patients generally have the right to refuse medically relevant information. In this article we extend the application of the RNTK to online (...)
  8. The Use of Artificial Intelligence (AI) in Qualitative Research for Theory Development.Prokopis A. Christou - 2023 - The Qualitative Report 28 (9):2739-2755.
    Theory development is an important component of academic research since it can lead to the acquisition of new knowledge, the development of a field of study, and the formation of theoretical foundations to explain various phenomena. The contribution of qualitative researchers to theory development and advancement remains significant and highly valued, especially in an era of various epochal shifts and technological innovation in the form of Artificial Intelligence (AI). Even so, the academic community has not yet fully explored the dynamics (...)
  9. Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution?Jesse Lopes - 2023 - Minds and Machines 33 (3):507-524.
    The representations of deep convolutional neural networks (CNNs) are formed from generalizing similarities and abstracting from differences in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine’s “apparatus of identity and quantification” in order to (1) (...)
  10. Developing Artificial Human-Like Arithmetical Intelligence (and Why).Markus Pantsar - 2023 - Minds and Machines 33 (3):379-396.
    Why would we want to develop artificial human-like arithmetical intelligence, when computers already outperform humans in arithmetical calculations? Aside from arithmetic consisting of much more than mere calculations, one suggested reason is that AI research can help us explain the development of human arithmetical cognition. Here I argue that this question needs to be studied already in the context of basic, non-symbolic, numerical cognition. Analyzing recent machine learning research on artificial neural networks, I show how AI studies could potentially shed (...)
  11. The Role of Information in Consciousness.Harry Haroutioun Haladjian - forthcoming - Psychology of Consciousness: Theory, Research, and Practice.
    This article comprehensively examines how information processing relates to attention and consciousness. We argue that no current theoretical framework investigating consciousness has a satisfactory and holistic account of their informational relationship. Our key theoretical contribution is showing how the dissociation between consciousness and attention must be understood in informational terms in order to make the debate scientifically sound. No current theories clarify the difference between attention and consciousness in terms of information. We conclude with two proposals to advance the debate. (...)
  12. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  13. Another AI Article Published in Science Editing.Aisdl Team - 2023 - Ai-Pub Comms.
    AISDL members have contributed more than a dozen published studies related to artificial intelligence (AI); however, the publications produced entirely in-house by the AISDL team, without external collaboration, amount to these three articles.
  14. Transcendence: Measuring Intelligence.Marten Kaas - 2023 - Journal of Science Fiction and Philosophy 6.
    Among the many common criticisms of the Turing test, a valid criticism concerns its scope. Intelligence is a complex and multi-dimensional phenomenon that will require testing using as many different formats as possible. The Turing test continues to be valuable, when interpreted as providing a behavioural test for a certain kind of intelligence, as a source of evidence to support the inductive inference that a machine possesses that kind of intelligence. This paper raises the novel criticism that the (...)
  15. Human-Machine Interaction in Light of Turing and Wittgenstein.Charles Bodon - 2023 - Revue Implications Philosophiques.
    We propose a study of the constitution of meaning in human-machine interaction, starting from the definitions that Turing and Wittgenstein give of thought, understanding, and decision. Through a comparative analysis of the conceptual proximities and differences between the two authors, we aim to show that shared meaning between humans and machines is co-constituted in and through action, and that it is precisely in this co-constitution that the social value of their interaction resides. It will be a matter of (...)
  16. Development and validation of the AI attitude scale (AIAS-4): a brief measure of general attitude toward artificial intelligence.Simone Grassini - 2023 - Frontiers in Psychology 14:1191628.
    The rapid advancement of artificial intelligence (AI) has generated an increasing demand for tools that can assess public attitudes toward AI. This study proposes the development and the validation of the AI Attitude Scale (AIAS), a concise self-report instrument designed to evaluate public perceptions of AI technology. The first version of the AIAS that the present manuscript proposes comprises five items, including one reverse-scored item, which aims to gauge individuals’ beliefs about AI’s influence on their lives, careers, and humanity overall. (...)
  17. Philosophy of AI: A structured overview.Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge: Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so (...)
  18. Reasoning with Concepts: A Unifying Framework.Peter Gärdenfors & Matías Osta-Vélez - 2023 - Minds and Machines 33 (3):451-485.
    Over the past few decades, cognitive science has identified several forms of reasoning that make essential use of conceptual knowledge. Despite significant theoretical and empirical progress, there is still no unified framework for understanding how concepts are used in reasoning. This paper argues that the theory of conceptual spaces is capable of filling this gap. Our strategy is to demonstrate how various inference mechanisms which clearly rely on conceptual information—including similarity, typicality, and diagnosticity-based reasoning—can be modeled using principles derived from (...)
  20. Ilusion of Losing (A Ilusão da Verdade).Victor Mota - manuscript
    Somehow, an illusion can be the path to a surprising truth about yourself.
  21. AI Language Models Cannot Replace Human Research Participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
  22. Qualitative Axioms of Uncertainty as a Foundation for Probability and Decision-Making.Patrick Suppes - 2016 - Minds and Machines 26 (2):185-202.
    Although the concept of uncertainty is as old as Epicurus’s writings, and an excellent quantitative theory, with entropy as the measure of uncertainty having been developed in recent times, there has been little exploration of the qualitative theory. The purpose of the present paper is to give a qualitative axiomatization of uncertainty, in the spirit of the many studies of qualitative comparative probability. The qualitative axioms are fundamentally about the uncertainty of a partition of the probability space of events. Of (...)
  23. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
  24. The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.Carlos Montemayor - 2023
    In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. -/- He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI Montemayor acknowledges its legal, moral, and political implications and (...)
  25. How virtue signalling makes us better: moral preferences with respect to autonomous vehicle type choices.Robin Kopecky, Michaela Jirout Košová, Daniel D. Novotný, Jaroslav Flegr & David Černý - 2023 - AI and Society 38 (2):937-946.
    One of the moral questions concerning autonomous vehicles (henceforth AVs) is the choice between types that differ in their built-in algorithms for dealing with rare situations of unavoidable lethal collision. It does not appear to be possible to avoid questions about how these algorithms should be designed. We present the results of our study of moral preferences (N = 2769) with respect to three types of AVs: (1) selfish, which protects the lives of passenger(s) over any number of bystanders; (2) (...)
  26. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
  27. The AI Ensoulment Hypothesis.Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” a (...)
  28. Asking Chatbase to learn about academic retractions.Aisdl Team - 2023 - Sm3D Science Portal.
    It is noteworthy that Chatbase has the capability to identify notable authors writing about the topic, including the co-founders of Retraction Watch, Ivan Oransky and Adam Marcus.
  29. Chatting with Chatbase over the rationality issue of the cost of science.Aisdl Team - 2023 - Sm3D Science Portal.
    In this article, we present the outcome of our first experiment with Chatbase, a chatbot built on chatGPT’s functioning model(s). Our idea is to try instructing Chatbase to perform a reading, digesting, and summarizing task for a specifically formatted academic document.
  30. Lorenzo Magnani: Discoverability—the urgent need of an ecology of human creativity. [REVIEW]Jeffrey White - 2023 - AI and Society:1-2.
    Discoverability: the urgent need of an ecology of human creativity from the prolific Lorenzo Magnani is worthy of direct attention. The message may be of special interest to philosophers, ethicists and organizing scientists involved in the development of AI and related technologies which are increasingly directed at reinforcing conditions against which Magnani directly warns, namely the “overcomputationalization” of life marked by the gradual encroachment of technologically “locked strategies” into everyday decision-making until “freedom, responsibility, and ownership of our destinies” are ceded (...)
  31. On the Foundations of Computing. Computing as the Fourth Great Domain of Science. [REVIEW]Gordana Dodig-Crnkovic - 2023 - Global Philosophy 33 (1):1-12.
    This review essay analyzes the book by Giuseppe Primiero, On the foundations of computing. Oxford: Oxford University Press (ISBN 978-0-19-883564-6/hbk; 978-0-19-883565-3/pbk). xix, 296 p. (2020). It gives a critical view from the perspective of physical computing as a foundation of computing and argues that the neglected pillar of material computation (Stepney) should be brought centerstage and computing recognized as the fourth great domain of science (Denning).
  32. Deepfakes, Fake Barns, and Knowledge from Videos.Taylor Matthews - 2023 - Synthese 201 (2):1-18.
    Recent developments in AI technology have led to increasingly sophisticated forms of video manipulation. One such form has been the advent of deepfakes. Deepfakes are AI-generated videos that typically depict people doing and saying things they never did. In this paper, I demonstrate that there is a close structural relationship between deepfakes and more traditional fake barn cases in epistemology. Specifically, I argue that deepfakes generate an analogous degree of epistemic risk to that which is found in traditional cases. Given (...)
  33. Deep Learning Opacity in Scientific Discovery.Eamon Duede - forthcoming - Philosophy of Science.
    Philosophers have recently focused on critical, epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet, this is hard to square with the recent boom in optimism for AI in science alongside a flood of recent scientific breakthroughs driven by AI methods. In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a (...)
  34. Est-ce que Vous Compute?Arianna Falbo & Travis LaCroix - 2022 - Feminist Philosophy Quarterly 8 (3).
    Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appearance in response to a perceived change in our social environment. We defend the need to investigate cultural code-switching capacities in artificial intelligence systems. We explore a series of ethical and epistemic issues that arise when bringing cultural code-switching to bear on artificial intelligence. Building upon Dotson’s (2014) analysis of testimonial smothering, we discuss how emerging technologies in AI can give rise to epistemic oppression, and specifically, a (...)
  35. On the responsible subjects of self-driving cars under the sae system: An improvement scheme.Hao Zhan, Dan Wan & Zhiwei Huang - 2020 - In 2020 IEEE International Symposium on Circuits and Systems (ISCAS). Seville, Spain: IEEE. pp. 1-5.
    The issue of how to identify the liability of subjects after a traffic accident takes place remains a puzzle regarding the SAE classification system. The SAE system is not good at dealing with the problem of responsibility evaluation; therefore, building a new classification system for self-driving cars from the perspective of the subject's liability is a possible way to solve this problem. This new system divides automated driving into three levels: i) assisted driving based on the will of drivers, ii) (...)
  36. A pluralist hybrid model for moral AIs.Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degrees A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an (...)
  37. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  38. Moral difference between humans and robots: paternalism and human-relative reason.Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  39. Link Uncertainty, Implementation, and ML Opacity: A Reply to Tamir and Shech.Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 341-345.
    This chapter responds to Michael Tamir and Elay Shech’s chapter “Understanding from Deep Learning Models in Context”.
  40. Artificial Intelligence Systems, Responsibility and Agential Self-Awareness.Lydia Farina - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin, Germany: pp. 15-25.
    This paper investigates the claim that artificial Intelligence Systems cannot be held morally responsible because they do not have an ability for agential self-awareness e.g. they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first person representations presuppose an awareness of a self, the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific account (...)
  41. Models, Algorithms, and the Subjects of Transparency.Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  42. Gul A. Agha, Actors: A Model of Concurrent Computation in Distributed Systems. [REVIEW]Varol Akman - 1990 - AI Magazine 11 (4):92-93.
    This is a review of Gul A. Agha’s Actors: A Model of Concurrent Computation in Distributed Systems (The MIT Press, Cambridge, MA, 1987), a part of the MIT Press Series in Artificial Intelligence, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.
  43. Representing emotions in terms of object directedness.Varol Akman & Hakime G. Unsal - 1994 - Department of Computer Engineering Technical Reports, Bilkent University.
    A logical formalization of emotions is considered to be tricky because they appear to have no strict types, reasons, and consequences. On the other hand, such a formalization is crucial for commonsense reasoning. Here, the so-called "object directedness" of emotions is studied by using Helen Nissenbaum's influential ideas.
  44. Philosophy and Theory of Artificial Intelligence 2021.Vincent C. Müller (ed.) - 2022 - Berlin: Springer.
    This book gathers contributions from the fourth edition of the Conference on "Philosophy and Theory of Artificial Intelligence" (PT-AI), held on 27-28th of September 2021 at Chalmers University of Technology, in Gothenburg, Sweden. It covers topics at the interface between philosophy, cognitive science, ethics and computing. It discusses advanced theories fostering the understanding of human cognition, human autonomy, dignity and morality, and the development of corresponding artificial cognitive structures, analyzing important aspects of the relationship between humans and AI systems, including (...)
  45. The paradox of the artificial intelligence system development process: the use case of corporate wellness programs using smart wearables.Alessandra Angelucci, Ziyue Li, Niya Stoimenova & Stefano Canali - forthcoming - AI and Society:1-11.
    Artificial intelligence systems have been widely applied to various contexts, including high-stake decision processes in healthcare, banking, and judicial systems. Some developed AI models fail to offer a fair output for specific minority groups, sparking comprehensive discussions about AI fairness. We argue that the development of AI systems is marked by a central paradox: the less participation one stakeholder has within the AI system’s life cycle, the more influence they have over the way the system will function. This means that (...)
  46. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability.Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics common to (...)
  47. Editorial: On modes of participation.Ioannis Bardakos, Dalila Honorato, Claudia Jacques, Claudia Westermann & Primavera de Filippi - 2021 - Technoetic Arts 19 (3):221-225.
    In nature, validation for physiological and emotional bonding becomes a mode for supporting social connectivity. Similarly, in the blockchain ecosystem, cryptographic validation becomes the substrate for all interactions. In the dialogue between human and artificial intelligence (AI) agents, between the real and the virtual, one can distinguish threads of physical or mental entanglement allowing different modes of participation. One could even suggest that in all types of realities there exist frameworks that are to some extent equivalent and act as validation (...)
  48. Why consciousness is non-algorithmic, and strong AI cannot come true.G. Hirase - manuscript
    I explain why consciousness is non-algorithmic and why strong AI cannot come true, reinforcing Penrose's argument.
  49. Making Intelligence: Ethics, IQ, and ML Benchmarks.Borhane Blili-Hamelin & Leif Hancox-Li - manuscript
    The ML community recognizes the importance of anticipating and mitigating the potential negative impacts of benchmark research. In this position paper, we argue that more attention needs to be paid to areas of ethical risk that lie at the technical and scientific core of ML benchmarks. We identify overlooked structural similarities between human IQ and ML benchmarks. Human intelligence and ML benchmarks share similarities in setting standards for describing, evaluating and comparing performance on tasks relevant to intelligence. This enables us (...)
  50. Can AI Mind Be Extended?Alice C. Helliwell - 2019 - Evental Aesthetics 8 (1):93-120.
    Andy Clark and David Chalmers’s theory of extended mind can be reevaluated in today’s world to include computational and Artificial Intelligence (AI) technology. This paper argues that AI can be an extension of human mind, and that if we agree that AI can have mind, it too can be extended. It goes on to explore the example of Ganbreeder, an image-making AI which utilizes human input to direct behavior. Ganbreeder represents one way in which AI extended mind could be achieved. (...)