Contents
Material to categorize (entries 1–50 of 70)
  1. LLMs and practical knowledge: What is intelligence? Barry Smith - 2024 - In Kristof Nyiri (ed.), Electrifying the Future, 11th Budapest Visual Learning Conference. Budapest: Hungarian Academy of Sciences. pp. 19-26.
  2. As máquinas podem cuidar? [Can machines care?] E. M. Carvalho - 2024 - O Que Nos Faz Pensar 31 (53):6-24.
    Applications and devices of artificial intelligence are increasingly common in the healthcare field. Robots fulfilling some caregiving functions are no longer a distant prospect. In this scenario, we must ask ourselves whether machines can care to the extent of completely replacing human care and whether such replacement, if possible, is desirable. In this paper, I argue that caregiving requires know-how permeated by affectivity that is far from being achieved by currently available machines. I also maintain that the (...)
  3. The value of testimonial-based beliefs in the face of AI-generated quasi-testimony. Felipe Alejandro Álvarez Osorio & Ruth Marcela Espinosa Sarmiento - 2024 - Aufklärung 11 (Especial):25-38.
    The value of testimony as a source of knowledge has been a subject of epistemological debates. The "trust theory of testimony" suggests that human testimony is based on an affective relationship supported by social norms. However, the advent of generative artificial intelligence challenges our understanding of genuine testimony. The concept of "quasi-testimony" seeks to characterize utterances produced by non-human entities that mimic testimony but lack certain fundamental attributes. This article analyzes these issues in depth, exploring philosophical perspectives on testimony and (...)
  4. Emotional Cues and Misplaced Trust in Artificial Agents. Joseph Masotti - forthcoming - In Henry Shevlin (ed.), AI in Society: Relationships (Oxford Intersections). Oxford University Press.
    This paper argues that the emotional cues exhibited by AI systems designed for social interaction may lead human users to hold misplaced trust in such AI systems, and this poses a substantial problem for human-AI relationships. It begins by discussing the communicative role of certain emotions relevant to perceived trustworthiness. Since displaying such emotions is a reliable indicator of trustworthiness in humans, we use such emotions to assess agents’ trustworthiness according to certain generalizations of folk psychology. Our tendency to engage (...)
  5. Artificial Intelligence, Creativity, and the Precarity of Human Connection. Lindsay Brainard - forthcoming - Oxford Intersections: AI in Society.
    There is an underappreciated respect in which the widespread availability of generative artificial intelligence (AI) models poses a threat to human connection. My central contention is that human creativity is especially capable of helping us connect to others in a valuable way, but the widespread availability of generative AI models reduces our incentives to engage in various sorts of creative work in the arts and sciences. I argue that creative endeavors must be motivated by curiosity, and so they must disclose (...)
  6. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  7. Will AI and Humanity Go to War? Simon Goldstein - manuscript
    This paper offers the first careful analysis of the possibility that AI and humanity will go to war. The paper focuses on the case of artificial general intelligence, AI with broadly human capabilities. The paper uses a bargaining model of war to apply standard causes of war to the special case of AI/human conflict. The paper argues that information failures and commitment problems are especially likely in AI/human conflict. Information failures would be driven by the difficulty of measuring AI capabilities, (...)
  8. From Past to Present: A study of AI-driven gamification in heritage education. Sepehr Vaez Afshar, Sarvin Eshaghi, Mahyar Hadighi & Guzden Varinlioglu - 2024 - 42nd Conference on Education and Research in Computer Aided Architectural Design in Europe: Data-Driven Intelligence 2:249-258.
    The use of Artificial Intelligence (AI) in educational gamification marks a significant advancement, transforming traditional learning methods by offering interactive, adaptive, and personalized content. This approach makes historical content more relatable and promotes active learning and exploration. This research presents an innovative approach to heritage education, combining AI and gamification, explicitly targeting the Silk Roads. It represents a significant progression in a series of research, transitioning from basic 2D textual interactions to a 3D environment using photogrammetry, combining historical authenticity and (...)
  9. Interpretable and accurate prediction models for metagenomics data. Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological insight (...)
  10. Beyond the AI Divide: Towards an Inclusive Future Free from AI Caste Systems and AI Dalits. Yu Chen - manuscript
    In the rapidly evolving landscape of artificial intelligence (AI), disparities in access and benefits are becoming increasingly apparent, leading to the emergence of an AI divide. This divide not only amplifies existing socio-economic inequalities but also fosters the creation of AI caste systems, where marginalized groups—referred to as AI Dalits—are systematically excluded from AI advancements. This article explores the definitions and contributing factors of the AI divide and delves into the concept of AI caste systems, illustrating how they perpetuate inequality. (...)
  11. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots is the large language model (LLM), a generative artificial intelligence (AI) system trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  12. Self-Absorption in the Digital Era: A Review of "Self-Improvement Technologies of the Soul in the Age of Artificial Intelligence" by Mark Coeckelbergh. [REVIEW] James J. Hughes - 2024 - Journal of Ethics and Emerging Technologies 33 (1).
    Mark Coeckelbergh is a Belgian philosopher who specializes in the philosophy of technology. His work primarily explores the intersection of technology and society, specifically the philosophical implications of emerging technologies such as AI and robotics. He has written on whether machines can be moral agents and how ethical frameworks should be applied to autonomous machines. He has a broad philosophical perspective drawing on classical sources, Eastern philosophy, Marxism, Foucault, phenomenology, and the postmodernists. In this short text, he brings his remarkable (...)
  13. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
  14. A Risk-Based Regulatory Approach to Autonomous Weapon Systems. Alexander Blanchard, Claudio Novelli, Luciano Floridi & Mariarosaria Taddeo - manuscript
    International regulation of autonomous weapon systems (AWS) is increasingly conceived as an exercise in risk management. This requires a shared approach for assessing the risks of AWS. This paper presents a structured approach to risk assessment and regulation for AWS, adapting a qualitative framework inspired by the Intergovernmental Panel on Climate Change (IPCC). It examines the interactions among key risk factors—determinants, drivers, and types—to evaluate the risk magnitude of AWS and establish risk tolerance thresholds through a risk matrix informed by (...)
  15. Artificial Intelligence for the Internal Democracy of Political Parties. Claudio Novelli, Giuliano Formisano, Prathm Juneja, Giulia Sandri & Luciano Floridi - 2024 - Minds and Machines 34 (36):1-26.
    The article argues that AI can enhance the measurement and implementation of democratic processes within political parties, known as Intra-Party Democracy (IPD). It identifies the limitations of traditional methods for measuring IPD, which often rely on formal parameters, self-reported data, and tools like surveys. Such limitations lead to partial data collection, rare updates, and significant resource demands. To address these issues, the article suggests that specific data management and Machine Learning techniques, such as natural language processing and sentiment analysis, can (...)
  16. (1 other version) Giới thiệu về năm tiền đề của tương tác giữa người và máy trong kỉ nguyên trí tuệ nhân tạo [An introduction to the five premises of human-machine interaction in the age of artificial intelligence]. Manh-Tung Ho & T. Hong-Kong Nguyen - manuscript
    This article introduces these five premises with the aim of raising awareness of the relationship between humans and machines in a context where technology increasingly reshapes everyday life. The five premises concern: social, cultural, political, and historical structures; human autonomy and freedom; the philosophical and humanistic foundations of humanity; the (...)
  17. A narrative review of the active ingredients in psychotherapy delivered by conversational agents. Arthur Herbener, Michal Klincewicz & Malene Flensborg Damholdt - 2024 - Computers in Human Behavior Reports 14.
    The present narrative review seeks to unravel where we are now, and where we need to go to delineate the active ingredients in psychotherapy delivered by conversational agents (e.g., chatbots). While psychotherapy delivered by conversational agents has shown promising effectiveness for depression, anxiety, and psychological distress across several randomized controlled trials, little emphasis has been placed on the therapeutic processes in these interventions. The theoretical framework of this narrative review is grounded in prominent perspectives on the active ingredients in psychotherapy. (...)
  18. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
  19. Artificial Psychology. Jay Friedenberg - 2008 - Psychology Press.
    What does it mean to be human? Philosophers and theologians have been wrestling with this question for centuries. Recent advances in cognition, neuroscience, artificial intelligence and robotics have yielded insights that bring us even closer to an answer. There are now computer programs that can accurately recognize faces, engage in conversation, and even compose music. There are also robots that can walk up a flight of stairs, work cooperatively with each other and express emotion. If machines can do everything we (...)
  20. Adopting trust as an ex post approach to privacy. Haleh Asgarinia - 2024 - AI and Ethics 3 (4).
    This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the person who is the trustee (i.e., with whom information has been shared) must fulfil trust norms. According to (...)
  21. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  22. Escape climate apathy by harnessing the power of generative AI. Quan-Hoang Vuong & Manh-Tung Ho - 2024 - AI and Society 39:1-2.
    “Throw away anything that sounds too complicated. Only keep what is simple to grasp...If the information appears fuzzy and causes the brain to implode after two sentences, toss it away and stop listening. Doing so will make the news as orderly and simple to understand as the truth.” - In “GHG emissions,” The Kingfisher Story Collection, (Vuong 2022a).
  23. Beat the Simulation and Seize Control of Your Life. Julian Friedland & Kristian Myrseth - 2023 - Psychology Today 12 (26).
    The simulation hypothesis can reinforce a cynical dismissal of human potential. This attitude can allow online platform designers to rationalize employing manipulative neuromarketing techniques to control user decisions. We point to cognitive boosting techniques at both user and designer levels to build critical reflection and mindfulness.
  24. The Challenges of Artificial Judicial Decision-Making for Liberal Democracy. Christoph Winter - 2022 - In P. Bystranowski, Bartosz Janik & M. Prochnicki (eds.), Judicial Decision-Making: Integrating Empirical and Theoretical Perspectives. Springer Nature. pp. 179-204.
    The application of artificial intelligence (AI) to judicial decision-making has already begun in many jurisdictions around the world. While AI seems to promise greater fairness, access to justice, and legal certainty, issues of discrimination and transparency have emerged and put liberal democratic principles under pressure, most notably in the context of bail decisions. Despite this, there has been no systematic analysis of the risks to liberal democratic values from implementing AI into judicial decision-making. This article sets out to fill this (...)
  25. Prosthetic Godhood and Lacan’s Alethosphere: The Psychoanalytic Significance of the Interplay of Randomness and Structure in Generative Art. Rayan Magon - 2023 - 26th Generative Art Conference.
    Psychoanalysis, particularly as articulated by figures like Freud and Lacan, highlights the inherent division within the human subject—a schism between the conscious and unconscious mind. It could be said that this suggests that such an internal division becomes amplified in the context of generative art, where technology and algorithms are used to generate artistic expressions that are meant to emerge from the depths of the unconscious. Here, we encounter the tension between the conscious artist and the generative process itself, which (...)
  26. Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
  27. Operationalising Representation in Natural Language Processing. Jacqueline Harding - 2023 - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
  28. The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
  29. Diffusing the Creator: Attributing Credit for Generative AI Outputs. Donal Khosrowi, Finola Finn & Elinor Clark - 2023 - AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society.
    The recent wave of generative AI (GAI) systems like Stable Diffusion that can produce images from human prompts raises controversial issues about creatorship, originality, creativity and copyright. This paper focuses on creatorship: who creates and should be credited with the outputs made with the help of GAI? Existing views on creatorship are mixed: some insist that GAI systems are mere tools, and human prompters are creators proper; others are more open to acknowledging more significant roles for GAI, but most conceive (...)
  30. Quantum Intrinsic Curiosity Algorithms. Shanna Dobson & Julian Scaff - manuscript
    We propose a quantum curiosity algorithm as a means to implement quantum thinking into AI, and we illustrate 5 new quantum curiosity types. We then introduce 6 new hybrid quantum curiosity types combining animal and plant curiosity elements with biomimicry beyond human sensing. We then introduce 4 specialized quantum curiosity types, which incorporate quantum thinking into coding frameworks to radically transform problem-solving and discovery in science, medicine, and systems analysis. We conclude with a forecasting of the future of quantum thinking (...)
  31. The Icon and the Idol: A Christian Perspective on Sociable Robots. Jordan Joseph Wales - 2023 - In Jens Zimmermann (ed.), Human Flourishing in a Technological World: A Theological Perspective. Oxford University Press. pp. 94-115.
    Consulting early and medieval Christian thinkers, I theologically analyze the question of how we are to construe and live well with the sociable robot under the ancient theological concept of “glory”—the manifestation of God’s nature and life outside of himself. First, the oft-noted Western wariness toward robots may in part be rooted in protecting a certain idea of the “person” as a relational subject capable of self-gift. Historically, this understanding of the person derived from Christian belief in God the Trinity, (...)
  32. Evaluation and Design of Generalist Systems (EDGeS). John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  33. Sono solo parole ChatGPT: anatomia e raccomandazioni per l’uso [They are just words, ChatGPT: anatomy and recommendations for use]. Tommaso Caselli, Antonio Lieto, Malvina Nissim & Viviana Patti - 2023 - Sistemi Intelligenti 4:1-10.
  34. Waiting for a digital therapist: three challenges on the path to psychotherapy delivered by artificial intelligence. J. P. Grodniewicz & Mateusz Hohol - 2023 - Frontiers in Psychiatry 14 (1190084):1-12.
    Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, triggers discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement to human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on our way to developing CAI (...)
  35. Are Large Language Models "alive"? Francesco Maria De Collibus - manuscript
    The appearance of openly accessible artificial intelligence applications such as large language models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced that they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of power and (...)
  36. Therapeutic Conversational Artificial Intelligence and the Acquisition of Self-understanding. J. P. Grodniewicz & Mateusz Hohol - 2023 - American Journal of Bioethics 23 (5):59-61.
    In their thought-provoking article, Sedlakova and Trachsel (2023) defend the view that the status—both epistemic and ethical—of Conversational Artificial Intelligence (CAI) used in psychotherapy is complicated. While therapeutic CAI seems to be more than a mere tool implementing particular therapeutic techniques, it falls short of being a “digital therapist.” One of the main arguments supporting the latter claim is that even though “the interaction with CAI happens in the course of conversation… the conversation is profoundly different from a conversation with (...)
  37. (39 other versions) العقل كبرمجيات حاسوبية [The mind as computer software]. Salah Osman - manuscript
    The computational theory of mind (or computationalism) tells us that our minds work like computers: they receive inputs from the external world and then, by means of algorithms, produce outputs in the form of mental states or actions. In other words, the theory holds that the brain is nothing more than an information processor, the mind being “software” running on the “hardware” of the brain. And if the mind is merely software, physically computed by brains, is it not then logically possible to transfer it to any computer, just as we transfer any software (...)
  38. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests. Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  39. Where there’s no will, there’s no way. Alex Thomson, Jobst Landgrebe & Barry Smith - 2023 - UKColumn.
    An interview by Alex Thomson of UKColumn on Landgrebe and Smith's book Why Machines Will Never Rule the World. The subtitle of the book is Artificial Intelligence Without Fear, and the interview begins with the question of the supposedly imminent takeover of one profession or another by artificial intelligence. Is there truly reason to be afraid that you will lose your job? The interview itself is titled 'Where there’s no will, there’s no way', drawing on one thesis (...)
  40. More Human Than All Too Human: Challenges in Machine Ethics for Humanity Becoming a Spacefaring Civilization. Guy Pierre Du Plessis - 2023 - Qeios.
    It is indubitable that machines with artificial intelligence (AI) will be an essential component in humans’ quest to become a spacefaring civilization. Most would agree that long-distance space travel and the colonization of Mars will not be possible without adequately developed AI. Machines with AI have a normative function, but some argue that it can also be evaluated from the perspective of ethical norms. This essay is based on the assumption that machine ethics is an essential philosophical perspective in realizing (...)
  41. Chatbots shouldn’t use emojis. Carissa Véliz - 2023 - Nature 615:375.
    Limits need to be set on AI’s ability to simulate human feelings. Ensuring that chatbots don’t use emotive language, including emojis, would be a good start. Emojis are particularly manipulative. Humans instinctively respond to shapes that look like faces — even cartoonish or schematic ones — and emojis can induce these reactions.
  42. On human centered artificial intelligence. [REVIEW] Gloria Andrada - 2023 - Metascience.
  43. Artificial Knowing Otherwise. Os Keyes & Kathleen Creel - 2022 - Feminist Philosophy Quarterly 8 (3).
    While feminist critiques of AI are increasingly common in the scholarly literature, they are by no means new. Alison Adam’s Artificial Knowing (1998) brought a feminist social and epistemological stance to the analysis of AI, critiquing the symbolic AI systems of her day and proposing constructive alternatives. In this paper, we seek to revisit and renew Adam’s arguments and methodology, exploring their resonances with current feminist concerns and their relevance to contemporary machine learning. Like Adam, we ask how new AI (...)
    1 citation
  44. Artificial Intelligence, Robots, and Philosophy.Masahiro Morioka, Shin-Ichiro Inaba, Makoto Kureha, István Zoltán Zárdai, Minao Kukita, Shimpei Okamoto, Yuko Murakami & Rossa Ó Muireartaigh - 2023 - Journal of Philosophy of Life.
    This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and completed (...)
  45. Techno-animism and the Pygmalion effect.Emanuele Arielli & Lev Manovich - forthcoming - http://manovich.net/index.php/projects/artificial-aesthetics.
    Chapter 3 of the ongoing publication "Artificial Aesthetics" Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. -/- You may be wondering how AI will affect your professional area in general and your work and career. Our book (...)
  46. “Even an AI could do that”.Emanuele Arielli - forthcoming - http://manovich.net/index.php/projects/artificial-aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design", Lev Manovich and Emanuele Arielli -/- Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. -/- You may be wondering how (...)
  47. Algorithmic Political Bias Can Reduce Political Polarization.Uwe Peters - 2022 - Philosophy and Technology 35 (3):1-7.
    Does algorithmic political bias contribute to an entrenchment and polarization of political positions? Franke argues that it may do so because the bias involves classifications of people as liberals, conservatives, etc., and individuals often conform to the ways in which they are classified. I provide a novel example of this phenomenon in human–computer interactions and introduce a social psychological mechanism that has been overlooked in this context but should be experimentally explored. Furthermore, while Franke proposes that algorithmic political classifications entrench (...)
  48. (1 other version)Information Deprivation and Democratic Engagement.Adrian K. Yee - 2023 - Philosophy of Science 90 (5).
    There remains no consensus among social scientists as to how to measure and understand forms of information deprivation such as misinformation. Machine learning and statistical analyses of information deprivation typically contain problematic operationalizations which are too often biased towards epistemic elites' conceptions, a bias that can undermine their empirical adequacy. A mature science of information deprivation should include considerable citizen involvement that is sensitive to the value-ladenness of information quality; doing so may improve the predictive and explanatory power of extant (...)
    2 citations
  49. Artificial Intelligence: A Promising Future?Nancy Salay & Selim Akl - 2019 - Queen's Quarterly 126 (1):6-19.
  50. Kantian Notion of freedom and Autonomy of Artificial Agency.Manas Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom (especially the problem of the third antinomy and its resolution in the Critique of Pure Reason), its significance in the contemporary debate on free will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, it invites an explanatory gap (...)