Results for 'strong AI'

961 found
  1. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses. Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir (...)
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
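As a rough picture of what a "consensus-based controlled vocabulary of terms and relations" is, here is a minimal is_a hierarchy sketched in Python; the term names are hypothetical placeholders of my own, not actual SALO content:

```python
# Hypothetical terms for illustration only; real SALO terms differ.
IS_A = {
    "saliva biomarker": "biomarker",
    "biomarker": "molecular entity",
    "salivary protein": "protein",
    "protein": "molecular entity",
}

def ancestors(term):
    """Walk the is_a chain from a term up to its root."""
    chain = []
    while term in IS_A:
        term = IS_A[term]
        chain.append(term)
    return chain

print(ancestors("saliva biomarker"))  # → ['biomarker', 'molecular entity']
```

Real OBO Foundry ontologies add relation types beyond is_a, plus stable identifiers and consensus curation, but the query pattern (walking typed relations between controlled terms) is the same.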
  3. Revised: From Color, to Consciousness, toward Strong AI. Xinyuan Gu - manuscript
    This article cohesively discusses three topics: color and its perception, the yet-to-be-solved hard problem of consciousness, and the theoretical possibility of strong AI. First, the article restores color to the physical world by giving cross-species evidence. Second, the article proposes a dual-field with function Q hypothesis (DFFQ) which might explain the ‘first-person point of view’ and so the hard problem of consciousness. Finally, the article discusses what DFFQ might bring to artificial intelligence and how it might allow (...)
  4. Why consciousness is non-algorithmic, and strong AI cannot come true. G. Hirase - manuscript
    I explain why consciousness is non-algorithmic and why strong AI cannot come true, reinforcing Penrose’s argument.
  5. Consciousness as computation: A defense of strong AI based on quantum-state functionalism. R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences (...)
  6. Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
    16 citations
  7. AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
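The complexity asymmetry described in this abstract (NP solutions are quick to verify but, as far as we know, slow to find) can be sketched with subset-sum; this is my illustration, not code from the chapter:

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check: is the certificate a subset summing to target?
    (Assumes distinct elements, for the simple set-membership test.)"""
    return set(certificate) <= set(nums) and sum(certificate) == target

def solve(nums, target):
    """Brute-force search over all 2^n subsets (exponential time)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 9, 8, 4, 5, 7]
print(verify(nums, 15, [3, 4, 8]))  # fast, even for large inputs
print(solve(nums, 15))              # feasible only for small n
```

The verifier runs in time polynomial in the input size, while the solver visits every subset; whether a polynomial-time solver exists for problems like this is exactly the open P versus NP question the abstract refers to.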
  8. Powerful Qualities, Phenomenal Properties and AI. Ashley Coates - 2023 - In William A. Bauer & Anna Marmodoro (eds.), Artificial Dispositions: Investigating Ethical and Metaphysical Issues. New York: Bloomsbury. pp. 169-192.
    “Strong AI” is the view that it is possible for an artificial agent to be mentally indistinguishable from human agents. Because the behavioral dispositions of artificial agents are determined by underlying dispositional systems, Strong AI seems to entail that human behavioral dispositions are also determined by dispositional systems. It is, however, highly intuitive that non-dispositional, phenomenal properties, such as being in pain, at least partially determine certain human behavioral dispositions, like the disposition to take a pain killer. Consequently, (...) AI seems to conflict with an intuitive view of phenomenal properties’ role in determining human behavioral dispositions. My goal here is not directly to evaluate this tension, but rather to clarify how dispositionalism in the metaphysics of properties bears on it. While a tempting thought is that dispositionalism fits well with Strong AI’s thoroughly dispositional account of human behavior, I argue that this thought does not hold for dispositionalism in general. In particular, I argue that combining a version of the “powerful qualities view” with certain dispositionalist conceptions of the will leads to a version of the intuitive view of phenomenal properties that is radically incompatible with Strong AI. I argue further that this view also raises a challenge for the weaker claim that an artificial agent could be behaviorally indistinguishable from a human agent.
  9. Representation, Analytic Pragmatism and AI. Raffaela Giovagnoli - 2013 - In Gordana Dodig-Crnkovic & Raffaela Giovagnoli (eds.), Computing Nature. pp. 161-169.
    Our contribution aims at individuating a valid philosophical strategy for a fruitful confrontation between human and artificial representation. The ground for this theoretical option resides in the necessity to find a solution that overcomes, on the one side, strong AI (i.e. Haugeland) and, on the other side, the view that rules out AI as explanation of human capacities (i.e. Dreyfus). We try to argue for Analytic Pragmatism (AP) as a valid strategy to present arguments for a form of weak (...)
    1 citation
  10. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
    7 citations
  11. Values in science and AI alignment research. Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical scrutiny (...)
  12. Conversational AI for Psychotherapy and Its Role in the Space of Reason. Jana Sedlakova - 2024 - Cosmos+Taxis 12 (5+6):80-87.
    The recent book by Landgrebe and Smith (2022) offers compelling arguments against the possibility of Artificial General Intelligence (AGI) as well as against the idea that machines have the abilities to master human language, human social interaction and morality. Their arguments leave open, however, a problem on the side of the imaginative power of humans to perceive more than there is and treat AIs as humans and social actors independent of their actual properties and abilities or lack thereof. The mathematical (...)
  13. How AI Trained on the Confucian Analects Can Solve Ethical Dilemmas. Emma So - 2024 - Curieux Academic Journal 1 (42):56-67.
    The influence of AI has spread globally, intriguing both the East and the West. As a result, some Chinese scholars have explored how AI and Chinese philosophy can be examined together, and have offered some unique insights into AI from a Chinese philosophical perspective. Similarly, we investigate how the two fields can be developed in conjunction, focusing on the popular Confucian philosophy. In this work, we use Confucianism as a philosophical foundation to investigate human-technology relations closely, proposing that a Confucian-imbued (...)
  14. The Cognitive Phenomenology Argument for Disembodied AI Consciousness. Cody Turner - 2020 - In Steven S. Gouveia (ed.), The Age of Artificial Intelligence: An Exploration. Vernon Press. pp. 111-132.
    In this chapter I offer two novel arguments for what I call strong primitivism about cognitive phenomenology, the thesis that there exists a phenomenology of cognition that is neither reducible to, nor dependent upon, sensory phenomenology. I then contend that strong primitivism implies that phenomenal consciousness does not require sensory processing. This latter contention has implications for the philosophy of artificial intelligence. For if sensory processing is not a necessary condition for phenomenal consciousness, then it plausibly follows that (...)
  15. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics (...)
  16. The trustworthiness of AI: Comments on Simion and Kelp’s account. Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  17. Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social (...)
    12 citations
  18. Facing Janus: An Explanation of the Motivations and Dangers of AI Development. Aaron Graifman - manuscript
    This paper serves as an intuition-building mechanism for understanding the basics of AI, misalignment, and the reasons why strong AI is being pursued. The approach is to engage with both pro and anti AI development arguments to gain a deeper understanding of both views, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for why we would want to pursue strong AI anyway. The paper delves into (...)
  19. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (2):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent Extrapolated (...)
  20. Is Complexity Important for Philosophy of Mind? Kristina Šekrst & Sandro Skansi - manuscript
    Computational complexity has often been ignored in the philosophy of mind and in philosophical artificial intelligence studies. The purpose of this paper is threefold. First and foremost, to show the importance of complexity rather than computability in philosophical and AI problems. Second, to rephrase the notion of computability in terms of solvability, i.e., treating computability as non-sufficient for establishing intelligence. The Church-Turing thesis is therefore revisited and rephrased in order to capture the ontological background of spatial and temporal complexity. Third, to (...)
  21. Christianity, science, and three phases of being human. Bruce R. Reichenbach - 2021 - Zygon 56 (1):96-117.
    The alleged conflict between religion and science most pointedly focuses on what it is to be human. Western philosophical thought regarding this has progressed through three broad stages: mind/body dualism, Neo-Darwinism, and most recently strong artificial intelligence (AI). I trace these views with respect to their relation to Christian views of humans, suggesting that while the first two might be compatible with Christian thought, strong AI presents serious challenges to a Christian understanding of personhood, including our freedom to (...)
    2 citations
  22. Why computers can't feel pain. John Mark Bishop - 2009 - Minds and Machines 19 (4):507-516.
    The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s monograph Representation and Reality (Bradford Books, Cambridge, 1988), which, if correct, (...)
    11 citations
  23. Artificial Forms of Life. Sebastian Sunday Grève - 2023 - Philosophies 8 (5).
    The logical problem of artificial intelligence—the question of whether the notion sometimes referred to as ‘strong’ AI is self-contradictory—is, essentially, the question of whether an artificial form of life is possible. This question has an immediately paradoxical character, which can be made explicit if we recast it (in terms that would ordinarily seem to be implied by it) as the question of whether an unnatural form of nature is possible. The present paper seeks to explain this paradoxical kind of (...)
    2 citations
  24. Machine learning in bail decisions and judges’ trustworthiness. Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a (...) desideratum of criminal trials, advocates of the relational theory of procedural justice give us good reason to think that fairness and perceived fairness of legal procedures have a value that is independent from the outcome. According to this literature, one key aspect of fairness is trustworthiness. In this paper, I argue that using certain algorithms to assist bail decisions could increase three different aspects of judges’ trustworthiness: (1) actual trustworthiness, (2) rich trustworthiness, and (3) perceived trustworthiness.
    3 citations
  25. (2 other versions) Review of 'Tractatus Logico Philosophicus' by Ludwig Wittgenstein (1922). Michael Starks - 2016 - In Michael Starks (ed.), Suicidal Utopian Delusions in the 21st Century: Philosophy, Human Nature and the Collapse of Civilization -- Articles and Reviews 2006-2017, 2nd Edition, Feb 2018. Las Vegas, USA: Reality Press. pp. 246-258.
    TLP is a remarkable document which continues to seduce some of the best minds in philosophy, with new books and articles dealing partly or entirely with it appearing frequently over a century after it was first conceived. The first thing to note is that W later rejected it entirely for reasons he spent the rest of his life explaining. He was doing philosophy (descriptive psychology) as though the mind was a logical mathematical machine that processed facts, and behavior was the result. (...)
  26. Digital Me Ontology and Ethics. Ljupco Kocarev & Jasna Koteska - manuscript
    This paper (dated 21 December 2020) addresses the ontology and ethics of an AI agent called the digital me. We define the digital me as an autonomous, decision-making, and learning agent, representing an individual and having a practically immortal own life. It is assumed that the digital me is equipped with the big-five personality model, ensuring that it provides a model of some aspects of a strong AI: consciousness, free will, and intentionality. As computer-based (...)
  27. (1 other version) Máquinas sin engranajes y cuerpos sin mentes. ¿Cuán dualista es el funcionalismo de máquina de Turing? Rodrigo González - 2011 - Revista de Filosofía 67:183-200.
    In this paper I examine how Turing Machine Functionalism turns out to be compatible with a form of dualism, which distances classical or strong AI from the materialism that originally inspired it in the nineteenth century. To support this thesis, I argue that there is indeed a notable closeness between Cartesian thought and this functionalism, since the former affirms that it is conceivable/possible to separate mind and body, while the latter holds that it is not strictly necessary that mental states (...)
  28. The Hermeneutics of Artificial Intelligence. Joshua D. F. Hooke & Sean J. McGrath (eds.) - 2023 - Analecta Hermeneutica.
    The papers in the following volume are the outcome of a three-year long interdisciplinary research project. The project began with an in-person meeting hosted and funded by the Daimler und Benz Stiftung in Germany in March 2020 (the world was shutting down one nation at a time as we met). During the pandemic we continued to meet monthly online with support from Memorial University of Newfoundland. From the beginning it was the goal of the Working Group on Intelligence (WGI), as (...)
  29. Morphing Intelligence: From IQ Measurement to Artificial Brains. [REVIEW] Ekin Erkan - 2020 - Chiasma 6 (1):248-260.
    In her seminal text, What Should We Do With Our Brain? (2008), Catherine Malabou gestured towards neuroplasticity to upend Bergson's famous parallel of the brain as a "central telephonic exchange," whereby the function of the brain is simply that of a node where perceptions get in touch with motor mechanisms, the brain as an instrument limited to the transmission and divisions of movements. Drawing from the history of cybernetics one can trace how Bergson's 'telephonic exchange' prefigures the neural 'cybernetic metaphor.' (...)
  30. (1 other version) The CNS-independent consciousness system: the model system of all nature and the framework of all sciences. Jin Ma -
    This paper presents the unification of all knowledge and the framework of all sciences, along with a theory of consciousness, a method to measure consciousness, and the three keys of strong AI. “Logicality and non-absoluteness” is found to be the intrinsic character of nature, so the “Fundamental Law of Nature” is discovered. Then, the “general methodology of research” and the “model system of nature” are developed to explain everything, especially consciousness. The Coupling Theory of Consciousness tells that (...)
  31. Playing the Blame Game with Robots. Markus Kneer & Michael T. Stuart - 2021 - In Markus Kneer & Michael T. Stuart (eds.), Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI’21 Companion). New York, NY, USA:
    Recent research shows – somewhat astonishingly – that people are willing to ascribe moral blame to AI-driven systems when they cause harm [1]–[4]. In this paper, we explore the moral-psychological underpinnings of these findings. Our hypothesis was that the reason why people ascribe moral blame to AI systems is that they consider them capable of entertaining inculpating mental states (what is called mens rea in the law). To explore this hypothesis, we created a scenario in which an AI system (...)
    7 citations
  32. Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe & Barry Smith (Book review). [REVIEW] Walid S. Saba - 2022 - Journal of Knowledge Structures and Systems 3 (4):38-41.
    Whether it was John Searle’s Chinese Room argument (Searle, 1980) or Roger Penrose’s argument for the non-computable nature of a mathematician’s insight – an argument based on Gödel’s Incompleteness theorem (Penrose, 1989) – we have always had skeptics who questioned the possibility of realizing strong Artificial Intelligence (AI), or what has become known as Artificial General Intelligence (AGI). But this new book by Landgrebe and Smith (henceforth, L&S) is perhaps the strongest argument ever made against strong AI. (...)
  33. Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace of progress in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. In Section 1 of the paper I (...)
    1 citation
  34. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents can take (...)
  35. Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence. Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits makes for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition (...)
    1 citation
  36. Publishing Robots. Nick Hadsell, Rich Eva & Kyle Huitt - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    If AI can write an excellent philosophy paper, we argue that philosophy journals should strongly consider publishing that paper. After all, AI stands to make significant contributions to ongoing projects in some subfields, and it benefits the world of philosophy for those contributions to be published in journals, the primary purpose of which is to disseminate significant contributions to philosophy. We also propose the Sponsorship Model of AI journal refereeing to mitigate any costs associated with our view. This model requires (...)
  37. Hijacking Epistemic Agency - How Emerging Technologies Threaten our Wellbeing as Knowers. John Dorsch - 2022 - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society 1.
    The aim of this project is to expose the reasons behind the pandemic of misinformation (henceforth, PofM) by examining the enabling conditions of epistemic agency and the emerging technologies that threaten it. I plan to research the emotional origin of epistemic agency, i.e., the origin of our capacity to acquire justification for belief, as well as the significance this emotional origin has for our lives as epistemic agents in our so-called Misinformation Age. This project has three objectives. First, I (...)
  38. It's just about Time. Rowan Grigg - manuscript
    Presented is a hypothetical model of reality that is consistent with the observational data incompletely addressed by existing models such as general relativity and quantum theory, including non-locality and the accelerating expansion of the universe. The model further suggests a theory of consciousness in which a physical mechanism accounts for interactions with remote agents that were previously categorized as 'spiritual'. I explore the wider implications of this model.
  39. Does ChatGPT have semantic understanding? Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
    3 citations
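The "statistical information about word and phrase co-occurrence" that this abstract mentions can be pictured with a toy sketch of my own (far simpler than the deep-net models the paper discusses): represent a word by counts of the words appearing near it.

```python
from collections import Counter, defaultdict

def cooccurrence(tokens, window=2):
    """Count, for each word, the words appearing within +/- window positions."""
    counts = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[word][tokens[j]] += 1
    return counts

tokens = "the cat sat on the mat the dog sat on the rug".split()
vecs = cooccurrence(tokens)
print(vecs["sat"].most_common(2))  # → [('the', 4), ('on', 2)]
```

Models in the statistics-of-occurrence family treat such co-occurrence profiles (vastly refined, and learned rather than counted) as stand-ins for meaning; the paper's question is whether anything built this way thereby understands.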
  40. A feeling for the algorithm: Diversity, expertise and artificial intelligence. Catherine Stinson & Sofie Vlaad - 2024 - Big Data and Society 11 (1).
    Diversity is often announced as a solution to ethical problems in artificial intelligence (AI), but what exactly is meant by diversity and how it can solve those problems is seldom spelled out. This lack of clarity is one hurdle to motivating diversity in AI. Another hurdle is that while the most common perceptions about what diversity is are too weak to do the work set out for them, stronger notions of diversity are often defended on normative grounds that fail to (...)
  41. Algorithmic Political Bias in Artificial Intelligence Systems. Uwe Peters - 2022 - Philosophy and Technology 35 (2):1-23.
    Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in (...)
    6 citations
  42. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  43. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive (...)
    12 citations
  44. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as (...)
    1 citation
  45. Against the global replacement: On the application of the philosophy of artificial intelligence to artificial life.Brian L. Keeley - 1994 - In C.G. Langton (ed.), Artificial Life III: Proceedings of the Workshop on Artificial Life. Reading, Mass: Addison-Wesley.
    This paper is a complement to the recent wealth of literature suggesting a strong philosophical relationship between artificial life (A-Life) and artificial intelligence (AI). I seek to point out where this analogy seems to break down, or where it would lead us to draw incorrect conclusions about the philosophical situation of A-Life. First, I sketch a thought experiment (based on the work of Tom Ray) that suggests how a certain subset of A-Life experiments should be evaluated. In doing so, (...)
    1 citation
  46. The Problem of Evil in Virtual Worlds.Brendan Shea - 2017 - In Mark Silcox (ed.), Experience Machines: The Philosophy of Virtual Worlds. London: Rowman & Littlefield. pp. 137-155.
    In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in (...)
    3 citations
  47. Simulative reasoning, common-sense psychology and artificial intelligence.John A. Barnden - 1995 - In Martin Davies & Tony Stone (eds.), Mental Simulation: Evaluations and Applications. Blackwell. pp. 247--273.
    The notion of Simulative Reasoning in the study of propositional attitudes within Artificial Intelligence (AI) is strongly related to the Simulation Theory of mental ascription in Philosophy. Roughly speaking, when an AI system engages in Simulative Reasoning about a target agent, it reasons with that agent’s beliefs as temporary hypotheses of its own, thereby coming to conclusions about what the agent might conclude or might have concluded. The contrast is with non-simulative meta-reasoning, where the AI system reasons within a detailed (...)
    3 citations
  48. HeX and the single anthill: playing games with Aunt Hillary.J. M. Bishop, S. J. Nasuto, T. Tanay, E. B. Roesch & M. C. Spencer - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 367-389.
    In a reflective and richly entertaining piece from 1979, Doug Hofstadter playfully imagined a conversation between ‘Achilles’ and an anthill (the eponymous ‘Aunt Hillary’), in which he famously explored many ideas and themes related to cognition and consciousness. For Hofstadter, the anthill is able to carry on a conversation because the ants that compose it play roughly the same role that neurons play in human languaging; unfortunately, Hofstadter’s work is notably short on detail suggesting how this magic might be achieved. (...)
  49. (1 other version) Institutional Trust in Medicine in the Age of Artificial Intelligence.Michał Klincewicz - 2023 - In David Collins, Iris Vidmar Jovanović, Mark Alfano & Hale Demir-Doğuoğlu (eds.), The Moral Psychology of Trust. Lexington Books.
    It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and comes to interact with other psychological states in a variety of ways. The way that the functional role (...)
  50. Guest editor's introduction: artificial intelligence.Varol Akman - 2001 - Turkish Journal of Electrical Engineering and Computer Sciences 9 (1).
    Founded in 1993, ELEKTRIK: Turkish Journal of Electrical Engineering and Computer Sciences has gradually become better known and is fast establishing itself as a research-oriented publication outlet with high academic standards. In a modest attempt to advance this trend, this special issue of ELEKTRIK brings together five papers exemplifying the state of the art in artificial intelligence (AI). Written by experts, the papers are especially aimed at readers interested in gaining a better appraisal of the applications side of the (...)
1 — 50 / 961