Results for 'AI Future'

957 results found
  1. The Future of AI: Stanisław Lem’s Philosophical Visions for AI and Cyber-Societies in Cyberiad. Roman Krzanowski & Pawel Polak - 2021 - Pro-Fil 22 (3):39-53.
    Looking into the future is always a risky endeavour, but one way to anticipate the possible future shape of AI-driven societies is to examine the visionary works of some sci-fi writers. Not all sci-fi works have such visionary quality, of course, but some of Stanisław Lem’s works certainly do. We refer here to Lem’s works that explore the frontiers of science and technology and those that describe imaginary societies of robots. We therefore examine Lem’s prose, with a focus (...)
  2. The Evolution of AI in Autonomous Systems: Innovations, Challenges, and Future Prospects. Ashraf M. H. Taha, Zakaria K. D. Alkayyali, Qasem M. M. Zarandah, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Engineering Research (IJAER) 8 (10):1-7.
    Abstract: The rapid advancement of artificial intelligence (AI) has catalyzed significant developments in autonomous systems, which are increasingly shaping diverse sectors including transportation, robotics, and industrial automation. This paper explores the evolution of AI technologies that underpin these autonomous systems, focusing on their capabilities, applications, and the challenges they present. Key areas of discussion include the technological innovations driving autonomy, such as machine learning algorithms and sensor integration, and the practical implementations observed in autonomous vehicles, drones, and robotic systems. Additionally, (...)
  3. Can AI Help Us to Understand Belief? Sources, Advances, Limits, and Future Directions. Andrea Vestrucci, Sara Lumbreras & Lluis Oviedo - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):24-33.
    The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what (...)
    1 citation
  4. EI & AI in Leadership and How It Can Affect Future Leaders. Ramakrishnan Vivek & Oleksandr P. Krupskyi - 2024 - European Journal of Management Issues 32 (3):174-182.
    Purpose: The aim of this study is to examine how the integration of Emotional Intelligence (EI) and Artificial Intelligence (AI) in leadership can enhance leadership effectiveness and influence the development of future leaders.
    Design / Method / Approach: The research employs a mixed-methods approach, combining qualitative and quantitative analyses. The study utilizes secondary data sources, including scholarly articles, industry reports, and empirical studies, to analyze the interaction between EI and AI in leadership settings.
    Findings: The findings reveal (...)
  5. AI Sovereignty: Navigating the Future of International AI Governance. Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  6. The future won’t be pretty: The nature and value of ugly, AI-designed experiments. Michael T. Stuart - 2023 - In Milena Ivanova & Alice Murphy (eds.), The Aesthetics of Scientific Experiments. New York, NY: Routledge.
    Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude (...)
  7. AI’s New Promise: Our Posthuman Future. Diane Proudfoot & Jack Copeland - 2012 - The Philosophers' Magazine 57:73-78.
    1 citation
  8. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  9. Is there a future for AI without representation? Vincent C. Müller - 2007 - Minds and Machines 17 (1):101-115.
    This paper investigates the prospects of Rodney Brooks’ proposal for AI without representation. It turns out that the supposedly characteristic features of “new AI” (embodiment, situatedness, absence of reasoning, and absence of representation) are all present in conventional systems: “New AI” is just like old AI. Brooks’ proposal boils down to the architectural rejection of central control in intelligent agents—which, however, turns out to be crucial. Some more recent cognitive science suggests that we might do well to dispose of (...)
    9 citations
  10. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is (...)
  11. Philosophy and the Future of AI. R. L. Tripathi - 2024 - Open Access Journal of Data Science and Artificial Intelligence 2 (1):2.
    The article “Philosophy is crucial in the age of AI” by Anthony Grayling and Brian Ball explores the significant role philosophy has played in the development of Artificial Intelligence (AI) and its continuing relevance in guiding the future of AI technologies. The authors trace the historical contributions of philosophers and logicians, such as Gottlob Frege, Kurt Gödel, and Alan Turing, in shaping the foundational principles of AI. They argue that philosophical inquiry remains essential, especially in addressing complex issues like (...)
  12. Beyond the AI Divide: Towards an Inclusive Future Free from AI Caste Systems and AI Dalits. Yu Chen - manuscript
    In the rapidly evolving landscape of artificial intelligence (AI), disparities in access and benefits are becoming increasingly apparent, leading to the emergence of an AI divide. This divide not only amplifies existing socio-economic inequalities but also fosters the creation of AI caste systems, where marginalized groups—referred to as AI Dalits—are systematically excluded from AI advancements. This article explores the definitions and contributing factors of the AI divide and delves into the concept of AI caste systems, illustrating how they perpetuate inequality. (...)
  13. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  14. Assessing the future plausibility of catastrophically dangerous AI. Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in the second half of the 21st century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
  15. Artificial Intelligence and the Body: Dreyfus, Bickhard, and the Future of AI. Daniel Susser - 2013 - In Vincent Müller (ed.), Philosophy and Theory of Artificial Intelligence. Springer. pp. 277-287.
    For those who find Dreyfus’s critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes, what are the prospects for producing artificial non-human intelligence? Applying Dreyfus’s work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus’s work into conversation with the work of (...)
    2 citations
  16. High hopes for “Deep Medicine”? AI, economics, and the future of care. Robert Sparrow & Joshua Hatherley - 2020 - Hastings Center Report 50 (1):14-17.
    In Deep Medicine, Eric Topol argues that the development of artificial intelligence (AI) for healthcare will lead to a dramatic shift in the culture and practice of medicine. Topol claims that, rather than replacing physicians, AI could function alongside of them in order to allow them to devote more of their time to face-to-face patient care. Unfortunately, these high hopes for AI-enhanced medicine fail to appreciate a number of factors that, we believe, suggest a radically different picture for the (...) of healthcare. Far from facilitating a return to “the golden age of doctoring”, the role of economic and institutional considerations in determining how medical AI will be used mean that it is likely to further erode therapeutic relationships and threaten professional and patient satisfaction.
    5 citations
  17. (1 other version) Future progress in artificial intelligence: A survey of expert opinion. Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    39 citations
  18. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    3 citations
  19. When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract. Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
    7 citations
  20. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks. Trystan S. Goetze - 2024 - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  21. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents. Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  22. Military AI as a Convergent Goal of Self-Improving AI. Alexey Turchin & David Denkenberger - 2018 - In Alexey Turchin & David Denkenberger (eds.), Artificial Intelligence Safety and Security. CRC Press.
    Better instruments to predict the future evolution of artificial intelligence (AI) are needed, as the destiny of our civilization depends on it. One of the ways to such prediction is the analysis of the convergent drives of any future AI, started by Omohundro. We show that one of the convergent drives of AI is a militarization drive, arising from AI’s need to wage a war against its potential rivals by either physical or software means, or to increase its (...)
    3 citations
  23. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk. Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of AI.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  24. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un)dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of (...)
  25. Good AI for the Present of Humanity: Democratizing AI Governance. Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed (...)
    1 citation
  26. Emergent Models for Moral AI Spirituality. Mark Graves - 2021 - International Journal of Interactive Multimedia and Artificial Intelligence 7 (1):7-15.
    Examining AI spirituality can illuminate problematic assumptions about human spirituality and AI cognition, suggest possible directions for AI development, reduce uncertainty about future AI, and yield a methodological lens sufficient to investigate human-AI sociotechnical interaction and morality. Incompatible philosophical assumptions about human spirituality and AI limit investigations of both and suggest a vast gulf between them. An emergentist approach can replace dualist assumptions about human spirituality and identify emergent behavior in AI computation to overcome overly reductionist assumptions about computation. (...)
  27. AI-Driven Organizational Change: Transforming Structures and Processes in the Modern Workplace. Mohammed Elkahlout, Mohammed B. Karaja, Abeer A. Elsharif, Ibtesam M. Dheir, Basem S. Abunasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 8 (8):38-45.
    Abstract: Artificial Intelligence (AI) is revolutionizing organizational dynamics by reshaping both structures and processes. This paper explores how AI-driven innovations are transforming organizational frameworks, from hierarchical adjustments to decentralized decision-making models. It examines the impact of AI on various processes, including workflow automation, data analysis, and enhanced decision support systems. Through case studies and empirical research, the paper highlights the benefits of AI in improving efficiency, driving innovation, and fostering agility within organizations. Additionally, it addresses the challenges associated with AI (...)
  28. How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia:1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the (...)
  29. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk. Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies are created (...)
  30. The AI Ensoulment Hypothesis. Brian Cutter - forthcoming - Faith and Philosophy.
    According to the AI ensoulment hypothesis, some future AI systems will be endowed with immaterial souls. I argue that we should have at least a middling credence in the AI ensoulment hypothesis, conditional on our eventual creation of AGI and the truth of substance dualism in the human case. I offer two arguments. The first relies on an analogy between aliens and AI. The second rests on the conjecture that ensoulment occurs whenever a physical system is “fit to possess” (...)
    2 citations
  31. (1 other version) Friendly Superintelligent AI: All You Need is Love. Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge (...)
  32. AI and Ethics: Reality or Oxymoron? Jean Kühn Keyser - manuscript
    A philosophical-linguistic exploration of whether AI ethics exists at all. Using Adorno's negative dialectics, the author considers contemporary approaches to AI and ethics, especially with regard to policy and law, asking whether these approaches in fact speak to our historical conception of AI and what the actual emergence of the latter could imply for future ethical concerns.
  33. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  34. AI Successors Worth Creating? Commentary on Lavazza & Vilaça. Alexandre Erler - 2024 - Philosophy and Technology 37 (1):1-5.
    This is a commentary on Andrea Lavazza and Murilo Vilaça's article "Human Extinction and AI: What We Can Learn from the Ultimate Threat" (Lavazza & Vilaça, 2024). I discuss the potential concern that their proposal to create artificial successors to "insure" against the tragedy of human extinction might mean being too quick to accept that catastrophic prospect as inevitable, rather than single-mindedly focusing on avoiding it. I also consider the question of the value that we might reasonably assign to such (...)
    2 citations
  35. Aiming AI at a moving target: health. Mihai Nadin - 2020 - AI and Society 35 (4):841-849.
    Justified by spectacular achievements facilitated through applied deep learning methodology, the “Everything is possible” view dominates this new hour in the “boom and bust” curve of AI performance. The optimistic view collides head on with the “It is not possible”—ascertainments often originating in a skewed understanding of both AI and medicine. The meaning of the conflicting views can be assessed only by addressing the nature of medicine. Specifically: Which part of medicine, if any, can and should be entrusted to AI—now (...)
    2 citations
  36. AI-Driven Innovations in Agriculture: Transforming Farming Practices and Outcomes. Jehad M. Altayeb, Hassam Eleyan, Nida D. Wishah, Abed Elilah Elmahmoum, Ahmed J. Khalil, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Applied Research (IJAAR) 8 (9):1-6.
    Abstract: Artificial Intelligence (AI) is transforming the agricultural sector, enhancing both productivity and sustainability. This paper delves into the impact of AI technologies on agriculture, emphasizing their application in precision farming, predictive analytics, and automation. AI-driven tools facilitate more efficient crop and resource management, leading to higher yields and a reduced environmental footprint. The paper explores key AI technologies, such as machine learning algorithms for crop monitoring, robotics for automated planting and harvesting, and data analytics for optimizing resource use. Additionally, (...)
  37. The Future of Human-Artificial Intelligence Nexus and its Environmental Costs. Petr Spelda & Vit Stritecky - 2020 - Futures 117.
    The environmental costs and energy constraints have become emerging issues for the future development of Machine Learning (ML) and Artificial Intelligence (AI). So far, the discussion on environmental impacts of ML/AI lacks a perspective reaching beyond quantitative measurements of the energy-related research costs. Building on the foundations laid down by Schwartz et al., 2019 in the GreenAI initiative, our argument considers two interlinked phenomena, the gratuitous generalisation capability and the future where ML/AI performs the majority of quantifiable inductive (...)
  38. Group Prioritarianism: Why AI should not replace humanity. Frank Hong - 2024 - Philosophical Studies:1-19.
    If a future AI system can enjoy far more well-being than a human per resource, what would be the best way to allocate resources between these future AI and our future descendants? It is obvious that on total utilitarianism, one should give everything to the AI. However, it turns out that every Welfarist axiology on the market also gives this same recommendation, at least if we assume consequentialism. Without resorting to non-consequentialist normative theories that suggest that we (...)
  39. Editorial to “Decision theory and the future of AI”. Yang Liu, Stephan Hartmann & Huw Price - 2021 - Synthese 198 (Suppl 27):6413-6414.
  40. How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    38 citations
  41. Future progress in artificial intelligence: A poll among experts. Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered (...)
    4 citations
  42. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
    2 citations
  43. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it (...)
    1 citation
  44. AI-Driven Learning: Advances and Challenges in Intelligent Tutoring Systems.Amjad H. Alfarra, Lamis F. Amhan, Msbah J. Mosa, Mahmoud Ali Alajrami, Faten El Kahlout, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Applied Research (IJAAR) 8 (9):24-29.
    Abstract: The incorporation of Artificial Intelligence (AI) into educational technology has dramatically transformed learning through Intelligent Tutoring Systems (ITS). These systems utilize AI to offer personalized, adaptive instruction tailored to each student's needs, thereby improving learning outcomes and engagement. This paper examines the development and impact of ITS, focusing on AI technologies such as machine learning, natural language processing, and adaptive algorithms that drive their functionality. Through various case studies and applications, it illustrates how ITS have revolutionized traditional educational methods (...)
  45. The emperor is naked: Moral diplomacies and the ethics of AI.Constantin Vica, Cristina Voinea & Radu Uszkai - 2021 - Információs Társadalom 21 (2):83-96.
    With AI permeating our lives, there is widespread concern regarding the proper framework needed to morally assess and regulate it. This has given rise to many attempts to devise ethical guidelines that infuse guidance for both AI development and deployment. Our main concern is that, instead of a genuine ethical interest for AI, we are witnessing moral diplomacies resulting in moral bureaucracies battling for moral supremacy and political domination. After providing a short overview of what we term ‘ethics washing’ in (...)
    2 citations
  46. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion.Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body be allowed (...)
  47. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
    10 citations
  48. Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults.Alex John London - forthcoming - IEEE Transactions on Technology and Society.
    Abstract: This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive (...)
    1 citation
  49. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  50. (1 other version)A unified framework of five principles for AI in society.Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
    76 citations