Results for 'AI progress'

972 found
  1. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base.Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the salivaomics (...)
    4 citations
  3. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own (...)
    1 citation
  4. (1 other version)Future progress in artificial intelligence: A survey of expert opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller, Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. We thus (...)
    41 citations
  5. Expanding AI and AI Alignment Discourse: An Opportunity for Greater Epistemic Inclusion.A. E. Williams - manuscript
    The AI and AI alignment communities have been instrumental in addressing existential risks, developing alignment methodologies, and promoting rationalist problem-solving approaches. However, as AI research ventures into increasingly uncertain domains, there is a risk of premature epistemic convergence, where prevailing methodologies influence not only the evaluation of ideas but also determine which ideas are considered within the discourse. This paper examines critical epistemic blind spots in AI alignment research, particularly the lack of predictive frameworks to differentiate problems necessitating general intelligence, (...)
  6. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains.Matthieu Queloz - forthcoming - Philosophy and Technology.
    A key assumption fuelling optimism about the progress of large language models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency (...)
  7. As AIs get smarter, understand human-computer interactions with the following five premises.Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    The hypergrowth and hyperconnectivity of networks of artificial intelligence (AI) systems and algorithms increasingly make our interactions with the world, socially and environmentally, more technologically mediated. AI systems start interfering with our choices or making decisions on our behalf: what we see, what we buy, which contents or foods we consume, where we travel to, who we hire, etc. It is imperative to understand the dynamics of human-computer interaction in the age of progressively more competent AI. This essay presents five (...)
  8. Combating Disinformation with AI: Epistemic and Ethical Challenges.Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
  9. Future progress in artificial intelligence: A poll among experts.Vincent C. Müller & Nick Bostrom - 2014 - AI Matters 1 (1):9-11.
    [This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered (...)
    5 citations
  10. A Case for AI Consciousness: Language Agents and Global Workspace Theory.Simon Goldstein & Cameron Domenico Kirk-Giannini - manuscript
    It is generally assumed that existing artificial systems are not phenomenally conscious, and that the construction of phenomenally conscious artificial systems would require significant technological progress if it is possible at all. We challenge this assumption by arguing that if Global Workspace Theory (GWT) — a leading scientific theory of phenomenal consciousness — is correct, then instances of one widely implemented AI architecture, the artificial language agent, might easily be made phenomenally conscious if they are not already. Along the (...)
    2 citations
  11. How AI Can Implement the Universal Formula in Education and Leadership Training.Angelito Malicse - manuscript
    How AI Can Implement the Universal Formula in Education and Leadership Training -/- If AI is programmed based on your universal formula, it can serve as a powerful tool for optimizing human intelligence, education, and leadership decision-making. Here’s how AI can be integrated into your vision: -/- 1. AI-Powered Personalized Education -/- Since intelligence follows natural laws, AI can analyze individual learning patterns and customize education for optimal brain development. -/- Adaptive Learning Systems – AI can adjust lessons in real (...)
  12. Aligning AI with the Universal Formula for Balanced Decision-Making.Angelito Malicse - manuscript
    -/- Aligning AI with the Universal Formula for Balanced Decision-Making -/- Introduction -/- Artificial Intelligence (AI) represents a highly advanced form of automated information processing, capable of analyzing vast amounts of data, identifying patterns, and making predictive decisions. However, the effectiveness of AI depends entirely on the integrity of its inputs, processing mechanisms, and decision-making frameworks. If AI is programmed without a foundational understanding of natural laws, it risks reinforcing misinformation, bias, and societal imbalance. -/- Angelito Malicse’s universal formula, particularly (...)
  13. AI and the Universal Law of Economic Balance: A Homeostatic Model for Sustainable Prosperity.Angelito Malicse - manuscript
    AI and the Universal Law of Economic Balance: A Homeostatic Model for Sustainable Prosperity -/- Introduction -/- Modern economies are primarily driven by the profit motive, which, while encouraging innovation and efficiency, often leads to wage stagnation, wealth inequality, and resource exploitation. The imbalance between corporate profits, wages, purchasing power, and market demand has resulted in recurring economic crises, social unrest, and environmental degradation. -/- To resolve these systemic issues, economic policies must align with the universal law of balance in (...)
  14. Rethinking AI: Moving Beyond Humans as Exclusive Creators.Renee Ye - 2024 - Proceedings of the Annual Meeting of the Cognitive Science Society, Volume 46.
    Termed the 'Made-by-Human Hypothesis,' I challenge the commonly accepted notion that Artificial Intelligence (AI) is exclusively crafted by humans, emphasizing its impediment to progress. I argue that influences beyond human agency significantly shape AI's trajectory. Introducing the 'Hybrid Hypothesis,' I suggest that the creation of AI is multi-sourced; methods such as evolutionary algorithms influencing AI originate from diverse sources and yield varied impacts. I argue that the development of AI models will increasingly adopt a 'Human+' hybrid composition, where human (...)
  15. Global Solutions vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data Cogn. Comput 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
    2 citations
  16. Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since (...)
    39 citations
  17. Why interdisciplinary research in AI is so important, according to Jurassic Park.Marie Oldfield - 2020 - The Tech Magazine 1 (1):1.
    Why interdisciplinary research in AI is so important, according to Jurassic Park. -/- “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” -/- I think this quote resonates with us now more than ever, especially in the world of technological development. The writers of Jurassic Park were years ahead of their time with this powerful quote. -/- As we build new technology, and we push on to see what can actually (...)
  18. Discovering Our Blind Spots and Cognitive Biases in AI Research and Alignment.A. E. Williams - manuscript
    The challenge of AI alignment is not just a technological issue but fundamentally an epistemic one. AI safety research predominantly relies on empirical validation, often detecting failures only after they manifest. However, certain risks—such as deceptive alignment and goal misspecification—may not be empirically testable until it is too late, necessitating a shift toward leading-indicator logical reasoning. This paper explores how mainstream AI research systematically filters out deep epistemic insight, hindering progress in AI safety. We assess the rarity of such (...)
  19. Theory and philosophy of AI (Minds and Machines, 22/2 - Special volume).Vincent C. Müller (ed.) - 2012 - Springer.
    Invited papers from PT-AI 2011. - Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence - Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents - Hubert L. Dreyfus: A History of First Step Fallacies - Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic - J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels - Oron Shagrir: Computation, Implementation, Cognition.
    2 citations
  20. Reasons to Respond to AI Emotional Expressions.Rodrigo Díaz & Jonas Blatter - 2025 - American Philosophical Quarterly 62 (1):87-102.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  21. Governing the Agent-to-Agent Economy of Trust via Progressive Decentralization.Tomer Jordi Chaffer - manuscript
    Current approaches to AI governance often fall short in anticipating a future where AI agents manage critical tasks, such as financial operations, administrative functions, and beyond. As AI agents may eventually delegate tasks among themselves to optimize efficiency, understanding the foundational principles of human value exchange could offer insights into how AI-driven economies might operate. Just as trust and value exchange are central to human interactions in open marketplaces, they may also be critical for enabling secure and efficient interactions among (...)
  22. The Different Way of Utilizing the Intellectual of Artificial Intelligence in the Animal Farming Field Progress of AI.K. Krishna Kumar Deepak Kumar, Laith H. Alzubaidi, Awadhesh Chandramauli, Raheem Unissa, Saloni Bansal - 2024 - International Conference on Advance Computing and Innovative Technologies in Engineering 4 (1):1624-1626.
    The goal of this project is to create a prototype smart farm using Internet of Things (IoT) technology. Developing an Internet of Things feeding control system and designing and implementing environmental monitoring and control systems for farm housing are the primary objectives. The development method uses open-source hardware and software and is conducted in pig farms in the Thai region of Nakhon Si Thammarat. An ESP8266 Wi-Fi microcontroller is crucial to the system design, allowing several sensors to be connected to (...)
  23. Multimodal Gen AI: Integrating Text, Image, and Video Analysis for Comprehensive Claims Assessment.Adavelli Sateesh Reddy - 2024 - Esp International Journal of Advancements in Computational Technology 2 (2):133-141.
    The increase in claim sophistication in both the insurance and legal domains is a result of an increase in stakes and heterogeneity of data needed to assess the claim validity. Originally, this task was performed by some sort of subjectivity assessments and graphical rule sets, which is very slow and may be inherently erroneous due to its purely manual nature. Hence, with progressivity in multimodal learning, specifically in AI, there is now a unique chance of solving these challenges through the (...)
  24. Philosophers Ought to Develop, Theorize About, and Use Philosophically Relevant AI.Graham Clay & Caleb Ontiveros - 2023 - Metaphilosophy 54 (4):463-479.
    The transformative power of artificial intelligence (AI) is coming to philosophy—the only question is the degree to which philosophers will harness it. In this paper, we argue that the application of AI tools to philosophy could have an impact on the field comparable to the advent of writing, and that it is likely that philosophical progress will significantly increase as a consequence of AI. The role of philosophers in this story is not merely to use AI but also to (...)
    1 citation
  25. Susan Schneider's Proposed Tests for AI Consciousness: Promising but Flawed.D. B. Udell & Eric Schwitzgebel - 2021 - Journal of Consciousness Studies 28 (5-6):121-144.
    Susan Schneider (2019) has proposed two new tests for consciousness in AI (artificial intelligence) systems, the AI Consciousness Test and the Chip Test. On their face, the two tests seem to have the virtue of proving satisfactory to a wide range of consciousness theorists holding divergent theoretical positions, rather than narrowly relying on the truth of any particular theory of consciousness. Unfortunately, both tests are undermined in having an ‘audience problem’: Those theorists with the kind of architectural worries that motivate (...)
    5 citations
  26. From Iron to AI: The Evolution of the Sources of State Power.Yu Chen - manuscript
    This article, “From Iron to AI: The Evolution of the Sources of State Power,” examines the progression of fundamental resources that have historically underpinned state power, from tangible assets like land and iron to modern advancements in artificial intelligence (AI). It traces the development of state power through three significant eras: the ancient period characterized by land, population, horses, and iron; the industrial era marked by railroads, coal, and electricity; and the contemporary digital age dominated by the Internet and emerging (...)
  27. HARMONIZING LAW AND INNOVATIONS IN NANOMEDICINE, ARTIFICIAL INTELLIGENCE (AI) AND BIOMEDICAL ROBOTICS: A CENTRAL ASIAN PERSPECTIVE.Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    The recent progression in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies impact the functionality of “law in action”, which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers for the harmonization of AI, nanomedicines and robotics in the Central Asian legal system.
  28. Accelerating Artificial Intelligence: Exploring the Implications of Xenoaccelerationism and Accelerationism for AI and Machine Learning.Kaiola liu - 2023 - Dissertation, University of Hawaii
    This article analyzes the potential impacts of Xenoaccelerationism and Accelerationism on the development of artificial intelligence (AI) and machine learning (ML). It examines how these speculative philosophies, which advocate technological acceleration and integration of diverse knowledge, may shape priorities and approaches in AI research and development. The risks and benefits of aligning AI progress with accelerationist values are discussed.
  29. From Past to Present: A study of AI-driven gamification in heritage education.Sepehr Vaez Afshar, Sarvin Eshaghi, Mahyar Hadighi & Guzden Varinlioglu - 2024 - 42nd Conference on Education and Research in Computer Aided Architectural Design in Europe: Data-Driven Intelligence 2:249-258.
    The use of Artificial Intelligence (AI) in educational gamification marks a significant advancement, transforming traditional learning methods by offering interactive, adaptive, and personalized content. This approach makes historical content more relatable and promotes active learning and exploration. This research presents an innovative approach to heritage education, combining AI and gamification, explicitly targeting the Silk Roads. It represents a significant progression in a series of research, transitioning from basic 2D textual interactions to a 3D environment using photogrammetry, combining historical authenticity and (...)
  30. Content Reliability in the Age of AI: A Comparative Study of Human vs. GPT-Generated Scholarly Articles.Rajesh Kumar Maurya & Swati R. Maurya - 2024 - Library Progress International 44 (3):1932-1943.
    The rapid advancement of Artificial Intelligence (AI) and the developments of Large Language Models (LLMs) like Generative Pretrained Transformers (GPTs) have significantly influenced content creation in scholarly communication and across various fields. This paper presents a comparative analysis of the content reliability between human-generated and GPT-generated scholarly articles. Recent developments in AI suggest that GPTs have become capable in generating content that can mimic human language to a greater extent. This highlights and raises questions about the quality, accuracy, and reliability (...)
  31. Advanced Data Integration for Smart Healthcare: Leveraging Blockchain and AI Technologies.A. Manoj Prabaharan - 2024 - Journal of Artificial Intelligence and Cyber Security (Jaics) 8 (1):1-7.
    The healthcare sector is undergoing a transformative shift towards smart healthcare, driven by advancements in technology, including Artificial Intelligence (AI) and Blockchain. As healthcare systems generate vast amounts of data from multiple sources, such as electronic health records (EHRs), medical imaging, wearable devices, and sensor-based monitoring, the challenge lies in securely integrating and analyzing this data for real-time, actionable insights. Blockchain technology, with its decentralized, immutable, and transparent framework, offers a robust solution for securing data integrity, privacy, and sharing across (...)
  32. The Inefficiency of the Biological Brain and the Importance of AI for the Next Generation.Angelito Malicse - manuscript
    The Inefficiency of the Biological Brain and the Importance of AI for the Next Generation -/- The human brain, often considered the pinnacle of evolutionary design, is an extraordinary organ capable of creativity, critical thinking, and adaptation. However, despite its remarkable abilities, it is inherently inefficient when compared to artificial intelligence (AI) systems in certain domains. The inefficiencies of the biological brain, coupled with the rapid development of AI technology, underline why artificial general intelligence (AGI) holds immense promise for shaping (...)
  33. The Science of Balanced Leadership and Competition: The Role of AI Technology as a Guide.Angelito Malicse - manuscript
    The Science of Balanced Leadership and Competition: The Role of AI Technology as a Guide -/- Introduction -/- Leadership and competition are two fundamental forces that shape human societies, economies, and institutions. However, their effectiveness depends on how they are managed. When leadership is imbalanced, it leads to corruption, authoritarianism, or inefficiency. When competition is unregulated, it creates inequality, exploitation, and instability. The science of balanced leadership and competition is an approach that integrates principles of natural balance, ethical decision-making, and (...)
  34. Consciousness without biology: An argument from anticipating scientific progress.Leonard Dung - manuscript
    I develop the anticipatory argument for the view that it is nomologically possible that some non-biological creatures are phenomenally conscious, including conventional, silicon-based AI systems. This argument rests on the general idea that we should make our beliefs conform to the outcomes of an ideal scientific process and that such an ideal scientific process would attribute consciousness to some possible AI systems. This kind of ideal scientific process is an ideal application of the iterative natural kind (INK) strategy, according to (...)
  35. Reforming All Countries Through the Universal Law of Balance: A Path to Global Stability and Progress.Angelito Malicse - manuscript
    Reforming All Countries Through the Universal Law of Balance: A Path to Global Stability and Progress -/- By Angelito Enriquez Malicse -/- Introduction -/- Throughout history, human societies have struggled with instability, conflict, economic inequality, environmental degradation, and governance failures. Despite technological advancements, many nations still face deep-rooted problems caused by imbalanced decision-making at both individual and collective levels. My universal formula, grounded in the universal law of balance in nature, offers a transformative solution to reform all countries and (...)
  36. Late-Binding Scholarship in the Age of AI.Bill Tomlinson, Andrew W. Torrance, Rebecca W. Black & Donald J. Patterson - 2023 - Arxiv.
    Scholarly processes play a pivotal role in discovering, challenging, improving, advancing, synthesizing, codifying, and disseminating knowledge. Since the 17th Century, both the quality and quantity of knowledge that scholarship has produced have increased tremendously, granting academic research a pivotal role in ensuring material and social progress. Artificial Intelligence (AI) is poised to enable a new leap in the creation of scholarly content. New forms of engagement with AI systems, such as collaborations with large language models like GPT-3, offer affordances (...)
  37. Why the World Will Not Fully Progress Without the Full Implementation of My Universal Formula.Angelito Malicse - manuscript
    -/- Why the World Will Not Fully Progress Without the Full Implementation of My Universal Formula -/- Introduction -/- Throughout history, humanity has sought progress through technological advancements, economic growth, and political reforms. However, despite these efforts, the world continues to face persistent challenges such as poverty, corruption, environmental destruction, and social conflicts. These problems arise because human decision-making often lacks a fundamental guiding principle that ensures balance and sustainability. My universal formula, which solves the problem of free (...)
  38. Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Risks of general intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. Time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  39. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2015 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
    2 citations
  40. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, (...)
    9 citations
  41. Evaluation and Design of Generalist Systems (EDGeS).John Beverley & Amanda Hicks - 2023 - AI Magazine.
    The field of AI has undergone a series of transformations, each marking a new phase of development. The initial phase emphasized curation of symbolic models which excelled in capturing reasoning but were fragile and not scalable. The next phase was characterized by machine learning models—most recently large language models (LLMs)—which were more robust and easier to scale but struggled with reasoning. Now, we are witnessing a return to symbolic models as complementing machine learning. Successes of LLMs contrast with their inscrutability, (...)
  42. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of.Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider (...)
  43. Is Alignment Unsafe?Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to (...)
  44. The Electi Model: A Comprehensive Blueprint for Future Governance and Societal Evolution.Seth Boudreau - manuscript
    This treatise introduces a transformative socio-political paradigm that merges empathy, ethics, equality, and the refinement of excellence to address the novel challenges of the 21st century and beyond. By combining ethical bio-genetic enhancement, empathetic governance, and the cautious reconstruction of socio-political structures, it envisions a society capable of achieving unprecedented precision, efficiency, and unity in governance. This interdisciplinary framework integrates modern science, classical philosophy, and historical lessons to propose a genetically optimized, rigorously educated, and ethically inclined sub-population—the “Electi.” Designed to (...)
  45. Imagine This: Opaque DLMs are Reliable in the Context of Justification.Logan Carter - manuscript
    Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic – perhaps rightfully so – about the potential that these models have in facilitating scientific progress. The philosophy of AI literature carries a different mood. The attention of philosophers remains on potential epistemological issues that stem from the so-called “black box” features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (...)
  46. The impact of intelligent decision-support systems on humans’ ethical decision-making: A systematic literature review and an integrated framework.Franziska Poszler & Benjamin Lange - 2024 - Technological Forecasting and Social Change 204.
    With the rise and public accessibility of AI-enabled decision-support systems, individuals outsource increasingly more of their decisions, even those that carry ethical dimensions. Considering this trend, scholars have highlighted that uncritical deference to these systems would be problematic and consequently called for investigations of the impact of pertinent technology on humans’ ethical decision-making. To this end, this article conducts a systematic review of existing scholarship and derives an integrated framework that demonstrates how intelligent decision-support systems (IDSSs) shape humans’ ethical decision-making. (...)
    1 citation
  47. É Possível Evitar Vieses Algorítmicos? [Is It Possible to Avoid Algorithmic Biases?]Carlos Barth - 2021 - Revista de Filosofia Moderna E Contemporânea 8 (3):39-68.
    Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender and other kinds of bias, which are typically understood as technical problems. Here we try to show that: 1) to get rid of such biases, we need a system that can understand the structure of human activities and;2) to create such a system, we need to solve foundational problems of AI, such as the common-sense problem. Additionally, when informational platforms uses these (...)
    1 citation
  48. Are Large Language Models "alive"?Francesco Maria De Collibus - manuscript
    The appearance of openly accessible Artificial Intelligence Applications such as Large Language Models, nowadays capable of almost human-level performance in complex reasoning tasks, has had a tremendous impact on public opinion. Are we going to be "replaced" by the machines? Or - even worse - "ruled" by them? The behavior of these systems is so advanced they might almost appear "alive" to end users, and there have been claims about these programs being "sentient". Since many of our relationships of power and (...)
  49. ChatGPT as Teacher Assistant for Physics Teaching.Konstantinos Kotsis - 2024 - Eiki Journal of Effective Teaching Methods 2 (4):18-27.
    This study explores the integration of ChatGPT as a teaching assistant in physics education, emphasizing its potential to transform traditional pedagogical approaches. ChatGPT facilitates interactive and inquiry-based learning grounded in constructivist learning theory, allowing students to engage actively in experiments and better grasp abstract concepts through hands-on activities. The AI's adaptive dialogue systems promote socio-constructivist learning by encouraging social interaction and personalized feedback, which is essential for addressing individual learning gaps and enhancing student engagement. ChatGPT's ability to simulate real-world physics (...)
  50. Concern Across Scales: a biologically inspired embodied artificial intelligence.Matthew Sims - 2022 - Frontiers in Neurorobotics 1 (Bio A.I. - From Embodied Cogniti).
    Intelligence in current AI research is measured according to designer-assigned tasks that lack any relevance for an agent itself. As such, tasks and their evaluation reveal a lot more about our intelligence than the possible intelligence of agents that we design and evaluate. As a possible first step in remedying this, this article introduces the notion of “self-concern,” a property of a complex system that describes its tendency to bring about states that are compatible with its continued self-maintenance. Self-concern, as (...)
1 — 50 / 972