Results for 'Schema-Based AI'

958 found
  1. Schemas versus symbols: A vision from the 90s. Michael A. Arbib - 2021 - Journal of Knowledge Structures and Systems 2 (1):68-74.
    Thirty years ago, I elaborated on a position that could be seen as a compromise between an "extreme" symbol-based AI and a "neurochemical reductionism" in AI. The present article recalls aspects of the espoused framework of schema theory that, it suggested, could provide a better bridge from human psychology to brain theory than that offered by the symbol systems of A. Newell and H. A. Simon.
  2. Saliva Ontology: An ontology-based framework for a Salivaomics Knowledge Base. Jiye Ai, Barry Smith & David Wong - 2010 - BMC Bioinformatics 11 (1):302.
    The Salivaomics Knowledge Base (SKB) is designed to serve as a computational infrastructure that can permit global exploration and utilization of data and information relevant to salivaomics. SKB is created by aligning (1) the saliva biomarker discovery and validation resources at UCLA with (2) the ontology resources developed by the OBO (Open Biomedical Ontologies) Foundry, including a new Saliva Ontology (SALO). We define the Saliva Ontology (SALO; http://www.skb.ucla.edu/SALO/) as a consensus-based controlled vocabulary of terms and relations dedicated to the (...)
    4 citations
  3. Bioinformatics advances in saliva diagnostics. Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85-87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  4. Feminist Re-Engineering of Religion-Based AI Chatbots. Hazel T. Biana - 2024 - Philosophies 9 (1):20.
    Religion-based AI chatbots serve religious practitioners by bringing them godly wisdom through technology. These bots reply to spiritual and worldly questions by drawing insights or citing verses from the Quran, the Bible, the Bhagavad Gita, the Torah, or other holy books. They answer religious and theological queries by claiming to offer historical contexts and providing guidance and counseling to their users. A criticism of these bots is that they may give inaccurate answers and proliferate bias by propagating homogenized versions (...)
  5. AI training data, model success likelihood, and informational entropy-based value. Quan-Hoang Vuong, Viet-Phuong La & Minh-Hoang Nguyen - manuscript
    Since the release of OpenAI's ChatGPT, the world has entered a race to develop more capable and powerful AI, including artificial general intelligence (AGI). The development is constrained by the dependency of AI on the model, quality, and quantity of training data, making the AI training process highly costly in terms of resources and environmental consequences. Thus, improving the effectiveness and efficiency of the AI training process is essential, especially when the Earth is approaching the climate tipping points and planetary (...)
  6. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed (...)
    2 citations
  7. (1 other version) Ethics-based auditing to develop trustworthy AI. Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and (...)
    19 citations
  8. AI-Based Medical Solutions Can Threaten Physicians’ Ethical Obligations Only If Allowed to Do So. Benjamin Gregg - 2023 - American Journal of Bioethics 23 (9):84-86.
    Mildred Cho and Nicole Martinez-Martin (2023) distinguish between two of the ways in which humans can be represented in medical contexts. One is technical: a digital model of aspects of a person’s...
  9. Body schema dynamics in Merleau-Ponty. Jan Halák - 2021 - In Yochai Ataria, Shogo Tanaka & Shaun Gallagher (eds.), Body Schema and Body Image: New Directions. Oxford, United Kingdom: Oxford University Press. pp. 33-51.
    This chapter presents an account of Merleau-Ponty’s interpretation of the body schema as an operative intentionality that is not only opposed to, but also complexly intermingled with, the representation-like grasp of the world and one’s own body, or the body image. The chapter reconstructs Merleau-Ponty’s position primarily based on his preparatory notes for his 1953 lecture ‘The Sensible World and the World of Expression’. Here, Merleau-Ponty elaborates his earlier efforts to show that the body schema is a (...)
    2 citations
  10. Body Schema in Autonomous Agents. Zachariah A. Neemeh & Christian Kronsted - 2021 - Journal of Artificial Intelligence and Consciousness 1 (8):113-145.
    A body schema is an agent's model of its own body that enables it to act on affordances in the environment. This paper presents a body schema system for the Learning Intelligent Decision Agent (LIDA) cognitive architecture. LIDA is a conceptual and computational implementation of Global Workspace Theory, also integrating other theories from neuroscience and psychology. This paper contends that the ‘body schema' should be split into three separate functions based on the functional role of consciousness (...)
    1 citation
  11. “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
    3 citations
  12. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science. Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  13. A value sensitive design approach for designing AI-based worker assistance systems in manufacturing. Susanne Vernim, Harald Bauer, Erwin Rauch, Marianne Thejls Ziegler & Steven Umbrello - 2022 - Procedia Computer Science 200:505-516.
    Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators (...)
    2 citations
  14. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents. Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  15. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models. Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's (...)
  16. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - 2024 - Social Epistemology 38 (4):476-490.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  17. The Concept of ‘Body Schema’ in Merleau-Ponty’s Account of Embodied Subjectivity. Jan Halák - 2018 - In Bernard Andrieu, Jim Parry, Alessandro Porrovecchio & Olivier Sirost (eds.), Body Ecology and Emersive Leisure. Routledge. pp. 37-50.
    In his 1953 lectures at the College de France, Merleau-Ponty dedicated much effort to further developing his idea of embodied subject and interpreted fresh sources that he did not use in Phenomenology of Perception. Notably, he studied more in depth the neurological notion of "body schema". According to Merleau-Ponty, the body schema is a practical diagram of our relationships to the world, an action-based norm with reference to which things make sense. Merleau-Ponty more precisely tried to describe (...)
    4 citations
  18. “Democratizing AI” and the Concern of Algorithmic Injustice. Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable (...)
  19. How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that (...)
    37 citations
  20. Consciousness as computation: A defense of strong AI based on quantum-state functionalism. R. Michael Perry - 2006 - In Charles Tandy (ed.), Death and Anti-Death, Volume 4: Twenty Years After De Beauvoir, Thirty Years After Heidegger. Palo Alto: Ria University Press.
    The viewpoint that consciousness, including feeling, could be fully expressed by a computational device is known as strong artificial intelligence or strong AI. Here I offer a defense of strong AI based on machine-state functionalism at the quantum level, or quantum-state functionalism. I consider arguments against strong AI, then summarize some counterarguments I find compelling, including Torkel Franzén’s work which challenges Roger Penrose’s claim, based on Gödel incompleteness, that mathematicians have nonalgorithmic levels of “certainty.” Some consequences of strong (...)
  21. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents (...)
  22. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic utilitarian principle. (...)
  23. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  24. AI Worship as a New Form of Religion. Neil McArthur - manuscript
    We are about to see the emergence of religions devoted to the worship of Artificial Intelligence (AI). Such religions pose acute risks, both to their followers and to the public. We should require their creators, and governments, to acknowledge these risks and to manage them as best they can. However, these new religions cannot be stopped altogether, nor should we try to stop them if we could. We must accept that AI worship will become part of our culture, and we (...)
  25. Socially Good AI Contributions for the Implementation of Sustainable Development in Mountain Communities Through an Inclusive Student-Engaged Learning Model. Tyler Lance Jaynes, Baktybek Abdrisaev & Linda MacDonald Glenn - 2023 - In Francesca Mazzi & Luciano Floridi (eds.), The Ethics of Artificial Intelligence for the Sustainable Development Goals. Springer Verlag. pp. 269-289.
    AI is increasingly becoming based upon Internet-dependent systems to handle the massive amounts of data it requires to function effectively regardless of the availability of stable Internet connectivity in every affected community. As such, sustainable development (SD) for rural and mountain communities will require more than just equitable access to broadband Internet connection. It must also include a thorough means whereby to ensure that affected communities gain the education and tools necessary to engage inclusively with new technological advances, whether (...)
  26. Towards a Taxonomy of AI Risks in the Health Domain. Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary AI (TransAI).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in (...)
  27. First human upload as AI Nanny. Alexey Turchin - manuscript
    As there are no visible ways to create safe self-improving superintelligence, but it is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  28. The Future of AI: Stanisław Lem’s Philosophical Visions for AI and Cyber-Societies in Cyberiad. Roman Krzanowski & Pawel Polak - 2021 - Pro-Fil 22 (3):39-53.
    Looking into the future is always a risky endeavour, but one way to anticipate the possible future shape of AI-driven societies is to examine the visionary works of some sci-fi writers. Not all sci-fi works have such visionary quality, of course, but some of Stanisław Lem’s works certainly do. We refer here to Lem’s works that explore the frontiers of science and technology and those that describe imaginary societies of robots. We therefore examine Lem’s prose, with a focus on the (...)
  29. NHS AI Lab: why we need to be ethically mindful about AI for healthcare. Jessica Morley & Luciano Floridi - unknown
    On 8th August 2019, Secretary of State for Health and Social Care, Matt Hancock, announced the creation of a £250 million NHS AI Lab. This significant investment is justified on the belief that transforming the UK’s National Health Service (NHS) into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions, will offer significant benefit to patients, clinicians, and the overall system. These opportunities are realistic and should not be wasted. However, they may be missed (one (...)
  30. AI-Powered Threat Intelligence for Proactive Security Monitoring in Cloud Infrastructures. Tummalachervu Chaitanya Kanth - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):76-83.
    Cloud computing has become an essential component of enterprises and organizations globally in the current era of digital technology. The cloud has a multitude of advantages, including scalability, flexibility, and cost-effectiveness, rendering it an appealing choice for data storage and processing. The increasing storage of sensitive information in cloud environments has raised significant concerns over the security of such systems. The frequency of cyber threats and attacks specifically aimed at cloud infrastructure has been increasing, presenting substantial dangers to the data, (...)
  31. Social AI and The Equation of Wittgenstein’s Language User With Calvino’s Literature Machine. Warmhold Jan Thomas Mollema - 2024 - International Review of Literary Studies 6 (1):39-55.
    Is it sensical to ascribe psychological predicates to AI systems like chatbots based on large language models (LLMs)? People have intuitively started ascribing emotions or consciousness to social AI (‘affective artificial agents’), with consequences that range from love to suicide. The philosophical question of whether such ascriptions are warranted is thus very relevant. This paper advances the argument that LLMs instantiate language users in Ludwig Wittgenstein’s sense but that ascribing psychological predicates to these systems remains a functionalist temptation. Social (...)
  32. Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement. Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. Oxford Handbooks. pp. 307-325.
    This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements transhumanists advocate will accomplish the transhumanists' goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do (...)
    2 citations
  33. Panpsychism and AI consciousness. Marcus Arvan & Corey J. Maley - 2022 - Synthese 200 (3):1-22.
    This article argues that if panpsychism is true, then there are grounds for thinking that digitally-based artificial intelligence may be incapable of having coherent macrophenomenal conscious experiences. Section 1 briefly surveys research indicating that neural function and phenomenal consciousness may both be analog in nature. We show that physical and phenomenal magnitudes—such as rates of neural firing and the phenomenally experienced loudness of sounds—appear to covary monotonically with the physical stimuli they represent, forming the basis for an analog relationship (...)
    1 citation
  34. What Are Lacking in Sora and V-JEPA’s World Models? A Philosophical Analysis of Video AIs Through the Theory of Productive Imagination. Jianqiu Zhang - unknown
    Sora from OpenAI has shown exceptional performance, yet it faces scrutiny over whether its technological prowess equates to an authentic comprehension of reality. Critics contend that it lacks a foundational grasp of the world, a deficiency V-JEPA from Meta aims to amend with its joint embedding approach. This debate is vital for steering the future direction of Artificial General Intelligence (AGI). We enrich this debate by developing a theory of productive imagination that generates a coherent world model based on (...)
  35. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction? Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  36. The Unobserved Anatomy: Negotiating the Plausibility of AI-Based Reconstructions of Missing Brain Structures in Clinical MRI Scans. Paula Muhr - 2023 - In Antje Flüchter, Birte Förster, Britta Hochkirchen & Silke Schwandt (eds.), Plausibilisierung und Evidenz: Dynamiken und Praktiken von der Antike bis zur Gegenwart. Bielefeld University Press. pp. 169-192.
    Vast archives of fragmentary structural brain scans that are routinely acquired in medical clinics for diagnostic purposes have so far been considered to be unusable for neuroscientific research. Yet, recent studies have proposed that by deploying machine learning algorithms to fill in the missing anatomy, clinical scans could, in future, be used by researchers to gain new insights into various brain disorders. This chapter focuses on a study published in 2019, whose authors developed a novel unsupervised machine learning algorithm for synthesising (...)
  37. (1 other version) Learning as Differentiation of Experiential Schemas. Jan Halák - 2019 - In Jim Parry & Pete Allison (eds.), Experiential Learning and Outdoor Education: Traditions of practice and philosophical perspectives. Routledge. pp. 52-70.
    The goal of this chapter is to provide an interpretation of experiential learning that fully detaches itself from the epistemological presuppositions of empiricist and intellectualist accounts of learning. I first introduce the concept of schema as understood by Kant and I explain how it is related to the problems implied by the empiricist and intellectualist frameworks. I then interpret David Kolb’s theory of learning that is based on the concept of learning cycle and represents an attempt to overcome (...)
    1 citation
  38. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  39. The dialectic of desire: AI chatbots and the desire not to know. Jack Black - 2023 - Psychoanalysis, Culture and Society 28 (4):607-618.
    Exploring the relationship between humans and AI chatbots, as well as the ethical concerns surrounding their use, this paper argues that our relations with chatbots are not solely based on their function as a source of knowledge, but, rather, on the desire for the subject not to know. It is argued that, outside of the very fears and anxieties that underscore our adoption of AI, the desire not to know reveals the potential to embrace the very loss AI avers. (...)
  40. Challenges of AI for Promoting Sikhism in the 21st Century (Guest Editorial). Devinder Pal Singh - 2023 - The Sikh Review, Kolkata, WB, India 71 (09):6-8.
    Artificial Intelligence (AI) is a technology that enables machines or computer systems to perform tasks that usually require human intelligence. AI systems can understand and interpret information, make decisions, and solve problems based on patterns and data. They can also improve their performance over time by learning from their experiences. AI is used in various applications, such as enhancing knowledge and understanding, helping as voice assistants, aiding in image recognition, facilitating self-driving cars, and helping diagnose diseases. The appropriate usage (...)
  41. The trustworthiness of AI: Comments on Simion and Kelp’s account. Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  42. What is a subliminal technique? An ethical perspective on AI-driven influence. Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE ETHICS-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases (...)
  43. AI-Driven Emotion Recognition and Regulation Using Advanced Deep Learning Models. S. Arul Selvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):383-389.
    Emotion detection and management have emerged as pivotal areas in human-computer interaction, offering potential applications in healthcare, entertainment, and customer service. This study explores the use of deep learning (DL) models to enhance emotion recognition accuracy and enable effective emotion regulation mechanisms. By leveraging large datasets of facial expressions, voice tones, and physiological signals, we train deep neural networks to recognize a wide array of emotions with high precision. The proposed system integrates emotion recognition with adaptive management strategies that provide (...)
  44. Dai modelli fisici ai sistemi complessi. Giorgio Turchetti - 2012 - In Vincenzo Fano, Enrico Giannetto, Giulia Giannini & Pierluigi Graziani (eds.), Complessità e Riduzionismo. ISONOMIA - Epistemologica Series Editor. pp. 108-125.
    The observation of nature, with the aim of understanding the origin of the variety of forms and phenomena in which it manifests itself, has remote origins. At first, the relationship with natural phenomena was dominated by feelings such as fear and wonder, which led people to suppose the existence of entities, beyond direct perception, that permeated the elements and animated them. Magic thus represents the dominant element of primitive natural philosophy, characterized by the uniqueness of events and by the impossibility of understanding and mastering them, since they were the fruit of the (...)
  45. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses. Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir (...)
  46. A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism. Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman & Kinney Zalesne - 2024 - Harvard Ash Center for Democratic Governance and Innovation.
    This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage narrow risk (...)
  47. An Impossibility Theorem for Base Rate Tracking and Equalized Odds. Rush T. Stewart, Benjamin Eva, Shanna Slank & Reuben Stern - forthcoming - Analysis.
    There is a theorem that shows that it is impossible for an algorithm to jointly satisfy the statistical fairness criteria of Calibration and Equalised Odds non-trivially. But what about the recently advocated alternative to Calibration, Base Rate Tracking? Here, we show that Base Rate Tracking is strictly weaker than Calibration, and then take up the question of whether it is possible to jointly satisfy Base Rate Tracking and Equalised Odds in non-trivial scenarios. We show that it is not, thereby establishing (...)
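    A reading aid for this entry (not drawn from the paper itself): the criteria named in the abstract have standard formulations in the algorithmic-fairness literature, and a small synthetic check makes them concrete. The sketch below assumes Base Rate Tracking requires the difference in mean scores between groups to match the difference in base rates, and Equalised Odds requires equal true- and false-positive rates across groups; the data, group labels, and threshold are invented for illustration.
```python
# Minimal sketch of the fairness criteria discussed above, on invented toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                              # two demographic groups
true_rate = np.where(group == 0, 0.3, 0.6)                 # different base rates by group
y = rng.random(n) < true_rate                              # true binary outcomes
score = np.clip(true_rate + rng.normal(0, 0.2, n), 0, 1)   # a simple risk score
pred = score >= 0.5                                        # thresholded prediction

def base_rate_tracking_gap(score, y, group):
    """Difference between the cross-group score gap and the cross-group base-rate gap."""
    score_gap = score[group == 1].mean() - score[group == 0].mean()
    rate_gap = y[group == 1].mean() - y[group == 0].mean()
    return score_gap - rate_gap

def equalized_odds_gaps(pred, y, group):
    """Cross-group differences in true-positive and false-positive rates."""
    tpr = [pred[(group == g) & y].mean() for g in (0, 1)]
    fpr = [pred[(group == g) & ~y].mean() for g in (0, 1)]
    return tpr[1] - tpr[0], fpr[1] - fpr[0]

print("Base rate tracking gap:", base_rate_tracking_gap(score, y, group))
print("TPR gap, FPR gap:", equalized_odds_gaps(pred, y, group))
```
    On data like this, where the two groups have different base rates, a score that tracks base rates produces unequal error rates, which is the kind of tension the abstract's impossibility claim concerns.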
  48. The problem of AI identity. Soenke Ziesche & Roman V. Yampolskiy - manuscript
    The problem of personal identity is a longstanding philosophical topic albeit without final consensus. In this article the somewhat similar problem of AI identity is discussed, which has not gained much traction yet, although this investigation is increasingly relevant for different fields, such as ownership issues, personhood of AI, AI welfare, brain–machine interfaces, the distinction between singletons and multi-agent systems as well as to potentially support finding a solution to the problem of personal identity. The AI identity problem analyses the (...)
  49. Developing a Trusted Human-AI Network for Humanitarian Benefit. Susannah Kate Devitt, Jason Scholz, Timo Schless & Larry Lewis - forthcoming - Journal of Digital War:TBD.
    Humans and artificial intelligences (AI) will increasingly participate digitally and physically in conflicts, yet there is a lack of trusted communications across agents and platforms. For example, humans in disasters and conflict already use messaging and social media to share information; however, international humanitarian relief organisations treat this information as unverifiable and untrustworthy. AI may reduce the ‘fog of war’ and improve outcomes; however, current AI implementations are often brittle, have a narrow scope of application, and carry wide ethical risks. Meanwhile, human error (...)
  50. (1 other version) Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory (...)
    23 citations
1 — 50 / 958