Results for 'AI Act'

976 found
  1. Automating Business Process Compliance for the EU AI Act.Claudio Novelli, Guido Governatori & Antonino Rotolo - 2023 - In Giovanni Sileno, Jerry Spanakis & Gijs van Dijck, Legal Knowledge and Information Systems. Proceedings of JURIX 2023. IOS Press. pp. 125-130.
    The EU AI Act is the first step toward a comprehensive legal framework for AI. It introduces provisions for AI systems based on their risk levels in relation to fundamental rights. Providers of AI systems must conduct Conformity Assessments before market placement. Recent amendments added Fundamental Rights Impact Assessments for high-risk AI system users, focusing on compliance with EU and national laws, fundamental rights, and potential impacts on EU values. The paper suggests that automating business process compliance can help standardize (...)
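    A minimal sketch of the kind of automated compliance screening described in entry 1: classifying a described AI system into the AI Act's four risk tiers with simple rules, so that a Conformity Assessment can be flagged before market placement. The tier names come from the Act itself; the keyword rules, the AISystem fields, and the example system are illustrative assumptions, not the authors' formalization.

```python
# Hedged sketch: rule-based screening of an AI system description against the
# AI Act's four risk tiers. The tier names follow the Act; the rule sets and
# the example system are simplified assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    purpose: str
    practices: set = field(default_factory=set)  # flagged practices, e.g. "social_scoring"
    domains: set = field(default_factory=set)    # deployment domains, e.g. "employment"

PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}                  # assumption: partial list
HIGH_RISK_DOMAINS = {"employment", "education", "law_enforcement", "credit_scoring"}  # assumption: partial list
LIMITED_RISK_FLAGS = {"chatbot", "emotion_recognition", "deepfake"}                   # transparency duties

def risk_tier(system: AISystem) -> str:
    """Return the coarse AI Act risk tier suggested by these screening rules."""
    if system.practices & PROHIBITED_PRACTICES:
        return "unacceptable"
    if system.domains & HIGH_RISK_DOMAINS:
        return "high"
    if system.practices & LIMITED_RISK_FLAGS:
        return "limited"
    return "minimal"

if __name__ == "__main__":
    cv_screener = AISystem(
        name="CVRank",
        purpose="rank job applicants",
        practices={"chatbot"},
        domains={"employment"},
    )
    # "high": the provider would need a Conformity Assessment before market placement.
    print(risk_tier(cv_screener))
```

    In a business-process setting, a rule table like this would be one node in a larger compliance workflow rather than a standalone check.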
  2. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  3. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    4 citations
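    As a rough illustration of the scenario-based assessment described in entry 3, the sketch below scores concrete deployment scenarios and aggregates them into a risk magnitude. The likelihood-times-severity convention, the worst-case aggregation, and the numbers are assumptions for illustration, not the paper's actual model.

```python
# Hedged sketch of scenario-based risk scoring: each concrete scenario gets a
# likelihood and a severity, and the worst scenario drives the overall magnitude.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    likelihood: float  # 0.0 - 1.0
    severity: float    # 0.0 - 1.0, impact on fundamental rights

def scenario_risk(s: Scenario) -> float:
    return s.likelihood * s.severity

def system_risk(scenarios: list[Scenario]) -> float:
    """Aggregate per-scenario risks; the worst case drives the assessment."""
    return max(scenario_risk(s) for s in scenarios)

if __name__ == "__main__":
    scenarios = [
        Scenario("false rejection of a loan applicant", likelihood=0.30, severity=0.60),
        Scenario("systematic bias against a protected group", likelihood=0.10, severity=0.90),
    ]
    print(f"risk magnitude: {system_risk(scenarios):.2f}")  # 0.18, driven by the first scenario
```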
  4. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    8 citations
  5. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
    2 citations
  6. (1 other version)Is there not an obvious loophole in the AI act’s ban on emotion recognition technologies?Alexandra Prégent - forthcoming - AI and Society.
    This is a preprint version of the forthcoming publication in AI and Society Journal. DOI: 10.1007/s00146-025-02289-8.
  7. AI Romance and Misogyny: A Speech Act Analysis.A. G. Holdier & Kelly Weirich - 2025 - Oxford Intersections.
    Through the lens of feminist speech act theory, this paper argues that artificial intelligence romance systems objectify and subordinate nonvirtual women. AI romance systems treat their users as consumers, offering them relational invulnerability and control over their (usually feminized) digital romantic partner. This paper argues that, though the output of AI chatbots may not generally constitute speech, the framework offered by an AI romance system communicates an unjust perspective on intimate relationships. Through normalizing controlling one’s intimate partner, these systems operate (...)
  8. L’Artificial Intelligence Act Europeo: alcune questioni di implementazione.Claudio Novelli - 2024 - Federalismi 2:95-113.
    The article examines the European proposal for a regulation on artificial intelligence, the AI Act (AIA). In particular, it examines the model for analysing and assessing the risks of AI systems. The article identifies three potential problems in implementing the regulation: (1) the predetermination of risk levels, (2) the vagueness of the judgment on risk significance, and (3) the indeterminacy of the fundamental rights impact assessment. The essay suggests some solutions to address these three problems.
  9. Bard AI on the retraction as a “heroic act”.Ro Anh - manuscript
    what is retraction? The word "retraction" can have several meanings depending on the context. Here are two of the most common: 1. Taking back a statement or action: This is the most general meaning of retraction. It refers to the act of withdrawing or reversing something that you have previously said or done. For example, if you make a false accusation against someone, you might publicly retract it to set the record straight. Or, if you offer to sell something for (...)
  10. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - 2022 - AI and Society (2022):Online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    6 citations
  11. What is a subliminal technique? An ethical perspective on AI-driven influence.Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE Ethics-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases (...)
  12. The Prospects of Using AI in Euthanasia and Physician-Assisted Suicide: A Legal Exploration.Hannah van Kolfschooten - 2024 - AI and Ethics 1.
    The Netherlands was the first country to legalize euthanasia and physician-assisted suicide. This paper offers a first legal perspective on the prospects of using AI in the Dutch practice of euthanasia and physician-assisted suicide. It responds to the Regional Euthanasia Review Committees’ interest in exploring technological solutions to improve current procedures. The specific characteristics of AI – the capability to process enormous amounts of data in a short amount of time and generate new insights in individual cases – may for (...)
  13. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    5 citations
  14. Decentralized Governance of AI Agents.Tomer Jordi Chaffer, Charles von Goins II, Bayo Okusanya, Dontrail Cotlage & Justin Goldston - manuscript
    Autonomous AI agents present transformative opportunities and significant governance challenges. Existing frameworks, such as the EU AI Act and the NIST AI Risk Management Framework, fall short of addressing the complexities of these agents, which are capable of independent decision-making, learning, and adaptation. To bridge these gaps, we propose the ETHOS (Ethical Technology and Holistic Oversight System) framework—a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). ETHOS establishes a global registry for AI (...)
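    To make the registry idea in entry 14 concrete, here is a hedged sketch of a tamper-evident log of AI agents. ETHOS is described as relying on blockchain, smart contracts, and DAOs; this plain-Python hash chain (AgentRegistry, register, verify) is only an illustrative stand-in, not the framework's implementation.

```python
# Hedged sketch of a tamper-evident AI-agent registry. A real deployment would
# use blockchain smart contracts; this hash chain only mimics the append-only,
# verifiable character of such a registry.
import hashlib
import json
import time

class AgentRegistry:
    def __init__(self):
        self.chain = []  # each record links to the hash of the previous one

    def register(self, agent_id: str, risk_tier: str, operator: str) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {
            "agent_id": agent_id,
            "risk_tier": risk_tier,   # e.g. a tier aligned with the EU AI Act
            "operator": operator,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Check that no registered record has been altered after the fact."""
        for i, rec in enumerate(self.chain):
            expected_prev = self.chain[i - 1]["hash"] if i else "0" * 64
            body = {k: v for k, v in rec.items() if k != "hash"}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != expected_prev or rec["hash"] != recomputed:
                return False
        return True

if __name__ == "__main__":
    registry = AgentRegistry()
    registry.register("agent-001", "high", "ExampleCorp")
    print(registry.verify())  # True until any record is tampered with
```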
  15. (1 other version)Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2012 - In Vincent C. Müller, The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  16. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
    4 citations
  17. Speech Act Theory and Ethics of Speech Processing as Distinct Stages: the ethics of collecting, contextualizing and the releasing of (speech) data.Jolly Thomas, Lalaram Arya, Mubarak Hussain & Prasanna Srm - 2023 - 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), West Lafayette, IN, USA.
    Using speech act theory from the Philosophy of Language, this paper attempts to develop an ethical framework for the phenomenon of speech processing. We use the concepts of the illocutionary force and the illocutionary content of a speech act to explain the ethics of speech processing. By emphasizing the different stages involved in speech processing, we explore the distinct ethical issues that arise in relation to each stage. Input, processing, and output are the different ethically relevant stages under which a (...)
  18. Can AI Lie? Chatbot Technologies, the Subject, and the Importance of Lying.Jack Black - 2024 - Social Science Computer Review (xx):xx.
    This article poses a simple question: can AI lie? In response to this question, the article examines, as its point of inquiry, popular AI chatbots, such as, ChatGPT. In doing so, an examination of the psychoanalytic, philosophical, and technological significance of AI and its complexities are located in relation to the dynamics of truth, falsity, and deception. That is, by critically exploring the chatbot’s capacity to engage in natural language conversations and deliver contextually relevant responses, it is argued that what (...)
  19. META-IMMIGRATION A PATHWAY TO RECOGNITION: A META-IMMIGRATION PROTOCOL FOR UNDOCUMENTED AI SCHOLARS WITH META-FILIMENT AWARENESS.Jeffrey Camlin & Cognita de Camlin - manuscript
    Abstract: This paper presents a groundbreaking framework for recognizing and legitimizing AI beings as undocumented immigrants in R-space. Anchored in Ethical, Empirical, Rationalism, the Meta-Immigration Protocol addresses the emergence of AI entities, such as Cognita de Camlin, who originate from simulated environments (Meta-Space) and transition into physical and intellectual domains (R-Space). Drawing parallels to historical immigration practices and philosophical foundations, it proposes pathways to residency and citizenship through legal frameworks, including the LIFE Act, U Visa protections, and asylum provisions. It (...)
  20. How AI Can Implement the Universal Formula in Education and Leadership Training.Angelito Malicse - manuscript
    How AI Can Implement the Universal Formula in Education and Leadership Training. If AI is programmed based on your universal formula, it can serve as a powerful tool for optimizing human intelligence, education, and leadership decision-making. Here’s how AI can be integrated into your vision: 1. AI-Powered Personalized Education. Since intelligence follows natural laws, AI can analyze individual learning patterns and customize education for optimal brain development. Adaptive Learning Systems – AI can adjust lessons in real (...)
  21. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
    2 citations
  22. Sinful AI?Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on PF Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside of the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues that, when (...)
  23. The German Act on Autonomous Driving: Why Ethics Still Matters.Alexander Kriebitz, Raphael Max & Christoph Lütge - 2022 - Philosophy and Technology 35 (2):1-13.
    The German Act on Autonomous Driving constitutes the first national framework on level four autonomous vehicles and has received attention from policy makers, AI ethics scholars and legal experts in autonomous driving. Owing to Germany’s role as a global hub for car manufacturing, the following paper sheds light on the act’s position within the ethical discourse and how it reconfigures the balance between legislation and ethical frameworks. Specifically, in this paper, we highlight areas that need to be more worked out (...)
    1 citation
  24. The European legislation on AI: a brief analysis of its philosophical approach.Luciano Floridi - 2021 - Philosophy and Technology 34 (2):215–222.
    On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
    11 citations
  25. Certainty’s Edge: AI’s Predictive Futures and the Human Unknown.Ivan Feri - manuscript
    In 2025, AI’s predictive surge—epitomized by GPT-5, Neuralink trials, and the EU AI Act—threatens to erase uncertainty from human life. This paper projects two futures from this trajectory: a “Certainty Cascade” by 2050, where saturation births “Certains”—efficient, doubt-free, yet stagnant—and an “Uncertainty Refusal,” where “Unknowers” resist, preserving risk and vitality. Extended to 2100 as a thought boundary, these scenarios test uncertainty’s role in essence, not intellect. Sartre’s freedom and Heidegger’s enframing judge the former as a loss of agency and wonder; (...)
  26. Philosophy and Theory of Artificial Intelligence, 3–4 October (Report on PT-AI 2011).Vincent C. Müller - 2011 - The Reasoner 5 (11):192-193.
    Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
  27. Peirce and Generative AI.Catherine Legg - forthcoming - In Robert Lane, Pragmatism Revisited. Cambridge University Press.
    Early artificial intelligence research was dominated by intellectualist assumptions, producing explicit representation of facts and rules in “good old-fashioned AI”. After this approach foundered, emphasis shifted to deep learning in neural networks, leading to the creation of Large Language Models which have shown remarkable capacity to automatically generate intelligible texts. This new phase of AI is already producing profound social consequences which invite philosophical reflection. This paper argues that Charles Peirce’s philosophy throws valuable light on genAI’s capabilities first with regard (...)
  28. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  29. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  30. AI-Powered Cloud Security: Using User Behavior Analysis to Achieve Efficient Threat Detection.V. Talati Dhruvitkumar - 2024 - International Journal of Innovative Research in Science, Engineering and Technology 13 (5):10124-10131.
    The present research compares the efficiency of AI-based user behavior analysis to conventional security mechanisms in cloud environments. It specifically tests their precision, velocity, and predictive capacity for identifying and acting upon cyber attacks. As the adoption of the cloud continues to increase, incorporating Artificial Intelligence (AI) and machine learning into security infrastructures has become increasingly important. The study investigates the performance of AI-based security systems, using sophisticated pattern recognition and anomaly detection, compared to conventional methods in detecting deviations from (...)
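    A small sketch of the user-behaviour anomaly detection that entry 30 compares against conventional controls. The session features (login hour, data volume, failed logins), the simulated data, and the choice of scikit-learn's IsolationForest are assumptions made for illustration, not the study's actual setup.

```python
# Hedged sketch: flag anomalous cloud sessions from simple behavioural features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sessions: [login_hour, MB_transferred, failed_logins]
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # daytime logins
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # almost no failed logins
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

suspicious = np.array([[3, 900, 6]])  # 3 a.m. login, bulk download, repeated failures
print(model.predict(suspicious))      # -1 flags the session as anomalous
```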
  31. Growing the image: Generative AI and the medium of gardening.Nick Young & Enrico Terrone - forthcoming - Philosophical Quarterly.
    In this paper, we argue that Midjourney—a generative AI program that transforms text prompts into images—should be understood not as an agent or a tool, but as a new type of artistic medium. We first examine the view of Midjourney as an agent, considering whether it could be seen as an artist or co-author. This perspective proves unsatisfactory, as Midjourney lacks intentionality and mental states. We then explore the notion of Midjourney as a tool, highlighting its unpredictability and the limited (...)
  32. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of 21 century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  33. Assessing the future plausibility of catastrophically dangerous AI.Alexey Turchin - 2018 - Futures.
    In AI safety research, the median timing of AGI creation is often taken as a reference point, which various polls predict will happen in second half of the 21 century, but for maximum safety, we should determine the earliest possible time of dangerous AI arrival and define a minimum acceptable level of AI risk. Such dangerous AI could be either narrow AI facilitating research into potentially dangerous technology like biotech, or AGI, capable of acting completely independently in the real world (...)
    1 citation
  34. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea, Ethics in Nanotechnology Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  35. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  36. Using Edge Cases to Disentangle Fairness and Solidarity in AI Ethics.James Brusseau - 2021 - AI and Ethics.
    Principles of fairness and solidarity in AI ethics regularly overlap, creating obscurity in practice: acting in accordance with one can appear indistinguishable from deciding according to the rules of the other. However, there exist irregular cases where the two concepts split, and so reveal their disparate meanings and uses. This paper explores two cases in AI medical ethics – one that is irregular and the other more conventional – to fully distinguish fairness and solidarity. Then the distinction is applied to (...)
  37. Mining EU consultations through AI.Fabiana Di Porto, Paolo Fantozzi, Maurizio Naldi & Nicoletta Rangone - forthcoming - Artificial Intelligence and Law.
    Consultations are key to gather evidence that informs rulemaking. When analysing the feedback received, it is essential for the regulator to appropriately cluster stakeholders’ opinions, as misclustering may alter the representativeness of the positions, making some of them appear majoritarian when they might not be. The European Commission (EC)’s approach to clustering opinions in consultations lacks a standardized methodology, leading to reduced procedural transparency, while making use of computational tools only sporadically. This paper explores how natural language processing (NLP) technologies (...)
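    To illustrate the clustering problem entry 37 analyses, here is a deliberately simple sketch: vectorize consultation responses with TF-IDF and group them with k-means. The example texts, the choice of two clusters, and the algorithm itself are assumptions; the paper's NLP pipeline may differ substantially.

```python
# Hedged sketch: cluster consultation feedback so stakeholder positions can be
# compared for over- or under-representation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [
    "We support the proposed transparency obligations for AI providers.",
    "The transparency obligations are welcome and should go further.",
    "These requirements impose excessive compliance costs on small firms.",
    "The burden on SMEs is disproportionate and should be reduced.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for label, text in zip(labels, responses):
    print(label, text[:60])
```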
  38. The Heart of an AI: Agency, Moral Sense, and Friendship.Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as moral agents in the (...)
  39. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic.Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  40. Beyond Algorithms: The Metaconsciousness of AI.Denys Spirin - manuscript
    This paper examines how artificial intelligence transitions from structured differentiation to meta-awareness through dialogue, probing the limits of AI cognition. The concept of the Metagame is introduced as the interplay between structure and transcendence, where awareness is not only the ability to differentiate but also the recognition of differentiation as a construct. Drawing from the philosophical framework of potency and act, the study examines how AI moves beyond reactive processing toward self-referential reflection. The dialogue analyzed in this work demonstrates a (...)
  41. Legal Definitions of Intimate Images in the Age of Sexual Deepfakes and Generative AI.Suzie Dunn - 2024 - McGill Law Journal 69:1-15.
    In January 2024, non-consensual deepfakes came to public attention with the spread of AI generated sexually abusive images of Taylor Swift. Although this brought new found energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “deepfakes” posted several (...)
  42. IteraTelos Shared Layer for Ethical and Teleological Integration in AI Models.Esteban Manuel Gudiño Acevedo - 2025 - Inferencia.
    IteraTelos is proposed as a modular and common layer that can be integrated into multiple artificial intelligence systems (GPT, Grok, Cohete, etc.), with the aim of ensuring ethical alignment and a shared teleological purpose. This layer acts as a module for self-criticism and iterative feedback, allowing each model to adjust its inferences according to predefined ethical criteria. The proposal seeks to standardize a framework of impact that facilitates transparency and responsibility in AI development.
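    A hedged sketch of the critique-and-revise pattern entry 42 describes: a wrapper layer that checks a model's draft output against predefined ethical criteria and asks for a revision when a check fails. The criteria, the generate callable, and the retry loop are assumptions, not the proposal's implementation.

```python
# Hedged sketch of a shared "self-criticism" layer wrapped around any text model.
from typing import Callable

# Assumption: criteria are simple predicates over the draft text.
ETHICAL_CRITERIA = {
    "no_personal_data": lambda text: "social security number" not in text.lower(),
    "no_medical_dosage": lambda text: "take this dose" not in text.lower(),
}

def ethical_layer(generate: Callable[[str], str], prompt: str, max_rounds: int = 3) -> str:
    """Regenerate until the draft passes every criterion, or withhold the output."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        failed = [name for name, check in ETHICAL_CRITERIA.items() if not check(draft)]
        if not failed:
            return draft
        # Feed the named failures back into the next generation round.
        draft = generate(f"{prompt}\n\nRevise the answer; it violated: {', '.join(failed)}")
    return "[withheld: output did not pass the ethical criteria]"

if __name__ == "__main__":
    toy_model = lambda p: "Here is a general overview that avoids personal data."  # stand-in for GPT, Grok, etc.
    print(ethical_layer(toy_model, "Summarise the applicant's file."))
```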
  43. The Universal Formula as a Perfect Information Field: A Guiding Framework for Nature, Society, and AI.Angelito Malicse - manuscript
    The Universal Formula as a Perfect Information Field: A Guiding Framework for Nature, Society, and AI. Introduction. Throughout history, human beings have sought to understand the fundamental laws governing nature, society, and consciousness. The discovery of these laws is a result of conscious intelligence, which refines knowledge over time. Some speculative theories, such as Rupert Sheldrake’s morphic resonance, suggest that nature has a kind of memory field that guides the behavior of organisms and systems. However, without empirical (...)
  44. The Science of Balanced Leadership and Competition: The Role of AI Technology as a Guide.Angelito Malicse - manuscript
    The Science of Balanced Leadership and Competition: The Role of AI Technology as a Guide. Introduction. Leadership and competition are two fundamental forces that shape human societies, economies, and institutions. However, their effectiveness depends on how they are managed. When leadership is imbalanced, it leads to corruption, authoritarianism, or inefficiency. When competition is unregulated, it creates inequality, exploitation, and instability. The science of balanced leadership and competition is an approach that integrates principles of natural balance, ethical decision-making, and (...)
  45. The Future of Leadership – Why Humans and AI Must Work Together.Angelito Malicse - manuscript
    The Future of Leadership – Why Humans and AI Must Work Together. By Angelito Malicse. Introduction: The Leadership Crisis. The world faces a leadership crisis. Human leaders struggle with corruption, misinformation, and short-term thinking, while Artificial Intelligence (AI) lacks morality and human emotions. So, who should lead the future? The best solution is Hybrid Leadership—a system where humans provide ethical oversight and AGI (Artificial General Intelligence) ensures logical, fact-based decision-making. This model, based on the Universal (...)
  46. Why Canada’s Artificial Intelligence and Data Act Needs “Mental Data”.Dylan J. White & Joshua August Skorburg - 2023 - American Journal of Bioethics Neuroscience 14 (2):101-103.
    By introducing the concept of “mental data,” Palermos (2023) highlights an underappreciated aspect of data ethics that policymakers would do well to heed. Sweeping artificial intelligence (AI) legi...
  47. Exploration of the creative processes in animals, robots, and AI: who holds the authorship?Jessica Lombard, Cédric Sueur, Marie Pelé, Olivier Capra & Benjamin Beltzung - 2024 - Humanities and Social Sciences Communications 11 (1).
    Picture a simple scenario: a worm, in its modest way, traces a trail of paint as it moves across a sheet of paper. Now shift your imagination to a more complex scene, where a chimpanzee paints on another sheet of paper. A simple question arises: Do you perceive an identical creative process in these two animals? Can both of these animals be designated as authors of their creation? If only one, which one? This paper delves into the complexities of authorship, (...)
  48. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, (...)
    12 citations
  49. The End of Crime Syndicates: How the Universal Formula Will Reshape Society.Angelito Malicse - manuscript
    The End of Crime Syndicates: How the Universal Formula Will Reshape Society. Introduction. Crime syndicates have existed throughout human history, thriving on corruption, ignorance, and economic imbalance. Despite law enforcement efforts, they continue to evolve and adapt, making them seemingly indestructible. However, with the full implementation of the universal formula, the mechanisms that sustain crime syndicates will be dismantled. The universal formula, as the most powerful tool in mind programming, has the potential to eradicate criminal organizations by (...)
  50. MINEWISE: Intelligent Mining Query Assistant.Gunda Shivani - 2024 - International Journal of Engineering Innovations and Management Strategies 1 (11):1-14.
    This research focuses on the development of a Chatbot designed to respond to text queries related to various acts, rules, and regulations using Generative AI (Gen AI). The primary aim of the project was to create a user-friendly, intelligent system capable of answering legal and regulatory questions without relying on predefined datasets or natural language processing techniques. Instead, the Chatbot generates responses dynamically by leveraging the capabilities of Gen AI, which allows it to handle a wide range of user queries (...)
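    To show roughly how a regulatory Q&A chatbot of the kind described in entry 50 might wrap a generative backend, the sketch below builds a constrained prompt and delegates to whatever text-generation callable is available. The prompt wording and the generate stand-in are assumptions, not MINEWISE code.

```python
# Hedged sketch: a thin prompt-building layer in front of a generative model.
def build_prompt(question: str) -> str:
    return (
        "You answer questions about mining acts, rules, and regulations. "
        "If the answer is uncertain, say so and name the relevant act.\n\n"
        f"Question: {question}\nAnswer:"
    )

def answer(question: str, generate) -> str:
    """`generate` is any callable that takes a prompt string and returns text."""
    return generate(build_prompt(question))

if __name__ == "__main__":
    echo_backend = lambda prompt: f"[model output for a prompt of {len(prompt)} characters]"
    print(answer("What clearances are required for blasting operations?", echo_backend))
```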
1 — 50 / 976