Results for 'AI Regulation'

964 found
  1. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  2. AI-Driven Emotion Recognition and Regulation Using Advanced Deep Learning Models.S. Arul Selvan - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):383-389.
    Emotion detection and management have emerged as pivotal areas in human-computer interaction, offering potential applications in healthcare, entertainment, and customer service. This study explores the use of deep learning (DL) models to enhance emotion recognition accuracy and enable effective emotion regulation mechanisms. By leveraging large datasets of facial expressions, voice tones, and physiological signals, we train deep neural networks to recognize a wide array of emotions with high precision. The proposed system integrates emotion recognition with adaptive management strategies that (...)
  3. (1 other version)Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
    5 citations
  4. Transparent, explainable, and accountable AI for robotics.Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080.
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
    25 citations
  5. Emotional AI as affective artifacts: A philosophical exploration.Manh-Tung Ho & Manh-Toan Ho - manuscript
    In recent years, with the advances in machine learning and neuroscience, the abundances of sensors and emotion data, computer engineers have started to endow machines with ability to detect, classify, and interact with human emotions. Emotional artificial intelligence (AI), also known as a more technical term in affective computing, is increasingly more prevalent in our daily life as it is embedded in many applications in our mobile devices as well as in physical spaces. Critically, emotional AI systems have not only (...)
  6. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  7. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
    2 citations
  8. The European legislation on AI: a brief analysis of its philosophical approach.Luciano Floridi - 2021 - Philosophy and Technology 34 (2):215–222.
    On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
    11 citations
  9. Can AI become an Expert?Hyeongyun Kim - 2024 - Journal of Ai Humanities 16 (4):113-136.
    With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human (...)
  10. Why interdisciplinary research in AI is so important, according to Jurassic Park.Marie Oldfield - 2020 - The Tech Magazine 1 (1):1.
    Why interdisciplinary research in AI is so important, according to Jurassic Park. -/- “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” -/- I think this quote resonates with us now more than ever, especially in the world of technological development. The writers of Jurassic Park were years ahead of their time with this powerful quote. -/- As we build new technology, and we push on to see what can actually (...)
  11. Towards a Taxonomy of AI Risks in the Health Domain.Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary Ai (Transai).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order (...)
  12. The emperor is naked: Moral diplomacies and the ethics of AI.Constantin Vica, Cristina Voinea & Radu Uszkai - 2021 - Információs Társadalom 21 (2):83-96.
    With AI permeating our lives, there is widespread concern regarding the proper framework needed to morally assess and regulate it. This has given rise to many attempts to devise ethical guidelines that infuse guidance for both AI development and deployment. Our main concern is that, instead of a genuine ethical interest for AI, we are witnessing moral diplomacies resulting in moral bureaucracies battling for moral supremacy and political domination. After providing a short overview of what we term ‘ethics washing’ in (...)
    2 citations
  13. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how (...)
  14. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - forthcoming - Biolaw Journal.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  15. Computer says "No": The Case Against Empathetic Conversational AI.Alba Curry & Amanda Cercas Curry - 2023 - Findings of the Association for Computational Linguistics: Acl 2023.
    Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of (...)
  16. Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  17. Rethinking the Redlines Against AI Existential Risks.Yi Zeng, Xin Guan, Enmeng Lu & Jinyu Fan - manuscript
    The ongoing evolution of advanced AI systems will have profound, enduring, and significant impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially causing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear redlines to prevent AI-induced existential risks by constraining and regulating advanced AI and their related AI actors. This paper explores different concepts of AI existential risk, connects (...)
  18. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  19. (1 other version)Ethics as a service: a pragmatic operationalisation of AI ethics.Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239–256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory (...)
    27 citations
  20. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  21. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences.Lode Lauwaert - 2023 - Artificial Intelligence Review 56:3473–3504.
    Since its emergence in the 1960s, Artificial Intelligence (AI) has grown to conquer many technology products and their fields of application. Machine learning, as a major part of the current AI solutions, can learn from the data and through experience to reach high performance on various tasks. This growing success of AI algorithms has led to a need for interpretability to understand opaque models such as deep neural networks. Various requirements have been raised from different domains, together with numerous tools (...)
  22. Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy.Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them becomes more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots (...)
    11 citations
  23. Beyond Interpretability and Explainability: Systematic AI and the Function of Systematizing Thought.Matthieu Queloz - manuscript
    Recent debates over artificial intelligence have focused on its perceived lack of interpretability and explainability. I argue that these notions fail to capture an important aspect of what end-users—as opposed to developers—need from these models: what is needed is systematicity, in a more demanding sense than the compositionality-related sense that has dominated discussions of systematicity in the philosophy of language and cognitive science over the last thirty years. To recover this more demanding notion of systematicity, I distinguish between (i) the (...)
  24. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough.John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)
    1 citation
  25. The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation.Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi - 2021 - AI and Society 36 (1):59–77.
    In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on (...)
    26 citations
  26. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  27. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine.Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation’s war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially (...)
  28. Towards broadening the perspective on lethal autonomous weapon systems ethics and regulations.Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho - 2020 - In Diego Andres Salcedo, Bianca Ximenes & Geber Ramalho (eds.), Rio Seminar on Autonomous Weapons Systems. Brasília: Alexandre de Gusmão Foundation. pp. 133-158.
    Our reflections on LAWS issues are the result of the work of our research group on AI and ethics at the Informatics Center in partnership with the Information Science Department, both from the Federal University of Pernambuco, Brazil. In particular, our propositions and provocations are tied to Bianca Ximenes’s ongoing doctoral thesis, advised by Prof. Geber Ramalho, from the area of computer science, and co-advised by Prof. Diego Salcedo, from the humanities. Our research group is interested in answering two tricky (...)
  29. How could the United Nations Global Digital Compact prevent cultural imposition and hermeneutical injustice?Arthur Gwagwa & Warmhold Jan Thomas Mollema - 2024 - Patterns 5 (11).
    As the geopolitical superpowers race to regulate the digital realm, their divergent rights-centered, market-driven, and social-control-based approaches require a global compact on digital regulation. If diverse regulatory jurisdictions remain, forms of domination entailed by cultural imposition and hermeneutical injustice related to AI legislation and AI systems will follow. We argue for consensual regulation on shared substantive issues, accompanied by proper standardization and coordination. Failure to attain consensus will fragment global digital regulation, enable regulatory capture by authoritarian powers (...)
  30. The algorithm audit: Scoring the algorithms that score us.Jovana Davidovic, Shea Brown & Ali Hasan - 2021 - Big Data and Society 8 (1).
    In recent years, the ethical impact of AI has been increasingly scrutinized, with public scandals emerging over biased outcomes, lack of transparency, and the misuse of data. This has led to a growing mistrust of AI and increased calls for mandated ethical audits of algorithms. Current proposals for ethical assessment of algorithms are either too high level to be put into practice without further guidance, or they focus on very specific and technical notions of fairness or transparency that do not (...)
    12 citations
  31. A Proposed Taxonomy for the Evolutionary Stages of Artificial Intelligence: Towards a Periodisation of the Machine Intellect Era.Demetrius Floudas - manuscript
    As artificial intelligence (AI) systems continue their rapid advancement, a framework for contextualising the major transitional phases in the development of machine intellect becomes increasingly vital. This paper proposes a novel chronological classification scheme to characterise the key temporal stages in AI evolution. The Prenoëtic era, spanning all of history prior to the year 2020, is defined as the preliminary phase before substantive artificial intellect manifestations. The Protonoëtic period, which humanity has recently entered, denotes the initial emergence of advanced foundation (...)
  32. Adopting trust as an ex post approach to privacy.Haleh Asgarinia - 2024 - AI and Ethics 3 (4).
    This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the person who is the trustee (i.e., with whom information has been shared) must fulfil trust norms. According to (...)
  33. The Phenomenology of ChatGPT: A Semiotics.Thomas Byrne - 2024 - Journal of Consciousness Studies 31 (3):6-27.
    This essay comprises a first phenomenological semiotics of ChatGPT. I analyse how we experience the language signs generated by that AI. This task is accomplished in two steps. First, I introduce a conceptual scaffolding for the project, by introducing core tenets of Husserl's semiotics. Second, I mould Husserl's theory to develop my phenomenology of the passive and active consciousness of the language signs composed by ChatGPT. On the one hand, by discussing temporality, I demonstrate that ChatGPT can passively demand me (...)
  34. Artificial Intelligence Ethics and Safety: practical tools for creating "good" models.Nicholas Kluge Corrêa -
    The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui to promote awareness and the importance of ethical implementation and regulation of AI. AIRES is now an organization with chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil). AIRES at PUCRS is the first international chapter of AIRES, (...)
  35. (1 other version)Artificial intelligence crime: an interdisciplinary analysis of foreseeable threats and solutions.Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2019 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and (...)
    13 citations
  36. The German Act on Autonomous Driving: Why Ethics Still Matters.Alexander Kriebitz, Raphael Max & Christoph Lütge - 2022 - Philosophy and Technology 35 (2):1-13.
    The German Act on Autonomous Driving constitutes the first national framework on level four autonomous vehicles and has received attention from policy makers, AI ethics scholars and legal experts in autonomous driving. Owing to Germany’s role as a global hub for car manufacturing, the following paper sheds light on the act’s position within the ethical discourse and how it reconfigures the balance between legislation and ethical frameworks. Specifically, in this paper, we highlight areas that need to be more worked out (...)
    1 citation
  37. Artificial Intelligence and Legal Disruption: A New Model for Analysis.John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
    1 citation
  38. Nature and the machines.Huw Price & Matthew Connolly - manuscript
    Does artificial intelligence (AI) pose existential risks to humanity? Some critics feel this question is getting too much attention, and want to push it aside in favour of conversations about the immediate risks of AI. These critics now include the journal Nature, where a recent editorial urges us to 'stop talking about tomorrow's AI doomsday when AI poses risks today.' We argue that this is a serious failure of judgement, on Nature's part. In science, as in everyday life, we expect (...)
  39. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges.Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. -/- Target audience: The target audiences for this tutorial are (...)
    1 citation
  40. (1 other version)What the near future of artificial intelligence could be.Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    In this article, I shall argue that AI’s likely developments and possible challenges are best understood if we interpret AI not as a marriage between some biological-like intelligence and engineered artefacts, but as a divorce between agency and intelligence, that is, the ability to solve problems successfully and the necessity of being intelligent in doing so. I shall then look at five developments: (1) the growing shift from logic to statistics, (2) the progressive adaptation of the environment to AI rather (...)
    26 citations
  41. Conceptualizing Policy in Value Sensitive Design: A Machine Ethics Approach.Steven Umbrello - 2020 - In Steven John Thompson (ed.), Machine Law, Ethics, and Morality in the Age of Artificial Intelligence. IGI Global. pp. 108-125.
    The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology or instrumentally as a tool. The various parts of VSD’s principled approach would then aim to discern the various policy requirements that any given technological artifact under consideration would implicate. Yet, little to no consideration has been given to how laws, regulations, policies and social norms engage within (...)
    Bookmark   4 citations  
  42. Sustaining the Higher-Level Principle of Equal Treatment in Autonomous Driving.Judit Szalai - 2020 - In Marco Norskov, Johanna Seibt & Oliver S. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. pp. 384-394.
    This paper addresses the cultural sustainability of artificial intelligence use through one of its most widely discussed instances: autonomous driving. The introduction of self-driving cars places us in a radically novel moral situation, requiring advance, reflectively endorsed, forced, and iterable choices, with yet uncharted forms of risk imposition. The argument is meant to explore the necessity and possibility of maintaining one of our most fundamental moral-cultural principles in this new context, that of the equal treatment of persons. It is claimed (...)
  43. The poor performance of apps assessing skin cancer risk.Jessica Morley, Luciano Floridi & Ben Goldacre - 2020 - British Medical Journal 368 (8233).
    Over the past year, technology companies have made headlines claiming that their artificially intelligent (AI) products can outperform clinicians at diagnosing breast cancer, brain tumours, and diabetic retinopathy. Claims such as these have influenced policy makers, and AI now forms a key component of the national health strategies in England, the United States, and China. While it is positive to see healthcare systems embracing data analytics and machine learning, concerns remain about the efficacy, ethics, and safety of some commercial, AI (...)
  44. Privacy and Machine Learning-Based Artificial Intelligence: Philosophical, Legal, and Technical Investigations.Haleh Asgarinia - 2024 - Dissertation, Department of Philosophy, University of Twente
    This dissertation consists of five chapters, each written as independent research papers that are unified by an overarching concern regarding information privacy and machine learning-based artificial intelligence (AI). This dissertation addresses the issues concerning privacy and AI by responding to the following three main research questions (RQs): RQ1. ‘How does an AI system affect privacy?’; RQ2. ‘How effectively does the General Data Protection Regulation (GDPR) assess and address privacy issues concerning both individuals and groups?’; and RQ3. ‘How can the (...)
  45. Is Spotify Bad for Democracy? Artificial Intelligence, Cultural Democracy, and Law.Jonathan Gingerich - 2022 - Yale Journal of Law and Technology 24:227-316.
    Much scholarly attention has recently been devoted to ways in which artificial intelligence (AI) might weaken formal political democracy, but little attention has been devoted to the effect of AI on “cultural democracy”—that is, democratic control over the forms of life, aesthetic values, and conceptions of the good that circulate in a society. This work is the first to consider in detail the dangers that AI-driven cultural recommendations pose to cultural democracy. This Article argues that AI threatens to weaken cultural (...)
    Bookmark   1 citation  
  46. Computer Models of Constitutive Social Practices.Richard Evans - 2013 - In Vincent Müller (ed.), Philosophy and Theory of Artificial Intelligence. Springer. pp. 389-409.
    Research in multi-agent systems typically assumes a regulative model of social practice. This model starts with agents who are already capable of acting autonomously to further their individual ends. A social practice, according to this view, is a way of achieving coordination between multiple agents by restricting the set of actions available. For example, in a world containing cars but no driving regulations, agents are free to drive on either side of the road. To prevent collisions, we introduce driving regulations, (...)
  47. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Patrick Lin - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make (...)
  48. The social turn of artificial intelligence.Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that autonomous social machines (...)
    Bookmark   1 citation  
  49. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
    Bookmark   33 citations  
  50. Social Machinery and Intelligence.Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of mechanisms. We (...)