Contents
480 found (showing 1–50)
  1. Regulation by Design: Features, Practices, Limitations, and Governance Implications. Kostina Prifti, Jessica Morley, Claudio Novelli & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-23.
    Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, regulatees, (...)
  2. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG frameworks (...)
    1 citation
  3. Four Bottomless Errors and the Collapse of Statistical Fairness. James Brusseau - manuscript
    The AI ethics of statistical fairness is an error, the approach should be abandoned, and the accumulated academic work deleted. The argument proceeds by identifying four recurring mistakes within statistical fairness. The first conflates fairness with equality, which confines thinking to similars being treated similarly. The second and third errors derive from a perspectival ethical view which functions by negating others and their viewpoints. The final mistake constrains fairness to work within predefined social groups instead of allowing unconstrained fairness to subsequently (...)
  4. Endangered Experiences: Skipping Newfangled Technologies and Sticking to Real Life. Marc Champagne - manuscript
  5. Aspirational Affordances of AI. Sina Fazelpour & Meica Magnani - manuscript
    As artificial intelligence (AI) systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how technologies of representation (paintings, literature, photographs, films, or video games) shape the exercising of imagination, particularly as it pertains to (...)
  6. What is AI safety? What do we want it to be? Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  7. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本). Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to look beyond (...)
  8. Toward a social theory of Human-AI Co-creation: Bringing techno-social reproduction and situated cognition together with the following seven premises. Manh-Tung Ho & Quan-Hoang Vuong - manuscript
    This article synthesizes the current theoretical attempts to understand human-machine interactions and introduces seven premises for understanding our emerging dynamics with increasingly competent, pervasive, and instantly accessible algorithms. The hope is that these seven premises can build toward a social theory of human-AI co-creation. The focus on human-AI co-creation is intended to emphasize two factors. First is the fact that our machine learning systems are socialized. Second is the co-evolving nature of the human mind and AI systems, as smart devices form an (...)
  9. (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  10. From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Tianqi Kou - manuscript
    Two goals, improving the replicability and the accountability of Machine Learning research respectively, have attracted much attention from the AI ethics and Machine Learning communities. Despite sharing the measure of improving transparency, the two goals are discussed in different registers: replicability registers with scientific reasoning, whereas accountability registers with ethical reasoning. Given the existing challenge of the Responsibility Gap, namely holding Machine Learning scientists accountable for Machine Learning harms despite their distance from the sites of application, this paper (...)
  11. The debate on the ethics of AI in health care: a reconstruction and critical review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  12. Emotional Artificial Intelligence. Salah Osman - manuscript
    “Emotional AI,” also known as “affective computing,” “human-centered AI,” and “social AI,” is a relatively new concept whose technologies are still under development. It is a field of computer science that aims to develop machines capable of understanding human emotions. The concept simply refers to detecting and programming human emotions in order to improve artificial intelligence and widen its range of use, so that robots are not limited to analyzing and interacting with the cognitive (logical) aspects of human communication, but extend their analysis and interaction to its emotional aspects.
  13. Toward a Machine Ethics: Artificial Intelligence Technologies and the Challenges of Decision-Making. Salah Osman - manuscript
    Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behavior in human-made machines that use artificial intelligence. It differs from other ethical fields related to engineering and technology: machine ethics should not be confused with computer ethics, which focuses on the moral issues surrounding humans’ use of computers; it must also be distinguished from the philosophy of technology, which is concerned with epistemological, ontological, and ethical approaches, and with the major social, economic, and political effects of technological practices in all their variety. Machine ethics, rather, means (...)
  14. The Metaverse and the Existential Crisis. Salah Osman - manuscript
    We are residents of the Internet, drawing through it the features of the world we wish for, and playing characters utterly unlike ourselves; we falsely fulfill dreams that may be out of reach, and we believe one another’s lies and idealized self-images; we enjoy words without deeds, hearts without feelings, paradises without bliss, tongues in the darkness of closed mouths that speak through the movements of fingers, and a freedom fenced in by illusion. Without the Internet, most people would surely appear at their natural size, which we do not know, or rather know and ignore! There is no doubt that the emergence of the Internet and the widening scope of its uses constitutes an event (...)
  15. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration. Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has treated explainability as a scientific problem to be dealt with using the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  16. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
    1 citation
  17. Consciousness, Machines, and Moral Status. Henry Shevlin - manuscript
    In light of the recent breakneck pace in machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that, as matters stand, these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. In Section 1 of the paper I (...)
    2 citations
  18. AI Alignment Problem: “Human Values” don’t Actually Exist. Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  19. Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty. Alexey Turchin - manuscript
    The long unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for some idiosyncratic reasons (s-risks), are a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  20. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising route to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  21. (1 other version) Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe, both "rational" and "free", while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  22. Responsible Artificial Intelligence: Introducing “Nomadic AI Principles” for Central Asia. Ammar Younas - manuscript
    We propose that Central Asia develop its own AI ethics principles, which we suggest calling the “Nomadic AI Principles.”
    3 citations
  23. Harmonizing Law and Innovations in Nanomedicine, Artificial Intelligence (AI) and Biomedical Robotics: A Central Asian Perspective. Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    Recent progress in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies affect the functionality of “law in action,” which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers to harmonize AI, nanomedicine and robotics within the Central Asian legal system.
  24. Trust in AI: Progress, Challenges, and Future Directions. Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily life through various applications, services, and products explains the significance of trust/distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  25. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure (...)
  26. Ethics of Artificial Intelligence. Stefan Buijsman, Michael Klenk & Jeroen van den Hoven - forthcoming - In Nathalie Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of AI. Cambridge University Press.
    Artificial Intelligence (AI) is increasingly adopted in society, creating numerous opportunities but at the same time posing ethical challenges. Many of these are familiar, such as issues of fairness, responsibility and privacy, but are presented in a new and challenging guise due to our limited ability to steer and predict the outputs of AI systems. This chapter first introduces these ethical challenges, stressing that overviews of values are a good starting point but frequently fail to suffice due to the context (...)
    4 citations
  27. Broomean(ish) Algorithmic Fairness? Clinton Castro - forthcoming - Journal of Applied Philosophy.
    Recently, there has been much discussion of ‘fair machine learning’: fairness in data-driven decision-making systems (which are often, though not always, made with assistance from machine learning systems). Notorious impossibility results show that we cannot have everything we want here. Such problems call for careful thinking about the foundations of fair machine learning. Sune Holm has identified one promising way forward, which involves applying John Broome's theory of fairness to the puzzles of fair machine learning. Unfortunately, his application of Broome's (...)
    1 citation
  28. The Representative Individuals Approach to Fair Machine Learning. Clinton Castro & Michele Loi - forthcoming - AI and Ethics.
    The demands of fair machine learning are often expressed in probabilistic terms. Yet, most of the systems of concern are deterministic in the sense that whether a given subject will receive a given score on the basis of their traits is, for all intents and purposes, either zero or one. What, then, can justify this probabilistic talk? We argue that the statistical reference classes used in fairness measures can be understood as defining the probability that hypothetical persons, who are representative (...)
    1 citation
  29. Does Predictive Sentencing Make Sense? Clinton Castro, Alan Rubel & Lindsey Schwartz - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper examines the practice of using predictive systems to lengthen the prison sentences of convicted persons when the systems forecast a higher likelihood of re-offense or re-arrest. There has been much critical discussion of technologies used for sentencing, including questions of bias and opacity. However, there hasn’t been a discussion of whether this use of predictive systems makes sense in the first place. We argue that it does not by showing that there is no plausible theory of punishment that (...)
  30. Sims and Vulnerability: On the Ethics of Creating Emulated Minds. Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  31. The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    31 citations
  32. Artificial Intelligence and Legal Disruption: A New Model for Analysis. John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
    1 citation
  33. Understanding Artificial Agency. Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy, as well as its abilities to directly impact the surrounding world, plan long-term, and act for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
    12 citations
  34. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up such (...)
  35. Privacy Implications of AI-Enabled Predictive Analytics in Clinical Diagnostics, and How to Mitigate Them. Dessislava Fessenko - forthcoming - Bioethica Forum.
    AI-enabled predictive analytics is widely deployed in clinical care settings for healthcare monitoring, diagnostics and risk management. The technology may offer valuable insights into individual and population health patterns, trends and outcomes. Predictive analytics may, however, also tangibly affect individual patient privacy and the right thereto. On the one hand, predictive analytics may undermine a patient’s state of privacy by constructing or modifying their health identity independent of the patient themselves. On the other hand, the use of predictive analytics may (...)
  36. Digital Necrolatry: Thanabots and the Prohibition of Post-Mortem AI Simulations. Demetrius Floudas - forthcoming - Submissions to EU AI Office's Plenary Drafting the Code of Practice for General-Purpose Artificial Intelligence.
    The emergence of Thanabots, artificial intelligence systems designed to simulate deceased individuals, presents unprecedented challenges at the intersection of artificial intelligence, legal rights, and societal configuration. This short policy recommendations report examines the legal, social and psychological implications of these posthumous simulations and argues for their prohibition on ethical, sociological, and legal grounds.
  37. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
    2 citations
  38. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    1 citation
  39. (Online) Manipulation: Sometimes Hidden, Always Careless. Michael Klenk - forthcoming - Review of Social Economy.
    Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct (...)
    15 citations
  40. Deepfakes, Simone Weil, and the concept of reading. Steven R. Kraaijeveld - forthcoming - AI and Society:1-3.
  41. The ethics of using virtual assistants to help people in vulnerable positions access care. Steven R. Kraaijeveld, Hanneke van Heijster, Nadine Bol & Kirsten E. Bevelander - forthcoming - Journal of Medical Ethics.
    People in vulnerable positions who need support in their daily lives often face challenges in receiving timely access to care; for instance, due to disabilities or individual and situational vulnerabilities. There has been an increasing turn to technology-mediated ways to improve access to care, which has raised ethical questions about the appropriateness and inclusiveness of digitalising care requests. Specifically, for people in vulnerable positions, digitalisation is meant to facilitate requests for access to healthcare resources and to simplify the process of (...)
  42. Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults. Alex John London - forthcoming - IEEE Transactions on Technology and Society.
    This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load (...)
    1 citation
  43. Algorithmic Profiling as a Source of Hermeneutical Injustice. Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only demonstrate how (...)
    4 citations
  44. Discerning genuine and artificial sociality: a technomoral wisdom to live with chatbots. Katsunori Miyahara & Hayate Shimizu - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    Chatbots powered by large language models (LLMs) are increasingly capable of engaging in what seems like natural conversations with humans. This raises the question of whether we should interact with these chatbots in a morally considerate manner. In this chapter, we examine how to answer this question from within the normative framework of virtue ethics. In the literature, two kinds of virtue ethics arguments, the moral cultivation and the moral character argument, have been advanced to argue that we should afford (...)
  45. AI4Science and the Context Distinction. Moti Mizrahi - forthcoming - AI and Ethics.
    “AI4Science” refers to the use of Artificial Intelligence (AI) in scientific research. As AI systems become more widely used in science, we need guidelines for when such uses are acceptable and when they are unacceptable. To that end, I propose that the distinction between the context of discovery and the context of justification, which comes from philosophy of science, may provide a preliminary but still useful guideline for acceptable uses of AI in science. Given that AI systems used in scientific (...)
  46. Why the NSA didn’t diminish your privacy but might have violated your right to privacy. Lauritz Munch - forthcoming - Analysis.
    According to a popular view, privacy is a function of people not knowing or rationally believing some fact about you. But intuitively it seems possible for a perpetrator to violate your right to privacy without learning any facts about you. For example, it seems plausible to say that the US National Security Agency’s PRISM program violated, or could have violated, the privacy rights of the people whose information was collected, despite the fact that the NSA, for the most part, merely (...)
    2 citations
  47. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    5 citations
  48. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks these variations and that (...)
    3 citations
  49. Is there not an obvious loophole in the AI act’s ban on emotion recognition technologies? Alexandra Prégent - forthcoming - AI and Society.
    This is a preprint version of the forthcoming publication in AI and Society Journal. DOI: 10.1007/s00146-025-02289-8.
  50. Automated Influence and the Challenge of Cognitive Security. Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
    1 citation