Contents
345 found (showing 1–50)
  1. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness.Jonathan Birch - manuscript
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure (...)
  2. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing, however, this paper argues that conventional ESG frameworks (...)
    1 citation
  3. Endangered Experiences: Skipping Newfangled Technologies and Sticking to Real Life.Marc Champagne - manuscript
  4. Investigating gender and racial biases in DALL-E Mini Images.Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - manuscript
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-3 and image generators like DALL-E 2, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  5. Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany.Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  6. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  7. Emotional Artificial Intelligence.Salah Osman - manuscript
    Emotional artificial intelligence, also known as “affective computing”, “human-centered AI”, and “social AI”, is a relatively new concept whose technologies are still under development. It is a field of computer science that aims to build machines capable of understanding human emotions. The concept simply refers to detecting and programming human emotions in order to improve artificial intelligence and broaden its range of uses, so that robots do not merely analyze and respond to the cognitive (logical) aspects of human communication, but extend their analysis and interaction to its emotional aspects as well.
  8. Toward Machine Ethics: Artificial Intelligence Technologies and the Challenges of Decision-Making.Salah Osman - manuscript
    Machine ethics is the part of the ethics of artificial intelligence concerned with adding or ensuring moral behavior in human-made machines that use artificial intelligence. It differs from other ethical fields related to engineering and technology: it should not be confused, for example, with computer ethics, which focuses on the moral issues arising from humans’ use of computers; it must also be distinguished from the philosophy of technology, which deals with epistemological, ontological, and ethical approaches to, and the major social, economic, and political effects of, technological practices in all their variety. Machine ethics, by contrast, concerns (...)
  9. The Metaverse and the Existential Crisis.Salah Osman - manuscript
    We reside on the Internet, drawing through it the contours of the world we desire, and acting out characters far removed from who we are. We falsely realize dreams that may be out of reach, and we believe one another’s lies and idealizations; we enjoy words without deeds, hearts without emotions, paradises without bliss, tongues in the darkness of closed mouths speaking through the movement of fingers, and a freedom fenced in by illusion. Without the Internet, most people would surely appear at their natural size, which we do not know, or rather which we know and ignore! There is no doubt that the emergence of the Internet and the expansion of its uses constitutes an event (...)
  10. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability, and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with via the typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  11. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  12. Back to the Future: Curing Past Sufferings and S-Risks via Indexical Uncertainty.Alexey Turchin - manuscript
    The long, unbearable sufferings of the past, and the agonies experienced in some future timelines in which a malevolent AI could torture people for idiosyncratic reasons (s-risks), constitute a significant moral problem. Such events either already happened or will happen in causally disconnected regions of the multiverse, and thus it seems unlikely that we can do anything about them. However, at least one purely theoretical way to cure past sufferings exists. If we assume that there is no stable substrate of (...)
  13. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”.Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  14. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as constant, ordered sets of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  15. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to (...)
  16. Conscience: the mechanism of morality.Jeffrey White - manuscript
    Conscience is often referred to yet not understood. This text develops a theory of cognition around a model of conscience, the ACTWith model. It represents a synthesis of results from contemporary neuroscience with traditional philosophy, building from Jamesian insights into the emergence of the self to narrative identity, all the while motivated by a single mechanism as represented in the ACTWith model. Emphasis is placed on clarifying historical expressions and demonstrations of conscience - Socrates, Heidegger, Kant, M.L. King - in (...)
    1 citation
  17. RESPONSIBLE ARTIFICIAL INTELLIGENCE: INTRODUCING “NOMADIC AI PRINCIPLES” FOR CENTRAL ASIA.Ammar Younas - manuscript
    We propose that Central Asia develop its own AI ethics principles, which we suggest calling the “Nomadic AI Principles”.
    3 citations
  18. HARMONIZING LAW AND INNOVATIONS IN NANOMEDICINE, ARTIFICIAL INTELLIGENCE (AI) AND BIOMEDICAL ROBOTICS: A CENTRAL ASIAN PERSPECTIVE.Ammar Younas & Tegizbekova Zhyldyz Chynarbekovna - manuscript
    The recent progress in AI, nanomedicine and robotics has increased concerns about ethics, policy and law. The increasing complexity and hybrid nature of AI and nanotechnologies affect the functionality of “law in action”, which can lead to legal uncertainty and ultimately to public distrust. There is an immediate need for collaboration between Central Asian biomedical scientists, AI engineers and academic lawyers on the harmonization of AI, nanomedicines and robotics in the Central Asian legal system.
  19. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Casolari Federico, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - unknown - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  20. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - forthcoming - Journal of Ethics and Social Philosophy.
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  21. The ethics of digital well-being: a multidisciplinary perspective.Christopher Burr & Luciano Floridi - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective. Springer.
    This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they (...)
    2 citations
  22. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
    8 citations
  23. Sims and Vulnerability: On the Ethics of Creating Emulated Minds.Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  24. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    2 citations
  25. The Ethics of Algorithmic Outsourcing in Everyday Life.John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
    2 citations
  26. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    18 citations
  27. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
    1 citation
  28. Sexuality.John Danaher - forthcoming - In Markus Dubber, Frank Pasquale & Sunit Das (eds.), Oxford Handbook of the Ethics of Artificial Intelligence. Oxford: Oxford University Press.
    Sex is an important part of human life. It is a source of pleasure and intimacy, and is integral to many people’s self-identity. This chapter examines the opportunities and challenges posed by the use of AI in how humans express and enact their sexualities. It does so by focusing on three main issues. First, it considers the idea of digisexuality, which according to McArthur and Twist (2017) is the label that should be applied to those ‘whose primary sexual identity comes (...)
    2 citations
  29. Artificial Intelligence and Legal Disruption: A New Model for Analysis.John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
  30. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - forthcoming - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: Oxford University Press.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the (...)
  31. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - forthcoming - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  32. Beyond the Brave New Nudge: Activating Ethical Reflection Over Behavioral Reaction.Julian Friedland, Kristian Myrseth & David Balkin - forthcoming - Academy of Management Perspectives.
    Behavioral intervention techniques leveraging reactive responses have gained popularity as tools for promoting ethical behavior. Choice architects, for example, design and present default opt-out options to nudge individuals into accepting preselected choices deemed beneficial to both the decision-maker and society. Such interventions can also employ mild financial incentives or affective triggers including joy, fear, empathy, social pressure, and reputational rewards. We argue, however, that ethical competence is achieved via reflection, and that heavy reliance on reactive behavioral interventions can undermine the (...)
  33. The virtues of interpretable medical artificial intelligence.Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  34. Digital Well-Being and Manipulation Online.Michael Klenk - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. Springer.
    Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not yet been sufficiently addressed. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s (...)
    6 citations
  35. (Online) Manipulation: Sometimes Hidden, Always Careless.Michael Klenk - forthcoming - Review of Social Economy.
    Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct (...)
    7 citations
  36. The Perfect Politician.Theodore M. Lechterman - forthcoming - In Living with AI: Moral Challenges. Oxford: Oxford University Press.
    Ideas for integrating AI into politics are now emerging and advancing at an accelerating pace. This chapter highlights a few different varieties and shows how they reflect different assumptions about the value of democracy. We cannot make informed decisions about which, if any, proposals to pursue without further reflection on what makes democracy valuable and how current conditions fail to fully realize it. Recent advances in political philosophy provide some guidance but leave important questions open. If AI advances to a state (...)
  37. Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults.Alex John London - forthcoming - IEEE Transactions on Technology and Society.
    This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load (...)
  38. Medical AI: Is Trust Really the Issue?Jakob Thrane Mainz - forthcoming - Journal of Medical Ethics.
    I discuss an influential argument put forward by Joshua Hatherley. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical Artificial Intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections to Hatherley’s claims: (1) (...)
  39. The Privacy Dependency Thesis and Self-Defense.Lauritz Aastrup Munch & Jakob Thrane Mainz - forthcoming - AI and Society:1-11.
    If I decide to disclose information about myself, this act can undermine other people’s ability to effectively conceal information about themselves. One case in point involves genetic information: if I share ‘my’ genetic information with others, I thereby also reveal genetic information about my biological relatives. Such dependencies are well-known in the privacy literature and are often referred to as ‘privacy dependencies’. Some take the existence of privacy dependencies to generate a moral duty to sometimes avoid sharing information about oneself. (...)
  40. Data Mining in the Context of Legality, Privacy, and Ethics.Amos Okomayin, Tosin Ige & Abosede Kolade - forthcoming - International Journal of Research and Innovation in Applied Science.
    Data mining poses a significant threat to ethics, privacy, and legality, especially when we consider that it makes it difficult for an individual or consumer (in the case of a company) to control the accessibility and usage of his or her data. Individuals should be able to control how their data in the data warehouse is accessed and utilized, while at the same time providing an enabling environment which enforces legality, privacy and ethicality on data scientists or data engineers (...)
  41. Meaningful Work and Achievement in Increasingly Automated Workplaces.W. Jared Parmer - forthcoming - The Journal of Ethics:1-25.
    As automating technologies are increasingly integrated into workplaces, one concern is that many of the human workers who remain will be relegated to more dull and less positively impactful work. This paper considers two rival theories of meaningful work that might be used to evaluate particular implementations of automation. The first is achievementism, which says that work that culminates in achievements to workers’ credit is especially meaningful; the other is the practice view, which says that work that takes the form (...)
  42. Out of control: Flourishing with carebots through embodied design.Anco Peeters - forthcoming - In L. Cavalcante Siebert, Giulio Mecacci, D. Amoroso, F. Santoni de Sio, D. Abbink & J. van den Hoven (eds.), Multidisciplinary Research Handbook on Meaningful Human Control over AI Systems. Edward Elgar Publishing.
    The increasing complexity and ubiquity of autonomously operating artificially intelligent (AI) systems call for a robust theoretical reconceptualization of responsibility and control. The Meaningful Human Control (MHC) approach to the design and operation of AI systems provides such a framework. However, in its focus on accountability and minimizing harms, it neglects how we may flourish in interaction with such systems. In this chapter, I show how the MHC framework can be expanded to meet this challenge by drawing on the ethics (...)
  43. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  44. Automated Influence and the Challenge of Cognitive Security.Sarah Rajtmajer & Daniel Susser - forthcoming - HoTSoS: ACM Symposium on Hot Topics in the Science of Security.
    Advances in AI are powering increasingly precise and widespread computational propaganda, posing serious threats to national security. The military and intelligence communities are starting to discuss ways to engage in this space, but the path forward is still unclear. These developments raise pressing ethical questions, about which existing ethics frameworks are silent. Understanding these challenges through the lens of “cognitive security,” we argue, offers a promising approach.
  45. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  46. What We Informationally Owe Each Other.Alan Rubel, Clinton Castro & Adam Pham - forthcoming - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge University Press: Cambridge University Press. pp. 21-42.
    ABSTRACT: One important criticism of algorithmic systems is that they lack transparency. Such systems can be opaque because they are complex, protected by patent or trade secret, or deliberately obscure. In the EU, there is a debate about whether the General Data Protection Regulation (GDPR) contains a “right to explanation,” and if so what such a right entails. Our task in this chapter is to address this informational component of algorithmic systems. We argue that information access is integral for respecting (...)
  47. Predicting and Preferring.Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  48. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  49. How AI can AID bioethics.Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how (...)
  50. Ethical Issues in Text Mining for Mental Health.Joshua Skorburg & Phoebe Friesen - forthcoming - In M. Dehghani & R. Boyd (eds.), The Atlas of Language Analysis in Psychology.
    A recent systematic review of Machine Learning (ML) approaches to health data, containing over 100 studies, found that the most investigated problem was mental health (Yin et al., 2019). Relatedly, recent estimates suggest that between 165,000 and 325,000 health and wellness apps are now commercially available, with over 10,000 of those designed specifically for mental health (Carlo et al., 2019). In light of these trends, the present chapter has three aims: (1) provide an informative overview of some of the recent (...)