Results for 'chatbots, chatGPT, ethics of AI, AI, emojis, manipulation, deception'

999 found
  1. Chatbots shouldn’t use emojis. Carissa Véliz - 2023 - Nature 615:375.
    Limits need to be set on AI’s ability to simulate human feelings. Ensuring that chatbots don’t use emotive language, including emojis, would be a good start. Emojis are particularly manipulative. Humans instinctively respond to shapes that look like faces — even cartoonish or schematic ones — and emojis can induce these reactions.
    1 citation
  2. Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT. Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and (...)
  3. Escape climate apathy by harnessing the power of generative AI. Quan-Hoang Vuong & Manh-Tung Ho - 2024 - AI and Society 39:1-2.
    “Throw away anything that sounds too complicated. Only keep what is simple to grasp...If the information appears fuzzy and causes the brain to implode after two sentences, toss it away and stop listening. Doing so will make the news as orderly and simple to understand as the truth.” - In “GHG emissions,” The Kingfisher Story Collection, (Vuong 2022a).
    3 citations
  4. The dialectic of desire: AI chatbots and the desire not to know. Jack Black - 2023 - Psychoanalysis, Culture and Society 28 (4):607-618.
    Exploring the relationship between humans and AI chatbots, as well as the ethical concerns surrounding their use, this paper argues that our relations with chatbots are not solely based on their function as a source of knowledge, but, rather, on the desire for the subject not to know. It is argued that, outside of the very fears and anxieties that underscore our adoption of AI, the desire not to know reveals the potential to embrace the very loss AI avers. Consequently, (...)
  5. AI Can Help Us Live More Deliberately. Julian Friedland - 2019 - MIT Sloan Management Review 60 (4).
    Our rapidly increasing reliance on frictionless AI interactions may increase cognitive and emotional distance, thereby letting our adaptive resilience slacken and our ethical virtues atrophy from disuse. Many trends already well underway involve the offloading of cognitive, emotional, and ethical labor to AI software in myriad social, civil, personal, and professional contexts. Gradually, we may lose the inclination and capacity to engage in critically reflective thought, making us more cognitively and emotionally vulnerable and thus more anxious and prone to manipulation (...)
    2 citations
  6. The Whiteness of AI. Stephen Cave & Kanta Dihal - 2020 - Philosophy and Technology 33 (4):685-703.
    This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this (...)
    24 citations
  7. All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
  8. Ethics of Artificial Intelligence and Robotics. Vincent C. Müller - 2020 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
    30 citations
  9. Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  10. What is a subliminal technique? An ethical perspective on AI-driven influence. Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE ETHICS-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target (...)
  11. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  12. The Ethics of Military Influence Operations. Michael Skerker - 2023 - Conatus 8 (2):589-612.
    This article articulates a framework for normatively assessing influence operations undertaken by national security institutions. Section I categorizes the vast field of possible types of influence operations according to the communication’s content, its attribution, the rights of the target audience, the communication’s purpose, and its secondary effects. Section II populates these categories with historical examples, and Section III evaluates these cases with a moral framework. I argue that deceptive or manipulative communications directed at non-liable audiences are presumptively immoral and illegitimate (...)
    1 citation
  13. Feminist Re-Engineering of Religion-Based AI Chatbots. Hazel T. Biana - 2024 - Philosophies 9 (1):20.
    Religion-based AI chatbots serve religious practitioners by bringing them godly wisdom through technology. These bots reply to spiritual and worldly questions by drawing insights or citing verses from the Quran, the Bible, the Bhagavad Gita, the Torah, or other holy books. They answer religious and theological queries by claiming to offer historical contexts and providing guidance and counseling to their users. A criticism of these bots is that they may give inaccurate answers and proliferate bias by propagating homogenized versions of (...)
  14. The Ethics of Marketing to Vulnerable Populations. David Palmer & Trevor Hedberg - 2013 - Journal of Business Ethics 116 (2):403-413.
    An orthodox view in marketing ethics is that it is morally impermissible to market goods to specially vulnerable populations in ways that take advantage of their vulnerabilities. In his signature article “Marketing and the Vulnerable,” Brenkert (Bus Ethics Q Ruffin Ser 1:7–20, 1998) provided the first substantive defense of this position, one which has become a well-established view in marketing ethics. In what follows, we throw new light on marketing to the vulnerable by critically evaluating key components (...)
    7 citations
  15. The Ethics of AI Ethics: An Evaluation of Guidelines. Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI (...)
    141 citations
  16. Revolutionizing Education with ChatGPT: Enhancing Learning Through Conversational AI. Prapasiri Klayklung, Piyawatjana Chocksathaporn, Pongsakorn Limna, Tanpat Kraiwanit & Kris Jangjarat - 2023 - Universal Journal of Educational Research 2 (3):217-225.
    The development of conversational artificial intelligence (AI) has brought about new opportunities for improving the learning experience in education. ChatGPT, a large language model trained on a vast corpus of text, has the potential to revolutionize education by enhancing learning through personalized and interactive conversations. This paper explores the benefits of integrating ChatGPT in education in Thailand. The research strategy employed in this study was qualitative, utilizing in-depth interviews with eight key informants who were selected using purposive sampling. The collected (...)
  17. Norms of Truthfulness and Non-Deception in Kantian Ethics. Donald Wilson - 2015 - In Pablo Muchnik & Oliver Thorndike (eds.), Rethinking Kant Volume 4. Cambridge Scholars Press. pp. 111-134.
    Questions about the morality of lying tend to be decided in a distinctive way early in discussions of Kant’s view on the basis of readings of the false promising example in his Groundwork of the Metaphysics of Morals. The standard deception-as-interference model that emerges typically yields a very general and strong presumption against deception associated with a narrow and rigorous model subject to a range of problems. In this paper, I suggest an alternative account based on Kant’s discussion (...)
    2 citations
  18. We Asked ChatGPT About the Co-Authorship of Artificial Intelligence in Scientific Papers. Ayşe Balat & İlhan Bahşi - 2023 - European Journal of Therapeutics 29 (3):e16-e19.
    Dear Colleagues, A few weeks ago, we published an editorial discussion on whether artificial intelligence applications should be authors of academic articles [1]. We were delighted to receive more than one interesting reply letter to this editorial in a short time [2, 3]. We hope that opinions on this subject will continue to be submitted to our journal. In this editorial, we wanted to publish the answers we received when we asked ChatGPT, one of the artificial intelligence applications, (...)
  19. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
    7 citations
  20. Toward an Ethics of AI Assistants: an Initial Framework. John Danaher - 2018 - Philosophy and Technology 31 (4):629-653.
    Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. (...)
    24 citations
  21. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context. Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    2 citations
  22. Manipulation in the Enrollment of Research Participants. Amulya Mandava & Joseph Millum - 2013 - Hastings Center Report 43 (2):38-47.
    In this paper we analyze the non-coercive ways in which researchers can use knowledge about the decision-making tendencies of potential participants in order to motivate them to consent to research enrollment. We identify which modes of influence preserve respect for participants’ autonomy and which disrespect autonomy, and apply the umbrella term of manipulation to the latter. We then apply our analysis to a series of cases adapted from the experiences of clinical researchers in order to develop a framework for thinking (...)
    13 citations
  23. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
  24. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT '21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can (...)
    3 citations
  25. The debate on the ethics of AI in health care: a reconstruction and critical review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  26. The ethics of the extended mind: Mental privacy, manipulation and agency. Robert William Clowes, Paul R. Smart & Richard Heersmink - 2024 - In Jan-Hendrik Heinrichs, Birgit Beck & Orsolya Friedrich (eds.), Neuro-ProsthEthics: Ethical Implications of Applied Situated Cognition. Berlin, Germany: J. B. Metzler. pp. 13-35.
    According to proponents of the extended mind, bio-external resources, such as a notebook or a smartphone, are candidate parts of the cognitive and mental machinery that realises cognitive states and processes. The present chapter discusses three areas of ethical concern associated with the extended mind, namely mental privacy, mental manipulation, and agency. We also examine the ethics of the extended mind from the standpoint of three general normative frameworks, namely, consequentialism, deontology, and virtue ethics.
    1 citation
  27. Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning. Jonathan Y. H. Sim - 2023 - Higher Education in Southeast Asia and Beyond.
    Trust is the foundation for learning, and we must not allow ignorance of new technologies like generative AI to disrupt the relationship between students and educators. As a first step, we need to actively engage with AI tools to better understand how they can help us in our work.
  28. AI, Biometric Analysis, and Emerging Cheating Detection Systems: The Engineering of Academic Integrity? Jo Ann Oravec - 2022 - Education Policy Analysis Archives 30 (175):1-18.
    Cheating behaviors have been construed as a continuing and somewhat vexing issue for academic institutions as they increasingly conduct educational processes online and impose metrics on instructional evaluation. Research, development, and implementation initiatives on cheating detection have gained new dimensions in the advent of artificial intelligence (AI) applications; they have also engendered special challenges in terms of their social, ethical, and cultural implications. An assortment of commercial cheating–detection systems have been injected into educational contexts with little input on the (...)
    1 citation
  30. Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239-256.
    As the range of potential uses for Artificial Intelligence, in particular machine learning, has increased, so has awareness of the associated ethical issues. This increased awareness has led to the realisation that existing legislation and regulation provides insufficient protection to individuals, groups, society, and the environment from AI harms. In response to this realisation, there has been a proliferation of principle-based ethics codes, guidelines and frameworks. However, it has become increasingly clear that a significant gap exists between the theory (...)
    19 citations
  31. AI and the expert; a blueprint for the ethical use of opaque AI. Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
    2 citations
  32. The emperor is naked: Moral diplomacies and the ethics of AI. Constantin Vica, Cristina Voinea & Radu Uszkai - 2021 - Információs Társadalom 21 (2):83-96.
    With AI permeating our lives, there is widespread concern regarding the proper framework needed to morally assess and regulate it. This has given rise to many attempts to devise ethical guidelines that infuse guidance for both AI development and deployment. Our main concern is that, instead of a genuine ethical interest for AI, we are witnessing moral diplomacies resulting in moral bureaucracies battling for moral supremacy and political domination. After providing a short overview of what we term ‘ethics washing’ (...)
    2 citations
  34. Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg.
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, (...)
  35. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  36. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    1 citation
  37. Generative AI and the value changes and conflicts in its integration in Japanese educational system. Ngoc-Thang B. Le, Phuong-Thao Luu & Manh-Tung Ho - manuscript
    This paper critically examines Japan's approach toward the adoption of Generative AI such as ChatGPT in education via studying media discourse and guidelines at both the national as well as local levels. It highlights the lack of consideration for socio-cultural characteristics inherent in the Japanese educational systems, such as the notion of self, teachers’ work ethics, community-centric activities for the successful adoption of the technology. We reveal ChatGPT’s infusion is likely to further accelerate the shift away from traditional notion (...)
  38. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG (...)
    1 citation
  39. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
  40. Overview of AI Ethics in Contemporary Eurasian Society. Ammar Younas - 2022 - 34th International Scientific Conference of Young Scientists "Science and Innovation": Collection of Scientific Papers, October 20, 2022.
  41. Robot Betrayal: a guide to the ethics of robotic deception. John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
    If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
    22 citations
  42. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), AI-driven software entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  43. Ethical Considerations for Digitally Targeted Public Health Interventions.Daniel Susser - 2020 - American Journal of Public Health 110 (S3).
    Public health scholars and public health officials increasingly worry about health-related misinformation online, and they are searching for ways to mitigate it. Some have suggested that the tools of digital influence are themselves a possible answer: we can use targeted, automated digital messaging to counter health-related misinformation and promote accurate information. In this commentary, I raise a number of ethical questions prompted by such proposals—and familiar from the ethics of influence and ethics of AI—highlighting hidden costs of targeted (...)
  44. Review of Mark White, The Manipulation of Choice: Ethics and Libertarian Paternalism. [REVIEW]Jonny Anomaly - 2013 - The Independent Review 18 (2).
  45. Conversation from Beyond the Grave? A Neo‐Confucian Ethics of Chatbots of the Dead.Alexis Elder - 2020 - Journal of Applied Philosophy 37 (1):73-88.
    Digital records, from chat transcripts to social media posts, are being used to create chatbots that recreate the conversational style of deceased individuals. Some maintain that this is merely a new form of digital memorial, while others argue that they pose a variety of moral hazards. To resolve this, I turn to classical Chinese philosophy to make use of a debate over the ethics of funerals and mourning. This ancient argument includes much of interest for the contemporary issue at (...)
    Bookmark   6 citations  
  46. Uses and Abuses of AI Ethics.Lily E. Frank & Michal Klincewicz - forthcoming - In David J. Gunkel (ed.), Handbook of the Ethics of AI. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" and "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the (...)
  47. Philosophy of AI: A structured overview.Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; (...)
  48. On Human Genome Manipulation and Homo technicus: The Legal Treatment of Non-natural Human Subjects.Tyler L. Jaynes - 2021 - AI and Ethics 1 (3):331-345.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time, that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
    Bookmark   1 citation  
  49. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
    Bookmark   1 citation  
  50. The promise and perils of AI in medicine.Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    Bookmark   3 citations  
1 — 50 / 999