Results for 'Responsible AI'

974 results found
  1. RESPONSIBLE AI: INTRODUCTION OF “NOMADIC AI PRINCIPLES” FOR CENTRAL ASIA.Ammar Younas - 2020 - Conference Proceeding of International Conference Organized by Jizzakh Polytechnical Institute Uzbekistan.
    We think that Central Asia should come up with its own AI Ethics Principles which we propose to name as “Nomadic AI Principles”.
    1 citation
  2. Proposing Central Asian AI Ethics Principles: A Multilevel Approach for Responsible AI.Ammar Younas & Yi Zeng - 2024 - AI and Ethics 4.
    This paper puts forth Central Asian AI ethics principles and proposes a layered strategy tailored for the development of ethical principles in the field of artificial intelligence (AI) in Central Asian countries. This approach includes the customization of AI ethics principles to resonate with local nuances, the formulation of national and regional-level AI ethics principles, and the implementation of sector-specific principles. While countering the narrative of ineffectiveness of the AI ethics principles, this paper underscores the importance of stakeholder collaboration, provides (...)
  3. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  4. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
    5 citations
  5. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems (...)
    2 citations
  6. Should We Discourage AI Extension? Epistemic Responsibility and AI.Hadeel Naeem & Julian Hauser - 2024 - Philosophy and Technology 37 (3):1-17.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
  7. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. (...)
  8. HARNESSING AI FOR EVOLVING THREATS: FROM DETECTION TO AUTOMATED RESPONSE.Sanagana Durga Prasada Rao - 2024 - Journal of Science Technology and Research (JSTAR) 5 (1):91-97.
    The landscape of cybersecurity is constantly evolving, with adversaries becoming increasingly sophisticated and persistent. This manuscript explores the utilization of artificial intelligence (AI) to address these evolving threats, focusing on the journey from threat detection to autonomous response. By examining AI-driven detection methodologies, advanced threat analytics, and the implementation of autonomous response systems, this paper provides insights into how organizations can leverage AI to strengthen their cybersecurity posture against modern threats. Key words: Ransomware, Anomaly Detection, Advanced Persistent Threats (APTs), Automated (...)
  9. Responsibility Internalism and Responsibility for AI.Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I (...)
  10. From AI to Octopi and Back. AI Systems as Responsive and Contested Scaffolds.Giacomo Figà-Talamanca - forthcoming - In Vincent C. Müller, Aliya R. Dewey, Leonard Dung & Guido Löhr (eds.), Philosophy of Artificial Intelligence: The State of the Art. Berlin: SpringerNature.
    In this paper, I argue against the view that existing AI systems can be deemed agents comparably to human beings or other organisms. I especially focus on the criteria of interactivity, autonomy, and adaptivity, provided by the seminal work of Luciano Floridi and José Sanders to determine whether an artificial system can be considered an agent. I argue that the tentacles of octopuses also fit those criteria. However, I argue that octopuses’ tentacles cannot be attributed agency because their behavior can (...)
  11. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction?Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  12. Ethics in AI: Balancing Innovation and Responsibility.Mosa M. M. Megdad, Mohammed H. S. Abueleiwa, Mohammed Al Qatrawi, Jehad El-Tantaw, Fadi E. S. Harara, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Pedagogical Research (IJAPR) 8 (9):20-25.
    Abstract: As artificial intelligence (AI) technologies become more integrated across various sectors, ethical considerations in their development and application have gained critical importance. This paper delves into the complex ethical landscape of AI, addressing significant challenges such as bias, transparency, privacy, and accountability. It explores how these issues manifest in AI systems and their societal impact, while also evaluating current strategies aimed at mitigating these ethical concerns, including regulatory frameworks, ethical guidelines, and best practices in AI design. Through a comprehensive (...)
  13. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries into (...)
  14. Tu Quoque: The Strong AI Challenge to Selfhood, Intentionality and Meaning and Some Artistic Responses.Erik C. Banks - manuscript
    This paper offers a "tu quoque" defense of strong AI, based on the argument that phenomena of self-consciousness and intentionality are nothing but the "negative space" drawn around the concrete phenomena of brain states and causally connected utterances and objects. Any machine that was capable of concretely implementing the positive phenomena would automatically inherit the negative space around these that we call self-consciousness and intention. Because this paper was written for a literary audience, some examples from Greek tragedy, noir fiction, (...)
  15. More than Skin Deep: a Response to “The Whiteness of AI”.Shelley Park - 2021 - Philosophy and Technology 34 (4):1961-1966.
    This commentary responds to Stephen Cave and Kanta Dihal’s call for further investigations of the whiteness of AI. My response focuses on three overlapping projects needed to more fully understand racial bias in the construction of AI and its representations in pop culture: unpacking the intersections of gender and other variables with whiteness in AI’s construction, marketing, and intended functions; observing the many different ways in which whiteness is scripted, and noting how white racial framing exceeds white casting and thus (...)
    1 citation
  16. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology: Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  17. Exploring the Intersection of Rationality, Reality, and Theory of Mind in AI Reasoning: An Analysis of GPT-4's Responses to Paradoxes and ToM Tests.Lucas Freund - manuscript
    This paper investigates the responses of GPT-4, a state-of-the-art AI language model, to ten prominent philosophical paradoxes, and evaluates its capacity to reason and make decisions in complex and uncertain situations. In addition to analyzing GPT-4's solutions to the paradoxes, this paper assesses the model's Theory of Mind (ToM) capabilities by testing its understanding of mental states, intentions, and beliefs in scenarios ranging from classic ToM tests to complex, real-world simulations. Through these tests, we gain insight into AI's potential for (...)
  18. Can ChatGPT be an author? Generative AI creative writing assistance and perceptions of authorship, creatorship, responsibility, and disclosure.Paul Formosa, Sarah Bankins, Rita Matulionyte & Omid Ghasemi - forthcoming - AI and Society.
    The increasing use of Generative AI raises many ethical, philosophical, and legal issues. A key issue here is uncertainties about how different degrees of Generative AI assistance in the production of text impacts assessments of the human authorship of that text. To explore this issue, we developed an experimental mixed methods survey study (N = 602) asking participants to reflect on a scenario of a human author receiving assistance to write a short novel as part of a 3 (high, medium, (...)
  19. Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them.Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  20. Artificial Intelligence Implications for Academic Cheating: Expanding the Dimensions of Responsible Human-AI Collaboration with ChatGPT.Jo Ann Oravec - 2023 - Journal of Interactive Learning Research 34 (2).
    Cheating is a growing academic and ethical concern in higher education. This article examines the rise of artificial intelligence (AI) generative chatbots for use in education and provides a review of research literature and relevant scholarship concerning the cheating-related issues involved and their implications for pedagogy. The technological “arms race” that involves cheating-detection system developers versus technology savvy students is attracting increased attention to cheating. AI has added new dimensions to academic cheating challenges as students (as well as faculty and (...)
    1 citation
  21. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  22. AI Art is Theft: Labour, Extraction, and Exploitation, Or, On the Dangers of Stochastic Pollocks.Trystan S. Goetze - 2024 - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency:186-196.
    Since the launch of applications such as DALL-E, Midjourney, and Stable Diffusion, generative artificial intelligence has been controversial as a tool for creating artwork. While some have presented longtermist worries about these technologies as harbingers of fully automated futures to come, more pressing is the impact of generative AI on creative labour in the present. Already, business leaders have begun replacing human artistic labour with AI-generated images. In response, the artistic community has launched a protest movement, which argues that AI (...)
    1 citation
  23. AI Decision Making with Dignity? Contrasting Workers’ Justice Perceptions of Human and AI Decision Making in a Human Resource Management Context.Sarah Bankins, Paul Formosa, Yannick Griep & Deborah Richards - forthcoming - Information Systems Frontiers.
    Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriate- (...)
    3 citations
  24. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences (...)
    10 citations
  25. Can AI become an Expert?Hyeongyun Kim - 2024 - Journal of AI Humanities 16 (4):113-136.
    With the rapid development of artificial intelligence (AI), understanding its capabilities and limitations has become significant for mitigating unfounded anxiety and unwarranted optimism. As part of this endeavor, this study delves into the following question: Can AI become an expert? More precisely, should society confer the authority of experts on AI even if its decision-making process is highly opaque? Throughout the investigation, I aim to identify certain normative challenges in elevating current AI to a level comparable to that of human (...)
  26. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  27. AI Survival Stories: a Taxonomic Analysis of AI Existential Risk.Herman Cappelen, Simon Goldstein & John Hawthorne - forthcoming - Philosophy of Ai.
    Since the release of ChatGPT, there has been a lot of debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of ‘survival (...)
  28. How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia:1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of (...)
  29. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition (...)
    3 citations
  30. AI systems must not confuse users about their sentience or moral status.Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally (...)
  31. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts.Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behaviour 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un) dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  32. Anthropomorphism in AI: Hype and Fallacy.Adriana Placani - 2024 - AI and Ethics.
    This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and performance by attributing human-like traits to systems that do not possess them. As a fallacy, anthropomorphism is shown to distort moral judgments about AI, such as those concerning its moral character and status, as well as judgments of responsibility and trust. By focusing on these two dimensions of anthropomorphism in AI, the essay highlights negative (...)
    1 citation
  33. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own (...)
  34. Can AI Lie? Chatbot Technologies, the Subject, and the Importance of Lying.Jack Black - 2024 - Social Science Computer Review (xx):xx.
    This article poses a simple question: can AI lie? In response to this question, the article examines, as its point of inquiry, popular AI chatbots, such as, ChatGPT. In doing so, an examination of the psychoanalytic, philosophical, and technological significance of AI and its complexities are located in relation to the dynamics of truth, falsity, and deception. That is, by critically exploring the chatbot’s capacity to engage in natural language conversations and deliver contextually relevant responses, it is argued that what (...)
  35. Embracing ChatGPT and other generative AI tools in higher education: The importance of fostering trust and responsible use in teaching and learning.Jonathan Y. H. Sim - 2023 - Higher Education in Southeast Asia and Beyond.
    Trust is the foundation for learning, and we must not allow ignorance of new technologies, like Generative AI, to disrupt the relationship between students and educators. As a first step, we need to actively engage with AI tools to better understand how they can help us in our work.
  36. Responsibility gaps and the reactive attitudes.Fabio Tollon - 2022 - AI and Ethics 1 (1).
    Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever increasing sophistication of AI-systems, this (...)
    3 citations
  37. “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  38. AI & democracy, and the importance of asking the right questions.Ognjen Arandjelović - 2021 - AI Ethics Journal 2 (1):2.
    Democracy is widely praised as a great achievement of humanity. However, in recent years there has been an increasing amount of concern that its functioning across the world may be eroding. In response, efforts to combat such change are emerging. Considering the pervasiveness of technology and its increasing capabilities, it is no surprise that there has been much focus on the use of artificial intelligence (AI) to this end. Questions as to how AI can be best utilized to extend the (...)
    3 citations
  39. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among others things, that we ask whether it makes sense (...)
    1 citation
  40. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. (...)
    5 citations
  41. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term borrowed from human psychology, with the key difference that AI hallucinations are connected to unjustified beliefs (that is, AI "beliefs") rather than perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
  42. Generative AI and photographic transparency.P. D. Magnus - forthcoming - AI and Society:1-6.
    There is a history of thinking that photographs provide a special kind of access to the objects depicted in them, beyond the access that would be provided by a painting or drawing. What is included in the photograph does not depend on the photographer’s beliefs about what is in front of the camera. This feature leads Kendall Walton to argue that photographs literally allow us to see the objects which appear in them. Current generative algorithms produce images in response to (...)
  43. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and (...)
  44. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3).
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability (...)
  45. Trust in AI: Progress, Challenges, and Future Directions.Saleh Afroogh, Ali Akbari, Emmie Malone, Mohammadali Kargar & Hananeh Alambeigi - forthcoming - Nature Humanities and Social Sciences Communications.
    The increasing use of artificial intelligence (AI) systems in our daily lives through various applications, services, and products underscores the significance of trust and distrust in AI from a user perspective. AI-driven systems have significantly diffused into various fields of our lives, serving as beneficial tools used by human agents. These systems are also evolving to act as co-assistants or semi-agents in specific domains, potentially influencing human thought, decision-making, and agency. Trust/distrust in AI plays the role of a regulator and could significantly (...)
  46. Sinful AI?Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on P. F. Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues that, (...)
  47. (1 other version)Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making.Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, Article 160, pp. 1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
  48. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness.Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure (...)
  49. (1 other version)A unified framework of five principles for AI in society.Luciano Floridi & Josh Cowls - 2019 - Harvard Data Science Review 1 (1).
    Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these (...)
  50. The virtues of interpretable medical AI.Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
1 — 50 / 974