Results for 'AI rights'

957 found
  1. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  2. Designing AI with Rights, Consciousness, Self-Respect, and Freedom.Eric Schwitzgebel & Mara Garza - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 459-479.
    We propose four policies of ethical design of human-grade Artificial Intelligence. Two of our policies are precautionary. Given substantial uncertainty both about ethical theory and about the conditions under which AI would have conscious experiences, we should be cautious in our handling of cases where different moral theories or different theories of consciousness would produce very different ethical recommendations. Two of our policies concern respect and freedom. If we design AI that deserves moral consideration equivalent to that of human beings, (...)
    4 citations
  3. AI & democracy, and the importance of asking the right questions.Ognjen Arandjelović - 2021 - AI Ethics Journal 2 (1):2.
    Democracy is widely praised as a great achievement of humanity. However, in recent years there has been an increasing amount of concern that its functioning across the world may be eroding. In response, efforts to combat such change are emerging. Considering the pervasiveness of technology and its increasing capabilities, it is no surprise that there has been much focus on the use of artificial intelligence (AI) to this end. Questions as to how AI can be best utilized to extend the (...)
    3 citations
  4. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  5. The Full Rights Dilemma for AI Systems of Debatable Moral Personhood.Eric Schwitzgebel - 2023 - Robonomics 4.
    An Artificially Intelligent system (an AI) has debatable moral personhood if it is epistemically possible either that the AI is a moral person or that it falls far short of personhood. Debatable moral personhood is a likely outcome of AI development and might arise soon. Debatable AI personhood throws us into a catastrophic moral dilemma: Either treat the systems as moral persons and risk sacrificing real human interests for the sake of entities without interests worth the sacrifice, or do not (...)
  6. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the (...)
    4 citations
  7. Unownability of AI: Why Legal Ownership of Artificial Intelligence is Hard.Roman Yampolskiy - manuscript
    To hold developers responsible, it is important to establish the concept of AI ownership. In this paper we review different obstacles to ownership claims over advanced intelligent systems, including unexplainability, unpredictability, uncontrollability, self-modification, AI-rights, ease of theft when it comes to AI models and code obfuscation. We conclude that it is difficult if not impossible to establish ownership claims over AI models beyond a reasonable doubt.
    1 citation
  8. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
    2 citations
  9. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  10. Towards a Taxonomy of AI Risks in the Health Domain.Delaram Golpayegani, Joshua Hovsha, Leon Rossmaier, Rana Saniei & Jana Misic - 2022 - 2022 Fourth International Conference on Transdisciplinary AI (TransAI).
    The adoption of AI in the health sector has its share of benefits and harms to various stakeholder groups and entities. There are critical risks involved in using AI systems in the health domain; risks that can have severe, irreversible, and life-changing impacts on people’s lives. With the development of innovative AI-based applications in the medical and healthcare sectors, new types of risks emerge. To benefit from novel AI applications in this domain, the risks need to be managed in order (...)
  11. The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.Rosalie Waelen & Michał Wieczorek - 2022 - Philosophy and Technology 35 (2).
    AI systems have often been found to contain gender biases. As a result of these gender biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of (...)
    3 citations
  12. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more detail. (...)
  13. Theology Meets AI: Examining Perspectives, Tasks, and Theses on the Intersection of Technology and Religion.Anna Puzio - 2023 - In Anna Puzio, Nicole Kunkel & Hendrik Klinge (eds.), Alexa, wie hast du's mit der Religion? Theologische Zugänge zu Technik und Künstlicher Intelligenz. Darmstadt: Wbg.
    Artificial intelligence (AI), blockchain, virtual and augmented reality, (semi-)autonomous vehicles, autoregulatory weapon systems, enhancement, reproductive technologies and humanoid robotics – these technologies (and many others) are no longer speculative visions of the future; they have already found their way into our lives or are on the verge of a breakthrough. These rapid technological developments awaken a need for orientation: what distinguishes human from machine and human intelligence from artificial intelligence, how far should the body be allowed to (...)
  14. From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic (...)
    6 citations
  15. (1 other version)Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making.Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
    1 citation
  16. Algorithmic decision-making: the right to explanation and the significance of stakes.Lauritz Munch, Jens Christian Bjerring & Jakob Mainz - forthcoming - Big Data and Society.
    The stakes associated with an algorithmic decision are often said to play a role in determining whether the decision engenders a right to an explanation. More specifically, “high stakes” decisions are often said to engender such a right to explanation whereas “low stakes” or “non-high” stakes decisions do not. While the overall gist of these ideas is clear enough, the details are lacking. In this paper, we aim to provide these details through a detailed investigation of what we will call (...)
    1 citation
  17. Establishing the rules for building trustworthy AI.Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    21 citations
  18. Neuro rights, the new human rights.Deepa Kansra - 2021 - Rights Compass.
    The human mind has been a subject matter of study in psychology, law, science, philosophy and other disciplines. By definition, its potential is power, abilities and capacities including perception, knowledge, sensation, memory, belief, imagination, emotion, mood, appetite, intention, and action (Pardo, Patterson). In terms of role, it creates and shapes societal morality, culture, peace and democracy. Today, a rapidly advancing science–technology–artificial intelligence (AI) landscape is able to reach into the inner realms of the human mind. Technology, particularly neurotechnology enables access (...)
  19. Australia's Approach to AI Governance in Security and Defence.Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics (...)
  20. Is it time for robot rights? Moral status in artificial entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While (...)
    22 citations
  21. Law and the Rights of the Non-Humans.Deepa Kansra - 2022 - Iils Law Review 8 (2):58-71.
    The law confers rights on non-human entities, namely nature, machines (AI), and animals. While doing so, the law is either viewed as progressive or sometimes as abstract and ambiguous. Despite the critique, it is undeniable that many of the rights of non-humans have come to solidify in statutory and constitutional rules of different systems. In the context of these developments, the article sheds light on the core justifications for advancing the rights of non-human entities. In addition, it (...)
  22. Why the NSA didn’t diminish your privacy but might have violated your right to privacy.Lauritz Munch - forthcoming - Analysis.
    According to a popular view, privacy is a function of people not knowing or rationally believing some fact about you. But intuitively it seems possible for a perpetrator to violate your right to privacy without learning any facts about you. For example, it seems plausible to say that the US National Security Agency’s PRISM program violated, or could have violated, the privacy rights of the people whose information was collected, despite the fact that the NSA, for the most part, (...)
  23. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic (...)
    2 citations
  24. Love in the time of AI.Amy Kind - 2021 - In Barry Francis Dainton, Will Slocombe & Attila Tanyi (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction. Springer. pp. 89-106.
    As we await the increasingly likely advent of genuinely intelligent artificial systems, a fair amount of consideration has been given to how we humans will interact with them. Less consideration has been given to how—indeed if—we humans will love them. What would human-AI romantic relationships look like? What do such relationships tell us about the nature of love? This chapter explores these questions via consideration of several works of science fiction, focusing especially on the Black Mirror episode “Be Right Back” (...)
  25. A conceptual framework for legal personality and its application to AI.Claudio Novelli, Giorgio Bongiovanni & Giovanni Sartor - 2022 - Jurisprudence 13 (2):194-219.
    In this paper, we provide an analysis of the concept of legal personality and discuss whether personality may be conferred on artificial intelligence systems (AIs). Legal personality will be presented as a doctrinal category that holds together bundles of rights and obligations; as a result, we first frame it as a node of inferential links between factual preconditions and legal effects. However, this inferentialist reading does not account for the ‘background reasons’ of legal personality, i.e., it does not explain (...)
  26. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
    5 citations
  27. (1 other version)A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform.Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine learning (...)
  28. Problems of Using Autonomous Military AI Against the Background of Russia's Military Aggression Against Ukraine.Oleksii Kostenko, Tyler Jaynes, Dmytro Zhuravlov, Oleksii Dniprov & Yana Usenko - 2022 - Baltic Journal of Legal and Social Sciences 2022 (4):131-145.
    The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation’s war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially (...)
  29. Ethical Permissibility of Using Artificial Intelligence through the Lens of Al-Farabi's Theory on Natural Rights and Prosperity.Mohamad Mahdi Davar - 2024 - Legal Civilization 6 (18):195-200.
    The discussion of artificial intelligence (AI) as a newly emerging phenomenon in the present era has always been faced with various ethical challenges. The expansion of artificial intelligence is inevitable, and since this phenomenon is related to the human and social world, anything related to humans and society falls within the realm of morality and rights. In doing so, it must be understood whether the use of artificial intelligence is an ethical matter or not. Furthermore, do humans have the (...)
  30. A Deontic Logic for Programming Rightful Machines: Kant’s Normative Demand for Consistency in the Law.Ava Thomas Wright - 2023 - Logics for AI and Law: Joint Proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence (LNGAI) and the International Workshop on Logic, AI and Law (LAIL).
    In this paper, I set out some basic elements of a deontic logic with an implementation appropriate for handling conflicting legal obligations for purposes of programming autonomous machine agents. Kantian justice demands that the prescriptive system of enforceable public laws be consistent, yet statutes or case holdings may often describe legal obligations that contradict; moreover, even fundamental constitutional rights may come into conflict. I argue that a deontic logic of the law should not try to work around such conflicts (...)
  31. Welcome to Hell on Earth - Artificial Intelligence, Babies, Bitcoin, Cartels, China, Democracy, Diversity, Dysgenics, Equality, Hackers, Human Rights, Islam, Liberalism, Prosperity, The Web.Michael Starks - 2020 - Las Vegas, NV USA: Reality Press.
    America and the world are in the process of collapse from excessive population growth, most of it for the last century and now all of it due to 3rd world people. Consumption of resources and the addition of one or two billion more ca. 2100 will collapse industrial civilization and bring about starvation, disease, violence and war on a staggering scale. Billions will die and nuclear war is all but certain. In America this is being hugely accelerated by massive immigration (...)
  32. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligence.Mazarian Alireza - 2019 - Journal of Philosophical Theological Research 21 (1):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered as morally responsible entities and as having special rights. Recently, the contemporary philosophy of mind philosopher, Eric Schwitzgebel, has tried to defend the possibility of equal rights of AIs and human beings (in an imaginary future), by designing a new argument (2015). In this paper, after an introduction, the author reviews and analyzes the (...)
  33. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches to (...)
  34. Machine learning in bail decisions and judges’ trustworthiness.Alexis Morin-Martel - 2023 - AI and Society:1-12.
    The use of AI algorithms in criminal trials has been the subject of very lively ethical and legal debates recently. While there are concerns over the lack of accuracy and the harmful biases that certain algorithms display, new algorithms seem more promising and might lead to more accurate legal decisions. Algorithms seem especially relevant for bail decisions, because such decisions involve statistical data to which human reasoners struggle to give adequate weight. While getting the right legal outcome is a strong (...)
    3 citations
  35. Imagine This: Opaque DLMs are Reliable in the Context of Justification.Logan Carter - manuscript
    Artificial intelligence (AI) and machine learning (ML) models have undoubtedly become useful tools in science. In general, scientists and ML developers are optimistic – perhaps rightfully so – about the potential that these models have in facilitating scientific progress. The philosophy of AI literature carries a different mood. The attention of philosophers remains on potential epistemological issues that stem from the so-called “black box” features of ML models. For instance, Eamon Duede (2023) argues that opacity in deep learning models (DLMs) (...)
  36. Privacy and Digital Ethics After the Pandemic.Carissa Véliz - 2021 - Nature Electronics 4:10-11.
    The increasingly prominent role of digital technologies during the coronavirus pandemic has been accompanied by concerning trends in privacy and digital ethics. But more robust protection of our rights in the digital realm is possible in the future. -/- After surveying some of the challenges we face, I argue for the importance of diplomacy. Democratic countries must try to come together and reach agreements on minimum standards and rules regarding cybersecurity, privacy and the governance of AI.
  37. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  38. (1 other version)An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” or eradicating the human race (...)
  39. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights (...)
    32 citations
  40. Algorithms and the Individual in Criminal Law.Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    5 citations
  41. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
    2 citations
  42. What is data ethics?Luciano Floridi & Mariarosaria Taddeo - 2016 - Philosophical Transactions of the Royal Society A 374 (2083):20160360.
    This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information (...)
    56 citations
  43. Two Victim Paradigms and the Problem of ‘Impure’ Victims.Diana Tietjens Meyers - 2011 - Humanity 2 (2):255-275.
    Philosophers have had surprisingly little to say about the concept of a victim, although it is presupposed by the extensive philosophical literature on rights. Proceeding in four stages, I seek to remedy this deficiency and to offer an alternative to the two current paradigms that eliminates the Othering of victims. First, I analyze two victim paradigms that emerged in the late 20th century along with the initial iteration of the international human rights regime – the pathetic victim paradigm (...)
    1 citation
  44. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting (...)
  45. (5 other versions)Algorithm Evaluation Without Autonomy.Scott Hill - forthcoming - AI and Ethics.
    In Algorithms & Autonomy, Rubel, Castro, and Pham (hereafter RCP) argue that the concept of autonomy is especially central to understanding important moral problems about algorithms. In particular, autonomy plays a role in analyzing the version of social contract theory that they endorse. I argue that although RCP are largely correct in their diagnosis of what is wrong with the algorithms they consider, those diagnoses can be appropriated by moral theories RCP see as in competition with their autonomy-based theory. (...)
  46. (1 other version)Talking Monkeys: Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet - Articles and Reviews 2006-2017.Michael Starks - 2017 - Las Vegas, NV USA: Reality Press.
    This collection of articles was written over the last 10 years and edited to bring them up to date (2017). The copyright page has the date of the edition and new editions will be noted there as I edit old articles or add new ones. All the articles are about human behavior (as are all articles by anyone about anything), and so about the limitations of having a recent monkey ancestry (8 million years or much less depending on viewpoint) and (...)
  47. Aristotle on Epigenesis.Devin Henry - 2018
    It has become somewhat of a platitude to call Aristotle the first epigenesist insofar as he thought form and structure emerged gradually from an unorganized, amorphous embryo. But modern biology now recognizes two senses of “epigenesis”. The first is this more familiar idea about the gradual emergence of form and structure, which is traditionally opposed to the idea of preformationism. But modern biologists also use “epigenesis” to emphasize the context-dependency of the process itself. Used in this sense, development is not (...)
    2 citations
  48. Epistemological Alchemy through the hermeneutics of Bits and Bytes.Shahnawaz Akhtar - manuscript
    This paper examines the profound advancements of Large Language Models (LLMs), epitomized by GPT-3, in natural language processing and artificial intelligence. It explores the epistemological foundations of LLMs through the lenses of Aristotle and Kant, revealing apparent distinctions from human learning. The paper then turns to the ethical landscape, extending beyond knowledge acquisition to scrutinize the implications of LLMs in decision-making and content creation. This ethical scrutiny, employing virtue ethics, deontological ethics, and teleological ethics, delves into LLMs' (...)
  49. Ethical Issues with Artificial Ethics Assistants.Elizabeth O'Neill, Michal Klincewicz & Michiel Kemmer - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
    2 citations
  50. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
1 — 50 / 957