
Citations of:

Superintelligence: Paths, Dangers, Strategies

Nick Bostrom
Oxford University Press (2014)

  • Reality, Fiction, and Make-Believe in Kendall Walton. Emanuele Arielli - 2021 - In Krešimir Purgar (ed.), The Palgrave Handbook of Image Studies. Palgrave-Macmillan. pp. 363-377.
    Images share a common feature with all phenomena of imagination, since they make us aware of what is not present or what is fictional and not existent at all. From this perspective, the philosophical approach of Kendall Lewis Walton—born in 1939 and active since the 1960s at the University of Michigan—is perhaps one of the most notable contributions to image theory. Walton is an authoritative figure within the tradition of analytical aesthetics. His contributions have had a considerable influence on a (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • The Moral Case for Long-Term Thinking. Hilary Greaves, William MacAskill & Elliott Thornley - 2021 - In Natalie Cargill & Tyler M. John (eds.), The Long View: Essays on Policy, Philanthropy, and the Long-term Future. London: FIRST. pp. 19-28.
    This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on (...)
  • Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Paul Formosa - 2021 - Minds and Machines 31 (4):595-616.
    Social robots are robots that can interact socially with humans. As social robots and the artificial intelligence that powers them become more advanced, they will likely take on more social and work roles. This has many important ethical implications. In this paper, we focus on one of the most central of these, the impacts that social robots can have on human autonomy. We argue that, due to their physical presence and social capacities, there is a strong potential for social robots (...)
  • Kantian Moral Agency and the Ethics of Artificial Intelligence. Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ as the end. Lastly, (...)
  • The Primacy of Intention and the Duty to Truth: A Gandhi-Inspired Argument for Retranslating Hiṃsā and Ahiṃsā, with Connections to History, Ethics, and Civil Resistance. Todd Davies - 2021 - SSRN Non-Western Philosophy eJournal.
    The words “violence” and “nonviolence” are increasingly misleading translations for the Sanskrit words hiṃsā and ahiṃsā – which were used by Gandhi as the basis for his philosophy of satyāgraha. I argue for re-reading hiṃsā as “maleficence” and ahiṃsā as “beneficence.” These two more mind-referring English words – associated with religiously contextualized discourse of the past – capture the primacy of intention implied by Gandhi’s core principles, better than “violence” and “nonviolence” do. Reflecting a political turn in moral accountability detectable (...)
  • Extending Introspection. Lukas Schwengerer - 2021 - In Inês Hipólito, Robert William Clowes & Klaus Gärtner (eds.), The Mind-Technology Problem: Investigating Minds, Selves and 21st Century Artefacts. Springer Verlag. pp. 231-251.
    Clark and Chalmers propose that the mind extends further than skin and skull. If they are right, then we should expect this to have some effect on our way of knowing our own mental states. If the content of my notebook can be part of my belief system, then looking at the notebook seems to be a way to get to know my own beliefs. However, it is at least not obvious whether self-ascribing a belief by looking at my notebook (...)
  • Towards a Value Sensitive Design Framework for Attaining Meaningful Human Control over Autonomous Weapons Systems. Steven Umbrello - 2021 - Dissertation, Consortium Fino
    The international debate on the ethics and legality of autonomous weapon systems (AWS) as well as the call for a ban are primarily focused on the nebulous concept of fully autonomous AWS. More specifically, on AWS that are capable of target selection and engagement without human supervision or control. This thesis argues that such a conception of autonomy is divorced both from military planning and decision-making operations as well as the design requirements that govern AWS engineering and subsequently the tracking (...)
  • The Kantian notion of freedom and autonomy of artificial agency. Manas Sahu - 2021 - Prometeica - Revista de Filosofía y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom; its significance in the contemporary debate on free-will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, it invites an explanatory gap between phenomenality and the noumenal self; even if he has successfully established the compatibility of (...)
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  • New horizons on robotics: ethics challenges. António Moniz - 2019 - In Maria Céu do Patrão Neves (ed.), Ethics, Science and Society: Challenges for BioPolitics. pp. 57-67.
    In this chapter, the focus is on robotics development and its ethical implications, especially on some particular applications or interaction principles. In recent years, such developments have happened very quickly, based on the advances achieved in the last few decades in industrial robotics. The technological developments in manufacturing, with the implementation of Industry 4.0 strategies in most industrialized countries, and the dissemination of production strategies into services and health sectors, enabled robotics to develop in a variety of new directions. Policy (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • AI-Completeness: Using Deep Learning to Eliminate the Human Factor. Kristina Šekrst - 2020 - In Sandro Skansi (ed.), Guide to Deep Learning Basics. Springer. pp. 117-130.
    Computational complexity is a discipline of computer science and mathematics which classifies computational problems depending on their inherent difficulty, i.e. categorizes algorithms according to their performance, and relates these classes to each other. P problems are a class of computational problems that can be solved in polynomial time using a deterministic Turing machine while solutions to NP problems can be verified in polynomial time, but we still do not know whether they can be solved in polynomial time as well. A (...)
  • Catching Treacherous Turn: A Model of the Multilevel AI Boxing. Alexey Turchin - manuscript
    With the fast pace of AI development, the problem of preventing its global catastrophic risks arises. However, no satisfactory solution has been found. From several possibilities, the confinement of AI in a box is considered as a low-quality possible solution for AI safety. However, some treacherous AIs can be stopped by effective confinement if it is used as an additional measure. Here, we proposed an idealized model of the best possible confinement by aggregating all known ideas in the field of (...)
  • Techno-Telepathy & Silent Subvocal Speech-Recognition Robotics. Virgil W. Brower - 2021 - HORIZON. Studies in Phenomenology 10 (1):232-257.
    The primary focus of this project is the silent and subvocal speech-recognition interface unveiled in 2018 as an ambulatory device wearable on the neck that detects a myoelectrical signature by electrodes worn on the surface of the face, throat, and neck. These emerge from an alleged “intending to speak” by the wearer silently-saying-something-to-oneself. This inner voice is believed to occur while one reads in silence or mentally talks to oneself. The artifice does not require spoken sounds, opening the mouth, or (...)
  • How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  • Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  • Reconfiguring SETI in the microbial context: Panspermia as a solution to Fermi's paradox. Predrag Slijepcevic - 2021 - Biosystems 206 (pagination to be confirmed).
    All SETI (Search for Extraterrestrial Intelligence) programmes that were conceived and put into practice since the 1960s have been based on anthropocentric ideas concerning the definition of intelligence on a cosmic-wide scale. Brain-based neuronal intelligence, augmented by AI, is currently thought of as being the only form of intelligence that can engage in SETI-type interactions, and this assumption is likely to be connected with the dilemma of the famous Fermi paradox. We argue that high levels of intelligence and cognition inherent (...)
  • Symbiosis with artificial intelligence via the prism of law, robots, and society. Stamatis Karnouskos - 2021 - Artificial Intelligence and Law 30 (1):93-115.
    The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society as they will interfere with the people and their interactions. Intelligent autonomous robots, independent if they are humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in the society, in yet unforeseen manners. The symbiosis with such sophisticated robots may lead to a fundamental civilizational shift, with far-reaching effects as philosophical, legal, and societal questions on consciousness, citizenship, rights, (...)
  • The future of artificial intelligence, posthumanism and the inflection of Pixley Isaka Seme’s African humanism. Malesela John Lamola - 2022 - AI and Society 37 (1):131-141.
    Increasingly, innovation in artificial intelligence technologies portends the re-conceptualization of human existentiality along the paradigm of posthumanism. An exposition of this through a critical culturo-historical methodology uncloaks the Eurocentric genitive basis of the philosophical anthropology that underpins this technological posthumanism, as well as its dystopian possibilities. As a contribution to obviating the latter, an Africanist civilizational humanism proclaimed by Pixley ka Isaka Seme is proffered as a plausible alternative paradigm for humanity’s technological advancement. Seme, a pan-Africanist thinker of the early (...)
  • Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology. Thomas Metzinger - 2021 - Journal of Artificial Intelligence and Consciousness 1 (8):1-24.
  • The hard limit on human nonanthropocentrism. Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  • Attitude of college students towards ethical issues of artificial intelligence in an international university in Japan. Nader Ghotbi, Manh Tung Ho & Peter Mantello - 2022 - AI and Society 37 (1):283-290.
    We have examined the attitude and moral perception of 228 college students towards artificial intelligence in an international university in Japan. The students were asked to select a single most significant ethical issue associated with AI in the future from a list of nine ethical issues suggested by the World Economic Forum, and to explain why they believed that their chosen issues were most important. The majority of students chose unemployment as the major ethical issue related to AI. The second (...)
  • Endowing Artificial Intelligence with legal subjectivity. Sylwia Wojtczak - 2022 - AI and Society 37 (1):205-213.
    This paper reflects on the problem of endowing Artificial Intelligence with legal subjectivity, especially with regard to civil law. It is necessary to reject the myth that the criteria of legal subjectivity are sentience and reason. Arguing that AI may have potential legal subjectivity based on an analogy to animals or juristic persons suggests the existence of a single hierarchy or sequence of entities, organized according to their degree of similarity to human beings; also, that the place of an entity (...)
  • How AI can AID bioethics. Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about how (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • Challenges of Aligning Artificial Intelligence with Human Values. Margit Sutrop - 2020 - Acta Baltica Historiae Et Philosophiae Scientiarum 8 (2):54-72.
    As artificial intelligence systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges—a technical and a normative one—which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: “Which values (...)
  • Doing Good Badly? Philosophical Issues Related to Effective Altruism. Michael Plant - 2019 - Dissertation, Oxford University
    Suppose you want to do as much good as possible. What should you do? According to members of the effective altruism movement—which has produced much of the thinking on this issue and counts several moral philosophers as its key protagonists—we should prioritise among the world’s problems by assessing their scale, solvability, and neglectedness. Once we’ve done this, the three top priorities, not necessarily in this order, are (1) aiding the world’s poorest people by providing life-saving medical treatments or alleviating poverty (...)
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Genealogy of Algorithms: Datafication as Transvaluation. Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration in (...)
  • Artificial superintelligence and its limits: why AlphaZero cannot become a general agent. Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
  • Three philosophical perspectives on the relation between technology and society, and how they affect the current debate about artificial intelligence. Ibo van de Poel - 2020 - Human Affairs 30 (4):499-511.
    Three philosophical perspectives on the relation between technology and society are distinguished and discussed: 1) technology as an autonomous force that determines society; 2) technology as a human construct that can be shaped by human values, and 3) a co-evolutionary perspective on technology and society where neither of them determines the other. The historical evolution of the three perspectives is discussed and it is argued that all three are still present in current debates about technological change and how it may (...)
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Axiological Futurism: The Systematic Study of the Future of Values. John Danaher - forthcoming - Futures.
    Human values seem to vary across time and space. What implications does this have for the future of human value? Will our human and (perhaps) post-human offspring have very different values from our own? Can we study the future of human values in an insightful and systematic way? This article makes three contributions to the debate about the future of human values. First, it argues that the systematic study of future values is both necessary in and of itself and an (...)
  • On the Logical Impossibility of Solving the Control Problem. Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines built with the best possible intentions, killing everyone on the planet and in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia) it’s necessary we solve the control problem. The control problem asks how humans (...)
  • Mapping the mindsponge model onto the current understanding of how children learn. Tung Ho Manh - 2020 - OSF Preprints.
    I often hear people say about children: “They learn like a sponge.” It is clear young brains have a lot more neuroplasticity, which makes it easier for them to learn. But we know from decades of research on neuroplasticity that the capacity for the brain to change and adapt to new situations is there for a lifetime.
  • Ethics in Artificial Intelligence: How Relativism is Still Relevant. Loukas Piloidis - unknown
    This essay tries to demarcate and analyse Artificial Intelligence ethics. Going away from the traditional distinction in normative, meta, and applied ethics, a different split is executed, inspired by the three most prominent schools of thought: deontology, consequentialism, and Aristotelian virtue ethics. The reason behind this alternative approach is to connect all three schools back to ancient Greek philosophy. Having proven that the majority of arguments derive from some ancient Greek scholars, a new voice is initiated into the discussion, Protagoras (...)
  • Redefining Humanity in the Era of AI – Technical Civilization. Shoko Suzuki - 2020 - Paragrana: Internationale Zeitschrift für Historische Anthropologie 29 (1):83-93.
    The human environment is currently undergoing massive change amid the rapid adoption of information and communications technology (ICT). ICT can be characterized as offering an opportunity to consider the nature of humanity, create new values, and foster new cultures. As humans, the question that technical innovation relating to Artificial Intelligence (AI) and Robots thrusts before us is, “What is a human?” What exactly are the things that AI will never be able to do, no matter how close it gets to (...)
  • Dynamic Cognition Applied to Value Learning in Artificial Intelligence. Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance isn't done with prudence, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted (...)
  • Forbidden knowledge in machine learning – reflections on the limits of research and publication. Thilo Hagendorff - 2021 - AI and Society 36 (3):767-781.
    Certain research strands can yield “forbidden knowledge”. This term refers to knowledge that is considered too sensitive, dangerous or taboo to be produced or shared. Discourses about such publication restrictions are already entrenched in scientific fields like IT security, synthetic biology or nuclear physics research. This paper makes the case for transferring this discourse to machine learning research. Some machine learning applications can very easily be misused and unfold harmful consequences, for instance, with regard to generative video or text synthesis, (...)
  • Good AI for the Present of Humanity: Democratizing AI Governance. Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
  • Radical enhancement as a moral status de-enhancer. Jesse Gray - 2020 - Monash Bioethics Review 1 (2):146-165.
    Nicholas Agar, Jeff McMahan and Allen Buchanan have all expressed concerns about enhancing humans far outside the species-typical range. They argue radically enhanced beings will be entitled to greater and more beneficial treatment through an enhanced moral status, or a stronger claim to basic rights. I challenge these claims by first arguing that emerging technologies will likely give the enhanced direct control over their mental states. The lack of control we currently exhibit over our mental lives greatly contributes to our (...)
  • Fully Autonomous AI. Wolfhart Totschnig - 2020 - Science and Engineering Ethics 26 (5):2473-2485.
    In the fields of artificial intelligence and robotics, the term “autonomy” is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or “utility function” with respect to which the appropriateness of its actions will be evaluated. From a philosophical perspective, this notion of autonomy seems oddly weak. For, in philosophy, the term is generally used to refer to a stronger capacity, namely the (...)
  • Natural intelligence and anthropic reasoning. Predrag Slijepcevic - 2020 - Biosemiotics 13 (tba):1-23.
    This paper aims to justify the concept of natural intelligence in the biosemiotic context. I will argue that the process of life is (i) a cognitive/semiotic process and (ii) that organisms, from bacteria to animals, are cognitive or semiotic agents. To justify these arguments, the neural-type intelligence represented by the form of reasoning known as anthropic reasoning will be compared and contrasted with types of intelligence explicated by four disciplines of biology – relational biology, evolutionary epistemology, biosemiotics and the systems (...)
  • There Is No Techno-Responsibility Gap. Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Machines learning values. Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, (...)
  • Autonomous Weapons Systems and the Moral Equality of Combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 3 (6).
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. (...)
  • Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. Routledge. pp. 122-137.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)