Results for 'Autonomous agents'

973 found
  1. Body Schema in Autonomous Agents.Zachariah A. Neemeh & Christian Kronsted - 2021 - Journal of Artificial Intelligence and Consciousness 1 (8):113-145.
    A body schema is an agent's model of its own body that enables it to act on affordances in the environment. This paper presents a body schema system for the Learning Intelligent Decision Agent (LIDA) cognitive architecture. LIDA is a conceptual and computational implementation of Global Workspace Theory, also integrating other theories from neuroscience and psychology. This paper contends that the ‘body schema’ should be split into three separate functions based on the functional role of consciousness in Global Workspace Theory. (...)
    1 citation
  2. From Pluralistic Normative Principles to Autonomous-Agent Rules.Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 1 (4):1-33.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete (...)
    2 citations
  3. Modeling Long-Term Intentions and Narratives in Autonomous Agents.Christian Kronsted & Zachariah A. Neemeh - forthcoming - Journal of Artificial Intelligence and Consciousness.
    Across various fields it is argued that the self in part consists of an autobiographical self-narrative and that the self-narrative has an impact on agential behavior. Similarly, within action theory, it is claimed that the intentional structure of coherent long-term action is divided into a hierarchy of distal, proximal, and motor intentions. However, the concrete mechanisms by which narratives and distal intentions are generated and impact action are rarely fleshed out. We here demonstrate how narratives and distal intentions can (...)
    1 citation
  4. (2 other versions)A Critique of Alfred R Mele’s Work on Autonomous Agents: From Self-Control to Autonomy. [REVIEW]Pujarini Das - 2018 - Journal of Indian Council of Philosophical Research, Springer India:1995.
    The book, Autonomous Agents: From Self-Control to Autonomy (1995), by Alfred R. Mele, deals primarily with two main concepts, “self-control” and “individual autonomy,” and the relationship between them. The book is divided into two parts: (1) a view of self-control, the self-controlled person, and behaviour manifesting self-control, and (2) a view of personal autonomy, the autonomous person, and autonomous behaviour. Mele (Ibid.) defines self-control as the opposite of the Aristotelian concept of akrasia, or the contrary of (...)
  5. Artificial agents: responsibility & control gaps.Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose (...)
  6. Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law.Duncan MacIntosh - 2016 - Temple International and Comparative Law Journal 30 (1):99-117.
    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much (...)
    2 citations
  7. The cognitive agent: Overcoming informational limits.Orlin Vakarelov - 2011 - Adaptive Behavior 19 (2):83-100.
    This article provides an answer to the question: What is the function of cognition? By answering this question it becomes possible to investigate what are the simplest cognitive systems. It addresses the question by treating cognition as a solution to a design problem. It defines a nested sequence of design problems: (1) How can a system persist? (2) How can a system affect its environment to improve its persistence? (3) How can a system utilize better information from the environment to (...)
    6 citations
  8. Specification of Agents’ Activities in Past, Present and Future.Marie Duží - 2023 - Organon F: Medzinárodný Časopis Pre Analytickú Filozofiu 30 (1):66-101.
    The behaviour of a multi-agent system is driven by messaging. Usually, there is no central dispatcher and each autonomous agent, though resource-bounded, can make more or less rational decisions to meet its own and collective goals. To this end, however, agents must communicate with their fellow agents and account for the signals from their environment. Moreover, in the dynamic, permanently changing world, agents’ behaviour, i.e. their activities, must also be dynamic. By communicating with other fellow (...) and with their environment, agents should be able to learn new concepts and enrich their knowledge base. Processes and events that happened in the past may be irrelevant in the present or have a significant impact in the future, and vice versa. Therefore, the fine-grained analysis of agents’ activities as well as events within or beyond the system is very important so that the system can run smoothly without falling into inconsistencies. Moreover, as the system should communicate with its environment, the analysis should be as close to natural language as possible. The goal of this paper is a proposal for such an analysis. To this end, I apply Transparent Intensional Logic (TIL) because TIL is particularly apt for a fine-grained analysis of processes and events specified in the present, past or future tense with reference to the time when they happened, happen or will happen.
  9. The Autonomous Life: A Pure Social View.Michael Garnett - 2014 - Australasian Journal of Philosophy 92 (1):143-158.
    In this paper I propose and develop a social account of global autonomy. On this view, a person is autonomous simply to the extent to which it is difficult for others to subject her to their wills. I argue that many properties commonly thought necessary for autonomy are in fact properties that tend to increase an agent’s immunity to such interpersonal subjection, and that the proposed account is therefore capable of providing theoretical unity to many of the otherwise heterogeneous (...)
    23 citations
  10. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - 2023 - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and (...)
    6 citations
  11. Autonomous weapons systems and the moral equality of combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. (...)
    5 citations
  12. Modeling artificial agents’ actions in context – a deontic cognitive event ontology.Miroslav Vacura - 2020 - Applied ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies and artificial agents related AI research approaches, they remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic (...)
  13. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace.Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. New York: Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false (...)
    1 citation
  14. Autonomous Weapon Systems in Just War Theory perspective. Maciej - 2022 - Dissertation.
    Please contact me at [email protected] if you are interested in reading a particular chapter or being sent the entire manuscript for private use. -/- The thesis offers a comprehensive argument in favor of a regulationist approach to autonomous weapon systems (AWS). AWS, defined as all military robots capable of selecting or engaging targets without direct human involvement, are an emerging and potentially deeply transformative military technology subject to very substantial ethical controversy. AWS have both their enthusiasts and their detractors, (...)
  15. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral (...)
    1 citation
  16. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
    3 citations
  17. The Soldier’s Share: Considering Narrow Responsibility for Lethal Autonomous Weapons.Kevin Schieman - 2023 - Journal of Military Ethics (3):228-245.
    Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine (...)
    1 citation
  18. Artificial morality: Making of the artificial moral agents.Marija Kušić & Petar Nurkić - 2019 - Belgrade Philosophical Annual 1 (32):27-49.
    Artificial Morality is a new, emerging interdisciplinary field that centres around the idea of creating artificial moral agents, or AMAs, by implementing moral competence in artificial systems. AMAs ought to be autonomous agents capable of socially correct judgements and ethically functional behaviour. This request for moral machines comes from the changes in everyday practice, where artificial systems are being frequently used in a variety of situations from home help and elderly care purposes to banking and (...)
    1 citation
  19. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive (...)
    11 citations
  20. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical (...)
  21. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention (...)
    7 citations
  22. When is a robot a moral agent.John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The (...)
    74 citations
  23. How AI’s Self-Prolongation Influences People’s Perceptions of Its Autonomous Mind: The Case of U.S. Residents.Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Ruining Jin, Minh-Khanh La & Tam-Tri Le - 2023 - Behavioral Sciences 13 (6):470.
    The expanding integration of artificial intelligence (AI) in various aspects of society makes the infosphere around us increasingly complex. Humanity already faces many obstacles trying to have a better understanding of our own minds, but now we have to continue finding ways to make sense of the minds of AI. The issue of AI’s capability to have independent thinking is of special attention. When dealing with such an unfamiliar concept, people may rely on existing human properties, such as survival desire, (...)
    3 citations
  24. Vices in autonomous paternalism: The case of advance directives and persons living with dementia.Sungwoo Um - 2022 - Bioethics 36 (5):511-518.
    Advance directives are intended to extend patient autonomy by enabling patients to prospectively direct the care of their future incapacitated selves. There has been much discussion about issues such as whether the future incompetent self is identical to the agent who issues the advance directives or whether advance directives can legitimately secure patient autonomy. However, there is another important question to ask: to what extent and in what conditions is it ethically appropriate for one to limit the liberty or agency (...)
    1 citation
  25. Moral zombies: why algorithms are not moral agents.Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
    36 citations
  26. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these (...)
    39 citations
  27. Consequentialism & Machine Ethics: Towards a Foundational Machine Ethic to Ensure the Right Action of Artificial Moral Agents.Josiah Della Foresta - 2020 - Montreal AI Ethics Institute.
    In this paper, I argue that Consequentialism represents a kind of ethical theory that is the most plausible to serve as a basis for a machine ethic. First, I outline the concept of an artificial moral agent and the essential properties of Consequentialism. Then, I present a scenario involving autonomous vehicles to illustrate how the features of Consequentialism inform agent action. Thirdly, an alternative Deontological approach will be evaluated and the problem of moral conflict discussed. Finally, two bottom-up approaches (...)
  28. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse (...)
    4 citations
  29. A lesson from subjective computing: autonomous self-referentiality and social interaction as conditions for subjectivity.Patrick Grüneberg & Kenji Suzuki - 2013 - AISB Proceedings 2012:18-28.
    In this paper, we model a relational notion of subjectivity by means of two experiments in subjective computing. The goal is to determine to what extent a cognitive and social robot can be regarded to act subjectively. The system was implemented as a reinforcement learning agent with a coaching function. To analyze the robotic agent we used the method of levels of abstraction in order to analyze the agent at four levels of abstraction. At one level the agent is described (...)
  30. The Duty to Promote Digital Minimalism in Group Agents.Timothy Aylsworth & Clinton Castro - 2024 - In Timothy Aylsworth & Clinton Castro (eds.), Kantian Ethics and the Attention Economy. Palgrave Macmillan.
    In this chapter, we turn our attention to the effects of the attention economy on our ability to act autonomously as a group. We begin by clarifying which sorts of groups we are concerned with, which are structured groups (groups sufficiently organized that it makes sense to attribute agency to the group itself). Drawing on recent work by Purves and Davis (2022), we describe the essential roles of trust (i.e., depending on groups to fulfill their commitments) and trustworthiness (i.e., the (...)
  31. Kevin J. Mitchell: Free Agents – How Evolution Gave Us Free Will. Hardcover, 333 pages. Princeton University Press, Princeton & Oxford 2023. Literature note. [REVIEW]Christoph Leumann - 2024 - Aphin-Rundbrief 31 (2024/1):21-23.
    In his book "Free Agents," the neuroscientist and evolutionary geneticist Kevin Mitchell presents an evolutionary explanatory model of free will. The book is philosophically relevant above all because it challenges a central credo of the current debate on freedom, namely the view that any scientifically defensible understanding of freedom must be compatible with determinism. Mitchell distances himself from compatibilism and, arguing on scientific grounds, sides with the libertarian counter-position (even though he does not use that term himself). His book (...)
  32. Not-I/Thou: Agent Intellect and the Immemorial.Gavin Keeney - 2015 - In Gausa Manuel (ed.), Rebel Matters/Radical Patterns. University of Genoa/De Ferrari. pp. 446-51.
    Not-I/Thou: The Other Subject of Art & Architecture is to be a highly focused exhibition/folio of works by perhaps 12 artists (preferably little-known or obscure), with precise commentaries denoting the discord between the autonomous object (the artwork or architectural object per se) and the larger field of reference (worlds); inference (associative magic), and insurrection (against power and privilege) – or, the Immemorial. Engaging the age-old “theological apparatuses” of the artwork, the folio is intended to upend the current fascination with (...)
  33. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), (...)
  34. Egocentric Bias and Doubt in Cognitive Agents.Nanda Kishore Sreenivas & Shrisha Rao - forthcoming - 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 2019.
    Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution, centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used in (...)
    1 citation
  35. The social turn of artificial intelligence.Nello Cristianini, Teresa Scantamburlo & James Ladyman - 2021 - AI and Society (online).
    Social machines are systems formed by material and human elements interacting in a structured way. The use of digital platforms as mediators allows large numbers of humans to participate in such machines, which have interconnected AI and human components operating as a single system capable of highly sophisticated behavior. Under certain conditions, such systems can be understood as autonomous goal-driven agents. Many popular online platforms can be regarded as instances of this class of agent. We argue that (...) social machines provide a new paradigm for the design of intelligent systems, marking a new phase in AI. After describing the characteristics of goal-driven social machines, we discuss the consequences of their adoption, for the practice of artificial intelligence as well as for its regulation.
    1 citation
  36. Artificial Evil and the Foundation of Computer Ethics.Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands. Edited by Luciano Floridi & J. W. Sanders.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents (...)
    30 citations
  37. Social Machinery and Intelligence.Nello Cristianini, James Ladyman & Teresa Scantamburlo - manuscript
    Social machines are systems formed by technical and human elements interacting in a structured manner. The use of digital platforms as mediators allows large numbers of human participants to join such mechanisms, creating systems where interconnected digital and human components operate as a single machine capable of highly sophisticated behaviour. Under certain conditions, such systems can be described as autonomous and goal-driven agents. Many examples of modern Artificial Intelligence (AI) can be regarded as instances of this class of (...)
  38. Efficiency in Organism-Environment Information Exchanges: A Semantic Hierarchy of Logical Types Based on the Trial-and-Error Strategy Behind the Emergence of Knowledge.Mattia Berera - 2024 - Biosemiotics 17 (1):131-160.
    Based on Kolchinsky and Wolpert’s work on the semantics of autonomous agents, I propose an application of Mathematical Logic and Probability to model cognitive processes. In this work, I will follow Bateson’s insights on the hierarchy of learning in complex organisms and formalize his idea of applying Russell’s Type Theory. Following Weaver’s three levels for the communication problem, I link the Kolchinsky–Wolpert model to Bateson’s insights, and I reach a semantic and conceptual hierarchy in living systems as an (...)
  39. Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence.Nam Le - 2019 - In AISB Annual Convention 2019 -- 10th Symposium on AI & Games.
    The so-called Baldwin Effect generally says how learning, as a form of ontogenetic adaptation, can influence the process of phylogenetic adaptation, or evolution. This idea has also been taken into computation in which evolution and learning are used as computational metaphors, including evolving neural networks. This paper presents a technique called evolving self-taught neural networks – neural networks that can teach themselves without external supervision or reward. The self-taught neural network is intrinsically motivated. Moreover, the self-taught neural network is the (...)
    1 citation
  40. Artificial Intelligence, Robots, and Philosophy.Masahiro Morioka, Shin-Ichiro Inaba, Makoto Kureha, István Zoltán Zárdai, Minao Kukita, Shimpei Okamoto, Yuko Murakami & Rossa Ó Muireartaigh - 2023 - Journal of Philosophy of Life.
    This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol.13, No.1, 2023, pp.1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited (...)
  41. The Epistemic Value of Expert Autonomy.Finnur Dellsén - 2018 - Philosophy and Phenomenological Research (2):344-361.
    According to an influential Enlightenment ideal, one shouldn't rely epistemically on other people's say-so, at least not if one is in a position to evaluate the relevant evidence for oneself. However, in much recent work in social epistemology, we are urged to dispense with this ideal, which is seen as stemming from a misguided focus on isolated individuals to the exclusion of groups and communities. In this paper, I argue that an emphasis on the social nature of inquiry should (...)
    Bookmark   17 citations  
  42. The Virtue of Epistemic Autonomy.Jonathan Matheson - 2021 - In Jonathan Matheson & Kirk Lougheed (eds.), Epistemic Autonomy. New York, NY: Routledge. pp. 173-194.
    In this chapter I develop and motivate an account of epistemic autonomy as an intellectual character virtue. In Section one, I clarify the concept of an intellectual virtue, and of character intellectual virtues in particular. In Section two, I clear away some misconceptions about epistemic autonomy to better focus on our target. In Section three, I examine and evaluate several extant accounts of the virtue of epistemic autonomy, noting problems with each. In Section four, I provide my positive account of the (...)
    Bookmark   4 citations  
  43. Love First.P. Quinn White - forthcoming - Philosophy and Phenomenological Research.
    How should we respond to the humanity of others? Should we care for others’ well-being? Respect them as autonomous agents? Largely neglected is an answer we can find in the religious traditions of Judaism, Christianity and Buddhism: we should love all. This paper argues that an ideal of love for all can be understood apart from its more typical religious contexts and moreover provides a unified and illuminating account of the nature and grounds of morality. I defend (...)
  44. (1 other version)Artificial evil and the foundation of computer ethics.L. Floridi & J. Sanders - 2000 - Etica E Politica 2 (2).
    Moral reasoning traditionally distinguishes two types of evil: moral evil (ME) and natural evil (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents (...)
    Bookmark   27 citations  
  45. Autonomy, Consent, and the “Nonideal” Case.Hallvard Lillehammer - 2020 - Journal of Medicine and Philosophy 45 (3):297-311.
    According to one influential view, requirements to elicit consent for medical interventions and other interactions gain their rationale from the respect we owe to each other as autonomous, or self-governing, rational agents. Yet the popular presumption that consent has a central role to play in legitimate intervention extends beyond the domain of cases where autonomous agency is present to cases where far from fully autonomous agents make choices that, as likely as not, are going to (...)
    Bookmark   5 citations  
  46. Democratic Obligations and Technological Threats to Legitimacy: PredPol, Cambridge Analytica, and Internet Research Agency.Alan Rubel, Clinton Castro & Adam Pham - 2021 - In Alan Rubel, Clinton Castro & Adam Pham (eds.), Algorithms and Autonomy: The Ethics of Automated Decision Systems. Cambridge University Press. pp. 163-183.
    ABSTRACT: So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7).

    In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a (...)
    Bookmark   5 citations  
  47. The Trouble with Formal Views of Autonomy.Jonathan Knutzen - 2020 - Journal of Ethics and Social Philosophy 18 (2).
    Formal views of autonomy rule out substantive rational capacities (reasons-responsiveness) as a condition of autonomous agency. I argue that such views face a number of underappreciated problems: they have trouble making sense of how autonomous agents could be robustly responsible for their choices, face the burden of explaining why there should be a stark distinction between the importance of factual and evaluative information within autonomous agency, and leave it mysterious why autonomy is the sort of thing (...)
    Bookmark   2 citations  
  48. Forgiveness and Punishment in Kant's Moral System.Paula Satne - 2018 - In Larry Krasnoff, Nuria Sánchez Madrid & Paula Satne (eds.), Kant's Doctrine of Right in the 21st Century. Cardiff: University of Wales Press. pp. 201-219.
    Forgiveness as a positive response to wrongdoing is a widespread phenomenon that plays a role in the moral lives of most persons. Surprisingly, Kant has very little to say on the matter. Although Kant dedicates considerable space to discussing punishment, wrongdoing and grace, he addresses the issue of human forgiveness directly only in some short passages in the Lectures on Ethics and in one passage of the Metaphysics of Morals. However, as Sussman notes, the TL passage betrays some ambivalence. (...)
    Bookmark   5 citations  
  49. Heteronomy v. Autonomy.Paul Studtmann & Shyam Gouri Suresh - manuscript
    Kant distinguishes between autonomous and heteronomous agents. Because Kant is concerned with the nature of moral action, not its consequences, he isn’t concerned with whether autonomous agents achieve better outcomes than heteronomous agents. And yet, the question about the expected outcomes of the different types of agency is an interesting one to pursue, for it is not obvious up front which type of agent would achieve better outcomes. This paper uses game theory to explore and (...)
  50. Forced Separation and the Wrong of Deportation.Thomas Carnes - 2020 - Social Philosophy Today 36:125-140.
    This paper argues that liberal states are wrong to forcibly separate through deportation the unauthorized immigrant parents of member children and that states must therefore regularize such unauthorized immigrants. While most arguments for regularization focus on how deportation wrongs the unauthorized immigrants themselves, I ground my argument in how deportation wrongs the state’s members, namely the unauthorized immigrants’ member children. Specifically, forced separation through deportation wrongs affected children by violating a basic right to sustain the intimate relationships with their parents (...)
    Bookmark   1 citation  
1 — 50 / 973