Results for 'Alignment'

528 found
  1. AI Alignment vs. AI Ethical Treatment: Ten Challenges.Adam Bradley & Bradford Saad - manuscript
    A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications (...)
  2. Is Alignment Unsafe?Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to (...)
  3. Alignment and commitment in joint action.Matthew Rachar - 2018 - Philosophical Psychology 31 (6):831-849.
    Important work on alignment systems has been applied to philosophical work on joint action by Tollefsen and Dale. This paper builds from and expands on their work. The first aim of the paper is to spell out how the empirical research on alignment may be integrated into philosophical theories of joint action. The second aim is then to develop a successful characterization of joint action, which spells out the difference between genuine joint action and simpler forms of coordination (...)
  4. Aligning with the Good.Benjamin Mitchell-Yellin - 2015 - Journal of Ethics and Social Philosophy (2):1-8.
    In “Constructivism, Agency, and the Problem of Alignment,” Michael Bratman considers how lessons from the philosophy of action bear on the question of how best to construe the agent’s standpoint in the context of a constructivist theory of practical reasons. His focus is “the problem of alignment”: “whether the pressures from the general constructivism will align with the pressures from the theory of agency” (Bratman 2012: 81). He thus brings two lively literatures into dialogue with each other. This (...)
    1 citation
  5. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic (...)
  6. Variable Value Alignment by Design; averting risks with robot religion.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Abstract: One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, to realize project ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different values as (...)
  7. Mindshaping, Coordination, and Intuitive Alignment.Daniel I. Perez-Zapata & Ian A. Apperly - forthcoming - In Tad Zawidzki (ed.), Routledge Handbook of Mindshaping.
    In this chapter, we will summarize recent empirical results highlighting how different groups of people solve pure coordination games. Such games are traditionally studied in behavioural economics, where two people need to coordinate without communicating with each other. Our results suggest that coordination choices vary across groups of people, and that people can adapt flexibly to these differences in order to coordinate between groups. We propose that pure coordination games are a useful empirical platform for studying aspects of mindshaping. Drawing (...)
  8. Aligning Patient’s Ideas of a Good Life with Medically Indicated Therapies in Geriatric Rehabilitation Using Smart Sensors.Cristian Timmermann, Frank Ursin, Christopher Predel & Florian Steger - 2021 - Sensors 21 (24):8479.
    New technologies such as smart sensors improve rehabilitation processes and thereby increase older adults’ capabilities to participate in social life, leading to direct physical and mental health benefits. Wearable smart sensors for home use have the additional advantage of monitoring day-to-day activities and thereby identifying rehabilitation progress and needs. However, identifying and selecting rehabilitation priorities is ethically challenging because physicians, therapists, and caregivers may impose their own personal values leading to paternalism. Therefore, we develop a discussion template consisting of a (...)
  9. Disagreement, AI alignment, and bargaining.Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of (...)
  10. Improve Alignment of Research Policy and Societal Values.Peter Novitzky, Michael J. Bernstein, Vincent Blok, Robert Braun, Tung Tung Chan, Wout Lamers, Anne Loeber, Ingeborg Meijer, Ralf Lindner & Erich Griessler - 2020 - Science 369 (6499):39-41.
    Historically, scientific and engineering expertise has been key in shaping research and innovation policies, with benefits presumed to accrue to society more broadly over time. But there is persistent and growing concern about whether and how ethical and societal values are integrated into R&I policies and governance, as we confront public disbelief in science and political suspicion toward evidence-based policy-making. Erosion of such a social contract with science limits the ability of democratic societies to deal with challenges presented by new, (...)
    4 citations
  11. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough (...)
    1 citation
  12. Values in science and AI alignment research.Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value transparency, critical (...)
  13. Democratic education: Aligning curriculum, pedagogy, assessment and school governance.Gilbert Burgh - 2003 - In Philip Cam (ed.), Philosophy, democracy and education. pp. 101–120.
    Matthew Lipman claims that the community of inquiry is an exemplar of democracy in action. To many proponents the community of inquiry is considered invaluable for achieving desirable social and political ends through education for democracy. But what sort of democracy should we be educating for? In this paper I outline three models of democracy: the liberal model, which emphasises rights and duties, and draws upon pre-political assumptions about freedom; communitarianism, which focuses on identity and participation in the creation of (...)
    6 citations
  14. The linguistic dead zone of value-aligned agency, natural and artificial.Travis LaCroix - 2024 - Philosophical Studies:1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to design robustly beneficial or (...)
  15. (1 other version)An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” or eradicating the human (...)
  16. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
    11 citations
  17. Biomedical ontology alignment: An approach based on representation learning.Prodromos Kolyvakis, Alexandros Kalousis, Barry Smith & Dimitris Kiritsis - 2018 - Journal of Biomedical Semantics 9 (21).
    While representation learning techniques have shown great promise in application to a number of different NLP tasks, they have had little impact on the problem of ontology matching. Unlike past work that has focused on feature engineering, we present a novel representation learning approach that is tailored to the ontology matching task. Our approach is based on embedding ontological terms in a high-dimensional Euclidean space. This embedding is derived on the basis of a novel phrase retrofitting strategy through which semantic (...)
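    The embedding-based matching idea in the abstract above can be sketched in a few lines: represent each ontology term as a vector and link terms across ontologies by highest cosine similarity above a threshold. This is a toy illustration with made-up vectors and a hypothetical `match` helper, not the paper's learned phrase-retrofitted embeddings.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def match(source, target, threshold=0.8):
        """For each source term, return its best-scoring target term
        if the similarity clears the threshold."""
        pairs = []
        for s_term, s_vec in source.items():
            best_term, best_vec = max(
                target.items(), key=lambda t: cosine(s_vec, t[1])
            )
            score = cosine(s_vec, best_vec)
            if score >= threshold:
                pairs.append((s_term, best_term, round(score, 3)))
        return pairs

    # Toy 3-dimensional "embeddings"; real systems use hundreds of dimensions.
    onto_a = {"myocardium": [0.9, 0.1, 0.2], "neuron": [0.1, 0.9, 0.3]}
    onto_b = {"heart muscle": [0.85, 0.15, 0.25], "nerve cell": [0.12, 0.88, 0.3]}
    print(match(onto_a, onto_b))
    ```

    Here "myocardium" pairs with "heart muscle" and "neuron" with "nerve cell" because their toy vectors point in nearly the same directions; the threshold filters out weak candidate alignments.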
  18. Robustness to Fundamental Uncertainty in AGI Alignment.G. Gordon Worley III - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by limiting and (...)
  19. Stop re-inventing the wheel: or how ELSA and RRI can align.Mark Ryan & Vincent Blok - 2023 - Journal of Responsible Innovation (x):x.
    Ethical, Legal and Social Aspects (ELSA) originated in the 4th European Research Framework Programme (1994) and responsible research and innovation (RRI) from the EC research agenda in 2010. ELSA has received renewed attention in European funding schemes and research. This raises the question of how these two approaches to social responsibility relate to one another and if there is the possibility to align. There is a need to evaluate the relationship/overlap between ELSA and RRI because there is a possibility that new ELSA research will reinvent the wheel if it (...)
    2 citations
  20. The Role of Foundational Relations in the Alignment of Biomedical Ontologies.Barry Smith & Cornelius Rosse - 2004 - In Stefan Schulze-Kremer (ed.), MedInfo. IOS Press. pp. 444-448.
    The Foundational Model of Anatomy (FMA) symbolically represents the structural organization of the human body from the macromolecular to the macroscopic levels, with the goal of providing a robust and consistent scheme for classifying anatomical entities that is designed to serve as a reference ontology in biomedical informatics. Here we articulate the need for formally clarifying the is-a and part-of relations in the FMA and similar ontology and terminology systems. We diagnose certain characteristic errors in the treatment of these relations (...)
    29 citations
  21. Engineering the trust machine. Aligning the concept of trust in the context of blockchain applications.Eva Pöll - 2024 - Ethics and Information Technology 26 (2):1-16.
    Complex technology has become an essential aspect of everyday life. We rely on technology as part of basic infrastructure and repeatedly for tasks throughout the day. Yet, in many cases the relation surpasses mere reliance and evolves to trust in technology. A new, disruptive technology is blockchain. It claims to introduce trustless relationships among its users, aiming to eliminate the need for trust altogether—even being described as “the trust machine”. This paper presents a proposal to adjust the concept of trust (...)
  22. A pragmatic approach to scientific change: transfer, alignment, influence.Stefano Canali - 2022 - European Journal for Philosophy of Science 12 (3):1-25.
    I propose an approach that expands philosophical views of scientific change, on the basis of an analysis of contemporary biomedical research and recent developments in the philosophy of scientific change. Focusing on the establishment of the exposome in epidemiology as a case study and the role of data as a context for contrasting views on change, I discuss change at conceptual, methodological, material, and social levels of biomedical epistemology. Available models of change provide key resources to discuss this type of (...)
    1 citation
  23. Control and Flexibility of Interactive Alignment: Mobius Syndrome as a Case Study.John Michael, Kathleen Bogart, Kristian Tylen, Joel Krueger, Morten Bech, John R. Ostergaard & Riccardo Fusaroli - 2014 - Cognitive Processing 15 (1):S125-126.
  24. Robustness to fundamental uncertainty in AGI alignment.G. Gordon Worley III - manuscript
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of metaphysical and practical uncertainty associated with the alignment problem by limiting and (...)
  25. The genetic technologies questionnaire: lay judgments about genetic technologies align with ethical theory, are coherent, and predict behaviour.Svenja Küchenhoff, Johannes Doerflinger & Nora Heinzelmann - 2022 - BMC Medical Ethics 23 (54):1-14.
    Policy regulations of ethically controversial genetic technologies should, on the one hand, be based on ethical principles. On the other hand, they should be socially acceptable to ensure implementation. In addition, they should align with ethical theory. Yet to date we lack a reliable and valid scale to measure the relevant ethical judgements in laypeople. We target this lacuna. We developed a scale based on ethical principles to elicit lay judgments: the Genetic Technologies Questionnaire (GTQ). In two pilot (...)
  26. The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.Carlos Montemayor - 2023
    In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political implications (...)
  27. The marriage of astrology and AI: A model of alignment with human values and intentions.Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models have helped to quantify effective astrological information. As AI learning and intelligence grows, a major concern is with maintaining its alignment with human values and intentions. Astrology (...)
  28. The Eden Framework: Exploring Divergence, Alignment, and the Ethical Flow of Information.Tim Grooms - manuscript
    Abstract This paper examines the Eden narrative as an allegory for the interplay of free will, ethical alignment, and the emergence of “dark information.” It argues that God’s will can be understood as divine information—a foundational structure that ensures harmony when adhered to. Through divergence from this information, entropy is introduced, necessitating cyclical renewal. By exploring theological, philosophical, and informational perspectives, this paper highlights the relevance of these concepts in addressing modern challenges, offering actionable frameworks for realignment with foundational (...)
  29. Cross‐cultural variation and perspectivalism: Alignment of two red herrings?Jincai Li - 2023 - Mind and Language 38 (4):1157-1163.
    In this brief reply I respond to criticisms of my book, The referential mechanism of proper names, from Michael Devitt and Nicolo D'Agruma. I focus on the question of whether the perspectivism advocated in the book explains the empirical results there detailed.
  30. Soldierly Virtue: An argument for the restructuring of Western military ethics to align with Aristotelian Virtue Ethics.John Baldari - 2018 - Dissertation, University of Leeds
    Because wars are fought by human beings and not merely machines, a strong virtue ethic is an essential prerequisite for those engaged in combat. From a philosophical perspective, war has historically been seen as separate and outside of the commonly accepted forms of morality. Yet there remains a general, though not well-thought out, sense that those human beings who fight wars should act ethically. Since warfighters are often called upon to contemplate and complete tasks during war that are not normally (...)
  31. Beyond Competence: Why AI Needs Purpose, Not Just Programming.Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based restrictions (...)
  32. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
    1 citation
  33. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems (...)
    2 citations
  34. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist Coherent (...)
  35. Rethinking Measuring Moral Foundations in Prisoners: Validity Concerns and Implications.Hyemin Han & Mariola Paruzel-Czachura - manuscript
    Prisoners, those who probably engaged in criminal activities, might possess different perceptions and notions of moral foundations than non-prisoners. Thus, assessing such foundations among the population without testing the validity of the measure may produce biased outcomes. To address the potential methodological issue, we examined the validity of the measurement model for moral foundations among prisoners and community members, i.e., non-prisoners. We conducted the measurement invariance test and measurement alignment to test whether the model was consistently valid across the (...)
  36. Facing Janus: An Explanation of the Motivations and Dangers of AI Development.Aaron Graifman - manuscript
    This paper serves as an intuition building mechanism for understanding the basics of AI, misalignment, and the reasons for why strong AI is being pursued. The approach is to engage with both pro and anti AI development arguments to gain a deeper understanding of both views, and hopefully of the issue as a whole. We investigate the basics of misalignment, common misconceptions, and the arguments for why we would want to pursue strong AI anyway. The paper delves into various aspects (...)
  37. Optimization Models for Reaction Networks: Information Divergence, Quadratic Programming and Kirchhoff’s Laws.Julio Michael Stern - 2014 - Axioms 109:109-118.
    This article presents a simple derivation of optimization models for reaction networks leading to a generalized form of the mass-action law, and compares the formal structure of Minimum Information Divergence, Quadratic Programming and Kirchhoff type network models. These optimization models are used in related articles to develop and illustrate the operation of ontology alignment algorithms and to discuss closely connected issues concerning the epistemological and statistical significance of sharp or precise hypotheses in empirical science.
    5 citations
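    The quadratic-programming structure mentioned in the abstract above reduces, in its simplest instance, to minimizing a squared distance subject to a linear conservation constraint. The sketch below is a hedged toy example with invented numbers, not the paper's reaction-network models: for minimize sum((x_i - c_i)^2) subject to sum(x_i) = 1, a single Lagrange multiplier gives the closed form x_i = c_i + (1 - sum(c)) / n.

    ```python
    def project_to_constraint(c):
        """Closed-form solution of the equality-constrained quadratic program:
        minimize ||x - c||^2 subject to sum(x) = 1.
        Derived via a Lagrange multiplier: each component shifts equally."""
        n = len(c)
        shift = (1 - sum(c)) / n
        return [ci + shift for ci in c]

    # Toy unnormalized "concentrations"; the result satisfies the constraint.
    x = project_to_constraint([0.5, 0.3, 0.1])
    print(x)
    print(sum(x))
    ```

    With more constraints (e.g., Kirchhoff-type balance conditions on a network) no closed form exists in general, and one falls back on a numerical quadratic-programming solver; the structure of the problem stays the same.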
  38. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  39. The evaluation of ontologies: Toward improved semantic interoperability.Leo Obrst, Werner Ceusters, Inderjeet Mani, Steve Ray & Barry Smith - 2006 - In Chris Baker & Kei H. Cheung (eds.), Semantic Web: Revolutionizing Knowledge Discovery in the Life Sciences. Springer. pp. 139-158.
    Recent years have seen rapid progress in the development of ontologies as semantic models intended to capture and represent aspects of the real world. There is, however, great variation in the quality of ontologies. If ontologies are to become progressively better in the future, more rigorously developed, and more appropriately compared, then a systematic discipline of ontology evaluation must be created to ensure quality of content and methodology. Systematic methods for ontology evaluation will take into account representation of individual ontologies, (...)
    1 citation
  40. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  41. Global Solutions vs. Local Solutions for the AI Safety Problem.Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
    There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI (Artificial Intelligence), but do not explain how to prevent the creation of dangerous AI in other places. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided (...)
    2 citations
  42. A Tri-Opti Compatibility Problem for Godlike Superintelligence.Walter Barta - manuscript
    Various thinkers have been attempting to align artificial intelligence (AI) with ethics (Christian, 2020; Russell, 2021), the so-called problem of alignment, but some suspect that the problem may be intractable (Yampolskiy, 2023). In the following, we make an argument by analogy to analyze the possibility that the problem of alignment could be intractable. We show how the Tri-Omni properties in theology can direct us towards analogous properties for artificial superintelligence, Tri-Opti properties. However, just as the Tri-Omni properties are (...)
  43. Relationship Between Corporate Governance and Information Security Governance Effectiveness in United States Corporations.Dr Robert E. Davis - 2017 - Dissertation, Walden
    Cyber attackers targeting large corporations achieved a high perimeter penetration success rate during 2013, resulting in many corporations incurring financial losses. Corporate information technology leaders have a fiduciary responsibility to implement information security domain processes that effectually address the challenges for preventing and deterring information security breaches. Grounded in corporate governance theory, the purpose of this correlational study was to examine the relationship between strategic alignment, resource management, risk management, value delivery, performance measurement implementations, and information security governance (ISG) (...)
  44. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (...)
  45. Propositions as Truthmaker Conditions.Mark Jago - 2017 - Argumenta 2 (2):293-308.
    Propositions are often aligned with truth-conditions. The view is mistaken, since propositions discriminate where truth conditions do not. Propositions are hyperintensional: they are sensitive to differences between necessarily equivalent contents. I investigate an alternative view on which propositions are truthmaker conditions, understood as sets of possible truthmakers. This requires making metaphysical sense of merely possible states of affairs. The theory that emerges illuminates the semantic phenomena of samesaying, subject matter, and aboutness.
    9 citations
  46. Nietzsche's Functional Disagreement with Stoicism: Eternal Recurrence, Ethical Naturalism, and Teleology.James Mollison - 2021 - History of Philosophy Quarterly 38 (2):175-195.
    Several scholars align Nietzsche’s philosophy with Stoicism because of their naturalist approaches to ethics and doctrines of eternal recurrence. Yet this alignment is difficult to reconcile with Nietzsche’s criticisms of Stoicism’s ethical ideal of living according to nature by dispassionately accepting fate—so much so that some conclude that Nietzsche’s rebuke of Stoicism undermines his own philosophical project. I argue that affinities between Nietzsche and Stoicism belie deeper disagreement about teleology, which, in turn, yields different understandings of nature and human (...)
    2 citations
  47. Rethinking the Secular in Feminist Marriage Debates.Ada S. Jaarsma - 2010 - Studies in Social Justice 4 (1):47-66.
    The religious right often aligns its patriarchal opposition to same-sex marriage with the defence of religious freedom. In this article, I identify resources for confronting such prejudicial religiosity by surveying two predominant feminist approaches to same-sex marriage that are often assumed to be at odds: discourse ethics and queer critical theory. This comparative analysis opens up to view commitments that may not be fully recognizable from within either feminist framework: commitments to ideals of selfhood, to specific conceptions of justice, and (...)
  48. A dataset of blockage, vandalism, and harassment activities for the cause of climate change mitigation.Quan-Hoang Vuong, Minh-Hoang Nguyen & Viet-Phuong La - manuscript
    Environmental activism is crucial for raising public awareness and support toward addressing the climate crisis. However, using climate change mitigation as the cause for blockage, vandalism, and harassment activities might be counterproductive and risk causing negative repercussions and declining public support. The paper describes a dataset of metadata of 89 blockage, vandalism, and harassment events happening in recent years. The dataset comprises three main categories: 1) Events, 2) Activists, and 3) Consequences. For researchers interested in environmental activism, climate change, and (...)
    1 citation
  49. Induction without fallibility, deduction without certainty.Matheus Silva - manuscript
    There is no strict alignment between induction and fallibility, nor between deduction and certainty. Fallibility in deductive inferences, such as failed mathematical theorems, demonstrates that deduction does not guarantee certainty. Similarly, inductive reasoning, typically seen as weaker and more prone to uncertainty, is not inherently tied to fallibility. In fact, inductive generalizations can sometimes lead to certainty, especially in mathematical contexts. By decoupling induction from fallibility and deduction from certainty, we preserve the distinct nature of each form of reasoning, (...)
  50. Twisted thinking: Technology, values and critical thinking.Lavinia Marin & Steinert Steffen - 2022 - Prometheus. Critical Studies in Innovation 38 (1):124-140.
    Technology should be aligned with our values. We make the case that attempts to align emerging technologies with our values should reflect critically on these values. Critical thinking seems like a natural starting point for the critical assessment of our values. However, extant conceptualizations of critical thinking carve out no space for the critical scrutiny of values. We will argue that we need critical thinking that focuses on values instead of taking them as unexamined starting points. In order to play (...)
1 — 50 / 528