Results for 'The value alignment problem'

947 found
  1. The linguistic dead zone of value-aligned agency, natural and artificial. Travis LaCroix - 2024 - Philosophical Studies:1-23.
    The value alignment problem for artificial intelligence (AI) asks how we can ensure that the “values”—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, those programmes that seek to (...)
  2. (1 other version) An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance. Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of AI. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” (...)
  3. AI Alignment Problem: “Human Values” don’t Actually Exist. Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough (...)
    1 citation
  4. Variable Value Alignment by Design; averting risks with robot religion. Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Abstract: One approach to alignment with human values in AI and robotics is to engineer artificial systems isomorphic with human beings. The idea is that robots so designed may autonomously align with human values through similar developmental processes, to realize project ideal conditions through iterative interaction with social and object environments just as humans do, such as are expressed in narratives and life stories. One persistent problem with human value orientation is that different human beings champion different (...)
  5. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  6. The Ghost in the Machine has an American accent: value conflict in GPT-3. Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts (...)
  7. AI, alignment, and the categorical imperative. Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as (...)
  8. Is Alignment Unsafe? Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors (...)
  9. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition. Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward (...)
  10. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of (...)
  11. The problem of representation between extended and enactive approaches to cognition. Marta Caravà - 2018 - Dissertation, University of Bologna
    4E approaches to cognition draw an unconventional picture of cognitive processes and of the mind. Instead of conceiving of cognition as a process that always takes place within the boundaries of the skull and the skin, these approaches hold that cognition is a situated process that often extends beyond human agents’ physical boundaries. In particular, supporters of the extended mind theory and of the enactive approach claim that embodied action in a perceptually complex environment is constitutive of cognitive processes, and (...)
  12. Artificial Intelligence and Universal Values. Jay Friedenberg - 2024 - UK: Ethics Press.
    The field of value alignment, or more broadly machine ethics, is becoming increasingly important as artificial intelligence developments accelerate. By ‘alignment’ we mean giving a generally intelligent software system the capability to act in ways that are beneficial, or at least minimally harmful, to humans. There are a large number of techniques that are being experimented with, but this work often fails to specify what values exactly we should be aligning. When making a decision, an agent is (...)
  13. Attention to Values Helps Shape Convergence Research. Casey Helgeson, Robert E. Nicholas, Klaus Keller, Chris E. Forest & Nancy Tuana - 2022 - Climatic Change 170.
    Convergence research is driven by specific and compelling problems and requires deep integration across disciplines. The potential of convergence research is widely recognized, but questions remain about how to design, facilitate, and assess such research. Here we analyze a seven-year, twelve-million-dollar convergence project on sustainable climate risk management to answer two questions. First, what is the impact of a project-level emphasis on the values that motivate and tie convergence research to the compelling problems? Second, how does participation in convergence projects (...)
  14. Just the Right Thickness: A Defense of Second-Wave Virtue Epistemology. Guy Axtell & J. Adam Carter - 2008 - Philosophical Papers 37 (3):413-434.
    Abstract: Do the central aims of epistemology, like those of moral philosophy, require that we designate some important place for those concepts located between the thin-normative and the non-normative? Put another way, does epistemology need "thick" evaluative concepts and with what do they contrast? There are inveterate traditions in analytic epistemology which, having legitimized a certain way of viewing the nature and scope of epistemology's subject matter, give this question a negative verdict; further, they have carried with them a tacit (...)
    14 citations
  15. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom. Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  16. Beyond Competence: Why AI Needs Purpose, Not Just Programming. Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based (...)
  17. Human-Centered AI: The Aristotelian Approach. Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  18. The Value of Knowledge and its Problems. Kevin Patton - 2024 - Dissertation, University of Nebraska, Lincoln
    This dissertation answers the three value problems in epistemology. These three problems require an answer as to how knowledge is more valuable 1) than mere true belief, 2) than any of the proper subsets of knowledge, and 3) in kind than that which falls short of knowledge. The methodology used to provide an answer to these problems relies on the arguments put forth in a rarely discussed paper from Ward Jones. In short, the Jonesian approach can be summed up as (...)
  19. ChatGPT: towards AI subjectivity. Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that (...)
    2 citations
  20. Problem of Freewill. Angelito Malicse - manuscript - Translated by Angelito Malicse.
    The Universal Formula: Solving the Problem of Free Will Through the Law of Balance. The problem of free will has puzzled humanity for centuries, often viewed as a philosophical or metaphysical enigma. This essay presents a universal formula that resolves this dilemma by focusing on the law of karma and the universal law of balance. It explores the interconnectedness of systems, the critical role of accurate knowledge, and the evolution of emotions in maintaining harmony. (...)
  21. Deontology and Safe Artificial Intelligence. William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
    1 citation
  22. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  23. The Value of Evidence in Decision-Making. Ru Ye - forthcoming - Journal of Philosophy.
    The Value of Evidence thesis (VE) tells us to gather evidence before deciding in any decision problem, if the evidence is free. This apparently plausible principle faces two problems. First, it fails on evidence externalism or nonclassical decision theories. Second, it’s not general enough: it tells us to prefer gaining free evidence to gaining no evidence, but it doesn’t tell us to prefer gaining more informative evidence to gaining less informative evidence when both are free. This (...)
  24. The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment. Carlos Montemayor - 2023
    In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political (...)
  25. On Pritchard, Objectual Understanding and the Value Problem. J. Adam Carter & Emma C. Gordon - 2014 - American Philosophical Quarterly.
    Duncan Pritchard (2008, 2009, 2010, forthcoming) has argued for an elegant solution to what have been called the value problems for knowledge at the forefront of recent literature on epistemic value. As Pritchard sees it, these problems dissolve once it is recognized that it is understanding-why, not knowledge, that bears the distinctive epistemic value often (mistakenly) attributed to knowledge. A key element of Pritchard’s revisionist argument is the claim that understanding-why always involves what he calls strong (...)
    16 citations
  26. Artificial intelligence and human autonomy: the case of driving automation. Fabio Fossa - 2024 - AI and Society:1-12.
    The present paper aims at contributing to the ethical debate on the impacts of artificial intelligence (AI) systems on human autonomy. More specifically, it intends to offer a clearer understanding of the design challenges to the effort of aligning driving automation technologies to this ethical value. After introducing the discussion on the ambiguous impacts that AI systems exert on human autonomy, the analysis zooms in on how the problem has been discussed in the literature on connected and automated (...)
  27. An Expected Value Approach to the Dual-Use Problem. Thomas Douglas - 2013 - In Michael Selgelid & Brian Rappert (eds.), On the Dual Uses of Science and Ethics. Australian National University Press.
    In this chapter I examine how expected-value theory might inform responses to what I call the dual-use problem. I begin by defining that problem. I then outline a procedure, which invokes expected-value theory, for tackling it. I first illustrate the procedure with the aid of a simplified schematic example of a dual-use problem, and then describe how it might also guide responses to more complex real-world cases. I outline some attractive features of the procedure. Finally, (...)
    2 citations
  28. The Value of Knowledge and Other Epistemic Standings: A Case for Epistemic Pluralism. Guido Melchior - 2023 - Philosophia 51 (4):1829-1847.
    In epistemology, the concept of knowledge is of distinctive interest. This fact is also reflected in the discussion of epistemic value, which focuses to a large extent on the value problem of knowledge. This discussion suggests that knowledge has an outstanding value among epistemic standings because its value exceeds the value of its constitutive parts. I will argue that the value of knowledge is not outstanding by presenting epistemic standings of checking, transferring knowledge, (...)
    1 citation
  29. Affect, Values and Problems Assessing Decision-Making Capacity. Jennifer Hawkins - 2023 - American Journal of Bioethics 24 (8):1-12.
    The dominant approach to assessing decision-making capacity in medicine focuses on determining the extent to which individuals possess certain core cognitive abilities. Critics have argued that this model delivers the wrong verdict in certain cases where patient values that are the product of mental disorder or disordered affective states undermine decision-making without undermining cognition. I argue for a re-conceptualization of what it is to possess the capacity to make medical treatment decisions. It is, I argue, the ability to track one’s (...)
    20 citations
  30. Look who’s talking: Responsible Innovation, the paradox of dialogue and the voice of the other in communication and negotiation processes. Vincent Blok - 2014 - Journal of Responsible Innovation 1 (2):171-190.
    In this article, we develop a concept of stakeholder dialogue in responsible innovation (RI) processes. The problem with most concepts of communication is that they rely on ideals of openness, alignment and harmony, even while these ideals are rarely realized in practice. Based on the work of Burke, Habermas, Deetz and Levinas, we develop a concept of stakeholder dialogue that is able to deal with fundamentally different interests and value frames of actors involved in RI processes. We (...)
    24 citations
  31. On the value of philosophers in the social sciences: fixing disciplinary constitutions. Terence Rajivan Edward - manuscript
    This paper argues for the value of philosophers in a school of social sciences within a university, for fixing what I call disciplinary constitutions. A disciplinary constitution is a statement of “How our discipline works: how we achieve the ends of our discipline.” A lot of people depend on a constitution, but such a thing usually runs into problems and philosophers can identify these problems and propose solutions. I suggest that it is essential for the autonomy of an ambitious (...)
    1 citation
  32. The limits of conventional justification: inductive risk and industry bias beyond conventionalism. Miguel Ohnesorge - 2020 - Frontiers in Research Metrics and Analytics 14.
    This article develops a constructive criticism of methodological conventionalism. Methodological conventionalism asserts that standards of inductive risk ought to be justified in virtue of their ability to facilitate coordination in a research community. On that view, industry bias occurs when conventional methodological standards are violated to foster industry preferences. The underlying account of scientific conventionality, however, is problematically incomplete. Conventions may be justified in virtue of their coordinative functions, but often qualify for posterior empirical criticism as research advances. Accordingly, industry (...)
    1 citation
  33. The values and rules of capacity assessments. Binesh Hass - 2022 - Journal of Medical Ethics 48 (11):816-820.
    This article advances two views on the role of evaluative judgment in clinical assessments of decision-making capacity. The first is that it is rationally impossible for such assessments to exclude judgments of the values a patient uses to motivate their decision-making. Predictably, and second, attempting to exclude such judgments sometimes yields outcomes that contain intractable dilemmas that harm patients. These arguments count against the prevailing model of assessment in common law countries—the four abilities model—which is often incorrectly advertised as being (...)
    3 citations
  34. Shortcuts to Artificial Intelligence. Nello Cristianini - 2021 - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    2 citations
  35. The Value of a Person. John Broome & Adam Morton - 1994 - Aristotelian Society Supplementary Volume 68 (1):167-198.
    (for Adam Morton's half) I argue that if we take the values of persons to be ordered in a way that allows incomparability, then the problems Broome raises have easy solutions. In particular we can maintain that creating people is morally neutral while killing them has a negative value.
    7 citations
  36. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems. Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the notion of the (...)
    1 citation
  37. Moral Perspective from a Holistic Point of View for Weighted Decision-Making and its Implications for the Processes of Artificial Intelligence. Mina Singh, Devi Ram, Sunita Kumar & Suresh Das - 2023 - International Journal of Research Publication and Reviews 4 (1):2223-2227.
    In the case of AI, automated systems are making increasingly complex decisions with significant ethical implications, raising questions about who is responsible for decisions made by AI and how to ensure that these decisions align with society's ethical and moral values, both in India and the West. Jonathan Haidt has conducted research on moral and ethical decision-making. Today, solving problems like decision-making in autonomous vehicles can draw on the literature of the trolley dilemma in that it illustrates the complexity of (...)
  38. A Revolutionary New Metaphysics, Based on Consciousness, and a Call to All Philosophers. Lorna Green - manuscript
    June 2022. We are in a unique moment of our history unlike any previous moment ever. Virtually all human economies are based on the destruction of the Earth, and we are now at a place in our history where we can foresee, if we continue on as we are, our own extinction. As I write, the planet is in deep trouble: heat, fires, great storms, and record flooding, (...)
  39. Values in science and AI alignment research. Leonard Dung - manuscript
    Roughly, empirical AI alignment research (AIA) is an area of AI research which investigates empirically how to design AI systems in line with human goals. This paper examines the role of non-epistemic values in AIA. It argues that: (1) Sciences differ in the degree to which values influence them. (2) AIA is strongly value-laden. (3) This influence of values is managed inappropriately and thus threatens AIA’s epistemic integrity and ethical beneficence. (4) AIA should strive to achieve value (...)
  40. Giving the Value of a Variable. Richard Lawrence - 2021 - Kriterion - Journal of Philosophy 35 (2):135-150.
    What does it mean to ‘give’ the value of a variable in an algebraic context, and how does giving the value of a variable differ from merely describing it? I argue that to answer this question, we need to examine the role that giving the value of a variable plays in problem-solving practice. I argue that four different features are required for a statement to count as giving the value of a variable in the context (...)
  41. The value and normative role of knowledge. Julien Dutant - 2014 - Liber Amicorum Pascal Engel.
    Why does knowledge matter? Two answers have been influential in the recent literature. One is that it has value: knowledge is one of the goods. Another is that it plays a significant normative role: knowledge is the norm of action, belief, assertion, or the like. This paper discusses whether one can derive one of the claims from the other. That is, whether assuming the idea that knowledge has value — and some defensible general hypotheses about norms and values (...)
    3 citations
  42. The Value of Philosophical Scepticism. Martin Nuhlicek - 2016 - Filosoficky Casopis 64 (5):675-690.
    The aim of the first part of the article is to elucidate the nature of (modern) philosophical scepticism. The author defends the view that scepticism is not a homogenous doctrine, but a general label for heterogenous ways of sceptical argumentation. Sceptical argumentation is, in turn, understood to include any kind of philosophically relevant argument which aims at calling into doubt epistemically-valued qualities, especially knowledge. In the second part of the article the author focuses on the question of what constitutes the (...)
  43. The Value of Truth. Arena Fernandez - manuscript
    Truths will be defined as an agreement on uncertainties, the consensus over matters of empirical and social nature such as mathematics, physics or economics. As illustrated by Dennis Lindley, ‘individuals tend to know things to be true and false but the extent of this truth and falsity would always remain unknown’. Leading individuals to a permanent state of stress, uncertainty becomes a risk for the social community. Problems could not be presumed to be solvable as any kind of solution (...)
  44. On Value and Obligation in Practical Reason: Toward a Resolution of the Is–Ought Problem in the Thomistic Moral Tradition. William Matthew Diem - 2021 - Nova et Vetera 19 (2):531-562.
    Within the Thomistic moral tradition, the is-ought gap is regularly treated as identical to the fact-value gap, and these two dichotomies are also regularly treated as being identical to Aristotle and Aquinas’s distinction between the practical and speculative intellect. The question whether (and if so, how) practical (‘ought’) knowledge derives from speculative (‘is’) knowledge has driven some of the fiercest disputes among the schools of Thomistic natural lawyers. I intend to show that both of these identifications are wrong and (...)
  45. The Value of Being Wild: A Phenomenological Approach to Wildlife Conservation. Adam Cruise - 2020 - Dissertation, University of Stellenbosch
    Given that one million species are currently threatened with extinction and that humans are undermining the entire natural infrastructure on which our modern world depends (IPBES, 2019), this dissertation will show that there is a need to provide an alternative approach to wildlife conservation, one that avoids anthropocentrism and wildlife valuation on an instrumental basis to provide meaningful and tangible success for both wildlife conservation and human well-being in an inclusive way. In this sense, The Value of Being Wild will (...)
  46. Quantum of Wisdom. Colin Allen & Brett Karlan - 2022 - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. pp. 157-166.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to (...) alignment methods for AI, although they allow that further questions arise concerning governance and verification of quantum AI applications. In this brief paper, we turn our attention to the problem of identifying as-yet-unknown discontinuities that might result from quantum AI applications. Wise development, introduction, and use of any new technology depends on successfully anticipating new modes of failure for that technology. This requires rigorous efforts to break systems in protected sandboxes, and it must be conducted at all stages of technology design, development, and deployment. Such testing must also be informed by technical expertise but cannot be left solely to experts in the technology because of the history of failures to predict how non-experts will use or adapt to new technologies. This interplay between experts and non-experts may be particularly acute for quantum AI because quantum mechanics is notoriously difficult to understand. (As Richard Feynman quipped, "Anyone who claims to understand quantum mechanics is either lying or crazy.") We will discuss the extent to which the difficulties in understanding the physics underlying quantum computing challenge attempts to anticipate new failure modes that might be introduced in AI applications intended for unsupervised operation in the public sphere.
  47. (1 other version) The value of knowledge and the pursuit of survival. Sherrilyn Roush - 2010 - Metaphilosophy 41 (3):255-278.
    Abstract: Knowledge requires more than mere true belief, and we also tend to think it is more valuable. I explain the added value that knowledge contributes if its extra ingredient beyond true belief is tracking. I show that the tracking conditions are the unique conditions on knowledge that achieve for those who fulfill them a strict Nash Equilibrium and an Evolutionarily Stable Strategy in what I call the True Belief Game. The added value of these properties, intuitively, (...)
    8 citations
  48. The value of thinking and the normativity of logic. Manish Oza - 2020 - Philosophers' Imprint 20 (25):1-23.
    (1) This paper is about how to build an account of the normativity of logic around the claim that logic is constitutive of thinking. I take the claim that logic is constitutive of thinking to mean that representational activity must tend to conform to logic to count as thinking. (2) I develop a natural line of thought about how to develop the constitutive position into an account of logical normativity by drawing on constitutivism in metaethics. (3) I argue that, while (...)
    4 citations
  49. Design for Embedding the Value of Privacy in Personal Information Management Systems. Haleh Asgarinia - 2024 - Journal of Ethics and Emerging Technologies 33 (1):1-19.
    Personal Information Management Systems (PIMS) aim to facilitate the sharing of personal information and protect privacy. Efforts to enhance privacy management, aligned with established privacy policies, have led to guidelines for integrating transparent notices and meaningful choices within these systems. Although discussions have revolved around the design of privacy-friendly systems that comply with legal requirements, there has been relatively limited philosophical discourse on incorporating the value of privacy into these systems. Exploring the connection between privacy and personal autonomy illuminates (...)
  50. Safety’s swamp: Against the value of modal stability. Georgi Gardiner - 2017 - American Philosophical Quarterly 54 (2):119-129.
    An account of the nature of knowledge must explain the value of knowledge. I argue that modal conditions, such as safety and sensitivity, do not confer value on a belief and so any account of knowledge that posits a modal condition as a fundamental constituent cannot vindicate widely held claims about the value of knowledge. I explain the implications of this for epistemology: We must either eschew modal conditions as a fundamental constituent of knowledge, or retain the (...)
    6 citations
1 — 50 / 947