Results for 'Value Alignment Problem'

991 found
  1. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough (...)
    1 citation
  2. An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” (...)
  3. An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. A fundamental feature of how the problem is currently understood is that AI systems do not take the same things to be relevant as humans, whether turning humans into paperclips in order to “make more paperclips” or eradicating the human race (...)
  4. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as (...)
  5. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward (...)
  6. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  7. Dynamic Cognition Applied to Value Learning in Artificial Intelligence.Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance isn't done with prudence, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted (...)
  8. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts (...)
  9. Attention to Values Helps Shape Convergence Research.Casey Helgeson, Robert E. Nicholas, Klaus Keller, Chris E. Forest & Nancy Tuana - 2022 - Climatic Change 170.
    Convergence research is driven by specific and compelling problems and requires deep integration across disciplines. The potential of convergence research is widely recognized, but questions remain about how to design, facilitate, and assess such research. Here we analyze a seven-year, twelve-million-dollar convergence project on sustainable climate risk management to answer two questions. First, what is the impact of a project-level emphasis on the values that motivate and tie convergence research to the compelling problems? Second, how does participation in convergence projects (...)
  10. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that (...)
  11. Deontology and Safe Artificial Intelligence.William D'Alessandro - forthcoming - Philosophical Studies.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
  12. How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  13. Affect, Value and Problems Assessing Decision-Making Capacity.Jennifer Hawkins - forthcoming - American Journal of Bioethics:1-12.
    The dominant approach to assessing decision-making capacity in medicine focuses on determining the extent to which individuals possess certain core cognitive abilities. Critics have argued that this model delivers the wrong verdict in certain cases where patient values that are the product of mental disorder or disordered affective states undermine decision-making without undermining cognition. I argue for a re-conceptualization of what it is to possess the capacity to make medical treatment decisions. It is, I argue, the ability to track one’s (...)
    1 citation
  14. The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.Carlos Montemayor - 2023
    In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political (...)
  15. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  16. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  17. Improve Alignment of Research Policy and Societal Values.Peter Novitzky, Michael J. Bernstein, Vincent Blok, Robert Braun, Tung Tung Chan, Wout Lamers, Anne Loeber, Ingeborg Meijer, Ralf Lindner & Erich Griessler - 2020 - Science 369 (6499):39-41.
    Historically, scientific and engineering expertise has been key in shaping research and innovation policies, with benefits presumed to accrue to society more broadly over time. But there is persistent and growing concern about whether and how ethical and societal values are integrated into R&I policies and governance, as we confront public disbelief in science and political suspicion toward evidence-based policy-making. Erosion of such a social contract with science limits the ability of democratic societies to deal with challenges presented by new, (...)
    3 citations
  18. Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da Pucrs 2 (65):1-15.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  19. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    2 citations
  20. Misrelating values and empirical matters in conservation: A problem and solutions.Matthew J. Barker & Dylan J. Fraser - 2023 - Biological Conservation 281.
    We uncover a largely unnoticed and unaddressed problem in conservation research: arguments built within studies are sometimes defective in more fundamental and specific ways than appreciated, because they misrelate values and empirical matters. We call this the unraveled rope problem because just as strands of rope must be properly and intricately wound with each other so the rope supports its load, empirical aspects and value aspects of an argument must be related intricately and properly if the argument (...)
  21. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems.Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the notion of the (...)
    1 citation
  22. Quantum of Wisdom.Colin Allen & Brett Karlan - 2022 - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. pp. 157-166.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to (...) alignment methods for AI, although they allow that further questions arise concerning governance and verification of quantum AI applications. In this brief paper, we turn our attention to the problem of identifying as-yet-unknown discontinuities that might result from quantum AI applications. Wise development, introduction, and use of any new technology depend on successfully anticipating new modes of failure for that technology. This requires rigorous efforts to break systems in protected sandboxes, and it must be conducted at all stages of technology design, development, and deployment. Such testing must also be informed by technical expertise but cannot be left solely to experts in the technology because of the history of failures to predict how non-experts will use or adapt to new technologies. This interplay between experts and non-experts may be particularly acute for quantum AI because quantum mechanics is notoriously difficult to understand. (As Richard Feynman quipped, "Anyone who claims to understand quantum mechanics is either lying or crazy.") We will discuss the extent to which the difficulties in understanding the physics underlying quantum computing challenge attempts to anticipate new failure modes that might be introduced in AI applications intended for unsupervised operation in the public sphere.
  23. On Value and Obligation in Practical Reason: Toward a Resolution of the Is–Ought Problem in the Thomistic Moral Tradition.William Matthew Diem - 2021 - Nova et Vetera 19 (2): 531-562.
    Within the Thomistic moral tradition, the is-ought gap is regularly treated as identical to the fact-value gap, and these two dichotomies are also regularly treated as being identical to Aristotle and Aquinas’s distinction between the practical and speculative intellect. The question whether (and if so, how) practical (‘ought’) knowledge derives from speculative (‘is’) knowledge has driven some of the fiercest disputes among the schools of Thomistic natural lawyers. I intend to show that both of these identifications are wrong and (...)
  24. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
    6 citations
  25. An Expected Value Approach to the Dual-Use Problem.Thomas Douglas - 2013 - In Selgelid Michael & Rappert Brian (eds.), On the Dual Uses of Science and Ethics. Australian National University Press.
    In this chapter I examine how expected-value theory might inform responses to what I call the dual-use problem. I begin by defining that problem. I then outline a procedure, which invokes expected-value theory, for tackling it. I first illustrate the procedure with the aid of a simplified schematic example of a dual-use problem, and then describe how it might also guide responses to more complex real-world cases. I outline some attractive features of the procedure. Finally, (...)
    2 citations
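The expected-value procedure described in entry 25 can be sketched in a few lines: assign each available response a set of possible outcomes with probabilities and values, then prefer the response with the highest expected value. This is a generic illustration, not the chapter's own procedure; the options ("publish" vs. "withhold") and all probabilities and payoffs are hypothetical placeholders.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs for one option."""
    return sum(p * v for p, v in outcomes)

# Hypothetical payoff tables for two responses to a dual-use finding.
publish = [(0.90, 10.0),    # likely benefit of open publication
           (0.10, -50.0)]   # small chance of serious misuse
withhold = [(1.00, 2.0)]    # modest, near-certain benefit

options = {"publish": publish, "withhold": withhold}
best = max(options, key=lambda name: expected_value(options[name]))
print(best, expected_value(publish), expected_value(withhold))
```

With these made-up numbers, publishing has the higher expected value (4.0 vs. 2.0) even though it carries the worst single outcome; changing the misuse probability flips the recommendation, which is the kind of sensitivity such a procedure makes explicit.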
  26. Robustness to Fundamental Uncertainty in AGI Alignment.G. G. Worley Iii - 2020 - Journal of Consciousness Studies 27 (1-2):225-241.
    The AGI alignment problem has a bimodal distribution of outcomes with most outcomes clustering around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else equal, prefer false negatives (ignoring research programs that would have been successful) to false positives (pursuing research programs that will unexpectedly fail). Thus, we propose adopting a policy of responding to points of philosophical and practical uncertainty associated with the alignment problem by (...)
  27. How Values Shape the Machine Learning Opacity Problem.Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation. Routledge. pp. 306-322.
    One of the main worries with machine learning model opacity is that we cannot know enough about how the model works to fully understand the decisions it makes. But how much is model opacity really a problem? This chapter argues that the problem of machine learning model opacity is entangled with non-epistemic values. The chapter considers three different stages of the machine learning modeling process that correspond to understanding phenomena: (i) model acceptance and linking the model to the (...)
  28. The marriage of astrology and AI: A model of alignment with human values and intentions.Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models have helped to quantify effective astrological information. As AI learning and intelligence grows, a major concern is with maintaining its alignment with human values and intentions. Astrology (...)
  29. Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.
    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes (...)
    10 citations
  30. Similarity Measure of Refined Single-Valued Neutrosophic Sets and Its Multicriteria Decision Making Method.Jun Ye & Florentin Smarandache - 2016 - Neutrosophic Sets and Systems 12:41-44.
    This paper introduces a refined single-valued neutrosophic set (RSVNS) and presents a similarity measure of RSVNSs. Then a multicriteria decision-making method with RSVNS information is developed based on the similarity measure of RSVNSs. By the similarity measure between each alternative and the ideal solution (ideal alternative), all the alternatives can be ranked and the best one can be selected as well. Finally, an actual example on the selecting problems of construction projects demonstrates the application and effectiveness of the proposed method.
    2 citations
  31. Look who’s talking: Responsible Innovation, the paradox of dialogue and the voice of the other in communication and negotiation processes.Vincent Blok - 2014 - Journal of Responsible Innovation 1 (2):171-190.
    In this article, we develop a concept of stakeholder dialogue in responsible innovation (RI) processes. The problem with most concepts of communication is that they rely on ideals of openness, alignment and harmony, even while these ideals are rarely realized in practice. Based on the work of Burke, Habermas, Deetz and Levinas, we develop a concept of stakeholder dialogue that is able to deal with fundamentally different interests and value frames of actors involved in RI processes. We (...)
    24 citations
  32. The Epistemic Value of Conscious Acquaintance: A Problem for Reductive Physicalism.Adam Pautz - manuscript
    We take it that conscious acquaintance has great epistemic value. I develop a new problem for reductive physicalism concerning the epistemic value of acquaintance. The problem concerns "multiple candidate cases". (This develops a theme of my paper "The Significance Argument for the Irreducibility of Consciousness", Philosophical Perspectives 2017.)
  33. The Phronetic Approach to Politics: Values and Limits.Damian Williams - manuscript
    A phronetic approach takes into account everything possible. By this, the phronetic researcher ought to be better-informed of the practical—that which is readily available in order to solve localized political problems and to direct political participants to think in terms of value-rational understanding and action. Phronetic knowledge ought to be of utility to the citizenry—and not only to academia. It does not only explain phenomena, but also provides for altering the outcomes associated with political phenomena by integrating value (...)
  34. Aligning with the Good.Benjamin Mitchell-Yellin - 2015 - Journal of Ethics and Social Philosophy (2):1-8.
    In “Constructivism, Agency, and the Problem of Alignment,” Michael Bratman considers how lessons from the philosophy of action bear on the question of how best to construe the agent’s standpoint in the context of a constructivist theory of practical reasons. His focus is “the problem of alignment”: “whether the pressures from the general constructivism will align with the pressures from the theory of agency” (Bratman 2012: 81). He thus brings two lively literatures into dialogue with each (...)
    1 citation
  35. On Pritchard, Objectual Understanding and the Value Problem.J. Adam Carter & Emma C. Gordon - forthcoming - American Philosophical Quarterly.
    Duncan Pritchard (2008, 2009, 2010, forthcoming) has argued for an elegant solution to what have been called the value problems for knowledge at the forefront of recent literature on epistemic value. As Pritchard sees it, these problems dissolve once it is recognized that it is understanding-why, not knowledge, that bears the distinctive epistemic value often (mistakenly) attributed to knowledge. A key element of Pritchard’s revisionist argument is the claim that understanding-why always involves what he calls strong (...)
    4 citations
  36. Aligning Patient’s Ideas of a Good Life with Medically Indicated Therapies in Geriatric Rehabilitation Using Smart Sensors.Cristian Timmermann, Frank Ursin, Christopher Predel & Florian Steger - 2021 - Sensors 21 (24):8479.
    New technologies such as smart sensors improve rehabilitation processes and thereby increase older adults’ capabilities to participate in social life, leading to direct physical and mental health benefits. Wearable smart sensors for home use have the additional advantage of monitoring day-to-day activities and thereby identifying rehabilitation progress and needs. However, identifying and selecting rehabilitation priorities is ethically challenging because physicians, therapists, and caregivers may impose their own personal values leading to paternalism. Therefore, we develop a discussion template consisting of a (...)
  37. Value Capture.Christopher Nguyen - 2024 - Journal of Ethics and Social Philosophy 27 (3).
    Value capture occurs when an agent’s values are rich and subtle; they enter a social environment that presents simplified — typically quantified — versions of those values; and those simplified articulations come to dominate their practical reasoning. Examples include becoming motivated by FitBit’s step counts, Twitter Likes and Re-tweets, citation rates, ranked lists of best schools, and Grade Point Averages. We are vulnerable to value capture because of the competitive advantage that such crisp and clear expressions of (...) have in our private reasoning and our public justification. There is, however, a price. In value capture, we take a central component of our autonomy — our ongoing deliberation over the exact articulation of our values — and we outsource it. And the metrics to which we outsource are usually engineered for the interests of some external force, like a large-scale institution’s interest in cross-contextual comprehensibility and quick aggregability. That outsourcing cuts off one of the key benefits of personal deliberation. In value capture, we no longer adjust our values and their articulations in light of our own rich experience of the world. Our values should often be carefully tailored to our particular selves or our small-scale communities, but in value capture, we buy our values off the rack. In some cases – like decreasing CO2 emissions – the costs of non-tailored values are outweighed by the benefit of precise collective coordination. In other cases, like in our aesthetic lives, they are not. This suggests that we should want different values suited to different scales. We should want value federalism. Some values are perhaps best pursued at the largest-scale level, others at smaller scales. The problem occurs when we exhibit an excess preference for the largest-scale values – when we consistently let the universal metrics swamp our quieter interests.
    1 citation
  38. Just the Right Thickness: A Defense of Second-Wave Virtue Epistemology.Guy Axtell & J. Adam Carter - 2008 - Philosophical Papers 37 (3):413-434.
    Do the central aims of epistemology, like those of moral philosophy, require that we designate some important place for those concepts located between the thin-normative and the non-normative? Put another way, does epistemology need "thick" evaluative concepts and with what do they contrast? There are inveterate traditions in analytic epistemology which, having legitimized a certain way of viewing the nature and scope of epistemology's subject matter, give this question a negative verdict; further, they have carried with them a tacit (...)
    14 citations
  39. Epistemic Value, Duty, and Virtue.Guy Axtell - forthcoming - In Brian C. Barnett (ed.), Introduction to Philosophy: Epistemology. Rebus Community.
    This chapter introduces some central issues in Epistemology, and, like others in the open textbook series Introduction to Philosophy, is set up for rewarding college classroom use, with discussion/reflection questions matched to clearly-stated learning objectives, a brief glossary of the introduced/bolded terms/concepts, links to further open source readings as a next step, and a readily-accessible outline of the classic debate between William Clifford and William James over the "ethics of belief." The chapter introduces questions of epistemic value through Plato's famous (...)
    Bookmark   1 citation  
  40. Democratic Values: A Better Foundation for Public Trust in Science.S. Andrew Schroeder - 2021 - British Journal for the Philosophy of Science 72 (2):545-562.
    There is a growing consensus among philosophers of science that core parts of the scientific process involve non-epistemic values. This undermines the traditional foundation for public trust in science. In this article I consider two proposals for justifying public trust in value-laden science. According to the first, scientists can promote trust by being transparent about their value choices. On the second, trust requires that the values of a scientist align with the values of an individual member of the (...)
    Bookmark   21 citations  
  41. The Undetectable Difference: An Experimental Look at the ‘Problem’ of p-Values.William M. Goodman - 2010 - Statistical Literacy Website/Papers: www.statlit.org/pdf/2010GoodmanASA.pdf.
    In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter warnings are published periodically. But the core problem is not with p-values, per se. A finding that “p-value is less than α” could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ (...)
  42. Epistemic value in the subpersonal vale.J. Adam Carter & Robert D. Rupert - 2020 - Synthese 198 (10):9243-9272.
    A vexing problem in contemporary epistemology—one with origins in Plato’s Meno—concerns the value of knowledge, and in particular, whether and how the value of knowledge exceeds the value of mere true opinion. The recent literature is deeply divided on the matter of how best to address the problem. One point, however, remains unquestioned: that if a solution is to be found, it will be at the personal level, the level at which states of subjects or (...)
    Bookmark   5 citations  
  43. A Peripatetic argument for the intrinsic value of human life: Alexander of Aphrodisias' Ethical Problems I.Javier Echeñique - 2021 - Apeiron: A Journal for Ancient Philosophy and Science 54 (3):367-384.
    In this article I argue for the thesis that Alexander's main argument, in Ethical Problems I, is an attempt to block the implication drawn by the Stoics and other ancient philosophers from the double potential of use exhibited by human life, a life that can be either well or badly lived. Alexander wants to resist the thought that this double potential of use allows the Stoics to infer that human life, in itself, or by its own nature, is neither good (...)
  44. Many-valued logics. A mathematical and computational introduction.Luis M. Augusto - 2020 - London: College Publications.
    2nd edition. Many-valued logics are those logics that have more than the two classical truth values, to wit, true and false; in fact, they can have from three to infinitely many truth values. This property, together with truth-functionality, provides a powerful formalism to reason in settings where classical logic—as well as other non-classical logics—is of no avail. Indeed, originally motivated by philosophical concerns, these logics soon proved relevant for a plethora of applications ranging from switching theory to cognitive modeling, and (...)
    Bookmark   2 citations  
  45. Knowledge as a Thick Concept: New Light on the Gettier and Value Problems.Brent G. Kyle - 2011 - Dissertation, Cornell University
    I argue that knowledge is a particular kind of concept known as a thick concept. Examples of thick concepts include courage, generosity, loyalty, brutality, and so forth. These concepts are commonly said to combine both evaluation and description, and one of the main goals of this dissertation is to provide a new account of how a thick concept combines these elements. It is argued that thick concepts are semantically evaluative, and that they combine evaluation and description in a way similar (...)
    Bookmark   1 citation  
  46. Value Judgements and Value Neutrality in Economics.Philippe Mongin - 2006 - Economica 73 (290):257-286.
    The paper analyses economic evaluations by distinguishing evaluative statements from actual value judgments. From this basis, it compares four solutions to the value neutrality problem in economics. After rebutting the strong theses about neutrality (normative economics is illegitimate) and non-neutrality (the social sciences are value-impregnated), the paper settles the case between the weak neutrality thesis (common in welfare economics) and a novel, weak non-neutrality thesis that extends the realm of normative economics more widely than the other (...)
    Bookmark   17 citations  
  47. Choosing Values? Williams Contra Nietzsche.Matthieu Queloz - 2021 - Philosophical Quarterly 71 (2):286-307.
    Amplifying Bernard Williams’ critique of the Nietzschean project of a revaluation of values, this paper mounts a critique of the idea that whether values will help us to live can serve as a criterion for choosing which values to live by. I explore why it might not serve as a criterion and highlight a number of further difficulties faced by the Nietzschean project. I then come to Nietzsche's defence, arguing that if we distinguish valuations from values, there is at least (...)
    Bookmark   6 citations  
  48. Infinite Value and the Best of All Possible Worlds.Nevin Climenhaga - 2018 - Philosophy and Phenomenological Research 97 (2):367-392.
    A common argument for atheism runs as follows: God would not create a world worse than other worlds he could have created instead. However, if God exists, he could have created a better world than this one. Therefore, God does not exist. In this paper I challenge the second premise of this argument. I argue that if God exists, our world will continue without end, with God continuing to create value-bearers, and sustaining and perfecting the value-bearers he has (...)
    Bookmark   13 citations  
  49. Value and Idiosyncratic Fitting Attitudes.Conor McHugh & Jonathan Way - 2023 - In Chris Howard & R. A. Rowland (eds.), Fittingness. OUP.
    Norm-attitude accounts of value say that for something to be valuable is for there to be norms that support valuing that thing. For example, according to fitting-attitude accounts, something is of value if it is fitting to value, and according to buck-passing accounts, something is of value if the reasons support valuing it. Norm-attitude accounts face the partiality problem: in cases of partiality, what it is fitting to value, and what the reasons support valuing, (...)
    Bookmark   3 citations  
  50. Persons and the satisfaction of preferences: Problems in the rational kinematics of values.Duncan MacIntosh - 1993 - Journal of Philosophy 90 (4):163-180.
    If one can get the targets of one's current wants only by acquiring new wants (as in the Prisoner's Dilemma), is it rational to do so? Arguably not. For this could justify adopting unsatisfiable wants, violating the rational duty to maximize one's utility. Further, why cause a want's target if one will not then want it? And people "are" their wants. So if these change, people will not survive to enjoy their wants' targets. I reply that one rationally need not (...)
    Bookmark   4 citations  
1 — 50 / 991