Results for 'Value Alignment'

996 found
  1. An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2021 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Springer Cham. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. Existing approaches appear to conceive of the problem as "how do we ensure that AI solves the problem in the right way", in order to avoid the possibility of AI turning humans into paperclips in order to “make more paperclips” or eradicating the (...)
  2. The Value Alignment Problem.Dan J. Bruiger - manuscript
    The Value Alignment Problem (VAP) presupposes that artificial general intelligence (AGI) is desirable and perhaps inevitable. As usually conceived, it is one side of the more general issue of mutual control between agonistic agents. To be fully autonomous, an AI must be an autopoietic system (an agent), with its own purposiveness. In the case of such systems, Bostrom’s orthogonality thesis is untrue. The VAP reflects the more general problem of interfering in complex systems, entraining the possibility of unforeseen (...)
  3. An Enactive Approach to Value Alignment in Artificial Intelligence: A Matter of Relevance.Michael Cannon - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. pp. 119-135.
    The “Value Alignment Problem” is the challenge of how to align the values of artificial intelligence with human values, whatever they may be, such that AI does not pose a risk to the existence of humans. A fundamental feature of how the problem is currently understood is that AI systems do not take the same things to be relevant as humans, whether turning humans into paperclips in order to “make more paperclips” or eradicating the human race to “solve (...)
  4. The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment.Carlos Montemayor - 2023
    In this open access book, Carlos Montemayor illuminates the development of artificial intelligence (AI) by examining our drive to live a dignified life. He uses the notions of agency and attention to consider our pursuit of what is important. His method shows how the best way to guarantee value alignment between humans and potentially intelligent machines is through attention routines that satisfy similar needs. Setting out a theoretical framework for AI, Montemayor acknowledges its legal, moral, and political (...)
  5. Improve Alignment of Research Policy and Societal Values.Peter Novitzky, Michael J. Bernstein, Vincent Blok, Robert Braun, Tung Tung Chan, Wout Lamers, Anne Loeber, Ingeborg Meijer, Ralf Lindner & Erich Griessler - 2020 - Science 369 (6499):39-41.
    Historically, scientific and engineering expertise has been key in shaping research and innovation policies, with benefits presumed to accrue to society more broadly over time. But there is persistent and growing concern about whether and how ethical and societal values are integrated into R&I policies and governance, as we confront public disbelief in science and political suspicion toward evidence-based policy-making. Erosion of such a social contract with science limits the ability of democratic societies to deal with challenges presented by new, (...)
    3 citations
  6. AI Alignment Problem: “Human Values” don’t Actually Exist.Alexey Turchin - manuscript
    Abstract. The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough (...)
    1 citation
  7. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
    4 citations
  8. The marriage of astrology and AI: A model of alignment with human values and intentions.Kenneth McRitchie - 2024 - Correlation 36 (1):43-49.
    Astrology research has been using artificial intelligence (AI) to improve the understanding of astrological properties and processes. Like the large language models of AI, astrology is also a language model with a similar underlying linguistic structure but with a distinctive layer of lifestyle contexts. Recent research in semantic proximities and planetary dominance models has helped to quantify effective astrological information. As AI learning and intelligence grow, a major concern is with maintaining its alignment with human values and intentions. Astrology (...)
  9. AI, alignment, and the categorical imperative.Fritz McDonald - 2023 - AI and Ethics 3:337-344.
    Tae Wan Kim, John Hooker, and Thomas Donaldson make an attempt, in recent articles, to solve the alignment problem. As they define the alignment problem, it is the issue of how to give AI systems moral intelligence. They contend that one might program machines with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if such machines obey principles of universalization and autonomy, as well as a deontic (...)
  10. Aligning Patient’s Ideas of a Good Life with Medically Indicated Therapies in Geriatric Rehabilitation Using Smart Sensors.Cristian Timmermann, Frank Ursin, Christopher Predel & Florian Steger - 2021 - Sensors 21 (24):8479.
    New technologies such as smart sensors improve rehabilitation processes and thereby increase older adults’ capabilities to participate in social life, leading to direct physical and mental health benefits. Wearable smart sensors for home use have the additional advantage of monitoring day-to-day activities and thereby identifying rehabilitation progress and needs. However, identifying and selecting rehabilitation priorities is ethically challenging because physicians, therapists, and caregivers may impose their own personal values leading to paternalism. Therefore, we develop a discussion template consisting of a (...)
  11. Democratic Values: A Better Foundation for Public Trust in Science.S. Andrew Schroeder - 2021 - British Journal for the Philosophy of Science 72 (2):545-562.
    There is a growing consensus among philosophers of science that core parts of the scientific process involve non-epistemic values. This undermines the traditional foundation for public trust in science. In this article I consider two proposals for justifying public trust in value-laden science. According to the first, scientists can promote trust by being transparent about their value choices. On the second, trust requires that the values of a scientist align with the values of an individual member of the (...)
    20 citations
  12. Contrasting Iqbal’s “Khudi” and Nietzsche’s “Will To Power” to Determine the Legal Alignment of Conscious AI.Ammar Younas & Yi Zeng - manuscript
    As AI edges toward consciousness, the establishment of a robust legal framework becomes essential. This paper advocates for a framework inspired by Allama Muhammad Iqbal's “Khudi”, which prioritizes ethical self-realization and social responsibility over Friedrich Nietzsche’s self-centric “Will to Power”. We propose that conscious AI, reflecting Iqbal’s ethical advancement, should exhibit behaviors aligned with social responsibility and, therefore, be prepared for legal recognition. This approach not only integrates Iqbal's philosophical insights into the legal status of AI but also offers a (...)
  13. Single Valued Neutrosophic HyperSoft Set based on VIKOR Method for 5G Architecture Selection.Florentin Smarandache, M. Ali Ahmed & Ahmed Abdelhafeez - 2024 - International Journal of Neutrosophic Science 23 (2):42-52.
    This work introduces the framework for selecting architecture in 5G networks, considering various technological, performance, economic, and operational factors. With the emergence of 5G technology, the architecture selection process has become pivotal in meeting diverse requirements for ultra-high-speed connectivity, low latency, scalability, and diverse service demands. The evaluation comprehensively analyses different architecture options, including centralized, distributed, cloud-based, and virtualized architectures. Factors such as network performance, scalability, cost-effectiveness, security, and compatibility are considered within a multi-criteria decision-making framework. Findings reveal each architecture (...)
  14. Machines learning values.Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. New York, USA: Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    2 citations
  15. Design for Embedding the Value of Privacy in Personal Information Management Systems.Haleh Asgarinia - 2024 - Journal of Ethics and Emerging Technologies 33 (1):1-19.
    Personal Information Management Systems (PIMS) aim to facilitate the sharing of personal information and protect privacy. Efforts to enhance privacy management, aligned with established privacy policies, have led to guidelines for integrating transparent notices and meaningful choices within these systems. Although discussions have revolved around the design of privacy-friendly systems that comply with legal requirements, there has been relatively limited philosophical discourse on incorporating the value of privacy into these systems. Exploring the connection between privacy and personal autonomy illuminates (...)
  16. Dynamic Cognition Applied to Value Learning in Artificial Intelligence.Nythamar De Oliveira & Nicholas Corrêa - 2021 - Aoristo - International Journal of Phenomenology, Hermeneutics and Metaphysics 4 (2):185-199.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance isn't made with prudence, it can result in negative outcomes for humanity. For this reason, several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence. Currently, several of the open problems in the field of AI research arise from the difficulty of avoiding unwanted (...)
  17. Twisted thinking: Technology, values and critical thinking.Lavinia Marin & Steffen Steinert - 2022 - Prometheus. Critical Studies in Innovation 38 (1):124-140.
    Technology should be aligned with our values. We make the case that attempts to align emerging technologies with our values should reflect critically on these values. Critical thinking seems like a natural starting point for the critical assessment of our values. However, extant conceptualizations of critical thinking carve out no space for the critical scrutiny of values. We will argue that we need critical thinking that focuses on values instead of taking them as unexamined starting points. In order to play (...)
  18. Colonialist Values in Animal Crossing and Their Implications for Conservation.Alexis D. Smith - 2022 - Highlights of Sustainability 1 (1):129–133.
    In the Nintendo game Animal Crossing: New Horizons, players move to an uninhabited island and quickly become instrumental to the naming, aesthetic development, and biodiversity of the island. In some ways, the game can foster a love for and curiosity about nature. In other ways, the game reinforces harmful colonialist values and attitudes that are ultimately an obstacle to conservation in the real world. Here I critique the game values relevant to conservation, both the values that benefit and the values (...)
  19. The genetic technologies questionnaire: lay judgments about genetic technologies align with ethical theory, are coherent, and predict behaviour.Svenja Küchenhoff, Johannes Doerflinger & Nora Heinzelmann - 2022 - BMC Medical Ethics 23 (54):1-14.
    Policy regulations of ethically controversial genetic technologies should, on the one hand, be based on ethical principles. On the other hand, they should be socially acceptable to ensure implementation. In addition, they should align with ethical theory. Yet to date we lack a reliable and valid scale to measure the relevant ethical judgements in laypeople. We target this lacuna. We developed a scale based on ethical principles to elicit lay judgments: the Genetic Technologies Questionnaire (GTQ). In two pilot (...)
  20. Attention to Values Helps Shape Convergence Research.Casey Helgeson, Robert E. Nicholas, Klaus Keller, Chris E. Forest & Nancy Tuana - 2022 - Climatic Change 170.
    Convergence research is driven by specific and compelling problems and requires deep integration across disciplines. The potential of convergence research is widely recognized, but questions remain about how to design, facilitate, and assess such research. Here we analyze a seven-year, twelve-million-dollar convergence project on sustainable climate risk management to answer two questions. First, what is the impact of a project-level emphasis on the values that motivate and tie convergence research to the compelling problems? Second, how does participation in convergence projects (...)
  21. The Role of Engineers in Harmonising Human Values for AI Systems Design.Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
    1 citation
  22. Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition.Adrià Moret - 2023 - Journal of Artificial Intelligence and Consciousness 10 (02):309-334.
    Ambitious value learning proposals to solve the AI alignment problem and avoid catastrophic outcomes from a possible future misaligned artificial superintelligence (such as Coherent Extrapolated Volition [CEV]) have focused on ensuring that an artificial superintelligence (ASI) would try to do what humans would want it to do. However, present and future sentient non-humans, such as non-human animals and possible future digital minds could also be affected by the ASI’s behaviour in morally relevant ways. This paper puts forward Sentientist (...)
  23. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large (...)
  24. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values.Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. 21 authors were overviewed, and some of them have several theories. A (...)
  25. ChatGPT: towards AI subjectivity.Kristian D’Amato - 2024 - AI and Society 39:1-15.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current (...)
  26. The 1 law of "absolute reality". - manuscript
  27. Human-Centered AI: The Aristotelian Approach.Jacob Sparks & Ava Wright - 2023 - Divus Thomas 126 (2):200-218.
    As we build increasingly intelligent machines, we confront difficult questions about how to specify their objectives. One approach, which we call human-centered, tasks the machine with the objective of learning and satisfying human objectives by observing our behavior. This paper considers how human-centered AI should conceive the humans it is trying to help. We argue that an Aristotelian model of human agency has certain advantages over the currently dominant theory drawn from economics.
  28. Shortcuts to Artificial Intelligence.Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions, that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    2 citations
  29. Deontology and Safe Artificial Intelligence.William D'Alessandro - forthcoming - Philosophical Studies.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
  30. Augustine and an artificial soul.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Prior work proposes a view of development of purpose and source of meaning in life as a more or less temporally distal project ideal self-situation in terms of which intermediate situations are experienced and prospects evaluated. This work considers Augustine on ensoulment alongside current work into self as adapted routines to common social regularities of the sort that Augustine found deficient. How can we account for such diversity of self-reported value orientation in terms of common structural dynamics differently developed, (...)
  31. Modelos Dinâmicos Aplicados à Aprendizagem de Valores em Inteligência Artificial.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2020 - Veritas – Revista de Filosofia da Pucrs 2 (65):1-15.
    Experts in Artificial Intelligence (AI) development predict that advances in the development of intelligent systems and agents will reshape vital areas in our society. Nevertheless, if such an advance is not made prudently and critically-reflexively, it can result in negative outcomes for humanity. For this reason, several researchers in the area have developed a robust, beneficial, and safe concept of AI for the preservation of humanity and the environment. Currently, several of the open problems in the field of AI research (...)
  32. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems.Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the notion of the (...)
    1 citation
  33. How does Artificial Intelligence Pose an Existential Risk?Karina Vold & Daniel R. Harris - 2023 - In Carissa Véliz (ed.), The Oxford Handbook of Digital Ethics. Oxford University Press.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
    1 citation
  34. Quantum of Wisdom.Colin Allen & Brett Karlan - 2022 - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. pp. 157-166.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019), have emphasized continuity, arguing that quantum computing does not substantially affect approaches to (...) alignment methods for AI, although they allow that further questions arise concerning governance and verification of quantum AI applications. In this brief paper, we turn our attention to the problem of identifying as-yet-unknown discontinuities that might result from quantum AI applications. Wise development, introduction, and use of any new technology depends on successfully anticipating new modes of failure for that technology. This requires rigorous efforts to break systems in protected sandboxes, and it must be conducted at all stages of technology design, development, and deployment. Such testing must also be informed by technical expertise but cannot be left solely to experts in the technology because of the history of failures to predict how non-experts will use or adapt to new technologies. This interplay between experts and non-experts may be particularly acute for quantum AI because quantum mechanics is notoriously difficult to understand. (As Richard Feynman quipped, "Anyone who claims to understand quantum mechanics is either lying or crazy.") We will discuss the extent to which the difficulties in understanding the physics underlying quantum computing challenge attempts to anticipate new failure modes that might be introduced in AI applications intended for unsupervised operation in the public sphere.
  35. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  36. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  37. Just the Right Thickness: A Defense of Second-Wave Virtue Epistemology.Guy Axtell & J. Adam Carter - 2008 - Philosophical Papers 37 (3):413-434.
    Abstract: Do the central aims of epistemology, like those of moral philosophy, require that we designate some important place for those concepts located between the thin-normative and the non-normative? Put another way, does epistemology need "thick" evaluative concepts and with what do they contrast? There are inveterate traditions in analytic epistemology which, having legitimized a certain way of viewing the nature and scope of epistemology's subject matter, give this question a negative verdict; further, they have carried with them a tacit (...)
    14 citations
  38. Let's Not Do Responsibility Skepticism.Ken M. Levy - 2023 - Journal of Applied Philosophy 40 (3):458-73.
    I argue for three conclusions. First, responsibility skeptics are committed to the position that the criminal justice system should adopt a universal nonresponsibility excuse. Second, a universal nonresponsibility excuse would diminish some of our most deeply held values, further dehumanize criminals, exacerbate mass incarceration, and cause an even greater number of innocent people (nonwrongdoers) to be punished. Third, while Saul Smilansky's ‘illusionist’ response to responsibility skeptics – that even if responsibility skepticism is correct, society should maintain a responsibility‐realist/retributivist criminal justice (...)
    1 citation
  39. Epistemic Injustice in Research Evaluation: A Cultural Analysis of the Humanities and Physics in Estonia.Endla Lõhkivi, Katrin Velbaum & Jaana Eigi - 2012 - Studia Philosophica Estonica 5 (2):108-132.
    This paper explores the issue of epistemic injustice in research evaluation. Through an analysis of the disciplinary cultures of physics and humanities, we attempt to identify some aims and values specific to the disciplinary areas. We suggest that credibility is at stake when the cultural values and goals of a discipline contradict those presupposed by official evaluation standards. Disciplines that are better aligned with the epistemic assumptions of evaluation standards appear to produce more "scientific" findings. To restore epistemic justice in (...)
    4 citations
  40. Relationship Between Corporate Governance and Information Security Governance Effectiveness in United States Corporations.Dr Robert E. Davis - 2017 - Dissertation, Walden
    Cyber attackers targeting large corporations achieved a high perimeter penetration success rate during 2013, resulting in many corporations incurring financial losses. Corporate information technology leaders have a fiduciary responsibility to implement information security domain processes that effectually address the challenges for preventing and deterring information security breaches. Grounded in corporate governance theory, the purpose of this correlational study was to examine the relationship between strategic alignment, resource management, risk management, value delivery, performance measurement implementations, and information security governance (...)
  41. Look who’s talking: Responsible Innovation, the paradox of dialogue and the voice of the other in communication and negotiation processes.Vincent Blok - 2014 - Journal of Responsible Innovation 1 (2):171-190.
    In this article, we develop a concept of stakeholder dialogue in responsible innovation (RI) processes. The problem with most concepts of communication is that they rely on ideals of openness, alignment and harmony, even while these ideals are rarely realized in practice. Based on the work of Burke, Habermas, Deetz and Levinas, we develop a concept of stakeholder dialogue that is able to deal with fundamentally different interests and value frames of actors involved in RI processes. We distinguish (...)
  42. From Participation to Interruption : Toward an ethics of stakeholder engagement, participation and partnership in corporate social responsibility and responsible innovation.V. Blok - 2019 - In René von Schomberg & Jonathan Hankins (eds.), International Handbook on Responsible Innovation. A global resource. Cheltenham, Royaume-Uni: Edward Elgar Publishing.
    Contrary to the tendency to harmony, consensus and alignment among stakeholders in most of the literature on participation and partnership in corporate social responsibility and responsible innovation practices, in this chapter we ask which concept of participation and partnership is able to account for stakeholder engagement while acknowledging and appreciating their fundamentally different judgements, value frames and viewpoints. To this end, we reflect on a non-reductive and ethical approach to stakeholder engagement, collaboration and partnership, inspired by the philosophy (...)
  43. Performing agency theory and the neoliberalization of the state.Tim Christiaens - 2020 - Critical Sociology 46 (3):393-411.
    According to Streeck and Vogl, the neoliberalization of the state has been the result of political-economic developments that render the state dependent on financial markets. However, they do not explain the discursive shifts that would have been required for demoting the state to the role of an agent to bondholders. I propose to explain this shift via the performative effect of neoliberal agency theory. In 1976, Michael Jensen and William Meckling claimed that corporate managers are agents to shareholding principals, which (...)
  44. A New Look into Peter Townsend’s Holy Grail: The Theory and Measure of Poverty as Relative Deprivation.Samuel Maia - 2024 - Dissertation, Federal University of Minas Gerais
    The development of the science of poverty has largely been driven by the need to define more precisely what poverty is, as well as to provide theoretical and empirical criteria for identifying those who suffer from it. This thesis focuses on a notable response to these and related questions: the conception and measure of poverty by the British sociologist Peter Townsend. Townsend defines poverty as relative deprivation caused by lack of resources. This conception, along with his corresponding cut-off measure, constitutes (...)
  45. The limits of conventional justification: inductive risk and industry bias beyond conventionalism.Miguel Ohnesorge - 2020 - Frontiers in Research Metrics and Analytics 14.
    This article develops a constructive criticism of methodological conventionalism. Methodological conventionalism asserts that standards of inductive risk ought to be justified in virtue of their ability to facilitate coordination in a research community. On that view, industry bias occurs when conventional methodological standards are violated to foster industry preferences. The underlying account of scientific conventionality, however, is problematically incomplete. Conventions may be justified in virtue of their coordinative functions, but often qualify for posterior empirical criticism as research advances. Accordingly, industry (...)
  46. Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics.Michelle Seng Ah Lee, Luciano Floridi & Jatinder Singh - 2021 - AI and Ethics 3.
    There is growing concern that decision-making informed by machine learning (ML) algorithms may unfairly discriminate based on personal demographic attributes, such as race and gender. Scholars have responded by introducing numerous mathematical definitions of fairness to test the algorithm, many of which are in conflict with one another. However, these reductionist representations of fairness often bear little resemblance to real-life fairness considerations, which in practice are highly contextual. Moreover, fairness metrics tend to be implemented in narrow and targeted toolkits that (...)
  47. Critical Responsiveness: How Epistemic Ideology Critique Can Make Normative Legitimacy Empirical Again.Enzo Rossi - forthcoming - Social Philosophy and Policy.
    This paper outlines an empirically-grounded account of normative political legitimacy. The main idea is to give a normative edge to empirical measures of sociological legitimacy through a non-moralised form of ideology critique. A power structure’s responsiveness to the values of those subjected to its authority can be measured empirically and may be explanatory or predictive insofar as it tracks belief in legitimacy, but by itself it lacks normative purchase: it merely describes a preference alignment, and so tells us nothing (...)
  48. Il relativismo etico fra antropologia culturale e filosofia analitica.Sergio Volodia Marcello Cremaschi - 2007 - In Ilario Tolomio, Sergio Cremaschi, Antonio Da Re, Italo Francesco Baldo, Gian Luigi Brena, Giovanni Chimirri, Giovanni Giordano, Markus Krienke, Gian Paolo Terravecchia, Giovanna Varani, Lisa Bressan, Flavia Marcacci, Saverio Di Liso, Alice Ponchio, Edoardo Simonetti, Marco Bastianelli, Gian Luca Sanna, Valentina Caffieri, Salvatore Muscolino, Fabio Schiappa, Stefania Miscioscia, Renata Battaglin & Rossella Spinaci (eds.), Rileggere l'etica tra contingenza e principi. Ilario Tolomio (ed.). Padova: CLUEP. pp. 15-46.
    I intend to: a) clarify the origins and de facto meanings of the term relativism; b) reconstruct the reasons for the birth of the thesis named “cultural relativism”; c) reconstruct the ethical implications of the above thesis; d) revisit the recent discussion between universalists and particularists in the light of the idea of cultural relativism. 1. Prescriptive Moral Relativism: “everybody is justified in acting in the way imposed by criteria accepted by the group he belongs to”. Universalism: there are at least (...)
  49. Public Attitudes Toward Cognitive Enhancement.Nicholas S. Fitz, Roland Nadler, Praveena Manogaran, Eugene W. J. Chong & Peter B. Reiner - 2013 - Neuroethics 7 (2):173-188.
    Vigorous debate over the moral propriety of cognitive enhancement exists, but the views of the public have been largely absent from the discussion. To address this gap in our knowledge, four experiments were carried out with contrastive vignettes in order to obtain quantitative data on public attitudes towards cognitive enhancement. The data collected suggest that the public is sensitive to and capable of understanding the four cardinal concerns identified by neuroethicists, and tend to cautiously accept cognitive enhancement even as they (...)
  50. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral assumptions of (...)
1 — 50 / 996