Results for 'learning bias'

951 found
  1. Varieties of Bias.Gabbrielle M. Johnson - 2024 - Philosophy Compass (7):e13011.
    The concept of bias is pervasive in both popular discourse and empirical theorizing within philosophy, cognitive science, and artificial intelligence. This widespread application threatens to render the concept too heterogeneous and unwieldy for systematic investigation. This article explores recent philosophical literature attempting to identify a single theoretical category—termed ‘bias’—that could be unified across different contexts. To achieve this aim, the article provides a comprehensive review of theories of bias that are significant in the fields of philosophy of (...)
  2. Hindsight bias is not a bias.Brian Hedden - 2019 - Analysis 79 (1):43-52.
    Humans typically display hindsight bias. They are more confident that the evidence available beforehand made some outcome probable when they know the outcome occurred than when they don't. There is broad consensus that hindsight bias is irrational, but this consensus is wrong. Hindsight bias is generally rationally permissible and sometimes rationally required. The fact that a given outcome occurred provides both evidence about what the total evidence available ex ante was, and also evidence about what that evidence (...)
    18 citations
  3. Learning in the social being system.Zoe Jenkin & Lori Markson - 2024 - Behavioral and Brain Sciences 47:e132.
    We argue that the core social being system is unlike other core systems in that it participates in frequent, widespread learning. As a result, the social being system is less constant throughout the lifespan and less informationally encapsulated than other core systems. This learning supports the development of the precursors of bias, but also provides avenues for preempting it.
  4. Egocentric Bias and Doubt in Cognitive Agents.Nanda Kishore Sreenivas & Shrisha Rao - forthcoming - 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 2019.
    Modeling social interactions based on individual behavior has always been an area of interest, but prior literature generally presumes rational behavior. Thus, such models may miss out on capturing the effects of biases humans are susceptible to. This work presents a method to model egocentric bias, the real-life tendency to emphasize one's own opinion heavily when presented with multiple opinions. We use a symmetric distribution, centered at an agent's own opinion, as opposed to the Bounded Confidence (BC) model used (...)
    1 citation
  5. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  6. Egalitarian Machine Learning.Clinton Castro, David O’Brien & Ben Schwan - 2023 - Res Publica 29 (2):237–264.
    Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian (...)
    2 citations
  7. Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.Benjamin Davies & Thomas Douglas - 2022 - In Jesper Ryberg & Julian V. Roberts (eds.), Sentencing and Artificial Intelligence. Oxford: OUP.
    It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring (...)
    3 citations
  8. Learning as Hypothesis Testing: Learning Conditional and Probabilistic Information.Jonathan Vandenburgh - manuscript
    Complex constraints like conditionals ('If A, then B') and probabilistic constraints ('The probability that A is p') pose problems for Bayesian theories of learning. Since these propositions do not express constraints on outcomes, agents cannot simply conditionalize on the new information. Furthermore, a natural extension of conditionalization, relative information minimization, leads to many counterintuitive predictions, evidenced by the sundowners problem and the Judy Benjamin problem. Building on the notion of a `paradigm shift' and empirical research in psychology and economics, (...)
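    A compact statement of the two Bayesian update rules at issue (a sketch in standard notation, not drawn from the manuscript itself): conditionalization on a learned proposition E sets the new credence function to the old one conditioned on E, while relative information minimization selects, among all distributions satisfying the learned constraint, the one closest to the prior in Kullback–Leibler divergence.
    \[ P_{\text{new}}(\cdot) = P_{\text{old}}(\cdot \mid E) \]
    \[ P_{\text{new}} = \operatorname*{arg\,min}_{Q \in \mathcal{C}} D_{\mathrm{KL}}(Q \,\|\, P_{\text{old}}), \qquad \mathcal{C} = \{\, Q : Q \text{ satisfies the learned constraint} \,\} \]
    A conditional or probabilistic constraint (for instance, Q(B | A) = 0.6) picks out a set of distributions rather than an event, which is why simple conditionalization does not directly apply and why the choice among update rules matters for cases such as the Judy Benjamin problem.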
  9. What Time-travel Teaches Us About Future-Bias.Kristie Miller - 2021 - Philosophies 6 (38):38.
    Future-biased individuals systematically prefer positively valenced events to be in the future (positive future-bias) and negatively valenced events to be in the past (negative future-bias). The most extreme form of future-bias is absolute future-bias, whereby we completely discount the value of past events when forming our preferences. Various authors have thought that we are absolutely future-biased (Sullivan (2018: 58); Parfit (1984: 173)) and that future-bias (absolute or otherwise) is at least rationally permissible (Prior (1959), Hare (2007; 2008), (...)
    4 citations
  10. Can we learn from hidden mistakes? Self-fulfilling prophecy and responsible neuroprognostic innovation.Mayli Mertens, Owen C. King, Michel J. A. M. van Putten & Marianne Boenink - 2021 - Journal of Medical Ethics 48 (11):922-928.
    A self-fulfilling prophecy in neuroprognostication occurs when a patient in coma is predicted to have a poor outcome, and life-sustaining treatment is withdrawn on the basis of that prediction, thus directly bringing about a poor outcome for that patient. In contrast to the predominant emphasis in the bioethics literature, we look beyond the moral issues raised by the possibility that an erroneous prediction might lead to the death of a patient who otherwise would have lived. Instead, we focus on the (...)
    4 citations
  11. Why Moral Agreement is Not Enough to Address Algorithmic Structural Bias.P. Benton - 2022 - Communications in Computer and Information Science 1551:323-334.
    One of the predominant debates in AI Ethics is the worry and necessity to create fair, transparent and accountable algorithms that do not perpetuate current social inequities. I offer a critical analysis of Reuben Binns’s argument in which he suggests using public reason to address the potential bias of the outcomes of machine learning algorithms. In contrast to him, I argue that ultimately what is needed is not public reason per se, but an audit of the implicit moral (...)
  12. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters.Keith Begley, Cecily Begley & Valerie Smith - 2021 - Journal of Evaluation in Clinical Practice 27 (3):497–503.
    In recent years there has been an explosion of interest in Artificial Intelligence (AI) both in health care and academic philosophy. This has been due mainly to the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have made rapid progress in many areas. However, use of this technology has brought with it philosophical issues and practical problems, in particular, epistemic and ethical. In this paper the authors, with (...)
  13. Knowledge in motion: How procedural control of knowledge usage entails selectivity and bias.Ulrich Ansorge - 2021 - Journal of Knowledge Structures and Systems 2 (1):3-28.
    The use and acquisition of knowledge appears to be influenced by what humans pay attention to. Thus, looking at attention will tell us something about the mechanisms involved in knowledge (usage). According to the present review, attention reflects selectivity in information processing and it is not necessarily also reflected in a user’s consciousness, as it is rooted in skill memory or other implicit procedural memory forms–that is, attention is rooted in the necessity of human control of mental operations and actions. (...)
  14. Revolutionizing Education with ChatGPT: Enhancing Learning Through Conversational AI.Prapasiri Klayklung, Piyawatjana Chocksathaporn, Pongsakorn Limna, Tanpat Kraiwanit & Kris Jangjarat - 2023 - Universal Journal of Educational Research 2 (3):217-225.
    The development of conversational artificial intelligence (AI) has brought about new opportunities for improving the learning experience in education. ChatGPT, a large language model trained on a vast corpus of text, has the potential to revolutionize education by enhancing learning through personalized and interactive conversations. This paper explores the benefits of integrating ChatGPT in education in Thailand. The research strategy employed in this study was qualitative, utilizing in-depth interviews with eight key informants who were selected using purposive sampling. (...)
  15. Artificial intelligence as a public service: learning from Amsterdam and Helsinki.Luciano Floridi - 2020 - Philosophy and Technology 33 (4):541–546.
    In September 2020, Helsinki and Amsterdam announced the launch of their open AI registers—the first cities in the world to offer such a service. The AI registers describe what, where, and how AI applications are being used in the two municipalities; how algorithms were assessed for potential bias or risks; and how humans use the AI services. Examining issues of security and transparency, this paper discusses the potential for implementing AI in an urban public service setting and how this (...)
  16. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    With the use of machine learning techniques and big data, algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While (...)
    28 citations
  17. The Cultural Evolution of Cultural Evolution.Jonathan Birch & Cecilia Heyes - 2021 - Philosophical Transactions of the Royal Society B: Biological Sciences 376:20200051.
    What makes fast, cumulative cultural evolution work? Where did it come from? Why is it the sole preserve of humans? We set out a self-assembly hypothesis: cultural evolution evolved culturally. We present an evolutionary account that shows this hypothesis to be coherent, plausible, and worthy of further investigation. It has the following steps: (0) in common with other animals, early hominins had significant capacity for social learning; (1) knowledge and skills learned by offspring from their parents began to spread (...)
    8 citations
  18. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over the jobs we get, the loans we're granted, the information we see online. Algorithms can and often do wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has been largely neglected. I investigate algorithmic neutrality, tackling three questions: What is algorithmic neutrality? Is it possible? And when we have it in mind, what can we learn about algorithmic bias?
    1 citation
  19. Negligent Algorithmic Discrimination.Andrés Páez - 2021 - Law and Contemporary Problems 84 (3):19-33.
    The use of machine learning algorithms has become ubiquitous in hiring decisions. Recent studies have shown that many of these algorithms generate unlawful discriminatory effects in every step of the process. The training phase of the machine learning models used in these decisions has been identified as the main source of bias. For a long time, discrimination cases have been analyzed under the banner of disparate treatment and disparate impact, but these concepts have been shown to be (...)
  20. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice (...)
    2 citations
  21. Algorithmic Microaggressions.Emma McClure & Benjamin Wald - 2022 - Feminist Philosophy Quarterly 8 (3).
    We argue that machine learning algorithms can inflict microaggressions on members of marginalized groups and that recognizing these harms as instances of microaggressions is key to effectively addressing the problem. The concept of microaggression is also illuminated by being studied in algorithmic contexts. We contribute to the microaggression literature by expanding the category of environmental microaggressions and highlighting the unique issues of moral responsibility that arise when we focus on this category. We theorize two kinds of algorithmic microaggression, stereotyping (...)
  22. Making Sense of Sensory Input.Richard Evans, José Hernández-Orallo, Johannes Welbl, Pushmeet Kohli & Marek Sergot - 2021 - Artificial Intelligence 293 (C):103438.
    This paper attempts to answer a central question in unsupervised learning: what does it mean to “make sense” of a sensory sequence? In our formalization, making sense involves constructing a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the causal theory – objects, properties, and laws – must be integrated into a coherent whole. On our account, making sense of sensory input is (...)
    2 citations
  23. Just Machines.Clinton Castro - 2022 - Public Affairs Quarterly 36 (2):163-183.
    A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups, defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups, defined by protected attributes). Central to this discussion are impossibility results, owed to (...)
    5 citations
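    A standard way to state the two criteria contrasted here (a sketch drawn from the broader algorithmic-fairness literature rather than from Castro's paper itself, with \hat{Y} the predicted label, Y the outcome, S the risk score, and A the protected attribute as assumed notation):
    \[ \text{Calibration: } \Pr(Y = 1 \mid S = s, A = a) = s \quad \text{for every score value } s \text{ and every group } a \]
    \[ \text{Classification parity (one common form, equal false-positive rates): } \Pr(\hat{Y} = 1 \mid Y = 0, A = a) = \Pr(\hat{Y} = 1 \mid Y = 0, A = a') \quad \text{for all groups } a, a' \]
    The impossibility results the abstract refers to show that, when base rates differ across groups and prediction is imperfect, calibration and error-rate parity of this kind cannot be satisfied simultaneously.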
  24. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Patrick Lin - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make (...)
  25. (1 other version) Engineering Social Concepts: Feasibility and Causal Models.Eleonore Neufeld - forthcoming - Philosophy and Phenomenological Research.
    How feasible are conceptual engineering projects of social concepts that aim for the engineered concept to be widely adopted in ordinary everyday life? Predominant frameworks on the psychology of concepts that shape work on stereotyping, bias, and machine learning have grim implications for the prospects of conceptual engineers: conceptual engineering efforts are ineffective in promoting certain social-conceptual changes. Specifically, since conceptual components that give rise to problematic social stereotypes are sensitive to statistical structures of the environment, purely conceptual (...)
    1 citation
  26. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it (...)
    3 citations
  27. (1 other version) Non Discrimination as a moral obligation in Human resources management.Geert Demuijnck - 2009 - Journal of Business Ethics 88 (S1):83-101.
    In this paper, I will argue that it is a moral obligation for companies, firstly, to accept their moral responsibility with respect to non-discrimination, and secondly, to address the issue with a full-fledged programme, including but not limited to the countering of microsocial discrimination processes through specific policies. On the basis of a broad sketch of how some discrimination mechanisms are actually influencing decisions, that is, causing intended as well as unintended bias in Human Resources Management, I will argue (...)
    12 citations
  28. Cognition and Literary Ethical Criticism.Gilbert Plumer - 2011 - In Frank Zenker (ed.), Argumentation: Cognition & Community. Proceedings of the 9th International Conference of the Ontario Society for the Study of Argumentation (OSSA), May 18--21, 2011. OSSA. pp. 1-9.
    “Ethical criticism” is an approach to literary studies that holds that reading certain carefully selected novels can make us ethically better people, e.g., by stimulating our sympathetic imagination (Nussbaum). I try to show that this nonargumentative approach cheapens the persuasive force of novels and that its inherent bias and censorship undercuts what is perhaps the principal value and defense of the novel—that reading novels can be critical to one’s learning how to think.
    4 citations
  29. What's Fair about Individual Fairness?Will Fleisher - 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society.
    One of the main lines of research in algorithmic fairness involves individual fairness (IF) methods. Individual fairness is motivated by an intuitive principle, similar treatment, which requires that similar individuals be treated similarly. IF offers a precise account of this principle using distance metrics to evaluate the similarity of individuals. Proponents of individual fairness have argued that it gives the correct definition of algorithmic fairness, and that it should therefore be preferred to other methods for determining fairness. I argue that (...)
    1 citation
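    The distance-metric account of similar treatment mentioned here is standardly given (following Dwork et al.'s 'Fairness Through Awareness') as a Lipschitz condition. A sketch, with d a task-specific similarity metric on individuals, M the (possibly randomized) decision rule, and D a metric on the resulting distributions over outcomes, all assumed notation:
    \[ D\big(M(x), M(y)\big) \le d(x, y) \quad \text{for all individuals } x, y \]
    Individuals who are close under d must receive correspondingly close (distributions over) outcomes; what the criterion delivers therefore depends on the choice of the metric d.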
  30. The Fair Chances in Algorithmic Fairness: A Response to Holm.Clinton Castro & Michele Loi - 2023 - Res Publica 29 (2):231–237.
    Holm (2022) argues that a class of algorithmic fairness measures, that he refers to as the ‘performance parity criteria’, can be understood as applications of John Broome’s Fairness Principle. We argue that the performance parity criteria cannot be read this way. This is because in the relevant context, the Fairness Principle requires the equalization of actual individuals’ individual-level chances of obtaining some good (such as an accurate prediction from a predictive system), but the performance parity criteria do not guarantee any (...)
    1 citation
  31. Investigating some ethical issues of artificial intelligence in art (طرح و بررسی برخی از مسائلِ اخلاقیِ هوش مصنوعی در هنر).Ashouri Kisomi Mohammad Ali - 2024 - Metaphysics 16 (1):93-110.
    The aim of the present research is to examine issues in the ethics of artificial intelligence in the field of art. To this end, drawing on the philosophy and ethics of artificial intelligence, ethical issues that can bear on the field of art are examined. Given the growth and expansion of the use of artificial intelligence and its entry into the domain of art, ethical discussions need to receive closer attention from researchers in art and philosophy. To achieve the aim of the research, using an analytical-descriptive method, concepts such as artificial intelligence, some of its techniques, and topics (...)
  32. Turning queries into questions: For a plurality of perspectives in the age of AI and other frameworks with limited (mind)sets.Claudia Westermann & Tanu Gupta - 2023 - Technoetic Arts 21 (1):3-13.
    The editorial introduces issue 21.1 of Technoetic Arts via a critical reflection on the artificial intelligence hype (AI hype) that emerged in 2022. Tracing the history of the critique of Large Language Models, the editorial underscores that there are substantial ethical challenges related to bias in the training data, copyright issues, as well as ecological challenges which the technology industry has consistently downplayed over the years. The editorial highlights the distinction between the current AI technology’s reliance on extensive (...)
    1 citation
  33. The emergence of “truth machines”?: Artificial intelligence approaches to lie detection.Jo Ann Oravec - 2022 - Ethics and Information Technology 24 (1):1-10.
    This article analyzes emerging artificial intelligence (AI)-enhanced lie detection systems from ethical and human resource (HR) management perspectives. I show how these AI enhancements transform lie detection, followed with analyses as to how the changes can lead to moral problems. Specifically, I examine how these applications of AI introduce human rights issues of fairness, mental privacy, and bias and outline the implications of these changes for HR management. The changes that AI is making to lie detection are altering the (...)
  34. From human resources to human rights: Impact assessments for hiring algorithms.Josephine Yam & Joshua August Skorburg - 2021 - Ethics and Information Technology 23 (4):611-623.
    Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two (...)
    6 citations
  35. Shortcuts to Artificial Intelligence.Nello Cristianini - 2021 - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable Ai. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions, that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, (...)
    2 citations
  36. Extent of Financial Literacy Among PNP Personnel: Basis for an Effective Financial Management Program.Henry Legazpi Ligson - 2023 - Get International Research Journal 1 (2):32-44.
    Variations in people’s perceptions of investment risk and financial literacy have been linked in studies. More specifically, Diacon (2016) discovered significant differences between less financially savvy non-experts and financial professionals. Lay people therefore have a larger propensity for association bias (i.e., they give suppliers and salesmen a higher level of credibility than laypeople) and are often less risk-tolerant than financial professionals. The method of sampling that the researcher chose is known as purposeful sampling. According to Easton & McColl, it (...)
  37. Linking ethical leadership and ethical climate to employees’ ethical behavior: the moderating role of person–organization fit.Hussam Al Halbusi, Kent A. Williams, Thurasamy Ramayah, Luigi Aldieri & Concetto Paolo Vinci - 2020 - Personnel Review 50 (1):159-185.
    Purpose – With the growing demand for ethical standards in the prevailing business environment, ethical leadership has come under increasing focus. Based on social exchange theory and social learning theory, this study scrutinized the impact of ethical leadership on the presentation of ethical conduct by employees through the ethical climate. Notably, this study scrutinized the moderating function of person–organization fit (P-O fit) in relation to ethical climate and the ethical conduct of employees. Design/methodology/approach – (...)
    1 citation
  38. Giving Moral Competence High Priority in Medical Education. New MCT-based Research Findings from the Polish Context.Ewa Nowak, Anna-Maria Barciszewska, Kay Hemmerling, Georg Lind & Sunčana Kukolja Taradi - 2021 - Ethics in Progress 12:104-133.
    Nowadays, healthcare and medical education is qualified by test scores and competitiveness. This article considers its quality in terms of improving the moral competence of future healthcare providers. Objectives. Examining the relevance of moral competence in medico-clinical decision-making despite the paradigm shift and discussing the up-to-date findings on healthcare students. Design and method. N=115 participants were surveyed with a standard Moral Competence Test to examine how their moral competence development was affected by the learning environment and further important factors. (...)
  39. How AI can AID bioethics.Walter Sinnott-Armstrong & Joshua August Skorburg - forthcoming - Journal of Practical Ethics.
    This paper explores some ways in which artificial intelligence (AI) could be used to improve human moral judgments in bioethics by avoiding some of the most common sources of error in moral judgment, including ignorance, confusion, and bias. It surveys three existing proposals for building human morality into AI: Top-down, bottom-up, and hybrid approaches. Then it proposes a multi-step, hybrid method, using the example of kidney allocations for transplants as a test case. The paper concludes with brief remarks about (...)
    1 citation
  40. Determination, uniformity, and relevance: normative criteria for generalization and reasoning by analogy.Todd R. Davies - 1988 - In T. Davies (ed.), Analogical Reasoning. Kluwer Academic Publishers. pp. 227-250.
    This paper defines the form of prior knowledge that is required for sound inferences by analogy and single-instance generalizations, in both logical and probabilistic reasoning. In the logical case, the first order determination rule defined in Davies (1985) is shown to solve both the justification and non-redundancy problems for analogical inference. The statistical analogue of determination that is put forward is termed 'uniformity'. Based on the semantics of determination and uniformity, a third notion of "relevance" is defined, both logically and (...)
    11 citations
  41. Future bias in action: does the past matter more when you can affect it?Andrew J. Latham, Kristie Miller, James Norton & Christian Tarsney - 2020 - Synthese 198 (12):11327-11349.
    Philosophers have long noted, and empirical psychology has lately confirmed, that most people are “biased toward the future”: we prefer to have positive experiences in the future, and negative experiences in the past. At least two explanations have been offered for this bias: belief in temporal passage and the practical irrelevance of the past resulting from our inability to influence past events. We set out to test the latter explanation. In a large survey, we find that participants exhibit significantly (...)
    10 citations
  42. Learning to love the reviewer.Quan-Hoang Vuong - 2017 - European Science Editing 43 (4):83-83.
    Viewpoint, European Science Editing 43(4), November 2017, p. 83. Quan-Hoang Vuong, Western University Hanoi, Centre for Interdisciplinary Social Research, Hanoi, Vietnam.
    1 citation
  43. (1 other version) Implicit Bias, Character and Control.Jules Holroyd & Daniel Kelly - 2016 - In Alberto Masala & Jonathan Mark Webber (eds.), From Personality to Virtue: Essays on the Philosophy of Character. Oxford: Oxford University Press UK.
    Our focus here is on whether, when influenced by implicit biases, those behavioural dispositions should be understood as being a part of that person’s character: whether they are part of the agent that can be morally evaluated. We frame this issue in terms of control. If a state, process, or behaviour is not something that the agent can, in the relevant sense, control, then it is not something that counts as part of her character. A number of theorists have argued (...)
    4 citations
  44. Prestige Bias: An Obstacle to a Just Academic Philosophy.Helen De Cruz - 2018 - Ergo: An Open Access Journal of Philosophy 5.
    This paper examines the role of prestige bias in shaping academic philosophy, with a focus on its demographics. I argue that prestige bias exacerbates the structural underrepresentation of minorities in philosophy. It works as a filter against (among others) philosophers of color, women philosophers, and philosophers of low socio-economic status. As a consequence of prestige bias our judgments of philosophical quality become distorted. I outline ways in which prestige bias in philosophy can be mitigated.
    21 citations
  45. Implicit Bias as Mental Imagery.Bence Nanay - 2021 - Journal of the American Philosophical Association 7 (3):329-347.
    What is the mental representation that is responsible for implicit bias? What is this representation that mediates between the trigger and the biased behavior? My claim is that this representation is neither a propositional attitude nor a mere association. Rather, it is mental imagery: perceptual processing that is not directly triggered by sensory input. I argue that this view captures the advantages of the two standard accounts without inheriting their disadvantages. Further, this view also explains why manipulating mental imagery (...)
    12 citations
  46. Implicit Bias, Moods, and Moral Responsibility.Alex Madva - 2017 - Pacific Philosophical Quarterly 99 (S1):53-78.
    Are individuals morally responsible for their implicit biases? One reason to think not is that implicit biases are often advertised as unconscious, ‘introspectively inaccessible’ attitudes. However, recent empirical evidence consistently suggests that individuals are aware of their implicit biases, although often in partial and inarticulate ways. Here I explore the implications of this evidence of partial awareness for individuals’ moral responsibility. First, I argue that responsibility comes in degrees. Second, I argue that individuals’ partial awareness of their implicit biases makes (...)
    10 citations
  47. Bias and Perception.Susanna Siegel - 2020 - In Erin Beeghly & Alex Madva (eds.), An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind. New York, NY, USA: Routledge. pp. 99-115.
    Chapter on perception and bias, including implicit bias.
    7 citations
  48. Bias and Knowledge: Two Metaphors.Erin Beeghly - 2020 - In Erin Beeghly & Alex Madva (eds.), An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind. New York, NY, USA: Routledge. pp. 77-98.
    If you care about securing knowledge, what is wrong with being biased? Often it is said that we are less accurate and reliable knowers due to implicit biases. Likewise, many people think that biases reflect inaccurate claims about groups, are based on limited experience, and are insensitive to evidence. Chapter 3 investigates objections such as these with the help of two popular metaphors: bias as fog and bias as shortcut. Guiding readers through these metaphors, I argue that they (...)
    6 citations
  49. Bias towards the future.Kristie Miller, Preston Greene, Andrew J. Latham, James Norton, Christian Tarsney & Hannah Tierney - 2022 - Philosophy Compass 17 (8):e12859.
    All else being equal, most of us typically prefer to have positive experiences in the future rather than the past and negative experiences in the past rather than the future. Recent empirical evidence tends not only to support the idea that people have these preferences, but further, that people tend to prefer more painful experiences in their past rather than fewer in their future (and mutatis mutandis for pleasant experiences). Are such preferences rationally permissible, or are they, as time-neutralists contend, (...)
    2 citations
  50. Implicit bias, ideological bias, and epistemic risks in philosophy.Uwe Peters - 2018 - Mind and Language 34 (3):393-419.
    It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies (...)
    12 citations
Results 1–50 of 951