References
  • (1 other version) Applying a Principle of Explicability to AI Research in Africa: Should We Do It? Mary Carman & Benjamin Rosman - 2023 - In Aribiah David Attoe, Segun Samuel Temitope, Victor Nweke, John Umezurike & Jonathan Okeke Chimakonam (eds.), Conversations on African Philosophy of Mind, Consciousness and Artificial Intelligence. Springer Verlag. pp. 183-201.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  • (1 other version) Applying a principle of explicability to AI research in Africa: should we do it? Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Generalization Bias in Science. Uwe Peters, Alexander Krauss & Oliver Braganza - 2022 - Cognitive Science 46 (9):e13188.
    Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. We challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized (...)
  • Explaining Machine Learning Decisions. John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a (...)
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • Rethinking Explainable Machines: The GDPR's 'Right to Explanation' Debate and the Rise of Algorithmic Audits in Enterprise. Bryan Casey, Ashkon Farhangi & Roland Vogl - forthcoming - Berkeley Technology Law Journal.
    The public debate surrounding the General Data Protection Regulation's “right to explanation” has sparked a global conversation of profound social and (...)
  • Culture and systems of thought: Holistic versus analytic cognition. Richard E. Nisbett, Kaiping Peng, Incheol Choi & Ara Norenzayan - 2001 - Psychological Review 108 (2):291-310.
    The authors find East Asians to be holistic, attending to the entire field and assigning causality to it, making relatively little use of categories and formal logic, and relying on "dialectical" reasoning, whereas Westerners are more analytic, paying attention primarily to the object and the categories to which it belongs and using rules, including formal logic, to understand its behavior. The 2 types of cognitive processes are embedded in different naive metaphysical systems and tacit epistemologies. The authors speculate that the (...)
  • Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • The weirdest people in the world? Joseph Henrich, Steven J. Heine & Ara Norenzayan - 2010 - Behavioral and Brain Sciences 33 (2-3):61-83.
    Behavioral scientists routinely publish broad claims about human psychology and behavior in the world's top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is (...)
  • The impact of culture on mindreading. Jane Suilin Lavelle - 2019 - Synthese 198 (7):6351-6374.
    The role of culture in shaping folk psychology and mindreading has been neglected in the philosophical literature. This paper shows that there are significant cultural differences in how psychological states are understood and used, drawing on Spaulding’s recent distinction between the ‘goals’ and ‘methods’ of mindreading to argue that the relations between these methods vary across cultures, and arguing that differences in folk psychology cannot be dismissed as irrelevant to the cognitive architecture that facilitates our understanding of psychological states. (...)
  • Cultural preferences for formal versus intuitive reasoning. Ara Norenzayan, Edward E. Smith, Beom Jun Kim & Richard E. Nisbett - 2002 - Cognitive Science 26 (5):653-684.
    The authors examined cultural preferences for formal versus intuitive reasoning among East Asian (Chinese and Korean), Asian American, and European American university students. We investigated categorization (Studies 1 and 2), conceptual structure (Study 3), and deductive reasoning (Studies 3 and 4). In each study a cognitive conflict was activated between formal and intuitive strategies of reasoning. European Americans, more than Chinese and Koreans, set aside intuition in favor of formal reasoning. Conversely, Chinese and Koreans relied on intuitive strategies more than (...)