Results for 'Atoosa Afshari'

8 found
  1. In Conversation with Artificial Intelligence: Aligning Language Models with Human Values. Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
    9 citations
  2. A New Role for Mathematics in Empirical Sciences. Atoosa Kasirzadeh - 2021 - Philosophy of Science 88 (4):686-706.
    Mathematics is often taken to play one of two roles in the empirical sciences: either it represents empirical phenomena or it explains these phenomena by imposing constraints on them. This article identifies a third and distinct role that has not been fully appreciated in the literature on the applicability of mathematics and may be pervasive in scientific practice. I call this the “bridging” role of mathematics, according to which mathematics acts as a connecting scheme in our explanatory reasoning about why and (...)
    2 citations
  3. Counter Countermathematical Explanations. Atoosa Kasirzadeh - 2021 - Erkenntnis 88 (6):2537-2560.
    Recently, there have been several attempts to generalize the counterfactual theory of causal explanations to mathematical explanations. The central idea of these attempts is to use conditionals whose antecedents express a mathematical impossibility. Such countermathematical conditionals are plugged into the explanatory scheme of the counterfactual theory and—so is the hope—capture mathematical explanations. Here, I dash the hope that countermathematical explanations simply parallel counterfactual explanations. In particular, I show that explanations based on countermathematicals are susceptible to three problems counterfactual explanations do (...)
    3 citations
  4. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can (...)
    3 citations
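    A minimal sketch, in Python, of the counterfactual "flip test" that work of this kind examines: change only a protected attribute and check whether the model's decision changes. The toy model, feature names, and numbers are assumptions made for illustration, not anything from the paper; the paper's worry is precisely that treating a social category such as gender as an independently "flippable" input misrepresents its social ontology.

      def toy_loan_model(applicant):
          """A deliberately biased toy classifier (True = approve)."""
          score = applicant["income"] / 1000 + applicant["credit_years"]
          if applicant["gender"] == "female":  # encoded bias, for illustration only
              score -= 5
          return score >= 60

      def decision_flips(applicant, attribute, new_value):
          """Does changing `attribute` alone change the decision?"""
          counterfactual = dict(applicant, **{attribute: new_value})
          return toy_loan_model(applicant) != toy_loan_model(counterfactual)

      applicant = {"income": 56000, "credit_years": 6, "gender": "female"}
      print(decision_flips(applicant, "gender", "male"))  # True: the decision tracks gender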
  5. Intelligent Capacities in Artificial Systems. Atoosa Kasirzadeh & Victoria McGeer - 2023 - In William A. Bauer & Anna Marmodoro (eds.), Artificial Dispositions: Investigating Ethical and Metaphysical Issues. New York: Bloomsbury.
    This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on rule-based (...)
  6. The Ethical Gravity Thesis: Marrian Levels and the Persistence of Bias in Automated Decision-making Systems. Atoosa Kasirzadeh & Colin Klein - 2021 - In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES '21).
    Computers are used to make decisions in an increasing number of domains. There is widespread agreement that some of these uses are ethically problematic. Far less clear is where ethical problems arise, and what might be done about them. This paper expands and defends the Ethical Gravity Thesis: ethical problems that arise at higher levels of analysis of an automated decision-making system are inherited by lower levels of analysis. Particular instantiations of systems can add new problems, but not ameliorate more (...)
  7. Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy. Atoosa Kasirzadeh - 2022 - In AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Data-driven predictive algorithms are widely used to automate and guide high-stake decision making such as bail and parole recommendation, medical resource distribution, and mortgage allocation. Nevertheless, harmful outcomes biased against vulnerable groups have been reported. The growing research field known as 'algorithmic fairness' aims to mitigate these harmful biases. Its primary methodology consists in proposing mathematical metrics to address the social harms resulting from an algorithm's biased outputs. The metrics are typically motivated by -- or substantively rooted in -- ideals (...)
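    For concreteness, the kind of mathematical fairness metric the abstract refers to can be sketched in a few lines of Python; demographic parity difference, the gap in favourable-decision rates between two groups, is a common example. The groups and decisions below are invented for illustration; the paper's point is that such metrics, taken on their own, do not capture the structural injustices behind the disparities they measure.

      def positive_rate(decisions):
          return sum(decisions) / len(decisions)

      def demographic_parity_difference(decisions_a, decisions_b):
          """Absolute gap in favourable-decision rates between two groups."""
          return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

      # 1 = favourable decision (e.g. loan approved), 0 = unfavourable
      group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
      group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% approved
      print(demographic_parity_difference(group_a, group_b))  # 0.375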
  8. Explanation Hacking: The Perils of Algorithmic Recourse. E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of Science for Machine Learning: Core Issues and New Perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
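    A toy sketch of the kind of recourse explanation the paper criticizes: search for feasible feature changes that would flip a classifier's decision. The model, features, and step sizes are assumptions made purely for illustration; note that several distinct recourses can flip the same decision, which is part of what makes selective presentation, and hence "explanation hacking", possible.

      def toy_model(x):
          return x["income"] / 1000 + 2 * x["credit_years"] >= 70

      def feasible_recourses(x, actions):
          """Try one feature at a time; return the changes that flip the decision."""
          options = []
          for feature, step in actions.items():
              candidate = dict(x)
              while not toy_model(candidate) and candidate[feature] < x[feature] + 20 * step:
                  candidate[feature] += step
              if toy_model(candidate):
                  options.append((feature, candidate[feature] - x[feature]))
          return options  # several valid recourses; which one gets presented to the user?

      applicant = {"income": 50000, "credit_years": 5}
      print(feasible_recourses(applicant, {"income": 1000, "credit_years": 1}))
      # [('income', 10000), ('credit_years', 5)]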