
Citations of:

Causal feature learning for utility-maximizing agents. In International Conference on Probabilistic Graphical Models, pp. 257–268 (2020).

  • Tell me your (cognitive) budget, and I'll tell you what you value. David Kinney & Tania Lombrozo - 2024 - Cognition 247 (C):105782.
    Consider the following two (hypothetical) generic causal claims: “Living in a neighborhood with many families with children increases purchases of bicycles” and “living in an affluent neighborhood with many families with children increases purchases of bicycles.” These claims not only differ in what they suggest about how bicycle ownership is distributed across different neighborhoods (i.e., “the data”), but also have the potential to communicate something about the speakers’ values: namely, the prominence they accord to affluence in representing and making decisions (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence, a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we establish the central role of (...)