  1. What can metacognition teach us about the evolution of communication? Joëlle Proust - 2023 - Evolutionary Linguistic Theory 5 (1):1-10.
    Procedural metacognition is the set of affect-based mechanisms allowing agents to regulate cognitive actions like perceptual discrimination, memory retrieval or problem solving. This article proposes that procedural metacognition has had a major role in the evolution of communication. A plausible hypothesis is that, under pressure for maximizing signalling efficiency, the metacognitive abilities used by nonhumans to regulate their perception and their memory have been re-used to regulate their communication. On this view, detecting one’s production errors in signalling, or solving species-specific (...)
  2. Affordances from a control viewpoint. Joëlle Proust - forthcoming - Philosophical Psychology.
    Perceiving an armchair prepares us to sit. Reading the first line of a text prepares us to read it. This article proposes that the affordance construct used to explain reactive potentiation of behavior applies similarly to reactive potentiation of cognitive actions. It further argues that, in both cases, affordance-sensings apply not only to selective (dis)engagement, but also to the revision and termination of actions. In the first section, characteristics of environmental affordance-sensings such as directness, stability, action potentiation, valence, (...)
  3. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of AI and thus (...)