Nineteen Prescott Fire Department Granite Mountain Hot Shot (GMHS) wildland firefighters and supervisors (WFF) perished on the June 2013 Yarnell Hill Fire (YHF) in Arizona. The firefighters left their Safety Zone during forecast outflow winds, triggering explosive fire behavior in drought-stressed chaparral. Why would an experienced WFF Crew leave ‘good black’ and travel downslope through a brush-filled chimney, contrary to their training and experience? A Serious Accident Investigation Team (SAIT) found “… no indication of negligence, reckless actions, or violations of policy or protocol.” Despite this, many WFF professionals deemed the catastrophe “… the final, fatal link in a long chain of bad decisions with good outcomes.” This paper is a theoretical and realistic examination of plausible, faulty human decisions with prior good outcomes; of the internal and external impacts influencing the GMHS; and of two explanations for this catastrophe, Individual Blame Logic and Organizational Function Logic, together with proposed preventive mitigations.
Nineteen Prescott Fire Department Granite Mountain Hot Shot (GMHS) wildland firefighters (WF) perished in the June 2013 Yarnell Hill Fire in Arizona, an inexplicable wildland fire disaster. In complex wildland fires, sudden, dynamic changes in human factors and fire conditions can occur, so mistakes can unfortunately be fatal. Individual and organizational faults regarding the predictable yet puzzling human failures that will result in future WF deaths are addressed. The GMHS became individually, then collectively, fixated on abandoning their Safety Zone to reengage, committing themselves at the worst possible time to relocating to another Safety Zone - a form of collective tunnel vision. Our goal is to provoke meaningful discussion toward improved wildland firefighter safety, with practical solutions derived from long-established wildland firefighter expertise and performance in a fatality-prone profession. Wildfire fatalities are unavoidable; hence these proposals, applied to ongoing training, can contribute significantly to other well-thought-out and validated measures to reduce them.
Recent research on the metaethical beliefs of ordinary people appears to show that they are metaethical pluralists who adopt different metaethical standards for different moral judgments. Yet the methods used to evaluate folk metaethical belief rely on the assumption that participants interpret what they are asked in metaethical terms. We argue that most participants do not interpret questions designed to elicit metaethical beliefs in metaethical terms, or at least not in the way researchers intend. As a result, existing methods are not reliable measures of metaethical belief. We end by discussing what our account implies for the philosophical and practical significance of research on the psychology of metaethics.
Modus ponens is the argument from premises of the form “If A, then B” and “A” to the conclusion “B.” Nearly all participants agree that the modus ponens conclusion logically follows when the argument appears in this Basic form. However, adding a further premise can lower participants’ rate of agreement—an effect called suppression. We propose a theory of suppression that draws on contemporary ideas about conditional sentences in linguistics and philosophy. Semantically, the theory assumes that people interpret an indicative conditional as a context-sensitive strict conditional: true if and only if its consequent is true in each of a contextually determined set of situations in which its antecedent is true. Pragmatically, the theory claims that context changes in response to new assertions, including new conditional premises. Thus, the conclusion of a modus ponens argument may no longer be accepted in the changed context. Psychologically, the theory describes people as capable of reasoning about broad classes of possible situations, ordered by typicality, without having to reason about individual possible worlds. The theory accounts for the main suppression phenomena, and it generates some novel predictions that new experiments confirm.
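To make the semantic assumption concrete, one way to state the context-sensitive strict conditional in symbols (the notation is ours, not the authors’) is the following, where C is the contextually determined set of situations:

\[
\text{``If } A \text{, then } B\text{'' is true relative to } C
\iff
\forall s \in C \; ( s \models A \Rightarrow s \models B ).
\]

On this sketch, suppression corresponds to a shift from C to a new set C′ after a further premise is asserted: a situation in C′ may verify A without verifying B, so the universal claim can fail in the changed context even though it held relative to C.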
Purpose – The purpose of this paper is to ask whether a first-order-cybernetics concept, Shannon’s Information Theory, actually allows a far-reaching mathematics of perception allegedly derived from it, Norwich et al.’s “Entropy Theory of Perception”. Design/methodology/approach – All of The Entropy Theory, 35 years of publications, was scrutinized for its characterization of what underlies Shannon Information Theory: Shannon’s “general communication system”. There, “events” are passed by a “source” to a “transmitter”, thence through a “noisy channel” to a “receiver”, which passes “outcomes” (received events) to a “destination”. Findings – In the entropy theory, “events” were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. “Outcomes” often went unnamed; sometimes, the stimulus, or the interaction with it, or the resulting sensation, were “outcomes”. A “source” was often implied to be a “transmitter”, which frequently was a primary afferent neuron; elsewhere, the stimulus was the “transmitter” and perhaps also the “source”. “Channel” was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. “Receiver” was usually the sensory receptor, but could be an afferent. “Destination” went unmentioned. In sum, the entropy theory’s idea of Shannon’s “general communication system” was entirely ambiguous. Research limitations/implications – The ambiguities indicate that, contrary to claim, the entropy theory cannot be an “information theoretical description of the process of perception”. Originality/value – Scrutiny of the entropy theory’s use of information theory was overdue and reveals incompatibilities that force a reconsideration of information theory’s possible role in perception models. A second-order-cybernetics approach is suggested.
Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted,” a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues. Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon’s “general communication system.” Neuroscientists’ interpretation of that model is scrutinized here. Findings – In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted.” Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness,” is obviated if consciousness proves to be “emergent.” Research limitations/implications – Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal. Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability”.
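For orientation, the quantity at issue is Shannon’s mutual information between the sent symbols X and the received symbols Y, estimated from their occurrence probabilities. This is the textbook definition, not a reconstruction of any particular neuroscience computation:

\[
T(X;Y) \;=\; \sum_{x}\sum_{y} p(x,y)\,\log_{2}\frac{p(x,y)}{p(x)\,p(y)} \;=\; H(X) + H(Y) - H(X,Y),
\]

where H denotes the Shannon entropy of the corresponding distribution. Estimating it presupposes that both the “sent” and the “received” symbol sets are well defined, which is the presupposition the paper challenges for neurons.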
Purpose – The purpose of this paper is to examine the popular “information transmitted” interpretation of absolute judgments, and to provide an alternative interpretation if one is needed. Design/methodology/approach – The psychologists Garner and Hake and their successors used Shannon’s Information Theory to quantify information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment, and its information theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on “information transmitted” are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change with increase in the number of judged stimuli. Findings – Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, “channel capacity” is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but which clearly imply memory dependence. Research limitations/implications – Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making redundant the informational analysis of absolute judgments. Originality/value – This paper presents a long-overdue comprehensive critical review of the established interpretation of absolute judgments in terms of “information transmitted”. An inevitable conclusion is reached: that published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.
Purpose – A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by Harvard psychologists Garner and Hake for “absolute identification” experiments. There, human subjects “categorize” sensory stimuli, affording “information transmitted” in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the “why” is answered. The paper aims to discuss these issues. Design/methodology/approach – A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by “symbol sent” (event) and “symbol received” (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and is minutely examined here. Findings – The Garner-Hake confusion-matrix columns represent “stimulus categories”, ranges of some physical stimulus attribute (usually intensity), and its rows represent “response categories” of the subject’s identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that “outcomes” and “events” differ in kind (unlike Shannon’s). Obtaining a true “information transmitted” therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation to intensity. But those relations are actually unknown, perhaps unknowable. Originality/value – The author achieves an important understanding: why “absolute identification” experiments do not illuminate sensory processes.
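As a minimal illustration of the computation under scrutiny, the sketch below produces the nominal “information transmitted” from a Garner-Hake-style confusion matrix of counts, with columns as stimulus categories and rows as response categories. The code and the example counts are purely illustrative, assuming the standard plug-in estimate rather than Garner and Hake’s own procedure; as the paper argues, the number it returns is not a true “information transmitted” unless the response categories can be related back to the evoking stimuli.

import numpy as np

def information_transmitted(confusion):
    # Nominal Shannon "information transmitted" (bits) from a matrix of counts:
    # columns = stimulus categories, rows = response categories.
    joint = np.array(confusion, dtype=float)
    joint /= joint.sum()                      # joint probabilities p(stimulus, response)
    p_stimulus = joint.sum(axis=0)            # column marginals
    p_response = joint.sum(axis=1)            # row marginals

    def entropy(p):
        p = p[p > 0]                          # skip empty cells
        return -(p * np.log2(p)).sum()

    # T = H(stimulus) + H(response) - H(stimulus, response)
    return entropy(p_stimulus) + entropy(p_response) - entropy(joint.ravel())

# Hypothetical data: 3 stimulus categories x 3 response categories (counts made up).
counts = [[18, 3, 0],
          [2, 14, 4],
          [0, 3, 16]]
print(round(information_transmitted(counts), 3))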
Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims to first understand how it happened, and hence why it persists. Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized. Findings – Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective. Research limitations/implications – Anthropomorphization underlies the widespread application to neurons of Information Theory and Signal Detection Theory, making both approaches incorrect. Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more-fruitful approaches. Originality/value – A long-overdue scrutiny of sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.
Introduction & Objectives: Norwich’s Entropy Theory of Perception (1975 [1]–present) stands alone. It explains many firing-rate behaviors and psychophysical laws from bare theory. To do so, it demands a unique sort of interaction between receptor and brain, one that Norwich never substantiated. Can it now be confirmed, given the accumulation of empirical sensory neuroscience? Background: Norwich conjoined sensation and a mathematical model of communication, Shannon’s Information Theory, as follows: “In the entropic view of sensation, magnitude of sensation is regarded as a measure of the entropy or uncertainty of the stimulus signal” [2]. “To be uncertain about the outcome of an event, one must first be aware of a set of alternative outcomes” [3]. “The entropy-establishing process begins with the generation of a [internal] sensory signal by the stimulus generator. This is followed by receipt of the [external] stimulus by the sensory receptor, transmission of action potentials by the sensory neurons, and finally recapture of the [response to the internal] signal by the generator” [4]. The latter “recapture” differentiates external from internal stimuli. The hypothetical “stimulus generators” are internal emitters that generate photons in vision, audible sounds in audition (to Norwich, the spontaneous otoacoustic emissions [SOAEs]), “temperatures in excess of local skin temperature” in skin temperature sensation [4], etc. Method (1): Several decades of the empirical sensory physiology literature were scrutinized for internal “stimulus generators”. Results (1): Spontaneous photopigment isomerization (“dark light”) does not involve visible light. SOAEs are electromechanical basilar-membrane artefacts that rarely produce audible tones. The skin’s temperature sensors do not raise skin temperature, etc. Method (2): The putative action of the brain-and-sensory-receptor loop was carefully reexamined. Results (2): The sensory receptor allegedly “perceives”, experiences “awareness”, possesses “memory”, and has a “mind”. But those traits describe the whole human. The receptor, thus anthropomorphized, must therefore contain its own perceptual loop, containing a receptor, containing a perceptual loop, etc. Summary & Conclusions: The Entropy Theory demands sensory awareness of alternatives, through an imagined brain-and-sensory-receptor loop containing internal “stimulus generators”. But (1) no internal “stimulus generators” seem to exist and (2) the loop would be the outermost of an infinite nesting of identical loops.
Shannon’s information theory has been a popular component of first-order cybernetics. It quantifies information transmitted in terms of the number of times a sent symbol is received as itself, or as another possible symbol. Sent symbols were events and received symbols were outcomes. Garner and Hake reinterpreted Shannon, describing events and outcomes as categories of a stimulus attribute, so as to quantify the information transmitted in the psychologist’s category (or absolute judgment) experiment. There, categories are represented by specific stimuli, and the human subject must assign those stimuli, singly and in random order, to the categories that they represent. Hundreds of computations ensued of information transmitted and its alleged asymptote, the sensory channel capacity. The present paper critically re-examines those estimates. It also reviews estimates of memory capacity from memory experiments. It concludes that absolute judgment is memory-limited and that channel capacities are actually memory capacities. In particular, there are factors that affect absolute judgment that are not explainable within Shannon’s theory, factors such as feedback, practice, motivation, and stimulus range, as well as the anchor effect, sequential dependences, the rise in information transmitted with the increase in number of stimulus dimensions, and the phenomena of masking and stimulus duration dependence. It is recommended that absolute judgments be abandoned, because there are already many direct estimates of memory capacity.
Cyclist Lance Armstrong cheated his way to seven Tour de France titles. Such cheating is wrong because it harms society. To explain how that harm affects all of us, I use Aristotle's ideas of virtue ethics to argue that Armstrong, despite his charitable work, is not a virtuous person. Virtue is to some extent determined by society, so we need to be clear that Armstrong is not a person to emulate. A society which does not clearly disapprove of vice is less than it might otherwise be, because a good society is one that encourages virtue in its citizens.
Information flow in a system is a core cybernetics concept. It has been used frequently in Sensory Psychology since 1951. There, Shannon Information Theory was used to calculate "information transmitted" in "absolute identification" experiments involving human subjects. Originally, in Shannon's "system", any symbol received ("outcome") is among the symbols sent ("events"). Not all symbols are received as transmitted, hence an indirect noise measure is calculated, "information transmitted", which requires knowing the confusion matrix, its columns labeled by "event" and its rows labeled by "outcome". Each matrix entry is dependent upon the frequency with which a particular outcome corresponds to a particular event. However, for the sensory psychologist, stimulus intensities are "events"; the experimenter partitions the intensity continuum into ranges called "stimulus categories" and "response categories", such that each confusion-matrix entry represents the frequency with which a stimulus from a stimulus category falls within a particular response category. Of course, a stimulus evokes a sensation, and the subject's immediate memory of it is compared to the memories of sensations learned during practice, to make a categorization. Categorizing thus introduces "false noise", which is only removed if categorizations can be converted back to their hypothetical evoking stimuli. But sensations and categorizations are both statistically distributed, and the stimulus that corresponds to a given mean categorization cannot be known from only the latter; the relation of intensity to mean sensation, and of mean sensation to mean categorization, are needed. Neither, however, is presently knowable. This is a quandary, which arose because sensory psychologists ignored a ubiquitous component of Shannon's "system", the uninvolved observer, who calculates "information transmitted". Human sensory systems, however, are within de facto observers, making "false noise" inevitable.
In Cybernetics (1961 Edition), Professor Norbert Wiener noted that “The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist”. Sociology aside, the neuroscientists and the psychologists inferred “information transmitted” using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists’ approach in depth, and found it wrong. The neuroscientists’ approach is highly related, but remains unexamined. Neuroscientists quantified “the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters”. Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed “information transmitted” in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon’s “confusion matrix”, which quantifies the fidelity of a “general communication system”. Shannon’s matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by “stimulus category” and the rows by “spike-count category”. The resulting “information transmitted” is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
This paper reveals errors within Norwich et al.’s Entropy Theory of Perception, errors that have broad implications for our understanding of perception. What Norwich and coauthors dubbed their “informational theory of neural coding” is based on cybernetics, that is, control and communication in man and machine. The Entropy Theory uses information theory to interpret human performance in absolute judgments. There, the continuum of the intensity of a sensory stimulus is cut into categories and the subject is shown exemplar stimuli of each category. The subject must then identify individual exemplars by category. The identifications are recorded in the Garner-Hake version of the Shannon “confusion matrix”. The matrix yields “H”, the entropy (degree of uncertainty) about what stimulus was presented. Hypothetically, uncertainty drops as a stimulus lengthens, i.e. a plot of H vs. stimulus duration should fall monotonically. Such “adaptation” is known for both sensation and firing rate. Hence, because “the physiological adaptation curve has the same general shape as the psychophysical adaptation curve”, Norwich et al. assumed that both have the same time course; sensation and firing rate were thus both declared proportional to H. However, a closer look reveals insurmountable contradictions. First, the peripheral neuron hypothetically cannot fire in response to a stimulus of a given intensity until after somehow computing H from its responses to stimuli of various intensities. Thus no sensation occurs until firing rate adapts, i.e. attains its spontaneous rate. But hypothetically, once adaptation is complete, certainty is reached and perception ends. Altogether, then, perception cannot occur until perception is over. Secondly, sensations, firing rates, and H’s are empirically not synchronous, contrary to assumption. In sum, the core concept of the cybernetics-based Entropy Theory of Perception, that is, that uncertainty reduction is the basis for perception, is irrational.
Purpose – Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own, to coordinate its capabilities – a brain that necessarily contains a Homunculus and so on indefinitely. Such infinity is impossible – and in well-cited papers, Attneave and later Dennett claim to eliminate it. How do their approaches differ and do they (in fact) obviate the Homunculi? Design/methodology/approach – The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively “decision-making” neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively “stupider”, limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become “stupider” – but brain-wards, where greater sophistication might have been expected. Findings – Attneave’s argument is Reductionist and it simply assumes-away the Homuncular infinity. Dennett’s scheme, which evidently derives from Attneave’s, ultimately involves the same mistakes. Attneave and Dennett fail, because they attempt to reduce intentionality to non-intentionality. Research limitations/implications – Homunculus has been successively recognized over the centuries by philosophers, psychologists and (some) neuroscientists as a crucial conundrum of cognitive science. It still is. Practical implications – Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them. Originality/value – Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, “Emergence”, is discussed as a means of rendering Homunculi irrelevant.
Purpose – This study aims to examine the observer’s role in “infant psychophysics”. Infant psychophysics was developed because the diagnosis of perceptual deficits should be done as early in a patient’s life as possible, to provide efficacious treatment and thereby reduce potential long-term costs. Infants, however, cannot report their perceptions. Hence, the intensity of a stimulus at which the infant can detect it, the “threshold”, must be inferred from the infant’s behavior, as judged by observers (watchers). But whose abilities are actually being inferred? The answer affects all behavior-based conclusions about infants’ perceptions, including the well-proselytized notion that auditory stimulus-detection thresholds improve rapidly during infancy. Design/methodology/approach – In total, 55 years of infant psychophysics is scrutinized, starting with seminal studies in infant vision, followed by the studies that they inspired in infant hearing. Findings – The inferred stimulus-detection thresholds are those of the infant-plus-watcher and, more broadly, the entire laboratory. The thresholds are therefore tenuous, because infants’ actions may differ with stimulus intensity; expressiveness may differ between infants; different watchers may judge infants differently; etc. Particularly, the watcher’s ability to “read” the infant may improve with the infant’s age, confounding any interpretation of perceptual maturation. Further, the infant’s gaze duration, an assumed cue to stimulus detection, may lengthen or shorten nonlinearly with infant age. Research limitations/implications – Infant psychophysics investigators have neglected the role of the observer, resulting in an accumulation of data that requires substantial re-interpretation. Altogether, infant psychophysics has proven far too resilient for its own good. Originality/value – Infant psychophysics is examined for the first time through second-order cybernetics. The approach reveals serious unresolved issues.
Purpose – This paper aims to extend the companion paper on “infant psychophysics”, which concentrated on the role of in-lab observers (watchers). Infants cannot report their own perceptions, so for five decades their detection thresholds for sensory stimuli were inferred from their stimulus-evoked behavior, judged by watchers. The inferred thresholds were revealed to inevitably be those of the watcher–infant duo and, more broadly, the entire Laboratory. Such thresholds are unlikely to represent the finest stimuli that the infant can detect. What, then, do they represent? Design/methodology/approach – Infants’ inferred stimulus-detection thresholds are hypothesized to be attentional thresholds, representing more-salient stimuli that overcome distraction. Findings – Empirical psychometric functions, which show “detection” performance versus stimulus intensity, have shallower slopes for infants than for adults. This (and other evidence) substantiates the attentional hypothesis. Research limitations/implications – An observer can only infer the mechanisms underlying an infant’s perceptions, not know them; infants’ minds are “Black Boxes”. Nonetheless, infants’ physiological responses have been used for decades to infer stimulus-detection thresholds. But those inferences ultimately depend upon observer-chosen statistical criteria of normality. Again, stimulus-detection thresholds are probably overestimated. Practical implications – Owing to exaggerated stimulus-detection thresholds, infants may be misdiagnosed as “hearing impaired”, then needlessly fitted with electronic implants. Originality/value – Infants’ stimulus-detection thresholds are re-interpreted as attentional thresholds. Also, a cybernetics concept, the “Black Box”, is extended to infants, reinforcing the conclusions of the companion paper that the infant-as-research-subject cannot be conceptually separated from the attending laboratory staff. Indeed, infant and staff altogether constitute a new, reflexive whole, one that has proven too resilient for anybody’s good.
Conditional perfection is the phenomenon in which conditionals are strengthened to biconditionals. In some contexts, “If A, B” is understood as if it meant “A if and only if B.” We present and discuss a series of experiments designed to test one of the most promising pragmatic accounts of conditional perfection. This is the idea that conditional perfection is a form of exhaustification—that is, a strengthening to an exhaustive reading, triggered by a question that the conditional answers. If a speaker is asked how B comes about, then the answer “If A, B” is interpreted exhaustively as meaning that A is the only way to bring about B. Hence, “A if and only if B.” We uncover evidence that conditional perfection is a form of exhaustification, but not that it is triggered by a relationship to a salient question.
Reasoning is a monumental work of more than a thousand pages, edited in close collaboration by the philosopher Jonathan E. Adler and the psychologist Lance J. Rips to clarify the intricate field of research concerned with the foundations of inference and, more generally, of human reasoning. Nowadays, the work of compiling and editing scientific texts is rarely joined with encyclopedic ambition: an editorial project that rightly exceeds the aims of most books edited as collections of articles on a single research topic. Reasoning is an undertaking of encyclopedic character: since its publication in 2008 it has become an obligatory reference, offering the specialist reader scientific articles by the most reputable and established voices in fields of knowledge already present in the European encyclopedic projects of the Age of Enlightenment, namely: the meaning of rationalism, the limits attributable to the nature of human knowledge, the paradoxes present in induction, etc.
Whether or not deflationism is compatible with truth-conditional theories of meaning has often been discussed in very broad terms. This paper focuses only on Davidsonian semantics and Brandom's anaphoric deflationism and defends the claim that these are perfectly compatible. Critics of this view have voiced several objections, the most prominent of which claims that it involves an unacceptable form of circularity. The paper discusses how this general objection applies to the case of anaphoric deflationism and Davidsonian semantics and evaluates different ways of responding to it (Williams 1999, Horisk 2008 and Lance 1997). Then, three further objections to the compatibility of these theories are assessed and eventually dismissed (Horisk 2007, Patterson 2005 and Collins 2002). It is shown how these considerations shed light on core issues of the debate.
Our ascriptions of content to utterances in the past attribute to them a level of determinacy that extends beyond what could supervene upon the usage up to the time of those utterances. If one accepts the truth of such ascriptions, one can either (1) argue that subsequent use must be added to the supervenience base that determines the meaning of a term at a time, or (2) argue that such cases show that meaning does not supervene upon use at all. The following will argue, against authors such as Lance, Hawthorn and Ebbs, that the first of these options is the more promising of the two. However, maintaining the supervenience thesis ultimately requires understanding the relation between use and meaning as 'normative' in two important ways. The first (more familiar) way is that the function from use to meaning must be of a sort that allows us to maintain a robust distinction between correct usage and actual usage. This first type of normativity is accepted by defenders of many more temporally restricted versions of the supervenience thesis, but the second sort of normativity is unique to theories that extend the supervenience base into the future. In particular, if meaning is partially a function of future use, we can understand other commitments we are often taken to have about meaning, particularly the commitment to meaning being 'determinate', as practical commitments that structure our linguistic practices rather than theoretical commitments that merely describe such practices.
This paper attempts to explain what a protest is by using the resources of speech-act theory. First, we distinguish the object, redress, and means of a protest. This provides a way to think of atomic acts of protest as having dual communicative aspects, viz., a negative evaluation of the object and a connected prescription of redress. Second, we use Austin’s notion of a felicity condition to further characterize the dual communicative aspects of protest. This allows us to distinguish protest from some other speech acts which also involve a negative evaluation of some object and a connected prescription of redress. Finally, we turn to Kukla and Lance’s idea of a normative functionalist analysis of speech acts to advance the view that protests are a complex speech act constituted by dual input normative statuses and dual output normative statuses.
The aim of this collection is to offer accessible, introductory commentaries on the classic texts they accompany, opening avenues of discussion on the theme of capitalism. It is in this spirit that Emmanuel Chaput opens the debate by commenting on Pierre-Joseph Proudhon's text "Qu'est-ce que la propriété ?". Karl Marx's texts are of course not left out: Samuel-Élie Lesage takes this path firmly by discussing Marx's L'idéologie allemande; Christiane Bailey offers a different approach to the Marxian oeuvre by addressing its treatment of the animal question in excerpts from the Manifeste du parti communiste and Travail salarié et capital; and Mathieu Joffre-Lainé presents a fine analysis of the strictly economic questions of Capital. Turning to alternatives to capitalism in the thought of Léon Bourgeois, Éliot Litalien comments on La Solidarité, and Simon-Pierre Chevarie-Cossette tackles the analysis of the Essai d'une philosophie de la solidarité. Finally, linking capitalism, patriarchy, and political power, Tara Chanady proposes a reading of Emma Goldman's text Du mariage et de l'amour. Marie-Eve Jalbert's conclusion takes a contemporary and critical perspective, dissecting the critique of socialism advanced by Friedrich A. Hayek.
Élie HALÉVY (1870-1937), philosopher and historian of ideas, was a professor at the École libre des sciences politiques, the forerunner of today's Sciences Po. Like his other great work, the Histoire du peuple anglais au XIXe siècle, published in six volumes from 1913 to 1932, the three volumes of La formation du radicalisme philosophique, the first two published in 1901 and the third in 1904, partly reflect his teaching at the École libre devoted to British history. The first volume, La jeunesse de Bentham 1776-1789, studies the utilitarian doctrine not only in the figure regarded as its principal founder, Jeremy Bentham, but also in the many authors who, in Great Britain and on the continent, sketched its outlines before him. The second volume, L'évolution de la doctrine utilitaire de 1789 à 1815, shows how utilitarianism took the form not only of a school of thought but also of a movement for economic, social and political reform. This evolution was marked in particular by the new coordinating role of James Mill, as well as by a convergence of views with the economists, who were pushing in the direction of reform. The third volume, Le radicalisme philosophique, continues the study of the school's transformation into a movement after the end of the Napoleonic wars, when it began to score its first great reformist successes. Bentham, James Mill and the other utilitarian thinkers were then gathered under the name of philosophic radicals. The work's temporal endpoint is the Reform Act of 1832, the first step toward the modernization of the electoral system, which this group's propaganda contributed in no small part to bringing about. Although Halévy's work is valuable above all for the immense learning it displays and for the number and excellence of the quotations it offers, it also contains original historical and philosophical theses. Among the former one may cite the thesis, which links the three volumes, that British utilitarianism found its finished form in intervention upon society, when it turned into philosophic radicalism; among the latter, the thesis, stated at the beginning of the first volume, that there are three dominant models for joining individual interests (sympathetic fusion, natural identification and artificial identification). Another major thesis, at once historical and philosophical, holds in substance that classical political economy is a specialized department of utilitarian thought. The question of how far Smith, Ricardo and Malthus adhered to Bentham's "principle of utility" is still debated. Together with the elucidations brought to that principle, it helps explain the continuing interest that historians of economic thought take in the work. The author took part in the 1995 re-edition of La formation du radicalisme philosophique by the Presses Universitaires de France (P.U.F.), as part of a collective project launched by Monique Canto-Sperber. In the present article, which predates that re-edition, the author attempted to summarize briefly a book that remains irreplaceable despite a somewhat dated conception and style.