Reasoning is a monumental work of more than a thousand pages, edited in close collaboration by the philosopher Jonathan E. Adler and the psychologist Lance J. Rips to illuminate the intricate field of research on the foundations of inference and, more generally, of human reasoning. Today the work of compiling and editing scientific texts is rarely joined to an encyclopedic ambition: an editorial project that rightly exceeds the aims of most collections of articles on a single research topic. Reasoning is an undertaking of encyclopedic character: since its publication in 2008 it has become an obligatory reference, offering the specialist reader scientific articles by the most reputable and established voices in fields of knowledge already present in the European encyclopedic projects of the Enlightenment, namely: the meaning of rationalism, the limits attributable to the nature of human knowledge, the paradoxes of induction, and so on.
Modus ponens is the argument from premises of the form "If A, then B" and "A" to the conclusion "B". Nearly all participants agree that the modus ponens conclusion logically follows when the argument appears in this Basic form. However, adding a further premise can lower participants’ rate of agreement—an effect called suppression. We propose a theory of suppression that draws on contemporary ideas about conditional sentences in linguistics and philosophy. Semantically, the theory assumes that people interpret an indicative conditional as a context-sensitive strict conditional: true if and only if its consequent is true in each of a contextually determined set of situations in which its antecedent is true. Pragmatically, the theory claims that context changes in response to new assertions, including new conditional premises. Thus, the conclusion of a modus ponens argument may no longer be accepted in the changed context. Psychologically, the theory describes people as capable of reasoning about broad classes of possible situations, ordered by typicality, without having to reason about individual possible worlds. The theory accounts for the main suppression phenomena, and it generates some novel predictions that new experiments confirm.
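A minimal sketch of the semantic idea only, not the authors' formal model: a context is treated as a set of live situations, and an indicative conditional is evaluated as a strict conditional over that set. The proposition names and the extra "library closed" situation are hypothetical, chosen just to show how a new conditional premise can enlarge the context and suppress the modus ponens conclusion.

```python
def strict_conditional(context, antecedent, consequent):
    """True iff the consequent holds in every live situation where the antecedent holds."""
    return all(s[consequent] for s in context if s[antecedent])

# Basic modus ponens: in every live situation where she has an essay, she studies late.
context = [
    {"essay": True, "library_open": True, "studies_late": True},
]
print(strict_conditional(context, "essay", "studies_late"))   # True -> conclusion accepted

# Asserting "If the library is open, she will study late" makes the possibility
# that the library is closed salient, so the context grows to include it.
context.append({"essay": True, "library_open": False, "studies_late": False})
print(strict_conditional(context, "essay", "studies_late"))   # False -> suppression
```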
Conditional perfection is the phenomenon in which conditionals are strengthened to biconditionals. In some contexts, "If A, B" is understood as if it meant "A if and only if B." We present and discuss a series of experiments designed to test one of the most promising pragmatic accounts of conditional perfection. This is the idea that conditional perfection is a form of exhaustification—that is, a strengthening to an exhaustive reading, triggered by a question that the conditional answers. If a speaker is asked how B comes about, then the answer "If A, B" is interpreted exhaustively as meaning that A is the only way to bring about B. Hence, "A if and only if B." We uncover evidence that conditional perfection is a form of exhaustification, but not that it is triggered by a relationship to a salient question.
A major question in sensory science is how a sensation of magnitude F (such as loudness) depends upon a sensory stimulus of physical intensity I (such as a sound-pressure wave of a given root-mean-square sound-pressure level). An empirical just-noticeable sensation difference (∆F)_j at F_j specifies a just-noticeable intensity difference (∆I)_j at I_j. Classically, intensity differences accumulate from a stimulus-detection threshold I_th up to a desired intensity I. The corresponding sensation differences likewise accumulate from F(I_th), the non-zero sensation at I_th (as suggested by hearing studies), up to F(I). Consequently, sensation growth F(I) can be obtained through classic Fechnerian integration, in which some empirically-based relation for the Weber Fraction, ∆I/I, is combined with either Fechner’s Law, ∆F=B, or Ekman’s Law, ∆F/F=g. The number of steps in I is equated to the number of steps in F; an infinite series ensues, whose higher-order terms are traditionally ignored (Fechnerian integration). But, remarkably, so are the integration bounds I_th and F(I_th). Here, we depart from orthodoxy by including those bounds. Bounded Fechnerian integration is first used to derive hypothetical sensation-growth equations for which the differential ∆F(I)=F(I+∆I)-F(I) does indeed return either Fechner’s Law or Ekman’s Law respectively. One relation emerges: linear growth of sensation F with intensity I. Subsequently, 24 sensation-growth equations F(I) that the author had derived using bounded Fechnerian integration (12 equations for the Weber Fraction ∆I/I, each combined with either Fechner’s Law or with Ekman’s Law) are scrutinized for whether their differentials F(I+∆I)-F(I) return the respective Fechner’s Law or Ekman’s Law, particularly in the previously-unexamined limits ∆I/I≪1 and ∆I/I→0. Classic claims made by Luce and Edwards (1958) are then examined, viz., that three popular forms of the Weber Fraction, when combined with Fechner’s Law, produce sensation-magnitude equations that subsequently return the selfsame Fechner’s Law. When sensation-growth equations are derived here using bounded Fechnerian integration, Luce and Edwards (1958) prove to be wrong.
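For orientation, here is one standard combination worked with the bounds retained. It is an illustrative case in the spirit of the abstract, not a reproduction of any of the paper's derivations: Weber's Law for the just-noticeable intensity difference combined with Fechner's Law for the just-noticeable sensation difference.

```latex
% Illustrative case only: Weber's Law, \Delta I / I = K, with Fechner's Law,
% \Delta F = B, integrated with the lower bounds I_th and F(I_th) retained.
\[
  \frac{dF}{dI} \;\approx\; \frac{\Delta F}{\Delta I} \;=\; \frac{B}{K\,I}
  \quad\Longrightarrow\quad
  \int_{F(I_{\mathrm{th}})}^{F(I)} dF' \;=\; \frac{B}{K}\int_{I_{\mathrm{th}}}^{I}\frac{dI'}{I'},
\]
\[
  F(I) \;=\; F(I_{\mathrm{th}}) + \frac{B}{K}\,\ln\frac{I}{I_{\mathrm{th}}},
\]
% so the non-zero boundary sensation F(I_th) is carried along explicitly,
% rather than being silently set to zero as in the unbounded classic treatment.
```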
Theories of number concepts often suppose that the natural numbers are acquired as children learn to count and as they draw an induction based on their interpretation of the first few count words. In a bold critique of this general approach, Rips, Asmuth, and Bloomfield [Rips, L., Asmuth, J., & Bloomfield, A. (2006). Giving the boot to the bootstrap: How not to learn the natural numbers. Cognition, 101, B51–B60] argue that such an inductive inference is consistent with a representational system that clearly does not express the natural numbers and that possession of the natural numbers requires further principles that make the inductive inference superfluous. We argue that their critique is unsuccessful. Provided that children have access to a suitable initial system of representation, the sort of inductive inference that Rips et al. call into question can in fact facilitate the acquisition of larger integer concepts without the addition of any further principles.
This is a brief reply to Herbert A. Simon's fine paper "Literary Criticism: A Cognitive Approach," Stanford Humanities Review, Special Supplement (Bridging the Gap: Where Cognitive Science Meets Literary Criticism), vol. 4, no. 1, pp. 1-26, Spring 1994.
A conversation with Peter Rollins, questions from the editors of Stance. Peter Rollins is a writer, philosopher, storyteller and public speaker who has gained an international reputation for overturning traditional notions of religion and forming “churches” that preach the Good News that we can’t be satisfied, that life is difficult, and that we don’t know the secret. Challenging the idea that faith concerns questions relating to belief, Peter’s incendiary and irreligious reading of Christianity attacks the distinction between the sacred and the secular. It blurs the lines between theism and atheism and it sets aside questions regarding life after death to explore the possibility of life before death.
Recent research on the metaethical beliefs of ordinary people appears to show that they are metaethical pluralists who adopt different metaethical standards for different moral judgments. Yet the methods used to evaluate folk metaethical belief rely on the assumption that participants interpret what they are asked in metaethical terms. We argue that most participants do not interpret questions designed to elicit metaethical beliefs in metaethical terms, or at least not in the way researchers intend. As a result, existing methods are not reliable measures of metaethical belief. We end by discussing what our account means for the philosophical and practical implications of research on the psychology of metaethics.
Cyclist Lance Armstrong cheated his way to seven Tour de France titles. Such cheating is wrong because it harms society. To explain how that harm affects all of us, I use Aristotle's ideas of virtue ethics to argue that Armstrong, despite his charitable work, is not a virtuous person. Virtue is to some extent determined by society, so we need to be clear that Armstrong is not a person to emulate. A society which does not clearly disapprove of vice is less than it might otherwise be, because a good society is one that encourages virtue in its citizens.
This paper analyses the relationship between Public Administration, Knowledge Management, and Service Delivery, and asks whether improved Knowledge Management in the South African Government can improve public sector service delivery. The paper is a systematic analysis of 150 secondary literature sources. Even though not all the secondary literature sources analysed are used or cited in the paper, they nonetheless contributed to the identification of several key issues. The main finding of this paper is that improved Knowledge Management in the South African Government would ultimately result in improved public sector service delivery. There is a dearth of empirical research on Knowledge Management in the South African Government, including whether the public sector's adoption of private-sector methods to better itself is effective. From a Public Administration standpoint, none of the literature analysed explains how to successfully integrate Knowledge Management in the South African Government to improve service delivery. More research on this subject is necessary, especially to determine the impact of Knowledge Management on investor confidence and the inflow of Foreign Direct Investment. The research will benefit governments of developing countries, particularly South Africa, Public Administration scholars, and Knowledge Management professionals.
The object of research: The study revolves around KM and service delivery. It ascertains whether KM is a plausible solution to public service delivery challenges. Although the paper is aimed at governments worldwide, it focuses on South Africa. Investigated problem: While the public service in South Africa has been significantly transformed since apartheid’s end in 1994, the government is now under enormous pressure to deliver and to save the public service from further collapse. Recent years have seen an increase in service delivery demonstrations and marches. Many believe the public service delivery mechanisms introduced to circumvent public service delivery challenges have been ineffective. The main scientific results: Despite knowing what must be done, officials have trouble putting their plans, strategies, and policies into action, even though service delivery mechanisms were implemented to help them improve service delivery. In fact, only half of the respondents (50.7 %, n=33) were aware of service delivery mechanisms, though 95.4 % (n=62) concurred that KM is a viable solution to improve service delivery. Area of practical use of the research results: Very little research has been conducted on KM as a potential solution to South Africa’s service delivery problems. As a result, this research provides new insights into improving public sector service delivery using KM. Overall, the findings will benefit KM and Public Administration practitioners.
Although Knowledge Management was introduced as a Key Performance Indicator (KPI) for all senior management in South Africa 15 years ago, its implementation has been slow and inconsistent. This paper aimed to identify the factors that contribute to or deter the implementation of Knowledge Management in the South African government. The issue was explored through a review of the literature on Knowledge Management, as well as the results of an interview and a questionnaire completed by government officials doing Knowledge Management practitioner work in the South African government. The quantitative data were analysed using DATAtab. The findings identified two key factors that deter the implementation of Knowledge Management in the South African government: most departments in the South African government do not value Knowledge Management, and public officials responsible for implementing Knowledge Management in their departments lack implementation skills. There is a lack of research on Knowledge Management in developmental governments, and more research on this subject is necessary. The research will benefit Knowledge Management and Public Administration practitioners alike.
In 1948, Claude Shannon introduced his version of a concept that was core to Norbert Wiener's cybernetics, namely, information theory. Shannon's formalisms include a physical framework, namely a general communication system having six unique elements. Under this framework, Shannon information theory offers two particularly useful statistics, channel capacity and information transmitted. Remarkably, hundreds of neuroscience laboratories subsequently reported such numbers. But how (and why) did neuroscientists adapt a communications-engineering framework? Surprisingly, the literature offers no clear answers. To first answer "how", 115 authoritative peer-reviewed papers, proceedings, books, and book chapters were scrutinized for neuroscientists' characterizations of the elements of Shannon's general communication system. Evidently, many neuroscientists attempted no identification of the system's elements. Others identified only a few of Shannon's system's elements. Indeed, the available neuroscience interpretations show a stunning incoherence, both within and across studies. The interpretational gamut implies hundreds, perhaps thousands, of different possible neuronal versions of Shannon's general communication system. The obvious lack of a definitive, credible interpretation makes neuroscience calculations of channel capacity and information transmitted meaningless. To then answer why Shannon's system was ever adapted for neuroscience, three common features of the neuroscience literature were examined: ignorance of the role of the observer, the presumption of "decoding" of neuronal voltage-spike trains, and the pursuit of ingrained analogies such as information, computation, and machine. Each of these factors facilitated a plethora of interpretations of Shannon's system elements. Finally, let us not ignore the impact of these "informational misadventures" on society at large: it is the same impact as scientific fraud.
Purpose – The purpose of this paper is to ask whether a first-order-cybernetics concept, Shannon’s Information Theory, actually allows a far-reaching mathematics of perception allegedly derived from it, Norwich et al.’s “Entropy Theory of Perception”. Design/methodology/approach – All of The Entropy Theory, 35 years of publications, was scrutinized for its characterization of what underlies Shannon Information Theory: Shannon’s “general communication system”. There, “events” are passed by a “source” to a “transmitter”, thence through a “noisy channel” to a “receiver”, that passes “outcomes” (received events) to a “destination”. Findings – In the entropy theory, “events” were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. “Outcomes” often went unnamed; sometimes, the stimulus, or the interaction with it, or the resulting sensation, were “outcomes”. A “source” was often implied to be a “transmitter”, which frequently was a primary afferent neuron; elsewhere, the stimulus was the “transmitter” and perhaps also the “source”. “Channel” was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. “Receiver” was usually the sensory receptor, but could be an afferent. “Destination” went unmentioned. In sum, the entropy theory’s idea of Shannon’s “general communication system” was entirely ambiguous. Research limitations/implications – The ambiguities indicate that, contrary to claim, the entropy theory cannot be an “information theoretical description of the process of perception”. Originality/value – Scrutiny of the entropy theory’s use of information theory was overdue and reveals incompatibilities that force a reconsideration of information theory’s possible role in perception models. A second-order-cybernetics approach is suggested.
Shannon’s information theory has been a popular component of first-order cybernetics. It quantifies information transmitted in terms of the number of times a sent symbol is received as itself, or as another possible symbol. Sent symbols were events and received symbols were outcomes. Garner and Hake reinterpreted Shannon, describing events and outcomes as categories of a stimulus attribute, so as to quantify the information transmitted in the psychologist’s category (or absolute judgment) experiment. There, categories are represented by specific stimuli, and the human subject must assign those stimuli, singly and in random order, to the categories that they represent. Hundreds of computations ensued of information transmitted and its alleged asymptote, the sensory channel capacity. The present paper critically re-examines those estimates. It also reviews estimates of memory capacity from memory experiments. It concludes that absolute judgment is memory-limited and that channel capacities are actually memory capacities. In particular, there are factors that affect absolute judgment that are not explainable within Shannon’s theory, factors such as feedback, practice, motivation, and stimulus range, as well as the anchor effect, sequential dependences, the rise in information transmitted with the increase in number of stimulus dimensions, and the phenomena of masking and stimulus duration dependence. It is recommended that absolute judgments be abandoned, because there are already many direct estimates of memory capacity.
Purpose – The purpose of this paper is to examine the popular “information transmitted” interpretation of absolute judgments, and to provide an alternative interpretation if one is needed. Design/methodology/approach – The psychologists Garner and Hake and their successors used Shannon’s Information Theory to quantify information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment, and its information theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on “information transmitted” are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change with increase in the number of judged stimuli. Findings – Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, “channel capacity” is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but which clearly imply memory dependence. Research limitations/implications – Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making redundant the informational analysis of absolute judgments. Originality/value – This paper presents a long-overdue comprehensive critical review of the established interpretation of absolute judgments in terms of “information transmitted”. An inevitable conclusion is reached: that published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.
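For concreteness, the sketch below shows the usual Garner–Hake style estimate of "information transmitted" from an absolute-judgment confusion matrix, using the standard identity T = H(stimulus) + H(response) − H(stimulus, response). The 4×4 count matrix is invented for illustration; it is not data from any of the papers discussed.

```python
import numpy as np

def information_transmitted(counts):
    """Estimate T = H(stimulus) + H(response) - H(joint), in bits.
    Rows of `counts` are stimulus categories, columns are response categories."""
    p = counts / counts.sum()                      # joint probabilities
    def h(q):                                      # Shannon entropy in bits
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    return h(p.sum(axis=1)) + h(p.sum(axis=0)) - h(p.ravel())

# Hypothetical absolute-judgment data: 20 presentations per stimulus category.
counts = np.array([[18,  2,  0,  0],
                   [ 3, 14,  3,  0],
                   [ 0,  4, 12,  4],
                   [ 0,  0,  3, 17]], dtype=float)
print(f"{information_transmitted(counts):.2f} bits transmitted")
```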
Purpose – For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted,” a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience. The paper aims to discuss these issues. Design/methodology/approach – Shannon Information Theory depends upon a physical model, Shannon’s “general communication system.” Neuroscientists’ interpretation of that model is scrutinized here. Findings – In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted.” Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness,” is obviated if consciousness proves to be “emergent.” Research limitations/implications – Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal. Originality/value – A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability”.
Purpose – A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by Harvard psychologists Garner and Hake for “absolute identification” experiments. There, human subjects “categorize” sensory stimuli, affording “information transmitted” in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the “why” is answered. The paper aims to discuss these issues. Design/methodology/approach – A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by “symbol sent” (event) and “symbol received” (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and is minutely examined here. Findings – The Garner-Hake confusion-matrix columns represent “stimulus categories”, ranges of some physical stimulus attribute (usually intensity), and its rows represent “response categories” of the subject’s identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that “outcomes” and “events” differ in kind (unlike Shannon’s). Obtaining a true “information transmitted” therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation to intensity. But those relations are actually unknown, perhaps unknowable. Originality/value – The author achieves an important understanding: why “absolute identification” experiments do not illuminate sensory processes.
Introduction & Objectives: Norwich’s Entropy Theory of Perception (1975 [1] -present) stands alone. It explains many firing-rate behaviors and psychophysical laws from bare theory. To do so, it demands a unique sort of interaction between receptor and brain, one that Norwich never substantiated. Can it now be confirmed, given the accumulation of empirical sensory neuroscience? Background: Norwich conjoined sensation and a mathematical model of communication, Shannon’s Information Theory, as follows: “In the entropic view of sensation, magnitude of sensation is regarded (...) as a measure of the entropy or uncertainty of the stimulus signal” [2]. “To be uncertain about the outcome of an event, one must first be aware of a set of alternative outcomes” [3]. “The entropy-establishing process begins with the generation of a [internal] sensory signal by the stimulus generator. This is followed by receipt of the [external] stimulus by the sensory receptor, transmission of action potentials by the sensory neurons, and finally recapture of the [response to the internal] signal by the generator” [4]. The latter “recapture” differentiates external from internal stimuli. The hypothetical “stimulus generators” are internal emitters, that generate photons in vision, audible sounds in audition (to Norwich, the spontaneous otoacoustic emissions [SOAEs]), “temperatures in excess of local skin temperature” in skin temperature sensation [4], etc. Method (1): Several decades of empirical sensory physiology literature was scrutinized for internal “stimulus generators”. Results (1): Spontaneous photopigment isomerization (“dark light”) does not involve visible light. SOAEs are electromechanical basilar-membrane artefacts that rarely produce audible tones. The skin’s temperature sensors do not raise skin temperature, etc. Method (2): The putative action of the brain-and-sensory-receptor loop was carefully reexamined. Results (2): The sensory receptor allegedly “perceives”, experiences “awareness”, possesses “memory”, and has a “mind”. But those traits describe the whole human. The receptor, thus anthropomorphized, must therefore contain its own perceptual loop, containing a receptor, containing a perceptual loop, etc. Summary & Conclusions: The Entropy Theory demands sensory awareness of alternatives, through an imagined brain-and-sensory-receptor loop containing internal “stimulus generators”. But (1) no internal “stimulus generators” seem to exist and (2) the loop would be the outermost of an infinite nesting of identical loops. (shrink)
Purpose – In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims to first understand how it happened, and hence why it persists. Design/methodology/approach – The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized. Findings – Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective. Research limitations/implications – Anthropomorphization underlies the widespread application to neurons of Information Theory and Signal Detection Theory, making both approaches incorrect. Practical implications – A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more-fruitful approaches. Originality/value – A long-overdue scrutiny of sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.
There is widespread consensus that the volatility, uncertainty, complexity, and ambiguity (VUCA) environment has contributed to the subpar quality of public sector service delivery in South Africa. Hence, the aim of this paper is to ascertain how the South African government can enhance service delivery in a VUCA world. This article presents a comprehensive study of a number of secondary literature sources. The author makes an effort to draw attention to knowledge gaps that might serve as the foundation for more research in the future. The main finding is that for the South African government to provide good service in a VUCA environment, its employees must be proficient in Results-Based Monitoring and Evaluation, Strategic Planning, Programme and Project Management Methodology, and Change Management Methodology. There is a severe lack of empirical study on the delivery of public sector services in an environment characterized by VUCA. As a result, there is a need for more research on this topic, specifically to establish the effect that the VUCA environment has on the governments of emerging nations. The research will be beneficial to the governments of developing countries, notably South Africa, as well as to those who work in the field of public administration.
In 1947, Hardy, Wolff, and Goodell achieved a psychophysics milestone: they built a putative sensation-growth scale, for skin pain, from pain-difference limens. Limens were found using the “dolorimeter”, a device first made by Hardy & co. to evoke pain for pain-threshold measurements. Scant years later, though, H.K. Beecher (MD) discredited the pain scale – according to Paterson (2019), citing the historian Tousignant. Yet Hardy & co. receive approval in the literature. Intrigued, we scrutinized their methods, then Beecher’s critiques, and Tousignant’s history of threshold dolorimetry. Beecher decried dolorimetry as irrelevant, favoring clinical trials of pain relief. But he failed to discredit dolorimetry.
In Cybernetics (1961 Edition), Professor Norbert Wiener noted that “The role of information and the technique of measuring and transmitting information constitute a whole discipline for the engineer, for the neuroscientist, for the psychologist, and for the sociologist”. Sociology aside, the neuroscientists and the psychologists inferred “information transmitted” using the discrete summations from Shannon Information Theory. The present author has since scrutinized the psychologists’ approach in depth, and found it wrong. The neuroscientists’ approach is highly related, but remains unexamined. Neuroscientists quantified “the ability of [physiological sensory] receptors (or other signal-processing elements) to transmit information about stimulus parameters”. Such parameters could vary along a single continuum (e.g., intensity), or along multiple dimensions that altogether provide a Gestalt – such as a face. Here, unprecedented scrutiny is given to how 23 neuroscience papers computed “information transmitted” in terms of stimulus parameters and the evoked neuronal spikes. The computations relied upon Shannon’s “confusion matrix”, which quantifies the fidelity of a “general communication system”. Shannon’s matrix is square, with the same labels for columns and for rows. Nonetheless, neuroscientists labelled the columns by “stimulus category” and the rows by “spike-count category”. The resulting “information transmitted” is spurious, unless the evoked spike-counts are worked backwards to infer the hypothetical evoking stimuli. The latter task is probabilistic and, regardless, requires that the confusion matrix be square. Was it? For these 23 significant papers, the answer is No.
Purpose – This study aims to examine the observer’s role in “infant psychophysics”. Infant psychophysics was developed because the diagnosis of perceptual deficits should be done as early in a patient’s life as possible, to provide efficacious treatment and thereby reduce potential long-term costs. Infants, however, cannot report their perceptions. Hence, the intensity of a stimulus at which the infant can detect it, the “threshold”, must be inferred from the infant’s behavior, as judged by observers (watchers). But whose abilities are actually being inferred? The answer affects all behavior-based conclusions about infants’ perceptions, including the well-proselytized notion that auditory stimulus-detection thresholds improve rapidly during infancy. Design/methodology/approach – In total, 55 years of infant psychophysics is scrutinized, starting with seminal studies in infant vision, followed by the studies that they inspired in infant hearing. Findings – The inferred stimulus-detection thresholds are those of the infant-plus-watcher and, more broadly, the entire laboratory. The thresholds are therefore tenuous, because infants’ actions may differ with stimulus intensity; expressiveness may differ between infants; different watchers may judge infants differently; etc. Particularly, the watcher’s ability to “read” the infant may improve with the infant’s age, confounding any interpretation of perceptual maturation. Further, the infant’s gaze duration, an assumed cue to stimulus detection, may lengthen or shorten nonlinearly with infant age. Research limitations/implications – Infant psychophysics investigators have neglected the role of the observer, resulting in an accumulation of data that requires substantial re-interpretation. Altogether, infant psychophysics has proven far too resilient for its own good. Originality/value – Infant psychophysics is examined for the first time through second-order cybernetics. The approach reveals serious unresolved issues.
An ongoing mystery in sensory science is how sensation magnitude F(I), such as loudness, increases with increasing stimulus intensity I. No credible, direct experimental measures exist. Nonetheless, F(I) can be inferred algebraically. Differences in sensation have empirical (but non-quantifiable) minimum sizes called just-noticeable sensation differences, ∆F, which correspond to empirically-measurable just-noticeable intensity differences, ∆I. The ∆Is presumably cumulate from an empirical stimulus-detection threshold I_th up to the intensity of interest, I. Likewise, corresponding ∆Fs cumulate from the sensation at the stimulus-detection threshold, F(I_th), up to F(I). Regarding the ∆Is, however, it is unlikely that all of them will be known experimentally; the procedures are too lengthy. The customary approach, then, is to find ∆I at a few widely-spaced intensities, and then use those ∆Is to interpolate all ∆Is using some smooth continuous function. The most popular of those functions is Weber’s Law, ∆I/I=K. But that is often not even a credible approximation to the data. However, there are other equations for ∆I/I. Any such equation for ∆I/I can be combined with any equation for ∆F, through calculus, to altogether obtain F(I). Here, two assumptions for ∆F are considered: ∆F=B (Fechner’s Law) and ∆F/F=g (Ekman’s Law). The respective integrals involve lower bounds I_th and F(I_th). This stands in broad contrast to the literature, which heavily favors non-bounded integrals. We, hence, obtain 24 new, alternative equations for sensation magnitude F(I) (12 equations for ∆I/I × 2 equations for ∆F).
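As one illustrative combination, in the spirit of the abstract but not copied from the paper's 24 equations, Weber's Law together with Ekman's Law, integrated with the lower bounds retained, yields a power law anchored at the threshold sensation:

```latex
% Illustrative combination only: Weber's Law, \Delta I / I = K, with
% Ekman's Law, \Delta F / F = g, keeping the lower bounds I_th and F(I_th).
\[
  \frac{\Delta F}{F} = g, \qquad \frac{\Delta I}{I} = K
  \quad\Longrightarrow\quad
  \int_{F(I_{\mathrm{th}})}^{F(I)} \frac{dF'}{F'}
  \;=\; \frac{g}{K}\int_{I_{\mathrm{th}}}^{I}\frac{dI'}{I'},
\]
\[
  F(I) \;=\; F(I_{\mathrm{th}})\left(\frac{I}{I_{\mathrm{th}}}\right)^{g/K},
\]
% a power law whose coefficient is fixed by the non-zero boundary sensation F(I_th).
```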
Information flow in a system is a core cybernetics concept. It has been used frequently in Sensory Psychology since 1951. There, Shannon Information Theory was used to calculate "information transmitted" in "absolute identification" experiments involving human subjects. Originally, in Shannon's "system", any symbol received ("outcome") is among the symbols sent ("events"). Not all symbols are received as transmitted, hence an indirect noise measure is calculated, "information transmitted", which requires knowing the confusion matrix, its columns labeled by "event" and its rows labeled by "outcome". Each matrix entry is dependent upon the frequency with which a particular outcome corresponds to a particular event. However, for the sensory psychologist, stimulus intensities are "events"; the experimenter partitions the intensity continuum into ranges called "stimulus categories" and "response categories", such that each confusion-matrix entry represents the frequency with which a stimulus from a stimulus category falls within a particular response category. Of course, a stimulus evokes a sensation, and the subject's immediate memory of it is compared to the memories of sensations learned during practice, to make a categorization. Categorizing thus introduces "false noise", which is only removed if categorizations can be converted back to their hypothetical evoking stimuli. But sensations and categorizations are both statistically distributed, and the stimulus that corresponds to a given mean categorization cannot be known from only the latter; the relation of intensity to mean sensation, and of mean sensation to mean categorization, are needed. Neither, however, is presently knowable. This is a quandary, which arose because sensory psychologists ignored a ubiquitous component of Shannon's "system", the uninvolved observer, who calculates "information transmitted". Human sensory systems, however, are within de facto observers, making "false noise" inevitable.
Purpose – Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own, to coordinate its capabilities – a brain that necessarily contains a Homunculus and so on indefinitely. Such infinity is impossible – and in well-cited papers, Attneave and later Dennett claim to eliminate it. How do their approaches differ and do they (in fact) obviate the Homunculi? Design/methodology/approach – The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively “decision-making” neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively “stupider”, limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become “stupider” – but brain-wards, where greater sophistication might have been expected. Findings – Attneave’s argument is Reductionist and it simply assumes-away the Homuncular infinity. Dennett’s scheme, which evidently derives from Attneave’s, ultimately involves the same mistakes. Attneave and Dennett fail, because they attempt to reduce intentionality to non-intentionality. Research limitations/implications – Homunculus has been successively recognized over the centuries by philosophers, psychologists and (some) neuroscientists as a crucial conundrum of cognitive science. It still is. Practical implications – Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them. Originality/value – Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, “Emergence”, is discussed as a means of rendering Homunculi irrelevant.
Purpose – This paper aims to extend the companion paper on “infant psychophysics”, which concentrated on the role of in-lab observers (watchers). Infants cannot report their own perceptions, so for five decades their detection thresholds for sensory stimuli were inferred from their stimulus-evoked behavior, judged by watchers. The inferred thresholds were revealed to inevitably be those of the watcher–infant duo, and, more broadly, the entire Laboratory. Such thresholds are unlikely to represent the finest stimuli that the infant can detect. What, then, do they represent? Design/methodology/approach – Infants’ inferred stimulus-detection thresholds are hypothesized to be attentional thresholds, representing more-salient stimuli that overcome distraction. Findings – Empirical psychometric functions, which show “detection” performance versus stimulus intensity, have shallower slopes for infants than for adults. This (and other evidence) substantiates the attentional hypothesis. Research limitations/implications – An observer can only infer the mechanisms underlying an infant’s perceptions, not know them; infants’ minds are “Black Boxes”. Nonetheless, infants’ physiological responses have been used for decades to infer stimulus-detection thresholds. But those inferences ultimately depend upon observer-chosen statistical criteria of normality. Again, stimulus-detection thresholds are probably overestimated. Practical implications – Owing to exaggerated stimulus-detection thresholds, infants may be misdiagnosed as “hearing impaired”, then needlessly fitted with electronic implants. Originality/value – Infants’ stimulus-detection thresholds are re-interpreted as attentional thresholds. Also, a cybernetics concept, the “Black Box”, is extended to infants, reinforcing the conclusions of the companion paper that the infant-as-research-subject cannot be conceptually separated from the attending laboratory staff. Indeed, infant and staff altogether constitute a new, reflexive whole, one that has proven too resilient for anybody’s good.
This paper concerns the Black Box. It is not the engineer’s black box that can be opened to reveal its mechanism, but rather one whose operations are inferred through input from (and output to) a companion observer. We are observers ourselves, and we attempt to understand minds through interactions with their host organisms. To this end, Ranulph Glanville followed W. Ross Ashby in elaborating the Black Box. The Black Box and its observer together form a system having different properties than either component alone, making it a greater Black Box to any further-external observer. How far into this greater box can a further-external observer probe? The answer is crucial to understanding Black Boxes, and so an answer is offered here. It employs von Foerster’s machines, abstract entities having mechano-electrical bases, just like putative Black Boxes. Von Foerster follows Turing, Ashby, E. F. Moore, and G. H. Mealy in recognizing archetype machines that he calls trivial (predictable) and non-trivial (non-predictable). It is argued here that non-trivial machines are the only true Black Boxes. But non-trivial machines can be concatenated from trivial machines. Hence, the utter core of any greater Black Box (a non-trivial machine) may involve two (or more) White Boxes (trivial machines). This is how an unpredictable thing emerges from predictable parts. Interactions of White Boxes—of trivial machines—may be the ultimate source of the mind.
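A toy sketch of the closing idea, illustrative only: the component machines and their coupling below are invented, not taken from the paper. Two trivial, fully predictable machines are coupled so that one machine's output sets a hidden parameter of the other; to an outside observer the composite is history-dependent, hence non-trivial, even though each part is a White Box.

```python
def make_nontrivial(f, g):
    """Concatenate two trivial machines through a hidden circulating value."""
    hidden = 0
    def machine(x):
        nonlocal hidden
        out = f(x, hidden)      # trivial machine f: fixed function of (input, parameter)
        hidden = g(out)         # trivial machine g: fixed update of the parameter
        return out
    return machine

f = lambda x, z: (x + z) % 4    # predictable on its own
g = lambda y: y + 1             # predictable on its own

m = make_nontrivial(f, g)
print([m(1) for _ in range(4)]) # [1, 3, 1, 3]: the same input yields varying outputs
```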
This paper reveals errors within Norwich et al.’s Entropy Theory of Perception, errors that have broad implications for our understanding of perception. What Norwich and coauthors dubbed their “informational theory of neural coding” is based on cybernetics, that is, control and communication in man and machine. The Entropy Theory uses information theory to interpret human performance in absolute judgments. There, the continuum of the intensity of a sensory stimulus is cut into categories and the subject is shown exemplar stimuli of each category. The subject must then identify individual exemplars by category. The identifications are recorded in the Garner-Hake version of the Shannon “confusion matrix”. The matrix yields “H”, the entropy (degree of uncertainty) about what stimulus was presented. Hypothetically, uncertainty drops as a stimulus lengthens, i.e. a plot of H vs. stimulus duration should fall monotonically. Such “adaptation” is known for both sensation and firing rate. Hence, because “the physiological adaptation curve has the same general shape as the psychophysical adaptation curve”, Norwich et al. assumed that both have the same time course; sensation and firing rate were thus both declared proportional to H. However, a closer look reveals insurmountable contradictions. First, the peripheral neuron hypothetically cannot fire in response to a stimulus of a given intensity until after somehow computing H from its responses to stimuli of various intensities. Thus no sensation occurs until firing rate adapts, i.e. attains its spontaneous rate. But hypothetically, once adaptation is complete, certainty is reached and perception ends. Altogether, then, perception cannot occur until perception is over. Secondly, sensations, firing rates, and H’s are empirically not synchronous, contrary to assumption. In sum, the core concept of the cybernetics-based Entropy Theory of Perception, that is, that uncertainty reduction is the basis for perception, is irrational.
Nineteen Prescott Fire Department Granite Mountain Hot Shot (GMHS) wildland firefighters and supervisors (WFF) perished on the June 2013 Yarnell Hill Fire (YHF) in Arizona. The firefighters left their Safety Zone during forecast outflow winds, triggering explosive fire behavior in drought-stressed chaparral. Why would an experienced WFF Crew leave ‘good black’ and travel downslope through a brush-filled chimney, contrary to their training and experience? An organized Serious Accident Investigation Team (SAIT) found, “… no indication of negligence, reckless actions, or violations of policy or protocol.” Despite this, many WFF professionals deemed the catastrophe “… the final, fatal link, in a long chain of bad decisions with good outcomes.” This paper is a theoretical and realistic examination of plausible, faulty human decisions with prior good outcomes; internal and external impacts influencing the GMHS; and two explanations for this catastrophe: Individual Blame Logic and Organizational Function Logic, along with proposed preventive mitigations.
Nineteen Prescott Fire Department Granite Mountain Hot Shot (GMHS) wildland firefighters (WF) perished in Arizona in the June 2013 Yarnell Hill Fire, an inexplicable wildland fire disaster. In complex wildland fires, sudden, dynamic changes in human factors and fire conditions can occur, and mistakes can thus be fatal. Individual and organizational faults regarding the predictable, puzzling human failures that will result in future WF deaths are addressed. The GMHS were individually, then collectively, fixated on abandoning their Safety Zone to reengage, committing themselves at the worst possible time to relocate to another Safety Zone - a form of collective tunnel vision. Our goal is to provoke meaningful discussion toward improved wildland firefighter safety, with practical solutions derived from long-established wildland firefighter expertise and performance in a fatality-prone profession. Wildfire fatalities are unavoidable; hence these proposals, applied to ongoing training, can significantly contribute to other well-thought-out and validated measures to reduce them.
A growing body of psychological research seeks to understand how people's thinking comports with long-standing philosophical theories, such as whether they view ethical or aesthetic truths as subjective or objective. Yet such research can be critically undermined if it fails to accurately characterize the philosophical positions in question and fails to ensure that subjects understand them appropriately. We argue that a recent article by Rabb et al. (2020) fails to meet these demands and propose several constructive solutions for future research.
The language of phenomenology includes terms such as intentionality, phenomenon, insight, analysis, and sense, not to mention the key term of Edmund Husserl’s manifesto, the return “to the things themselves.” But what does “the things themselves” properly mean? How come the term is replaced by “findings” over time? And what are the findings for? The investigation begins by looking at the tricky legacy of the modern turn, trying to clarify ties to past masters, including Francisco Suárez and Augustine of Hippo. The former, because his influence goes beyond René Descartes, undoubtedly reaching Franz Brentano and his students, as well as Martin Heidegger. The latter, because Augustine gives a personal component to the Greek inheritance, marked by the “inward turn.” However, it would not be possible to review the history of thought without the help offered by Jan Patočka's analyses. Patočka discloses the “care” of the Greek philosophers, Plato and Democritus among others, “for the soul”, we would say with Patočka for “being,” whose sense “does not leave us indifferent”, as the leitmotiv of Ancient Philosophy. Nevertheless, in his lectures on Plato and Europe, Patočka points out that one must be careful not to confuse the phenomena of things, of existens, with the phenomena of being. Finally, Patočka’s legacy is found in the efforts to reconcile the life-feeling with the modern construction of reality, which means “a radical reconstruction of the naive and natural world of common sense.” In some ways, intentionality is to be revised.
I present an account of deterministic chance which builds upon the physico-mathematical approach to theorizing about deterministic chance known as 'the method of arbitrary functions'. This approach promisingly yields deterministic probabilities which align with what we take the chances to be---it tells us that there is approximately a 1/2 probability of a spun roulette wheel stopping on black, and approximately a 1/2 probability of a flipped coin landing heads up---but it requires some probabilistic materials to work with. I contend that the right probabilistic materials are found in reasonable initial credence distributions. I note that, with some normative assumptions, the resulting account entails that deterministic chances obey a variant of Lewis's 'principal principle'. I additionally argue that deterministic chances, so understood, are capable of explaining long-run frequencies.
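A toy numerical illustration of the method-of-arbitrary-functions idea (the wheel model, band width, and candidate credence distributions below are invented for the sketch, not drawn from the paper): because the black/red outcome alternates rapidly as a function of initial spin speed, any sufficiently smooth initial credence distribution over speeds yields a probability of black close to 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_black(initial_speeds, band=0.01):
    """Deterministic wheel: the pocket is a fixed function of initial spin speed."""
    pocket = np.floor(initial_speeds / band).astype(int)
    return np.mean(pocket % 2 == 0)          # even-numbered pockets count as black

# Two rather different smooth "reasonable initial credence distributions" over spin speed:
uniform_speeds = rng.uniform(5.0, 15.0, 100_000)
skewed_speeds = rng.gamma(shape=9.0, scale=1.2, size=100_000)

print(p_black(uniform_speeds), p_black(skewed_speeds))   # both approximately 0.5
```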
Is logic normative for reasoning? In the wake of work by Gilbert Harman and John MacFarlane, this question has been reduced to: are there any adequate bridge principles which link logical facts to normative constraints on reasoning? Hitherto, defenders of the normativity of logic have exclusively focussed on identifying adequate validity bridge principles: principles linking validity facts—facts of the form 'gamma entails phi'—to normative constraints on reasoning. This paper argues for two claims. First, for the time being at least, Harman’s challenge cannot be surmounted by articulating validity bridge principles. Second, Harman’s challenge can be met by articulating invalidity bridge principles: principles linking invalidity facts of the form 'gamma does not entail phi' to normative constraints on reasoning. In doing so, I provide a novel defence of the normativity of logic.
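To make the format concrete, here is one schematic pair in a broadly MacFarlane-style notation. This is an illustration of the genre only, not necessarily the exact principles the paper defends:

```latex
% Schematic illustrations only; the precise deontic operators and scope
% are exactly what is at issue in the bridge-principle literature.
\[
  \textbf{Validity bridge principle:}\quad
  \Gamma \vDash \varphi \;\Rightarrow\;
  \text{if you believe every member of } \Gamma,
  \text{ you ought not disbelieve } \varphi.
\]
\[
  \textbf{Invalidity bridge principle:}\quad
  \Gamma \nvDash \varphi \;\Rightarrow\;
  \text{believing every member of } \Gamma
  \text{ does not, by itself, commit you to believing } \varphi.
\]
```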
The majority of our linguistic exchanges, such as everyday conversations, are divided into turns; one party usually talks at a time, with only relatively rare occurrences of brief overlaps in which there are two simultaneous speakers. Moreover, conversational turn-taking tends to be very fast. We typically start producing our responses before the previous turn has finished, i.e., before we are confronted with the full content of our interlocutor’s utterance. This raises interesting questions about the nature of linguistic understanding. Philosophical theories typically focus on linguistic understanding characterized either as an ability to grasp the contents of utterances in a given language or as outputs of this ability—mental states of one type or another. In this paper, I supplement these theories by developing an account of the process of understanding. I argue that it enables us to capture the dynamic and temporal aspect of understanding and reconcile philosophical investigations with empirical research on language comprehension.
We present a translation of chapters 1 and 2 of Spinoza and Time by the Jewish philosopher Samuel Alexander, a book that derives from the Fourth Arthur Davis Memorial Lecture, delivered before the Jewish Historical Society of England on Sunday, May 1, 1921 / 23 Nisan, 5681. The translation responds to the need for a Spanish-language approach to Alexander's corpus, since to date no complete translation of his books exists. The translator also finds motivation in the rediscovery of Jewish authors who address the topic of temporality.
Basic Formal Ontology (BFO) is a top-level ontology consisting of thirty-six classes, designed to support information integration, retrieval, and analysis across all domains of scientific investigation, and presently employed in over 350 ontology projects around the world. BFO is a genuine top-level ontology, containing no terms particular to material domains, such as physics, medicine, or psychology. In this paper, we demonstrate how a series of cases illustrating common types of change may be represented by universals, defined classes, and relations employing the BFO framework. We discuss these cases to provide a template for other ontologists using BFO, as well as to facilitate comparison with the strategies proposed by ontologists using different top-level ontologies.
John Langshaw Austin, one of the central figures of the philosophy of language, attributes principles of causation to merely pragmatic language. Kant, by contrast, tried to construct a "free human act" that is independent of any physical determination except its innate motivations, via his well-known phenomenal/noumenal distinction. On that Kantian metaphysical ground, which appeals to the noumenal field, Kant tries to establish this behavioral causation by denying Austinian-style pragmatic propositions, or illocutionary acts. I claim that this duality between Austin and Kant creates an epistemological problem concerning how propositions and actions relate. From a Kantian position, indetermination is overlooked by Austin's propositional doctrine, which rests not on any universal principle but only on the propositions embraced by speech act theory.
Multiple-choice questions have an undeserved reputation for only being able to test student recall of basic facts. In fact, well-crafted mechanically gradable questions can measure very sophisticated cognitive skills, including those engaged at the highest level of Benjamin Bloom’s taxonomy of outcomes. In this article, I argue that multiple-choice questions should be a part of the diversified assessment portfolio for most philosophy courses. I present three arguments broadly related to fairness. First, multiple-choice questions allow one to consolidate subjective decision making in a way that makes it easier to manage. Second, multiple-choice questions contribute to the diversity of an evaluation portfolio by balancing out problems with writing-based assessments. Third, by increasing the diversity of evaluations, multiple-choice questions increase the inclusiveness of the course. In the course of this argument I provide examples of multiple-choice questions that measure sophisticated learning and advice for how to write good multiple-choice questions.
An interesting aspect of Ernest Sosa’s (2017) recent thinking is that enhanced performances (e.g., the performance of an athlete under the influence of a performance-enhancing drug) fall short of aptness, and this is because such enhanced performances do not issue from genuine competences on the part of the agent. In this paper, I explore in some detail the implications of such thinking in Sosa’s wider virtue epistemology, with a focus on cases of cognitive enhancement. A certain puzzle is then highlighted, and the solution proposed draws from both the recent moral responsibility literature on guidance control (e.g., Fischer and Ravizza 1998; Fischer 2012) as well as from work on cognitive integration in the philosophy of mind and cognitive science (e.g., Clark and Chalmers 1998; Clark 2008; Pritchard 2010; Palermos 2014; Carter 2017).