Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole.” Yet it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies would offer a superior approach. To address these questions, we distinguish between two types of reductionism, which we call “modular reductionism” and “bottom-up reductionism.” Much knowledge in molecular biology has been gained by decomposing living systems into functional modules or through detailed studies of molecular processes. We ask whether systems biology provides novel ways to recompose these findings in the context of the system as a whole via computational simulations. As an example of the computational integration of modules, we analyze the first whole-cell model of the bacterium M. genitalium. Second, we examine the attempt to recompose processes across different spatial scales via multi-scale cardiac models. Although these models, too, rely on a number of idealizations and simplifying assumptions, we argue that they provide insight into the limitations of reductionist approaches. Whole-cell models can be used to discover properties arising at the interfaces of dynamically coupled processes within a biological system, thereby making more apparent what is lost through decomposition. Similarly, multi-scale modeling highlights the relevance of macroscale parameters and models and challenges the view that living systems can be understood “bottom-up.” Specifically, we point out that system-level properties constrain lower-scale processes. Thus, large-scale modeling reveals how living systems are at once more and less than the sum of their parts.
Does perceptual consciousness require cognitive access? Ned Block argues that it does not. Central to his case are visual memory experiments that employ post-stimulus cueing—in particular, Sperling's classic partial report studies, change-detection work by Lamme and colleagues, and a recent paper by Bronfman and colleagues that exploits our perception of ‘gist’ properties. We argue contra Block that these experiments do not support his claim. Our reinterpretations differ from those of previous critics in also challenging a longstanding and common view of visual memory as involving declining capacity across a series of stores. We conclude by discussing the relation of probabilistic perceptual representations to phenomenal consciousness.
Zenon Pylyshyn argues that cognitively driven attentional effects do not amount to cognitive penetration of early vision because such effects occur either before or after early vision. Critics object that in fact such effects occur at all levels of perceptual processing. We argue that Pylyshyn’s claim is correct—but not for the reason he emphasizes. Even if his critics are correct that attentional effects are not external to early vision, these effects do not satisfy Pylyshyn’s requirements that the effects be direct and exhibit semantic coherence. In addition, we distinguish our defense from those found in recent work by Raftopoulos and by Firestone and Scholl, argue that attention should not be assimilated to expectation, and discuss alternative characterizations of cognitive penetrability, advocating a kind of pluralism.
Nick Shea’s Representation in Cognitive Science commits him to representations in perceptual processing that are about probabilities. This commentary concerns how to adjudicate between this view and an alternative that locates the probabilities rather in the representational states’ associated “attitudes”. As background and motivation, evidence for probabilistic representations in perceptual processing is adduced, and it is shown how, on either conception, one can address a specific challenge Ned Block has raised to this evidence.
Theories of consciousness divide over whether perceptual consciousness is rich or sparse in specific representational content and whether it requires cognitive access. These two issues are often treated in tandem because of a shared assumption that the representational capacity of cognitive access is fairly limited. Recent research on working memory challenges this shared assumption. This paper argues that abandoning the assumption undermines post-cue-based “overflow” arguments, according to which perceptual consciousness is rich and does not require cognitive access. Abandoning it also dissociates the rich/sparse debate from the access question. The paper then explores attempts to reformulate overflow theses in ways that don’t require the assumption of limited capacity. Finally, it discusses the problem of relating seemingly non-probabilistic perceptual consciousness to the probabilistic representations posited by the models that challenge conceptions of cognitive access as capacity-limited.
Fiona Macpherson (2012) argues that various experimental results provide strong evidence in favor of the cognitive penetration of perceptual color experience. Moreover, she proposes a mechanism for how such cognitive penetration occurs. We argue, first, that the results on which Macpherson relies do not provide strong grounds for her claim of cognitive penetrability; and, second, that, if the results do reflect cognitive penetrability, then time-course considerations raise worries for her proposed mechanism. We base our arguments in part on several of our own experiments, reported herein.
Can one combine Davidsonian semantics with a deflationary conception of truth? Williams argues, contra a common worry, that Davidsonian semantics does not require truth-talk to play an explanatory role. Horisk replies that, in any event, the expressive role of truth-talk that Williams emphasizes disqualifies deflationary accounts—at least extant varieties—from combination with Davidsonian semantics. She argues, in particular, that this is so for Quine’s disquotationalism, Horwich’s minimalism, and Brandom’s prosententialism. I argue that Horisk fails to establish her claim in all three cases. This involves clarifying Quine’s understanding of a purely referential occurrence; explaining how Davidsonians can avail themselves of a syntactic treatment of lexical ambiguity; and correcting a common misreading of Brandom (answering along the way an objection offered by Künne as well).
Linguistic intuitions are a central source of evidence across a variety of linguistic domains. They have also long been a source of controversy. This chapter aims to illuminate the etiology and evidential status of at least some linguistic intuitions by relating them to error signals of the sort posited by accounts of on-line monitoring of speech production and comprehension. The suggestion is framed as a novel reply to Michael Devitt’s claim that linguistic intuitions are theory-laden “central systems” responses, rather than endorsed outputs of a modularized language faculty (the “Voice of Competence”). Along the way, it is argued that linguistic intuitions may not constitute a natural kind with a common etiology; and that, for a range of cases, the process by which intuitions used in linguistics are generated amounts to little more than comprehension.
Michael Devitt ([2006a], [2006b]) argues that, insofar as linguists possess better theories about language than non-linguists, their linguistic intuitions are more reliable. Culbertson and Gross ([2009]) presented empirical evidence contrary to this claim. Devitt ([2010]) replies that, in part because we overemphasize the distinction between acceptability and grammaticality, we misunderstand linguists' claims, fall into inconsistency, and fail to see how our empirical results can be squared with his position. We reply in this note. Inter alia, we argue that Devitt's focus on grammaticality intuitions, rather than acceptability intuitions, distances his discussion from actual linguistic practice. We close by questioning a demand that drives his discussion—viz., that, for linguistic intuitions to supply evidence for linguistic theorizing, a better account of why they are evidence is required.
This chapter examines the “externalist” claim that semantics should include theorizing about representational relations among linguistic expressions and (purported) aspects of the world. After disentangling our main topic from other strands in the larger set of externalist-internalist debates, arguments both for and against this claim are discussed. It is argued, among other things, that the fortunes of this externalist claim are bound up with contentious issues concerning the semantics-pragmatics border.
Is temporal representation constitutively necessary for perception? Tyler Burge (2010) argues that it is, in part because perception requires a form of memory sufficiently sophisticated as to require temporal representation. I critically discuss Burge’s argument, maintaining that it does not succeed. I conclude by reflecting on the consequences for the origins of temporal representation.
Who are the best subjects for judgment tasks intended to test grammatical hypotheses? Michael Devitt ([2006a], [2006b]) argues, on the basis of a hypothesis concerning the psychology of such judgments, that linguists themselves are. We present empirical evidence suggesting that the relevant divide is not between linguists and non-linguists, but between subjects with and without minimally sufficient task-specific knowledge. In particular, we show that subjects with at least some minimal exposure to or knowledge of such tasks tend to perform consistently with one another—greater knowledge of linguistics makes no further difference—while at the same time exhibiting markedly greater in-group consistency than those who have no previous exposure to or knowledge of such tasks and their goals.
Resource rationality may explain suboptimal patterns of reasoning; but what of “anti-Bayesian” effects where the mind updates in a direction opposite the one it should? We present two phenomena — belief polarization and the size-weight illusion — that are not obviously explained by performance- or resource-based constraints, nor by the authors’ brief discussion of reference repulsion. Can resource rationality accommodate them?
Linguists often advert to what are sometimes called linguistic intuitions. These intuitions and the uses to which they are put give rise to a variety of philosophically interesting questions: What are linguistic intuitions – for example, what kind of attitude or mental state is involved? Why do they have evidential force and how might this force be underwritten by their causal etiology? What light might their causal etiology shed on questions of cognitive architecture – for example, as a case study of how consciously inaccessible subpersonal processes give rise to conscious states, or as a candidate example of cognitive penetrability? What methodological issues arise concerning how linguistic intuitions are gathered and interpreted – for example, might some subjects' intuitions be more reliable than others? And what bearing might all this have on philosophers' own appeals to intuitions? This paper surveys and critically discusses leading answers to these questions. In particular, we defend a ‘mentalist’ conception of linguistics and the role of linguistic intuitions therein.
Donald Davidson aims to illuminate the concept of meaning by asking: What knowledge would suffice to put one in a position to understand the speech of another, and what evidence sufficiently distant from the concepts to be illuminated could in principle ground such knowledge? Davidson answers: knowledge of an appropriate truth-theory for the speaker’s language, grounded in what sentences the speaker holds true, or prefers true, in what circumstances. In support of this answer, he both outlines such a truth-theory for a substantial fragment of a natural language and sketches a procedure—radical interpretation—that, drawing on such evidence, could confirm such a theory. Bracketing refinements (e.g., those introduced to…
Stewart Shapiro’s book develops a contextualist approach to vagueness. It’s chock-full of ideas and arguments, laid out in wonderfully limpid prose. Anyone working on vagueness (or the other topics it touches on—see below) will want to read it. According to Shapiro, vague terms have borderline cases: there are objects to which the term neither determinately applies nor determinately does not apply. A term determinately applies in a context iff the term’s meaning and the non-linguistic facts determine that it does. The non-linguistic facts include the “external” context: “comparison class, paradigm cases, contrasting cases, etc.” (33) But external-context sensitivity is not what’s central to Shapiro’s contextualism. Even fixing external context, vague terms’ (anti-)extensions exhibit sensitivity to internal context: the decisions of competent speakers. According to Shapiro’s open texture thesis, for each borderline case, there is some circumstance in which a speaker, consistently with the term’s meaning and the non-linguistic facts, can judge it to fall into the term’s extension and some circumstance in which the speaker can judge it to fall into the term’s anti-extension: she can “go either way.” Moreover, borderline sentences are Euthyphronically judgment-dependent: a competent speaker’s judging a borderline to fall into a term’s (anti-)extension makes it so. For Shapiro, then, a sentence can be true but indeterminate: a case left unsettled by meaning and the non-linguistic facts (and thus indeterminate, or borderline) may be made true by a competent speaker’s judgment. Importantly, among the non-linguistic facts that constrain speakers’ judgments (at least in the cases Shapiro cares about) is a principle of tolerance: for all x and y, if x and y differ marginally in the relevant respect (henceforth, Mxy), then if one competently judges Bx, one cannot competently judge y in any other manner in the same (total) context. This does not require that one judge By: one might not consider the matter at all…
This paper motivates two bases for ascribing propositional semantic knowledge (or something knowledge-like): first, because it’s necessary to rationalize linguistic action; and, second, because it’s part of an empirical theory that would explain various aspects of linguistic behavior. The semantic knowledge ascribed on these two bases seems to differ in content, epistemic status, and cognitive role. This raises the question: how are they related, if at all? The bulk of the paper addresses this question. It distinguishes a variety of answers and their varying philosophical and empirical commitments.
A reply to commentators -- Jake Beck, Nico Orlandi and Aaron Franklin, and Ian Phillips -- on our paper "Does perceptual consciousness overflow cognitive access?".
In this note, I clarify the point of my paper “The Nature of Semantics: On Jackendoff’s Arguments” (NS) in light of Ray Jackendoff’s comments in his “Linguistics in Cognitive Science: The State of the Art.” Along the way, I amplify my remarks on unification.
This paper advances the somewhat unphilosophical thesis that “Trump is gross” to draw attention to the need to take matters of taste seriously in politics. I begin by exploring the slipperiness of distinctions between aesthetics, epistemology, and ethics, subsequently suggesting that we may need to pivot toward the aesthetic to understand and respond to the historical moment we inhabit. More specifically, I suggest that, in order to understand how Donald Trump was elected President of the United States and in order to stem the damage that preceded this and will ensue from it, we need to understand the power of political taste (and distaste, including disgust) as both a force of resistance and as a force of normalization.
This article first describes a dilemma for liberalism: On the one hand, restricting their own options is an important means for groups of people to shape their lives. On the other hand, group members are typically divided over whether or not to accept option-restricting solutions or policies. Should we restrict the options of all members of a group even though some consent and some do not? This dilemma is particularly relevant to public health policy, which typically targets groups of people with no possibility for individuals to opt out. The article then goes on to propose and discuss a series of rules for aggregating individual consent into group consent. Consideration of a number of scenarios shows that such rules cannot be formulated only in terms of fractions of consenters and non-consenters, but must incorporate their motives and how much they stand to win or lose. This raises further questions, including the appropriate impact of altruistic consenters and non-consenters, the impact of costs and benefits, and whether these should be understood as gross or net. All these issues are dealt with in a liberal, anti-paternalistic spirit, in order to explore whether group consent can contribute to the justification of option-restricting public health policy.
Short‐term memory in vision is typically thought to divide into at least two memory stores: a short, fragile, high‐capacity store known as iconic memory, and a longer, durable, capacity‐limited store known as visual working memory (VWM). This paper argues that iconic memory stores icons, i.e., image‐like perceptual representations. The iconicity of iconic memory has significant consequences for understanding consciousness, nonconceptual content, and the perception–cognition border. Steven Gross and Jonathan Flombaum have recently challenged the division between iconic memory and VWM by arguing against the idea of capacity limits in favor of a flexible resource‐based model of short‐term memory. I argue that, while VWM capacity is probably governed by flexible resources rather than a sharp limit, the two memory stores should still be distinguished by their representational formats. Iconic memory stores icons, while VWM stores discursive (i.e., language‐like) representations. I conclude by arguing that this format‐based distinction between memory stores entails that prominent views about consciousness and the perception–cognition border will likely have to be revised.
Cognitive systems research has predominantly been guided by the historical distinction between emotion and cognition, and has focused its efforts on modelling the “cognitive” aspects of behaviour. While this initially meant modelling only the control system of cognitive creatures, with the advent of “embodied” cognitive science this expanded to also modelling the interactions between the control system and the external environment. What did not seem to change with this embodiment revolution, however, was the attitude towards affect and emotion in cognitive science. This paper argues that cognitive systems research is now beginning to integrate these aspects of natural cognitive systems into cognitive science proper, not in virtue of traditional “embodied cognitive science”, which focuses predominantly on the body’s gross morphology, but rather in virtue of research into the interoceptive, organismic basis of natural cognitive systems.
The term ‘psychologism’ is normally used for the doctrine that logical and mathematical truths must be explained in terms of psychological truths (see Kusch 1995 and 2011). As such, the term is typically pejorative: the widespread consensus is that psychologism in this sense is a paradigm of philosophical error, a gross mistake that was identified and conclusively refuted by Frege and Husserl.
Academic inquiry, in devoting itself primarily to the pursuit of knowledge, is profoundly and damagingly irrational, in a wholesale, structural fashion, when judged from the standpoint of helping to promote human welfare. Judged from this standpoint, academic inquiry devoted to the pursuit of knowledge violates three of the four most elementary rules of rational problem-solving conceivable. Above all, it fails to give intellectual priority to the tasks of (1) articulating problems of living, including global problems, and (2) proposing and critically assessing possible solutions – possible social actions. This gross, structural irrationality of academic inquiry stems from blunders of the 18th century French Enlightenment. The philosophes had the brilliant idea of learning from scientific progress how to achieve social progress towards an enlightened world, but in implementing this idea they made three disastrous blunders. They got the nature of the progress-achieving methods of science wrong; they failed to generalize these methods properly; and most disastrously, they applied these methods to acquiring knowledge about society, and not directly to solving social problems. These blunders are still inherent in academia today, with dire consequences for the state of the world. All this has been pointed out prominently many times since 1976, but has been ignored.
We discuss two models of virtue cultivation that are present throughout the Republic: the self-mastery model and the harmony model. Schultz (2013) discusses them at length in her recent book, Plato’s Socrates as Narrator: A Philosophical Muse. We bring this Socratic distinction into conversation with two modes of intentional regulation strategies articulated by James J. Gross. These strategies are expressive suppression and cognitive reappraisal. We argue that the Socratic distinction helps us see the value in cognitive reappraisal and that the contemporary neurological research supports the wide range of attitudes toward the value of emotional experience that mirror those found in the Republic.
There is a widespread view that in order to be rational we must mostly know what we believe. In the probabilistic tradition this is defended by arguments that a person who failed to have this knowledge would be vulnerable to sure loss, or probabilistically incoherent. I argue that even gross failure to know one's own beliefs need not expose one to sure loss, and does not if we follow a generalization of the standard bridge principle between first-order and second-order beliefs. This makes it possible for a subject to use probabilistic decision theory to manage in a rational way cases of potential failure of this self-knowledge, as we find in implicit bias. Through such cases I argue that it is possible for uncertainty about what our beliefs are to be not only rationally permissible but advantageous.
Guided by key insights of the four great philosophers mentioned in the title, here, in review of and expanding on our earlier work (Burchard, 2005, 2011), we present an exposition of the role played by language, & in the broader sense, λόγος, the Logos, in how the CNS, the brain, is running the human being. Evolution by neural Darwinism has been forcing the linguistic nature of mind, enabling it to overcome & exploit the cognitive gap between an animal and its world by recognizing environmental structures. Our work was greatly influenced by Heidegger’s lecture notes on metaphysics (Heidegger, 1935). We found agreement with recent progress in neuroscience, but also mathematical foundations of language theory, equating Logos with the mathematical concept of structure. The mystery of perception across the gap is analyzed as radiation and molecules impinging on sensory neurons that carry linguistic information about gross environmental structures, and only remotely about the physical reality of elementary particles. The most important logical brain function is Ego or Self, guiding the workings of the brain as a logos machine. Ego or Self operates from neurons in frontopolar cortex with global receptive fields. The logos machine can function only by availing itself of global context, its internally stored noumenal cosmos NK, and the categorical-conceptual apparatus CCA, updated continually through the neural default mode network (Raichle, 2005). In the Transcendental Deduction, Immanuel Kant discovered that Ego or Self is responsible for conscious control in perception, relying on concepts & categories for a fitting percept to be incorporated into NK. The entire CNS runs as a “movie-in-the-brain” (Parvizi & Damasio, 2001), at peak speed processing simultaneously in a series of cortical centers a stack of up to twelve frames in gamma rhythm of 25 ms intervals. We equate global context, or NK, with our human world, Heidegger’s Dasein being-in-the-world, and are able to demonstrate that the great philosopher in EM parallels neuroscience concerning the human mind.
Four modes of language acquisition and communication are presented, translating ancient Indian expressions on human consciousness and mind, their form, structure, and function, combined with the Sabdabrahma theory of language acquisition and communication. The modern scientific understanding of such an insight is discussed. A flowchart of language processing in humans will be given. A gross model of human language acquisition, comprehension, and communication process, forming the basis to develop software for relevant mind-machine modeling, will be presented. The implications of such a model for artificial intelligence and cognitive sciences will be discussed. The essential nature and necessity of a physics, communication engineering, biophysical, and biochemical insight, as both complementary and supplementary to using mathematical and computational methods in delineating the theory of language processing in humans, is put forward.
The late twentieth century saw two long-term trends in popular thinking about ethics. One was an increase in relativist opinions, with the “generation of the Sixties” spearheading a general libertarianism, an insistence on toleration of diverse moral views (for “Who is to say what is right? – it’s only your opinion.”) The other trend was an increasing insistence on rights – the gross violations of rights in the killing fields of the mid-century prompted immense efforts in defence of the “inalienable” rights of the victims of dictators, of oppressed peoples, of refugees. The obvious incompatibility of those ethical stances, one anti-objectivist, the other objectivist in the extreme, proved no obstacle to their both being held passionately, often by the same people.
Gini coefficients, which measure gross inequalities rather than their unfair components, are often used as proxy measures of absolute or relative distributive injustice in Western societies. This presupposes that the fair inequalities in these societies are small and stable enough to be ignored. This article presents a model for a series of ideal, perfectly just societies, where comfortable lives are equally available to everyone, and calculates the Gini coefficients for each. According to this model, inequalities produced by age and other demographic factors, together with reasonable choices under equal opportunity, can raise the Gini coefficients for perfectly just societies to levels at least as high as those of any current Western country, and can as easily account for differences in Gini coefficients between such societies or within one such society over time. If Gini coefficients at these levels are possible for ideal societies without distributive injustice, then they should not be used as proxy measures of distributive injustice in real societies.
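As background on the quantity at issue (not drawn from the article itself): the Gini coefficient is standardly defined as the mean absolute difference between all pairs of incomes, normalized by twice the mean income, so it runs from 0 (perfect equality) to 1 (maximal concentration). A minimal sketch of this standard definition in Python:

```python
def gini(incomes):
    """Gini coefficient: mean absolute difference between all pairs
    of incomes, normalized by twice the mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    pair_diffs = sum(abs(x - y) for x in incomes for y in incomes)
    return pair_diffs / (2 * n * n * mean)

# A perfectly equal society scores 0; concentrating all income in
# one member of a large group pushes the score toward 1.
print(gini([10, 10, 10, 10]))  # 0.0
print(gini([0, 0, 0, 40]))     # 0.75
```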
Human cognitive process as a combination of the triad Knower - Knowing - Known and the language learning process as a combination of the triad Subject - Verb - Object will be understood in the light of ancient Indian wisdom as revealed in the Upanishads and will be presented. A physics awareness of the Advaita (No Two) concept will be used to model human mental processes such as Knowing / Learning, Perception / Thinking / Logic, and Understanding / Experiencing / Awareness of the meaning of a sentence. This awareness will further be used to prepare a flow-chart / block-diagram involving gross energy transformations in human cognitive process. This awareness will be related to the awareness of Sri Aurobindo and Sri Ramana Maharshi on human consciousness. Implications of this model for the natural language comprehension field of artificial intelligence will be presented, combining with the ideas of connectionism.
This study reviews the book Charles Camic, Neil Gross, and Michèle Lamont (eds.), Social Knowledge in the Making (Chicago: University of Chicago Press, 2011), and situates it within current reflections on changes in the research practice of the social sciences, in academic culture, and in styles of thinking and writing. It attempts to analyze the “turn to practice” announced in the book and shows the extent to which research practices in the social sciences are shaped by the absence of “standard” forms, modes, or styles of inquiry. It also presents in detail the founding ideas of the so-called “new” sociology of ideas, which stand behind the whole project of analyzing the social practices manifested in the processes of the “production, evaluation, and application” of social knowledge. What proves crucial is the need to rethink the very conception of a “professionalized” social science, a conception earlier rejected as morally and practically untenable; for with the democratization of the research process and the changing relationship between the social sciences and their publics, it is becoming ever clearer that with the turn away from the concept of a professional social science, the public irrelevance of social knowledge has only deepened.
The book under review consists of a set of articles by Wolgast that contribute in various ways to her contention that human beings arrive at a theory of justice quasi-empirically, insofar as a particular group encounters and seeks to surmount experiences of gross injustice. Via such experiences they develop a community-oriented sense of justice; but they do not thereby create a reliable basis for communitarian ethics.
I argue that the difference between the 17th century new moral science and Scholastic Natural Law Theory derived primarily from the skeptical challenge the former had to face. Pufendorf's project of a 'scientia practica universalis' was the paramount expression of an anti-skeptical moral science, a «science» both explanatory and normative, but also anti-dogmatic in so far as it tried to base its laws on those basic phenomena of human life that supposedly were outside the scope of skeptical doubt. The Scholastic legacy to the new moral science included a dichotomy between an «intellectualistic» and a «voluntaristic» view of natural law (or between lex immanens and lex imposita). Voluntarism lies at the root both of theological views such as those of Calvinism and of political views such as those of Hobbes and Locke. A need to counterbalance undesirable implications of extreme voluntarism may account for much of the 17th- and 18th-century developments in ethics and politics. Scottish natural jurisprudence, an expression of such a quest for a third way between scepticism and extreme voluntarism, is less secular and more empirical than received wisdom admits. One of its side-effects, namely a systematic, self-contained, and empirical economic theory, results from the search for a normative theory of social life on an empirical basis. The main tool for building such a theory, namely a view of societal laws as embedded in trans-individual mechanisms, derives from the voluntarist view of natural law as «imposed» law. Subsequent discussions of social issues based on the opposition of economic and ethical reasons originated partly from a gross misreading of the Scottish natural jurisprudential framework for economic theory.
Frontiers of Justice: Disability, Nationality, Species Membership, by Martha Nussbaum, Harvard University Press, 2006. How should we measure human development? The most popular method used to be to focus on wealth and income, as when international development agencies rank countries according to their per capita gross domestic product. Critics, however, have long noted shortcomings with this approach. Consider for example a wealthy person in a wheelchair: her problem is not a financial one, but a lack of access to public spaces. Even if she were to hire porters to carry her in and out of stores and libraries, that would not really address her situation. There is a basic sense of dignity and self-respect that comes with being able to move around on one’s own. Even for a disabled millionaire, that will only be possible when public buildings are wheelchair accessible. To fully grasp what the handicapped need, we have to look beyond purely economic measures of well-being, and take into account the actual capabilities people can exercise in their daily lives. The example of the well-off person in a wheelchair illustrates what Martha Nussbaum calls the capabilities approach to human development. It was first pioneered in economics by Amartya Sen (who came up with the wheelchair example), and Nussbaum has for years been associated with a more philosophical variation, which uses the idea of capabilities to outline basic political principles. In Frontiers of Justice: Disability, Nationality, Species Membership, Nussbaum takes this project even further, and applies the capabilities approach to issues of justice involving not only the disabled and the global poor, but animals as well. Yet for a philosophy called the capabilities approach, it is surprising how little theoretical work capabilities do in Nussbaum’s overall account.
In this chapter, I defend a novel account of contempt’s evaluative presentation by synthesizing relevant psychological work (Rozin et al. 1999; Fischer and Roseman 2007; Fischer 2011; Hutcherson and Gross 2011) with philosophical insights (Mason 2003; Bell 2005; Abramson 2009; Bell 2013). I then show how a concern about contempt’s status as an emotion involved in holding people accountable can be helpfully addressed. Finally, I gesture at an account of why, when we feel contemptuous toward people, our accountability responses involve withdrawal and exclusion rather than approach and confrontation.
It is beyond doubt that the legal system established by the Nazi government in Germany between 1933 and 1945 represented a gross departure from the rule of law: the Nazis eradicated legal security and certainty; allowed for judicial and state arbitrariness; blocked epistemic access to what the law requires; issued unpredictable legal requirements; and so on. This introduction outlines the distorted nature of the Nazi legal system and looks at the main factors that contributed to this grave divergence.
Antitheodicy objects to all attempts to solve the problem of evil. Its objections are almost all on moral grounds—it argues that the whole project of theodicy is morally offensive. Trying to excuse God’s permission of evil is said to deny the reality of evil, to exhibit gross insensitivity to suffering, and to insult the victims of grave evils. Since antitheodicists urge the avoidance of theodicies for moral reasons, it is desirable to evaluate the moral reasons against theodicies in abstraction from the intellectual reasons for and against them. It is argued that the best known theodicies such as those based on soul-making and free will are guilty of moral faults as alleged. But Leibniz’s best of all possible worlds theory, often thought to be the most morally offensive ‘Panglossian’ theodicy, is morally blameless because it excuses God by the absolute impossibility of his choosing any world better than the present one. Theodicy should not be conceived of as a search for greater goods which may excuse God’s permitting evils. From the divine point of view, creation is an upfront choice between scenarios—in modern parlance, a Trolley problem rather than a Transplant problem. In cases of forced choice among scenarios, it is morally improper to criticize one who chooses the best.
For over two decades now, a coercive and contradictory neo-liberal development economism agenda has been superimposed on Sub-Saharan Africa. According to this paradigm, markets and not states are the fundamental determinants of distributive justice and human flourishing, through the promotion of economic growth that is believed to trickle down to the poor in due time. Despite the global intellectual criticism of this neo-liberal development economics orthodoxy of measuring development and wellbeing in terms of market-induced economic growth, autocratic states in Sub-Saharan Africa that have accumulated uni-dimensional growth continue to be applauded as role models on poverty reduction, wellbeing and social justice by donors and global development institutions such as the World Bank and the International Monetary Fund (IMF). This is basically because they have wholly embraced the implementation of the anti-pro-poor neo-liberal structural adjustment tool kit. This study uses a critical hermeneutics methodology to expose the distortions embedded in neo-liberal gross domestic product (GDP) growth cartographies and how these disguise the social injustices against the poor in Sub-Saharan Africa, with particular reference to Uganda. The study contends that in measuring development and wellbeing, human rights and social justice must take precedence over economic efficiency and GDP growth.
Dialogue between feminist and mainstream philosophy of science has been limited in recent years, although the feminist and mainstream traditions have each engaged in rich debates about key concepts and their efficacy. Noteworthy criticisms of concepts like objectivity, consensus, justification, and discovery can be found in the work of philosophers of science including Philip Kitcher, Helen Longino, Peter Galison, Alison Wylie, Lorraine Daston, and Sandra Harding. As a graduate student in philosophy of science who worked in both literatures, I was often left with the feeling that I had joined a broken family with two warring factions. This is apparent in the number of anthologies that have emerged on both sides in the aftermath of the “Science Wars” (Gross, Levitt, and Lewis, eds. 1996; Koertge, ed. 1998; Sokal and Bricmont 1998; etc.). Depending on one’s perspective on the Science Wars, the breadth of illustrative cases and examples found in Science and Other Cultures can either give more ammunition for the battle, or grounding for a much-needed treaty of accord. The most important feature of this book is that it does not merely claim that science is only political, and it does not merely dismiss science as a social phenomenon to be deconstructed using the standard postmodern conceptual tools. Instead, the collection illustrates ways in which postcolonial analysis and multicultural examples can enrich our understanding of “good” science and ethics. Here, the concept of “strong objectivity” from Harding’s earlier books is fleshed out through a variety of cases. The anthology is the culmination of a series of research activities funded by a National Science Foundation grant to the American Philosophical Association. The grant, under the auspices of the NSF Ethics and Values Program, sponsored fourteen summer research projects and thirty-six presentations at four regional APA meetings.
This article discusses the ‘nature’ of our contemporary fascination with wildness, in light of the popular documentary “Grizzly Man.” Taking as its central point of departure the film’s central protagonist Timothy Treadwell’s fascination with wild grizzlies and director Werner Herzog’s condemnation of it as gross anthropomorphism, this paper will explore the context and basis of our contemporary fascination with wildness in terms of the current debate raging within environmental philosophy between the social constructivist or postmodern position, as exemplified by Martin Drenthen, and the feral humanist position, as articulated by Paul Shepard. The former argues that this fascination with wildness is reflective of certain historical and cultural trends within contemporary western society, while the latter argues that it is reflective of our primordial human heritage.
The question of what ontological insights can be gained from the knowledge of physics (keyword: ontic structural realism) obviously cannot be separated from the epistemological view of physics as a science. This is also visible in the debate about 'scientific realism'. That debate makes it evident, in the form of the importance of perception as a criterion for assertions of existence concerning the 'theoretical entities' of physics, that epistemology itself is 'ontologically laden'. It rests on the assumption that things (or entities) in themselves exist as thus-and-so determined (independently of cognition, autonomously). This ontological assumption is not only the basis of our naïve understanding of cognition, but also its indispensable premise, insofar as this understanding is a fundamentally passive, 'receptive' one. Accordingly, just as 'perception' is the foundation of cognition, ('objective') description is its aim, that which cognition is about. In this sense, our idea of cognition and our idea of things are inseparably linked. Without the ontological premise mentioned we simply would not know what cognition is; yet it is basically just a kind of image that we have in our minds (an assumption that helps us understand 'cognition'). Epistemology not only shares this basic assumption (which it also shares with metaphysics), but it revolves (unlike metaphysics) entirely around it, by making the idea and demand of 'certainty' a condition of 'real' knowledge. Since 'certainty' is a subjective criterion, this entails the 'remodelling' of the real, holistic cognitive situation (to which metaphysics adheres) into a linear subject-object relation (which results in the strict 'transcendence' of the objects). It also establishes, owing to epistemology's 'expertise' in matters of cognition, the 'primacy of epistemology' over all other sciences. On closer inspection, however, the expertise of epistemology seems not all that dependable, because it basically consists only of paradigms which, from the point of view of the holism of the real cognitive situation itself, are nothing more than relatively simplistic interpretations of this situation. However, we do not yet know what another conception of cognition might look like (which is not surprising, given the high rank of the phenomenon of cognition in the hierarchy of phenomena ordered by complexity). 'Certainty' as a criterion of cognition is thus excluded from the outset, and the linear relational model of cognition appears as what it is: a gross distortion of the real, holistic cognitive situation. The significance of this argumentation with regard to physics is that the linear epistemological model of cognition is itself a major obstacle to an adequate epistemological understanding of physics, because it is fixed 'a priori' to an object-related concept of cognition, and to 'description' as the only mode of ('real') cognition. But physics (without questioning our naïve notion of cognition on the level of epistemology) simply works past it and its basic assumptions. Its cognitive concept (alias heuristic) is fundamentally different from that of metaphysics. The acceptance of the real, holistic cognitive situation is, in my opinion, the condition for an adequate understanding of physics' heuristic access to objects, its transcendental, generalizing cognitive concept, and its ontological relevance and dimension of its own.