This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.
The logical empiricists said some good things about epistemology and scientific method. However, they associated those epistemological ideas with some rather less good ideas about philosophy of language. There is something epistemologically suspect about statements that cannot be tested. But to say that those statements are meaningless is to go too far. And there is something impossible about trying to figure out which of two empirically equivalent theories is true. But to say that those theories are synonymous is also to go too far. My goal in this paper is not to resuscitate all these positivist ideas, but to revisit just one of them. Instrumentalism is the idea that theories are instruments for making predictions. Of course, no one would disagree that this is one of the things we use theories to do. In just the same way, no one could disagree with the emotivist claim that one of the things we do with ethical terms like "good" and "right" is to express our feelings of approval and disapproval. Instrumentalism and emotivism become contentious, and therefore interesting, when these claims are supplemented.
The chapter discusses the principle of conservatism and traces how the general principle is related to the specific one. This tracing suggests that the principle of conservatism needs to be refined. Connecting the principle in cognitive science to more general questions about scientific inference also allows us to revisit the question of realism versus instrumentalism. The framework deployed in model selection theory is very general; it is not specific to the subject matter of science. The chapter outlines some non-Bayesian ideas that have been developed in model selection theory. The principle of conservatism, like C. Lloyd Morgan's canon, describes a preference concerning kinds of parameters. It says that a model that postulates only lower-level intentionality is preferable to one that postulates higher-level intentionality if both fit the data equally well. The model selection approach to parsimony helps explain why unification is a theoretical virtue.
Michael Scriven’s (1959) example of identical twins (who are said to be equal in fitness but unequal in their reproductive success) has been used by many philosophers of biology to discuss how fitness should be defined, how selection should be distinguished from drift, and how the environment in which a selection process occurs should be conceptualized. Here it is argued that evolutionary theory has no commitment, one way or the other, as to whether the twins are equally fit. This is because the theory of natural selection is fundamentally about the fitnesses of traits, not the fitnesses of token individuals. A plausible philosophical thesis about supervenience entails that the twins are equally fit if they live in identical environments, but evolutionary biology is not committed to the thesis that the twins live in identical environments. Evolutionary theory is right to focus on traits, rather than on token individuals, because the fitnesses of token organisms (as opposed to their actual survivorship and degree of reproductive success) are almost always unknowable. This point has ramifications for the question of how Darwin’s theory of evolution and R. A. Fisher’s are conceptually different.
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (“SOT”) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
Maximum Parsimony (MP) and Maximum Likelihood (ML) are two methods for evaluating which phylogenetic tree is best supported by data on the characteristics of leaf objects (which may be species, populations, or individual organisms). MP has been criticized for assuming that evolution proceeds parsimoniously -- that if a lineage begins in state i and ends in state j, the way it got from i to j is by the smallest number of changes. MP has been criticized for needing to assume some model or other of the evolutionary process even though biologists often do not know what that process really is. This paper critically evaluates both criticisms.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens-off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case. The case concerns smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
Carl Hempel (1965) argued that probabilistic hypotheses are limited in what they can explain. He contended that a hypothesis cannot explain why E is true if the hypothesis says that E has a probability less than 0.5. Wesley Salmon (1971, 1984, 1990, 1998) and Richard Jeffrey (1969) argued to the contrary, contending that P can explain why E is true even when P says that E’s probability is very low. This debate concerned noncontrastive explananda. Here, a view of contrastive causal explanation is described and defended. It provides a new limit on what probabilistic hypotheses can explain; the limitation is that P cannot explain why E is true rather than A if P assigns E a probability that is less than or equal to the probability that P assigns to A. The view entails that a true deterministic theory and a true probabilistic theory that apply to the same explanandum partition are such that the deterministic theory explains all the true contrastive propositions constructible from that partition, whereas the probabilistic theory often fails to do so.
Do traits evolve because they are good for the group, or do they evolve because they are good for the individual organisms that have them? The question is whether groups, rather than individual organisms, are ever “units of selection.” My exposition begins with the 1960’s, when the idea that traits evolve because they are good for the group was criticized, not just for being factually mistaken, but for embodying a kind of confused thinking that is fundamentally at odds with the logic that Darwin’s theory requires. A counter-movement has arisen since the 1960’s, called multi-level selection theory, according to which selection acts at multiple levels, including the level of the group. After discussing the 1960’s attack on group selection and the concept’s subsequent revival, I examine Darwin’s views on the subject. I discuss what Darwin says about four examples: human morality, the barbed stinger of the honeybee, neuter workers in species of social insect, and the sterility of many interspecies hybrids. I argue that Darwin defended hypotheses of group selection in the first three cases, but rejected group selection in the fourth. I also discuss Darwin’s general views about the role of group selection in evolution.
Philosophers have tended to discuss essentialism as if it were a global doctrine, a philosophy that, for some uniform reason, ought to be adopted by all sciences or by none. Popper (1972) has taken a negative global stance, because he sees essentialism as a fundamental obstacle to scientific rationality. Quine (1953b, 1960), too, for a combination of semantic and epistemological motives, wants to banish essentialism from the whole of scientific discourse. More recently, however, Putnam (1975) and Kripke (1972) have advocated essentialist doctrines and have claimed that it is the task of each science to investigate the essential properties of its constitutive natural kinds.
We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O&EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
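What the screening-off equality asserts can be checked concretely on a toy joint distribution in which EXPL depends only on O and not on H. The numbers below are purely illustrative and are not drawn from any of the papers discussed; they merely exhibit a distribution satisfying Pr(H | O&E) = Pr(H | O).

```python
from itertools import product

# Illustrative toy model: E is conditionally independent of H given O,
# so O screens E off from H. All numbers are invented for illustration.
P_H = 0.3                               # Pr(H)
P_O_GIVEN_H = {True: 0.8, False: 0.4}   # Pr(O | H), Pr(O | not-H)
P_E_GIVEN_O = {True: 0.9, False: 0.2}   # Pr(E | O), Pr(E | not-O) -- same under H and not-H

def joint(h, o, e):
    """Pr(H=h, O=o, E=e) under the toy model."""
    ph = P_H if h else 1 - P_H
    po = P_O_GIVEN_H[h] if o else 1 - P_O_GIVEN_H[h]
    pe = P_E_GIVEN_O[o] if e else 1 - P_E_GIVEN_O[o]
    return ph * po * pe

def pr_H_given(**fixed):
    """Pr(H | conjunction of the fixed variables), by summing the joint."""
    num = den = 0.0
    for h, o, e in product([True, False], repeat=3):
        point = {'h': h, 'o': o, 'e': e}
        if all(point[k] == v for k, v in fixed.items()):
            den += joint(h, o, e)
            if h:
                num += joint(h, o, e)
    return num / den

pr_H_O = pr_H_given(o=True)
pr_H_OE = pr_H_given(o=True, e=True)
print(round(pr_H_O, 4), round(pr_H_OE, 4))  # -> 0.4615 0.4615
```

Conditioning on E changes nothing once O is given, which is exactly the screening-off pattern; the substantive philosophical claim, of course, is that real explanatoriness propositions behave like this, not merely that some distribution does.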
Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
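For readers unfamiliar with AIC, a minimal sketch of the quantity it computes, AIC = 2k − 2 ln(L̂), where k is the number of adjustable parameters and L̂ the maximized likelihood. The coin-flip setup and all numbers are invented for illustration; they are not the models discussed in the paper.

```python
import math
import random

# Illustrative comparison of two models by AIC = 2k - 2*ln(L).
# Data: simulated coin flips with a single true bias of 0.6.
random.seed(0)
flips = [1 if random.random() < 0.6 else 0 for _ in range(200)]

def log_lik(data, p):
    """Bernoulli log-likelihood of the data given bias p (clamped away from 0/1)."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return sum(math.log(p) if x else math.log(1 - p) for x in data)

# Model 1: one shared bias for all flips (k = 1).
p_hat = sum(flips) / len(flips)
aic1 = 2 * 1 - 2 * log_lik(flips, p_hat)

# Model 2: a separate bias for each half of the data (k = 2).
# Its maximized likelihood can only be at least as high as Model 1's,
# but AIC charges it an extra penalty for the extra parameter.
half = len(flips) // 2
a, b = flips[:half], flips[half:]
ll2 = log_lik(a, sum(a) / len(a)) + log_lik(b, sum(b) / len(b))
aic2 = 2 * 2 - 2 * ll2

print(round(aic1, 1), round(aic2, 1))  # lower AIC = better estimated predictive accuracy
```

The more complex model always fits the sample at least as well; AIC's parameter penalty is what guards against rewarding that over-fitting, which is the feature the paper puts to epistemological use.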
We conceptualize observation selection effects (OSEs) by considering how a shift from one process of observation to another affects discrimination-conduciveness, by which we mean the degree to which possible observations discriminate between hypotheses, given the observation process at work. OSEs in this sense come in degrees and are causal, where the cause is the shift in process, and the effect is a change in degree of discrimination-conduciveness. We contrast our understanding of OSEs with others that have appeared in the literature. After describing conditions of adequacy that an acceptable measure of degree of discrimination-conduciveness must satisfy, we use those conditions of adequacy to evaluate several possible measures. We also discuss how the effect of shifting from one observation process to another might be measured. We apply our framework to several examples, including the ravens paradox and the phenomenon of publication bias.
In Ockham's Razors: A User's Guide, Elliott Sober argues that parsimony considerations are epistemically relevant on the grounds that certain methods of model selection, such as the Akaike Information Criterion, exhibit good asymptotic behaviour and take the number of adjustable parameters in a model into account. I raise some worries about this form of argument.
According to Michael Friedman’s theory of explanation, a law X explains laws Y1, Y2, …, Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Philip Kitcher criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.
Advice about how to move forward on the mindreading debate, particularly when it comes to overcoming the logical problem, is much needed in comparative psychology. In chapter 4 of his book Ockham’s Razors, Elliott Sober takes on the task by suggesting how we might uncover the mechanism that mediates between the environmental stimuli that are visible to all and chimpanzee social behavior. I argue that Sober's proposed method for deciding between the behavior-reading and mindreading hypotheses fails given the nature of each of those hypotheses. I argue that the behavior-reading hypothesis that Povinelli and colleagues propose is so rich and robust that it is going to make predictions that are behaviorally indiscernible from the mindreading hypothesis. Further, I argue that the logical problem artificially separates one’s knowledge of behavior and one’s knowledge of mind. If we reject this form of dualism, the logical problem doesn’t arise.
Sober's first thesis is that we cannot act freely unless either the Argument from Causality or the Argument from Inevitability has some flaw. The Argument from Causality runs as follows: our mental states cause bodily movements; but our mental states are caused by factors in the physical world. Our personality can be traced back to our experience and our genetics. And both experience and genetics were caused by items in the physical world. Thus, environment and genes are the causes of our beliefs and desires. And these, in turn, cause our behavior. Since, ultimately, we choose neither our genes nor the environment in which we acquire our experiences, we do not choose our behavior either: it is caused by factors beyond our control; this makes us unfree. And the Argument from Inevitability is presented by Sober thus: if an action was performed freely, then it must have been possible for the agent to act otherwise. But, given that the causes of our actions are our beliefs and desires, we could not have acted differently from how they determine us to act.
Jaegwon Kim’s supervenience/exclusion argument attempts to show that non-reductive physicalism is incompatible with mental causation. This influential argument can be seen as relying on the following principle, which I call “the piggyback principle”: If, with respect to an effect, E, an instance of a supervenient property, A, has no causal powers over and above, or in addition to, those had by its supervenience base, B, then the instance of A does not cause E (unless A is identical with B). In their “Epiphenomenalism: The Dos and the Don’ts,” Larry Shapiro and Elliott Sober employ a novel empirical approach to challenge the piggyback principle. Their empirical approach pulls from the experiments of August Weismann regarding the inheritance of acquired characteristics. Through an examination of Weismann’s experiments, Shapiro and Sober extract lessons in reasoning about the epiphenomenalism of a property. And according to these empirically drawn lessons, the piggyback principle is a don’t. My primary aim in this paper is to defend the piggyback principle against Shapiro and Sober’s empirical approach.
Early in the eleventh of his Fifteen Sermons, Joseph Butler advances his best-known argument against psychological hedonism. Elliott Sober calls that argument Butler’s stone, and famously objects to it. I consider whether Butler’s stone has philosophical value. In doing so I examine, and reject, two possible ways of overcoming Sober’s objection, each of which has proponents. In examining the first way I discuss Lord Kames’s version of the stone argument, which has hitherto escaped scholarly attention. Finally, I show that Butler’s stone does something important, which I have not found previously discussed. Butler’s stone blocks an inference, persuasive to many people, which purports to show that we intrinsically desire only pleasure.
Where E is the proposition that [If H and O were true, H would explain O], William Roche and Elliott Sober have argued that P(H|O&E) = P(H|O). In this paper I argue that not only is this equality not generally true, it is false in the very kinds of cases that Roche and Sober focus on, involving frequency data. In fact, in such cases O raises the probability of H only given that there is an explanatory connection between them.
An influential argument due to Elliott Sober, subsequently strengthened by Denis Walsh and Joel Pust, moves from plausible premises to the bold conclusion that natural selection cannot explain the traits of individual organisms. If the argument were sound, the explanatory scope of selection would depend, surprisingly, on metaphysical considerations concerning origin essentialism. I show that the Sober-Walsh-Pust argument rests on a flawed counterfactual criterion for explanatory relevance. I further show that a more defensible criterion for explanatory relevance recently proposed by Michael Strevens lends support to the view that natural selection can be relevant to the explanation of individual traits.
The optimality approach to modeling natural selection has been criticized by many biologists and philosophers of biology. For instance, Lewontin (1979) argues that the optimality approach is a shortcut that will be replaced by models incorporating genetic information, if and when such models become available. In contrast, I think that optimality models have a permanent role in evolutionary study. I base my argument for this claim on what I think it takes to best explain an event. In certain contexts, optimality and game-theoretic models best explain some central types of evolutionary phenomena.
‘Modus Darwin’ is the name given by Elliott Sober to a form of argument that he attributes to Darwin in the Origin of Species, and to subsequent evolutionary biologists who have reasoned in the same way. In short, the argument form goes: similarity, ergo common ancestry. In this article, I review and critique Sober’s analysis of Darwin’s reasoning. I argue that modus Darwin has serious limitations that make the argument form unsuitable for supporting Darwin’s conclusions, and that Darwin did not reason in this way.
Roughly, psychological egoism is the thesis that all of a person's intentional actions are ultimately self-interested in some sense; psychological altruism is the thesis that some of a person's intentional actions are not ultimately self-interested, since some are ultimately other-regarding in some sense. C. Daniel Batson and other social psychologists have argued that experiments provide support for a theory called the "empathy-altruism hypothesis" that entails the falsity of psychological egoism. However, several critics claim that there are egoistic explanations of the data that are still not ruled out. One of the most potent criticisms of Batson comes from Elliott Sober and David Sloan Wilson. I argue for two main theses in this paper: (1) we can improve on Sober and Wilson’s conception of psychological egoism and altruism, and (2) this improvement shows that one of the strongest of Sober and Wilson's purportedly egoistic explanations is not tenable. A defense of these two theses goes some way toward defending Batson’s claim that the evidence from social psychology provides sufficient reason to reject psychological egoism.
This paper defends the position that the supposed gap between biological altruism and psychological altruism is not nearly as wide as some scholars (e.g., Elliott Sober) insist. Crucial to this defense is the use of James Mark Baldwin's concepts of “organic selection” and “social heredity” to assist in revealing that the gap between biological and psychological altruism is more of a small lacuna. Specifically, this paper argues that ontogenetic behavioral adjustments, which are crucial to individual survival and reproduction, are also crucial to species survival. In particular, it is argued that human psychological altruism is produced and maintained by various sorts of mimicry and self-reflection in the aid of both individual and species survival. The upshot of this analysis is that it is possible to offer an account of psychological altruism that is closely tethered to biological altruism without reducing entirely the former to the latter.
The conflation of two fundamentally distinct issues has generated serious confusion in the philosophical and biological literature concerning the units of selection. The question of how a unit of selection is defined, theoretically, is rarely distinguished from the question of how to determine the empirical accuracy of claims--either specific or general--concerning which unit(s) is undergoing selection processes. In this paper, I begin by refining a definition of the unit of selection, first presented in the philosophical literature by William Wimsatt, which is grounded in the structure of natural selection models. I then explore the implications of this structural definition for empirical evaluation of claims about units of selection. I consider criticisms of this view presented by Elliott Sober--criticisms taken by some (for example, Mayo and Gilinsky 1987) to do definitive damage to the structuralist account. I shall show that Sober has misinterpreted the structuralist views; he knocks down a straw man in order to motivate his own causal account. Furthermore, I shall argue, Sober's causal account is dependent on the structuralist account that he rejects. I conclude by indicating how the refined structural definition can clarify which sorts of empirical evidence could be brought to bear on a controversial case involving units of selection.
Causalists about explanation claim that to explain an event is to provide information about the causal history of that event. Some causalists also endorse a proportionality claim, namely that one explanation is better than another insofar as it provides a greater amount of causal information. In this chapter I consider various challenges to these causalist claims. There is a common and influential formulation of the causalist requirement – the ‘Causal Process Requirement’ – that does appear vulnerable to these anti-causalist challenges, but I argue that they do not give us reason to reject causalism entirely. Instead, these challenges lead us to articulate the causalist requirement in an alternative way. This alternative articulation incorporates some of the important anti-causalist insights without abandoning the explanatory necessity of causal information. For example, proponents of the ‘equilibrium challenge’ argue that the best available explanations of the behaviour of certain dynamical systems do not appear to provide any causal information. I respond that, contrary to appearances, these equilibrium explanations are fundamentally causal, and I provide a formulation of the causalist thesis that is immune to the equilibrium challenge. I then show how this formulation is also immune to the ‘epistemic challenge’ – thus vindicating (a properly formulated version of) the causalist thesis.
Evidential holism begins with something like the claim that “it is only jointly as a theory that scientific statements imply their observable consequences.” This is the holistic claim that Elliott Sober tells us is an “unexceptional observation”. But variations on this “unexceptional” claim feature as a premise in a series of controversial arguments for radical conclusions, such as that there is no analytic/synthetic distinction, that the meaning of a sentence cannot be understood without understanding the whole language of which it is a part, and that all knowledge is empirical knowledge. This paper is a survey of what evidential holism is, how plausible it is, and what consequences it has. Section 1 will distinguish a range of different holistic claims, Sections 2 and 3 explore how well motivated they are and how they relate to one another, and Section 4 returns to the arguments listed above and uses the distinctions from the previous sections to identify holism's role in each case.
Few scientists are conscious of the distinction between the logic of what they write and the rhetoric of how they write it. This is because we are taught to write scientific papers and books from a third-person perspective, using as impersonal (and, almost inevitably, boring [1]) a style as possible. The first chapter in Elliott Sober’s new book examines the difference between Darwin’s logic and his rhetoric in The Origin, and manages to teach some interesting and insightful historical and philosophical lessons while doing so.
A review of some major topics of debate in normative decision theory from circa 2007 to 2019. Topics discussed include the ongoing debate between causal and evidential decision theory, decision instability, risk-weighted expected utility theory, decision-making with incomplete preferences, and decision-making with imprecise credences.
According to comparativism, degrees of belief are reducible to a system of purely ordinal comparisons of relative confidence. (For example, being more confident that P than that Q, or being equally confident that P and that Q.) In this paper, I raise several general challenges for comparativism, relating to (i) its capacity to illuminate apparently meaningful claims regarding intervals and ratios of strengths of belief, (ii) its capacity to draw enough intuitively meaningful and theoretically relevant distinctions between doxastic states, and (iii) its capacity to handle common instances of irrationality.
Possible worlds models of belief have difficulties accounting for unawareness, the inability to entertain (and hence believe) certain propositions. Accommodating unawareness is important for adequately modelling epistemic states, and representing the informational content to which agents have in principle access given their explicit beliefs. In this paper, I develop a model of explicit belief, awareness, and informational content, along with a sound and complete axiomatisation. I furthermore defend the model against the seminal impossibility result of Dekel, Lipman and Rustichini, according to which three intuitive conditions preclude non-trivial unawareness on any possible worlds model of belief.
This response addresses the excellent responses to my book provided by Heather Douglas, Janet Kourany, and Matt Brown. First, I provide some comments and clarifications concerning a few of the highlights from their essays. Second, in response to the worries of my critics, I provide more detail than I was able to provide in my book regarding my three conditions for incorporating values in science. Third, I identify some of the most promising avenues for further research that flow out of this interchange.
One response to the problem of logical omniscience in standard possible worlds models of belief is to extend the space of worlds so as to include impossible worlds. It is natural to think that essentially the same strategy can be applied to probabilistic models of partial belief, for which parallel problems also arise. In this paper, I note a difficulty with the inclusion of impossible worlds into probabilistic models. Under weak assumptions about the space of worlds, most of the propositions which can be constructed from possible and impossible worlds are in an important sense inexpressible, leaving the probabilistic model committed to saying that agents in general have at least as many attitudes towards inexpressible propositions as they do towards expressible propositions. If it is reasonable to think that our attitudes are generally expressible, then a model with such commitments looks problematic.
In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
J. L. Schellenberg’s Philosophy of Religion argues for a specific brand of sceptical religion that takes ‘Ultimism’ – the proposition that there is a metaphysically, axiologically, and soteriologically ultimate reality – to be the object to which the sceptical religionist should assent. In this article I shall argue that Ietsism – the proposition that there is merely something transcendental worth committing ourselves to religiously – is a preferable object of assent. This is for two primary reasons. First, Ietsism is far more modest than Ultimism; Ietsism, in fact, is open to the truth of Ultimism, while the converse does not hold. Second, Ietsism can fulfil the same criteria that compel Schellenberg to argue for Ultimism.
Teleological Theories of mental representation are probably the most promising naturalistic accounts of intentionality. However, it is widely known that these theories suffer from a major objection: the Indeterminacy Problem. The most common reply to this problem employs the Target of Selection Argument, which is based on Sober’s distinction between selection for and selection of. Unfortunately, some years ago the Target of Selection Argument came under serious attack in a famous paper by Goode and Griffiths. Since then, the question of the validity of the Target of Selection Argument in the context of the Indeterminacy Problem has remained largely untouched. In this essay, I argue that both the Target of Selection Argument and Goode and Griffiths’ criticisms of it misuse Sober’s analysis in important respects.
How can a business institution function as an ethical institution within a wider system if the context of the wider system is inherently unethical? If the primary goal of an institution, no matter how ethical it sets out to be, is to function successfully within a market system, how can it reconcile making a profit with keeping its ethical goals intact? While it has been argued that some ethical businesses do exist, e.g., Johnson and Johnson, the argument I would like to put forth is that no matter how ethical a business institution is, or how ethical its goals are, its capacity to act in an ethical manner is restricted by the wider system in which it must operate, the market system. Unless there is a fundamental change in the notion of the market system itself, the capacity for individual businesses to act in an ethical manner will always be restricted. My argument is divided into two parts. The first part shows the bias toward unethical outcomes that is inherent in the market system. The second part suggests how to reorient the general economic framework in order to make ethical institutions more possible. The question then becomes how to define economic behavior in terms other than competition for profit.
Drunken sex is common. Despite how common drunken sex is, we think very uncritically about it. In this paper, I want to examine whether drunk individuals can consent to sex. Specifically, I answer this question: suppose that an individual, D, who is drunk but can still engage in reasoning and communication, agrees to have sex with a sober individual, S; is D’s consent to sex with S morally valid? I will argue that, within a certain range of intoxication, an individual who is drunk can give valid consent to have sex with an individual who is sober.
This paper begins with a puzzle regarding Lewis' theory of radical interpretation. On the one hand, Lewis convincingly argued that the facts about an agent's sensory evidence and choices will always underdetermine the facts about her beliefs and desires. On the other hand, we have several representation theorems—such as those of (Ramsey 1931) and (Savage 1954)—that are widely taken to show that if an agent's choices satisfy certain constraints, then those choices can suffice to determine her beliefs and desires. In this paper, I will argue that Lewis' conclusion is correct: choices radically underdetermine beliefs and desires, and representation theorems provide us with no good reasons to think otherwise. Any tension with those theorems is merely apparent, and relates ultimately to the difference between how 'choices' are understood within Lewis' theory and the problematic way that they're represented in the context of the representation theorems. For the purposes of radical interpretation, representation theorems like Ramsey's and Savage's just aren't very relevant after all.
Where Western philosophy ends, with the limits of language, marks the beginning of Eastern philosophy. The Tao de jing of Laozi begins with the limitations of language and then proceeds from that as a starting point. On the other hand, the limitation of language marks the end of Wittgenstein's cogitations. In contrast to Wittgenstein, who thought that one should remain silent about that which cannot be put into words, the message of the Zhuangzi is that one can speak about that which cannot be put into words, but the speech will be strange and indirect. Through a focus on the monstrous character No-Lips in the Zhuangzi, this paper argues that a key message of the Zhuangzi is that the art of transcending language lies in the use of crippled speech. The metaphor of crippled speech, speech which is actually unheard, illustrates that philosophical truths cannot be put into words but can be indirectly signified through the art of stretching language beyond its normal contours. This allows Eastern philosophy, through the philosophy of the Zhuangzi, to transcend the limits of language.
I argue that the main theme of the Zhuangzi is that of spiritual transformation. If there is no such theme in the Zhuangzi, it becomes an obscure text in which relativistic viewpoints contradict the statements and stories designed to lead the reader to a state of spiritual transformation. I propose to reveal the coherence of the deep structure of the text by clearly dividing relativistic statements designed to break down fixed viewpoints from the statements, anecdotes, paradoxes and metaphors designed to lead the reader to a state of spiritual transformation. Without such an analysis, its profound stories, such as the butterfly dream and the Great Sage dream, will blatantly contradict each other and leave us bereft of the wisdom they presage. Unlike the great works of poetic and philosophic wisdom such as the Dao de Jing and the Symposium, the Zhuangzi will be reduced to a virtually unintelligible, lengthy, disjointed literary ditty: a potpourri of paradoxical puzzles, puns and parables, obscure philosophical conundrums, monstrous interlocutors, and historical personages used as mouthpieces authoritatively arguing on behalf of viewpoints humorously opposite to what they historically held.
In this article, the Golden Rule, a central ethical value to both Judaism and Confucianism, is evaluated in its prescriptive and proscriptive sentential formulations. Contrary to the positively worded, prescriptive formulation – “Love others as oneself” – the prohibitive formulation, which forms the injunction, “Do not harm others, as one would not harm oneself,” is shown to be the more prevalent Judaic and Confucian presentation of the Golden Rule. After establishing this point, the remainder of the article is dedicated to an inquiry into why this preference between the two Golden Rule formulations occurs. In doing so, this article discovers four main benefits to the proscriptive formulation: I) harm-doing, as opposed to generalizable moral goodness, is easier for individuals to subjectively comprehend; II) the prevention of harm-doing is the most fundamental ethical priority; III) the proscriptive formulation preserves self-directed discovery of what is good, thus preserving moral autonomy; IV) individuals are psychologically predisposed toward responding to prohibitions rather than counsels of goodness.