This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late preëmption and other cases that have proved problematic for causal models.
We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. By examining a range of ε for synthetic and real trajectories, it is easy to see that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within a longer trajectory. This MML metric is comparable to the TRACLUS metric – and shares the constraint of abstracting only by omission of points – but is a true lossless encoding. Such encoding has several theoretical advantages – particularly with very small segments (high frame rates) – but actual performance interacts strongly with the search algorithm. Both differ from unconstrained piecewise linear approximations, including other MML formulations.
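The Douglas-Peucker step whose tolerance ε the MML criterion selects can be sketched as follows. This is a minimal illustration under our own assumptions: the function names and example trajectory are ours, and the MML selection of ε itself is not shown.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * (x - x1) - dx * (y - y1)) / norm

def douglas_peucker(points, epsilon):
    """Partition by point selection: keep the point farthest from the
    chord joining the endpoints if it deviates by more than epsilon,
    and recurse on both halves; otherwise abstract the whole run
    to its endpoints (omission of points only, as in the paper)."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right  # drop duplicated split point

# A small bump survives a tight epsilon but not a loose one,
# which is why a per-trajectory choice of epsilon matters.
traj = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0), (3.0, 5.0), (4.0, 0.0)]
```

The recursion keeps only original sample points, so the result is comparable across tolerance settings; what the MML framework adds is a principled way to pick ε per trajectory instead of by hand.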
1. There is an antinomy in Hare's thought between Ought-Implies-Can and No-Indicatives-from-Imperatives. It cannot be resolved by drawing a distinction between implication and entailment. 2. Luther resolved this antinomy in the 16th century, but to understand his solution, we need to understand his problem. He thought the necessity of Divine foreknowledge removed contingency from human acts, thus making it impossible for sinners to do otherwise than sin. 3. Erasmus objected (on behalf of Free Will) that this violates Ought-Implies-Can, which he supported with Hare-style ordinary language arguments. 4. Luther a) pointed out the antinomy and b) resolved it by undermining the prescriptivist arguments for Ought-Implies-Can. 5. We can reinforce Luther's argument with an example due to David Lewis. 6. Whatever its merits as a moral principle, Ought-Implies-Can is not a logical truth and should not be included in deontic logics. Most deontic logics, and maybe the discipline itself, should therefore be abandoned. 7. Could it be that Ought-Conversationally-Implies-Can? Yes - in some contexts. But a) even if these contexts are central to the evolution of Ought, the implication is not built into the semantics of the word; b) nor is the parallel implication built into the semantics of orders; and c) in some cases Ought conversationally implies Can only because Ought-Implies-Can is a background moral belief. d) Points a) and b) suggest a criticism of prescriptivism - that Oughts do not entail imperatives but that the relation is one of conversational implicature. 8. If Ought-Implies-Can is treated as a moral principle, Erasmus' argument for Free Will can be revived (given his Christian assumptions). But it does not 'prove' Pelagianism as Luther supposed. A semi-Pelagian alternative is available.
Nihilism, Nietzsche and the Doppelganger Problem. Was Nietzsche a nihilist? Yes, because, like J. L. Mackie, he was an error-theorist about morality, including the elitist morality to which he himself subscribed. But he was variously a diagnostician, an opponent and a survivor of certain other kinds of nihilism. Schacht argues that Nietzsche cannot have been an error theorist, since meta-ethical nihilism is inconsistent with the moral commitment that Nietzsche displayed. Schacht’s exegetical argument parallels the substantive argument (advocated in recent years by Wright and Blackburn) that Mackie’s error theory can’t be true because if it were, we would have to give up morality or give up moralizing. I answer this argument with a little bit of help from Nietzsche. I then pose a problem, the Doppelganger Problem, for the meta-ethical nihilism that I attribute to Mackie and Nietzsche. (If A is a moral proposition then not-A is a moral proposition: hence not all moral propositions can be false.) I solve the problem by reformulating the error theory and also deal with a variant of the problem, the Reinforced Doppelganger, glancing at a famous paper of Ronald Dworkin’s. Thus, whatever its demerits, the error theory is not self-refuting, nor does it require us to give up morality.
My first paper on the Is/Ought issue. The young Arthur Prior endorsed the Autonomy of Ethics, in the form of Hume’s No-Ought-From-Is (NOFI), but the later Prior developed a seemingly devastating counter-argument. I defend Prior's earlier logical thesis (albeit in a modified form) against his later self. However, it is important to distinguish between three versions of the Autonomy of Ethics: Ontological, Semantic and Logical. Ontological Autonomy is the thesis that moral judgments, to be true, must answer to a realm of sui generis non-natural PROPERTIES. Semantic Autonomy insists on a realm of sui generis non-natural PREDICATES which do not mean the same as any natural counterparts. Logical Autonomy maintains that moral conclusions cannot be derived from non-moral premises with the aid of logic alone. Logical Autonomy does not entail Semantic Autonomy and Semantic Autonomy does not entail Ontological Autonomy. But, given some plausible assumptions, Ontological Autonomy entails Semantic Autonomy and, given the conservativeness of logic – the idea that in a valid argument you don’t get out what you haven’t put in – Semantic Autonomy entails Logical Autonomy. So if Logical Autonomy is false – as Prior appears to prove – then Semantic and Ontological Autonomy would appear to be false too! I develop a version of Logical Autonomy (or NOFI) and vindicate it against Prior’s counterexamples, which are also counterexamples to the conservativeness of logic as traditionally conceived. The key concept here is an idea derived in part from Quine - that of INFERENCE-RELATIVE VACUITY. I prove that you cannot derive conclusions in which the moral terms appear non-vacuously from premises from which they are absent. But this is because you cannot derive conclusions in which ANY (non-logical) terms appear non-vacuously from premises from which they are absent. Thus NOFI or Logical Autonomy comes out as an instance of the conservativeness of logic.
This means that the reverse entailment that I have suggested turns out to be a mistake. The falsehood of Logical Autonomy would not entail either the falsehood of Semantic Autonomy or the falsehood of Ontological Autonomy, since Semantic Autonomy only entails Logical Autonomy with the aid of the conservativeness of logic, of which Logical Autonomy is simply an instance. Thus NOFI or Logical Autonomy is vindicated, but it turns out to be a less world-shattering thesis than some have supposed. It provides no support for either non-cognitivism or non-naturalism.
In his celebrated 'Good and Evil' (1956) Professor Geach argues, as against the non-naturalists, that ‘good’ is attributive and that the predicative 'good', as used by Moore, is senseless. 'Good' when properly used is attributive. 'There is no such thing as being just good or bad, [that is, no predicative 'good'] there is only being a good or bad so and so'. On the other hand, Geach insists, as against non-cognitivists, that good-judgments are entirely 'descriptive'. By a consideration of what it is to be an A, we can determine what it is to be a good A, even where the ‘A’ in question is ‘human being’. These battles are fought on behalf of naturalism, indeed, of an up-to-date Aristotelianism. Geach plans to 'pass' from the 'purely descriptive' man to good/bad man, and from human act to good/bad human act. I argue: (1) That the predicative 'good' does have a genuine sense and that it is a mistake to suppose that ‘good’ is a purely attributive adjective. This does not entail that the predicative good (as used by Moore) denotes a non-natural property, but his mistake, if any, is metaphysical or ontological, not conceptual. (2) That the attributive 'good' cannot be used to generate a naturalistic ethic. It is difficult to extract a set of biologically based requirements out of human nature that are a) reasonably specific; b) rationally binding or at least highly persuasive; and c) morally credible.

On the way I protest against Geach’s tendency to try to win arguments by affecting not to understand things.

My views to some extent anticipate those of Kraut in *Against Absolute Goodness*.
The paper reconstructs Moore's Open Question Argument (OQA) and discusses its rise and fall. There are three basic objections to the OQA: Geach's point, that Moore presupposes that 'good' is a predicative adjective (whereas it is in fact attributive); Lewy's point, that it leads straight to the Paradox of Analysis; and Durrant's point that even if 'good' is not synonymous with any naturalistic predicate, goodness might be synthetically identical with a naturalistic property. As against Geach, I argue that 'good' has both predicative and attributive uses and that in moral contexts it is difficult to give a naturalistic account of the attributive 'good'. To deal with Lewy, I reformulate the OQA. But the bulk of the paper is devoted to Durrant's objection. I argue that the post-Moorean programme of looking for synthetic identities between moral and naturalistic properties is either redundant or impossible. For it can be carried through only if 'good' expresses an empirical concept, in which case it is redundant since naturalism is true. But 'good' does not express an empirical concept (a point proved by the reformulated OQA). Hence synthetic naturalism is impossible. I discuss direct reference as a possible way out for the synthetic naturalist and conclude that it will not work. The OQA may be a bit battered but it works after a fashion.
Taking my cue from Michael Smith, I try to extract a decent argument for non-cognitivism from the text of the Treatise. I argue that the premises are false and that the whole thing rests on a petitio principii. I then re-jig the argument so as to support the conclusion that Hume actually believed (namely that an action is virtuous if it would excite the approbation of a suitably qualified spectator). This argument too rests on false premises and a begged question. Thus the Motivation Argument fails BOTH as an argument for noncognitivism AND as an argument for what Hume actually believed, that moral distinctions are not derived from reason and that moral properties are akin to secondary qualities. So far as the Motivation Argument is concerned, both cognitivists and rationalists can rest easy. Themes: 1) Hume’s Slavery of Reason thesis is only defensible if passions are not only desires but sometimes dispositions to acquire desires (DTADs). 2) A desire for our good on the whole, which Humeans need to posit to fend off apparent counterexamples to the Slavery of Reason Thesis, does not sit well with the Humean theory of how novel desires arise (an objection due originally to Reid). 3) Hume is wrong to suppose that ‘abstract or demonstrative reasoning never influences any of our actions, but only as it directs our judgment concerning causes and effects’, as the examples of Russell and Hobbes convincingly demonstrate. This is ironic, as both Russell and Hobbes subscribed to the Slavery of Reason Thesis. 4) I critique Michael Smith’s critique of motivational externalism.
In Milgram’s experiments, subjects were induced to inflict what they believed to be electric shocks in obedience to a man in a white coat. This suggests that many of us can be persuaded to torture, and perhaps kill, another person simply on the say-so of an authority figure. But the experiments have been attacked on methodological, moral and methodologico-moral grounds. Patten argues that the subjects probably were not taken in by the charade; Bok argues that lies should not be used in research; and Patten insists that any excuse for Milgram’s conduct can be adapted on behalf of his subjects. (Either he was wrong to conduct the experiments or they do not establish the phenomenon of immoral obedience.) We argue a) that the subjects were indeed taken in, b) that there are good historical reasons for regarding the experiments as ecologically valid, c) that lies (though usually wrong) were in this case legitimate, d) that there were excuses available to Milgram which were not available to his subjects and e) that even if he was wrong to conduct the experiments this does not mean that he failed to establish immoral obedience. So far from ‘disrespecting’ his subjects, Milgram enhanced their autonomy as rational agents. We concede however that it might (now) be right to prohibit what it was (then) right to do.
This paper is a critique of coercive theories of meaning, that is, theories (or criteria) of meaning designed to do down one's opponents by representing their views as meaningless or unintelligible. Many philosophers from Hobbes through Berkeley and Hume to the pragmatists, the logical positivists and (above all) Wittgenstein have devised such theories and criteria in order to discredit their opponents. I argue 1) that such theories and criteria are morally obnoxious, a) because they smack of the totalitarian linguistic tactics of the Party in Orwell’s 1984 and b) because they dehumanize the opposition by portraying them as mere spouters of gibberish; 2) that they are profoundly illiberal, since if true, they would undermine Mill’s arguments for free speech; 3) that such theories are prone to self-contradiction, pragmatic and otherwise; 4) that they often turn against their creators, including what they were meant to exclude and excluding what they were meant to include; 5) that such theories are susceptible to a modus tollens pioneered by Richard Price in his Review of the Principal Questions in Morals (1758); and 6) that such theories are prima facie false, since they fail to ‘predict’ the data that it is their business to explain (or, in the case of criteria, fail to capture the concept that they allegedly represent). The butcher’s bill is quite considerable: some of Hobbes, a fair bit of Locke, half of Berkeley, large chunks of Hume, Russell's Theory of Types, verificationism in its positivist and Dummettian variants, much of pragmatism and most of Wittgenstein - all these have to be sacrificed if we are to save our souls as philosophic liberals.
This paper deals with what I take to be one woman’s literary response to a philosophical problem. The woman is Jane Austen, the problem is the rationality of Hume’s ‘sensible knave’, and Austen’s response is to deepen the problem. Despite his enthusiasm for virtue, Hume reluctantly concedes in the EPM that injustice can be a rational strategy for ‘sensible knaves’, intelligent but selfish agents who feel no aversion towards thoughts of villainy or baseness. Austen agrees, but adds that ABSENT CONSIDERATIONS OF A FUTURE STATE, other vices besides injustice can be rationally indulged with tolerable prospects of worldly happiness. Austen’s creation Mr Elliot in Persuasion is just such an agent – sensible and knavish but not technically ‘unjust’. Despite and partly because of his vices – ingratitude, avarice and duplicity – he manages to be both successful and reasonably happy. There are plenty of other reasonably happy knaves in Jane Austen, some of whom are not particularly sensible. This is not to say that either Austen or Hume is in favor of knavery. It is just that they both think that only those with the right sensibility can be argued out of it.
Hume is widely regarded as the grandfather of emotivism and indeed of non-cognitivism in general. For the chief argument for emotivism - the Argument from Motivation - is derived from him. In my opinion Hume was not an emotivist or proto-emotivist but a moral realist in the modern ‘response-dependent’ style. But my interest in this paper is not the historical Hume but the Hume of legend, since the legendary Hume is one of the most influential philosophers of the present age. According to Michael Smith, ‘the Moral Problem’ – the central issue in meta-ethics – is that the premises of Hume’s argument appear to be true though the non-cognitivist conclusion appears to be false. Since the argument seems to be valid, something has got to give. Smith struggles to solve the problem by holding on to something like the premises of the argument whilst trying to fend off the conclusion. In my view this is a wasted effort. Hume was not arguing for non-cognitivism in the first place, and the arguments for non-cognitivism that can be extracted from his writings are no good. Either the premises are false or the inferences are invalid. And this is despite the fact that Hume was substantially right about reason and the passions. Thus ‘the Moral Problem’ is not a problem, and the legendary Hume does not deserve his influence.

An important theme in this paper is the concept of a DTAD, or disposition to acquire desires. DTADs play an important role in motivation but unlike desires (with which they are sometimes confused) they are NOT propositional attitudes.
Bertrand Russell was a meta-ethical pioneer, the original inventor of both emotivism and the error theory. Why, having abandoned emotivism for the error theory, did he switch back to emotivism in the 1920s? Perhaps he did not relish the thought that as a moralist he was a professional hypocrite. In addition, Russell's version of the error theory suffers from severe defects. He commits the naturalistic fallacy and runs afoul of his own and Moore's arguments against subjectivism. These defects could be repaired, but only by abandoning Russell's semantics. Russell preferred to revert to emotivism.
The Quine/Putnam indispensability argument is regarded by many as the chief argument for the existence of platonic objects. We argue that this argument cannot establish what its proponents intend. The form of our argument is simple. Suppose indispensability to science is the only good reason for believing in the existence of platonic objects. Either the dispensability of mathematical objects to science can be demonstrated and, hence, there is no good reason for believing in the existence of platonic objects, or their dispensability cannot be demonstrated and, hence, there is no good reason for believing in the existence of mathematical objects which are genuinely platonic. Therefore, indispensability, whether true or false, does not support platonism.
Computer-based argument mapping greatly enhances student critical thinking, more than tripling absolute gains made by other methods. I describe the method and my experience as an outsider. Argument mapping often showed precisely how students were erring (for example: confusing helping premises for separate reasons), making it much easier for them to fix their errors.
There is a need to rapidly assess the impact of new technology initiatives on the Counter Improvised Explosive Device battle in Iraq and Afghanistan. The immediate challenge is the need for rapid decisions, and a lack of engineering test data to support the assessment. The rapid assessment methodology exploits available information to build a probabilistic model that provides an explicit executable representation of the initiative’s likely impact. The model is used to provide a consistent, explicit explanation to decision makers on the likely impact of the initiative. Sensitivity analysis on the model provides analytic information to support development of informative test plans.
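The combination of an executable probability model with one-way sensitivity analysis can be illustrated in miniature. Everything here is hypothetical: the paper does not publish its model, so the node names, probabilities, and two-node structure below are ours, chosen only to show the mechanics.

```python
def p_neutralize(p_detect, p_given_detect=0.8, p_given_miss=0.05):
    """Marginal impact via the law of total probability over a
    (hypothetical) detection node:
    P(N) = P(N|D) * P(D) + P(N|~D) * (1 - P(D))."""
    return p_given_detect * p_detect + p_given_miss * (1.0 - p_detect)

def one_way_sensitivity(values):
    """Sweep the uncertain detection rate while holding the other
    parameters fixed; a steep slope flags the parameter that an
    informative test plan should pin down first."""
    return [(v, p_neutralize(v)) for v in values]
```

Running `one_way_sensitivity([0.2, 0.5, 0.8])` shows how strongly the assessed impact depends on a parameter for which no engineering test data yet exists, which is exactly the information a test plan needs.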
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton’s footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
Hume seems to contend that you can’t get an ought from an is. Searle professed to prove otherwise, deriving a conclusion about obligations from a premise about promises. Since (as Schurz and I have shown) you can’t derive a substantive ought from an is by logic alone, Searle is best construed as claiming that there are analytic bridge principles linking premises about promises to conclusions about obligations. But we can no more derive a moral obligation to pay up from the fact that a promise has been made than we can derive a duty to fight a duel from the fact that a challenge has been issued – just conclusions about what we ought to do according to the rules of the relevant games. Hume suggests bridge principles that would take us from the rules of the games to conclusions about duties, but these principles are false. My argument features an obstreperous earl, an anarchist philosopher and a dueling Prime Minister.
Frank Snare had a puzzle. Noncognitivism implies No-Ought-From-Is but No-Ought-From-Is does not imply non-cognitivism. How then can we derive non-cognitivism from No-Ought-From-Is? Via an abductive argument. If we combine non-cognitivism with the conservativeness of logic (the idea that in a valid argument the conclusion is contained in the premises), this implies No-Ought-From-Is. Hence if No-Ought-From-Is is true, we can arrive at non-cognitivism via an inference to the best explanation. With prescriptivism we can make this argument more precise. I develop an account of imperative consequence that underwrites Hare’s principle that you cannot derive imperatives from indicatives. Thus if moral judgments contain an imperative component, it will be impossible to derive moral conclusions from indicative or non-moral premises. Given this account of imperative consequence, we can explain No-Ought-From-Is without appealing to anything as nebulous as the conservativeness of logic. Hence if No-Ought-From-Is is true, we have an inference to the best explanation for prescriptivism. Both lines of argument face problems from Prior. Given Prior’s counterexamples, No-Ought-From-Is as originally conceived is false. The version that survives is No-Non-Vacuous-Ought-From-Is. But the best explanation of this does not include non-cognitivism. With prescriptivism it is worse. For the version of No-Ought-From-Is that prescriptivism ‘explains’ – that is, the version of No-Ought-From-Is that prescriptivism implies – would exclude Prior’s counter-examples to Autonomy as invalid. But they are not invalid. Thus Prior’s counter-examples to No-Ought-From-Is refute prescriptivism. Thus from 1960 onwards R. M. Hare was a dead philosopher walking. But if non-cognitivism cannot be derived from No-Ought-From-Is, this suggests that it is not what Hume was trying to prove. I argue that what Hume was trying to prove is that moral truths are not demonstrable.
To be demonstrable, a proposition must be either self-evident or logically derivable from self-evident propositions. By Treatise 3.1.1.27, Hume had proved to his own satisfaction that no moral propositions are self-evident. That leaves open the possibility that they are logically derivable from self-evident but NON-moral propositions. The point of No-Ought-From-Is was to exclude this possibility. If you cannot logically derive moral conclusions from non-moral premises, you cannot demonstrate the truths of morality by deriving them from self-evident but NON-moral truths. I also discuss why Hume abandoned No-Ought-From-Is in the EPM. He had no need of it since he thought he had a proof that (with some exceptions) no nontrivial truths are demonstrable. Hence no non-trivial MORAL truths are demonstrable. No-Ought-From-Is drops out as unnecessary.
This selective overview of the history of American Philosophy in the Twentieth Century begins with certain enduring themes that were developed by the two main founders of classical American pragmatism, Charles Sanders Peirce (1839–1914) and William James. Against the background of the pervasive influence of Kantian and Hegelian idealism in America in the decades surrounding the turn of the century, pragmatism and related philosophical outlooks emphasizing naturalism and realism were dominant during the first three decades of the century. Beginning in the 1930s and 1940s, however, the middle third of the century witnessed the rising influence in America of what would become known as “analytic philosophy,” with its primary roots in Europe: in the Cambridge philosophical analysis of Moore, Russell, and Wittgenstein; logical empiricism and positivism on the continent; and linguistic analysis and ordinary-language philosophy at Oxford. This overview stresses the persistence of pragmatist themes throughout much of the century, while emphasizing the mid-century transformations that resulted from developments primarily in analytic philosophy. These combined influences resulted at the turn of the millennium in the flourishing, among other developments, of distinctively analytic styles of pragmatism and naturalism.
This Thesis engages with contemporary philosophical controversies about the nature of dispositional properties or powers and the relationship they have to their non-dispositional counterparts. The focus concerns fundamentality. In particular, I seek to answer the question, ‘What fundamental properties suffice to account for the manifest world?’ The answer I defend is that fundamental categorical properties need not be invoked in order to derive a viable explanation for the manifest world. My stance is a field-theoretic view which describes the world as a single system comprised of pure power, and involves the further contention that ‘pure power’ should not be interpreted as ‘purely dispositional’, if dispositionality means potentiality, possibility or otherwise unmanifested power or ability bestowed upon some bearer. The theoretical positions examined include David Armstrong’s Categoricalism, Sydney Shoemaker’s Causal Theory of Properties, Brian Ellis’s New Essentialism, Ullin Place’s Conceptualism, Charles Martin’s and John Heil’s Identity Theory of Properties and Rom Harré’s Theory of Causal Powers. The central concern of this Thesis is to examine reasons for holding a pure-power theory, and to defend such a stance. This involves two tasks. The first requires explaining what plays the substance role in a pure-power world. This Thesis argues that fundamental power, although not categorical, can be considered ontologically-robust and thus able to fulfil the substance role. A second task—answering the challenge put forward by Richard Swinburne and thereafter replicated in various neo-Swinburne arguments—concerns how the manifestly qualitative world can be explained starting from a pure-power base. The Light-like Network Account is put forward in an attempt to show how the manifest world can be derived from fundamental pure power.
This article provides a framework for understanding the dynamics between the disenchanting effects of a uniquely modern existential meaning crisis and a countervailing reenchantment facilitated by the techno-cultural movement of transhumanism. This movement constructs a post-secular techno-theology grounded in a transhumanist ontology that corresponds to a shift away from anthropocentric meaning systems. To shed light on this dynamic, I take a phenomenological approach to the human-technology relationship, highlighting the role of technology in ontology formation and religious imagination. I refer to examples of transhumanist religious movements to illustrate a new posthumanist ontological grounding of meaning corresponding to a contemporary meaning-crisis that scholars are calling ‘neuroexistentialism.’ I then use the language of Charles Taylor and his work on secularization to frame these ontological developments. Ultimately, this article argues that transhumanist religious expression represents a zeitgeist of post-secular re-enchantment.
Fallibilism, as a fundamental aspect of pragmatic epistemology, can be illuminated by a study of law. Before he became a famous American judge, Oliver Wendell Holmes, Jr., along with his friends William James and Charles Sanders Peirce, associated as presumptive members of the Metaphysical Club of Cambridge in the 1870s, recalled as the birthplace of pragmatism. As a young scholar, Holmes advanced a concept of legal fallibilism as incremental community inquiry. In this early work, I suggest that Holmes treats common law cases more like scientific experiments than as deductive applications of already clear rules. Common law rules may be seen as a product of 1) the conflicts that occur in society, 2) the channeling of conflicts into legal disputes, 3) the gradual accumulation of judicial decisions classified into groups, and 4) the development of consensual understanding, expressed in rules and principles, as to how future cases should be classified and decided. This does not involve only lawyers and judges. Especially in controversial cases, it may indirectly involve an entire community. The legal process is seen as an extended intergenerational process of inquiry. It illuminates the relation of thought, expression, and conduct among a community of inquirers, applied to the problems of social ordering.
Totality and Infinity (TI) is a complex work. Given Levinas's style of exposition, new readers are well served by a guide that can probe aspects of the book which might otherwise pass unnoticed. The work of James R. Mensch, professor of philosophy at Charles University in the Czech Republic and at Saint Francis Xavier University in Canada, earns him a place among the notable commentators on Levinas. His approach to TI from the existential analytic offers the reader a panorama of the work through a confrontation with an unavoidable interlocutor: Martin Heidegger.
This collection of essays by acclaimed philosophers explores Bertrand Russell's influence on one of the dominant philosophical approaches of this century. Michael Dummett argues that analytical philosophy began with Gottlob Frege's analysis of numbers. Frege had begun by inquiring about the nature of number, but found it more fruitful to ask instead about the meanings of sentences containing number words. Russell was to exploit this method systematically. I reflect on the essays of Charles R. Pigden, of David Lewis as an exponent of a variant of Russell's position (the good is what we are ideally disposed to desire to desire), and of Greenspan's suggestion that Russell adopted some element of the Marxist theory of morals.
Charles Taylor’s idea of “deep diversity” has played a major role in the debates around multiculturalism in Canada and around the world. Originally, the idea was meant to account for how the different national communities within Canada – those of the English-speaking Canadians, the French-speaking Quebeckers, and the Aboriginals – conceive of their belonging to the country in different ways. But Taylor conceives of these differences strictly in terms of irreducibility; that is, he fails to see that they also exist in such a way that the country cannot be said to form a unified whole. After giving an account of the philosophical as well as religious reasons behind his position, the chapter goes on to describe some of its political implications.
Are science and religion compatible when it comes to understanding cosmology (the origin of the universe), biology (the origin of life and of the human species), ethics, and the human mind (minds, brains, souls, and free will)? Do science and religion occupy non-overlapping magisteria? Is Intelligent Design a scientific theory? How do the various faith traditions view the relationship between science and religion? What, if any, are the limits of scientific explanation? What are the most important open questions, problems, or challenges confronting the relationship between science and religion, and what are the prospects for progress? These and other questions are explored in Science and Religion: 5 Questions--a collection of thirty-three interviews based on 5 questions presented to some of the world's most influential and prominent philosophers, scientists, theologians, apologists, and atheists. Contributions by Simon Blackburn, Susan Blackmore, Sean Carroll, William Lane Craig, William Dembski, Daniel C. Dennett, George F.R. Ellis, Owen Flanagan, Owen Gingerich, Rebecca Newberger Goldstein, John F. Haught, Muzaffar Iqbal, Lawrence Krauss, Colin McGinn, Alister McGrath, Mary Midgley, Seyyed Hossein Nasr, Timothy O'Connor, Massimo Pigliucci, John Polkinghorne, James Randi, Alex Rosenberg, Michael Ruse, Robert John Russell, John Searle, Michael Shermer, Victor J. Stenger, Robert Thurman, Michael Tooley, Charles Townes, Peter van Inwagen, Keith Ward, Rabbi David Wolpe.
This chapter both explains the origins of emotivism in C. K. Ogden and I. A. Richards, R. B. Braithwaite, Austin Duncan-Jones, A. J. Ayer and Charles Stevenson (along with the endorsement by Frank P. Ramsey, and the summary of C. D. Broad), and looks at MacIntyre's criticisms of emotivism as the inevitable result of Moore's attack on naturalistic ethics and his ushering in of the fact/value distinction, which was a historical product of the Enlightenment.
‘Greek Ethics’, an undergraduate class taught by the British moral philosopher N. J. H. Dent, introduced this reviewer to the ethical philosophy of ancient Greece. The class had a modest purview—a sequence of Socrates, Plato, and Aristotle—but it proved no less effective, in retrospect, than more synoptic classes for having taken this apparently limited and (for its students and academic level) appropriate focus. This excellent Companion will now serve any such class extremely well, allowing students a broader exposure than that traditional sequence, without sacrificing the class’s circumscribed focus. The eighteen chapters encompass some of what went before, and surprisingly much of what came after, those three central philosophers—including, for instance, a discussion of Plotinus and his successors, as well as a discussion of Horace. The book will therefore be useful in many different types of class on ethical philosophy in the ancient world. This Companion will be useful not only to students, but also to at least three further groups: specialists in ancient Greek philosophy (since some contributors advance significant new positions, e.g. R. Kamtekar on Plato’s ethical psychology and D. Charles on Aristotle’s ‘ergon argument’ as already implicitly invoking ‘to kalon’); scholars working in academic subjects adjacent to ancient Greek philosophy; and contemporary moral philosophers.
There is a bewildering variety of claims connecting Darwin to nineteenth-century philosophy of science—including to Herschel, Whewell, Lyell, German Romanticism, Comte, and others. I argue here that Herschel’s influence on Darwin is undeniable. The form of this influence, however, is often misunderstood. Darwin was not merely taking the concept of “analogy” from Herschel, nor was he combining such an analogy with a consilience as argued for by Whewell. On the contrary, Darwin’s Origin is written in precisely the manner that one would expect were Darwin attempting to model his work on the precepts found in Herschel’s Preliminary Discourse on the Study of Natural Philosophy. While Hodge has worked out a careful interpretation of both Darwin and Herschel, drawing similar conclusions, his interpretation misreads Herschel’s use of the vera causa principle and the verification of hypotheses. The new reading that I present here resolves this trouble, combining Hodge’s careful treatment of the structure of the Origin with a more cautious understanding of Herschel’s philosophy of science. This interpretation lets us understand why Darwin laid out the Origin in the way that he did and also why Herschel so strongly disagreed, including in Herschel’s heretofore unanalyzed marginalia in his copy of Darwin’s book.
A longish (12-page) discussion of Richard Sorabji's excellent book, with a further discussion of what it means for a theory of emotions to be a cognitive theory.
A longstanding issue in attempts to understand the Everett (Many-Worlds) approach to quantum mechanics is the origin of the Born rule: why is the probability given by the square of the amplitude? Following Vaidman, we note that observers are in a position of self-locating uncertainty during the period between the branches of the wave function splitting via decoherence and the observer registering the outcome of the measurement. In this period it is tempting to regard each branch as equiprobable, but we argue that the temptation should be resisted. Applying lessons from this analysis, we demonstrate (using methods similar to those of Zurek's envariance-based derivation) that the Born rule is the uniquely rational way of apportioning credence in Everettian quantum mechanics. In doing so, we rely on a single key principle: changes purely to the environment do not affect the probabilities one ought to assign to measurement outcomes in a local subsystem. We arrive at a method for assigning probabilities in cases that involve both classical and quantum self-locating uncertainty. This method provides unique answers to quantum Sleeping Beauty problems, as well as a well-defined procedure for calculating probabilities in quantum cosmological multiverses with multiple similar observers.
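For readers unfamiliar with the rule at issue, the Born rule can be stated compactly; this is the standard textbook formulation, not the paper's derivation of it:

```latex
% Born rule: for a quantum state expanded in the measurement basis,
%   |\psi\rangle = \sum_i a_i |i\rangle ,
% the probability of obtaining outcome i is the squared amplitude:
P(i) = \left| \langle i | \psi \rangle \right|^2 = |a_i|^2 .
```

The question the abstract raises is why this particular measure, rather than (say) equal weight for each decoherent branch, is the rational one for an Everettian observer.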
Hume describes his own “open, social, and cheerful humour” as “a turn of mind which it is more happy to possess, than to be born to an estate of ten thousand a year.” Why does he value a cheerful character so highly? I argue that, for Hume, cheerfulness has two aspects—one manifests as mirth in social situations, and the other as steadfastness against life’s misfortunes. This second aspect is of special interest to Hume in that it safeguards the other virtues. And its connection with the first aspect helps explain how it differs from Stoic tranquility. For Hume, I argue, philosophy has a modest role in promoting human happiness by preserving cheerfulness.
In this article, network science is discussed from a methodological perspective, and two central theses are defended. The first is that network science exploits the very properties that make a system complex. Rather than using idealization techniques to strip those properties away, as is standard practice in other areas of science, network science brings them to the fore, and uses them to furnish new forms of explanation. The second thesis is that network representations are particularly helpful in explaining the properties of non-decomposable systems. Where part-whole decomposition is not possible, network science provides a much-needed alternative method of compressing information about the behavior of complex systems, and does so without succumbing to problems associated with combinatorial explosion. The article concludes with a comparison between the uses of network representation analyzed in the main discussion, and an entirely distinct use of network representation that has recently been discussed in connection with mechanistic modeling.
People often talk to others about their personal past. These discussions are inherently selective. Selective retrieval of memories in the course of a conversation may induce forgetting of unmentioned but related memories for both speakers and listeners (Cuc, Koppel, & Hirst, 2007). Cuc et al. (2007) defined the forgetting on the part of the speaker as within-individual retrieval-induced forgetting (WI-RIF) and the forgetting on the part of the listener as socially shared retrieval-induced forgetting (SS-RIF). However, if the forgetting associated with WI-RIF and SS-RIF is to be taken seriously as a mechanism that shapes both individual and shared memories, this mechanism must be demonstrated with meaningful material and in ecologically valid groups. In our first 2 experiments we extended SS-RIF from unemotional, experimenter-contrived material to the emotional and unemotional autobiographical memories of strangers (Experiment 1) and intimate couples (Experiment 2) when merely overhearing the speaker selectively practice memories. We then extended these results to the context of a free-flowing conversation (Experiments 3 and 4). In all 4 experiments we found WI-RIF and SS-RIF regardless of the emotional valence or individual ownership of the memories. We discuss our findings in terms of the role of conversational silence in shaping both our personal and shared pasts.
This entry explores Charles Peirce's account of truth in terms of the end or ‘limit’ of inquiry. This account is distinct from – and arguably more objectivist than – views of truth found in other pragmatists such as James and Rorty. The roots of the account in mathematical concepts are explored, and it is defended against objections that it is (i) incoherent, (ii) too realist in its faith in convergence, and (iii) not realist enough in its ‘internal realism’.
Survey article on Naturalism dealing with Hume's NOFI (No-Ought-From-Is, including Prior's objections), Moore's Naturalistic Fallacy and the Barren Tautology Argument. Naturalism, as I understand it, is a form of moral realism which rejects fundamental moral facts or properties. Thus it is opposed to non-cognitivism and the error theory, but also to non-naturalism. General conclusion (as of 1991): naturalism as a program has not been refuted, though none of the extant versions looks particularly promising.
The propensity interpretation of fitness (PIF) is commonly taken to be subject to a set of simple counterexamples. We argue that three of the most important of these are not counterexamples to the PIF itself, but only to the traditional mathematical model of this propensity: fitness as expected number of offspring. They fail to demonstrate that a new mathematical model of the PIF could not succeed where this older model fails. We then propose a new formalization of the PIF that avoids these (and other) counterexamples. By producing a counterexample-free model of the PIF, we call into question one of the primary motivations for adopting the statisticalist interpretation of fitness. In addition, this new model has the benefit of being more closely allied with contemporary mathematical biology than the traditional model of the PIF.
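For context, the "traditional mathematical model" criticized here identifies fitness with expected offspring number; this is the familiar textbook formulation, not the authors' replacement model:

```latex
% Traditional propensity model: the fitness of an organism O is the
% expectation of its offspring-number distribution, where p_i is the
% probability that O leaves exactly i offspring.
F(O) = \mathbb{E}[\text{offspring}] = \sum_{i=0}^{\infty} i \, p_i
```

The well-known counterexamples (e.g. cases where variance in offspring number matters to evolutionary success) target this expectation-based definition, which is why the abstract distinguishes the PIF itself from any particular mathematical model of the propensity.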
I explore the extent to which the epistemic state of understanding is transparent to the one who understands. Against several contemporary epistemologists, I argue that it is not transparent in the way that many have claimed, drawing on results from developmental psychology, animal cognition, and other fields.
A paradox, it is claimed, is a radical form of contradiction, one that produces gaps in meaning. In order to approach this idea, two senses of “separation” are distinguished: separation by something and separation by nothing. The latter does not refer to nothing in an ordinary sense, however, since in that sense what’s intended is actually less than nothing. Numerous ordinary nothings in philosophy as well as in other fields are surveyed so as to clarify the contrast. Then follows the suggestion that philosophies which one would expect to have room for paradoxes actually tend either to exclude them altogether or to dull them. There is a clear alternative, however, one that fully recognizes paradoxes and yet also strives to overcome them.
Philosophy of science is expanding via the introduction of new digital data and tools for their analysis. The data comprise digitized published books and journal articles, as well as heretofore unpublished material such as images, archival text, notebooks, meeting notes, and programs. The growth in available data is matched by the extensive development of automated analysis tools. The variety of data sources and tools can be overwhelming. In this article, we survey the state of digital work in the philosophy of science, showing what kinds of questions can be answered and how one can go about answering them.
Conspiracy theories should be neither believed nor investigated - that is the conventional wisdom. I argue that it is sometimes permissible both to investigate and to believe. Hence this is a dispute in the ethics of belief. I defend epistemic “oughts” that apply in the first instance to belief-forming strategies that are partly under our control. But the belief-forming strategy of not believing conspiracy theories would be a political disaster and the epistemic equivalent of self-mutilation. I discuss several variations of this strategy, interpreting “conspiracy theory” in different ways, but conclude that on all these readings, the conventional wisdom is deeply unwise.
Conspiracy theories are widely deemed to be superstitious. Yet history appears to be littered with conspiracies, successful and otherwise. (For this reason, "cock-up" theories cannot in general replace conspiracy theories, since in many cases the cock-ups are simply failed conspiracies.) Why then is it silly to suppose that historical events are sometimes due to conspiracy? The only argument available to this author is drawn from the work of the late Sir Karl Popper, who criticizes what he calls "the conspiracy theory of society" in The Open Society and elsewhere. His critique of the conspiracy theory is indeed sound, but it is a theory no sane person maintains. Moreover, its falsehood is compatible with the prevalence of conspiracies. Nor do his arguments create any presumption against conspiracy theories of this or that. Thus the belief that it is superstitious to posit conspiracies is itself a superstition. The article concludes with some speculations as to why this superstition is so widely believed.
This paper describes one style of functional analysis commonly used in the neurosciences called task-bound functional analysis. The concept of function invoked by this style of analysis is distinctive in virtue of the dependence relations it bears to transient environmental properties. It is argued that task-bound functional analysis cannot explain the presence of structural properties in nervous systems. An alternative concept of neural function is introduced that draws on the theoretical neuroscience literature, and an argument is given to show that this alternative concept may help to overcome the explanatory limitations of task-bound functional analysis.
The Philosophical Forum, Volume 52, Issue 3, pp. 189-210, Fall 2021. Antisemitism is fun. This essay explains why and proposes a new approach to combating it.