Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.
Despite the success of so-called ‘rational explanations’ in describing and predicting cognitive behavior, their plausibility is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem of intractability is solved by recourse to ‘as if’ explanations critically depends, inter alia, on the semantics of the ‘as if’ connective. We examine the five most sensible explications in the literature, and conclude that none of them circumvents the problem. As a result, rational ‘as if’ explanations must obey the minimal computational constraint of tractability.
Epicurus argued that death can be neither good nor bad because it involves neither pleasure nor pain. This paper focuses on the deprivation account as a response to this Hedonist Argument. Proponents of the deprivation account hold that Epicurus’s argument fails even if death involves no painful or pleasurable experiences and even if the hedonist ethical system, which holds that pleasure and pain are all that matter ethically, is accepted. I discuss four objections that have been raised against the deprivation account and argue that this response to Epicurus’s argument is successful once it has been sufficiently clarified.
P.F. Strawson’s (1962) “Freedom and Resentment” has provoked a wide range of responses, both positive and negative, and an equally wide range of interpretations. In particular, beginning with Gary Watson, some have seen Strawson as suggesting a point about the “order of explanation” concerning moral responsibility: it is not that it is appropriate to hold agents responsible because they are morally responsible, rather, it is ... well, something else. Such claims are often developed in different ways, but one thing remains constant: they are meant to be incompatible with libertarian theories of moral responsibility. The overarching theme of this paper is that extant developments of “the reversal” face a dilemma: in order to make the proposals plausibly anti-libertarian, they must be made to be implausible on other grounds. I canvass different attempts to articulate a “Strawsonian reversal”, and argue that none is fit for the purposes for which it is intended. I conclude by suggesting a way of clarifying the intended thesis: an analogy with the concept of funniness. The result: proponents of the “reversal” need to accept the difficult result that if we blamed small children, they would be blameworthy, or instead explain how their view escapes this result, while still being a view on which our blaming practices “fix the facts” of moral responsibility.
There is a familiar debate between Russell and Strawson concerning bivalence and ‘the present King of France’. According to the Strawsonian view, ‘The present King of France is bald’ is neither true nor false, whereas, on the Russellian view, that proposition is simply false. In this paper, I develop what I take to be a crucial connection between this debate and a different domain where bivalence has been at stake: future contingents. On the familiar ‘Aristotelian’ view, future contingent propositions are neither true nor false. However, I argue that, just as there is a Russellian alternative to the Strawsonian view concerning ‘the present King of France’, according to which the relevant class of propositions all turn out false, so there is a Russellian alternative to the Aristotelian view, according to which future contingents all turn out false, not neither true nor false. The result: contrary to millennia of philosophical tradition, we can be open futurists without denying bivalence.
Various philosophers have long been attracted to the doctrine that future contingent propositions systematically fail to be true—what is sometimes called the doctrine of the open future. However, open futurists have always struggled to articulate how their view interacts with standard principles of classical logic—most notably, with the Law of Excluded Middle. For consider the following two claims: Trump will be impeached tomorrow; Trump will not be impeached tomorrow. According to the kind of open futurist at issue, both of these claims may well fail to be true. According to many, however, the disjunction of these claims can be represented as p ∨ ~p—that is, as an instance of LEM. In this essay, however, I wish to defend the view that the disjunction of these claims cannot be represented as an instance of p ∨ ~p. And this is for the following reason: the latter claim is not, in fact, the strict negation of the former. More particularly, there is an important semantic distinction between the strict negation of the first claim [‘~Will p’] and the latter claim [‘Will ~p’]. However, the viability of this approach has been denied by Thomason, and more recently by MacFarlane and Cariani and Santorio, the latter of whom call the denial of the given semantic distinction “scopelessness”. According to these authors, that is, will is “scopeless” with respect to negation; whereas there is perhaps a syntactic distinction between ‘~Will p’ and ‘Will ~p’, there is no corresponding semantic distinction. And if this is so, the approach in question fails. In this paper, then, I criticize the claim that will is “scopeless” with respect to negation. I argue that will is a so-called “neg-raising” predicate—and that, in this light, we can see that the requisite scope distinctions aren’t missing, but are simply being masked. The result: an under-appreciated solution to the problem of future contingents that sees ‘Will p’ and ‘Will ~p’ as contraries, not contradictories.
Recently, philosophers have turned their attention to the question, not of when a given agent is blameworthy for what she does, but of when a further agent has the moral standing to blame her for what she does. Philosophers have proposed at least four conditions on having “moral standing”:

1. One’s blame would not be “hypocritical”.
2. One is not oneself “involved in” the target agent’s wrongdoing.
3. One must be warranted in believing that the target is indeed blameworthy for the wrongdoing.
4. The target’s wrongdoing must in some sense be “one’s business”.

These conditions are often proposed both as conditions on one and the same thing, and as marking fundamentally different ways of “losing standing.” Here I call these claims into question. First, I claim that conditions (3) and (4) are simply conditions on different things than are conditions (1) and (2). Second, I argue that condition (2) reduces to condition (1): when “involvement” removes someone’s standing to blame, it does so only by indicating something further about that agent, viz., that he or she lacks commitment to the values that condemn the wrongdoer’s action. The result: after we clarify the nature of the non-hypocrisy condition, we will have a unified account of moral standing to blame. Issues also discussed: whether standing can ever be regained, the relationship between standing and our "moral fragility", the difference between mere inconsistency and hypocrisy, and whether a condition of standing might be derived from deeper facts about the "equality of persons".
I provide a manipulation-style argument against classical compatibilism—the claim that freedom to do otherwise is consistent with determinism. My question is simple: if Diana really gave Ernie free will, why isn't she worried that he won't use it precisely as she would like? Diana's non-nervousness, I argue, indicates Ernie's non-freedom. Arguably, the intuition that Ernie lacks freedom to do otherwise is stronger than the direct intuition that he is simply not responsible; this result highlights the importance of the denial of the principle of alternative possibilities for compatibilist theories of responsibility. Along the way, I clarify the dialectical role and structure of “manipulation arguments”, and compare the manipulation argument I develop with the more familiar Consequence Argument. I contend that the two arguments are importantly mutually supporting and reinforcing. The result: classical compatibilists should be nervous—and if PAP is true, all compatibilists should be nervous.
This Introduction has three sections, on "logical fatalism," "theological fatalism," and the problem of future contingents, respectively. In the first two sections, we focus on the crucial idea of "dependence" and the role it plays in fatalistic arguments. Arguably, the primary response to the problems of logical and theological fatalism invokes the claim that the relevant past truths or divine beliefs depend on what we do, and therefore needn't be held fixed when evaluating what we can do. We call the sort of dependence needed for this response to be successful "dependence with a capital 'd'": Dependence. We consider different accounts of Dependence, especially the account implicit in the so-called "Ockhamist" response to the fatalistic arguments. Finally, we present the problem of future contingents: what could "ground" truths about the undetermined future? On the other hand, how could all such propositions fail to be true?
The plane was going to crash, but it didn't. Johnny was going to bleed to death, but he didn't. Geach sees here a changing future. In this paper, I develop Geach's primary argument for the (almost universally rejected) thesis that the future is mutable (an argument from the nature of prevention), respond to the most serious objections such a view faces, and consider how Geach's view bears on traditional debates concerning divine foreknowledge and human freedom. As I hope to show, Geach's view constitutes a radically new view on the logic of future contingents, and deserves the status of a theoretical contender in these debates.
In this paper, I introduce a problem in the philosophy of religion – the problem of divine moral standing – and explain how this problem is distinct from (albeit related to) the more familiar problem of evil (with which it is often conflated). In short, the problem is this: in virtue of how God would be (or, on some given conception, is) “involved in” our actions, how is it that God has the moral standing to blame us for performing those very actions? In light of the recent literature on “moral standing”, I consider God’s moral standing to blame on two models of “divine providence”: open theism, and theological determinism. I contend that God may have standing on open theism, and – perhaps surprisingly – may also have standing, even on theological determinism, given the truth of compatibilism. Thus, if you think that God could not justly both determine and blame, then you will have to abandon compatibilism. The topic of this paper thus sheds considerable light on the traditional philosophical debate about the conditions of moral responsibility.
Everyone agrees that we can’t change the past. But what about the future? Though the thought that we can change the future is familiar from popular discourse, it enjoys virtually no support from philosophers, contemporary or otherwise. In this paper, I argue that the thesis that the future is mutable has far more going for it than anyone has yet realized. The view, I hope to show, gains support from the nature of prevention, can provide a new way of responding to arguments for fatalism, can account for the utility of total knowledge of the future, and can help in providing an account of the semantics of the English progressive. On the view developed, the future is mutable in the following sense: perhaps, once, it was true that you would never come to read a paper defending the mutability of the future. And then the future changed. And now you will. 
At least since Aristotle’s famous 'sea-battle' passages in On Interpretation 9, some substantial minority of philosophers has been attracted to the doctrine of the open future--the doctrine that future contingent statements are not true. But, prima facie, such views seem inconsistent with the following intuition: if something has happened, then (looking back) it was the case that it would happen. How can it be that, looking forwards, it isn’t true that there will be a sea battle, while also being true that, looking backwards, it was the case that there would be a sea battle? This tension forms, in large part, what might be called the problem of future contingents. A dominant trend in temporal logic and semantic theorizing about future contingents seeks to validate both intuitions. Theorists in this tradition--including some interpretations of Aristotle, but paradigmatically, Thomason (1970), as well as more recent developments in Belnap et al. (2001) and MacFarlane (2003, 2014)--have argued that the apparent tension between the intuitions is in fact merely apparent. In short, such theorists seek to maintain both of the following two theses: (i) the open future: future contingents are not true, and (ii) retro-closure: from the fact that something is true, it follows that it was the case that it would be true. It is well-known that reflection on the problem of future contingents has in many ways been inspired by importantly parallel issues regarding divine foreknowledge and indeterminism. In this paper, we take up this perspective, and ask what accepting both the open future and retro-closure predicts about omniscience. When we theorize about a perfect knower, we are theorizing about what an ideal agent ought to believe. Our contention is that there isn’t an acceptable view of ideally rational belief given the assumptions of the open future and retro-closure, and thus this casts doubt on the conjunction of those assumptions.
A central question, if not the central question, of philosophy of perception is whether sensory states have a nature similar to thoughts about the world, whether they are essentially representational. According to the content view, at least some of our sensory states are, at their core, representations with contents that are either accurate or inaccurate. Tyler Burge’s Origins of Objectivity is the most sustained and sophisticated defense of the content view to date. His defense of the view is problematic in several ways. The most significant problem is that his approach does not sit well with mainstream perceptual psychology.
Representationalists have routinely expressed skepticism about the idea that inflexible responses to stimuli are to be explained in representational terms. Representations are supposed to be more than just causal mediators in the chain of events stretching from stimulus to response, and it is difficult to see how the sensory states driving reflexes are doing more than playing the role of causal intermediaries. One popular strategy for distinguishing representations from mere causal mediators is to require that representations are decoupled from specific stimulus conditions. I believe this requirement on representation is mistaken and at odds with explanatory practices in sensory ecology. Even when sensory states have the job of coordinating a specific output with a specific input, we can still find them doing the work of representations, carrying information needed for organisms to respond successfully to environmental conditions. We can uncover information at work by intervening specifically on the information conveyed by sensory states, leaving their causal role undisturbed.
Teleological accounts of sensory normativity treat normal functioning for a species as a standard: sensory error involves departure from normal functioning for the species, i.e. sensory malfunction. Straightforward reflection on sensory trade-offs reveals that normal functioning for a species can exhibit failures of accuracy. Acknowledging these failures of accuracy is central to understanding the adaptations of a species. To make room for these errors we have to go beyond the teleological framework and invoke the notion of an ideal observer from vision science. The notion of an ideal observer also sheds light on the important distinction between sensory malfunction and sensory limitation.
This paper addresses the so-called paradox of fiction, the problem of explaining how we can have emotional responses towards fiction. I claim that no account has yet provided an adequate explanation of how we can respond with genuine emotions when we know that the objects of our responses are fictional. I argue that we should understand the role played by the imagination in our engagement with fiction as functionally equivalent to that which it plays under the guise of acceptance in practical reasoning, suggesting that the same underlying cognitive-affective mechanisms are involved in both activities. As such, our imaginative engagement with fiction unproblematically arouses emotions, but only to the extent that we are not occurrently attending to our epistemic relation to the fiction, i.e. fully attending to the fact that the object of our response is merely fictional. In order to illuminate this idea I examine a recent proposal that the phenomenology of attention is partially non-attributive, and I argue that emotional phenomenology too shares this characteristic.
Translation of the article: Davies T., Chandler R. Online deliberation design: Choices, criteria, and evidence // Democracy in motion: Evaluating the practice and impact of deliberative civic engagement / Nabatchi T., Weiksner M., Gastil J., Leighninger M. (eds.). -- Oxford: Oxford univ. press, 2013. -- P. 103-131. Translated by A. Kulik. This is a review of empirical research on the design of online forums intended to engage citizens in deliberation. Dimensions of design are defined for various characteristics of deliberation: its purpose, the target audience, the separation of participants in space and time, the communication medium, and the organization of the deliberative process. After a brief overview of criteria for evaluating design options, empirical evidence bearing on each option is considered. The effectiveness of online deliberation depends on how well the conditions of communication are matched to the deliberative tasks. Trade-offs, such as between anonymous and identifiable participation, suggest different designs depending on the purpose of the deliberation and the composition of the participants. The findings are based on existing technologies and may change as technologies and users co-evolve.
A common objection to representationalism is that a representationalist view of phenomenal character cannot accommodate the effects that shifts in covert attention have on visual phenomenology: covert attention can make items more visually prominent than they would otherwise be without altering the content of visual experience. Recent empirical work on attention casts doubt on previous attempts to advance this type of objection to representationalism and it also points the way to an alternative development of the objection.
This paper explores the role of aesthetic judgements in mathematics by focussing on the relationship between the epistemic and aesthetic criteria employed in such judgements, and on the nature of the psychological experiences underpinning them. I claim that aesthetic judgements in mathematics are plausibly understood as expressions of what I will call ‘aesthetic-epistemic feelings’ that serve a genuine cognitive and epistemic function. I will then propose a naturalistic account of these feelings in terms of sub-personal processes of representing and assessing the relation between cognitive processes and certain properties of the stimuli at which they are directed.
An attempt to show that Plato has a unified approach to the rationality of belief and the rationality of desire, and that his defense of that approach is a powerful one.
In his The Phenomenon of Man, Pierre Teilhard de Chardin develops concepts of consciousness, the noosphere, and psychosocial evolution. This paper explores Teilhard’s evolutionary concepts as resonant with thinking in psychology and physics. It explores contributions from archetypal depth psychology, quantum physics, and neuroscience to elucidate relationships between mind and matter. Teilhard’s work can be seen as advancing this psychological lineage or psychogenesis. That is, the evolutionary emergence of matter in increasing complexity from sub-atomic particles to the human brain and reflective consciousness leads to a noosphere evolving towards an Omega point. Teilhard’s central ideas provide intimations of a numinous principle implicit in cosmology and the discovery that in and through humanity evolution not only becomes conscious of itself but also directed and purposive.
At the most general level, "manipulation" refers to one of many ways of influencing behavior, along with (but to be distinguished from) other such ways, such as coercion and rational persuasion. Like these other ways of influencing behavior, manipulation is of crucial importance in various ethical contexts. First, there are important questions concerning the moral status of manipulation itself; manipulation seems to be morally problematic in ways in which (say) rational persuasion does not. Why is this so? Furthermore, the notion of manipulation has played an increasingly central role in debates about free will and moral responsibility. Despite its significance in these (and other) contexts, however, the notion of manipulation itself remains deeply vexed. I would say notoriously vexed, but in fact direct philosophical treatments of the notion of manipulation are few and far between, and those that do exist are notable for the sometimes widely divergent conclusions they reach concerning what it is. I begin by addressing (though certainly not resolving) the conceptual issue of how to distinguish manipulation from other ways of influencing behavior. Along the way, I also briefly address the (intimately related) question of the moral status of manipulation: what, if anything, makes it morally problematic? Then I discuss the controversial ways in which the notion of manipulation has been employed in contemporary debates about free will and moral responsibility.
Multiple drug resistant strains of HIV and continuing difficulties with vaccine development highlight the importance of psychological interventions which aim to influence the psychosocial and emotional factors empirically demonstrated to be significant predictors of immunity, illness progression and AIDS mortality in seropositive persons. Such data have profound implications for psychological interventions designed to modify psychosocial factors predictive of enhanced risk of exposure to HIV as well as the neuroendocrine and immune mechanisms mediating the impact of such factors on disease progression. Many of these factors can be construed as unconscious mental ones, and psychoanalytic self-psychology may be a useful framework for conceptualizing psychic and immune defence as well as bodily and self-integration in HIV infection. Although further prospective studies and cross-cultural validation of research are necessary, existing data suggest that psychoanalytic insights may be useful both in therapeutic interventions and evaluative research which would require an underlying epistemology of the complementarity of mind and matter.
Can new technology enhance purpose-driven, democratic dialogue in groups, governments, and societies? Online Deliberation: Design, Research, and Practice is the first book that attempts to sample the full range of work on online deliberation, forging new connections between academic research, technology designers, and practitioners. Since some of the most exciting innovations have occurred outside of traditional institutions, and those involved have often worked in relative isolation from each other, work in this growing field has often failed to reflect the full set of perspectives on online deliberation. This volume is aimed at those working at the crossroads of information/communication technology and social science, and documents early findings in, and perspectives on, this new field by many of its pioneers.

CONTENTS:

Introduction: The Blossoming Field of Online Deliberation (Todd Davies, pp. 1-19)

Part I - Prospects for Online Civic Engagement
Chapter 1: Virtual Public Consultation: Prospects for Internet Deliberative Democracy (James S. Fishkin, pp. 23-35)
Chapter 2: Citizens Deliberating Online: Theory and Some Evidence (Vincent Price, pp. 37-58)
Chapter 3: Can Online Deliberation Improve Politics? Scientific Foundations for Success (Arthur Lupia, pp. 59-69)
Chapter 4: Deliberative Democracy, Online Discussion, and Project PICOLA (Public Informed Citizen Online Assembly) (Robert Cavalier with Miso Kim and Zachary Sam Zaiss, pp. 71-79)

Part II - Online Dialogue in the Wild
Chapter 5: Friends, Foes, and Fringe: Norms and Structure in Political Discussion Networks (John Kelly, Danyel Fisher, and Marc Smith, pp. 83-93)
Chapter 6: Searching the Net for Differences of Opinion (Warren Sack, John Kelly, and Michael Dale, pp. 95-104)
Chapter 7: Happy Accidents: Deliberation and Online Exposure to Opposing Views (Azi Lev-On and Bernard Manin, pp. 105-122)
Chapter 8: Rethinking Local Conversations on the Web (Sameer Ahuja, Manuel Pérez-Quiñones, and Andrea Kavanaugh, pp. 123-129)

Part III - Online Public Consultation
Chapter 9: Deliberation in E-Rulemaking? The Problem of Mass Participation (David Schlosberg, Steve Zavestoski, and Stuart Shulman, pp. 133-148)
Chapter 10: Turning GOLD into EPG: Lessons from Low-Tech Democratic Experimentalism for Electronic Rulemaking and Other Ventures in Cyberdemocracy (Peter M. Shane, pp. 149-162)
Chapter 11: Baudrillard and the Virtual Cow: Simulation Games and Citizen Participation (Hélène Michel and Dominique Kreziak, pp. 163-166)
Chapter 12: Using Web-Based Group Support Systems to Enhance Procedural Fairness in Administrative Decision Making in South Africa (Hossana Twinomurinzi and Jackie Phahlamohlaka, pp. 167-169)
Chapter 13: Citizen Participation Is Critical: An Example from Sweden (Tomas Ohlin, pp. 171-173)

Part IV - Online Deliberation in Organizations
Chapter 14: Online Deliberation in the Government of Canada: Organizing the Back Office (Elisabeth Richard, pp. 177-191)
Chapter 15: Political Action and Organization Building: An Internet-Based Engagement Model (Mark Cooper, pp. 193-202)
Chapter 16: Wiki Collaboration Within Political Parties: Benefits and Challenges (Kate Raynes-Goldie and David Fono, pp. 203-205)
Chapter 17: Debian’s Democracy (Gunnar Ristroph, pp. 207-211)
Chapter 18: Software Support for Face-to-Face Parliamentary Procedure (Dana Dahlstrom and Bayle Shanks, pp. 213-220)

Part V - Online Facilitation
Chapter 19: Deliberation on the Net: Lessons from a Field Experiment (June Woong Rhee and Eun-mee Kim, pp. 223-232)
Chapter 20: The Role of the Moderator: Problems and Possibilities for Government-Run Online Discussion Forums (Scott Wright, pp. 233-242)
Chapter 21: Silencing the Clatter: Removing Anonymity from a Corporate Online Community (Gilly Leshed, pp. 243-251)
Chapter 22: Facilitation and Inclusive Deliberation (Matthias Trénel, pp. 253-257)
Chapter 23: Rethinking the ‘Informed’ Participant: Precautions and Recommendations for the Design of Online Deliberation (Kevin S. Ramsey and Matthew W. Wilson, pp. 259-267)
Chapter 24: PerlNomic: Rule Making and Enforcement in Digital Shared Spaces (Mark E. Phair and Adam Bliss, pp. 269-271)

Part VI - Design of Deliberation Tools
Chapter 25: An Online Environment for Democratic Deliberation: Motivations, Principles, and Design (Todd Davies, Brendan O’Connor, Alex Cochran, Jonathan J. Effrat, Andrew Parker, Benjamin Newman, and Aaron Tam, pp. 275-292)
Chapter 26: Online Civic Deliberation with E-Liberate (Douglas Schuler, pp. 293-302)
Chapter 27: Parliament: A Module for Parliamentary Procedure Software (Bayle Shanks and Dana Dahlstrom, pp. 303-307)
Chapter 28: Decision Structure: A New Approach to Three Problems in Deliberation (Raymond J. Pingree, pp. 309-316)
Chapter 29: Design Requirements of Argument Mapping Software for Teaching Deliberation (Matthew W. Easterday, Jordan S. Kanarek, and Maralee Harrell, pp. 317-323)
Chapter 30: Email-Embedded Voting with eVote/Clerk (Marilyn Davis, pp. 325-327)

Epilogue: Understanding Diversity in the Field of Online Deliberation (Seeta Peña Gangadharan, pp. 329-358)

For individual chapter downloads, go to odbook.stanford.edu.
Psychoanalytic self-psychology as outlined by such depth psychologists as Jung, Fordham, Winnicott and Kohut provides a framework for conceptualizing a relationship of complementarity between psychic and immune defence as well as loss of bodily and self integration in disease. Physicist Erwin Schrödinger’s thesis that the so-called “arrow of time” does not necessarily deal a mortal blow to its creator is reminiscent of the concept of timeless dimensions of the unconscious mind and the Self in Analytical Psychology, manifest for instance in dream content and archetypal symbols. These notions are not only consistent with the concepts of timelessness and meaningful coincidence (synchronicity) in psychoanalysis. They are also implicitly spiritual, with intimations of a numinous dimension of the evolutionary process in which humanity participates. This includes the idea that an evolving God becomes conscious through and is completed by humankind in a process (Incarnational) theology which regards the numinous as both immanent and transcendent, and concepts of mind which transcend the individual in a transpersonal sense. The treatment of the psychophysical problem by depth psychologist Carl Jung and physicist Wolfgang Pauli, with their notion of the unconscious archetypes as timeless, cosmic ordering and regulating principles creating a bridge between mind and matter in a relationship of complementarity, is compatible with such a perspective on the numinous, which might in turn be useful for contemporary theology and spirituality.
The dominant view among philosophers of perception is that color experiences, like color judgments, are essentially representational: as part of their very nature color experiences possess representational contents which are either accurate or inaccurate. My starting point in assessing this view is Sydney Shoemaker’s familiar account of color perception. After providing a sympathetic reconstruction of his account, I show how plausible assumptions at the heart of Shoemaker’s theory make trouble for his claim that color experiences represent the colors of things. I consider various ways of trying to avoid the objection, and find all of the responses wanting. My conclusion is that we have reason to be skeptical of the orthodox view that color experiences are constitutively representational.
This paper defines the form of prior knowledge that is required for sound inferences by analogy and single-instance generalizations, in both logical and probabilistic reasoning. In the logical case, the first order determination rule defined in Davies (1985) is shown to solve both the justification and non-redundancy problems for analogical inference. The statistical analogue of determination that is put forward is termed 'uniformity'. Based on the semantics of determination and uniformity, a third notion of "relevance" is defined, both logically and probabilistically. The statistical relevance of one function in determining another is put forward as a way of defining the value of information: the statistical relevance of a function F to a function G is the absolute value of the change in one's information about the value of G afforded by specifying the value of F. This theory provides normative justifications for conclusions projected by analogy from one case to another, and for generalization from an instance to a rule. The soundness of such conclusions, in either the logical or the probabilistic case, can be identified with the extent to which the corresponding criteria (determination and uniformity) actually hold for the features being related.
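The idea that relevance measures the change in one's information about G afforded by specifying F can be given a concrete, if simplified, rendering. The sketch below is my own illustration, not the paper's formalism: it treats "information" as Shannon entropy over an empirical joint distribution of (F, G) observations, so that relevance becomes the expected reduction in uncertainty about G from learning F.

```python
# Illustrative sketch (an assumption on my part, not the paper's exact
# definition): statistical relevance of F to G rendered as expected
# information gain, in bits, over observed (f, g) pairs.
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of values."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def relevance(pairs):
    """Expected reduction in uncertainty about G from learning F,
    where pairs is a list of (f_value, g_value) observations."""
    g_values = [g for _, g in pairs]
    base = entropy(g_values)          # uncertainty about G before learning F
    n = len(pairs)
    conditional = 0.0                 # expected uncertainty about G given F
    for f in set(f for f, _ in pairs):
        group = [g for fv, g in pairs if fv == f]
        conditional += (len(group) / n) * entropy(group)
    return base - conditional

# When F fully determines G, learning F removes all uncertainty about G:
print(relevance([("a", 1), ("a", 1), ("b", 2), ("b", 2)]))  # 1.0
# When F carries no information about G, relevance is zero:
print(relevance([("a", 1), ("a", 2), ("b", 1), ("b", 2)]))  # 0.0
```

On this rendering, maximal relevance coincides with the limiting case in which determination holds exactly, matching the paper's claim that soundness tracks the extent to which the criteria actually hold.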
In the opening section of this paper we spell out an account of our naïve view of bodily sensations that is of historical and philosophical significance. This account of our shared view of bodily sensations captures common ground between Descartes, who endorses an error theory regarding our everyday thinking about bodily sensations, and Berkeley, who is more sympathetic with common sense. In the second part of the paper we develop an alternative to this account and discuss what is at stake in deciding between these two ways of understanding our everyday view. In the third and final part of the paper we offer an argument in favour of our alternative.
We analyze the logical form of the domain knowledge that grounds analogical inferences and generalizations from a single instance. The form of the assumptions which justify analogies is given schematically as the "determination rule", so called because it expresses the relation of one set of variables determining the values of another set. The determination relation is a logical generalization of the different types of dependency relations defined in database theory. Specifically, we define determination as a relation between schemata of first order logic that have two kinds of free variables: (1) object variables and (2) what we call "polar" variables, which hold the place of truth values. Determination rules facilitate sound rule inference and valid conclusions projected by analogy from single instances, without implying what the conclusion should be prior to an inspection of the instance. They also provide a way to specify what information is sufficiently relevant to decide a question, prior to knowledge of the answer to the question.
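The determination rule described in this abstract can be given a toy operational reading: an attribute F determines an attribute G over a body of cases when any two cases that agree on F also agree on G; once that background relation holds, the G-value of a single source case can be validly projected onto any target that matches it on F. The sketch below is an illustrative reconstruction, not code from the paper, and the attribute names ('nationality', 'language') are hypothetical examples.

```python
def determines(cases, F, G):
    """Check the determination relation over `cases` (a list of dicts):
    F determines G iff any two cases agreeing on F agree on G."""
    seen = {}
    for case in cases:
        f, g = case[F], case[G]
        if f in seen and seen[f] != g:
            return False  # two cases share an F-value but differ on G
        seen[f] = g
    return True

def project_by_analogy(source, target, F, G):
    """Given background knowledge that F determines G, project the
    source case's G-value onto a target that matches it on F."""
    if source[F] == target[F]:
        return source[G]
    return None  # no match on F: the analogy licenses nothing
```

The point of the determination framework is that such a projection is non-redundant: knowing only that F determines G does not by itself fix the conclusion; the G-value must still be read off the inspected source instance.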
This chapter reviews empirical evidence bearing on the design of online forums for deliberative civic engagement. Dimensions of design are defined for different aspects of the deliberation: its purpose, the target population, the spatiotemporal distance separating participants, the communication medium, and the deliberative process to be followed. After a brief overview of criteria for evaluating different design options, empirical findings are organized around design choices. Research has evolved away from treating technology for online deliberation dichotomously (either present or not) toward nuanced findings that differentiate between technological features, ways of using them, and cultural settings. The effectiveness of online deliberation depends on how well the communicative environment is matched to the deliberative task. Tradeoffs, e.g. between rich and lean media and between anonymous and identifiable participation, suggest different designs depending on the purpose and participants. Findings are limited by existing technologies, and may change as technologies and users co-evolve.
The premise of biological modularity is an ontological claim that appears to come out of practice. We understand that the biological world is modular because we can manipulate different parts of organisms in ways that would only work if there were discrete parts that were interchangeable. This is the foundation of the BioBrick assembly method widely used in synthetic biology. It is one of a number of methods that allow practitioners to construct and reconstruct biological pathways and devices using DNA libraries of standardized parts with known functions. In this paper, we investigate how the practice of synthetic biology reconfigures biological understanding of the key concepts of modularity and evolvability. We illustrate how this practice approach takes engineering knowledge and uses it to try to understand biological organization by showing how the construction of functional parts and processes can be used in synthetic experimental evolution. We introduce a new approach within synthetic biology that uses the premise of a parts-based ontology together with that of organismal self-organization to optimize orthogonal metabolic pathways in E. coli. We then use this and other examples to help characterize semisynthetic categories of modularity, parthood, and evolvability within the discipline.
Inspired by Peter van Inwagen’s “simulacra model” of the resurrection, I investigate whether it could be reasonable to adopt an analogous approach to the problem of evil. Empirically Skeptical Theism, as I call it, is the hypothesis that God shields our lives from irredeemable evils surreptitiously. I argue that EST compares favorably with traditional skeptical theism and with eschatological theodicies, and that EST does not have the negative moral consequences we might suppose.
The historic material culture produced by American Cold War nuclear weapons testing includes objects of scientific inquiry that can be generally categorized as being either ephemeral or enduring. Objects deemed to be ephemeral were of a less substantial nature, being impermanent and expendable in a nuclear test, while enduring objects were by nature more durable and long-lasting. Although all of these objects were ultimately subject to disappearance, the processes by which they were transformed, degraded, or destroyed prior to their disappearing differ. Drawing principally upon archaeological theory, this paper proposes a functional dichotomy for categorizing and studying the historical trajectories of nuclear weapons testing technoscience artifacts. In examining the transformation patterns of steel towers and concrete blockhouses in particular, it explores an associated loss of scientific method that accompanies a science object's disappearance.
As the title, The Entangled State of God and Humanity, suggests, this lecture dispenses with the pre-Copernican, patriarchal, anthropomorphic image of God while presenting a case for a third-millennium theology illuminated by insights from archetypal depth psychology, quantum physics, neuroscience and evolutionary biology. It attempts to smash the conceptual barriers between science and religion and, in so doing, it may contribute to a Copernican revolution which reconciles both perspectives, apparently irreconcilable opposites since the sixteenth century. The published work of C.G. Jung, Wolfgang Pauli, David Bohm and Teilhard de Chardin outlines a process whereby matter evolves in increasing complexity from sub-atomic particles to the human brain and the emergence of a reflective consciousness leading to a noosphere evolving towards an Omega point. The noosphere is the envelope of consciousness and meaning superimposed upon the biosphere, a concept central to the evolutionary thought of the visionary Jesuit palaeontologist Pierre Teilhard de Chardin (The Phenomenon of Man). His central ideas, like those of Jung with his archetypes, in particular that of the Self, provide intimations of a numinous principle implicit in cosmology and the discovery that in and through humanity, evolution becomes not only conscious of itself but also directed and purposive. Although in Jung’s conception it was a “late-born offspring of the unconscious soul”, consciousness has become the mirror which the universe has evolved to reflect upon itself and in which its very existence is revealed. Without consciousness, the universe would not know itself. The implication for process theology is that God and humanity are in an entangled state, so that the evolution of God cannot be separated from that of humankind.
A process (Incarnational) theology inseminated by the theory of evolution is one in which humankind completes the individuation of God towards the wholeness represented, for instance, in cosmic mandala symbols (Jung, Collected Works, vol. 11). Jung believed that God needs humankind to become conscious, whole and complete, a thesis explored in my book The Individuation of God: Integrating Science and Religion (Wilmette, IL: Chiron Publications, 2012). This process theology, like that implicit in the work of Teilhard de Chardin, is panentheistic, so that God is immanent in nature though not identical with it (Atmanspacher 2014: 284).
Physicalism as a worldview and framework for a mechanistic and materialist science seems not to have integrated the tectonic shift created by the rise of quantum physics with its notion of the personal equation of the observer. Psyche had been deliberately removed from a post-Enlightenment science. This paper explores a post-materialist science within a dual-aspect monist conception of nature in which both the mental and the physical exist in a relationship of complementarity, so that they mutually exclude one another and yet are together necessary to explain Reality while being irreducible to one another. Both mind and matter emerge from an underlying holistic domain known as the unus mundus in the Jung-Pauli formulation or as the analogous implicate order in the framing of physicist David Bohm and his colleagues. Kuhnian anomalies such as the role of reflective consciousness in evolution, and phenomena including so-called “near death experiences” (NDEs), are considered from the perspective of dual-aspect monism in conjunction with an emerging evolutionary panentheism.
The principle of Conditional Excluded Middle has been a matter of longstanding controversy in both semantics and metaphysics. According to this principle, we are, inter alia, committed to claims like the following: if the coin had been flipped, it would have landed heads, or if the coin had been flipped, it would not have landed heads. In favour of the principle, theorists have appealed, primarily, to linguistic data, such as that we tend to hear ¬(A > B) as equivalent to (A > ¬B). Williams (2010) provides one of the most compelling recent arguments along these lines by appealing to intuitive equivalences between certain quantified conditional statements. We argue that the strategy Williams employs can be parodied to generate an argument for the unwelcome principle of Should Excluded Middle: the principle that, for any A, either it should be that A or it should be that not-A. Uncovering what goes wrong with this argument casts doubt on a key premise in Williams’ argument. We develop this point by defending the thesis that, like “should”, “would” is a so-called neg-raising predicate. Neg-raising is the linguistic phenomenon whereby “I don’t think that Trump is a good president” strongly tends to implicate “I think that Trump is not a good president,” despite the former not semantically entailing the latter. We show how a defender of a Lewis-style semantics for counterfactuals should implement the idea that the counterfactual is a “neg-raiser”.
In response to criticism, we often say – in these or similar words – “Let’s see you do better!” Prima facie, it looks like this response is a challenge of a certain kind – a challenge to prove that one has what has recently been called standing. More generally, the data here seem to point to a certain kind of norm of criticism: be better. Slightly more carefully: one must criticize x with respect to standard s only if one is better than x with respect to standard s. In this paper, I defend precisely this norm of criticism – an underexplored norm that is nevertheless ubiquitous in our lives, once we begin looking for it. The be better norm is, I hope to show, continuously invoked in a wide range of ordinary settings; it can undergird and explain the widely endorsed non-hypocrisy condition on the standing to blame; and apparent counterexamples to the norm are no such counterexamples at all.
In this paper, I articulate an argument for incompatibilism about moral responsibility and determinism. My argument comes in the form of an extended story, modeled loosely on Peter van Inwagen’s “rollback argument” scenario. I thus call it “the replication argument.” As I aim to bring out, though the argument is inspired by so-called “manipulation” and “original design” arguments, the argument is not a version of either such argument—and plausibly has advantages over both. The result, I believe, is a more convincing incompatibilist argument than those we have considered previously.
Reading only the contemporary and popular literature on the Orthodox spiritual life, it is possible to get the impression that Orthodox Christianity affirms only mystical theology and that it has no place for philosophical investigation, rational inquiry, or thinking for oneself. In this paper I show that this view of the relationship between philosophy and the Orthodox Christian life is one-sided and distorted. For while it is certainly true that reason is impotent to lay bare the very nature of God, St. Gregory the Theologian, St. Maximos the Confessor, and St. John of Damascus all see it as a guide and powerful ally in coming to know that God exists and a little of what He is like, as well as in patterning our lives after He who is the source of all life. The explicit statements that these Fathers make as well as the use that they make of reason in their writings show how it can function as a guide for those in the beginning stages of the spiritual life even if it ultimately points beyond itself to the God who is beyond every conception.
The 1830s and 1840s saw the proliferating usage of “the Beyond” (Jenseits) as a choice term for the afterlife in German public discourse. This linguistic innovation coincided with the rise of empiricism in natural science. It also signaled an emerging religious debate in which bald challenges to the very existence of heaven were aired before the wider German public for the first time. Against the belief of many contemporaries that empirical science was chiefly responsible for this attack on one of the central tenets of Christianity, this essay shows instead the role played by Christian dissenters in the negation of the Beyond. The polemical invocation of an empty Beyond coincided with the separation in the mid-1840s of two dissenting sects – the Deutschkatholiken (“German Catholics”) and the Protestant Free Congregations – from the main Christian Churches. During the Revolution of 1848, these sects, later known as Free Religious Congregations, deepened their critique of the Beyond as they articulated a new creed of radical humanism and natural scientific monism. Yet, despite their secularist agenda, the Free Religious failed to fully secularize. The essay concludes by suggesting that the anticlerical activity of the Free Religious and affiliated freethinking organizations, which lasted into the 1930s, marks a century in which movements of radical political and social dissent remained open to and indeed partially dependent on the negation of the Beyond in order to sacralize humanity and nature.
In The Open Future: Why Future Contingents are all False, Patrick Todd launches a sustained defense of a radical interpretation of the doctrine of the open future, one according to which all claims about undetermined aspects of the future are simply false. Todd argues that this theory is metaphysically more parsimonious than its rivals, and that objections to its logical and practical coherence are much overblown. Todd shows how proponents of this view can maintain classical logic, and argues that the view has substantial advantages over Ockhamist, supervaluationist, and relativist alternatives. Todd draws inspiration from theories of “neg-raising” in linguistics and from debates about omniscience within the philosophy of religion, and defends a crucial comparison between his account of future contingents and certain more familiar theories of counterfactuals. Further, Todd defends his theory of the open future from the charges that it cannot make sense of our practices of betting, makes our credences regarding future contingents unintelligible, and is at odds with proper norms of assertion. In the end, in Todd’s classical open future, we have a compelling new solution to the longstanding “problem of future contingents”. (Note: this is a pre-production copy of Chapter 3.)
Globalization is resulting in complex decisions by businesses as to where and what to produce, while free trade is resulting in a greater menu of choices for consumers, often with the blending of products and goods from various cultures, called ‘glocalization.’ This paper reviews the theories and practices behind these current happenings, which are each economic, political-economic, institutional, and sociological, first by looking at the supply side, why certain countries produce the goods that they do, and then at the demand side, why consumers have particular, cultural tastes and preferences for goods. It also proffers theories to explain firm location and intra-industry trade, which occurs when countries trade similar products rather than differentiating, as economic theory would suggest. After reviewing the literature, through numerous examples of political economy and culture, it argues somewhat normatively that differences in culture and goods are a strength to the world community, and that globalization in the end will not likely result in a singular global culture with a uniformity of exactly identical economic goods anytime in the near future.
Thomas Reid’s philosophy is a philosophy of mind—a Pneumatology in the idiom of 18th-century Scotland. His overarching philosophical project is to construct an account of the nature and operations of the human mind, focusing on the two-way correspondence, in perception and action, between the thinking principle within and the material world without. Like those of his contemporaries, Reid’s treatments of these topics aimed to incorporate the lessons of the scientific revolution. What sets Reid’s philosophy of mind apart is his commitment to a set of intuitive contingent truths he called the principles of common sense. This difference, as this chapter will show, enables Reid to construct an account of mind that resists the temptation to which so many philosophers in his day and ours succumb, i.e., the temptation, in his words, to materialize minds or spiritualize bodies.
This essay (a revised version of my undergraduate honors thesis at Stanford) constructs a theory of analogy as it applies to argumentation and reasoning, especially as used in fields such as philosophy and law. The word analogy has been used in different senses, which the essay defines. The theory developed herein applies to analogia rationis, or analogical reasoning. Building on the framework of situation theory, a type of logical relation called determination is defined. This determination relation solves a puzzle about analogy in the context of logical argument, namely, whether an analogous situation contributes anything logically over and above what could be inferred from the application of prior knowledge to a present situation. Scholars of reasoning have often claimed that analogical arguments are never logically valid, and that they therefore lack cogency. However, when the right type of determination structure exists, it is possible to prove that projecting a conclusion inferred by analogy onto the situation about which one is reasoning is both valid and non-redundant. Various other properties and consequences of the determination relation are also proven. Some analogical arguments are based on principles such as similarity, which are not logically valid. The theory therefore provides us with a way to distinguish between legitimate and illegitimate arguments. It also provides an alternative to procedures based on the assessment of similarity for constructing analogies in artificial intelligence systems.
This paper argues that private property and rights assignment, especially as applied to communication infrastructure and information, should be informed by advances in both technology and our understanding of psychology. Current law in this area in the United States and many other jurisdictions is founded on assumptions about human behavior that have been shown not to hold empirically. A joint recognition of this fact, together with an understanding of what new technologies make possible, leads one to question basic assumptions about how law is made and what laws we should have in a given area, if any. I begin by analyzing different aspects of U.S. law, from a high-level critique of law making to a critique of rights assignment for what I call 'simple nonrival goods.' I describe my understanding, as a non-lawyer with a background in psychology and computing, of the current conventions in U.S. law, consider the foundational assumptions that justify current conventions, describe advances in psychology and technology that call these conventions into question, and briefly note how the law might normatively change in this light. I then apply this general analysis to the question of domain name assignment by the Internet Corporation for Assigned Names and Numbers (ICANN).