We consider how an epistemic network might self-assemble from the ritualization of the individual decisions of simple heterogeneous agents. In such evolved social networks, inquirers may be significantly more successful than they could be investigating nature on their own. The evolved network may also dramatically lower the epistemic risk faced by even the most talented inquirers. We consider networks that self-assemble in the context of both perfect and imperfect communication and compare the behaviour of inquirers in each. This provides a step in bringing together two new and developing research programs, the theory of self-assembling games and the theory of network epistemology.
Nietzsche, Nihilism and the Philosophy of the Future examines Nietzsche's analysis of and response to contemporary nihilism, the sense that nothing has value or ...
Edited by Ignazio Licata and Ammar J. Sakaji. Contributors: Jeffrey A. Barrett, Enrico Celeghini, Leonardo Chiatti, Maurizio Consoli, Davide Fiscaletti, Ervin Goldfain, Annick Lesne, Maria Paola Lombardo, Mohammad Mehrafarin, Ronald Mirman, Ulrich Mohrhoff, Renato Nobili, Farrin Payandeh, Eliano Pessa, L.I. Petrova, Erasmo Recami, Giovanni Salesi, Francesco Maria Scarpa, Mohammad Vahid Takook, Giuseppe Vitiello. This volume grew out of an informal discussion among friends and colleagues on the question: what topic do you regard as fundamental in theoretical physics today? Naturally we received different answers according to the contributors' dispositions and research areas, and some answers in a superposition state too. And yet some attractors have emerged, pointing out the keys to the physicist's conception of Nature, all of them converging towards a group of strongly interconnected problems. Let's see them one by one:
- The concept of particle identity in Quantum Mechanics (QM) and Quantum Field Theory (QFT);
- The relationship between QM and QFT, in particular the non-local aspects in Field Theory and the problem of non-perturbative solutions;
- The local/global problem in the relationship between particle physics and cosmology;
- The role of the Renormalization Group in describing meso- and macroscopic emergent behaviour;
- The possible extension of the Poincaré symmetry group and Quantum Cosmology;
- The Higgs "mechanism" and the origin of mass.
This paper investigates how environmental structure, given the innate properties of a population, affects the degree to which this population can adapt to the environment. The model we explore involves simple agents in a 2-d world which can sense a local food distribution and, as specified by their genomes, move to a new location and ingest the food there. Adaptation in this model consists of improving the genomic sensorimotor mapping so as to maximally exploit the environmental resources. We vary environmental structure to see its specific effect on adaptive success. In our investigation, two properties of environmental structure, conditioned by the sensorimotor capacities of the agents, have emerged as significant factors in determining adaptive success: (1) the information content of the environment, which quantifies the diversity of conditions sensed, and (2) the expected utility for optimal action. These correspond to the syntactic and pragmatic aspects of environmental information, respectively. We find that the ratio of expected utility to information content predicts adaptive success measured by population gain, and information content alone predicts the fraction of ideal utility achieved. These quantitative methods and specific conclusions should aid in understanding the effects of environmental structure on evolutionary adaptation in a wide range of evolving systems, both artificial and natural.
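On one natural reading (an interpretive sketch, not the authors' exact definitions), the two environmental quantities can be glossed as the Shannon entropy of the distribution of locally sensed conditions and the mean payoff of an optimal response to each condition:

\[
H \;=\; -\sum_{s} p(s)\,\log_2 p(s), \qquad
U^{*} \;=\; \sum_{s} p(s)\,\max_{a} u(s,a),
\]

where s ranges over sensed local states, a over available actions, and u(s,a) is the utility of taking action a in state s. On this gloss, the reported results say that the ratio U*/H predicts population gain, while H alone predicts the fraction of U* actually achieved.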
Both liberals and feminists have long criticized the paternalistic approach to prostitution found in most jurisdictions in the U.S. In his recent book Prostitution and Liberalism, Peter de Marneffe defends just such an intervention, arguing that the demonstrated harmfulness of a life of prostitution justifies paternalistic policies aimed at reducing the number of women who are involved in it. Although de Marneffe does not endorse the prohibitionist approach typical in the U.S., he argues that the best reasons for alternative approaches to the practice (including some forms of regulated legalization) are necessarily paternalistic. In my paper, I question de Marneffe’s contention that the strongest reasons for state intervention with regard to prostitution are paternalistic in nature. I argue that reasonable state action toward prostitution is best understood not as a paternalistic intervention to remedy some moral or epistemological failure on the part of prostitutes, but rather as an attempt to advance the interests of vulnerable parties more generally concerning what they reasonably desire but could not otherwise ensure. I further argue that such an approach might favor abolitionist over regulatory policies, depending upon how the vulnerable class is defined.
Barbara Herman's account of rules of moral salience goes far in explaining how Kantian moral theory can integrate historically emergent normative criticisms such as that offered by feminists. The ethical motives that initially lead historical agents to expand our moral categories, however, are often at odds with Kant's (and Herman's) theory of moral motivations. I argue that Hegel offers a more accurate account of ethical motivation under oppressive conditions.
Recent work in cognitive science of religion (CSR) is beginning to converge on a very interesting thesis—that, given the ordinary features of human minds operating in typical human environments, we are naturally disposed to believe in the existence of gods, among other religious ideas (e.g., see Atran [2002], Barrett [2004; 2012], Bering [2011], Boyer [2001], Guthrie [1993], McCauley [2011], Pyysiäinen [2004; 2009]). In this paper, we explore whether such a discovery ultimately helps or hurts the atheist position—whether, for example, it lends credence to atheism by explaining away religious belief or whether it actually strengthens some already powerful arguments against atheism in the relevant philosophical literature. We argue that the recent discoveries of CSR hurt, not help, the atheist position—that CSR, if anything, should not give atheists epistemic assurance.
Mathematicians, physicists, and philosophers of physics often look to the symmetries of an object for insight into the structure and constitution of the object. My aim in this paper is to explain why this practice is successful. In order to do so, I present a collection of results that are closely related to (and in a sense, generalizations of) Beth’s and Svenonius’ theorems.
Over the course of her career, Jean Harvey contributed many invaluable insights that help to make sense of both injustice and resistance. Specifically, she developed an account of what she called “civilized oppression,” which is pernicious in part because it can be difficult to perceive. One way that we ought to pursue what she calls a “life of moral endeavor” is by increasing our perceptual awareness of civilized oppression and ourselves as its agents. In this article I argue that one noxious form of civilized oppression is what Miranda Fricker calls “testimonial injustice.” I then follow Harvey in arguing that one of the methods by which we should work to avoid perpetrating testimonial injustice is by empathizing with others. This is true for two reasons. The first is that in order to manifest what Fricker calls the virtue of testimonial justice, we must have a method by which we “correct” our prejudices or implicit biases, and empathy serves as such a corrective. The second is that there are cases where the virtue of testimonial justice wouldn't in fact correct for testimonial injustice in the way that Fricker suggests, but that actively working to empathize would.
In the literature on multiple realizability and the identity theory, cases of neural plasticity have enjoyed a very limited role. The present article attempts to remedy this small influence by arguing that clinical and experimental evidence of quite extensive neural reorganization offers compelling support for the claim that psychological kinds are multiply realized in neurological kinds, thus undermining the identity theory. In particular, cases are presented where subjects with no measurable psychological deficits also have vast, though gradually received, neurological damage. Common objections and concerns are also discussed and rejected. Outline: 1 Introduction; 2 The GRP, Serial Lesion Effect, and Multiple Realizability; 2.1 A case study of the serial lesion effect; 2.2 Evaluating the case study’s evidence for multiple realizability; 3 The GRP More Generally; 4 Objections to the GRP as Evidence for Multiple Realizability; 4.1 Small plastic effects and neurological taxonomies; 4.2 But do neural regions and locations even matter at all?; 4.3 But are there not other options besides location?; 5 Conclusion.
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
Forgiveness and reconciliation are central to moral life; after all, everyone will be wronged by others and will then face the dual decisions of whether to forgive and whether to reconcile. It is therefore important that we have a clear analysis of each, as well as a thoroughly articulated understanding of how they relate to and differ from each other. Forgiveness has received considerably more attention in the Western philosophical literature than has reconciliation. In this paper I aim to give it the attention it deserves and develop an account of interpersonal reconciliation. On my view reconciliation is fundamentally bilateral (whereas forgiveness is fundamentally unilateral). It entails transparency and agreement between the wrongdoer and the victim as to the nature of a past wrong or set of wrongs. And it requires that moral repair be made between the two parties (which entails that both parties bear proper attitudes towards each other). In making my case I contrast reconciliation with toleration and collaboration, in order to demonstrate that reconciliation also entails forgiveness (though forgiveness does not entail reconciliation).
Some contextually sensitive expressions are such that their context independent conventional meanings need to be in some way supplemented in context for the expressions to secure semantic values in those contexts. As we’ll see, it is not clear that there is a paradigm here, but ‘he’ used demonstratively is a clear example of such an expression. Call expressions of this sort supplementives in order to highlight the fact that their context independent meanings need to be supplemented in context for them to have semantic values relative to the context. Many philosophers and linguists think that there is a lot of contextual sensitivity in natural language that goes well beyond the pure indexicals and supplementives like ‘he’. Constructions/expressions that are good candidates for being contextually sensitive include: quantifiers, gradable adjectives including “predicates of personal taste”, modals, conditionals, possessives and relational expressions taking implicit arguments. It would appear that in none of these cases does the expression/construction in question have a context independent meaning that when placed in context suffices to secure a semantic value for the expression/construction in the context. In each case, some sort of supplementation is required to do this. Hence, all these expressions are supplementives in my sense. For a given supplementive, the question arises as to what the mechanism is for supplementing its conventional meanings in context so as to secure a semantic value for it in context. That is, what form does the supplementation take? The question also arises as to whether different supplementives require different kinds of supplementation. Let us call an account of what, in addition to its conventional meaning, secures a semantic value for a supplementive in context a metasemantics for that supplementive. So we can put our two questions thus: what is the proper metasemantics for a given supplementive; and do all supplementives have the same metasemantics? In the present work, I sketch the metasemantics I formulated for demonstratives in earlier work. Next, I briefly consider a number of other supplementives that I think the metasemantics I propose plausibly applies to and explain why I think that. Finally, I consider the prospects for extending the account to all supplementives. In so doing, I take up arguments due to Michael Glanzberg to the effect that supplementives are governed by two different metasemantics and attempt to respond to them.
In this article, I argue that it is impossible to complete infinitely many tasks in a finite time. A key premise in my argument is that the only way to get to 0 tasks remaining is from 1 task remaining, when tasks are done 1-by-1. I suggest that the only way to deny this premise is by begging the question, that is, by assuming that supertasks are possible. I go on to present one reason why this conclusion (that supertasks are impossible) is important, namely that it implies a new verdict on a decision puzzle propounded by Jeffrey Barrett and Frank Arntzenius.
The Epistemic Objection says that certain theories of time imply that it is impossible to know which time is absolutely present. Standard presentations of the Epistemic Objection are elliptical—and some of the most natural premises one might fill in to complete the argument end up leading to radical skepticism. But there is a way of filling in the details which avoids this problem, using epistemic safety. The new version has two interesting upshots. First, while Ross Cameron alleges that the Epistemic Objection applies to presentism as much as to theories like the growing block, the safety version does not overgeneralize this way. Second, the Epistemic Objection does generalize in a different, overlooked way. The safety objection is a serious problem for a widely held combination of views: “propositional temporalism” together with “metaphysical eternalism”.
How should a group with different opinions (but the same values) make decisions? In a Bayesian setting, the natural question is how to aggregate credences: how to use a single credence function to naturally represent a collection of different credence functions. An extension of the standard Dutch-book arguments that apply to individual decision-makers recommends that group credences should be updated by conditionalization. This imposes a constraint on what aggregation rules can be like. Taking conditionalization as a basic constraint, we gather lessons from the established work on credence aggregation, and extend this work with two new impossibility results. We then explore contrasting features of two kinds of rules that satisfy the constraints we articulate: one kind uses fixed prior credences, and the other uses geometric averaging, as opposed to arithmetic averaging. We also prove a new characterisation result for geometric averaging. Finally we consider applications to neighboring philosophical issues, including the epistemology of disagreement.
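As a point of reference (a standard formulation, not necessarily the notation used in the paper), the two kinds of averaging contrasted above can be written as follows for individual credence functions P_1, ..., P_n and weights w_i ≥ 0 summing to 1, where ω ranges over the atoms of the algebra:

\[
\text{arithmetic: } P(A) \;=\; \sum_{i=1}^{n} w_i \, P_i(A), \qquad
\text{geometric: } P(\omega) \;=\; \frac{\prod_{i=1}^{n} P_i(\omega)^{w_i}}{\sum_{\omega'} \prod_{i=1}^{n} P_i(\omega')^{w_i}}.
\]

A familiar attraction of geometric averaging in this setting is that it commutes with conditionalization: pooling first and then updating on evidence agrees with updating individually and then pooling, a property arithmetic averaging generally lacks.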
We present a general framework for representing belief-revision rules and use it to characterize Bayes's rule as a classical example and Jeffrey's rule as a non-classical one. In Jeffrey's rule, the input to a belief revision is not simply the information that some event has occurred, as in Bayes's rule, but a new assignment of probabilities to some events. Despite their differences, Bayes's and Jeffrey's rules can be characterized in terms of the same axioms: "responsiveness", which requires that revised beliefs incorporate what has been learnt, and "conservativeness", which requires that beliefs on which the learnt input is "silent" do not change. To illustrate the use of non-Bayesian belief revision in economic theory, we sketch a simple decision-theoretic application.
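For orientation, the two rules compared above have familiar textbook statements (given here as standard formulations, not the paper's axiomatic characterization). Bayes's rule conditions on an event E learned with certainty, while Jeffrey's rule takes as input new probabilities q_1, ..., q_n over a partition E_1, ..., E_n:

\[
\text{Bayes: } P_{\text{new}}(A) \;=\; P(A \mid E) \;=\; \frac{P(A \wedge E)}{P(E)}, \qquad
\text{Jeffrey: } P_{\text{new}}(A) \;=\; \sum_{i=1}^{n} q_i \, P(A \mid E_i).
\]

Bayes's rule is recovered as the special case in which some cell E_k of the partition receives q_k = 1.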
Mereological nihilism is the philosophical position that there are no items that have parts. If there are no items with parts then the only items that exist are partless fundamental particles, such as the true atoms (also called philosophical atoms) theorized to exist by some ancient philosophers, some contemporary physicists, and some contemporary philosophers. With several novel arguments I show that mereological nihilism is the correct theory of reality. I will also discuss strong similarities that mereological nihilism has with empirical results in quantum physics. And I will discuss how mereological nihilism vindicates a few other theories, such as a very specific theory of philosophical atomism, which I will call quantum abstract atomism. I will show that mereological nihilism also is an interpretation of quantum mechanics that avoids the problems of other interpretations, such as the widely known, metaphysically generated, quantum paradoxes of quantum physics, which ironically are typically accepted as facts about reality. I will also show why it is very surprising that mereological nihilism is not a widely held theory, and not the premier theory in philosophy.
The Christian doctrine of the Trinity poses a serious philosophical problem. On the one hand, it seems to imply that there is exactly one divine being; on the other hand, it seems to imply that there are three. There is another well-known philosophical problem that presents us with a similar sort of tension: the problem of material constitution. We argue in this paper that a relatively neglected solution to the problem of material constitution can be developed into a novel solution to the problem of the Trinity.
There is a traditional theistic doctrine, known as the doctrine of divine simplicity, according to which God is an absolutely simple being, completely devoid of any metaphysical complexity. On the standard understanding of this doctrine—as epitomized in the work of philosophers such as Augustine, Anselm, and Aquinas—there are no distinctions to be drawn between God and his nature, goodness, power, or wisdom. On the contrary, God is identical with each of these things, along with anything else that can be predicated of him intrinsically.
Creationism is the conjunction of the following theses: (i) fictional individuals (e.g. Sherlock Holmes) actually exist; (ii) fictional names (e.g., 'Holmes') are at least sometimes genuinely referential; (iii) fictional individuals are the creations of the authors who first wrote (or spoke, etc.) about them. CA Creationism is the conjunction of (i) - (iii) and the following thesis: (iv) fictional individuals are contingently existing abstracta; they are non-concrete artifacts of our world and various other possible worlds. Takashi Yagisawa has recently provided a number of arguments designed to show that Creationism is unjustified. I here critically examine three of his challenges to CA Creationism. I argue that each fails to undermine this version of Creationism.
David Lewis holds that a single possible world can provide more than one way things could be. But what are possible worlds good for if they come apart from ways things could be? We can make sense of this if we go in for a metaphysical understanding of what the world is. The world does not include everything that is the case—only the genuine facts. Understood this way, Lewis's “cheap haecceitism” amounts to a kind of metaphysical anti-haecceitism: it says there aren't any genuine facts about individuals over and above their qualitative roles.
This paper is about modeling morality, with a proposal as to the best way to do it. There is the small problem, however, of continuing disagreements over what morality actually is, and so what is worth modeling. This paper resolves this problem around an understanding of the purpose of a moral model, and from this purpose approaches the best way to model morality.
We prove a representation theorem for preference relations over countably infinite lotteries that satisfy a generalized form of the Independence axiom, without assuming Continuity. The representing space consists of lexicographically ordered transfinite sequences of bounded real numbers. This result is generalized to preference orders on abstract superconvex spaces.
Linguists often advert to what are sometimes called linguistic intuitions. These intuitions and the uses to which they are put give rise to a variety of philosophically interesting questions: What are linguistic intuitions – for example, what kind of attitude or mental state is involved? Why do they have evidential force and how might this force be underwritten by their causal etiology? What light might their causal etiology shed on questions of cognitive architecture – for example, as a case study of how consciously inaccessible subpersonal processes give rise to conscious states, or as a candidate example of cognitive penetrability? What methodological issues arise concerning how linguistic intuitions are gathered and interpreted – for example, might some subjects' intuitions be more reliable than others? And what bearing might all this have on philosophers' own appeals to intuitions? This paper surveys and critically discusses leading answers to these questions. In particular, we defend a ‘mentalist’ conception of linguistics and the role of linguistic intuitions therein.
I examine three ‘anti-object’ metaphysical views: nihilism, generalism, and anti-quantificationalism. After setting aside nihilism, I argue that generalists should be anti-quantificationalists. Along the way, I attempt to articulate what a ‘metaphysically perspicuous’ language might even be.
“There are no gaps in logical space,” David Lewis writes, giving voice to sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a *pattern* idea: “Any pattern of instantiation is metaphysically possible.” Another is a *cut and paste* idea: “For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects.” We use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
We critique two popular philosophical definitions of intellectual humility: the “low concern for status” and the “limitations-owning” accounts. Based upon our analysis, we offer an alternative working definition of intellectual humility: the virtue of accurately tracking what one could non-culpably take to be the positive epistemic status of one’s own beliefs. We regard this view of intellectual humility both as a virtuous mean between intellectual arrogance and diffidence and as having advantages over other recent conceptions of intellectual humility. After defending this view, we sketch remaining questions and issues that may bear upon the psychological treatment of intellectual humility such as whether evidence will help determine how this construct relates to general social humility on the one hand, and intellectual traits such as open-mindedness, curiosity, and honesty on the other.
Could space consist entirely of extended regions, without any regions shaped like points, lines, or surfaces? Peter Forrest and Frank Arntzenius have independently raised a paradox of size for space like this, drawing on a construction of Cantor’s. I present a new version of this argument and explore possible lines of response.
That believing truly as a matter of luck does not generally constitute knowing has become epistemic commonplace. Accounts of knowledge incorporating this anti-luck idea frequently rely on one or another of a safety or sensitivity condition. Sensitivity-based accounts of knowledge have a well-known problem with necessary truths, to wit, that any believed necessary truth trivially counts as knowledge on such accounts. In this paper, we argue that safety-based accounts similarly trivialize knowledge of necessary truths and that two ways of responding to this problem for safety, issuing from work by Williamson and Pritchard, are of dubious success.
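The two conditions at issue are commonly stated as follows (standard formulations offered for orientation; the paper's own statements may differ in detail). Sensitivity: if p were false, S would not believe p. Safety: S could not easily have believed p falsely, i.e., in all nearby worlds where S believes p (on the same basis), p is true. In counterfactual notation:

\[
\text{Sensitivity: } \neg p \;\Box\!\!\to\; \neg B_S p, \qquad
\text{Safety: } B_S p \;\Box\!\!\to\; p.
\]

For a necessary truth p there are no worlds in which p is false, so the sensitivity conditional is vacuously satisfied; and every nearby belief in p is true, so safety is satisfied just as trivially, which is the worry pressed above against safety-based accounts.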
Famous results by David Lewis show that plausible-sounding constraints on the probabilities of conditionals or evaluative claims lead to unacceptable results, by standard probabilistic reasoning. Existing presentations of these results rely on stronger assumptions than they really need. When we strip these arguments down to a minimal core, we can see both how certain replies miss the mark, and also how to devise parallel arguments for other domains, including epistemic “might,” probability claims, claims about comparative value, and so on. A popular reply to Lewis's results is to claim that conditional claims, or claims about subjective value, lack truth conditions. For this strategy to have a chance of success, it needs to give up basic structural principles about how epistemic states can be updated—in a way that is strikingly parallel to the commitments of the project of dynamic semantics.
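For readers who want the core of Lewis's argument in view, here is the familiar textbook sketch (not the further stripped-down version the paper develops). Suppose that for every probability function P in a class closed under conditionalization, P(A → C) = P(C | A) whenever P(A) > 0, and consider A, C with P(A ∧ C) > 0 and P(A ∧ ¬C) > 0. Since conditioning on C and on ¬C stays within the class:

\[
P(A \to C) \;=\; P(A \to C \mid C)\,P(C) + P(A \to C \mid \neg C)\,P(\neg C)
\;=\; P(C \mid A \wedge C)\,P(C) + P(C \mid A \wedge \neg C)\,P(\neg C)
\;=\; P(C).
\]

Combined with the starting assumption, this yields P(C | A) = P(C): the hypothesis forces A and C to be probabilistically independent whenever the relevant probabilities are positive, which only trivial probability functions can satisfy.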
Few notions are more central to Aquinas’s thought than those of matter and form. Although he invokes these notions in a number of different contexts, and puts them to a number of different uses, he always assumes that in their primary or basic sense they are correlative both with each other and with the notion of a “hylomorphic compound”—that is, a compound of matter (hyle) and form (morphe). Thus, matter is an entity that can have form, form is an entity that can be had by matter, and a hylomorphic compound is an entity that exists when the potentiality of some matter to have form is actualized. What is more, Aquinas assumes that the matter of a hylomorphic compound explains certain of its general characteristics, whereas its form explains certain of its more specific characteristics. Thus, the matter of a bronze statue explains the fact that it is bronze, whereas its form explains the fact that it is a statue. Again, the matter of a human being explains the fact that it is a material object, whereas its form explains the specific type of material object it is (namely, human). My aim in this chapter is to provide a systematic introduction to Aquinas’s primary or basic notions of matter and form. To accomplish this aim, I focus on the two main theoretical contexts in which he deploys them—namely, his theory of change and his theory of individuation. In both contexts, as we shall see, Aquinas appeals to matter and form to account for relations of sameness and difference holding between distinct individuals.
Readers of Philosophical Psychology may be most familiar with Ron Sun by way of an article recently appearing in this journal on creative composition expressed within his own hybrid computational intelligence model, CLARION (Sun, 2013). That article represents nearly two decades’ work in situated agency stressing the importance of psychologically realistic architectures and processes in the articulation of both functional, and reflectively informative, AI and agent-level social-cultural simulations. Readers may be less familiar with Sun’s 2001 “prolegomena” to related multi-agent (proto-social) research, also from this journal. That article argues that “a proper balance between “objective” social reality and individual cognitive processes” is necessary in order to understand “how individual belief systems... and the social/cultural belief system ... interact” (Sun, 2001, pages 10 and 23). This issue remains central in Sun’s 2012 edited volume, Grounding Social Sciences in the Cognitive Sciences, here addressed from within the expanding field of pioneering researchers bent on orchestrating that proper balance, the “cognitive social sciences.” Its fifteen chapters are sectioned according to culture, politics, religion, and economics, and it closes with an especially rewarding pair of contributions from Gintis, and McCubbins and Turner, under the heading of “unifying perspectives.” Most entries, but for Sun’s own, are serviceably summarized in the introductory overview. So, rather than follow suit, this review will focus on setting out Sun’s vision, noting how this text helps us to realize it more clearly, with a positive focus on a few entries in particular.
The counterpart theorist has a problem: there is no obvious way to understand talk about actuality in terms of counterparts. Fara and Williamson have charged that this obstacle cannot be overcome. Here I defend the counterpart theorist by offering systematic interpretations of a quantified modal language that includes an actuality operator. Centrally, I disentangle the counterpart relation from a related notion, a ‘representation relation’. The relation of possible things to the actual things they represent is variable, and an adequate account of modal language must keep track of the way it is systematically shifted by modal operators. I apply my account to resolve several puzzles about counterparts and actuality. In technical appendices, I prove some important logical results about this ‘representational’ counterpart system and its relationship to other modal systems.
In a recent article, Edward Wierenga defends a version of Social Trinitarianism according to which the Persons of the Trinity form a unique society of really distinct divine beings, each of whom has its own exemplification of divinity. In this paper, I call attention to several philosophical and theological difficulties with Wierenga’s account, as well as to a problem that such difficulties pose for Social Trinitarianism generally. I then briefly suggest what I take to be a more promising approach to the Trinity.
Suppose that all non-qualitative facts are grounded in qualitative facts. I argue that this view naturally comes with a picture in which trans-world identity is indeterminate. But this in turn leads to either pervasive indeterminacy in the non-qualitative, or else contingency in what facts about modality and possible worlds are determinate.
In this paper I defend a form of epistocracy I call limited epistocracy: rule by institutions housing expertise in non-political areas that become politically relevant. This kind of limited epistocracy, I argue, isn’t a far-off fiction. With increasing frequency, governments are outsourcing political power to expert institutions to solve urgent, multidimensional problems because they outperform ordinary democratic decision-making. I consider the objection that limited epistocracy, while more effective than its competitors, lacks a fundamental intrinsic value that its competitors have; namely, political inclusion. After explaining this challenge, I suggest that limited epistocracies can be made compatible with robust political inclusion if specialized institutions are confined to issuing directives that give citizens multiple actionable options. I explain how this safeguards citizens’ inclusion through rational deliberation, choice, and contestation.
Psychopathy is increasingly in the public eye. However, it is yet to be fully and effectively understood. Within the context of the DSM-IV, for example, it is best regarded as a complex family of disorders. The upside is that this family can be tightly related along common dimensions. Characteristic marks of psychopaths include a lack of guilt and remorse for paradigm case immoral actions, leading to the common conception of psychopathy rooted in affective dysfunctions. An adequate portrait of psychopathy is much more complicated, however. Though some neural regions and corresponding functions are commonly indicated, they range across those responsible for action planning and learning, as well as emotional processes. Accordingly, a complete fine-grained map of all neural mechanisms responsible for psychopathy has not been realized, and even if it were, such a map would have limited utility outside of the context of surgical or chemical intervention. The utility of a neural-level understanding of psychopathy is further limited by the fact that it is only applicable in the clinical identification of individual subjects, and the neuro-chemical/biological correction of those subjects after they are positively identified as psychopaths. On the other hand, an information processing model of moral cognition provides for wider-ranging applications. The theoretical and practical implications for such a feasible working model of psychopathic personalities are assessed. Finally, this chapter raises the possibility of directed modification of social-environmental factors discouraging the development of psychopathic personalities in the first place, modifications which are also open to simulation and testing in terms of the same model of moral cognition.
Some hold that the lesson of Russell’s paradox and its relatives is that mathematical reality does not form a ‘definite totality’ but rather is ‘indefinitely extensible’. There can always be more sets than there ever are. I argue that certain contact puzzles are analogous to Russell’s paradox this way: they similarly motivate a vision of physical reality as iteratively generated. In this picture, the divisions of the continuum into smaller parts are ‘potential’ rather than ‘actual’. Besides the intrinsic interest of this metaphysical picture, it has important consequences for the debate over absolute generality. It is often thought that ‘indefinite extensibility’ arguments at best make trouble for mathematical platonists; but the contact arguments show that nominalists face the same kind of difficulty, if they recognize even the metaphysical possibility of the picture I sketch.
Linguists, particularly in the generative tradition, commonly rely upon intuitions about sentences as a key source of evidence for their theories. While widespread, this methodology has also been controversial. In this paper, I develop a positive account of linguistic intuition, and defend its role in linguistic inquiry. Intuitions qualify as evidence as a form of linguistic behavior, which, since it is partially caused by linguistic competence (the object of investigation), can be used to study this competence. I defend this view by meeting two challenges. First, that intuitions are collected through methodologically unsound practices, and second, that intuition cannot distinguish between the contributions of competence and performance systems.
Conscience is oft-referred to yet not understood. This text develops a theory of cognition around a model of conscience, the ACTWith model. It represents a synthesis of results from contemporary neuroscience with traditional philosophy, building from Jamesian insights into the emergence of the self to narrative identity, all the while motivated by a single mechanism as represented in the ACTWith model. Emphasis is placed on clarifying historical expressions and demonstrations of conscience - Socrates, Heidegger, Kant, M.L. King - in light of the ACTWith model, while at once turning these resources to developing the basic architecture. In the end, this text aims to enrich moral theory by improving our understanding of moral cognition, while at once providing a useful tool in everyday moral practice and self-development.
Scientific knowledge is not merely a matter of reconciling theories and laws with data and observations. Science presupposes a number of metatheoretic shaping principles in order to judge good methods and theories from bad. Some of these principles are metaphysical and some are methodological. While many shaping principles have endured since the scientific revolution, others have changed in response to conceptual pressures both from within science and without. Many of them have theistic roots. For example, the notion that nature conforms to mathematical laws flows directly from the early modern presupposition that there is a divine Lawgiver. This interplay between theism and shaping principles is often unappreciated in discussions about the relation between science and religion. Today, of course, naturalists reject the influence of theism and prefer to do science on their terms. But as Robert Koons and Alvin Plantinga have argued, this is more difficult than is typically assumed. In particular, they argue, metaphysical naturalism is in conflict with several metatheoretic shaping principles, especially explanatory virtues such as simplicity and with scientific realism more broadly. These arguments will be discussed as well as possible responses. In the end, theism is able to provide justification for the philosophical foundations of science that naturalism cannot.
Predication is an indisputable part of our linguistic behavior. By contrast, the metaphysics of predication has been a matter of dispute ever since antiquity. According to Plato—or at least Platonism, the view that goes by Plato’s name in contemporary philosophy—the truths expressed by predications such as “Socrates is wise” are true because there is a subject of predication (e.g., Socrates), there is an abstract property or universal (e.g., wisdom), and the subject exemplifies the property. This view is supposed to be general, applying to all predications, whether the subject of predication is a person, a planet, or a property. Despite the controversy surrounding the metaphysics of predication, many theistic philosophers—including the majority of contemporary analytic theists—regard Platonism as extremely attractive. At the same time, however, such philosophers are also commonly attracted to a form of traditional theism that has at its core the thesis that God is an absolutely independent...
We criticize the bare theory of quantum mechanics, a theory on which the Schrödinger equation is universally valid and the standard way of thinking about superpositions is correct.
Though legal positivism remains popular, H.L.A. Hart’s version has fallen somewhat by the wayside. This is because, according to many, the central task of a theory of law is to explain the so-called ‘normativity of law’. Hart’s theory, it is thought, is not up to the task. Some have suggested modifying the theory accordingly. This paper argues that both Hart’s theory and the normativity of law have been misunderstood. First, a popular modification of Hart’s theory is considered and rejected. It stems from a misunderstanding of Hart and his project. Second, a new understanding of the mysterious but often-mentioned ‘normativity of law’ is presented. Once we have dispelled some misunderstandings of Hart’s view and clarified the sense in which law is supposed to be normative, we see that Hart’s view, unmodified, is well suited to the task of explaining law’s normativity.
Theology is the preeminent academic discipline during the Middle Ages and, as a result, most of the great thinkers of this period are highly trained theologians. Although this is common knowledge, it is sometimes overlooked that the systematic nature of medieval theology led its practitioners to develop full treatments of virtually every area within philosophy. Indeed, theological reflection not only provides the main context in which the medievals theorize about what we would now recognize as distinctively philosophical issues, but it is responsible for some of their most significant philosophical contributions. To give just a few examples: it is problems with the Christian doctrine of the Incarnation that prompt medievals to develop the notions of ‘substance’ and ‘person’ in striking and original ways; it is problems with the doctrine of the Eucharist that lead them to consider the possibility of ‘accidents that do not inhere’; and it is problems of...
The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
Eliminativists sometimes invoke evolutionary debunking arguments against ordinary object beliefs, either to help them establish object skepticism or to soften the appeal of commonsense ontology. I argue that object debunkers face a self-defeat problem: their conclusion undermines the scientific support for one of their premises, because evolutionary biology depends on our object beliefs. Using work on reductionism and multiple realizability from the philosophy of science, I argue that it will not suffice for an eliminativist debunker to simply appeal to some object-free surrogate theory of evolution that results from converting any scientific proposition about some object K into a proposition about simples arranged K-wise. In the process, I examine some hazards peculiar to eliminative reductions of scientific theories, and propose a trilemma for eliminativists who attempt to recoup generality for ontologically sparse reducing theories by appealing to pluralities of simples arranged K-wise. The paper is intended to define and develop the object debunker’s self-defeat problem for further study, and to clarify some of the ways sparse and abundant ontologies interact with scientific theory.
Over the course of her career, Jean Harvey argued that as agents engaged in a “life of moral endeavor,” we should understand ourselves and others to be moral works in progress, always possessing the potential to grow beyond and become more than the sum of our past wrongs. In this paper I follow Harvey and argue that in order to live a life of moral endeavor, it is not enough merely to know about injustice. Instead, we must engage in the difficult and often painful task of overcoming deep-seated cognitive biases that cause us to fail to perceive the ubiquitous injustice that pervades our world. I conclude by arguing that education, empathy, and love can each help us to increase our perceptual awareness of injustice and so should be recognized to be crucial parts of a life of moral endeavor.
Peter Abelard is one of the greatest philosophers of the medieval period. Although best known for his views about universals and his dramatic love affair with Heloise, he made a number of important contributions in metaphysics, logic, philosophy of language, mind and cognition, philosophical theology, ethics, and literature. The essays in this volume survey the entire range of Abelard's thought, and examine his overall achievement in its intellectual and historical context. They also trace Abelard's influence on later thought and his relevance to philosophical debates today.