According to what has long been the dominant school of thought in analytic meta-ontology – defended not only by W. V. O. Quine, but also by Bertrand Russell, Alvin Plantinga, Peter van Inwagen, and many others – the meaning of ‘there is’ is identical to the meaning of ‘there exists.’ The most (in)famous aberration from this view is advanced by Alexius Meinong, whose ontological picture has endured extensive criticism (and borderline abuse) from several subscribers to the majority view. Meinong denies the identity of being and existence. That is, he denies that ‘there is’ and ‘there exists’ are semantically equivalent, and espouses a theory according to which there are things that do not exist. Here I defend a revised version of this view, which I call “Noncontradictory Neo-Meinongianism.” Focusing primarily on van Inwagen’s arguments in “Meta-Ontology” (1998), I argue that Noncontradictory Neo-Meinongianism is, on commonsensical grounds, preferable to the meta-ontological theories of van Inwagen and Meinong.
Samuel Kerstein’s recent How To Treat Persons (2013) is an ambitious attempt to develop a new, broadly Kantian account of what it is to treat others as mere means and what it means to act in accordance with others’ dignity. His project is explicitly nonfoundationalist: his interpretation stands or falls on its ability to accommodate our pretheoretic intuitions, and he does an admirable job of carefully handling a range of well-fleshed-out and sometimes subtle examples. In what follows, I shall give a quick summary of the chapters and then say two good things about the book and one critical thing.
From the end of the twelfth century until the middle of the eighteenth century, the concept of a right of necessity – i.e., the moral prerogative of an agent, given certain conditions, to use or take someone else’s property in order to get out of his plight – was common among moral and political philosophers, who took it to be a valid exception to the standard moral and legal rules. In this essay, I analyze Samuel Pufendorf’s account of such a right, founded on the basic instinct of self-preservation and on the notion that, in civil society, we have certain minimal duties of humanity towards each other. I review Pufendorf’s secularized account of natural law, his conception of the civil state, and the function of private property. I then turn to his criticism of Grotius’s understanding of the right of necessity as a retreat to the pre-civil right of common use, and defend his account against some recent criticisms. Finally, I examine the conditions deemed necessary and jointly sufficient for this right to be claimable, and conclude by pointing to the main strengths of this account. Keywords: Samuel Pufendorf, Hugo Grotius, right of necessity, duty of humanity, private property.
Judgments of blame for others are typically sensitive to what an agent knows and desires. However, when people act negligently, they do not know what they are doing and do not desire the outcomes of their negligence. How, then, do people attribute blame for negligent wrongdoing? We propose that people attribute blame for negligent wrongdoing based on perceived mental control, or the degree to which an agent guides their thoughts and attention over time. To acquire information about others’ mental control, people self-project their own perceived mental control to anchor third-personal judgments about mental control and concomitant responsibility for negligent wrongdoing. In four experiments (N = 841), we tested whether perceptions of mental control drive third-personal judgments of blame for negligent wrongdoing. Study 1 showed that the ease with which people can counterfactually imagine an individual being non-negligent mediated the relationship between judgments of control and blame. Studies 2a and 2b indicated that perceived mental control has a strong effect on judgments of blame for negligent wrongdoing and that first-personal judgments of mental control are moderately correlated with third-personal judgments of blame for negligent wrongdoing. Finally, we used an autobiographical memory manipulation in Study 3 to make personal episodes of forgetfulness salient. Participants for whom past personal episodes of forgetfulness were made salient judged negligent wrongdoers less harshly compared to a control group for whom past episodes of negligence were not salient. Collectively, these findings suggest that first-personal judgments of mental control drive third-personal judgments of blame for negligent wrongdoing and indicate a novel role for counterfactual thinking in the attribution of responsibility.
Samuel Alexander was a central figure of the new wave of realism that swept across the English-speaking world in the early twentieth century. His Space, Time, and Deity (1920a, 1920b) was taken to be the official statement of realism as a metaphysical system. But many historians of philosophy are quick to point out the idealist streak in Alexander’s thought. After all, as a student he was trained at Oxford in the late 1870s and early 1880s as British Idealism was beginning to flourish. This naturally had some effect on his philosophical outlook and it is said that his early work is overtly idealist. In this paper I examine his neglected and understudied reactions to British Idealism in the 1880s. I argue that Alexander was not an idealist during this period and should not be considered as part of the British Idealist tradition, philosophically speaking.
We sometimes fail unwittingly to do things that we ought to do. And we are, from time to time, culpable for these unwitting omissions. We provide an outline of a theory of responsibility for unwitting omissions. We emphasize two distinctive ideas: (i) many unwitting omissions can be understood as failures of appropriate vigilance; and (ii) the sort of self-control implicated in these failures of appropriate vigilance is valuable. We argue that the norms that govern vigilance and the value of self-control explain culpability for unwitting omissions.
We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept the penalty in order to protect a treasure because losing the treasure would hurt even more). We describe an RL agent transformation which allows RL agents that would not otherwise do so to perform some limited self-reflection to learn the training environments in question.
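The reaction-penalty mechanic lends itself to a small sketch. Everything below is our own illustration under assumed names (`masked`, `shaped_reward`) and a simple fixed-size penalty, not the paper's implementation:

```python
# Toy sketch of a pseudo-visibility reaction penalty (illustrative only).
# The NPC's shaped reward subtracts a penalty whenever its action differs
# from the action it would take with the pseudo-visible player hidden.
def masked(observation):
    # hypothetical helper: remove pseudo-visible players from the observation
    return {k: v for k, v in observation.items() if k != "pseudo_visible_player"}

def shaped_reward(policy, observation, base_reward, penalty=1.0):
    acted = policy(observation)
    would_have = policy(masked(observation))  # counterfactual: player invisible
    return base_reward - (penalty if acted != would_have else 0.0)

# A guard policy that reacts only when the treasure is threatened:
def guard(obs):
    if obs.get("pseudo_visible_player") == "near_treasure":
        return "intercept"
    return "patrol"

print(shaped_reward(guard, {"pseudo_visible_player": "near_treasure"}, 5.0))  # 4.0
print(shaped_reward(guard, {"pseudo_visible_player": "wandering"}, 5.0))      # 5.0
```

The guard pays the penalty only when it reacts to the pseudo-visible player, which is exactly the tradeoff the abstract describes.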
Mind wandering is typically operationalized as task-unrelated thought. Some argue for the need to distinguish between unintentional and intentional mind wandering, where an agent voluntarily shifts attention from task-related to task-unrelated thoughts. We reveal an inconsistency between the standard, task-unrelated thought definition of mind wandering and the occurrence of intentional mind wandering (together with plausible assumptions about tasks and intentions). This suggests that either the standard definition of mind wandering should be rejected or that intentional mind wandering is an incoherent category. Solving this puzzle is critical for advancing theoretical frameworks of mind wandering.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
This article challenges the association between realist methodology and ideals of legitimacy. Many who seek a more “realistic” or “political” approach to political theory replace the familiar orientation towards a state of justice with a structurally similar orientation towards a state of legitimacy. As a result, they fail to provide more reliable practical guidance, and wrongly displace radical demands. Rather than orienting action towards any state of affairs, I suggest that a more practically useful approach to political theory would directly address judgments, by comparing the concrete possibilities for action faced by real political actors.
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
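The election idea can be caricatured for a finite, uniformly weighted family of environments (a toy of our own construction; the paper's comparators are more general and abstract):

```python
# Toy election-style comparator (illustrative): each environment casts a vote
# for whichever agent earns it more reward; an agent "wins" if it carries
# more environments than its rival.
def compare_agents(rewards_a, rewards_b):
    """rewards_x[e] = total reward agent x earns in environment e."""
    votes_a = sum(ra > rb for ra, rb in zip(rewards_a, rewards_b))
    votes_b = sum(rb > ra for ra, rb in zip(rewards_a, rewards_b))
    if votes_a > votes_b:
        return "A more intelligent"
    if votes_b > votes_a:
        return "B more intelligent"
    return "incomparable (tie)"

print(compare_agents([3, 5, 2], [1, 4, 9]))  # A carries 2 environments to 1
```

Note that an agent can lose the election while earning more total reward (as A does here), which is one way a comparator can diverge from a numeric aggregate measure.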
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
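In symbols, the measure described above has roughly the following shape (our paraphrase of the prose definition, not a quotation of the paper's formalism):

```latex
% Intelligence level of a mechanical knowing agent A (schematic paraphrase):
% the supremum of the computable ordinals for which A knows some code.
\mathrm{Intel}(A) \;=\; \sup \{\, \alpha :
  \text{$\alpha$ is a computable ordinal and, for some code $n$ of $\alpha$,}\
  \text{$A$ knows that $n$ codes a computable ordinal} \,\}
```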
Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: the quantum state is a complete description of a physical system, the quantum state is grounded in its terms, and the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realistic account of the quantum state. This raises the question about whether there are some other viable forms of quantum fragmentalism.
According to Aristotle, the medical art aims at health, which is a virtue of the body, and does so in an unlimited way. Consequently, medicine does not determine the extent to which health should be pursued, and “mental health” falls under medicine only via pros hen predication. Because medicine is inherently oriented to its end, it produces health in accordance with its nature and disease contrary to its nature—even when disease is good for the patient. Aristotle’s politician understands that this inherent orientation can be systematically distorted, and so would see the need for something like the Hippocratic Oath.
My primary target in this paper is a puzzle that emerges from the conjunction of several seemingly innocent assumptions in action theory and the metaphysics of moral responsibility. The puzzle I have in mind is this. On one widely held account of moral responsibility, an agent is morally responsible only for those actions or outcomes over which that agent exercises control. Recently, however, some have cited cases where agents appear to be morally responsible without exercising any control. This leads some to abandon the control-based account of responsibility and replace it with an alternative account. It leads others to deny the intuition that agents are responsible in these troublesome cases. After outlining the account of moral responsibility I have in mind, I look at some of the arguments made against the viability of this theory. I show that there are conceptual resources for salvaging the control account, focusing in particular on the nature of vigilance. I also argue that there is empirical data that supports the control account so conceived.
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
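For orientation, the classical property being generalized is the standard Archimedean property of the reals (textbook statement; the paper's non-numeric generalization is more abstract):

```latex
% Archimedean property: no positive real is infinitely large relative
% to another; every y is eventually exceeded by some multiple of x.
\forall x, y \in \mathbb{R}_{>0} \;\; \exists n \in \mathbb{N} : \; n x > y
```

A structure violating this condition contains pairs of "rewards" where no finite multiple of one catches up with the other, which is what real-valued reward signals cannot faithfully encode.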
In this paper, we focus on whether and to what extent we judge that people are responsible for the consequences of their forgetfulness. We ran a series of behavioral studies to measure judgments of responsibility for the consequences of forgetfulness. Our results show that we are disposed to hold others responsible for some of their forgetfulness. The level of stress that the forgetful agent is under modulates judgments of responsibility, though the level of care that the agent exhibits toward performing the forgotten action does not. We argue that this result has important implications for a long-running debate about the nature of responsible agency.
Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
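The symmetry claim can be illustrated with a toy calculation (our own construction: a uniformly weighted finite family of environments closed under reward negation, rather than the paper's Kolmogorov-weighted measure):

```python
# Toy illustration: if every environment is paired with a "mirror" that negates
# all rewards, an agent whose behaviour ignores rewards acts identically in
# both, so its reward totals cancel pairwise and the aggregate score is 0.
import random

def reward_ignoring_policy(observation):
    # acts on observations only; never sees rewards, so cannot adapt to them
    return hash(observation) % 2

def run(env, policy, steps=20):
    total, obs = 0.0, 0
    for _ in range(steps):
        action = policy(obs)
        total += env(obs, action)
        obs = (obs + action + 1) % 7  # toy deterministic dynamics
    return total

def make_env(seed):
    rng = random.Random(seed)
    table = {(o, a): rng.uniform(-1, 1) for o in range(7) for a in range(2)}
    return lambda o, a: table[(o, a)]

envs = [make_env(i) for i in range(10)]
mirrors = [lambda o, a, e=e: -e(o, a) for e in envs]  # negated-reward twins

score = sum(run(e, reward_ignoring_policy) for e in envs + mirrors)
print(abs(score) < 1e-9)  # True: pairwise cancellation gives aggregate 0
```

The paper's actual result replaces this hand-built pairing with a symmetry assumption on the UTM's encoding, but the cancellation mechanism is the same in spirit.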
This paper examines the interplay of semantics and pragmatics within the domain of film. Films are made up of individual shots strung together in sequences over time. Though each shot is disconnected from the next, combinations of shots still convey coherent stories that take place in continuous space and time. How is this possible? The semantic view of film holds that film coherence is achieved in part through a kind of film language, a set of conventions which govern the relationships between shots. In this paper, we develop and defend a new version of the semantic view. We articulate it for a pair of conventions that govern spatial relations between viewpoints. One such rule is already well-known; sometimes called the "180° Rule," we term it the X-Constraint; to this we add a previously unrecorded rule, the T-Constraint. As we show, both have the effect, in different ways, of limiting the way that viewpoint can shift through space from shot to shot over the course of a film sequence. Such constraints, we contend, are analogous to relations of discourse coherence that are widely recognized in the linguistic domain. If film is to have a language, it is a language made up of rules like these.
With Being Me Being You, Samuel Fleischacker provides a reconstruction and defense of Adam Smith’s account of empathy, and the role it plays in building moral consensus, motivating moral behavior, and correcting our biases, prejudices, and tendency to demonize one another. He sees this book as an intervention in recent debates about the role that empathy plays in our morality. For some, such as Paul Bloom, Joshua Greene, Jesse Prinz, and others, empathy, or our capacity for fellow-feeling, tends to misguide us in the best of cases, and more often reinforces faction and tribalism in morals and politics. These utilitarians, as Fleischacker refers to them, propose that empathy take a back seat to cost-benefit analysis in moral decision-making. As an intervention, the book is largely successful. Fleischacker’s defense of empathy is nuanced and escapes the myopic enthusiasm to which many partisans of empathy are prone. Anyone looking to understand the relationship between empathy and morality would do well to grapple with Being Me Being You. Still, Fleischacker overlooks that Smith would most likely be less convinced of the idea that greater empathy can help us overcome the great challenges of our time.
Leibniz’s views on modality are among the most discussed by his interpreters. Although most of the discussion has focused on Leibniz’s analyses of modality, this essay explores Leibniz’s grounding of modality. Leibniz holds that possibilities and possibilia are grounded in the intellect of God. Although other early moderns agreed that modal truths are in some way dependent on God, there were sharp disagreements surrounding two distinct questions: (1) On what in God do modal truths and modal truth-makers depend? (2) What is the manner(s) of dependence by which modal truths and modal truth-makers depend on God? Very roughly, Leibniz’s own answers are: (1) God’s intellect and (2) a form of ontological dependence. The essay first distinguishes Leibniz’s account from two nearby (and often misunderstood) alternatives found in Descartes and Spinoza. It then examines Leibniz’s theory in detail, showing how, on his account, God’s ideas provide both truth-makers for possibilities and necessities and an ontological foothold for those truth-makers, thereby explaining modal truths. Along the way, it suggests several refinements and possible amendments to Leibniz’s grounding thesis. It then defends Leibniz against a pair of recent objections by Robert Merrihew Adams and Andrew Chignell that invoke the early work of Kant. I conclude that whereas Leibniz’s alternative avoids collapsing into yet another form of Spinozism, the alternatives proposed by Adams, Chignell, and the early Kant do not.
Most democratic theorists agree that concentrations of wealth and power tend to distort the functioning of democracy and ought to be countered wherever possible. Deliberative democrats are no exception: though not its only potential value, the capacity of deliberation to ‘neutralise power’ is often regarded as ‘fundamental’ to deliberative theory. Power may be neutralised, according to many deliberative democrats, if citizens can be induced to commit more fully to the deliberative resolution of common problems. If they do, they will be unable to get away with inconsistencies and bad or private reasons, thereby mitigating the illegitimate influence of power. I argue, however, that the means by which power inflects political disagreement is far more subtle than this model suggests and cannot be countered so simply. As a wealth of recent research in political psychology demonstrates, human beings persistently exhibit ‘motivated reasoning’, meaning that even when we are sincerely committed to the deliberative resolution of common problems, and even when we are exposed to the same reasons and evidence, we still disagree strongly about what ‘fair cooperation’ entails. Motivated reasoning can be counteracted, but only under exceptional circumstances such as those that enable modern science, which cannot be reliably replicated in our society at large. My analysis suggests that in democratic politics – which rules out the kind of anti-democratic practices available to scientists – we should not expect deliberation to reliably neutralise power.
Aristotle analyses a large range of objects as composites of matter and form. But how exactly should we understand the relation between the matter and form of a composite? Some commentators have argued that forms themselves are somehow material, that is, forms are impure. Others have denied that claim and argued for the purity of forms. In this paper, I develop a new purist interpretation of Metaphysics Z.10-11, a text central to the debate, which I call 'hierarchical purism'. I argue that hierarchical purism can overcome the difficulties faced by previous versions of purism as well as by impurism. Roughly, on hierarchical purism, each composite can be considered and defined in two different ways: From the perspective of metaphysics, composites are considered only insofar as they have forms and are defined purely formally. From the perspective of physics, composites are considered insofar as they have forms and matter and are defined with reference to both. Moreover, while the metaphysical definition is a definition in the strict sense of 'definition', the physical definition is a definition in a loose sense. Analogous points hold for intelligible composites and geometry. Finally, neither sort of definitional practice implies that, for Aristotle, forms are impure.
In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods for measuring function growth rates, and exhibit the resulting Hibbard-like intelligence measures and taxonomies. Of particular interest, we obtain intelligence taxonomies based on Big-O and Big-Theta notation systems, which taxonomies are novel in that they challenge conventional notions of what an intelligence measure should look like. We discuss how intelligence measurement of sequence predictors can indirectly serve as intelligence measurement for agents with Artificial General Intelligence (AGI).
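The growth-rate comparison at the heart of such taxonomies can be caricatured numerically (a crude sampling heuristic of our own, purely for intuition; it is not the paper's measure, and it misbehaves on slowly diverging ratios such as log n):

```python
# Heuristic growth-rate comparison in the spirit of Big-O / Big-Theta:
# sample the ratio f(2^k)/g(2^k) at increasing k and see whether it
# blows up, collapses, or stabilizes.
def growth_class(f, g, ks=range(4, 20)):
    ratios = [f(2**k) / g(2**k) for k in ks]
    if ratios[-1] > 100 * ratios[0]:
        return "f >> g"
    if ratios[-1] * 100 < ratios[0]:
        return "f << g"
    return "f ~ g (Big-Theta-comparable)"

print(growth_class(lambda n: n * n, lambda n: n))      # f >> g
print(growth_class(lambda n: n, lambda n: n * n))      # f << g
print(growth_class(lambda n: 3 * n + 7, lambda n: n))  # f ~ g
```

A Big-O-based taxonomy would rank a predictor by the equivalence class of the fastest-growing runtimes it defeats, rather than by a single real number.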
Perdurantists think of continuants as mereological sums of stages from different times. This view of persistence would force us to drop the idea that there is genuine change in the world. By exploiting a presentist metaphysics, Brogaard proposed a theory, called presentist four-dimensionalism, that aims to reconcile perdurantism with the idea that things undergo real change. However, her proposal commits us to rejecting the idea that stages must exist in their entirety. Giving up the tenet that all the stages are equally real could be a price that perdurantists are unwilling to pay. I argue that Kit Fine’s fragmentalism provides us with the tools to combine a presentist metaphysics with a perdurantist theory of persistence without giving up the idea that reality is constituted by more than purely present stages.
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you know.” This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word “know” independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
Reading Foucault’s work on power and subjectivity alongside “developmentalist” approaches to evolutionary biology, this article endorses poststructuralist critiques of political ideals grounded in the value of subjective agency. Many political theorists embrace such critiques, of course, but those who do are often skeptical of liberal democracy, and even of normative theory itself. By contrast, those who are left to theorize liberal democracy tend to reject or ignore poststructuralist insights, and have continued to employ dubious ontological assumptions regarding human agents. Against both groups, I argue that Foucault’s poststructuralism must be taken seriously, but that it is ultimately consistent with normative theory and liberal democracy. Linking poststructuralist attempts to transcend the dichotomy between agency and structure with recent efforts by evolutionary theorists to dissolve a similarly stubborn opposition between nature and nurture, I develop an anti-essentialist account of human nature and agency that vindicates poststructuralist criticism while enabling a novel defense of liberal democracy.
We provide an intuitive motivation for the hyperreal numbers via electoral axioms. We do so in the form of a Socratic dialogue, in which Protagoras suggests replacing big-oh complexity classes by real numbers, and Socrates asks some troubling questions about what would happen if one tried to do that. The dialogue is followed by an appendix containing additional commentary and a more formal proof.
One popular theory of moral responsibility locates responsible agency in exercises of control. These control-based theories often appeal to tracing to explain responsibility in cases where some agent is intuitively responsible for bringing about some outcome despite lacking direct control over that outcome’s obtaining. Some question whether control-based theories are committed to utilizing tracing to explain responsibility in certain cases. I argue that reflecting on certain kinds of negligence shows that tracing plays an ineliminable role in any adequate control-based theory of responsibility.
One shortcoming of the chain rule is that it does not iterate: it gives the derivative of f(g(x)), but not (directly) the second or higher-order derivatives. We present iterated differentials and a version of the multivariable chain rule which iterates to any desired level of derivative. We first present this material informally, and later discuss how to make it rigorous (a discussion which touches on formal foundations of calculus). We also suggest a finite calculus chain rule (contrary to Graham, Knuth and Patashnik's claim that "there's no corresponding chain rule of finite calculus").
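The shortcoming is visible already at second order: differentiating the chain rule once more (a standard calculation, the order-2 case of Faà di Bruno's formula) produces terms the chain rule alone does not directly supply:

```latex
% First derivative of a composite (the chain rule), then the second
% derivative obtained by applying the product and chain rules again:
\frac{d}{dx} f(g(x)) = f'(g(x))\, g'(x),
\qquad
\frac{d^2}{dx^2} f(g(x)) = f''(g(x))\, g'(x)^2 + f'(g(x))\, g''(x)
```

Each further derivative multiplies the bookkeeping, which is the combinatorial growth an iterating formalism has to organize.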
Recent years have witnessed growing controversy over the “wisdom of the multitude.” As epistemic critics drawing on vast empirical evidence have cast doubt on the political competence of ordinary citizens, epistemic democrats have offered a defense of democracy grounded largely in analogies and formal results. So far, I argue, the critics have been more convincing. Nevertheless, democracy can be defended on instrumental grounds, and this article demonstrates an alternative approach. Instead of implausibly upholding the epistemic reliability of average voters, I observe that competitive elections, universal suffrage, and discretionary state power disable certain potent mechanisms of elite entrenchment. By reserving particular forms of power for the multitude of ordinary citizens, they make democratic states more resistant to dangerous forms of capture than non-democratic alternatives. My approach thus offers a robust defense of electoral democracy, yet cautions against expecting too much from it—motivating a thicker conception of democracy, writ large.
Kant’s Formula of Humanity can be analyzed into two parts. One is an injunction to treat humanity always as an end. The other is a prohibition on using humanity as a mere means. The second is often referred to as the FH prohibition or the mere means prohibition. It has become popular to interpret this prohibition in terms of consent. The idea is that, if X uses Y's humanity as a means and Y does not consent to it, then X uses Y's humanity as a mere means. There is then debate about the kind of consent that is relevant: possible, actual, or rational. In this paper, I argue against this interpretation. Section one sets up the consent account. Section two attacks possible and actual consent accounts on doctrinal grounds. Section three extends this doctrinal attack to rational consent accounts. Section four circles back to the original motivation for the consent interpretation. I argue that the consent account rests on a misinterpretation, and I conclude with a quick sketch of an alternative interpretation of the FH prohibition.
Actors, undercover investigators, and readers of fiction sometimes report “losing themselves” in the characters they imitate or read about. They speak of “taking on” or “assuming” the beliefs, thoughts, and feelings of someone else. I offer an account of this strange but familiar phenomenon—what I call imaginative transportation.
While researchers in business ethics, moral philosophy, and jurisprudence have advanced the study of corporate agency, there have been very few attempts to bring together insights from these and other disciplines in the pages of the Journal of Business Ethics. By introducing to an audience of business ethics scholars the work of outstanding authors working outside the field, this interdisciplinary special issue addresses this lacuna. Its aim is to encourage the formulation of innovative arguments that reinvigorate the study of corporate agency and stimulate further cross-fertilization of ideas between business ethics, law, philosophy, and other disciplines.
According to Phenomenal Conservatism (PC), if it seems to a subject S that P, S thereby has some degree of (defeasible) justification for believing P. But what is it for P to seem true? Answering this question is vital for assessing what role (if any) such states can play. Many have appeared to adopt a kind of non-reductionism that construes seemings as intentional states which cannot be reduced to more familiar mental states like beliefs or sensations. In this paper I aim to show that reductive accounts need to be taken more seriously by illustrating the plausibility of identifying seemings with conscious inclinations to form a belief. I briefly close the paper by considering the implications such an analysis might have for views such as PC.
According to the Reasoning View about normative reasons, facts about normative reasons for action can be understood in terms of facts about the norms of practical reasoning. I argue that this view is subject to an overlooked class of counterexamples, familiar from debates about Subjectivist theories of normative reasons. Strikingly, the standard strategy Subjectivists have used to respond to this problem cannot be adapted to the Reasoning View. I think there is a solution to this problem, however. I argue that the norms of practical reasoning, like the norms of theoretical reasoning, are characteristically defeasible, in a sense I make precise. Recognizing this property of those norms makes space for a solution to the problem. The resulting view is in a way analogous to the familiar defeasibility theory of knowledge, but it avoids a standard objection to that theory.
The Generality Problem is widely recognized to be a serious problem for reliabilist theories of justification. James R. Beebe's Statistical Solution is one of only a handful of attempted solutions that have garnered serious attention in the literature. In their recent response to Beebe, Julien Dutant and Erik J. Olsson successfully refute Beebe's Statistical Solution. This paper presents a New Statistical Solution that countenances Dutant and Olsson's objections, dodges the serious problems that trouble rival solutions, and retains the theoretical virtues that made Beebe's solution so attractive in the first place. There indeed exists a principled, rigorous, conceptually sparse, and plausible solution to the Generality Problem: it is the New Statistical Solution.
I describe and motivate Rational Internalism, a principle concerning the relationship between motivating reasons (which explain actions) and normative reasons (which justify actions). I use this principle to construct a novel argument against Objectivist theories of normative reasons, which hold that facts about normative reasons can be analyzed in terms of an independently specified class of normative or evaluative facts. I then argue for an alternative theory of normative reasons, the Reasoning View, which is consistent with both Rational Internalism and one standard motivation for Objectivism.
The verb ‘to know’ can be used both in ascriptions of propositional knowledge and ascriptions of knowledge of acquaintance. In the formal epistemology literature, the former use of ‘know’ has attracted considerable attention, while the latter is typically regarded as derivative. This attitude may be unsatisfactory for those philosophers who, like Russell, are not willing to think of knowledge of acquaintance as a subsidiary or dependent kind of knowledge. In this paper we outline a logic of knowledge of acquaintance in which ascriptions like ‘Mary knows Smith’ are regarded as formally interesting in their own right, remaining neutral on their relation to ascriptions of propositional knowledge. The resulting logical framework, which is based on Hintikka’s modal approach to epistemic logic, provides a fresh perspective on various issues and notions at play in the philosophical debate on acquaintance.
There was a consensus in late Scholasticism that evils are privations, the lacks of appropriate perfections. For something to be evil is for it to lack an excellence that, by its nature, it ought to have. This widely accepted ontology of evil was used, in part, to help explain the source of evil in a world created and sustained by a perfect being. During the second half of the seventeenth century, progressive early moderns began to criticize the traditional privative account of evil on a variety of philosophical and theological grounds. Embedded in Scholastic Aristotelianism and applied to problems of evil, privation theory seemed to some like yet another instance of pre-modern pseudo-explanation.
Cognitive science has recently made some startling discoveries about temporal experience, and these discoveries have been drafted into philosophical service. We survey recent appeals to cognitive science in the philosophical debate over whether time objectively passes. Since this research is currently in its infancy, we identify some directions for future research.
In this paper, I will argue that there is a version of possibilism—inspired by the modal analogue of Kit Fine’s fragmentalism—that can be combined with a weakening of actualism. The reasons for analysing this view, which I call Modal Fragmentalism, are twofold. Firstly, it can enrich our understanding of the actualism/possibilism divide, by showing that, at least in principle, the adoption of possibilia does not correspond to an outright rejection of the actualist intuitions. Secondly, and more specifically, it can enrich our understanding of concretism, by proving that, at least in principle, the idea that objects have properties in an absolute manner is compatible with transworld identity.
According to many philosophers, rationality is, at least in part, a matter of one’s attitudes cohering with one another. Theorists who endorse this idea have devoted much attention to formulating various coherence requirements. Surprisingly, they have said very little about what it takes for a set of attitudes to be coherent in general. We articulate and defend a general account on which a set of attitudes is coherent just in case and because it is logically possible for the attitudes to be jointly satisfied in the sense of jointly fitting the world. In addition, we show how the account can help adjudicate debates about how to formulate various rational requirements.
Samuel Alexander was one of the first realists of the twentieth century to defend a theory of categories. He thought that the categories are genuinely real and grounded in the intrinsic nature of Space-Time. I present his reduction of the categories in terms of Space-Time, articulate his account of categorial structure and completeness, and offer an interpretation of what he thought the nature of the categories really was. I then argue that his theory of categories has some advantages over competing theories of his day, and finally draw some important lessons that we can learn from his realist yet reductionist theory of categories.
In Metaphysics Z.6, Aristotle argues that each substance is the same as its essence. In this paper, I defend an identity reading of that claim. First, I provide a general argument for the identity reading, based on Aristotle’s account of sameness in number and identity. Second, I respond to the recent charge that the identity reading is incoherent, by arguing that the claim in Z.6 is restricted to primary substances and hence to forms.
I provide an analysis of sentences of the form ‘To be F is to be G’ in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F is that which makes it G. This approach is hyperintensional, and possesses desirable logical and modal features. These sentences are reflexive, transitive and symmetric, and, if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I close by defining an asymmetric and irreflexive notion of analysis in terms of the reflexive and symmetric one.