According to the most prominent principle of the early modern rationalists, the Principle of Sufficient Reason [PSR], there are no brute facts; hence, there are no facts without any explanation. Contrary to the PSR, some philosophers have argued that divine ideas are brute facts within Leibniz’s metaphysics. In this paper, I argue against brute-fact theories of divine ideas, especially as represented by Samuel Newlands in “Leibniz and the Ground of Possibility,” and elaborate an alternative Leibnizian theory of divine ideas.
Leibniz’s views on modality are among the most discussed by his interpreters. Although most of the discussion has focused on Leibniz’s analyses of modality, this essay explores Leibniz’s grounding of modality. Leibniz holds that possibilities and possibilia are grounded in the intellect of God. Although other early moderns agreed that modal truths are in some way dependent on God, there were sharp disagreements surrounding two distinct questions: (1) On what in God do modal truths and modal truth-makers depend? (2) What is the manner of dependence by which modal truths and modal truth-makers depend on God? Very roughly, Leibniz’s own answers are: (1) God’s intellect and (2) a form of ontological dependence. The essay first distinguishes Leibniz’s account from two nearby (and often misunderstood) alternatives found in Descartes and Spinoza. It then examines Leibniz’s theory in detail, showing how, on his account, God’s ideas provide both truth-makers for possibilities and necessities and an ontological foothold for those truth-makers, thereby explaining modal truths. Along the way, it suggests several refinements and possible amendments to Leibniz’s grounding thesis. It then defends Leibniz against a pair of recent objections by Robert Merrihew Adams and Andrew Chignell that invoke the early work of Kant. I conclude that whereas Leibniz’s alternative avoids collapsing into yet another form of Spinozism, the alternatives proposed by Adams, Chignell, and the early Kant do not.
There was a consensus in late Scholasticism that evils are privations, the lacks of appropriate perfections. For something to be evil is for it to lack an excellence that, by its nature, it ought to have. This widely accepted ontology of evil was used, in part, to help explain the source of evil in a world created and sustained by a perfect being. During the second half of the seventeenth century, progressive early moderns began to criticize the traditional privative account of evil on a variety of philosophical and theological grounds. Embedded in Scholastic Aristotelianism and applied to problems of evil, privation theory seemed to some like yet another instance of pre-modern pseudo-explanation. Against this ...
Samuel Kerstein’s recent How To Treat Persons (2013) is an ambitious attempt to develop a new, broadly Kantian account of what it is to treat others as mere means and what it means to act in accordance with others’ dignity. His project is explicitly nonfoundationalist: his interpretation stands or falls on its ability to accommodate our pretheoretic intuitions, and he does an admirable job of handling carefully a range of well-fleshed-out and sometimes subtle examples. In what follows, I shall give a quick summary of the chapters and then say two good things about the book and one critical thing.
We sometimes fail unwittingly to do things that we ought to do. And we are, from time to time, culpable for these unwitting omissions. We provide an outline of a theory of responsibility for unwitting omissions. We emphasize two distinctive ideas: (i) many unwitting omissions can be understood as failures of appropriate vigilance; and (ii) the sort of self-control implicated in these failures of appropriate vigilance is valuable. We argue that the norms that govern vigilance and the value of self-control explain culpability for unwitting omissions.
Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: the quantum state is a complete description of a physical system, the quantum state is grounded in its terms, and the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realist account of the quantum state. This raises the question of whether there are other viable forms of quantum fragmentalism.
My primary target in this paper is a puzzle that emerges from the conjunction of several seemingly innocent assumptions in action theory and the metaphysics of moral responsibility. The puzzle I have in mind is this. On one widely held account of moral responsibility, an agent is morally responsible only for those actions or outcomes over which that agent exercises control. Recently, however, some have cited cases where agents appear to be morally responsible without exercising any control. This leads some to abandon the control-based account of responsibility and replace it with an alternative account. It leads others to deny the intuition that agents are responsible in these troublesome cases. After outlining the account of moral responsibility I have in mind, I look at some of the arguments made against the viability of this theory. I show that there are conceptual resources for salvaging the control account, focusing in particular on the nature of vigilance. I also argue that there is empirical data that supports the control account so conceived.
According to Aristotle, the medical art aims at health, which is a virtue of the body, and does so in an unlimited way. Consequently, medicine does not determine the extent to which health should be pursued, and “mental health” falls under medicine only via pros hen predication. Because medicine is inherently oriented to its end, it produces health in accordance with its nature and disease contrary to its nature—even when disease is good for the patient. Aristotle’s politician understands that this inherent orientation can be systematically distorted, and so would see the need for something like the Hippocratic Oath.
From the end of the twelfth century until the middle of the eighteenth century, the concept of a right of necessity (i.e., the moral prerogative of an agent, given certain conditions, to use or take someone else’s property in order to get out of his plight) was common among moral and political philosophers, who took it to be a valid exception to the standard moral and legal rules. In this essay, I analyze Samuel Pufendorf’s account of such a right, founded on the basic instinct of self-preservation and on the notion that, in civil society, we have certain minimal duties of humanity towards each other. I review Pufendorf’s secularized account of natural law, his conception of the civil state, and the function of private property. I then turn to his criticism of Grotius’s understanding of the right of necessity as a retreat to the pre-civil right of common use, and defend his account against some recent criticisms. Finally, I examine the conditions deemed necessary and jointly sufficient for this right to be claimable, and conclude by pointing to the main strengths of this account. Keywords: Samuel Pufendorf, Hugo Grotius, right of necessity, duty of humanity, private property.
According to the attentional resources account, mind wandering (or “task-unrelated thought”) is thought to compete with a focal task for attentional resources. Here, we tested two key predictions of this account: first, that mind wandering should not interfere with performance on a task that does not require attentional resources; second, that as task requirements become automatized, performance should improve and depth of mind wandering should increase. We used a serial reaction time task with implicit- and explicit-learning groups to test these predictions. Providing novel evidence for the attentional resources account’s first prediction, results indicated that depth of mind wandering was negatively associated with learning in the explicit, but not the implicit, group, indicating that mind wandering is associated with impaired explicit, but not implicit, learning. Corroborating the attentional resources account’s second prediction, we also found that, overall, performance improved while at the same time depth of mind wandering increased. From an implicit learning perspective, these results are consistent with the claim that explicit learning is impaired under attentional load, but implicit learning is not.
In this paper, we focus on whether and to what extent we judge that people are responsible for the consequences of their forgetfulness. We ran a series of behavioral studies to measure judgments of responsibility for the consequences of forgetfulness. Our results show that we are disposed to hold others responsible for some of their forgetfulness. The level of stress that the forgetful agent is under modulates judgments of responsibility, though the level of care that the agent exhibits toward performing the forgotten action does not. We argue that this result has important implications for a long-running debate about the nature of responsible agency.
One popular theory of moral responsibility locates responsible agency in exercises of control. These control-based theories often appeal to tracing to explain responsibility in cases where some agent is intuitively responsible for bringing about some outcome despite lacking direct control over that outcome’s obtaining. Some question whether control-based theories are committed to utilizing tracing to explain responsibility in certain cases. I argue that reflecting on certain kinds of negligence shows that tracing plays an ineliminable role in any adequate control-based theory of responsibility.
Perdurantists think of continuants as mereological sums of stages from different times. This view of persistence would force us to drop the idea that there is genuine change in the world. By exploiting a presentist metaphysics, Brogaard proposed a theory, called presentist four-dimensionalism, that aims to reconcile perdurantism with the idea that things undergo real change. However, her proposal commits us to rejecting the idea that stages must exist in their entirety. Giving up the tenet that all the stages are equally real could be a price that perdurantists are unwilling to pay. I argue that Kit Fine’s fragmentalism provides us with the tools to combine a presentist metaphysics with a perdurantist theory of persistence without giving up the idea that reality is constituted by more than purely present stages.
Mind wandering is typically operationalized as task-unrelated thought. Some argue for the need to distinguish between unintentional and intentional mind wandering, where an agent voluntarily shifts attention from task-related to task-unrelated thoughts. We reveal an inconsistency between the standard, task-unrelated thought definition of mind wandering and the occurrence of intentional mind wandering (together with plausible assumptions about tasks and intentions). This suggests either that the standard definition of mind wandering should be rejected or that intentional mind wandering is an incoherent category. Solving this puzzle is critical for advancing theoretical frameworks of mind wandering.
Cognitive science has recently made some startling discoveries about temporal experience, and these discoveries have been drafted into philosophical service. We survey recent appeals to cognitive science in the philosophical debate over whether time objectively passes. Since this research is currently in its infancy, we identify some directions for future research.
Most democratic theorists agree that concentrations of wealth and power tend to distort the functioning of democracy and ought to be countered wherever possible. Deliberative democrats are no exception: though not its only potential value, the capacity of deliberation to ‘neutralise power’ is often regarded as ‘fundamental’ to deliberative theory. Power may be neutralised, according to many deliberative democrats, if citizens can be induced to commit more fully to the deliberative resolution of common problems. If they do, they will be unable to get away with inconsistencies and bad or private reasons, thereby mitigating the illegitimate influence of power. I argue, however, that the means by which power inflects political disagreement is far more subtle than this model suggests and cannot be countered so simply. As a wealth of recent research in political psychology demonstrates, human beings persistently exhibit ‘motivated reasoning’, meaning that even when we are sincerely committed to the deliberative resolution of common problems, and even when we are exposed to the same reasons and evidence, we still disagree strongly about what ‘fair cooperation’ entails. Motivated reasoning can be counteracted, but only under exceptional circumstances such as those that enable modern science, which cannot be reliably replicated in our society at large. My analysis suggests that in democratic politics – which rules out the kind of anti-democratic practices available to scientists – we should not expect deliberation to reliably neutralise power.
Actors, undercover investigators, and readers of fiction sometimes report “losing themselves” in the characters they imitate or read about. They speak of “taking on” or “assuming” the beliefs, thoughts, and feelings of someone else. I offer an account of this strange but familiar phenomenon—what I call imaginative transportation.
Reading Foucault’s work on power and subjectivity alongside “developmentalist” approaches to evolutionary biology, this article endorses poststructuralist critiques of political ideals grounded in the value of subjective agency. Many political theorists embrace such critiques, of course, but those who do are often skeptical of liberal democracy, and even of normative theory itself. By contrast, those who are left to theorize liberal democracy tend to reject or ignore poststructuralist insights, and have continued to employ dubious ontological assumptions regarding human agents. Against both groups, I argue that Foucault’s poststructuralism must be taken seriously, but that it is ultimately consistent with normative theory and liberal democracy. Linking poststructuralist attempts to transcend the dichotomy between agency and structure with recent efforts by evolutionary theorists to dissolve a similarly stubborn opposition between nature and nurture, I develop an anti-essentialist account of human nature and agency that vindicates poststructuralist criticism while enabling a novel defense of liberal democracy.
This paper offers an argument in favour of a Lewisian version of concretism that maintains both the principle of material inheritance and the materiality-modality link.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar law holds for AGIs.
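A compact gloss of the argument’s shape (the notation is mine, not the authors’): write x ↝ y when parent AGI x single-handedly creates child AGI y. If some measure μ strictly decreases along ↝ and takes values in a well-ordered range (an assumption on my part; the abstract says only that the measure decreases), then an infinite single-parent chain is impossible:

```latex
% Sketch under assumed notation: x \rightsquigarrow y means
% "x single-handedly creates y"; \mu is the decreasing measure.
x_1 \rightsquigarrow x_2 \rightsquigarrow x_3 \rightsquigarrow \cdots
\;\Longrightarrow\;
\mu(x_1) > \mu(x_2) > \mu(x_3) > \cdots
% An infinite strictly descending sequence cannot exist in a
% well-ordered range, so every single-parent chain must terminate
% or be interrupted by a multi-parent AGI, mirroring the Knight-Darwin Law.
```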
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
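In symbols, the definition sketched in this abstract might be rendered as follows (my rendering; writing |n| for the ordinal coded by n is an assumed notation, and the paper’s own formalism may differ):

```latex
% Sketch: the intelligence level of a mechanical knowing agent A is the
% supremum of the computable ordinals whose codes A knows to be codes
% of computable ordinals. Here |n| denotes the ordinal coded by n.
\mathrm{Int}(A) \;=\; \sup \bigl\{\, |n| \;:\; A \text{ knows that } n \text{ codes a computable ordinal} \,\bigr\}
```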
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
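To make the election metaphor concrete, here is a minimal toy sketch in Python. This is my own illustration, not the paper’s formalism (which works with reward-weighted votes over Legg-Hutter environments); `Agent`, `Environment`, and `compare` are hypothetical names:

```python
# Toy sketch of the "environments as voters" idea: each environment votes
# for whichever agent earns the higher reward in it; ties are abstentions.
from typing import Callable, Iterable

Agent = Callable[[int], int]             # maps an observation to an action
Environment = Callable[[Agent], float]   # runs an agent, returns its reward

def compare(a: Agent, b: Agent, envs: Iterable[Environment]) -> int:
    """Return +1 if a wins the election, -1 if b wins, 0 on a tie."""
    votes = 0
    for env in envs:
        reward_a, reward_b = env(a), env(b)
        if reward_a > reward_b:
            votes += 1
        elif reward_b > reward_a:
            votes -= 1
    return (votes > 0) - (votes < 0)

# Usage: an agent that echoes its observation beats one that always says 1
# in simple "guess the target" environments.
always_one: Agent = lambda obs: 1
echo: Agent = lambda obs: obs
envs = [lambda ag, t=t: float(ag(t) == t) for t in range(5)]
print(compare(always_one, echo, envs))   # prints -1: echo wins
```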
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
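For reference, the classical property being generalized can be stated as follows (this is the standard real-number formulation, not the paper’s generalized, structure-adapted version):

```latex
% Archimedean property of an ordered field (classical statement):
% no positive element is infinitely larger than another.
\forall x, y > 0 \;\; \exists n \in \mathbb{N} : \;
\underbrace{x + x + \cdots + x}_{n\ \text{times}} \;\geq\; y
% A structure is non-Archimedean when some pair violates this, e.g. an
% infinitesimal reward x no finite multiple of which exceeds a reward y.
```

Since the reals themselves satisfy this property, any real-valued reward signal inherits it, which is the crux of the claimed mismatch with non-Archimedean reward structures.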
Recent years have witnessed growing controversy over the “wisdom of the multitude.” As epistemic critics drawing on vast empirical evidence have cast doubt on the political competence of ordinary citizens, epistemic democrats have offered a defense of democracy grounded largely in analogies and formal results. So far, I argue, the critics have been more convincing. Nevertheless, democracy can be defended on instrumental grounds, and this article demonstrates an alternative approach. Instead of implausibly upholding the epistemic reliability of average voters, I observe that competitive elections, universal suffrage, and discretionary state power disable certain potent mechanisms of elite entrenchment. By reserving particular forms of power for the multitude of ordinary citizens, they make democratic states more resistant to dangerous forms of capture than non-democratic alternatives. My approach thus offers a robust defense of electoral democracy, yet cautions against expecting too much from it—motivating a thicker conception of democracy, writ large.
I provide an analysis of sentences of the form ‘To be F is to be G’ in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F is that which makes it G. This approach is hyperintensional, and possesses desirable logical and modal features. These sentences are reflexive, transitive and symmetric, and, if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I close by defining an asymmetric and irreflexive notion of analysis in terms of the reflexive and symmetric one.
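Schematically, the proposed truth condition might be put like this (my paraphrase; exact truth-maker semantics assigns each sentence the set of states exactly verifying it, here written with double brackets):

```latex
% Sketch of the analysis, writing [[.]] for the set of exact truth-makers:
% 'To be F is to be G' is true iff, for each thing a, whatever exactly
% makes a F is exactly what makes a G.
\text{`To be } F \text{ is to be } G\text{' is true} \iff \forall a : \; [\![Fa]\!] = [\![Ga]\!]
```

Reflexivity, symmetry, and transitivity then fall out of the identity on the right-hand side.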
In this paper, I will argue that there is a version of possibilism—inspired by the modal analogue of Kit Fine’s fragmentalism—that can be combined with a weakening of actualism. The reasons for analysing this view, which I call Modal Fragmentalism, are twofold. Firstly, it can enrich our understanding of the actualism/possibilism divide, by showing that, at least in principle, the adoption of possibilia does not correspond to an outright rejection of the actualist intuitions. Secondly, and more specifically, it can enrich our understanding of concretism, by proving that, at least in principle, the idea that objects have properties in an absolute manner is compatible with transworld identity.
Judgments of blame for others are typically sensitive to what an agent knows and desires. However, when people act negligently, they do not know what they are doing and do not desire the outcomes of their negligence. How, then, do people attribute blame for negligent wrongdoing? We propose that people attribute blame for negligent wrongdoing based on perceived mental control, or the degree to which an agent guides their thoughts and attention over time. To acquire information about others’ mental control, people self-project their own perceived mental control to anchor third-personal judgments about mental control and concomitant responsibility for negligent wrongdoing. In four experiments (N = 841), we tested whether perceptions of mental control drive third-personal judgments of blame for negligent wrongdoing. Study 1 showed that the ease with which people can counterfactually imagine an individual being non-negligent mediated the relationship between judgments of control and blame. Studies 2a and 2b indicated that perceived mental control has a strong effect on judgments of blame for negligent wrongdoing and that first-personal judgments of mental control are moderately correlated with third-personal judgments of blame for negligent wrongdoing. Finally, we used an autobiographical memory manipulation in Study 3 to make personal episodes of forgetfulness salient. Participants for whom past personal episodes of forgetfulness were made salient judged negligent wrongdoers less harshly compared to a control group for whom past episodes of negligence were not salient. Collectively, these findings suggest that first-personal judgments of mental control drive third-personal judgments of blame for negligent wrongdoing and indicate a novel role for counterfactual thinking in the attribution of responsibility.
Samuel Alexander was a central figure of the new wave of realism that swept across the English-speaking world in the early twentieth century. His Space, Time, and Deity (1920a, 1920b) was taken to be the official statement of realism as a metaphysical system. But many historians of philosophy are quick to point out the idealist streak in Alexander’s thought. After all, as a student he was trained at Oxford in the late 1870s and early 1880s, as British Idealism was beginning to flourish. This naturally had some effect on his philosophical outlook, and it is said that his early work is overtly idealist. In this paper I examine his neglected and understudied reactions to British Idealism in the 1880s. I argue that Alexander was not an idealist during this period and should not be considered part of the British Idealist tradition, philosophically speaking.
Scholars have often thought that a monistic reading of Aristotle’s definition of the human good – in particular, one on which “best and most teleios virtue” refers to theoretical wisdom – cannot follow from the premises of the ergon argument. I explain how a monistic reading can follow from the premises, and I argue that this interpretation gives the correct rationale for Aristotle’s definition. I then explain that even though the best and most teleios virtue must be a single virtue, that virtue could in principle be a whole virtue that arises from the combination of all the others. I also clarify that the definition of the human good aims at capturing the nature of human eudaimonia only in its primary case.
Kraut and other neo-Aristotelians have argued that there is no such thing as absolute goodness. They admit only good in a kind, e.g. a good sculptor, and good for something, e.g. good for fish. What is the view of Aristotle? Mostly limiting myself to the Nicomachean Ethics, I argue that Aristotle is committed to things being absolutely good and also to a metaphysics of absolute goodness where there is a maximally best good that is the cause of the goodness of all other things in virtue of being their end. I begin by suggesting that the notion of good as an end, which is present in the first lines of the NE, is not obviously accounted for by good in a kind or good for something. I then give evidence that good in a kind and good for something can explain neither certain distinctions drawn between virtues nor the determinacy ascribed to what is good “in itself.” I argue contra Gotthelf that because several important arguments in the Nicomachean Ethics rely on comparative judgments of absolute value—e.g. “Man is the best of all animals”—Aristotle is committed to the existence of both absolute goodness and an absolutely best being. I focus on one passage, Aristotle’s division of goods in NE I 12, which presupposes this metaphysical picture.
According to the Rationality Constraint, our concept of belief imposes limits on how much irrationality is compatible with having beliefs at all. We argue that empirical evidence of human irrationality from the psychology of reasoning and the psychopathology of delusion undermines only the most demanding versions of the Rationality Constraint, which require perfect rationality as a condition for having beliefs. The empirical evidence poses no threat to more relaxed versions of the Rationality Constraint, which require only minimal rationality. Nevertheless, we raise problems for all versions of the Rationality Constraint by appealing to more extreme forms of irrationality that are continuous with actual cases of human irrationality. In particular, we argue that there are conceivable cases of “mad belief” in which populations of Lewisian madmen have beliefs that are not even minimally rational. This undermines Lewis’s claim that our ordinary concept of belief is a theoretical concept that is implicitly defined by its role in folk psychology. We argue that introspection gives us a phenomenal concept of belief that cannot be analyzed by applying Lewis’s semantics for theoretical terms.
The Generality Problem is widely recognized to be a serious problem for reliabilist theories of justification. James R. Beebe’s Statistical Solution is one of only a handful of attempted solutions that has garnered serious attention in the literature. In their recent response to Beebe, Julien Dutant and Erik J. Olsson successfully refute Beebe’s Statistical Solution. This paper presents a New Statistical Solution that accommodates Dutant and Olsson’s objections, dodges the serious problems that trouble rival solutions, and retains the theoretical virtues that made Beebe’s solution so attractive in the first place. There indeed exists a principled, rigorous, conceptually sparse, and plausible solution to the Generality Problem: it is the New Statistical Solution.
The verb ‘to know’ can be used both in ascriptions of propositional knowledge and in ascriptions of knowledge of acquaintance. In the formal epistemology literature, the former use of ‘know’ has attracted considerable attention, while the latter is typically regarded as derivative. This attitude may be unsatisfactory for those philosophers who, like Russell, are not willing to think of knowledge of acquaintance as a subsidiary or dependent kind of knowledge. In this paper we outline a logic of knowledge of acquaintance in which ascriptions like ‘Mary knows Smith’ are regarded as formally interesting in their own right, remaining neutral on their relation to ascriptions of propositional knowledge. The resulting logical framework, which is based on Hintikka’s modal approach to epistemic logic, provides a fresh perspective on various issues and notions at play in the philosophical debate on acquaintance.
I describe and motivate Rational Internalism, a principle concerning the relationship between motivating reasons (which explain actions) and normative reasons (which justify actions). I use this principle to construct a novel argument against Objectivist theories of normative reasons, which hold that facts about normative reasons can be analyzed in terms of an independently specified class of normative or evaluative facts. I then argue for an alternative theory of normative reasons, the Reasoning View, which is consistent with both Rational Internalism and one standard motivation for Objectivism.
According to the Reasoning View about normative reasons, facts about normative reasons for action can be understood in terms of facts about the norms of practical reasoning. I argue that this view is subject to an overlooked class of counterexamples, familiar from debates about Subjectivist theories of normative reasons. Strikingly, the standard strategy Subjectivists have used to respond to this problem cannot be adapted to the Reasoning View. I think there is a solution to this problem, however. I argue that the norms of practical reasoning, like the norms of theoretical reasoning, are characteristically defeasible, in a sense I make precise. Recognizing this property of those norms makes space for a solution to the problem. The resulting view is in a way analogous to the familiar defeasibility theory of knowledge, but it avoids a standard objection to that theory.
There is an old meta-philosophical worry: very roughly, metaphysical theories have no observational consequences, and so the study of metaphysics has no value. The worry has been around in some form since the rise of logical positivism in the early twentieth century but has seen a bit of a renaissance recently. In this paper, I provide an apology for metaphysics in the face of this kind of concern. The core of the argument is this: pure mathematics detaches from science in much the same manner as metaphysics, and yet it is valuable nonetheless. The source of value enjoyed by pure mathematics extends to metaphysics as well. Accordingly, if one denies that metaphysics has value, then one is forced to deny that pure mathematics has value. The argument places an added burden on the sceptic of metaphysics. If one truly believes that metaphysics is worthless (as some philosophers do), then one must give up on pure mathematics as well.
The leading reductive approaches to shared agency model that phenomenon in terms of complexes of individual intentions, understood as plan-laden commitments. Yet not all agents have such intentions, and non-planning agents such as small children and some non-human animals are clearly capable of sophisticated social interactions. But just how robust are their social capacities? Are non-planning agents capable of shared agency? Existing theories of shared agency have little to say about these important questions. I address this lacuna by developing a reductive account of the social capacities of non-planning agents, which I argue supports the conclusion that they can enjoy shared agency. The resulting discussion offers a fine-grained account of the psychological capacities that can underlie shared agency, and produces a recipe for generating novel hypotheses concerning why some agents do not engage in shared agency.
This paper examines the interplay of semantics and pragmatics within the domain of film. Films are made up of individual shots strung together in sequences over time. Though each shot is disconnected from the next, combinations of shots still convey coherent stories that take place in continuous space and time. How is this possible? The semantic view of film holds that film coherence is achieved in part through a kind of film language, a set of conventions which govern the relationships between shots. In this paper, we develop and defend a new version of the semantic view. We articulate it for a pair of conventions that govern spatial relations between viewpoints. One such rule is already well known; sometimes called the “180° Rule,” we term it the X-Constraint; to this we add a previously unrecorded rule, the T-Constraint. As we show, both have the effect, in different ways, of limiting the way that viewpoint can shift through space from shot to shot over the course of a film sequence. Such constraints, we contend, are analogous to relations of discourse coherence that are widely recognized in the linguistic domain. If film is to have a language, it is a language made up of rules like these.
According to Phenomenal Conservatism (PC), if it seems to a subject S that P, S thereby has some degree of (defeasible) justification for believing P. But what is it for P to seem true? Answering this question is vital for assessing what role (if any) such states can play. Many have appeared to adopt a kind of non-reductionism that construes seemings as intentional states which cannot be reduced to more familiar mental states like beliefs or sensations. In this paper I aim to show that reductive accounts need to be taken more seriously by illustrating the plausibility of identifying seemings with conscious inclinations to form a belief. I close the paper by briefly considering the implications such an analysis might have for views such as PC.
Aristotle analyses a large range of objects as composites of matter and form. But how exactly should we understand the relation between the matter and form of a composite? Some commentators have argued that forms themselves are somehow material, that is, that forms are impure. Others have denied that claim and argued for the purity of forms. In this paper, I develop a new purist interpretation of Metaphysics Z.10-11, a text central to the debate, which I call ‘hierarchical purism’. I argue that hierarchical purism can overcome the difficulties faced by previous versions of purism as well as by impurism. Roughly, on hierarchical purism, each composite can be considered and defined in two different ways: from the perspective of metaphysics, composites are considered only insofar as they have forms, and defined purely formally; from the perspective of physics, composites are considered insofar as they have forms and matter, and defined with reference to both. Moreover, while the metaphysical definition is a definition in the strict sense of ‘definition’, the physical definition is a definition in a loose sense. Analogous points hold for intelligible composites and geometry. Finally, neither sort of definitional practice implies that, for Aristotle, forms are impure.
Existing approaches to campaign ethics fail to adequately account for the “arms races” incited by competitive incentives in the absence of effective sanctions for destructive behaviors. By recommending scrupulous devotion to unenforceable norms of honesty, these approaches require ethical candidates either to quit or to lose. To better understand the complex dilemmas faced by candidates, therefore, we turn first to the tradition of “adversarial ethics,” which aims to enable ethical participants to compete while preventing the most destructive excesses of competition. As we demonstrate, however, elections present even more difficult challenges than other adversarial contexts, because no centralized regulation is available to halt potential arms races. Turning next to recent scholarship on populism and partisanship, we articulate an alternative framework for campaign ethics, which allows candidates greater room to maneuver in their appeals to democratic populations while nevertheless requiring adherence to norms of social and political pluralism.
With Being Me Being You, Samuel Fleischacker provides a reconstruction and defense of Adam Smith’s account of empathy, and of the role it plays in building moral consensus, motivating moral behavior, and correcting our biases, prejudices, and tendency to demonize one another. He sees this book as an intervention in recent debates about the role that empathy plays in our morality. For some, such as Paul Bloom, Joshua Greene, Jesse Prinz, and others, empathy, or our capacity for fellow-feeling, tends to misguide us in the best of cases, and more often reinforces faction and tribalism in morals and politics. These utilitarians, as Fleischacker refers to them, propose that empathy take a back seat to cost-benefit analysis in moral decision-making. As an intervention, the book is largely successful. Fleischacker’s defense of empathy is nuanced and escapes the myopic enthusiasm to which many partisans of empathy are prone. Anyone looking to understand the relationship between empathy and morality would do well to grapple with Being Me Being You. Still, Fleischacker overlooks the likelihood that Smith himself would be less convinced that greater empathy can help us overcome the great challenges of our time.
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you know.” This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word “know” independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study the knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
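Schematically, the proposal reduces to a one-line definition (the notation Enum(A), for the list A would produce in response to the command, is mine):

```latex
% Sketch of the proposed knowledge definition, for an AGI A and an
% EA-sentence phi. Enum(A) is the enumeration A would produce when
% commanded: "Enumerate all the EA-sentences which you know."
A \text{ knows } \varphi \;\iff\; \varphi \in \mathrm{Enum}(A)
```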
The thesis of the theory-ladenness of observations, in its various guises, is widely considered either ill-conceived or harmless to the rationality of science. The latter view rests partly on the work of the proponents of New Experimentalism, who have argued, among other things, that experimental practices are efficient in guarding against any epistemological threat posed by theory-ladenness. In this paper I show that one can generate a thesis of theory-ladenness for experimental practices from an influential New Experimentalist account. The notion I introduce for this purpose is the concept of ‘theory-driven data reliability judgments’ (TDRs), according to which theories which are sought to be tested with a particular set of data guide reliability judgments about those very same data. I provide various prominent historical examples to show that TDRs are used by scientists to resolve data conflicts. I argue that the rationality of the practices which employ TDRs can be saved if the independent support of the theories driving TDRs is construed in a particular way.
Creativity pervades human life. It is the mark of individuality, the vehicle of self-expression, and the engine of progress in every human endeavor. It also raises a wealth of neglected and yet evocative philosophical questions: What is the role of consciousness in the creative process? How does the audience for a work of art influence its creation? How can creativity emerge through childhood pretending? Do great works of literature give us insight into human nature? Can a computer program really be creative? How do we define creativity in the first place? Is it a virtue? What is the difference between creativity in science and art? Can creativity be taught? The new essays that comprise The Philosophy of Creativity take up these and other key questions and, in doing so, illustrate the value of interdisciplinary exchange. Written by leading philosophers and psychologists involved in studying creativity, the essays integrate philosophical insights with empirical research.
CONTENTS
I. Introduction: Introducing The Philosophy of Creativity (Elliot Samuel Paul and Scott Barry Kaufman)
II. The Concept of Creativity: 1. An Experiential Account of Creativity (Bence Nanay)
III. Aesthetics & Philosophy of Art: 2. Creativity and Insight (Gregory Currie); 3. The Creative Audience: Some Ways in which Readers, Viewers and/or Listeners Use their Imaginations to Engage Fictional Artworks (Noël Carroll); 4. The Products of Musical Creativity (Christopher Peacocke)
IV. Ethics & Value Theory: 5. Performing Oneself (Owen Flanagan); 6. Creativity as a Virtue of Character (Matthew Kieran)
V. Philosophy of Mind & Cognitive Science: 7. Creativity and Not So Dumb Luck (Simon Blackburn); 8. The Role of Imagination in Creativity (Dustin Stokes); 9. Creativity, Consciousness, and Free Will: Evidence from Psychology Experiments (Roy F. Baumeister, Brandon J. Schmeichel, and C. Nathan DeWall); 10. The Origins of Creativity (Elizabeth Picciuto and Peter Carruthers); 11. Creativity and Artificial Intelligence: A Contradiction in Terms? (Margaret Boden)
VI. Philosophy of Science: 12. Hierarchies of Creative Domains: Disciplinary Constraints on Blind-Variation and Selective-Retention (Dean Keith Simonton)
VII. Philosophy of Education (& Education of Philosophy): 13. Educating for Creativity (Berys Gaut); 14. Philosophical Heuristics (Alan Hájek)
Samuel Alexander was one of the first realists of the twentieth century to defend a theory of categories. He thought that the categories are genuinely real and grounded in the intrinsic nature of Space-Time. I present his reduction of the categories in terms of Space-Time, articulate his account of categorial structure and completeness, and offer an interpretation of what he thought the nature of the categories really was. I then argue that his theory of categories has some advantages over competing theories of his day, and finally draw some important lessons that we can learn from his realist yet reductionist theory of categories.
I argue that the famous discussion of substance and essence in Aristotle’s Metaphysics Z offers a direct and positive response to the central question of ‘first philosophy’ or ‘metaphysics’ as to the first principles and causes of being qua being: Z is designed to establish that essences are the first principles and causes of composite substances insofar as they are. Two moves are crucial to my argument. First, I argue that the goal of the final chapter of Z (that is, Z.17) is to give an account of essences as the first causes of being of composite substances. Second, I argue that the guiding question of Z, ‘What is substance?’, should be understood as a causal question that seeks the ‘what it is’ of composite substances. Overall, contrary to prominent interpretations, it emerges that Z is neither an independent treatise on substance nor a negative aporetic contribution to first philosophy but rather a core part of Aristotle’s positive first-philosophical project. I also argue that this reading of Z is compatible with Aristotle’s characterization of first philosophy as ‘theology’.
Traditionally, logic has been the dominant formal method within philosophy. Are logical methods still dominant today, or have the types of formal methods used in philosophy changed in recent times? To address this question, we coded a sample of philosophy papers from the late 2000s and from the late 2010s for the formal methods they used. The results indicate that the proportion of papers using logical methods remained more or less constant over that time period, but the proportion of papers using probabilistic methods was approximately three times higher in the late 2010s than it was in the late 2000s. Further analyses explored this change by looking more closely at specific methods, specific levels of technical engagement, and specific subdisciplines within philosophy. These analyses indicate that the increasing proportion of papers using probabilistic methods was pervasive, not confined to particular probabilistic methods, levels of sophistication, or subdisciplines.
Proponents of the utilitarian animal welfare argument (AWA) for veganism maintain that it is reasonable to expect that adopting a vegan diet will decrease animal suffering. In this paper I argue otherwise. I maintain that (i) there are plausible scenarios in which refraining from meat-consumption will not decrease animal suffering; (ii) the utilitarian AWA rests on a false dilemma; and (iii) there are no reasonable grounds for the expectation that adopting a vegan diet will decrease animal suffering. The paper is divided into four sections. In the first, I set out the utilitarian AWA in its original form. I give some background and I distinguish it from other, related arguments. In the second, I discuss the causal impotence objection, a popular objection to the utilitarian AWA. I explain how the objection works by means of a conceptual distinction between consumers and producers. In the third, I explain how proponents of the utilitarian AWA respond to this objection. In particular, I set out in some detail what I call the expected utility response. In the fourth and final section, I use the three objections noted above to explain why I do not find this response convincing.
In Nicomachean Ethics 1.7, Aristotle gives a definition of the human good, and he does so by means of the “ergon argument.” I clear the way for a new interpretation of this argument by arguing that Aristotle does not think that the ergon of something is always the proper activity of that thing. Though he has a single concept of an ergon, Aristotle identifies the ergon of an X as an activity in some cases but a product in others, depending on the sort of thing the X is—for while the ergon of the eye is seeing, the ergon of a sculptor is a sculpture. This alternative interpretation of Aristotle’s concept of an ergon allows the key explanatory middle term of the ergon argument to be what, I argue, it ought to be: “the best achievement of a human.”