The aim of this article is to analyze the uses Werner Heisenberg made of Greek philosophy in his work. These uses are related not only to the internal argumentation of the German physicist's texts, but also to the historical context and to the conflicts and debates among the various interpretations of quantum theory during the first half of the twentieth century. We begin with a general presentation of quantum theory and of the presence of philosophy in Heisenberg's work, followed by a case study of Heisenberg's appropriation of the thought of Leucippus, Democritus, Heraclitus, Plato, and Aristotle.
Fragmentalism was originally introduced as a new A-theory of time. It was further refined and discussed, and different developments of the original insight have been proposed. In a celebrated paper, Jonathan Simon contends that fragmentalism delivers a new realist account of the quantum state—which he calls conservative realism—according to which: the quantum state is a complete description of a physical system, the quantum state is grounded in its terms, and the superposition terms are themselves grounded in local goings-on about the system in question. We will argue that fragmentalism, at least along the lines proposed by Simon, does not offer a new, satisfactory realist account of the quantum state. This raises the question of whether there are other viable forms of quantum fragmentalism.
Creativity pervades human life. It is the mark of individuality, the vehicle of self-expression, and the engine of progress in every human endeavor. It also raises a wealth of neglected and yet evocative philosophical questions: What is the role of consciousness in the creative process? How does the audience for a work of art influence its creation? How can creativity emerge through childhood pretending? Do great works of literature give us insight into human nature? Can a computer program really be creative? How do we define creativity in the first place? Is it a virtue? What is the difference between creativity in science and art? Can creativity be taught? The new essays that comprise The Philosophy of Creativity take up these and other key questions and, in doing so, illustrate the value of interdisciplinary exchange. Written by leading philosophers and psychologists involved in studying creativity, the essays integrate philosophical insights with empirical research.
CONTENTS
I. Introduction: Introducing The Philosophy of Creativity (Elliot Samuel Paul and Scott Barry Kaufman)
II. The Concept of Creativity: 1. An Experiential Account of Creativity (Bence Nanay)
III. Aesthetics & Philosophy of Art: 2. Creativity and Insight (Gregory Currie); 3. The Creative Audience: Some Ways in which Readers, Viewers and/or Listeners Use their Imaginations to Engage Fictional Artworks (Noël Carroll); 4. The Products of Musical Creativity (Christopher Peacocke)
IV. Ethics & Value Theory: 5. Performing Oneself (Owen Flanagan); 6. Creativity as a Virtue of Character (Matthew Kieran)
V. Philosophy of Mind & Cognitive Science: 7. Creativity and Not So Dumb Luck (Simon Blackburn); 8. The Role of Imagination in Creativity (Dustin Stokes); 9. Creativity, Consciousness, and Free Will: Evidence from Psychology Experiments (Roy F. Baumeister, Brandon J. Schmeichel, and C. Nathan DeWall); 10. The Origins of Creativity (Elizabeth Picciuto and Peter Carruthers); 11. Creativity and Artificial Intelligence: A Contradiction in Terms? (Margaret Boden)
VI. Philosophy of Science: 12. Hierarchies of Creative Domains: Disciplinary Constraints on Blind-Variation and Selective-Retention (Dean Keith Simonton)
VII. Philosophy of Education (& Education of Philosophy): 13. Educating for Creativity (Berys Gaut); 14. Philosophical Heuristics (Alan Hájek)
The two justificatory roles of the social contract are establishing whether or not a state is legitimate simpliciter and establishing whether any particular individual is politically obligated to obey the dictates of its governing institutions. Rawls's theory is obviously designed to address the first role but less obviously the other. Rawls does offer a duty-based theory of political obligation that has been criticized by neo-Lockean A. John Simmons. I assess Simmons's criticisms and the possible responses that could be made to them, including those offered by Samuel Freeman. I conclude they rest on a Rawlsian equivocation and ultimately fail.
Rawls's theory of political obligation attempts to avoid the obvious flaws of a Lockean consent model. Rawls rejects a requirement of consent for two reasons: First, the consent requirement of Locke's theory was intended to ensure that the liberty and equality of the contractors was respected, but this end is better achieved by the principles chosen in the original position, which order the basic structure of a society into which citizens are born. Second, "basing our political ties upon a principle of obligation would complicate the assurance problem." Instead, Rawls offers a duty-based account, whereby we are duty-bound to support and comply with just institutions that apply to us. A. John Simmons argues that Rawls cannot meet the particularity requirement of establishing political obligation to only one state. I assess the response that this requirement can be met by the political constructivist element of Rawls's theory. I conclude that there are fatal flaws in this response.
We find meaning and value in our lives by engaging in everyday projects. But, according to a recent argument by Samuel Scheffler, this value doesn't depend merely on what the projects are about. In many cases, it depends also on the future generations that will replace us. By imagining the imminent extinction of humanity soon after our own deaths, we can recognize both that much of our current valuing depends on a background confidence in the ongoing survival of humanity and that the survival and flourishing of those future generations matters to us. After presenting Scheffler's argument, I will explore two twentieth century precursors—Hans Morgenthau and Simone de Beauvoir—before returning to Scheffler to see that his argument can not only show us why future generations matter, but can also give us hope for immortality and a blueprint for embracing a changing future.
The aim of this collection is to offer accessible, introductory commentaries on the classic texts they accompany, opening up avenues of discussion on the theme of capitalism. In this spirit, Emmanuel Chaput launches the debate by commenting on Pierre-Joseph Proudhon's text "Qu'est-ce que la propriété ?". Karl Marx's texts are of course not left aside: Samuel-Élie Lesage engages firmly along this path by discussing Marx's L'idéologie allemande, Christiane Bailey offers a different approach to the Marxian oeuvre by addressing its treatment of the animal question in excerpts from the Manifeste du parti communiste and Travail salarié et capital, and Mathieu Joffre-Lainé presents a fine analysis of the properly economic questions of Capital. Turning to alternatives to capitalism in the thought of Léon Bourgeois, Éliot Litalien comments on La Solidarité and Simon-Pierre Chevarie-Cossette tackles the analysis of the Essai d'une philosophie de la solidarité. Finally, linking capitalism, patriarchy, and political power, Tara Chanady proposes a reading of Emma Goldman's text Du mariage et de l'amour. Marie-Eve Jalbert's conclusion adopts a contemporary, critical perspective, dissecting the critique of socialism advanced by Friedrich A. Hayek.
John Smith was among the first of the Cambridge Platonists. He was therefore in a position to influence not only his contemporaries but all those who followed after him well into the twentieth century and beyond. Well-established lines of influence both to and from Whichcote, Cudworth, and More are explored first, before moving on to less well-known connections to Bishop Simon Patrick and mathematician Isaac Barrow. Smith's continued significance for eighteenth-century theology is demonstrated through discussion of his inspiration of the doctrines of spiritual sensation developed by Jonathan Edwards and John Wesley. Special notice is also given to Smith's authority as an interpreter of Biblical prophecy through the eighteenth and nineteenth centuries. The chapter concludes with a look at Smith's influence on Samuel Taylor Coleridge, Ralph Waldo Emerson, William Ralph Inge, Rufus Jones, Pierre Hadot, and others. This chapter offers a broad, but highly selective, overview of the reception and influence of Smith's life and work. It is intended, however, as a call for future research more than as an authoritative presentation of Smith's legacy. For, if the Cambridge Platonists have been underappreciated until recently, none of them have been unjustly ignored as consistently as Smith.
Samuel Kerstein's recent (2013) How To Treat Persons is an ambitious attempt to develop a new, broadly Kantian account of what it is to treat others as mere means and what it means to act in accordance with others' dignity. His project is explicitly nonfoundationalist: his interpretation stands or falls on its ability to accommodate our pretheoretic intuitions, and he does an admirable job of handling carefully a range of well fleshed out and sometimes subtle examples. In what follows, I shall give a quick summary of the chapters and then say two good things about the book and one critical thing.
Faced with the choice between creating a risk of harm and taking a precaution against that risk, should I take the precaution? Does the proper analysis of this trade-off require a maximizing, utilitarian approach? If not, how does one properly analyze the trade-off? These questions are important, for we often are uncertain about the effects of our actions. Accordingly, we often must consider whether our actions create an unreasonable risk of injury — that is, whether our actions are negligent.
From the end of the twelfth century until the middle of the eighteenth century, the concept of a right of necessity – i.e., the moral prerogative of an agent, given certain conditions, to use or take someone else's property in order to get out of his plight – was common among moral and political philosophers, who took it to be a valid exception to the standard moral and legal rules. In this essay, I analyze Samuel Pufendorf's account of such a right, founded on the basic instinct of self-preservation and on the notion that, in civil society, we have certain minimal duties of humanity towards each other. I review Pufendorf's secularized account of natural law, his conception of the civil state, and the function of private property. I then turn to his criticism of Grotius's understanding of the right of necessity as a retreat to the pre-civil right of common use, and defend his account against some recent criticisms. Finally, I examine the conditions deemed necessary and jointly sufficient for this right to be claimable, and conclude by pointing to the main strengths of this account. Keywords: Samuel Pufendorf, Hugo Grotius, right of necessity, duty of humanity, private property.
Samuel Alexander was a central figure of the new wave of realism that swept across the English-speaking world in the early twentieth century. His Space, Time, and Deity (1920a, 1920b) was taken to be the official statement of realism as a metaphysical system. But many historians of philosophy are quick to point out the idealist streak in Alexander's thought. After all, as a student he was trained at Oxford in the late 1870s and early 1880s as British Idealism was beginning to flourish. This naturally had some effect on his philosophical outlook and it is said that his early work is overtly idealist. In this paper I examine his neglected and understudied reactions to British Idealism in the 1880s. I argue that Alexander was not an idealist during this period and should not be considered as part of the British Idealist tradition, philosophically speaking.
Which political principles should govern global politics? In his new book, Simon Caney engages with the work of philosophers, political theorists, and international relations scholars in order to examine some of the most pressing global issues of our time. Are there universal civil, political, and economic human rights? Should there be a system of supra-state institutions? Can humanitarian intervention be justified?
According to many philosophers, rationality is, at least in part, a matter of one's attitudes cohering with one another. Theorists who endorse this idea have devoted much attention to formulating various coherence requirements. Surprisingly, they have said very little about what it takes for a set of attitudes to be coherent in general. We articulate and defend a general account on which a set of attitudes is coherent just in case and because it is logically possible for the attitudes to be jointly satisfied in the sense of jointly fitting the world. In addition, we show how the account can help adjudicate debates about how to formulate various rational requirements.
With Being Me Being You, Samuel Fleischacker provides a reconstruction and defense of Adam Smith's account of empathy, and of the role it plays in building moral consensus, motivating moral behavior, and correcting our biases, prejudices, and tendency to demonize one another. He sees this book as an intervention in recent debates about the role that empathy plays in our morality. For some, such as Paul Bloom, Joshua Greene, Jesse Prinz, and others, empathy, or our capacity for fellow-feeling, tends to misguide us in the best of cases, and more often reinforces faction and tribalism in morals and politics. These utilitarians, as Fleischacker refers to them, propose that empathy take a back seat to cost-benefit analysis in moral decision-making. As an intervention, the book is largely successful. Fleischacker's defense of empathy is nuanced and escapes the myopic enthusiasm to which many partisans of empathy are prone. Anyone looking to understand the relationship between empathy and morality would do well to grapple with Being Me Being You. Still, Fleischacker overlooks the likelihood that Smith himself would be less convinced that greater empathy can help us overcome the great challenges of our time.
This paper examines the interplay of semantics and pragmatics within the domain of film. Films are made up of individual shots strung together in sequences over time. Though each shot is disconnected from the next, combinations of shots still convey coherent stories that take place in continuous space and time. How is this possible? The semantic view of film holds that film coherence is achieved in part through a kind of film language, a set of conventions which govern the relationships between shots. In this paper, we develop and defend a new version of the semantic view. We articulate it for a pair of conventions that govern spatial relations between viewpoints. One such rule is already well-known; sometimes called the "180° Rule," we term it the X-Constraint; to this we add a previously unrecorded rule, the T-Constraint. As we show, both have the effect, in different ways, of limiting the way that viewpoint can shift through space from shot to shot over the course of a film sequence. Such constraints, we contend, are analogous to relations of discourse coherence that are widely recognized in the linguistic domain. If film is to have a language, it is a language made up of rules like these.
We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept the penalty in order to protect a treasure because losing the treasure would hurt even more). We describe an RL agent transformation which allows RL agents that would not otherwise do so to perform some limited self-reflection to learn the training environments in question.
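To make the mechanic concrete, here is a minimal Python sketch of the reaction penalty described above, under an assumed toy observation format; the names (strip_pseudo_visible, shaped_reward, REACTION_PENALTY) are illustrative and not taken from the paper.

```python
# Minimal, self-contained sketch of the reaction penalty; all names are
# illustrative, not the authors' implementation.

REACTION_PENALTY = 1.0  # cost the NPC pays for visibly reacting

def strip_pseudo_visible(observation):
    """Return a counterfactual observation with pseudo-visible players removed.
    Here an observation is assumed to be a dict holding a list of entities."""
    entities = [e for e in observation["entities"] if not e.get("pseudo_visible")]
    return {**observation, "entities": entities}

def shaped_reward(npc_policy, observation, base_reward):
    """Penalize the NPC when its action differs from the action it would take
    if the pseudo-visible player were invisible."""
    actual_action = npc_policy(observation)
    counterfactual_action = npc_policy(strip_pseudo_visible(observation))
    penalty = REACTION_PENALTY if actual_action != counterfactual_action else 0.0
    return base_reward - penalty
```

On this sketch, an NPC keeps its base reward but pays whenever its behavior betrays that it has noticed a pseudo-visible player, so it only reacts when the stakes (e.g., losing a treasure) outweigh the penalty.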
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: "Not without outside help". This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
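The election intuition can be illustrated with a toy comparator over a finite pool of environments. This is only a sketch of the idea, not the abstract family of comparators studied in the paper, and the helper evaluate is an assumed aggregate-reward function.

```python
# Toy majority-vote comparator illustrating the election idea; `evaluate`
# is assumed to return the aggregate reward an agent earns in an environment.

def compare_agents(agent_a, agent_b, environments, evaluate):
    """Each environment casts a vote for the agent that earns more reward in
    it (ties are abstentions); return the overall winner, or None."""
    votes_a = votes_b = 0
    for env in environments:
        reward_a = evaluate(agent_a, env)
        reward_b = evaluate(agent_b, env)
        if reward_a > reward_b:
            votes_a += 1
        elif reward_b > reward_a:
            votes_b += 1
    if votes_a > votes_b:
        return agent_a
    if votes_b > votes_a:
        return agent_b
    return None  # the comparator does not rank either agent above the other
```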
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
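In symbols, the definition sketched above might be written as follows; the notation is ours rather than the paper's.

```latex
% Notation is ours: |n| denotes the computable ordinal coded by an ordinal
% notation n, and "A knows ..." is the agent's knowledge predicate.
\[
  \mathrm{Int}(A) \;=\; \sup \{\, |n| \;:\; A \text{ knows that } n \text{ is a code of a computable ordinal} \,\}.
\]
```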
Recent years have witnessed growing controversy over the "wisdom of the multitude." As epistemic critics drawing on vast empirical evidence have cast doubt on the political competence of ordinary citizens, epistemic democrats have offered a defense of democracy grounded largely in analogies and formal results. So far, I argue, the critics have been more convincing. Nevertheless, democracy can be defended on instrumental grounds, and this article demonstrates an alternative approach. Instead of implausibly upholding the epistemic reliability of average voters, I observe that competitive elections, universal suffrage, and discretionary state power disable certain potent mechanisms of elite entrenchment. By reserving particular forms of power for the multitude of ordinary citizens, they make democratic states more resistant to dangerous forms of capture than non-democratic alternatives. My approach thus offers a robust defense of electoral democracy, yet cautions against expecting too much from it—motivating a thicker conception of democracy, writ large.
The descriptions 'good' and 'bad' are examples of thin concepts, as opposed to 'kind' or 'cruel', which are thick concepts. Simon Kirchin provides one of the first full-length studies of the crucial distinction between 'thin' and 'thick' concepts, which is fundamental to many debates in ethics, aesthetics and epistemology.
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
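For orientation, here is the classical Archimedean property of the reals that the paper generalizes, together with the standard reason non-Archimedean reward structures resist real-valued measurement; the exact form of the paper's generalization may differ.

```latex
% Classical Archimedean property of the reals (the paper's generalization to
% non-numeric structures is stated differently, but builds on this idea):
\[
  \forall x, y > 0 \;\, \exists n \in \mathbb{N} : \;
  \underbrace{x + x + \cdots + x}_{n \text{ times}} > y .
\]
% A reward structure violating an analogue of this condition (e.g., a reward
% r_2 that no finite number of copies of a positive reward r_1 can outweigh)
% is non-Archimedean; by Hölder's theorem, such an ordered structure cannot
% be embedded in the reals in a way that preserves both order and addition.
```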
Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
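For readers unfamiliar with the framework, the Legg-Hutter intelligence measure being extended here is standardly written as below; the informal cancellation argument in the comments is our gloss on the symmetry claim, not the paper's proof.

```latex
% Standard Legg--Hutter universal intelligence of a policy \pi with respect
% to a reference UTM U (K_U is Kolmogorov complexity, V^\pi_\mu the expected
% total reward of \pi in environment \mu):
\[
  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K_U(\mu)} \, V^{\pi}_{\mu} .
\]
% Rough gloss of the symmetry claim: if every environment \mu has a negated
% twin \mu^- (same dynamics, rewards replaced by equal-magnitude punishments)
% with K_U(\mu^-) = K_U(\mu), then a reward-ignoring policy behaves
% identically in \mu and \mu^-, so V^{\pi}_{\mu} + V^{\pi}_{\mu^-} = 0 and the
% paired terms cancel, giving \Upsilon(\pi) = 0.
```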
I provide an analysis of sentences of the form 'To be F is to be G' in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F is that which makes it G. This approach is hyperintensional, and possesses desirable logical and modal features. These sentences are reflexive, transitive and symmetric, and, if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I close by defining an asymmetric and irreflexive notion of analysis in terms of the reflexive and symmetric one.
Both short and long-term video-game play may result in superior performance on visual and attentional tasks. To further these findings, we compared the performance of experienced male video-game players (VGPs) and non-VGPs on a Simon-task. Experienced-VGPs began playing before the age of 10, had a minimum of 8 years of experience and a minimum play time of over 20 h per week over the past 6 months. Our results reveal a significantly reduced Simon-effect in experienced-VGPs relative to non-VGPs. However, this was true only for the right-responses, which typically show a greater Simon-effect than left-responses. In addition, experienced-VGPs demonstrated significantly quicker reaction times and more balanced left-versus-right-hand performance than non-VGPs. Our results suggest that experienced-VGPs can resolve response-selection conflicts more rapidly for right-responses than non-VGPs, and this may in part be underpinned by improved bimanual motor control.
Existing approaches to campaign ethics fail to adequately account for the "arms races" incited by competitive incentives in the absence of effective sanctions for destructive behaviors. By recommending scrupulous devotion to unenforceable norms of honesty, these approaches require ethical candidates either to quit or lose. To better understand the complex dilemmas faced by candidates, therefore, we turn first to the tradition of "adversarial ethics," which aims to enable ethical participants to compete while preventing the most destructive excesses of competition. As we demonstrate, however, elections present even more difficult challenges than other adversarial contexts, because no centralized regulation is available to halt potential arms races. Turning next to recent scholarship on populism and partisanship, we articulate an alternative framework for campaign ethics, which allows candidates greater room to maneuver in their appeals to democratic populations while nevertheless requiring adherence to norms of social and political pluralism.
In Metaphysics Z.6, Aristotle argues that each substance is the same as its essence. In this paper, I defend an identity reading of that claim. First, I provide a general argument for the identity reading, based on Aristotle’s account of sameness in number and identity. Second, I respond to the recent charge that the identity reading is incoherent, by arguing that the claim in Z.6 is restricted to primary substances and hence to forms.
In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard's idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard's intelligence measure is based on the latter growth-rate-measuring method, we survey other methods for measuring function growth rates, and exhibit the resulting Hibbard-like intelligence measures and taxonomies. Of particular interest, we obtain intelligence taxonomies based on Big-O and Big-Theta notation systems, which taxonomies are novel in that they challenge conventional notions of what an intelligence measure should look like. We discuss how intelligence measurement of sequence predictors can indirectly serve as intelligence measurement for agents with Artificial General Intelligence (AGI).
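As a reminder of the growth-rate notions these taxonomies build on, the standard Big-O and Big-Theta definitions are given below, together with a rough gloss (ours, not the paper's) of how a Theta-based taxonomy would classify predictors.

```latex
% Standard growth-rate notions underlying the proposed taxonomies:
\[
  f \in O(g) \iff \exists C, N \;\, \forall n > N : \; f(n) \le C\, g(n),
  \qquad
  f \in \Theta(g) \iff f \in O(g) \text{ and } g \in O(f).
\]
% Rough gloss (ours): a Big-Theta-style taxonomy places two sequence
% predictors in the same class when the runtimes of the hardest competitors
% they can defeat grow at the same \Theta-rate, instead of assigning each
% predictor a single real-valued score.
```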
The following seems to be a truism in modern day philosophy: No agent can have had other parents (IDENTITY). IDENTITY shows up in discussions of moral luck, parenting, gene editing, and population ethics. In this paper, I challenge IDENTITY. I do so by showing that the most plausible arguments that can be made in favor of IDENTITY do not withstand critical scrutiny. The paper is divided into four sections. In the first, I document the prevalence of IDENTITY. In the second, I examine a defense of IDENTITY on the basis of genetic considerations. In the third, I examine a defense of IDENTITY that I call gamete essentialism. In the fourth, I return to genetic considerations to wrap up the paper.
We provide an intuitive motivation for the hyperreal numbers via electoral axioms. We do so in the form of a Socratic dialogue, in which Protagoras suggests replacing big-oh complexity classes by real numbers, and Socrates asks some troubling questions about what would happen if one tried to do that. The dialogue is followed by an appendix containing additional commentary and a more formal proof.
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent's knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: "Enumerate all the EA-sentences which you know." This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word "know" independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
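One compact way to typeset this definition is given below; the symbol Enum(A) is our shorthand for the elicited enumeration, not notation taken from the paper.

```latex
% Enum(A) is our shorthand for the set of EA-sentences the AGI A would list
% if commanded "Enumerate all the EA-sentences which you know."
\[
  A \text{ knows } \varphi
  \quad\Longleftrightarrow\quad
  \varphi \in \mathrm{Enum}(A),
  \qquad \text{for every sentence } \varphi \text{ of Epistemic Arithmetic.}
\]
```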
In recent years, the number of studies examining mind wandering has increased considerably, and research on the topic has spread widely across various domains of psychological research. Although the term "mind wandering" has been used to refer to various cognitive states, researchers typically operationalize mind wandering in terms of "task-unrelated thought" (TUT). Research on TUT has shed light on the various task features that require people's attention, and on the consequences of task inattention. Important methodological and conceptual complications do persist, however, in current investigations of TUT. As we argue, these complications may be dampening the development of a more nuanced scientific account of TUT. Here, we outline three of the more prominent methodological and conceptual complications in the literature on TUT, and we discuss potential directions for researchers to take as they move forward in their investigations of TUT.
One shortcoming of the chain rule is that it does not iterate: it gives the derivative of f(g(x)), but not (directly) the second or higher-order derivatives. We present iterated differentials and a version of the multivariable chain rule which iterates to any desired level of derivative. We first present this material informally, and later discuss how to make it rigorous (a discussion which touches on formal foundations of calculus). We also suggest a finite calculus chain rule (contrary to Graham, Knuth and Patashnik's claim that "there's no corresponding chain rule of finite calculus").
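A second-order example makes the shortcoming concrete: differentiating a composite twice already requires the product rule on top of the chain rule, which is the gap an iterating formulation has to close.

```latex
% Applying the chain rule once gives the first derivative of a composite:
\[
  \frac{d}{dx} f(g(x)) = f'(g(x))\, g'(x).
\]
% Differentiating again needs the product rule as well, yielding
\[
  \frac{d^{2}}{dx^{2}} f(g(x)) = f''(g(x))\,\bigl(g'(x)\bigr)^{2} + f'(g(x))\, g''(x),
\]
% so the second-order formula is not obtained by naively iterating the
% first-order rule; this is the gap the iterated-differential chain rule is
% designed to close.
```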
According to the B-theory, the passage of time is an illusion. The B-theory therefore requires an explanation of this illusion before it can be regarded as fully satisfactory; yet very few B-theorists have taken up the challenge of trying to provide one. In this paper I take some first steps toward such an explanation by first making a methodological proposal, then a hypothesis about a key element in the phenomenology of temporal passage. The methodological proposal focuses on the representational content of the element of experience by virtue of which time seems to pass. The hypothesis involves the claim that the experience of change involves the representation of something enduring, rather than perduring, through any change.
Judgments of blame for others are typically sensitive to what an agent knows and desires. However, when people act negligently, they do not know what they are doing and do not desire the outcomes of their negligence. How, then, do people attribute blame for negligent wrongdoing? We propose that people attribute blame for negligent wrongdoing based on perceived mental control, or the degree to which an agent guides their thoughts and attention over time. To acquire information about others' mental control, people self-project their own perceived mental control to anchor third-personal judgments about mental control and concomitant responsibility for negligent wrongdoing. In four experiments (N = 841), we tested whether perceptions of mental control drive third-personal judgments of blame for negligent wrongdoing. Study 1 showed that the ease with which people can counterfactually imagine an individual being non-negligent mediated the relationship between judgments of control and blame. Studies 2a and 2b indicated that perceived mental control has a strong effect on judgments of blame for negligent wrongdoing and that first-personal judgments of mental control are moderately correlated with third-personal judgments of blame for negligent wrongdoing. Finally, we used an autobiographical memory manipulation in Study 3 to make personal episodes of forgetfulness salient. Participants for whom past personal episodes of forgetfulness were made salient judged negligent wrongdoers less harshly compared to a control group for whom past episodes of negligence were not salient. Collectively, these findings suggest that first-personal judgments of mental control drive third-personal judgments of blame for negligent wrongdoing and indicate a novel role for counterfactual thinking in the attribution of responsibility.
I explore the process of changes in the observability of entities and objects in science and how such changes impact two key issues in the scientific realism debate: the claim that predictively successful elements of past science are retained in current scientific theories, and the inductive defense of a specific version of inference to the best explanation with respect to unobservables. I provide a case-study of the discovery of radium by Marie Curie in order to show that the observability of some entities can change and that such changes are relevant for arguments seeking to establish the reliability of success-to-truth inferences with respect to unobservables.
Scientific realism driven by inference to the best explanation (IBE) takes empirically confirmed objects to exist, independent, pace empiricism, of whether those objects are observable or not. This kind of realism, it has been claimed, does not need probabilistic reasoning to justify the claim that these objects exist. But I show that there are scientific contexts in which a non-probabilistic IBE-driven realism leads to a puzzle. Since IBE can be applied in scientific contexts in which empirical confirmation has not yet been reached, realists will in these contexts be committed to the existence of empirically unconfirmed objects. As a consequence of such commitments, because they lack probabilistic features, the possible empirical confirmation of those objects is epistemically redundant with respect to realism.
This essay examines Samuel Beckett's Trilogy to specify the conditions under which we could make sense of practical necessity. Among other things, I will show how Ajax's "must" is connected to Molloy's attempt to visit his mother and to the need to keep talking that both Molloy and the Unnamable share. I will conclude that their dislocated pursuit of certainty reveals - among other things - how the conditions under which practical necessity can be properly experienced have been extirpated from our social and cultural context. Still, the fact that its vestiges nevertheless subsist provides some reason to regard practical necessity as a constitutive aspect of our agency. This will provide a particular sense in which the Unnamable may coherently claim: "I don't know, I'll never know, in the silence you don't know, you must go on, I can't go on, I will go on."
This research note argues that political theorists of refuge ought to consider the experiences of refugees after they have received asylum in the Global North. Currently, much of the literature concerning the duties of states towards refugees implicitly adopts a blanket approach, rather than considering how varied identities may affect the remedies available to displaced people. Given the prevalence of racism, xenophobia, and homophobia in the Global North, and the growing norm of dissident persecution in foreign territory, protection is not guaranteed after either territorial or legal admission. This research note considers the case of LGBTQ refugees in order to demonstrate the analytical potential of more inclusive and diverse normative approaches. Taking the origin and extension of harm seriously requires a conceptualization of sanctuary after asylum that accurately reflects the experiences of the displaced. In doing so, questions arise regarding the nature and efficacy of territorial asylum.
A distorted representation of one's own body is a diagnostic criterion and core psychopathology of both anorexia nervosa (AN) and bulimia nervosa (BN). Despite recent technical advances in research, it is still unknown whether this body image disturbance is characterized by body dissatisfaction and a low ideal weight and/or includes a distorted perception or processing of body size. In this article, we provide an update and meta-analysis of 42 articles summarizing measures and results for body size estimation (BSE) from 926 individuals with AN, 536 individuals with BN and 1920 controls. We replicate findings that individuals with AN and BN overestimate their body size as compared to controls (ES = 0.63). Our meta-regression shows that metric methods (BSE by direct or indirect spatial measures) yield larger effect sizes than depictive methods (BSE by evaluating distorted pictures), and that effect sizes are larger for patients with BN than for patients with AN. To interpret these results, we suggest a revised theoretical framework for BSE that accounts for differences between depictive and metric BSE methods regarding the underlying body representations (conceptual vs. perceptual, implicit vs. explicit). We also discuss clinical implications and argue for the importance of multimethod approaches to investigate body image disturbance.
We sometimes fail unwittingly to do things that we ought to do. And we are, from time to time, culpable for these unwitting omissions. We provide an outline of a theory of responsibility for unwitting omissions. We emphasize two distinctive ideas: (i) many unwitting omissions can be understood as failures of appropriate vigilance; and (ii) the sort of self-control implicated in these failures of appropriate vigilance is valuable. We argue that the norms that govern vigilance and the value of self-control explain culpability for unwitting omissions.
I am concerned with epistemic closure—the phenomenon in which some knowledge requires other knowledge. In particular, I defend a version of the closure principle in terms of analyticity: if an agent S knows that p is true, then S knows that all analytic parts of p are true as well. After targeting the relevant notion of analyticity, I argue that this principle accommodates intuitive cases and possesses the theoretical resources to avoid the preface paradox. I close by arguing that contextualists who maintain that knowledge attributions are closed within—but not between—linguistic contexts are tacitly committed to this principle's truth.
Experimental work on free will typically relies on deterministic stimuli to elicit judgments of free will. We call this the Vignette-Judgment model. We outline a problem with research based on this model. It seems that people either fail to respond to the deterministic aspects of vignettes when making judgments or that their understanding of determinism differs from researcher expectations. We provide some empirical evidence for this claim. In the end, we argue that people seem to lack facility with the concept of determinism, which calls into question the validity of experimental work operating under the Vignette-Judgment model. We also argue that alternative experimental paradigms are unlikely to elicit judgments that are philosophically relevant to questions about the metaphysics of free will.