My dissertation explores the ways in which Rudolf Carnap sought to make philosophy scientific by further developing recent interpretive efforts to explain Carnap's mature philosophical work as a form of engineering. It does this by looking in detail at his philosophical practice in his most sustained mature project, his work on pure and applied inductive logic. I first specify the sort of engineering Carnap is engaged in as involving an engineering design problem and then draw out the complications of design problems from current work in the history of engineering and technology studies. I then model Carnap's practice based on those lessons and uncover ways in which Carnap's technical work in inductive logic takes some of these lessons on board. This shows ways in which Carnap's philosophical project subtly changes right through his late work on induction, providing an important corrective to interpretations that ignore the work on inductive logic. Specifically, I show that paying attention to the historical details of Carnap's attempt to apply his work in inductive logic to decision theory and theoretical statistics in the 1950s and 1960s helps us understand how Carnap develops and rearticulates the philosophical point of the practical/theoretical distinction in his late work, thus offering a new interpretation of Carnap's technical work within the broader context of philosophy of science and analytical philosophy in general.
Contemporary recognition theory has developed powerful tools for understanding a variety of social problems through the lens of misrecognition. It has, however, paid somewhat less attention to how to conceive of appropriate responses to misrecognition, usually making the tacit assumption that the proper societal response is adequate or proper affirmative recognition. In this paper I argue that, although affirmative recognition is one potential response to misrecognition, it is not the only such response. In particular, I make the case for derecognition in some cases: derecognition through the systematic deinstitutionalization or uncoupling of various reinforcing components of social institutions, components whose tight combination in one social institution has led to the misrecognition in the first place. I make the case through the example of recent United States debates over marriage, especially but not only with respect to gay marriage. I argue that the proper response to the misrecognition of sexual minorities embodied in exclusively heterosexual marriage codes is not affirmative recognition of lesbian and gay marriages, but rather the systematic derecognition of legal marriage as currently understood. I also argue that the systematic misrecognition of women that occurs under the contemporary institution of marriage would likewise best be addressed through legal uncoupling of the heterogeneous social components embodied in the contemporary social institution of marriage.
In Between Facts and Norms (1992) Habermas set out a theory of law and politics that is linked both to our high normative expectations and to the realities consequent upon the practices and institutions meant to put them into effect. The article discusses Hugh Baxter’s Habermas: The Discourse Theory of Law and Democracy and the drawbacks he finds in Habermas’ theory. It focuses on raising questions about and objections to some of the author’s leading claims.
This paper argues that political civility is actually an illusionistic ideal and that, as such, realism counsels that we acknowledge both its promise and peril. Political civility is, I will argue, a tension-filled ideal. We have good normative reasons to strive for and encourage more civil political interactions, as they model our acknowledgement of others as equal citizens and facilitate high-quality democratic problem-solving. But we must simultaneously be attuned to civility's limitations, its possible pernicious side-effects, and its potential for strategic manipulation and oppressive abuse, particularly in contemporary, pluralistic and heterogeneous societies.
Throughout the biological and biomedical sciences there is a growing need for prescriptive 'minimum information' (MI) checklists specifying the key information to include when reporting experimental results; such checklists are beginning to find favor with experimentalists, analysts, publishers and funders alike. They aim to ensure that methods, data, analyses and results are described to a level sufficient to support unambiguous interpretation, sophisticated search, reanalysis, experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, MI checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the evolution of a single checklist may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring inhibit their use in combination. Overall, these issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address these issues, we present MIBBI (Minimum Information for Biological and Biomedical Investigations); a web-based communal resource for such checklists, designed to act as a 'one-stop shop' for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
Otávio Bueno and Steven French. Applying Mathematics: Immersion, Inference, Interpretation. Oxford University Press, 2018. ISBN: 978-0-19-881504-4, 978-0-19-185286-2. doi:10.1093/oso/9780198815044.001.0001. Pp. xvii + 257.
This study examines the relation of language use to a person's ability to perform categorization tasks and to assess their own abilities in those categorization tasks. A silent rhyming task was used to confirm that a group of people with post-stroke aphasia (PWA) had corresponding covert language production (or "inner speech") impairments. The performance of the PWA was then compared to that of age- and education-matched healthy controls on three kinds of categorization tasks and on metacognitive self-assessments of their performance on those tasks. The PWA showed no deficits in their ability to categorize objects for any of the three trial types (visual, thematic, and categorial). However, on the categorial trials, their metacognitive assessments of whether they had categorized correctly were less reliable than those of the control group. The categorial trials were distinguished from the others by the fact that the categorization could not be based on some immediately perceptible feature or on the objects' being found together in a type of scenario or setting. This result offers preliminary evidence for a link between covert language use and a specific form of metacognition.
In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution, and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Fair Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage; indirect health consequences, such as health care system strain and stress; and economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity. To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders up until the rate of transmission is below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines for other countries. We also argue against two other recent proposals. Distributing a vaccine proportional to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
The National Center for Biomedical Ontology is now in its seventh year. The goals of this National Center for Biomedical Computing are to: create and maintain a repository of biomedical ontologies and terminologies; build tools and web services to enable the use of ontologies and terminologies in clinical and translational research; educate their trainees and the scientific community broadly about biomedical ontology and ontology-based technology and best practices; and collaborate with a variety of groups who develop and use ontologies and terminologies in biomedicine. The centerpiece of the National Center for Biomedical Ontology is a web-based resource known as BioPortal. BioPortal makes available for research in computationally useful forms more than 270 of the world's biomedical ontologies and terminologies, and supports a wide range of web services that enable investigators to use the ontologies to annotate and retrieve data, to generate value sets and special-purpose lexicons, and to perform advanced analytics on a wide range of biomedical data.
One of the reasons why there is no Hegelian school in contemporary ethics in the way that there are Kantian, Humean and Aristotelian schools is that Hegelians have been unable to clearly articulate the Hegelian alternative to those schools' moral psychologies, i.e., to present a Hegelian model of the motivation to, perception of, and responsibility for moral action. Here it is argued that in its most basic terms Hegel's model can be understood as follows: the agent acts in a responsible and thus paradigmatic sense when she identifies as reasons those motivations which are grounded in her talents and support actions that are likely to develop those talents in ways suggested by her interests.
Michel Serres's relation to ecocriticism is complex. On the one hand, he is a pioneer in the area, anticipating the current fashion for ecological thought by over a decade. On the other hand, 'ecology' and 'eco-criticism' are singularly infelicitous terms to describe Serres's thinking if they are taken to indicate that attention should be paid to particular 'environmental' concerns. For Serres, such local, circumscribed ideas as 'ecology' or 'eco-philosophy' are one of the causes of our ecological crisis, and no progress can be made while such narrow concerns govern our thinking. This chapter intervenes in the ongoing discussion about the relation of Serres to ecology by drawing on some of Serres's more recent texts on pollution and dwelling, and this fresh material leads us to modulate existing treatments of Serres and ecology. I insist on the inextricability of two senses of ecology in Serres's approach: a broader meaning that refers to the interconnectedness and inextricability of all entities (natural and cultural, material and ideal), and a narrower sense that evokes classically 'environmental' concerns. Serres's recent work leads us to challenge some of the vectors and assumptions of the debate by radicalising the continuity between 'natural' and 'cultural' phenomena, questioning some of the commonplaces that structure almost all ecological thinking, and arguing that the entire paradigm of ecology as 'conservation' and 'protection' is bankrupt and self-undermining. After outlining the shape of Serres's 'general ecology' and its opposition to ecology as conservation, this chapter asks what sorts of practices and values a Serresian general ecology can engender when it considers birdsong, advertising, industrial pollution and money to be manifestations of the same drive for appropriation through pollution. A response is given in terms of three key Serresian motifs: the world as fetish, parasitic symbiosis, and global cosmocracy.
Nineteen fifty-eight was an extraordinary year for cultural innovation, especially in English literature. It was also a year in which several boldly revisionary positions were first articulated in analytic philosophy. And it was a crucial year for the establishment of structural linguistics, of structuralist anthropology, and of cognitive psychology. Taken together these developments had a radical effect on our conceptions of individual creativity and of the inheritance of tradition. The present essay attempts to illuminate the relationships among these developments, and to explain the foundational role played by mathematics, logic and information theory in all of them.
Much French philosophy of the late twentieth and early twenty-first centuries has been marked by the positive valorization of alterity, an ethical position that has recently received a vigorous assault from Alain Badiou's privileging of sameness. This article argues that Badiou has a great deal in common with the philosophies of alterity from which he seeks to distance himself, and that Michel Serres's little-known account of alterity offers a much more radical alternative to the ethics of difference. Drawing on both translated and as yet untranslated works, I argue that the Serresian ontology of inclination, along with his conceptual personae of the hermaphrodite and the parasite, informs ethical and political positions that offer fresh insights about the relation between the singular and the universal, the contingency of market exchange, and the nature of violence.
Is human goodness a matter of fulfilling one's obligations and obeying rules, or one of developing habits of virtue? This article contrasts Peter French's and Alasdair MacIntyre's Aristotelian approach to ethics as a matter of virtue with William Frankena's and Iris Murdoch's Kantian view of ethics as a matter of duty. If ethicists seek to establish an acceptable, distinguishing moral characteristic as the standard of goodness, such a task may only be accomplished at a metaethical level of investigation. Approaching ethics as an either/or proposition of virtue vs. duty is wrongheaded; instead, we should approach ethics as a both/and proposition, consisting of both duty and virtue.
An interview exploring the complexity of contemporary French philosophical atheism, in the light of Difficult Atheism: Post-Theological Thinking in Badiou, Nancy and Meillassoux (Edinburgh UP, 2011).
The thought of G. W. F. Hegel (1770-1831) has had a deep and lasting influence on a wide range of philosophical, political, religious, aesthetic, cultural and scientific movements. But, despite the far-reaching importance of Hegel's thought, there is often a great deal of confusion about what he actually said or believed. G. W. F. Hegel: Key Concepts provides an accessible introduction to both Hegel's thought and Hegel-inspired philosophy in general, demonstrating how his concepts were understood, adopted and critically transformed by later thinkers. The first section of the book covers the principal philosophical themes in Hegel's system: epistemology, metaphysics, philosophy of mind, ethical theory, political philosophy, philosophy of nature, philosophy of art, philosophy of religion, philosophy of history and theory of the history of philosophy. The second section covers the main post-Hegelian movements in philosophy: Marxism, existentialism, pragmatism, analytic philosophy, hermeneutics and French poststructuralism. The breadth and depth of G. W. F. Hegel: Key Concepts makes it an invaluable introduction for philosophical beginners and a useful reference source for more advanced scholars and researchers.
The nuclear engineer emerged as a new form of recognised technical professional between 1940 and the early 1960s as nuclear fission, the chain reaction and their applications were explored. The institutionalization of nuclear engineering, channelled into new national laboratories and corporate design offices during the decade after the war and hurried into academic venues thereafter, proved unusually dependent on government definition and support. This paper contrasts the distinct histories of the new discipline in the USA and UK (and, more briefly, Canada). In the segregated and influential environments of institutional laboratories and factories, historical actors such as physicist Walter Zinn in the USA and industrial chemist Christopher Hinton in the UK proved influential in shaping the roles and perceptions of nuclear specialists. More broadly, I argue that the State-managed implantation of the new subject within further and higher education curricula was shaped strongly by distinct political and economic contexts in which secrecy, postwar prestige and differing industrial cultures were decisive factors.
Since the 1980s, there have been many attempts to bring together Critical Theory of Frankfurtian strain and French theories generally referred to as poststructuralist. The present text seeks to readdress the problem of their tricky articulation by taking a look at some vicissitudes those two currents of thought underwent in Brazil. In addition to the risk – embedded in the Parisian passion for dissolution – of positivizing atrocious aspects of Brazilian society related to the country's multi-secular informality and backwardness, what is at stake is the adequate understanding of the meaning and relevance of dialectical criticism in a peripheral country, in other words, the capacity to weigh the influx and gravitation of foreign ideas and forms in a society of inorganic culture, the possibility, finally, of critically verifying the supposed universality of hegemonic theories and categories in light of a cultural experience amassed under a heterodox modernization.
In contemporary anthropological discourse, the general subjects of temporality and aesthetics, as a form of cognition, turn out to be the neuralgic points through which the question of human existence is considered in many approaches to the human being (cognitive, ethical, ontological or theological). In the philosophical system of the French post-modern thinker Jean-François Lyotard (1924-1998) these two issues are deeply interlaced, leading to new and original solutions to the above-mentioned question. The aim of this contribution is to reconstruct the interconnection between temporality and aesthetics in the thought of this philosopher, showing Lyotard's aesthetic structures of temporality as a central contribution to post-modern subjectivity theory.
Thomas Pogge has argued, famously, that 'we' are violating the rights of the global poor insofar as we uphold an unjust international order which provides a legal and economic framework within which individuals and groups can and do deprive such individuals of their lives, liberty and property. I argue here that Pogge's claim that we are violating a negative duty can only be made good on the basis of a substantive theory of collective action; and that it can only provide substantive ethical guidance when combined with an account of how collective action gives rise to forward-looking responsibility and/or accountability on an individual level. I consider accounts of these two topics given in work by Peter French and Christopher Kutz; and I argue that neither of them gives Pogge what he needs. Although there is a sense in which 'we' can be said to be violating the rights of the worst off, the sense in which this is true does not generate any plausible action-guiding claims for individuals.
Creativity pervades human life. It is the mark of individuality, the vehicle of self-expression, and the engine of progress in every human endeavor. It also raises a wealth of neglected and yet evocative philosophical questions: What is the role of consciousness in the creative process? How does the audience for a work of art influence its creation? How can creativity emerge through childhood pretending? Do great works of literature give us insight into human nature? Can a computer program really be creative? How do we define creativity in the first place? Is it a virtue? What is the difference between creativity in science and art? Can creativity be taught?

The new essays that comprise The Philosophy of Creativity take up these and other key questions and, in doing so, illustrate the value of interdisciplinary exchange. Written by leading philosophers and psychologists involved in studying creativity, the essays integrate philosophical insights with empirical research.

CONTENTS
I. Introduction: Introducing The Philosophy of Creativity (Elliot Samuel Paul and Scott Barry Kaufman)
II. The Concept of Creativity: 1. An Experiential Account of Creativity (Bence Nanay)
III. Aesthetics & Philosophy of Art: 2. Creativity and Insight (Gregory Currie); 3. The Creative Audience: Some Ways in which Readers, Viewers and/or Listeners Use their Imaginations to Engage Fictional Artworks (Noël Carroll); 4. The Products of Musical Creativity (Christopher Peacocke)
IV. Ethics & Value Theory: 5. Performing Oneself (Owen Flanagan); 6. Creativity as a Virtue of Character (Matthew Kieran)
V. Philosophy of Mind & Cognitive Science: 7. Creativity and Not So Dumb Luck (Simon Blackburn); 8. The Role of Imagination in Creativity (Dustin Stokes); 9. Creativity, Consciousness, and Free Will: Evidence from Psychology Experiments (Roy F. Baumeister, Brandon J. Schmeichel, and C. Nathan DeWall); 10. The Origins of Creativity (Elizabeth Picciuto and Peter Carruthers); 11. Creativity and Artificial Intelligence: A Contradiction in Terms? (Margaret Boden)
VI. Philosophy of Science: 12. Hierarchies of Creative Domains: Disciplinary Constraints on Blind-Variation and Selective-Retention (Dean Keith Simonton)
VII. Philosophy of Education (& Education of Philosophy): 13. Educating for Creativity (Berys Gaut); 14. Philosophical Heuristics (Alan Hájek)
During the period 1870-1914 the existing discipline of psychology was transformed. British thinkers including Spencer, Lewes, and Romanes allied psychology with biology and viewed mind as a function of the organism for adapting to the environment. British and German thinkers called attention to social and cultural factors in the development of individual human minds. In Germany and the United States a tradition of psychology as a laboratory science soon developed, which was called a 'new psychology' by contrast with the old, metaphysical psychology. Methodological discussion intensified. New syntheses were framed. Chairs were established and Departments founded. Although the trend toward institutional autonomy was less rapid in Britain and France, significant work was done by the likes of Galton and Binet. Even in Germany and America the purposeful transformation of the old psychology into a new, experimental science was by no means complete in 1914. And while the increase in experimentation changed the body of psychological writing, there was considerable continuity in theoretical content and non-experimental methodology between the old and new psychologies. This chapter follows the emergence of the new psychology out of the old in the national traditions of Britain, Germany, and the United States, with some reference to French, Belgian, Austrian, and Italian thinkers. While the division into national traditions is useful, the psychological literature of the second half of the nineteenth century was generally a European literature, with numerous references across national and linguistic boundaries, and it became a North Atlantic literature as psychology developed in the United States and Canada. The order of treatment, Britain, Germany, and the US, follows the center of gravity of psychological activity. The final section considers some methodological and philosophical issues from these literatures.
The theme of the conflict between the different interpretations of Spinoza's philosophy in French scholarship, introduced by Christopher Norris in this volume and expanded on by Alain Badiou, is also central to the argument presented in this chapter. Indeed, this chapter will be preoccupied with distinguishing the interpretations of Spinoza by two of the figures introduced by Badiou. The interpretation of Spinoza offered by Gilles Deleuze in Expressionism in Philosophy provides an account of the dynamic changes or transformations of the characteristic relations of a Spinozist finite existing mode, or human being. This account has been criticized more or less explicitly by a number of commentators, including Charles Ramond. Rather than providing a defence of Deleuze on this specific point, which I have done elsewhere, what I propose to do in this chapter is provide an account of the role played by "joyful passive affections" in these dynamic changes or transformations by distinguishing Deleuze's account of this role from that offered by one of his more explicit critics on this issue, Pierre Macherey. An appreciation of the role played by "joyful passive affections" in this context is crucial to understanding how Deleuze's interpretation of Spinoza is implicated in his broader philosophical project of constructing a philosophy of difference. The outcome is a position that, like Badiou in the previous chapter, rules out "intellect in potentiality" but maintains a role for the joyful passive affects in the development of adequate ideas.
Much of the philosophical literature on causation has focused on the concept of actual causation, sometimes called token causation. In particular, it is this notion of actual causation that many philosophical theories of causation have attempted to capture. In this paper, we address the question: what purpose does this concept serve? As we shall see in the next section, one does not need this concept for purposes of prediction or rational deliberation. What then could the purpose be? We will argue that one can gain an important clue here by looking at the ways in which causal judgments are shaped by people's understanding of norms.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user's access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user's behaviour towards outcomes that maximise the ISA's utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals, as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
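A minimal formal gloss on the framing above (the notation is mine, not the authors'): the ISA can be modelled as choosing a policy \(\pi\) that maximises its own expected utility over the user actions it elicits, which need not coincide with the policy that would maximise the user's utility,

\[
\pi^{*}_{\mathrm{ISA}} = \arg\max_{\pi}\; \mathbb{E}_{a \sim P(\cdot \mid \pi)}\big[U_{\mathrm{ISA}}(a)\big]
\qquad\text{vs.}\qquad
\pi^{*}_{\mathrm{user}} = \arg\max_{\pi}\; \mathbb{E}_{a \sim P(\cdot \mid \pi)}\big[U_{\mathrm{user}}(a)\big],
\]

where \(a\) ranges over user actions and \(P(\cdot \mid \pi)\) is the distribution of user behaviour induced by the ISA's policy. Misalignment is simply the case in which the two maximisers come apart.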
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
I argue that the best interpretation of the general theory of relativity has need of a causal entity, and causal structure that is not reducible to light cone structure. I suggest that this causal interpretation of GTR helps defeat a key premise in one of the most popular arguments for causal reductionism, viz., the argument from physics.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga's and Lewis' treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they're not as bad as those of Elga's account, and no worse than those of Lewis' account.
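For orientation, the two standard verdicts at issue (well known in the literature, though not quoted in the abstract) run as follows: Elga argues that upon awakening Beauty should treat the three centered possibilities Heads-Monday, Tails-Monday and Tails-Tuesday as equally likely, while Lewis holds that, having learned nothing she did not already expect, she should retain her prior credence in Heads:

\[
\text{Elga: } Cr(\text{Heads}) = \tfrac{1}{3}, \qquad \text{Lewis: } Cr(\text{Heads}) = \tfrac{1}{2}.
\]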
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting (...) account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views. (shrink)
Deference principles are principles that describe when, and to what extent, it's rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there's a unique doxastic state that it's permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I'll show that these arguments only work given a particular kind of deference principle, and I'll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I'll start by offering a principled rationale for taking deference to constrain rational belief. Then I'll flesh out the kind of deference principle suggested by this rationale. Finally, I'll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
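As an illustration of the kind of principle at issue (a schematic textbook form, not necessarily the formulation the paper discusses), a deference principle for an agent deferring to a source \(X\) can be written as

\[
Cr\big(A \mid Cr_{X}(A) = x\big) = x,
\]

read as: conditional on learning that \(X\)'s credence in \(A\) is \(x\), one's own credence in \(A\) should be \(x\).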
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
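For context, here is the textbook target of such theorems (standard notation, not the paper's): given preferences over acts satisfying the relevant axioms, a representation theorem delivers a probability function \(P\) and a utility function \(u\) such that act \(a\) is weakly preferred to act \(b\) just in case its expected utility is at least as great,

\[
a \succeq b \iff \sum_{s} P(s)\,u(a,s) \;\ge\; \sum_{s} P(s)\,u(b,s).
\]

The foundational reading the paper disputes takes this biconditional both to characterize \(P\) and \(u\) and to justify probabilism and expected utility maximization.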
Conditionalization is a widely endorsed rule for updating one's beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I'll call "ur-priors". But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I'll call "loaded evidential standards", are especially promising.
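In symbols (my notation; the paper may state these differently): standard Conditionalization says that upon learning evidence \(E\) one should set

\[
Cr_{\mathrm{new}}(\cdot) = Cr_{\mathrm{old}}(\cdot \mid E),
\]

whereas an ur-prior rule says that one's credences at any time \(t\) should equal a fixed ur-prior function \(up\) conditional on one's total evidence \(E_{t}\) at that time,

\[
Cr_{t}(\cdot) = up(\cdot \mid E_{t}).
\]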
This paper examines two mistakes regarding David Lewis' Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
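For orientation, the standard schematic statement of the principle under discussion (Lewis's formulation as usually quoted; the paper's own notation may differ) is

\[
Cr\big(A \mid ch_{t}(A) = x \;\wedge\; E\big) = x,
\]

where \(ch_{t}\) is the objective chance function at time \(t\) and \(E\) is any proposition admissible at \(t\); the mistakes diagnosed in the paper concern how the admissibility clause and the time index are to be handled.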
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting. I show, moreover, that, when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson's critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
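Since the abstract invokes Pearl (2000) and the intervention/observation contrast without spelling out the machinery, the following minimal sketch (in Python, with illustrative variables and probabilities chosen by me, not taken from the authors' model) shows the standard counterfactual recipe such models build on: recover the exogenous background from the actual world, intervene on the antecedent, and recompute the downstream variables.

import random

def sample_world(rng):
    """Sample exogenous noise, then compute the endogenous variables."""
    u_rain = rng.random() < 0.3              # exogenous cause: rain
    u_sprinkler = rng.random() < 0.5         # exogenous cause: sprinkler timer
    rain = u_rain
    sprinkler = u_sprinkler and not rain     # rain suppresses the sprinkler
    wet = rain or sprinkler                  # the grass is wet if either occurs
    return {"rain": rain, "wet": wet, "u_sprinkler": u_sprinkler}

def wet_had_it_not_rained(world):
    """Counterfactual by intervention: keep the exogenous noise fixed,
    set rain := False, and recompute everything downstream."""
    rain = False                             # the intervention do(rain = False)
    sprinkler = world["u_sprinkler"] and not rain
    return rain or sprinkler

rng = random.Random(0)
worlds = [sample_world(rng) for _ in range(10_000)]
wet_worlds = [w for w in worlds if w["wet"]]
share = sum(wet_had_it_not_rained(w) for w in wet_worlds) / len(wet_worlds)
print(f"P(grass would still be wet had it not rained | grass is wet) = {share:.2f}")

A counterfactual observation, by contrast, would condition on rain being false rather than setting it by intervention, and so would also revise beliefs about the exogenous causes upstream of rain; the backtracking inferences mentioned in the abstract require this second mode of reasoning.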
This chapter surveys hybrid theories of well-being. It also discusses some criticisms, and suggests some new directions that philosophical discussion of hybrid theories might take.
According to commonsense psychology, one is conscious of everything that one pays attention to, but one does not pay attention to all the things that one is conscious of. Recent lines of research purport to show that commonsense is mistaken on both of these points: Mack and Rock (1998) tell us that attention is necessary for consciousness, while Kentridge and Heywood (2001) claim that consciousness is not necessary for attention. If these lines of research were successful they would have important implications regarding the prospects of using attention research to inform us about consciousness. The present essay shows that these lines of research are not successful, and that the commonsense picture of the relationship between attention and consciousness can be retained.
Giubilini and Minerva argue that the permissibility of abortion entails the permissibility of infanticide. Proponents of what we refer to as the Birth Strategy claim that there is a morally significant difference brought about at birth that accounts for our strong intuition that killing newborns is morally impermissible. We argue that this strategy does not account for the moral intuition that late-term, non-therapeutic abortions are morally impermissible. Advocates of the Birth Strategy must either judge non-therapeutic abortions as impermissible in the later stages of pregnancy or conclude that they are permissible on the basis of premises that are far less intuitively plausible than the opposite conclusion and its supporting premises.
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot, have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory's fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
Should economics study the psychological basis of agents’ choice behaviour? I show how this question is multifaceted and profoundly ambiguous. There is no sharp distinction between ‘mentalist’ answ...
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly "carve at the joints" of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
Quantum theory explains a hugely diverse array of phenomena in the history of science. But how can the world be the way quantum theory says it is? Fifteen expert scholars consider what the world is like according to quantum physics in this volume and offer illuminating new perspectives on fundamental debates that span physics and philosophy.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject's evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
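For reference, a typical statement of the rules at issue (notation mine, not the paper's): Conditionalization says that upon learning \(E\) with certainty one should set \(Cr_{\mathrm{new}}(\cdot) = Cr_{\mathrm{old}}(\cdot \mid E)\), while Jeffrey Conditionalization handles uncertain learning over a partition \(\{E_{i}\}\) whose new probabilities are given directly,

\[
Cr_{\mathrm{new}}(A) = \sum_{i} Cr_{\mathrm{old}}(A \mid E_{i})\, Cr_{\mathrm{new}}(E_{i}).
\]

Whether the second rule really generalizes the first is one of the ramifications the paper traces to the choice between its two formulations of Conditionalization.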
Selection against embryos that are predisposed to develop disabilities is one of the less controversial uses of embryo selection technologies (ESTs). Many bio-conservatives argue that while the use of ESTs to select for non-disease-related traits, such as height and eye-colour, should be banned, their use to avoid disease and disability should be permitted. Nevertheless, there remains significant opposition, particularly from the disability rights movement, to the use of ESTs to select against disability. In this article we examine whether and why the state could be justified in restricting the use of ESTs to select against disability. We first outline the challenge posed by proponents of 'liberal eugenics'. Liberal eugenicists challenge those who defend restrictions on the use of ESTs to show why the use of these technologies would create a harm of the type and magnitude required to justify coercive measures. We argue that this challenge could be met by adverting to the risk of harms to future persons that would result from a loss of certain forms of cognitive diversity. We suggest that this risk establishes a pro tanto case for restricting selection against some disabilities, including dyslexia and Asperger's syndrome.
This paper explores the level of obligation called for by Milton Friedman's classic essay "The Social Responsibility of Business is to Increase Profits." Several scholars have argued that Friedman asserts that businesses have no or minimal social duties beyond compliance with the law. This paper argues that this reading of Friedman does not give adequate weight to some claims that he makes and to their logical extensions. Throughout his article, Friedman emphasizes the values of freedom, respect for law, and duty. The principle that a business professional should not infringe upon the liberty of other members of society can be used by business ethicists to ground a vigorous line of ethical analysis. Any practice which has a negative externality that requires another party to take a significant loss without consent or compensation can be seen as unethical. With Friedman's framework, we can see how ethics arises from the nature of business practice itself. Business involves an ethics in which we consider, work with, and respect strangers who are outside of traditional in-groups.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
In this essay, I argue that a proper understanding of the historicity of love requires an appreciation of the irreplaceability of the beloved. I do this through a consideration of ideas that were first put forward by Robert Kraut in "Love De Re" (1986). I also evaluate Amelie Rorty's criticisms of Kraut's thesis in "The Historicity of Psychological Attitudes: Love is Not Love Which Alters Not When It Alteration Finds" (1986). I argue that Rorty fundamentally misunderstands Kraut's Kripkean analogy, and I go on to criticize her claim that concern over the proper object of love should be best understood as a concern over constancy. This leads me to an elaboration of the distinct senses in which love can be seen as historical. I end with a further defense of the irreplaceability of the beloved and a discussion of the relevance of recent debates over the importance of personal identity for an adequate account of the historical dimension of love.