A recurring theme dominates recent philosophical debates about the nature of conscious perception: naïve realism’s opponents claim that the view is directly contradicted by empirical science. I argue that, despite their current popularity, empirical arguments against naïve realism are fundamentally flawed. The non-empirical premises needed to get from empirical scientific findings to substantive philosophical conclusions are ones the naïve realist is known to reject. Even granting the contentious premises, the empirical findings do not undermine the theory, given its overall philosophical commitments. Thus, contemporary empirical research fails to supply any new argumentative force against naïve realism. I conclude that, as philosophers of mind, we would be better served spending a bit less time trying to wield empirical science as a cudgel against our opponents, and a bit more time working through the implications of each other’s views – something we can accomplish perfectly well from the comfort of our armchairs.
According to the Fine-Tuning Argument (FTA), the existence of life in our universe confirms the Multiverse Hypothesis (HM). A standard objection to FTA is that it violates the Requirement of Total Evidence (RTE). I argue that RTE should be rejected in favor of the Predesignation Requirement, according to which, in assessing the outcome of a probabilistic process, we should only use evidence characterizable in a manner available before observing the outcome. This produces the right verdicts in some simple cases in which RTE leads us astray, and, when applied to FTA, it shows that our evidence does confirm HM.
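In standard Bayesian terms (a sketch in notation of our own, not the author's), the dispute concerns which evidence E we may conditionalize on, while confirmation itself is the usual relevance notion:

$$E \text{ confirms } H_M \quad \Longleftrightarrow \quad P(H_M \mid E) > P(H_M).$$

RTE requires E to be our total (logically strongest) evidence; the Predesignation Requirement instead restricts E to descriptions of the outcome that were available before it was observed.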
A response to Saul Fisher’s critical note on Peter Lamarque and Nigel Walter’s ‘The Application of Narrative to the Conservation of Historic Buildings’.
Can new technology enhance purpose-driven, democratic dialogue in groups, governments, and societies? Online Deliberation: Design, Research, and Practice is the first book that attempts to sample the full range of work on online deliberation, forging new connections between academic research, technology designers, and practitioners. Since some of the most exciting innovations have occurred outside of traditional institutions, and those involved have often worked in relative isolation from each other, work in this growing field has often failed to reflect the full set of perspectives on online deliberation. This volume is aimed at those working at the crossroads of information/communication technology and social science, and documents early findings in, and perspectives on, this new field by many of its pioneers.

CONTENTS:

Introduction: The Blossoming Field of Online Deliberation (Todd Davies, pp. 1-19)

Part I - Prospects for Online Civic Engagement
Chapter 1: Virtual Public Consultation: Prospects for Internet Deliberative Democracy (James S. Fishkin, pp. 23-35)
Chapter 2: Citizens Deliberating Online: Theory and Some Evidence (Vincent Price, pp. 37-58)
Chapter 3: Can Online Deliberation Improve Politics? Scientific Foundations for Success (Arthur Lupia, pp. 59-69)
Chapter 4: Deliberative Democracy, Online Discussion, and Project PICOLA (Public Informed Citizen Online Assembly) (Robert Cavalier with Miso Kim and Zachary Sam Zaiss, pp. 71-79)

Part II - Online Dialogue in the Wild
Chapter 5: Friends, Foes, and Fringe: Norms and Structure in Political Discussion Networks (John Kelly, Danyel Fisher, and Marc Smith, pp. 83-93)
Chapter 6: Searching the Net for Differences of Opinion (Warren Sack, John Kelly, and Michael Dale, pp. 95-104)
Chapter 7: Happy Accidents: Deliberation and Online Exposure to Opposing Views (Azi Lev-On and Bernard Manin, pp. 105-122)
Chapter 8: Rethinking Local Conversations on the Web (Sameer Ahuja, Manuel Pérez-Quiñones, and Andrea Kavanaugh, pp. 123-129)

Part III - Online Public Consultation
Chapter 9: Deliberation in E-Rulemaking? The Problem of Mass Participation (David Schlosberg, Steve Zavestoski, and Stuart Shulman, pp. 133-148)
Chapter 10: Turning GOLD into EPG: Lessons from Low-Tech Democratic Experimentalism for Electronic Rulemaking and Other Ventures in Cyberdemocracy (Peter M. Shane, pp. 149-162)
Chapter 11: Baudrillard and the Virtual Cow: Simulation Games and Citizen Participation (Hélène Michel and Dominique Kreziak, pp. 163-166)
Chapter 12: Using Web-Based Group Support Systems to Enhance Procedural Fairness in Administrative Decision Making in South Africa (Hossana Twinomurinzi and Jackie Phahlamohlaka, pp. 167-169)
Chapter 13: Citizen Participation Is Critical: An Example from Sweden (Tomas Ohlin, pp. 171-173)

Part IV - Online Deliberation in Organizations
Chapter 14: Online Deliberation in the Government of Canada: Organizing the Back Office (Elisabeth Richard, pp. 177-191)
Chapter 15: Political Action and Organization Building: An Internet-Based Engagement Model (Mark Cooper, pp. 193-202)
Chapter 16: Wiki Collaboration Within Political Parties: Benefits and Challenges (Kate Raynes-Goldie and David Fono, pp. 203-205)
Chapter 17: Debian’s Democracy (Gunnar Ristroph, pp. 207-211)
Chapter 18: Software Support for Face-to-Face Parliamentary Procedure (Dana Dahlstrom and Bayle Shanks, pp. 213-220)

Part V - Online Facilitation
Chapter 19: Deliberation on the Net: Lessons from a Field Experiment (June Woong Rhee and Eun-mee Kim, pp. 223-232)
Chapter 20: The Role of the Moderator: Problems and Possibilities for Government-Run Online Discussion Forums (Scott Wright, pp. 233-242)
Chapter 21: Silencing the Clatter: Removing Anonymity from a Corporate Online Community (Gilly Leshed, pp. 243-251)
Chapter 22: Facilitation and Inclusive Deliberation (Matthias Trénel, pp. 253-257)
Chapter 23: Rethinking the ‘Informed’ Participant: Precautions and Recommendations for the Design of Online Deliberation (Kevin S. Ramsey and Matthew W. Wilson, pp. 259-267)
Chapter 24: PerlNomic: Rule Making and Enforcement in Digital Shared Spaces (Mark E. Phair and Adam Bliss, pp. 269-271)

Part VI - Design of Deliberation Tools
Chapter 25: An Online Environment for Democratic Deliberation: Motivations, Principles, and Design (Todd Davies, Brendan O’Connor, Alex Cochran, Jonathan J. Effrat, Andrew Parker, Benjamin Newman, and Aaron Tam, pp. 275-292)
Chapter 26: Online Civic Deliberation with E-Liberate (Douglas Schuler, pp. 293-302)
Chapter 27: Parliament: A Module for Parliamentary Procedure Software (Bayle Shanks and Dana Dahlstrom, pp. 303-307)
Chapter 28: Decision Structure: A New Approach to Three Problems in Deliberation (Raymond J. Pingree, pp. 309-316)
Chapter 29: Design Requirements of Argument Mapping Software for Teaching Deliberation (Matthew W. Easterday, Jordan S. Kanarek, and Maralee Harrell, pp. 317-323)
Chapter 30: Email-Embedded Voting with eVote/Clerk (Marilyn Davis, pp. 325-327)

Epilogue: Understanding Diversity in the Field of Online Deliberation (Seeta Peña Gangadharan, pp. 329-358)

For individual chapter downloads, go to odbook.stanford.edu.
Castration is analyzed as a recurring theme in French medieval literature and as an imaginary motif, in the Lacanian perspective, and it is examined literally in the following texts: the "Lais" of Marie de France, read from a naturalistic perspective (Guigemar, Bisclavret, Chaitivel); "Perceval ou li conte du Graal" by Chrétien de Troyes (the episode of the Fisher King, a specular reverse of Perceval); several pieces by Raimbaut d'Aurenga, a troubadour in whose work the theme of castration is widespread, both in his poetry and in his conception of love; and Pietro Abelardo (and Eloisa), in the "Historia calamitatum" and the Epistolario: the stylization of a real event.
You don't say much about who you are teaching, or what subject you teach, but you do seem to see a need to justify what you are doing. Perhaps you're teaching underprivileged children, opening their minds to possibilities that might otherwise never have occurred to them. Or maybe you're teaching the children of affluent families and opening their eyes to the big moral issues they will face in life — like global poverty, and climate change. If you're doing something like this, then stick with it. Giving money isn't the only way to make a difference.
We live in a world of crowds and corporations, artworks and artifacts, legislatures and languages, money and markets. These are all social objects — they are made, at least in part, by people and by communities. But what exactly are these things? How are they made, and what is the role of people in making them? In The Ant Trap, Brian Epstein rewrites our understanding of the nature of the social world and the foundations of the social sciences. Epstein explains and challenges the three prevailing traditions about how the social world is made. One tradition takes the social world to be built out of people, much as traffic is built out of cars. A second tradition also takes people to be the building blocks of the social world, but focuses on thoughts and attitudes we have toward one another. And a third tradition takes the social world to be a collective projection onto the physical world. Epstein shows that these share critical flaws. Most fundamentally, all three traditions overestimate the role of people in building the social world: they are overly anthropocentric. Epstein starts from scratch, bringing the resources of contemporary metaphysics to bear. In the place of traditional theories, he introduces a model based on a new distinction between the grounds and the anchors of social facts. Epstein illustrates the model with a study of the nature of law, and shows how to interpret the prevailing traditions about the social world. Then he turns to social groups, and to what it means for a group to take an action or have an intention. Contrary to the overwhelming consensus, these often depend on more than the actions and intentions of group members.
This article seeks the origin, in the theories of Ibn al-Haytham (Alhazen), Descartes, and Berkeley, of two-stage theories of spatial perception, which hold that visual perception involves both an immediate representation of the proximal stimulus in a two-dimensional “sensory core” and also a subsequent perception of the three-dimensional world. The works of Ibn al-Haytham, Descartes, and Berkeley already frame the major theoretical options that guided visual theory into the twentieth century. The field of visual perception was the first area of what we now call psychology to apply mathematics, through geometrical models as used by Euclid, Ptolemy, Ibn al-Haytham, and Descartes (among others). The article shows that Kepler’s discovery of the retinal image, which revolutionized visual anatomy and entailed fundamental changes in visual physiology, did not alter the basic structure of theories of spatial vision. These changes in visual physiology are advanced especially in Descartes' Dioptrics and his L'Homme. Berkeley develops a radically empiricist theory of vision, according to which visual perception of depth is learned through associative processes that rely on the sense of touch. But Descartes and Berkeley share the assertion that there is a two-dimensional sensory core that is in principle available to consciousness. They also share the observation that we don't usually perceive this core, but find depth and distance to be phenomenally immediate, a point they struggle to accommodate theoretically. If our interpretation is correct, it was not a change in the theory of the psychology of vision that engendered the idea of a sensory core, but rather the introduction of the theory into a new metaphysical context.
Peter Ludlow shows how word meanings are much more dynamic than we might have supposed, and explores how they are modulated even during everyday conversation. The resulting view is radical, and has far-reaching consequences for our political and legal discourse, and for enduring puzzles in the foundations of semantics, epistemology, and logic.
Book Symposium on The Territories of Science and Religion (University of Chicago Press, 2015). The author responds to review essays by John Heilbron, Stephen Gaukroger, and Yiftach Fehige.
This paper presents a systematic approach for analyzing and explaining the nature of social groups. I argue against prominent views that attempt to unify all social groups or to divide them into simple typologies. Instead I argue that social groups are enormously diverse, but show how we can investigate their natures nonetheless. I analyze social groups from a bottom-up perspective, constructing profiles of the metaphysical features of groups of specific kinds. We can characterize any given kind of social group with four complementary profiles: its “construction” profile, its “extra essentials” profile, its “anchor” profile, and its “accident” profile. Together these provide a framework for understanding the nature of groups, help classify and categorize groups, and shed light on group agency.
Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
In his insightful and challenging paper, Jonathan Schaffer argues against a distinction I make in The Ant Trap (Epstein 2015) and related articles. I argue that in addition to the widely discussed “grounding” relation, there is a different kind of metaphysical determination I name “anchoring.” Grounding and anchoring are distinct, and both need to be a part of full explanations of how facts are metaphysically determined. Schaffer argues instead that anchoring is a species of grounding. The crux of his argument comes in the last sections of his paper, in his discussion of “exportation,” the relations strategy, and the definitions strategy. I am inclined to agree that Schaffer’s interesting strategies offer the best choices for the philosopher who wants to insist that anchoring is a species of grounding. But both, I will argue, are fatally flawed. I do not take the separation of anchors from grounds lightly, but find the evidence in its favor overwhelming. And once the distinction is made, I find anchoring to be a powerful practical tool in metaphysics.
Although the relationship of part to whole is one of the most fundamental there is, this is the first full-length study of this key concept. Showing that mereology, or the formal theory of part and whole, is essential to ontology, Simons surveys and critiques previous theories--especially the standard extensional view--and proposes a new account that encompasses both temporal and modal considerations. Simons's revised theory not only allows him to offer fresh solutions to long-standing problems, but also has far-reaching consequences for our understanding of a host of classical philosophical concepts.
This paper looks at the critical reception of two central claims of Peter Auriol’s theory of cognition: the claim that the objects of cognition have an apparent or objective being that resists reduction to the real being of objects, and the claim that there may be natural intuitive cognitions of nonexistent objects. These claims earned Auriol the criticism of his fellow Franciscans, Walter Chatton and Adam Wodeham. According to them, the theory of apparent being was what had led Auriol to allow for intuitive cognitions of nonexistents, but the intuitive cognition of nonexistents, in its turn, led to scepticism. Modern commentators have offered similar readings of Auriol, but this paper argues, first, that the apparent being provides no special reason to think there could be intuitions of nonexistent objects, and second, that despite his idiosyncratic account of intuition, Auriol was no more vulnerable to scepticism than his critics.
Peter Ludlow presents the first book on the philosophy of generative linguistics, including both Chomsky's government and binding theory and his minimalist ...
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them less explainable. In this reply, it is argued that Erasmus et al. left out one influential account of explanation from their discussion: the Unificationist model. It is argued that, on the Unificationist model, the features that make something an explanation are sensitive to complexity. Therefore, on the Unificationist model, ANNs (and other Machine Learning models) are not explainable. It is emphasized that Erasmus et al.’s general strategy is correct. The literature on explainable Artificial Intelligence can benefit by drawing from philosophical accounts of explanation. However, philosophical accounts of explanation do not settle the problem of whether ANNs are explainable because they do not unanimously declare that explanation is invariant with regard to complexity.
The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s prediction are more accurate than either alone, but inaccurate model predictions often decrease participants’ accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants’ performance while mostly not affecting the model’s performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.
The thesis of methodological individualism in social science is commonly divided into two different claims—explanatory individualism and ontological individualism. Ontological individualism is the thesis that facts about individuals exhaustively determine social facts. Initially taken to be a claim about the identity of groups with sets of individuals or their properties, ontological individualism has more recently been understood as a global supervenience claim. While explanatory individualism has remained controversial, ontological individualism thus understood is almost universally accepted. In this paper I argue that ontological individualism is false. Only if the thesis is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social facts by facts about individual persons. Even when individualistic facts are expanded to include people’s local environments and practices, I shall argue, those still underdetermine the social facts that obtain. If true, this has implications for explanation as well as ontology. I first consider arguments against the local supervenience of social facts on facts about individuals, correcting some flaws in existing arguments and affirming that local supervenience fails for a broad set of social properties. I subsequently apply a similar approach to defeat a particularly weak form of global supervenience, and consider potential responses. Finally, I explore why it is that people have taken ontological individualism to be true.
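As a global supervenience claim, ontological individualism has a standard formulation (assumed here for illustration; the paper's own statement may differ):

$$\forall w \forall w' \big( w \sim_{\mathrm{ind}} w' \;\rightarrow\; w \sim_{\mathrm{soc}} w' \big),$$

where $w \sim_{\mathrm{ind}} w'$ says that worlds $w$ and $w'$ are indiscernible in their individualistic facts, and $w \sim_{\mathrm{soc}} w'$ that they are indiscernible in their social facts. The paper's thesis is that this conditional fails unless "individualistic" is inflated until it amounts to "physical."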
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious, and since we prefer to restrict the term “experience” to consciousness, we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but the delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
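The Bayesian comparison the authors appeal to can be sketched with the standard odds form of Bayes' theorem (notation ours, not the chapter's): for the delusional hypothesis $H_d$, a rival $H_r$, and abnormal data $D$,

$$\frac{P(H_d \mid D)}{P(H_r \mid D)} \;=\; \frac{P(D \mid H_d)}{P(D \mid H_r)} \cdot \frac{P(H_d)}{P(H_r)}.$$

The abnormal data can give $H_d$ a much higher likelihood than its rivals, making the abductive step intelligible; the second, belief-evaluation impairment can then be modelled as a failure to give the prior odds their proper weight.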
In recent years, theorists have debated how we introduce new social objects and kinds into the world. Searle, for instance, proposes that they are introduced by collective acceptance of a constitutive rule; Millikan and Elder that they are the products of reproduction processes; Thomasson that they result from creator intentions and subsequent intentional reproduction; and so on. In this chapter, I argue against the idea that there is a single generic method or set of requirements for doing so. Instead, there is a variety of what I call “anchoring schemas,” or methods by which new social kinds are generated. Not only are social kinds a diverse lot, but the metaphysical explanation for their being the kinds they are is diverse as well. I explain the idea of anchoring and present examples of social kinds that are similar to one another but that are anchored in different ways. I also respond to Millikan’s argument that there is only one kind of “glue” that is “sticky enough” for holding together kinds. I argue that no anchoring schema will work in all environments. It is a contingent matter which schemas are successful for anchoring new social kinds, and an anchoring schema need only be “sticky enough” for practical purposes in a given environment.
Shepard has supposed that the mind is stocked with innate knowledge of the world and that this knowledge figures prominently in the way we see the world. According to him, this internal knowledge is the legacy of a process of internalization; a process of natural selection over the evolutionary history of the species. Shepard has developed his proposal most fully in his analysis of the relation between kinematic geometry and the shape of the motion path in apparent motion displays. We argue that Shepard has made a case for applying the principles of kinematic geometry to the perception of motion, but that he has not made the case for injecting these principles into the mind of the percipient. We offer a more modest interpretation of his important findings: that kinematic geometry may be a model of apparent motion. Inasmuch as our recommended interpretation does not lodge geometry in the mind of the percipient, the motivation of positing internalization, a process that moves kinematic geometry into the mind, is obviated. In our conclusion, we suggest that cognitive psychologists, in their embrace of internal mental universals and internalization, may have been seduced by the siren call of metaphor. Key Words: apparent motion; imagery; internalization; inverse projection problem; kinematic geometry; measurement; metaphors of mind.
Accounts of the concepts of function and dysfunction have not adequately explained what factors determine the line between low‐normal function and dysfunction. I call the challenge of doing so the line‐drawing problem. Previous approaches emphasize facts involving the action of natural selection (Wakefield 1992a, 1999a, 1999b) or the statistical distribution of levels of functioning in the current population (Boorse 1977, 1997). I point out limitations of these two approaches and present a solution to the line‐drawing problem that builds on the second one.
It is often seen as a truism that social objects and facts are the product of human intentions. I argue that the role of intentions in social ontology is commonly overestimated. I introduce a distinction that is implicit in much discussion of social ontology, but is often overlooked: between a social entity’s “grounds” and its “anchors.” For both, I argue that intentions, either individual or collective, are less essential than many theorists have assumed. Instead, I propose a more worldly – and less intellectualist – approach to social ontology.
In the past five years, there have been a series of papers in the journal Evolution debating the relative significance of two theories of evolution, a neo-Fisherian and a neo-Wrightian theory, where the neo-Fisherians make explicit appeal to parsimony. My aim in this paper is to determine how we can make sense of such an appeal. One interpretation of parsimony takes it that a theory that contains fewer entities or processes (however we demarcate these) is more parsimonious. On the account that I defend here, parsimony is a ‘local’ virtue. Scientists’ appeals to parsimony are not necessarily appeals to a theory’s simplicity in the sense of its positing fewer mechanisms. Rather, parsimony may be proxy for greater probability or likelihood. I argue that the neo-Fisherians’ appeal is best understood on this interpretation. And indeed, if we interpret parsimony as either prior probability or likelihood, then we can make better sense of Coyne et al.’s argument that Wright’s three-phase process operates relatively infrequently.
I defend the following version of the ought-implies-can principle: (OIC) by virtue of conceptual necessity, an agent at a given time has an (objective, pro tanto) obligation to do only what the agent at that time has the ability and opportunity to do. In short, obligations correspond to ability plus opportunity. My argument has three premises: (1) obligations correspond to reasons for action; (2) reasons for action correspond to potential actions; (3) potential actions correspond to ability plus opportunity. In the bulk of the paper I address six objections to OIC: three objections based on putative counterexamples, and three objections based on arguments to the effect that OIC conflicts with the is/ought thesis, the possibility of hard determinism, and the denial of the Principle of Alternate Possibilities.
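Regimented schematically (our rendering, not the author's, reading each "corresponds to" as at least a conditional), with $O$, $R$, $P$, $A$, and $Op$ for obligation, reason for action, potential action, ability, and opportunity:

$$O\varphi \rightarrow R\varphi, \qquad R\varphi \rightarrow P\varphi, \qquad P\varphi \rightarrow (A\varphi \wedge Op\varphi) \;\;\vdash\;\; O\varphi \rightarrow (A\varphi \wedge Op\varphi),$$

so OIC follows from the three premises by chaining the conditionals.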
Trope theory is a leading metaphysical theory in analytic ontology. One of its classic statements is found in the work of Donald C. Williams who argued that tropes qua abstract particulars are the very alphabet of being. The concept of an abstract particular has been repeatedly attacked in the literature. Opponents and proponents of trope theory alike have levelled their criticisms at the abstractness of tropes and the associated act of abstraction. In this paper I defend the concept of a trope qua abstract particular by rejecting arguments that purport to show that tropes should not be understood as abstract and by arguing that the abstractness of tropes plays an indispensable role in one of our more promising trope-theoretic analyses of universals and of concrete objects.
This paper presents a passage from Peter Singer on the pond analogy and comments on its content and use in the classroom, especially with respect to the development of the learners' argumentative skills.
Individualists about social ontology hold that social facts are “built out of” facts about individuals. In this paper, I argue that there are two distinct kinds of individualism about social ontology, two different ways individual people might be the metaphysical “builders” of the social world. The familiar kind is ontological individualism. This is the thesis that social facts supervene on, or are exhaustively grounded by, facts about individual people. What I call anchor individualism is the alternative thesis that facts about individuals put in place the conditions for a social entity to exist, or the conditions for something to have a social property. Examples include conventionalist theories of the social world, such as David Hume’s theories of promises, money, and government, and collective acceptance theories, such as John Searle’s theory of institutional facts. Anchor individualism is often conflated with ontological individualism. But in fact, the two theses are in tension with one another: if one of these kinds of individualism is true, then the other is very unlikely to be. My aim in this paper is to clarify both, and argue that they should be sharply distinguished from one another.
In the mid-seventeenth century a movement of self-styled experimental philosophers emerged in Britain. Originating in the discipline of natural philosophy amongst Fellows of the fledgling Royal Society of London, it soon spread to medicine and by the eighteenth century had impacted moral and political philosophy and even aesthetics. Early modern experimental philosophers gave epistemic priority to observation and experiment over theorising and speculation. They decried the use of hypotheses and system-building without recourse to experiment and, in some quarters, developed a philosophy of experiment. The movement spread to the Netherlands and France in the early eighteenth century and later impacted Germany. Its important role in early modern philosophy was subsequently eclipsed by the widespread adoption of the Kantian historiography of modern philosophy, which emphasised the distinction between rationalism and empiricism and had no place for the historical phenomenon of early modern experimental philosophy. The re-emergence of interest in early modern experimental philosophy roughly coincided with the development of contemporary x-phi and there are some important similarities between the two.
The concept of instantiation is realized differently across a variety of metaphysical theories. A certain realization of the concept in a given theory depends on what roles are specified and associated with the concept and its corresponding term as well as what entities are suited to fill those roles. In this paper, the classic realization of the concept of instantiation in a one-category ontology of abstract particulars or tropes is articulated in a novel way and defended against unaddressed objections.
The Gestalt psychologists adopted a set of positions on mind-body issues that seem like an odd mix. They sought to combine a version of naturalism and physiological reductionism with an insistence on the reality of the phenomenal and the attribution of meanings to objects as natural characteristics. After reviewing basic positions in contemporary philosophy of mind, we examine the Gestalt position, characterizing it in terms of phenomenal realism and programmatic reductionism. We then distinguish Gestalt philosophy of mind from instrumentalism and computational functionalism, and examine Gestalt attributions of meaning and value to perceived objects. Finally, we consider a metatheoretical moral from Gestalt theory, which commends the search for commensurate description of mental phenomena and their physiological counterparts.
We often speak as if there are merely possible people—for example, when we make such claims as that most possible people are never going to be born. Yet most metaphysicians deny that anything is both possibly a person and never born. Since our unreflective talk of merely possible people serves to draw non-trivial distinctions, these metaphysicians owe us some paraphrase by which we can draw those distinctions without committing ourselves to there being merely possible people. We show that such paraphrases are unavailable if we limit ourselves to the expressive resources of even highly infinitary first-order modal languages. We then argue that such paraphrases are available in higher-order modal languages only given certain strong assumptions concerning the metaphysics of properties. We then consider alternative paraphrase strategies, and argue that none of them are tenable. If talk of merely possible people cannot be paraphrased, then it must be taken at face value, in which case it is necessary what individuals there are. Therefore, if it is contingent what individuals there are, then the demands of paraphrase place tight constraints on the metaphysics of properties: either (i) it is necessary what properties there are, or (ii) necessarily equivalent properties are identical, and having properties does not entail even possibly being anything at all.
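The claim that it is necessary what individuals there are is standardly regimented (in Williamson's necessitist formulation, assumed here rather than quoted from the paper) as

$$\Box \forall x \, \Box \exists y \, (y = x),$$

i.e., necessarily, everything is necessarily identical to something.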
Truthmaker monism is the view that the one and only truthmaker is the world. Despite its unpopularity, this view has recently received an admirable defence from Schaffer (2010b: 307–324). Its main defect, I argue, is that it omits partial truthmakers. If we omit partial truthmakers, we lose the intimate connection between a truth and its truthmaker. I further argue that the notion of a minimal truthmaker should be the key notion that plays the role of constraining ontology and that truthmaker monism is not necessary for an appropriate solution to the problem of finding truthmakers for negative truths. I conclude that we should reject truthmaker monism once and for all.
Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant and irrelevant aspects of their tests and populations to be sure that their replications will be equivalent to one another. Pearson also interpreted the alpha level as a personal rule that guides researchers’ behavior during hypothesis testing. However, this interpretation fails to acknowledge that the same researcher may use different alpha levels in different testing situations. Addressing this problem, Neyman proposed that the average alpha level adopted by a particular researcher can be viewed as an indicator of that researcher’s typical Type I error rate. Researchers’ average alpha levels may be informative from a metascientific perspective. However, they are not useful from a scientific perspective. Scientists are more concerned with the error rates of specific tests of specific hypotheses, rather than the error rates of their colleagues. It is concluded that neither Neyman nor Pearson adequately rebutted Fisher’s “repeated sampling” criticism. Fisher’s significance testing approach is briefly considered as an alternative to the Neyman-Pearson approach.
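The "repeated sampling" picture is easy to make concrete in simulation (a hypothetical illustration, not drawn from the article): when the null hypothesis is true and the replications really are equivalent, a test run at alpha = 0.05 rejects in roughly 5% of runs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05            # the Neyman-Pearson long-run Type I error rate
n_reps, n = 10_000, 30  # number of replications, sample size per test

# Repeated sampling from the same population: the null (mean = 0) is true.
rejections = 0
for _ in range(n_reps):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    rejections += int(p < alpha)

# The rejection rate approximates alpha only because every replication is
# equivalent by construction -- the guarantee Fisher argued real researchers
# never have.
print(f"Empirical Type I error rate: {rejections / n_reps:.3f}")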
We present experimental evidence that people's modes of social interaction influence their construal of truth. Participants who engaged in cooperative interactions were less inclined to agree that there was an objective truth about that topic than were those who engaged in a competitive interaction. Follow-up experiments ruled out alternative explanations and indicated that the changes in objectivity are explained by argumentative mindsets: When people are in cooperative arguments, they see the truth as more subjective. These findings can help inform research on moral objectivism and, more broadly, on the distinctive cognitive consequences of different types of social interaction.
We present a formal semantics for epistemic logic, capturing the notion of knowability relative to information (KRI). Like Dretske, we move from the platitude that what an agent can know depends on her (empirical) information. We treat operators of the form K_AB (‘B is knowable on the basis of information A’) as variably strict quantifiers over worlds with a topic- or aboutness-preservation constraint. Variable strictness models the non-monotonicity of knowledge acquisition while allowing knowledge to be intrinsically stable. Aboutness-preservation models the topic-sensitivity of information, allowing us to invalidate controversial forms of epistemic closure while validating less controversial ones. Thus, unlike the standard modal framework for epistemic logic, KRI accommodates plausible approaches to the Kripke-Harman dogmatism paradox, which bear on non-monotonicity, or on topic-sensitivity. KRI also strikes a better balance between agent idealization and a non-trivial logic of knowledge ascriptions.
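The truth clause suggested by this description has roughly the following shape (our reconstruction from the abstract, not a quotation from the paper):

$$w \models K_A B \quad \Longleftrightarrow \quad t(B) \sqsubseteq t(A) \;\text{ and }\; \forall w' \in f_A(w),\; w' \models B,$$

where $t$ assigns topics, $\sqsubseteq$ is topic parthood, and $f_A(w)$ selects the worlds compatible with information $A$ at $w$; letting $f_A$ vary with $A$ is what makes the quantification variably strict.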
This paper argues that early modern experimental philosophy emerged as the dominant member of a pair of methods in natural philosophy, the speculative versus the experimental, and that this pairing derives from an overarching distinction between speculative and operative philosophy that can be ultimately traced back to Aristotle. The paper examines the traditional classification of natural philosophy as a speculative discipline from the Stagirite to the seventeenth century; medieval and early modern attempts to articulate a scientia experimentalis; and the tensions in the classification of natural magic and mechanics that led to the introduction of an operative part of natural philosophy in the writings of Francis Bacon and John Johnston. The paper concludes with a summary of the salient discontinuities between the experimental/speculative distinction of the mid-seventeenth century and its predecessors and a statement of the developments that led to the ascendance of experimental philosophy from the 1660s.
This paper is a study of higher-order contingentism – the view, roughly, that it is contingent what properties and propositions there are. We explore the motivations for this view and various ways in which it might be developed, synthesizing and expanding on work by Kit Fine, Robert Stalnaker, and Timothy Williamson. Special attention is paid to the question of whether the view makes sense by its own lights, or whether articulating the view requires drawing distinctions among possibilities that, according to the view itself, do not exist to be drawn. The paper begins with a non-technical exposition of the main ideas and technical results, which can be read on its own. This exposition is followed by a formal investigation of higher-order contingentism, in which the tools of variable-domain intensional model theory are used to articulate various versions of the view, understood as theories formulated in a higher-order modal language. Our overall assessment is mixed: higher-order contingentism can be fleshed out into an elegant systematic theory, but perhaps only at the cost of abandoning some of its original motivations.
Our topic is the theory of topics. My goal is to clarify and evaluate three competing traditions: what I call the way-based approach, the atom-based approach, and the subject-predicate approach. I develop criteria for adequacy using robust linguistic intuitions that feature prominently in the literature. Then I evaluate the extent to which various existing theories satisfy these constraints. I conclude that recent theories due to Parry, Perry, Lewis, and Yablo do not meet the constraints in total. I then introduce the issue-based theory—a novel and natural entry in the atom-based tradition that meets our constraints. In a coda, I categorize a recent theory from Fine as atom-based, and contrast it to the issue-based theory, concluding that they are evenly matched, relative to our main criteria of adequacy. I offer tentative reasons to nevertheless favour the issue-based theory.
Jennifer Lackey ('Testimonial Knowledge and Transmission', The Philosophical Quarterly 1999) and Peter Graham ('Conveying Information', Synthese 2000; 'Transferring Knowledge', Nous 2000) offered counterexamples to show that a hearer can acquire knowledge that P from a speaker who asserts that P, but the speaker does not know that P. These examples suggest testimony can generate knowledge. The showpiece of Lackey's examples is the Schoolteacher case. This paper shows that Lackey's case does not undermine the orthodox view that testimony cannot generate knowledge. It explains why Lackey's arguments to the contrary are ineffective: they misunderstand the intuitive rationale for the view that testimony cannot generate knowledge. It then elaborates on a version of the case from Graham's paper 'Conveying Information' (the Fossil case) that effectively shows that testimony can generate knowledge. Finally, it provides a deeper, informative explanation of how testimony transfers knowledge, and of why there should be cases where testimony generates knowledge.
According to the structured theory of propositions, if two sentences express the same proposition, then they have the same syntactic structure, with corresponding syntactic constituents expressing the same entities. A number of philosophers have recently focused attention on a powerful argument against this theory, based on a result by Bertrand Russell, which shows that the theory of structured propositions is inconsistent in higher-order logic. This paper explores a response to this argument, which involves restricting the scope of the claim that propositions are structured, so that it does not hold for all propositions whatsoever, but only for those which are expressible using closed sentences of a given formal language. We call this restricted principle Closed Structure, and show that it is consistent in classical higher-order logic. As a schematic principle, the strength of Closed Structure is dependent on the chosen language. For its consistency to be philosophically significant, it also needs to be consistent in every extension of the language which the theorist of structured propositions is apt to accept. But, we go on to show, Closed Structure is in fact inconsistent in a very natural extension of the standard language of higher-order logic, which adds resources for plural talk of propositions. We conclude that this particular strategy of restricting the scope of the claim that propositions are structured is not a compelling response to the argument based on Russell’s result, though we note that for some applications, for instance to propositional attitudes, a restricted thesis in the vicinity may hold some promise.
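For orientation, here is a standard reconstruction of the Russell-Myhill style argument the paper starts from (notation ours). Suppose every propositional property $F$ determines a proposition $c(F)$, say the conjunction of all the $F$-propositions, and that structure makes $c$ injective. Define

$$G(p) \;:=\; \exists F \big( p = c(F) \wedge \neg F(p) \big), \qquad p_0 \;:=\; c(G).$$

If $G(p_0)$, then some $F$ with $c(F) = p_0$ has $\neg F(p_0)$; injectivity forces $F = G$, so $\neg G(p_0)$. If $\neg G(p_0)$, then $G$ itself witnesses the existential, so $G(p_0)$. Either way we have a contradiction, which is why Closed Structure must restrict the structured-propositions principle.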