Permissivism is the thesis that, for some body of evidence and a proposition p, there is more than one rational doxastic attitude that any agent with that evidence can take toward p. Proponents of uniqueness deny permissivism, maintaining that every body of evidence always determines a single rational doxastic attitude. In this paper, we explore the debate between permissivism and uniqueness about evidence, outlining some of the major arguments on each side. We then consider how permissivism can be understood as an underdetermination thesis, and show how this moves the debate forward in fruitful ways: in distinguishing between different types of permissivism, in dispelling classic objections to permissivism, and in shedding light on the relationship between permissivism and evidentialism.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many varied phenomena pass for 'underdetermination' that it is tempting to think that it is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: a claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find them either to be so specialized that they cannot underwrite conclusions about science in general or not to be underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
According to the thesis of semantic underdetermination, most sentences of a natural language lack a definite semantic interpretation. This thesis supports an argument against the use of natural language as an instrument of thought, based on the premise that cognition requires a semantically precise and compositional instrument. In this paper we examine several ways to construe this argument, as well as possible ways out for the cognitive view of natural language in the introspectivist version defended by Carruthers. Finally, we sketch a view of the role of language in thought as a specialized tool, showing how it avoids the consequences of semantic underdetermination.
What attitude should we take toward a scientific theory when it competes with other scientific theories? This question elicited different answers from instrumentalists, logical positivists, constructive empiricists, scientific realists, holists, theory-ladenists, antidivisionists, falsificationists, and anarchists in the philosophy of science literature. I will summarize the diverse philosophical responses to the problem of underdetermination, and argue that there are different kinds of underdetermination, and that they should be kept apart from each other because they call for different responses.
Peter Ludlow shows how word meanings are much more dynamic than we might have supposed, and explores how they are modulated even during everyday conversation. The resulting view is radical, and has far-reaching consequences for our political and legal discourse, and for enduring puzzles in the foundations of semantics, epistemology, and logic.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: "Where scientific standards cannot guide us, we should believe nothing". Another response, which we can call the fideist response, is to believe whatever we would like to believe: "If science cannot speak to the question, then we may believe anything without science ever contradicting us". C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
If two theory formulations are merely different expressions of the same theory, then any problem of choosing between them cannot be due to the underdetermination of theories by data. So one might suspect that we need to be able to tell distinct theories from mere alternate formulations before we can say anything substantive about underdetermination, that we need to solve the problem of identical rivals before addressing the problem of underdetermination. Here I consider two possible solutions: Quine proposes that we call two theories identical if they are equivalent under a reconstrual of predicates, but this would mishandle important cases. Another proposal is to defer to the particular judgements of actual scientists. Consideration of an historical episode (the alleged equivalence of wave and matrix mechanics) shows that this second proposal also fails. Nevertheless, I suggest, the original suspicion is wrong; there are ways to enquire into underdetermination without having solved the problem of identical rivals.
I focus on a key argument for global external world scepticism resting on the underdetermination thesis: the argument according to which we cannot know any proposition about our physical environment because sense evidence for it equally justifies some sceptical alternative (e.g. the Cartesian demon conjecture). I contend that the underdetermination argument can go through only if the controversial thesis that conceivability is per se a source of evidence for metaphysical possibility is true. I also suggest a reason to doubt that conceivability is per se a source of evidence for metaphysical possibility, and thus to doubt the underdetermination argument.
There are many parts of science in which a certain sort of underdetermination of theory by evidence is known to be common. It is argued that reflection on this fact should serve to shift the burden of proof from scientific anti-realists to scientific realists at a crucial point in the debate between them.
Thick terms and concepts in ethics somehow combine evaluation and non-evaluative description. The non-evaluative aspects of thick terms and concepts underdetermine their extensions. Many writers argue that this underdetermination point is best explained by supposing that thick terms and concepts are semantically evaluative in some way such that evaluation plays a role in determining their extensions. This paper argues that the extensions of thick terms and concepts are underdetermined by their meanings in toto, irrespective of whether their extensions are partly determined by evaluation; the underdetermination point can therefore be explained without supposing that thick terms and concepts are semantically evaluative. My argument applies general points about semantic gradability and context-sensitivity to the semantics of thick terms and concepts.
One of the objections against the thesis of underdetermination of theories by observations is that it is unintelligible. Any two empirically equivalent theories, so the argument goes, are in principle intertranslatable, hence cannot count as rivals in any non-trivial sense. Against that objection, this paper shows that empirically equivalent theories may contain theoretical sentences that are not intertranslatable. Examples are drawn from a related discussion about incommensurability that shows that theoretical non-intertranslatability is possible.
Anthony Brueckner argues for a strong connection between the closure and the underdetermination argument for scepticism. Moreover, he claims that both arguments rest on infallibilism: In order to motivate the premises of the arguments, the sceptic has to refer to an infallibility principle. If this were true, fallibilists would be right in not taking the problems posed by these sceptical arguments seriously. As many epistemologists are sympathetic to fallibilism, this would be a very interesting result. However, in this paper I will argue that Brueckner’s claims are wrong: The closure and the underdetermination argument are not as closely related as he assumes and neither rests on infallibilism. Thus even a fallibilist should take these arguments to raise serious problems that must be dealt with somehow.
It is argued that, contrary to prevailing opinion, Bas van Fraassen nowhere uses the argument from underdetermination in his argument for constructive empiricism. It is explained that van Fraassen’s use of the notion of empirical equivalence in The Scientific Image has been widely misunderstood. A reconstruction of the main arguments for constructive empiricism is offered, showing how the passages that have been taken to be part of an appeal to the argument from underdetermination should actually be interpreted.
Since the early 20th century, underdetermination has been one of the most contentious problems in the philosophy of science. In this article I relate the underdetermination problem to models in biology and defend two main lines of argument. First, the use of models in this discipline lends strong support to the underdetermination thesis. Second, models and theories in biology are not determined strictly by the logic of representation of the studied phenomena; they are also shaped by other constraints, such as research traditions, the backgrounds of the scientists, the aims of the research, and the available technology. Convincing evidence for the existence of underdetermination in biology, where models abound, comes both from the fact that for a natural phenomenon we can create a number of candidate models and from the fact that we do not have a universal rule that would adjudicate among them. This all makes a strong case for the general validity of the underdetermination thesis.
This paper pursues Ernan McMullin's claim ("Virtues of a Good Theory" and related papers on theory-choice) that talk of theory virtues exposes a fault-line in philosophy of science separating "very different visions" of scientific theorizing. It argues that connections between theory virtues and virtue epistemology are substantive rather than ornamental, since both address underdetermination problems in science, helping us to understand the objectivity of theory choice and, more specifically, what I term the ampliative adequacy of scientific theories. The paper argues therefore that virtue epistemologies can make substantial contributions to the epistemology and methodology of the sciences, helping to bridge the gulf between realists and anti-realists, and to reinforce moderation over claims about the implications of underdetermination problems for scientific inquiry. It finally makes and develops the suggestion that virtue epistemologies, at least of the kind developed here, offer support to the position that philosophers of science know as normative naturalism.
Thomas Bonk has dedicated a book to analyzing the thesis of underdetermination of scientific theories, with a chapter exclusively devoted to the analysis of the relation between this idea and the indeterminacy of meaning. Both theses caused a revolution in the philosophical world in the sixties, generating a cascade of articles and doctoral theses. The agitation seems to have cooled down, but the point is still debated and may be experiencing a resurgence.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely. Contents: Introduction; A brief preliminary; Around the universe in 80 days; Some attempts at resolving the problem (4.1 Indexicality; 4.2 Simplicity; 4.3 Empirical equivalence; 4.4 Is this just a philosophers' fantasy?); Move along...; ...nothing to see here (6.1 Rules of repetition; 6.2 Some possible replies); Conclusion.
This paper considers the relevance of the Duhem-Quine thesis in economics. In the introductory discussion which follows, the meaning of the thesis and a brief history of its development are detailed. The purpose of the paper is to discuss the effects of the thesis in four specific and diverse theories in economics, and to illustrate the dependence of testing the theories on a set of auxiliary hypotheses. A general taxonomy of auxiliary hypotheses is provided to demonstrate the confounding of auxiliary hypotheses with the testing of economic theory.
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein’s arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton’s view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that suggest for and against the response; we conclude by suggesting two of them.
The aim of this paper is to show that science, understood as pure research, ought not to be affected by non-epistemic values, and thus to defend the traditional ideal of value-free science. First, we will trace the distinction between science and technology, arguing that science should be identified with pure research and that any non-epistemic concern should be directed toward technology and technological research. Second, we will examine different kinds of values and the roles they can play in scientific research to argue that science understood as pure research is mostly value-free and in any case ought to be. Third, we will consider and dismiss some widespread arguments that aim to defend, especially at a normative level, the inevitable value-ladenness of science. Finally, we will briefly return to the connections among science, technology, and values.
This paper argues that there is no possible structural way of drawing a distinction between objects of different types, such as individuals and properties of different adicities and orders. We show first that purely combinatorial information (information about how objects combine to form states of affairs) is not sufficient for doing this. We show that for any set of such combinatorial data there is always more than one way of typing them – that is, there are always several ways of classifying the different constituents of states of affairs as individuals and properties. Therefore, contrary to received ontological opinion, no object is essentially of any specific type. In the second part we argue that taking into account logical information does not help either, since logic presupposes the very distinction we are trying to draw. Furthermore, this distinction is not even essential for logic, since logic can function perfectly well without it. We conclude that certain distinctions which have been traditionally regarded as ontologically basic (such as that between individuals and properties) cannot be as fundamental as is often supposed.
Debates about the underdetermination of theory by data often turn on specific examples. Cases invoked often enough become familiar, even well worn. Since Helen Longino’s discussion of the case, the connection between prenatal hormone levels and gender-linked childhood behaviour has become one of these stock examples. However, as I argue here, the case is not genuinely underdetermined. We can easily imagine a possible experiment to decide the question. The fact that we would not perform this experiment is a moral, rather than epistemic, point. Finally, I suggest that the 'underdetermination' of the case may be inessential for Longino to establish her central claim about it.
Scott Soames argues that interpreted in the light of Quine's holistic verificationism, Quine's thesis of underdetermination leads to a contradiction. It is contended here that if we pay proper attention to the evolution of Quine's thinking on the subject, particularly his criterion of theory individuation, Quine's thesis of underdetermination escapes Soames' charge of paradoxicality.
The relationship between Peircean abduction and the modern notion of Inference to the Best Explanation (IBE) is a matter of dispute. Some philosophers, such as Harman and Lipton, claim that abduction and IBE are virtually the same. Others, however, hold that they are quite different (e.g., Hintikka and Minnameier) and that there is no link between them (Campos). In this paper, I argue that neither of these views is correct. I show that abduction and IBE have important similarities as well as differences. Moreover, by bringing a historical perspective to the study of the relationship between abduction and IBE—a perspective that is lacking in the literature—I show that their differences can be well understood in terms of two developments in the history of philosophy of science: first, Reichenbach’s distinction between the context of discovery and the context of justification—and the consequent jettisoning of the context of discovery from philosophy of science—and second, the underdetermination of theory by data.
The anti-exceptionalist debate has brought into play the problem of what the relevant data for logical theories are and how such data affect the validities accepted by a logical theory. In the present paper, I start from Laudan’s reticulated model of science to analyze one aspect of this problem, namely the role of logical data within the process of revision of logical theories. I argue that the ubiquitous nature of logical data is responsible for the proliferation of several distinct methodologies for logical theories. The resulting picture is coherent with the Laudanean view that agreement and disagreement between scientific theories take place at different levels. From this perspective, one is able to articulate other kinds of divergence that consider not only the inferential aspects of a given logical theory, but also the epistemic aims and the methodological choices that drive its development.
The Duhem-Quine Thesis is the claim that it is impossible to test a scientific hypothesis in isolation because any empirical test requires assuming the truth of one or more auxiliary hypotheses. This is taken by many philosophers, and is assumed here, to support the further thesis that theory choice is underdetermined by empirical evidence. This inquiry is focused strictly on the axiological commitments engendered in solutions to underdetermination, specifically those of Pierre Duhem and W. V. Quine. Duhem resolves underdetermination by appealing to a cluster of virtues called 'good sense', and it has recently been argued by Stump (Stud Hist Philos Biol Biomed Sci 18(1):149-159, 2007) that good sense is a form of virtue epistemology. This paper considers whether Quine, whose philosophy is heavily influenced by the very thesis that led Duhem to the virtues, is also led to a virtue epistemology in the face of underdetermination. Various sources of Quinean epistemic normativity are considered, and it is argued that, in conjunction with other normative commitments, Quine's sectarian solution to underdetermination amounts to a skills-based virtue epistemology. The paper also sketches formal features of the novel form of virtue epistemology common to Duhem and Quine that challenges the adequacy of epistemic value truth-monism and blocks any imperialist naturalization of virtue epistemology, as the epistemic virtues are essential to the success of the sciences themselves.
As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points specifically in the case of recidivism algorithms, arguing that contemporary debates concerning fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, and thereby establish the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.
What makes a high-quality biomarker experiment? The success of personalized medicine hinges on the answer to this question. In this paper, I argue that judgment about the quality of biomarker experiments is mediated by the problem of theoretical underdetermination. That is, the network of biological and pathophysiological theories motivating a biomarker experiment is sufficiently complicated that it often frustrates valid interpretation of the experimental results. Drawing on a case study in biomarker diagnostic development from neuro-oncology, I argue that this problem of underdetermination can be overcome with greater coordination across the biomarker research trajectory. I then sketch an account of how coordination across a research trajectory can be evaluated. I ultimately conclude that what makes a high-quality biomarker experiment must be judged by the epistemic contribution it makes to this coordinated research effort.
What epistemic defect needs to show up in a skeptical scenario if it is to effectively target some belief? According to the false belief account, the targeted belief must be false in the skeptical scenario. According to the competing ignorance account, the targeted belief must fall short of being knowledge in the skeptical scenario. This paper argues for two claims. The first is that, contrary to what is often assumed, the ignorance account is superior to the false belief account. The second is that the ignorance account ultimately hobbles the skeptic. It does so for two reasons. First, when this account is joined with either a closure-based skeptical argument or a skeptical underdetermination argument, the best the skeptic can do is show that we don’t know that we know. And second, the ignorance account directly implies the maligned KK principle.
We present a new “reason-based” approach to the formal representation of moral theories, drawing on recent decision-theoretic work. We show that any moral theory within a very large class can be represented in terms of two parameters: a specification of which properties of the objects of moral choice matter in any given context, and a specification of how these properties matter. Reason-based representations provide a very general taxonomy of moral theories, as differences among theories can be attributed to differences in their two key parameters. We can thus formalize several distinctions, such as between consequentialist and non-consequentialist theories, between universalist and relativist theories, between agent-neutral and agent-relative theories, between monistic and pluralistic theories, between atomistic and holistic theories, and between theories with a teleological structure and those without. Reason-based representations also shed light on an important but under-appreciated phenomenon: the “underdetermination of moral theory by deontic content”.
This essay examines the underdetermination problem that plagues structuralist approaches to spacetime theories, with special emphasis placed on the epistemic brands of structuralism, whether of the scientific realist variety or not. Recent non-realist structuralist accounts, by Friedman and van Fraassen, have touted the fact that different structures can accommodate the same evidence as a virtue vis-à-vis their realist counterparts; but, as will be argued, these claims gain little traction against a properly constructed liberal version of epistemic structural realism. Overall, a broad construal of spacetime theories along epistemic structural realist lines will be defended which draws upon both Friedman’s earlier work and the convergence of approximate structure over theory change, but which also challenges various claims of the ontic structural realists.
Suppose that scientific realists believe that a successful theory is approximately true, and that constructive empiricists believe that it is empirically adequate. Whose belief is more likely to be false? The problem of underdetermination does not yield an answer to this question one way or the other, but the pessimistic induction does. The pessimistic induction, if correct, indicates that successful theories, both past and current, are empirically inadequate. It is arguable, however, that they are approximately true. Therefore, scientific realists overall take less epistemic risk than constructive empiricists.
This article endeavors to identify the strongest versions of the two primary arguments against epistemic scientific realism: the historical argument—generally dubbed “the pessimistic meta-induction”—and the argument from underdetermination. It is shown that, contrary to the literature, both can be understood as historically informed but logically valid modus tollens arguments. After specifying the question relevant to underdetermination and showing why empirical equivalence is unnecessary, two types of competitors to contemporary scientific theories are identified, both of which are informed by science itself. With the content and structure of the two nonrealist arguments clarified, novel relations between them are uncovered, revealing the severity of their collective threat against epistemic realism and its “no-miracles” argument. The final section proposes, however, that the realist’s axiological tenet “science seeks truth” is not blocked. An attempt is made to indicate the promise for a nonepistemic, purely axiological scientific realism, here dubbed “Socratic scientific realism”.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 50s, but they would have lacked "bon sens" in not acknowledging their defeat in the 80s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme: normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation.
My historical study shows that things did not evolve in this way: the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EU theory was resolved in a Duhemian way, but not without major inefficiencies. Embodying explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources on minor findings, and sped up the Duhemian process of groping toward a choice among competing theories.
It is commonly argued that values “fill the logical gap” of underdetermination of theory by evidence; that is, values affect our choice between two or more theories that fit the same evidence. The underdetermination model, however, does not exhaust the roles values play in evidential reasoning. I introduce WAVE – a novel account of the logical relations between values and evidence. WAVE states that values influence evidential reasoning by adjusting evidential weights. I argue that the weight-adjusting role of values is distinct from their underdetermination gap-filling role. Values adjust weights in three ways. First, values affect our trust in the testimony of others. Second, values influence the evidential thresholds required for justified epistemic judgments. Third, values influence the relative weight of a certain type of evidence within a body of multimodal discordant evidence. WAVE explains, from an epistemic rather than a psychological perspective, how smokers, for example, can find the same evidence about the dangers of smoking less persuasive than non-smokers. WAVE allows for a wider effect of values on our accepted scientific theories and beliefs than the effect for which the underdetermination model alone allows; therefore, science studies scholars must consider WAVE in their research and analysis of evidential case studies.
On the orthodox view in economics, interpersonal comparisons of utility are not empirically meaningful, and "hence" impossible. To reassess this view, this paper draws on the parallels between the problem of interpersonal comparisons of utility and the problem of translation of linguistic meaning, as explored by Quine. I discuss several cases of what the empirical evidence for interpersonal comparisons of utility might be and show that, even on the strongest of these, interpersonal comparisons are empirically underdetermined and, if we also deny any appropriate truth of the matter, indeterminate. However, the underdetermination can be broken non-arbitrarily (though not purely empirically) if (i) we assign normative significance to certain states of affairs or (ii) we posit a fixed connection between certain empirically observable proxies and utility. I conclude that, even if interpersonal comparisons are not empirically meaningful, they are not in principle impossible.
While most healthcare research and practice fully endorses evidence-based healthcare, a minority view borrows popular themes from philosophy of science, like underdetermination and value-ladenness, to question the legitimacy of the evidence-based movement’s philosophical underpinnings. Though the feminist origins go unacknowledged, those critics adopt a feminist reading of the “gap argument” to challenge the perceived objectivism of evidence-based practice. From there, the critics seem to despair over the “subjective elements” that values introduce to clinical reasoning, demonstrating that they do not subscribe to feminist science studies’ normative program, in which contextual values can enable good science and justified decisions. In this paper, I investigate why the critics of evidence-based medicine adopt feminist science’s characterization of the problem but resist the productive solutions offered by those same theorists. I suggest that the common feminist empiricist appeal to idealized epistemic communities is impractical for those working within the current biomedical context, and instead offer an alternate stream of feminist research into the empirical content of values (found in the work of Elizabeth Anderson and Sharyn Clough) as a more helpful recourse for facilitating the important task of legitimate and justified clinical decision-making. I use a case study on clinical decision-making to illustrate the fruitfulness of the latter feminist empiricist framework. See response by Sharyn Clough: http://wp.me/p1Bfg0-1aN See reply by Maya Goldenberg: http://wp.me/p1Bfg0-1oY.
Behavioural flexibility is often treated as the gold standard of evidence for more sophisticated or complex forms of animal cognition, such as planning, metacognition and mindreading. However, the evidential link between behavioural flexibility and complex cognition has not been explicitly or systematically defended. Such a defence is particularly pressing because observed flexible behaviours can frequently be explained by putatively simpler cognitive mechanisms. This leaves complex cognition hypotheses open to ‘deflationary’ challenges that are accorded greater evidential weight precisely because they offer putatively simpler explanations of equal explanatory power. This paper challenges the blanket preference for simpler explanations, and shows that once this preference is dispensed with, and the full spectrum of evidence—including evolutionary, ecological and phylogenetic data—is accorded its proper weight, an argument in support of the prevailing assumption that behavioural flexibility can serve as evidence for complex cognitive mechanisms may begin to take shape. An adaptive model of cognitive-behavioural evolution is proposed, according to which the existence of convergent trait–environment clusters in phylogenetically disparate lineages may serve as evidence for the same trait–environment clusters in other lineages. This, in turn, could permit inferences of cognitive complexity in cases of experimental underdetermination, thereby placing the common view that behavioural flexibility can serve as evidence for complex cognition on firmer grounds.
This paper develops under-recognized connections between moderate historicist methodology and character (or virtue) epistemology, and goes on to argue that their combination supports a “dialectical” conception of objectivity. Considerations stemming from underdetermination problems motivate our claim that historicism requires agent-focused rather than merely belief-focused epistemology; embracing this point helps historicists avoid the charge of relativism. Considerations stemming from the genealogy of epistemic virtue concepts motivate our claim that character epistemologies are strengthened by moderate historicism about the epistemic virtues and values at work in communities of inquiry; embracing this point helps character epistemologists avoid the charge of objectivism.
Alice encounters at least three distinct problems in her struggles to understand and navigate Wonderland. The first arises when she attempts to predict what will happen in Wonderland based on what she has experienced outside of Wonderland. In many cases, this proves difficult -- she fails to predict that babies might turn into pigs, that a grin could survive without a cat, or that playing cards could hold criminal trials. Alice's second problem involves her efforts to figure out the basic nature of Wonderland. So, for example, there is nothing Alice could observe that would allow her to prove whether Wonderland is simply a dream. The final problem is manifested by Alice's attempts to understand what the various residents of Wonderland mean when they speak to her. In Wonderland, "mock turtles" are real creatures and people go places with a "porpoise" (and not a purpose). All three of these problems concern Alice's attempts to infer information about unobserved events or objects from those she has observed. In philosophical terms, they all involve *induction*. In this essay, I will show how Alice's experiences can be used to clarify the relation between three more general problems related to induction. The first problem, which concerns our justification for beliefs about the future, is an instance of David Hume's classic *problem of induction*. Most of us believe that rabbits will not start talking tomorrow -- the problem of induction challenges us to justify this belief. Even if we manage to solve Hume's puzzle, however, we are left with what W.V.O. Quine calls the problems of *underdetermination* and *indeterminacy*. The former problem asks us to explain how we can determine *what the world is really like* based on *everything that could be observed about the world*. So, for example, it seems plausible that nothing that Alice could observe would allow her to determine whether eating mushrooms causes her to grow or the rest of the world to shrink.
The latter problem, which might remain even if we resolve the first two, casts doubt on our capacity to determine *what a certain person means* based on *which words that person uses*. This problem is epitomized in the Queen's interpretation of the Knave's letter. The obstacles that Alice faces in getting around Wonderland are thus, in an important sense, the same types of obstacles we face in our own attempts to understand the world. Her successes and failures should therefore be of real interest.
Two main theses of scientific realism —the epistemic thesis and the ontological thesis— are challenged by the underdetermination argument. Scientific realists deny the truth of the premises of this argument —particularly the thesis of the empirical equivalence among theories. But it can be shown that, even if that particular thesis were false, it is reasonable to be skeptical about the likelihood of the realist theses.
The notion of so-called "postulated ontology" appears in the context of a well-known thesis of the underdetermination of scientific theories by empirical data. It is argued in the paper that the conviction of the existence of some kind of relation between a given theory and ontological ideas can be derived from this thesis, regardless of its particular form. Therefore, certain solutions to classical philosophical questions can be obtained, in principle, by careful inspection of scientific achievements. However, if the thesis of underdetermination holds, such philosophical solutions are not imposed by science itself. In order to arrive at some kind of ontology based on science, it seems necessary to accept certain philosophical presuppositions in the first place. This, and the fact that scientific theories change in time, show that although such a kind of ontology is possible, and perhaps desirable, it can never be ultimate.
The paper maintains and reinforces the viewpoint that science and religion (theology) are methodologically and epistemologically independent. However, it also suggests that this independence can be overcome if a "third party" is taken into account, that is, philosophy. Such a possibility seems to follow from the thesis of incommensurability and the thesis of underdetermination formulated and analyzed in current philosophy of science.
Kevin Elliott and others separate two common arguments for the legitimacy of societal values in scientific reasoning as the gap and the error arguments. This article poses two questions: How are these two arguments related, and what can we learn from their interrelation? I contend that we can better understand the error argument as nested within the gap argument, because the error argument is a limited case of the gap argument with narrower features. Furthermore, this nestedness provides philosophers with conceptual tools for analyzing more robustly how values pervade science.
There are two families of influential and stubborn puzzles that many theories of aboutness (intentionality) face: underdetermination puzzles and puzzles concerning representations that appear to be about things that do not exist. I propose an approach that elegantly avoids both kinds of puzzle. The central idea is to explain aboutness (the relation supposed to stand between thoughts and terms and their objects) in terms of relations of co-aboutness (the relation of being about the same thing that stands between the thoughts and terms themselves).
The paper proposes a synthesis between human scientists and artificial representation learning models as a way of augmenting epistemic warrants of realist theories against various anti-realist attempts. Towards this end, the paper fleshes out unconceived alternatives not as a critique of scientific realism but rather as a reinforcement, as it rejects the retrospective interpretations of scientific progress, which brought about the problem of alternatives in the first place. By utilising adversarial machine learning, the synthesis explores possibility spaces of available evidence for unconceived alternatives, providing modal knowledge of what is possible therein. As a result, the epistemic warrant of synthesised realist theories should emerge bolstered as the underdetermination by available evidence gets reduced. While shifting the realist commitment away from theoretical artefacts towards modalities of the possibility spaces, the synthesis comes out as a kind of perspectival modelling.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of "the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories". This paper examines Stanford's New Induction and argues that it -- like the other forms of underdetermination that he criticizes -- merely recapitulates familiar philosophical conundra.
“Objectivity” is an important theoretical concept with diverse applications in our collective practices of inquiry. It is also a concept attended in recent decades by vigorous debate, debate that includes but is not restricted to scientists and philosophers. The special authority of science as a source of knowledge of the natural and social world has been a matter of much controversy. In part because the authority of science is supposed to result from the objectivity of its methods and results, objectivity has been described as an “essentially contested,” and even an “embattled,” concept. The concept of objectivity has important but contested applications outside of scientific practices as well. Philosophers, psychologists, and theologians debate whether there is an objective basis for ethical claims and demands. Legal scholars debate what it would mean for laws to be objectively derivable from basic assumptions about justice and equality. One aim of this book is to guide readers through the often volatile debates over the nature and value of objectivity. Another aim is to contribute to that debate by articulating the domain-variance of norms of objectivity, and their different functions for inquirers. A better understanding of the underdetermination problem, and of the many ways that "epistemic" and "social" values rub shoulders in the course of inquiry, aids the development of my pragmatic pluralist account of objectivity.
The paper seeks to show that Quine’s theses concerning the underdetermination of scientific theories by experience and the indeterminacy of reference cannot be reconciled if some of Quine’s central assumptions are accepted. The argument is this. Quine holds that the thesis about reference is not just a special case of the other thesis. In order to make sense of this comment we must distinguish between factual and epistemic indeterminacy. Something is factually indeterminate if it is not determined by the facts. Epistemic indeterminacy, on the other hand, is due to the lack of evidence. Quine’s claim about the relationship between the two theses is best understood as saying that reference is factually indeterminate, whereas the underdetermination of scientific theories is merely epistemic. But the latter cannot be sustained in light of Quine’s verificationism, holism and naturalism.