There has been growing concern over establishing norms that ensure the ethically acceptable and scientifically sound conduct of clinical trials. Among the leading norms internationally are the World Medical Association's Declaration of Helsinki, guidelines by the Council for International Organizations of Medical Sciences, the International Conference on Harmonization's standards for industry, and the CONSORT group's reporting norms, in addition to the influential U.S. Federal Common Rule, the Food and Drug Administration's body of regulations, and information sheets by the Department of Health and Human Services. There are also many norms published at more local levels by official agencies and professional groups. Any account of international standards should cover both scientific and ethical norms at once – the two are conceptually intertwined. Recent sources recognize that “[s]cientifically unsound research on human subjects is unethical in that it exposes research subjects to risks without possible benefit.”
Totalism Without Repugnance. Jacob M. Nebel - 2021 - In Jeff McMahan, Tim Campbell, James Goodrich & Ketan Ramakrishnan (eds.), Ethics and Existence: The Legacy of Derek Parfit. Oxford: Oxford University Press.
Totalism is the view that one distribution of well-being is better than another just in case the one contains a greater sum of well-being than the other. Many philosophers, following Parfit, reject totalism on the grounds that it entails the repugnant conclusion: that, for any number of excellent lives, there is some number of lives that are barely worth living whose existence would be better. This paper develops a theory of welfare aggregation---the lexical-threshold view---that allows totalism to avoid the repugnant conclusion, as well as its analogues involving suffering populations and the lengths of individual lives. The theory is grounded in some independently plausible views about the structure of well-being, identifies a new source of incommensurability in population ethics, and avoids some of the implausibly extreme consequences of other lexical views, without violating the intuitive separability of lives.
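For readers who want the target view stated formally, here is a minimal LaTeX sketch of totalism and the repugnant conclusion as the abstract describes them; the notation (populations A and B, welfare levels w_i, an "excellent" level h and a barely positive level ε) is illustrative, not the paper's own.

```latex
% Totalism: one distribution is at least as good as another iff it contains
% at least as great a sum of well-being.
A \succeq B \iff \sum_{i \in A} w_i \;\ge\; \sum_{j \in B} w_j

% Repugnant conclusion, schematically: for any n lives at an excellent level h,
% there is some larger number m of lives at a barely positive level \varepsilon
% whose total is greater, and which totalism therefore ranks as better.
\forall n\ \forall h > 0\ \exists m:\quad m \cdot \varepsilon > n \cdot h
\qquad (0 < \varepsilon \ll h)
```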
I present a new argument for the repugnant conclusion. The core of the argument is a risky, intrapersonal analogue of the mere addition paradox. The argument is important for three reasons. First, some solutions to Parfit’s original puzzle do not obviously generalize to the intrapersonal puzzle in a plausible way. Second, it raises independently important questions about how to make decisions under uncertainty for the sake of people whose existence might depend on what we do. And, third, it suggests various difficulties for leading views about the value of a person’s life compared to her nonexistence.
How should we choose between uncertain prospects in which different possible people might exist at different levels of wellbeing? Alex Voorhoeve and Marc Fleurbaey offer an egalitarian answer to this question. I give some reasons to reject their answer and then sketch an alternative, which I call person-affecting prioritarianism.
The Rachels–Temkin spectrum arguments against the transitivity of better than involve good or bad experiences, lives, or outcomes that vary along multiple dimensions—e.g., duration and intensity of pleasure or pain. This paper presents variations on these arguments involving combinations of good and bad experiences, which have even more radical implications than the violation of transitivity. These variations force opponents of transitivity to conclude that something good is worse than something that isn’t good, on pain of rejecting the good altogether. That is impossible, so we must reject the spectrum arguments.
I defend the view that a reason for someone to do something is just a reason why she ought to do it. This simple view has been thought incompatible with the existence of reasons to do things that we may refrain from doing or even ought not to do. For it is widely assumed that there are reasons why we ought to do something only if we ought to do it. I present several counterexamples to this principle and reject some ways of understanding "ought" so that the principle is compatible with my examples. I conclude with a hypothesis for when and why the principle should be expected to fail.
Many economists and philosophers assume that status quo bias is necessarily irrational. I argue that, in some cases, status quo bias is fully rational. I discuss the rationality of status quo bias on both subjective and objective theories of the rationality of preferences. I argue that subjective theories cannot plausibly condemn this bias as irrational. I then discuss one kind of objective theory, which holds that a conservative bias toward existing things of value is rational. This account can fruitfully explain some compelling aspects of common sense morality, and it may justify status quo bias.
Lara Buchak argues for a version of rank-weighted utilitarianism that assigns greater weight to the interests of the worse off. She argues that our distributive principles should be derived from the preferences of rational individuals behind a veil of ignorance, who ought to be risk averse. I argue that Buchak’s appeal to the veil of ignorance leads to a particular way of extending rank-weighted utilitarianism to the evaluation of uncertain prospects. This method recommends choices that violate the unanimous preferences of rational individuals and choices that guarantee worse distributions. These results, I suggest, undermine Buchak’s argument for rank-weighted utilitarianism.
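As a point of reference, a generic rank-weighted (rank-dependent) social welfare function can be sketched as below; whether this matches Buchak's exact formulation is an assumption, and the weights are purely illustrative.

```latex
% Order welfare levels from worst off to best off: u_{(1)} \le u_{(2)} \le \dots \le u_{(n)}.
% A rank-weighted view evaluates a distribution by a weighted sum with weakly
% decreasing weights, so the worse off count for more:
V(u) = \sum_{k=1}^{n} a_k\, u_{(k)}, \qquad a_1 \ge a_2 \ge \dots \ge a_n \ge 0
```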
The standard view of "believes" and other propositional attitude verbs is that such verbs express relations between agents and propositions. A sentence of the form “S believes that p” is true just in case S stands in the belief-relation to the proposition that p; this proposition is the referent of the complement clause "that p." On this view, we would expect the clausal complements of propositional attitude verbs to be freely intersubstitutable with their corresponding proposition descriptions—e.g., "the proposition that p"—as they are in the case of "believes." In many cases, however, intersubstitution of that-clauses and proposition descriptions fails to preserve truth value or even grammaticality. These substitution failures lead some philosophers to reject the standard view of propositional attitude reports. Others conclude that propositional attitude verbs are systematically ambiguous. I reject both these views. On my view, the that-clause complements of propositional attitude verbs denote propositions, but proposition descriptions do not.
This paper presents a new kind of problem in the ethics of distribution. The problem takes the form of several "calibration dilemmas," in which intuitively reasonable aversion to small-stakes inequalities requires leading theories of distribution to recommend intuitively unreasonable aversion to large-stakes inequalities—e.g., inequalities in which half the population would gain an arbitrarily large quantity of well-being or resources. We first lay out a series of such dilemmas for a family of broadly prioritarian theories. We then consider a widely endorsed family of egalitarian views and show that, despite avoiding the dilemmas for prioritarianism, they are subject to even more forceful calibration dilemmas. We then show how our results challenge common utilitarian accounts of the badness of inequalities in resources (e.g., wealth inequality). These dilemmas leave us with a few options, all of which we find unpalatable. We conclude by laying out these options and suggesting avenues for further research.
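For orientation, the broadly prioritarian theories at issue can be represented schematically as follows; the particular transformations mentioned are illustrative examples, not the ones used in the paper.

```latex
% Prioritarianism: sum a strictly increasing, strictly concave transform f of
% each person's well-being, so that gains to the worse off weigh more.
W(w_1,\dots,w_n) = \sum_{i=1}^{n} f(w_i), \qquad f' > 0,\quad f'' < 0
% e.g. f(w) = \sqrt{w} or f(w) = 1 - e^{-w} (illustrative choices only)
```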
Matthew Adler's Measuring Social Welfare is an introduction to the social welfare function (SWF) methodology. This essay questions some ideas at the core of the SWF methodology having to do with the relation between the SWF and the measure of well-being. The facts about individual well-being do not single out a particular scale on which well-being must be measured. As with physical quantities, there are multiple scales that can be used to represent the same information about well-being; no one scale is special. Like physical laws, the SWF and its ranking of distributions cannot depend on exactly which of these scales we use. Adler and other theorists in the SWF tradition have used this idea to derive highly restrictive constraints on the shape of the SWF. These constraints rule out seemingly plausible views about distributive justice and population ethics. I argue, however, that these constraints stem from a simple but instructive mistake. The SWF should not be applied to vectors of numbers such as 1 and 2, but rather to vectors of dimensioned quantities such as 1 util and 2 utils. This seemingly pedantic suggestion turns out to have far-reaching consequences. Unlike the orthodox SWF approach, treating welfare levels as dimensioned quantities lets us distinguish between real changes in well-being and mere changes in the unit of measurement. It does this without making the SWF depend on the scale on which welfare is measured, and in a way that avoids the restrictive constraints on the shape of the SWF.
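A small numerical sketch of the scale-dependence worry the essay addresses: if an SWF with a non-homogeneous concave transform is applied to bare numbers rather than dimensioned quantities, its ranking of two distributions can flip when the unit of measurement changes. The transform and the welfare numbers below are hypothetical illustrations, not Adler's or Nebel's.

```python
import math

def swf(distribution):
    """A toy prioritarian SWF applied to bare numbers: sum of 1 - exp(-w)."""
    return sum(1 - math.exp(-w) for w in distribution)

# Two fixed-population distributions, measured in some welfare unit.
a = [1.0, 1.0]
b = [3.0, 0.1]
print(swf(a) > swf(b))   # True: a ranked above b

# Re-express the very same welfare facts in a unit ten times larger
# (so every number is divided by 10). The ranking flips.
a_rescaled = [w / 10 for w in a]
b_rescaled = [w / 10 for w in b]
print(swf(a_rescaled) > swf(b_rescaled))   # False: now b ranked above a
```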
According to the person-affecting restriction, one distribution of welfare can be better than another only if there is someone for whom it is better. Extant problems for the person-affecting restriction involve variable-population cases, such as the nonidentity problem, which are notoriously controversial and difficult to resolve. This paper develops a fixed-population problem for the person-affecting restriction. The problem reveals that, in the presence of incommensurable welfare levels, the person-affecting restriction is incompatible with minimal requirements of impartial beneficence even in fixed-population contexts.
According to asymmetric comparativism, it is worse for a person to exist with a miserable life than not to exist, but it is not better for a person to exist with a happy life than not to exist. My aim in this paper is to explain how asymmetric comparativism could possibly be true. My account of asymmetric comparativism begins with a different asymmetry, regarding the (dis)value of early death. I offer an account of this early death asymmetry, appealing to the idea of conditional goods, and generalize it to explain how asymmetric comparativism could possibly be true. I also address the objection that asymmetric comparativism has unacceptably antinatalist implications.
We argue that all gradable expressions in natural language obey a principle that we call Comparability: if x and y are both F to some degree, then either x is at least as F as y or y is at least as F as x. This principle has been widely rejected among philosophers, especially by ethicists, and its falsity has been claimed to have important normative implications. We argue that Comparability is needed to explain the goodness of several patterns of inference that seem manifestly valid. We reply to some influential arguments against Comparability, raise and reject some new arguments, and draw out some surprising implications of Comparability for debates concerning preference and credence.
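The principle itself can be stated compactly; the formalization below is a direct transcription of the abstract's statement, with "⪰_F" read as "at least as F as".

```latex
% Comparability, for any gradable expression F:
\big(\exists d_1\ x \text{ is } F \text{ to degree } d_1\big) \wedge
\big(\exists d_2\ y \text{ is } F \text{ to degree } d_2\big)
\;\rightarrow\; \big(x \succeq_F y \ \vee\ y \succeq_F x\big)
```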
Background If trials of therapeutic interventions are to serve society's interests, they must be of high methodological quality and must satisfy moral commitments to human subjects. The authors set out to develop a clinical-trials compendium in which standards for the ethical treatment of human subjects are integrated with standards for research methods. Methods The authors rank-ordered the world's nations and chose the 31 with >700 active trials as of 24 July 2008. Governmental and other authoritative entities of the 31 countries were searched, and 1004 English-language documents containing ethical and/or methodological standards for clinical trials were identified. The authors extracted standards from 144 of those: 50 designated as ‘core’, 39 addressing trials of invasive procedures, and a 5% sample of the remainder. As the integrating framework for the standards we developed a coherent taxonomy encompassing all elements of a trial's stages. Findings Review of the 144 documents yielded nearly 15 000 discrete standards. After duplicates were removed, 5903 substantive standards remained, distributed in the taxonomy as follows: initiation, 1401 standards, 8 divisions; design, 1869 standards, 16 divisions; conduct, 1473 standards, 8 divisions; analysing and reporting results, 997 standards, 4 divisions; and post-trial standards, 168 standards, 5 divisions. Conclusions The overwhelming number of source documents and standards uncovered in this study was not anticipated and confirms the extraordinary complexity of the clinical trials enterprise. This taxonomy of multinational ethical and methodological standards may help trialists and overseers improve the quality of clinical trials, particularly given the globalisation of clinical research.
Ethical beliefs are not justified by familiar methods. We do not directly sense ethical properties, at least not in the straightforward way we sense colors or shapes. Nor is it plausible to think – despite a tradition claiming otherwise – that there are self-evident ethical truths that we can know in the way we know conceptual or mathematical truths. Yet, if we are justified in believing anything, we are justified in believing various ethical propositions, e.g., that slavery is wrong. If ethical beliefs are not justified in these familiar ways, how are they justified? In her widely read “Modern Moral Philosophy” and in her short complementary paper, “On Brute Facts,” G.E.M. Anscombe answers this question with a compelling and unorthodox account of justification in ethics. Because of her polemical tone and because “Modern Moral Philosophy” does so much else besides, this contribution is easy to overlook. But her account is worth taking seriously, since (a) it is an underappreciated yet plausible account that sidesteps traditional controversies, (b) it offers rich conceptual tools for interpreting and critiquing ethical theories, (c) it suggests an appealing account of the place of ethical theory in ethical knowledge and, (d) it provides useful guidance for doing applied ethics.
This paper compares two alternative explanations of pragmatic encroachment on knowledge (i.e., the claim that whether an agent knows that p can depend on pragmatic factors). After reviewing the evidence for such pragmatic encroachment, we ask how it is best explained, assuming it obtains. Several authors have recently argued that the best explanation is provided by a particular account of belief, which we call pragmatic credal reductivism. On this view, what it is for an agent to believe a proposition is for her credence in this proposition to be above a certain threshold, a threshold that varies depending on pragmatic factors. We show that while this account of belief can provide an elegant explanation of pragmatic encroachment on knowledge, it is not alone in doing so, for an alternative account of belief, which we call the reasoning disposition account, can do so as well. And the latter account, we argue, is far more plausible than pragmatic credal reductivism, since it accords far better with a number of claims about belief that are very hard to deny.
Robustness is a common platitude: hypotheses are better supported with evidence generated by multiple techniques that rely on different background assumptions. Robustness has been put to numerous epistemic tasks, including the demarcation of artifacts from real entities, countering the “experimenter’s regress,” and resolving evidential discordance. Despite the frequency of appeals to robustness, the notion itself has received scant critique. Arguments based on robustness can give incorrect conclusions. More worrying, although robustness may be valuable in ideal evidential circumstances (i.e., when evidence is concordant), the evidence available from multiple techniques is often discordant.
In the present interview, Jacob Rogozinski elucidates the main concepts and theses he developed in his latest book dedicated to the issue of modern jihadism. On this occasion, he explains his disagreements with other philosophical (Badiou, Baudrillard, Žižek) and anthropological (Girard) accounts of Islamic terrorism. Rogozinski also explains that although jihadism betrays Islam, it nonetheless has everything to do with Islam. Finally, he describes his own philosophical journey, which led him from a phenomenological study of the ego and the flesh to the study of past (witch-hunts, French Reign of Terror) and contemporary (jihadism) terror apparatuses.
Philosophers have committed sins while studying science, it is said – philosophy of science focused on physics to the detriment of biology, reconstructed idealizations of scientific episodes rather than attending to historical details, and focused on theories and concepts to the detriment of experiments. Recent generations of philosophers of science have tried to atone for these sins, and by the 1980s the exculpation was in full swing. Marcel Weber’s Philosophy of Experimental Biology is a zenith mea culpa for philosophy of science: it carefully describes several historical examples from twentieth century biology to address both ‘old’ philosophical topics, like reductionism, inference, and realism, and ‘new’ topics, like discovery, models, and norms. Biology, experiments, history – at last, philosophy of science, free of sin.
Arnošt Kolman (1892–1979) was a Czech mathematician, philosopher and Communist official. In this paper, we examine Kolman’s arguments against logical positivism, which revolve around the notion of the fetishization of mathematics. Kolman derives his notion of fetishism from Marx’s conception of commodity fetishism. Kolman aims to show that an entity (a system, structure, or logical construction) acquires, besides its real existence, another, formal existence. Fetishism means the fantastic detachment of the physical characteristics of real things or phenomena from these things. We identify Kolman’s two main arguments against logical positivism. In the first argument, Kolman applied Lenin’s arguments against Mach’s empirio-criticism to Russell’s neutral monism, i.e. mathematical fetishism is internally related to political conservatism. Kolman’s second main argument is that logical and mathematical fetishes are epistemologically deprived of any historical and dynamic dimension. In the final parts of our paper, we place Kolman’s thinking in the context of his time and identify some tenets of mathematical fetishism appearing in Alain Badiou’s mathematical ontology today.
An astonishing volume and diversity of evidence is available for many hypotheses in the biomedical and social sciences. Some of this evidence—usually from randomized controlled trials (RCTs)—is amalgamated by meta-analysis. Despite the ongoing debate regarding whether or not RCTs are the ‘gold-standard’ of evidence, it is usually meta-analysis which is considered the best source of evidence: meta-analysis is thought by many to be the platinum standard of evidence. However, I argue that meta-analysis falls far short of that standard. Different meta-analyses of the same evidence can reach contradictory conclusions. Meta-analysis fails to provide objective grounds for intersubjective assessments of hypotheses because numerous decisions must be made when performing a meta-analysis which allow wide latitude for subjective idiosyncrasies to influence its outcome. I end by suggesting that an older tradition of evidence in medicine—the plurality of reasoning strategies appealed to by the epidemiologist Sir Bradford Hill—is a superior strategy for assessing a large volume and diversity of evidence.
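To make the abstract's central claim concrete, here is a minimal fixed-effect, inverse-variance meta-analysis sketch on invented study results, showing how a different but defensible inclusion decision can reverse the pooled conclusion. The effect sizes and variances are fabricated for illustration and have no connection to any real trials or to the paper's own examples.

```python
def pooled_effect(studies):
    """Fixed-effect, inverse-variance pooling of (effect, variance) pairs."""
    weights = [1.0 / var for _, var in studies]
    weighted = [w * eff for w, (eff, _) in zip(weights, studies)]
    return sum(weighted) / sum(weights)

# Hypothetical per-study effects (e.g. log odds ratios) and variances.
all_studies = [(0.4, 0.04), (0.3, 0.09), (-0.5, 0.02), (-0.2, 0.05)]

# One meta-analysis includes everything; another, applying a stricter
# (but not unreasonable) quality filter, keeps only the first two studies.
print(round(pooled_effect(all_studies), 2))      # -0.15: intervention looks harmful
print(round(pooled_effect(all_studies[:2]), 2))  #  0.37: intervention looks beneficial
```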
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
Measuring Effectiveness. Jacob Stegenga - 2015 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 54:62-71.
Measuring the effectiveness of medical interventions faces three epistemological challenges: the choice of good measuring instruments, the use of appropriate analytic measures, and the use of a reliable method of extrapolating measures from an experimental context to a more general context. In practice each of these challenges contributes to overestimating the effectiveness of medical interventions. These challenges suggest the need for corrective normative principles. The instruments employed in clinical research should measure patient-relevant and disease-specific parameters, and should not be sensitive to parameters that are only indirectly relevant. Effectiveness should always be measured and reported in absolute terms (using measures such as 'absolute risk reduction'), and only sometimes should effectiveness also be measured and reported in relative terms (using measures such as 'relative risk reduction'); employment of relative measures promotes an informal fallacy akin to the base-rate fallacy, which can be exploited to exaggerate claims of effectiveness. Finally, extrapolating from research settings to clinical settings should more rigorously take into account possible ways in which the intervention in question can fail to be effective in a target population.
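A small worked example of the point about absolute versus relative measures; the event rates are invented for illustration.

```python
def absolute_risk_reduction(p_control, p_treatment):
    return p_control - p_treatment

def relative_risk_reduction(p_control, p_treatment):
    return (p_control - p_treatment) / p_control

# A rare outcome: 2% of control patients and 1% of treated patients have the event.
p_control, p_treatment = 0.02, 0.01

print(round(absolute_risk_reduction(p_control, p_treatment), 3))  # 0.01: one percentage point
print(round(relative_risk_reduction(p_control, p_treatment), 2))  # 0.5: "halves the risk"
# Reported only in relative terms, the same result sounds far more impressive
# than the absolute figure warrants when the baseline risk is low.
```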
Robustness arguments hold that hypotheses are more likely to be true when they are confirmed by diverse kinds of evidence. Robustness arguments require the confirming evidence to be independent. We identify two kinds of independence appealed to in robustness arguments: ontic independence (OI), in which the multiple lines of evidence depend on different materials, assumptions, or theories, and probabilistic independence. Many assume that OI is sufficient for a robustness argument to be warranted. However, we argue that, as typically construed, OI is not a sufficient independence condition for warranting robustness arguments. We show that OI evidence can collectively confirm a hypothesis to a lower degree than individual lines of evidence, contrary to the standard assumption undergirding usual robustness arguments. We employ Bayesian networks to represent the ideal empirical scenario for a robustness argument and a variety of ways in which empirical scenarios can fall short of this ideal.
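A minimal numerical sketch of the probabilistic point: two lines of evidence, each individually confirming H, can jointly confirm it less (here, they even disconfirm it). The joint probabilities are invented, the calculation is direct summation over a joint distribution rather than the paper's Bayesian-network representation, and ontic independence itself is not modelled, only the probabilistic behaviour the abstract warns about.

```python
# Joint distribution P(H, E1, E2) chosen so that E1 and E2 each confirm H
# on their own but disconfirm H together (all numbers are invented).
# Keys: (H, E1, E2) with True/False values; the probabilities sum to 1.
joint = {
    (True,  True,  True):  0.10, (True,  True,  False): 0.20,
    (True,  False, True):  0.20, (True,  False, False): 0.00,
    (False, True,  True):  0.15, (False, True,  False): 0.05,
    (False, False, True):  0.05, (False, False, False): 0.25,
}

def prob(predicate):
    return sum(p for outcome, p in joint.items() if predicate(outcome))

def posterior(condition):
    """P(H | condition) by direct summation over the joint distribution."""
    return prob(lambda o: o[0] and condition(o)) / prob(condition)

print(round(posterior(lambda o: True), 2))           # P(H)          = 0.5
print(round(posterior(lambda o: o[1]), 2))           # P(H | E1)     = 0.6
print(round(posterior(lambda o: o[2]), 2))           # P(H | E2)     = 0.6
print(round(posterior(lambda o: o[1] and o[2]), 2))  # P(H | E1, E2) = 0.4
```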
I defend a radical interpretation of biological populations—what I call population pluralism—which holds that there are many ways that a particular grouping of individuals can be related such that the grouping satisfies the conditions necessary for those individuals to evolve together. More constraining accounts of biological populations face empirical counter-examples and conceptual difficulties. One of the most intuitive and frequently employed conditions, causal connectivity—itself beset with numerous difficulties—is best construed by considering the relevant causal relations as ‘thick’ causal concepts. I argue that the fine-grained causal relations that could constitute membership in a biological population are huge in number and many are manifested by degree, and thus we can construe population membership as being defined by massively multidimensional constructs, the differences between which are largely arbitrary. I end by showing that positions in two recent debates in theoretical biology depend on a view of biological populations at odds with the pluralism defended here.
The phenomenon of disagreement has recently been brought into focus by the debate between contextualists and relativist invariantists about epistemic expressions such as ‘might’, ‘probably’, indicative conditionals, and the deontic ‘ought’. Against the orthodox contextualist view, it has been argued that an invariantist account can better explain apparent disagreements across contexts by appeal to the incompatibility of the propositions expressed in those contexts. This paper introduces an important and underappreciated phenomenon associated with epistemic expressions — a phenomenon that we call reversibility. We argue that the invariantist account of disagreement is incompatible with reversibility, and we go on to show that reversible sentences cast doubt on the putative data about disagreement, even without assuming invariantism. Our argument therefore undermines much of the motivation for invariantism, and provides a new source for constraints on the proper explanation of purported data about disagreement.
Evidence hierarchies are widely used to assess evidence in systematic reviews of medical studies. I give several arguments against the use of evidence hierarchies. The problems with evidence hierarchies are numerous, and include methodological shortcomings, philosophical problems, and formal constraints. I argue that medical science should not employ evidence hierarchies, including even the latest and most sophisticated of such hierarchies.
Amalgamating evidence of different kinds for the same hypothesis into an overall confirmation is analogous, I argue, to amalgamating individuals’ preferences into a group preference. The latter faces well-known impossibility theorems, most famously “Arrow’s Theorem”. Once the analogy between amalgamating evidence and amalgamating preferences is made precise, it is clear that amalgamating evidence might face a theorem similar to Arrow’s. I prove that this is so, and end by discussing the plausibility of the axioms required for the theorem.
Formal principles governing best practices in classification and definition have for too long been neglected in the construction of biomedical ontologies, in ways which have important negative consequences for data integration and ontology alignment. We argue that the use of such principles in ontology construction can serve as a valuable tool in error-detection and also in supporting reliable manual curation. We argue also that such principles are a prerequisite for the successful application of advanced data integration techniques such as ontology-based multi-database querying, automated ontology alignment and ontology-based text-mining. These theses are illustrated by means of a case study of the Gene Ontology, a project of increasing importance within the field of biomedical data integration.
Aristotle is said to have held that any kind of actual infinity is impossible. I argue that he was a finitist (or "potentialist") about _magnitude_, but not about _plurality_. He did not deny that there are, or can be, infinitely many things in actuality. If this is right, then it has implications for Aristotle's views about the metaphysics of parts and points.
We introduce a realist, unextravagant interpretation of quantum theory that builds on the existing physical structure of the theory and allows experiments to have definite outcomes but leaves the theory’s basic dynamical content essentially intact. Much as classical systems have specific states that evolve along definite trajectories through configuration spaces, the traditional formulation of quantum theory permits assuming that closed quantum systems have specific states that evolve unitarily along definite trajectories through Hilbert spaces, and our interpretation extends this intuitive picture of states and Hilbert-space trajectories to the more realistic case of open quantum systems despite the generic development of entanglement. We provide independent justification for the partial-trace operation for density matrices, reformulate wave-function collapse in terms of an underlying interpolating dynamics, derive the Born rule from deeper principles, resolve several open questions regarding ontological stability and dynamics, address a number of familiar no-go theorems, and argue that our interpretation is ultimately compatible with Lorentz invariance. Along the way, we also investigate a number of unexplored features of quantum theory, including an interesting geometrical structure—which we call subsystem space—that we believe merits further study. We conclude with a summary, a list of criteria for future work on quantum foundations, and further research directions. We include an appendix that briefly reviews the traditional Copenhagen interpretation and the measurement problem of quantum theory, as well as the instrumentalist approach and a collection of foundational theorems not otherwise discussed in the main text.
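Since the abstract leans on the partial-trace operation for density matrices, here is a brief numpy illustration of that operation on a maximally entangled two-qubit state; it shows the generic situation the authors describe, in which an open subsystem's reduced state is mixed even though the closed joint state is pure. This is a standard textbook computation, not code from the paper.

```python
import numpy as np

# Two-qubit Bell state |Φ+> = (|00> + |11>)/sqrt(2); the joint state is pure.
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())                  # 4x4 joint density matrix

# Partial trace over the second qubit: view rho with indices (i, j, k, l)
# and sum over j = l, leaving the reduced density matrix of the first qubit.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_A.real, 3))                   # 0.5 * identity: maximally mixed
print(round(np.trace(rho_A @ rho_A).real, 3))    # purity 0.5 < 1, despite a pure joint state
```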
Critics of commodification often claim that the buying and selling of some good communicates disrespect or some other inappropriate attitude. Such semiotic critiques have been leveled against markets in sex, pornography, kidneys, surrogacy, blood, and many other things. Brennan and Jaworski (2015a) have recently argued that all such objections fail. They claim that the meaning of a market transaction is a highly contingent, socially constructed fact. If allowing a market for one of these goods can improve the supply, access or quality of the good, then instead of banning the market on semiotic grounds, they urge that we should revise our semiotics. In this reply, I isolate a part of the meaning of a market transaction that is not socially constructed: our market exchanges always express preferences. I then show how cogent semiotic critiques of some markets can be constructed on the basis of this fact.
According to Rosenthal's higher-order thought (HOT) theory of consciousness, one is in a conscious mental state if and only if one is aware of oneself as being in that state via a suitable HOT. Several critics have argued that the possibility of so-called targetless HOTs (that is, HOTs that represent one as being in a state that does not exist) undermines the theory. Recently, Wilberg (2010) has argued that HOT theory can offer a straightforward account of such cases: since consciousness is a property of mental state tokens, and since there are no states to exhibit consciousness, one is not in conscious states in virtue of targetless HOTs. In this paper, I argue that Wilberg's account is problematic and that Rosenthal's version of HOT theory, according to which a suitable HOT is both necessary and sufficient for consciousness, is to be preferred to Wilberg's account. I then argue that Rosenthal's account can comfortably accommodate targetless HOTs because consciousness is best understood as a property of individuals, not a property of states.
David Rosenthal explains conscious mentality in terms of two independent, though complementary, theories—the higher-order thought (“HOT”) theory of consciousness and quality-space theory (“QST”) about mental qualities. It is natural to understand this combination of views as constituting a kind of representationalism about experience—that is, a version of the view that an experience’s conscious character is identical with certain of its representational properties. At times, however, Rosenthal seems to resist this characterization of his view. We explore here whether and to what extent it makes sense to construe Rosenthal’s views as representationalist. Our goal is not merely terminological—discerning how best to use the expression ‘representationalism’. Rather, we argue that understanding Rosenthal’s account as a kind of representationalism permits us not only to make sense of broader debates within the philosophy of mind, but also to extend and clarify aspects of the view itself.
I offer here a new hypothesis about the nature of implicit attitudes. Psychologists and philosophers alike often distinguish implicit from explicit attitudes by maintaining that we are aware of the latter, but not aware of the former. Recent experimental evidence, however, seems to challenge this account. It would seem, for example, that participants are frequently quite adept at predicting their own performances on measures of implicit attitudes. I propose here that most theorists in this area have nonetheless overlooked a commonsense distinction regarding how we can be aware of attitudes, a difference that fundamentally distinguishes implicit and explicit attitudes. Along the way, I discuss the implications that this distinction may hold for future debates about and experimental investigations into the nature of implicit attitudes.
Medical scientists employ ‘quality assessment tools’ (QATs) to measure the quality of evidence from clinical studies, especially randomized controlled trials (RCTs). These tools are designed to take into account various methodological details of clinical studies, including randomization, blinding, and other features of studies deemed relevant to minimizing bias and error. There are now dozens available. The various QATs on offer differ widely from each other, and second-order empirical studies show that QATs have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem I call the underdetermination of evidential significance. Disagreements about the strength of a particular piece of evidence can be due to different—but in principle equally good—weightings of the fine-grained methodological features which constitute QATs.
Relationalism holds that perceptual experiences are relations between subjects and perceived objects. But much evidence suggests that perceptual states can be unconscious. We argue here that unconscious perception raises difficulties for relationalism. Relationalists would seem to have three options. First, they may deny that there is unconscious perception or question whether we have sufficient evidence to posit it. Second, they may allow for unconscious perception but deny that the relationalist analysis applies to it. Third, they may offer a relationalist explanation of unconscious perception. We argue that each of these strategies is questionable.
Harms of medical interventions are systematically underestimated in clinical research. Numerous factors—conceptual, methodological, and social—contribute to this underestimation. I articulate the depth of such underestimation by describing these factors at the various stages of clinical research. Before any evidence is gathered, the ways harms are operationalized in clinical research contributes to their underestimation. Medical interventions are first tested in phase 1 ‘first in human’ trials, but evidence from these trials is rarely published, despite the fact that such trials provide the foundation for assessing the harm profile of medical interventions. If a medical intervention is deemed safe in a phase 1 trial, it is tested in larger phase 2 and 3 clinical trials. One way to think about the problem of underestimating harms is in terms of the statistical ‘power’ of a clinical trial—the ability of a trial to detect a difference of a certain effect size between the experimental group and the control group. Power is normally thought to be pertinent to detecting benefits of medical interventions. It is important, though, to distinguish between the ability of a trial to detect benefits and the ability of a trial to detect harms. I refer to the former as power-B and the latter as power-H. I identify several factors that maximize power-B by sacrificing power-H in phase 3 clinical trials. If a medical intervention is approved for general use, it is evaluated by phase 4 post-market surveillance. Phase 4 surveillance of harms further contributes to underestimating the harm profile of medical interventions. At every stage of clinical research the hunt for harms is shrouded in secrecy, which further contributes to the underestimation of the harm profiles of medical interventions.
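The power-B versus power-H contrast can be made vivid with a back-of-the-envelope calculation. The sketch below uses a simple two-sided normal approximation for comparing two proportions; the event rates and sample size are hypothetical, and this is not the paper's own analysis.

```python
from scipy.stats import norm

def power_two_proportions(p_control, p_treatment, n_per_arm, alpha=0.05):
    """Approximate power to detect a difference in event rates between two
    equal-sized arms, using a two-sided z-test for proportions."""
    se = ((p_control * (1 - p_control) + p_treatment * (1 - p_treatment)) / n_per_arm) ** 0.5
    z_alpha = norm.ppf(1 - alpha / 2)
    z_effect = abs(p_control - p_treatment) / se
    return norm.cdf(z_effect - z_alpha)

# A common benefit endpoint (20% vs 10% response) versus a rarer harm endpoint
# (2% vs 1% serious adverse events), with the same 300 patients per arm.
n = 300
print(round(power_two_proportions(0.20, 0.10, n), 2))   # power-B: about 0.93
print(round(power_two_proportions(0.02, 0.01, n), 2))   # power-H: about 0.17
```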
Reasons transmit. If one has a reason to attain an end, then one has a reason to effect means for that end: reasons are transmitted from end to means. I argue that the likelihood ratio (LR) is a compelling measure of reason transmission from ends to means. The LR measure is superior to other measures, can be used to construct a condition specifying precisely when reasons transmit, and satisfies intuitions regarding end-means reason transmission in a broad array of cases.
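For concreteness, a likelihood-ratio measure of end-means reason transmission can be glossed along the following standard lines; the exact condition defended in the paper may differ, so treat this as an illustrative reading rather than the official formulation.

```latex
% M = the agent takes the means, E = the end is attained.
\mathrm{LR}(M, E) \;=\; \frac{P(E \mid M)}{P(E \mid \neg M)}
% On such a reading, reason transmits from the end E to the means M to the
% extent that LR(M, E) exceeds 1, i.e. taking the means makes the end likelier.
```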
In this paper, we review a general technique for converting the standard Lagrangian description of a classical system into a formulation that puts time on an equal footing with the system's degrees of freedom. We show how the resulting framework anticipates key features of special relativity, including the signature of the Minkowski metric tensor and the special role played by theories that are invariant under a generalized notion of Lorentz transformations. We then use this technique to revisit a classification of classical particle-types that mirrors Wigner's classification of quantum particle-types in terms of irreducible representations of the Poincaré group, including the cases of massive particles, massless particles, and tachyons. Along the way, we see gauge invariance naturally emerge in the context of classical massless particles with nonzero spin, as well as study the massless limit of a massive particle and derive a classical-particle version of the Higgs mechanism.
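One standard way of carrying out the kind of conversion the abstract describes (promoting time to a dynamical variable alongside the configuration variables) is sketched below; whether it matches the authors' precise construction is an assumption.

```latex
% Start from an ordinary action with time t as the integration variable:
S[q] = \int dt\; L\!\Big(q, \tfrac{dq}{dt}, t\Big)
% Introduce an arbitrary parameter \lambda and treat t(\lambda) as a degree of
% freedom on a par with q(\lambda); dots denote d/d\lambda:
S[q, t] = \int d\lambda\; \dot{t}\, L\!\Big(q, \frac{\dot{q}}{\dot{t}}, t\Big)
% The new Lagrangian is homogeneous of degree one in (\dot{q}, \dot{t}), so the
% action is invariant under reparametrizations \lambda \to f(\lambda).
```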
To be effective, a medical intervention must improve one's health by targeting a disease. The concept of disease, though, is controversial. Among the leading accounts of disease (naturalism, normativism, hybridism, and eliminativism), I defend a version of hybridism. A hybrid account of disease holds that for a state to be a disease that state must both (i) have a constitutive causal basis and (ii) cause harm. The dual requirement of hybridism entails that a medical intervention, to be deemed effective, must target either the constitutive causal basis of a disease or the harms caused by the disease (or ideally both). This provides a theoretical underpinning to the two principal aims of medical treatment: care and cure.
Recent empirical and conceptual research has shown that moral considerations have an influence on the way we use the adverb 'intentionally'. Here we propose our own account of these phenomena, according to which they arise from the fact that the adverb 'intentionally' has three different meanings that are differently selected by contextual factors, including normative expectations. We argue that our hypotheses can account for most available data and present some new results that support this. We end by discussing the implications of our account for folk psychology.
I argue that, on Husserl's account, affectivity, along with the closely related phenomenon of association, follows a form of sui generis lawfulness belonging to the domain of what Husserl calls motivation, which must be distinguished both (1) from the causal structures through which we understand the body third-personally, as a material thing; and also (2) from the rational or inferential structures at the level of deliberative judgment traditionally understood to be the domain of epistemic import. In effect, in addition to recognizing a “space of causes” and a “space of reasons,” Husserl’s account of affectivity and the epistemology of passive synthesis in which it is situated suggest that we should recognize a separate “space of motivations.” Within this space, on Husserl’s phenomenological picture, we can isolate two different sorts of epistemic import, one belonging directly to the passive-synthetic content of experience, as explained in Husserl’s account of association and his closely aligned notion of nonlinguistic sense, and a second—my primary focus—affectivity, which is still relevant for that content, albeit indirectly, and holds epistemic import in its determination not of what that content is but of how it comes to matter for us.
We summarize a new realist, unextravagant interpretation of quantum theory that builds on the existing physical structure of the theory and allows experiments to have definite outcomes but leaves the theory's basic dynamical content essentially intact. Much as classical systems have specific states that evolve along definite trajectories through configuration spaces, the traditional formulation of quantum theory permits assuming that closed quantum systems have specific states that evolve unitarily along definite trajectories through Hilbert spaces, and our interpretation extends this intuitive picture of states and Hilbert-space trajectories to the more realistic case of open quantum systems despite the generic development of entanglement. Our interpretation—which we claim is ultimately compatible with Lorentz invariance—reformulates wave-function collapse in terms of an underlying interpolating dynamics, makes it possible to derive the Born rule from deeper principles, and resolves several open questions regarding ontological stability and dynamics.
Millstein (2009) argues against conceptual pluralism with respect to the definition of “population,” and proposes her own definition of the term. I challenge both Millstein's negative arguments against conceptual pluralism and her positive proposal for a singular definition of population. The concept of population, I argue, does not refer to a natural kind; populations are constructs of biologists variably defined by contexts of inquiry.
Using grizzly-human encounters as a case study, this paper argues for a rethinking of the differences between humans and animals within environmental ethics. A diffractive approach that understands such differences as an effect of specific material and discursive arrangements would see ethics as an interrogation of which arrangements enable flourishing, or living and dying well. The paper draws on a wide variety of human-grizzly encounters in order to describe the species as co-constitutive and challenges perspectives that treat bears and other animals as oppositional and nonagential outsides to humans.
The objective of this paper is to analyze the broader significance of Frege’s logicist project against the background of Wittgenstein’s philosophy from both Tractatus and Philosophical Investigations. The article draws on two basic observations, namely that Frege’s project aims at saying something that was only implicit in everyday arithmetical practice, as the so-called recursion theorem demonstrates, and that the explicitness involved in logicism does not concern the arithmetical operations themselves, but rather the way they are defined. It thus represents the attempt to make explicit not the rules alone, but rather the rules governing their following, i.e. rules of second-order type. I elaborate on these remarks with short references to Brandom’s refinement of Frege’s expressivist and Wittgenstein’s pragmatist project.
The chemical characterization of the substance responsible for the phenomenon of “transformation” of pneumococci was presented in the now famous 1944 paper by Avery, MacLeod, and McCarty. Reception of this work was mixed. Although interpreting their results as evidence that deoxyribonucleic acid (DNA) is the molecule responsible for genetic changes was, at the time, controversial, this paper has been retrospectively celebrated as providing such evidence. The mixed and changing assessment of the evidence presented in the paper was due to the work’s interpretive flexibility – the evidence was interpreted in various ways, and such interpretations were justified given the neophytic state of molecular biology and methodological limitations of Avery’s transformation studies. I argue that the changing context in which the evidence presented by Avery’s group was interpreted partly explains the vicissitudes of the assessments of the evidence. Two less compelling explanations of the reception are a myth-making account and an appeal to the wartime historical context of its publication.