In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While this assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between R and R* is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
The objective Bayesian view of proof (or logical probability, or evidential support) is explained and defended: that the relation of evidence to hypothesis (in legal trials, science, etc.) is a strictly logical one, comparable to deductive logic. This view is distinguished from the thesis, which had some popularity in law in the 1980s, that legal evidence ought to be evaluated using numerical probabilities and formulas. While numbers are not always useful, a central role is played in uncertain reasoning by the ‘proportional syllogism’, or argument from frequencies, such as ‘nearly all aeroplane flights arrive safely, so my flight is very likely to arrive safely’. Such arguments raise the ‘problem of the reference class’, arising from the fact that an individual case may be a member of many different classes in which frequencies differ. For example, if 15 per cent of swans are black and 60 per cent of fauna in the zoo is black, what should I think about the likelihood of a swan in the zoo being black? The nature of the problem is explained, and legal cases where it arises are given. It is explained how recent work in data mining on the relevance of features for prediction provides a solution to the reference class problem.
It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'.
Probabilistic inferences from frequencies, such as "Most Quakers are pacifists; Nixon is a Quaker, so probably Nixon is a pacifist", suffer from the problem that an individual is typically a member of many "reference classes" (such as Quakers, Republicans, Californians, etc.) in which the frequency of the target attribute varies. How should one choose the best class or combine the information? The article argues that the problem can be solved by the feature selection methods used in contemporary Big Data science: the correct reference class is the one determined by the features relevant to the target, and relevance is measured by correlation (that is, a feature is relevant if it makes a difference to the frequency of the target).
In this article, I present a schema for generating counterexamples to the argument form known as Hypothetical Syllogism (HS) with indicative conditionals. If my schema for generating counterexamples to HS works as I think it does, then HS is invalid for indicative conditionals.
Moti Mizrahi (2013) presents some novel counterexamples to Hypothetical Syllogism (HS) for indicative conditionals. I show that they are not compelling, as they neglect the complicated ways in which conditionals and modals interact. I then briefly outline why HS should nevertheless be rejected.
Recently, the practice of deciding legal cases on purely statistical evidence has been widely criticised. Many feel uncomfortable with finding someone guilty on the basis of bare probabilities, even though the chance of error might be stupendously small. This is an important issue: with the rise of DNA profiling, courts are increasingly faced with purely statistical evidence. A prominent line of argument—endorsed by Blome-Tillmann 2017; Smith 2018; and Littlejohn 2018—rejects the use of such evidence by appealing to epistemic norms that apply to individual inquirers. My aim in this paper is to rehabilitate purely statistical evidence by arguing that, given the broader aims of legal systems, there are scenarios in which relying on such evidence is appropriate. Along the way I explain why popular arguments appealing to individual epistemic norms to reject legal reliance on bare statistics are unconvincing, by showing that courts and individuals face different epistemic predicaments (in short, individuals can hedge when confronted with statistical evidence, whilst legal tribunals cannot). I also correct some misconceptions about legal practice that have found their way into the recent literature.
The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge – and Sensitivity, and indeed epistemology in general – are of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
The debate over Hypothetical Syllogism is locked in stalemate. Although putative natural language counterexamples to Hypothetical Syllogism abound, many philosophers defend Hypothetical Syllogism, arguing that the alleged counterexamples involve an illicit shift in context. The proper lesson to draw from the putative counterexamples, they argue, is that natural language conditionals are context-sensitive conditionals which obey Hypothetical Syllogism. In order to make progress on the issue, I consider and improve upon Morreau’s proof of the invalidity of Hypothetical Syllogism. The improved proof relies upon the semantic claim that conditionals with antecedents irrelevant to the obtaining of an already true consequent are themselves true. Moreover, this semantic insight allows us to provide compelling counterexamples to Hypothetical Syllogism that are resistant to the usual contextualist response.
Martin Smith has recently proposed, in this journal, a novel and intriguing approach to puzzles and paradoxes in evidence law arising from the evidential standard of the Preponderance of the Evidence. According to Smith, the relation of normic support provides us with an elegant solution to those puzzles. In this paper I develop a counterexample to Smith’s approach and argue that normic support can neither account for our reluctance to base affirmative verdicts on bare statistical evidence nor resolve the pertinent paradoxes. Normic support is, as a consequence, not a successful epistemic anti-luck condition.
In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests interesting possibilities for developing non-equilibrium statistical mechanics and investigating inter-theoretic answers to the foundational questions of statistical mechanics.
One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way that they are usually conceived. In particular, though—in agreement with the currently accepted view—Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic one: although statistical fluctuations will result in occasional spontaneous differences in temperature or pressure, there is no way to predictably and reliably harness these to produce large violations of the original version of the second law. Maxwell advocates a version of the second law that is strictly weaker; the validity of even this probabilistic version is of limited scope, restricted to situations in which we are dealing with large numbers of molecules en masse and have no ability to manipulate individual molecules. Connected with this is his conception of the thermodynamic concepts of heat, work, and entropy; on the Maxwellian view, these are concepts that must be relativized to the means we have available for gathering information about and manipulating physical systems. The Maxwellian view is one that deserves serious consideration in discussions of the foundations of statistical mechanics. It has relevance for the project of recovering thermodynamics from statistical mechanics because, in such a project, it matters which version of the second law we are trying to recover.
This chapter will review selected aspects of the terrain of discussions about probabilities in statistical mechanics (with no pretensions to exhaustiveness, though the major issues will be touched upon), and will argue for a number of claims. None of the claims to be defended is entirely original, but all deserve emphasis. The first, and least controversial, is that probabilistic notions are needed to make sense of statistical mechanics. The reason for this is the same reason that convinced Maxwell, Gibbs, and Boltzmann that probabilities would be needed, namely, that the second law of thermodynamics, which in its original formulation says that certain processes are impossible, must, on the kinetic theory, be replaced by a weaker formulation according to which what the original version deems impossible is merely improbable. Second is that we ought not take the standard measures invoked in equilibrium statistical mechanics as giving, in any sense, the correct probabilities about microstates of the system. We can settle for a much weaker claim: that the probabilities for outcomes of experiments yielded by the standard distributions are effectively the same as those yielded by any distribution that we should take as representing probabilities over microstates. Lastly (and most controversially): in asking about the status of probabilities in statistical mechanics, the familiar dichotomy between epistemic probabilities (credences, or degrees of belief) and ontic (physical) probabilities is insufficient; the concept of probability that is best suited to the needs of statistical mechanics is one that combines epistemic and physical considerations.
Although the theory of the assertoric syllogism was Aristotle's great invention, one which dominated logical theory for the succeeding two millennia, accounts of the syllogism evolved and changed over that time. Indeed, in the twentieth century, doctrines were attributed to Aristotle which lost sight of what Aristotle intended. One of these mistaken doctrines concerned the very form of the syllogism: that a syllogism consists of three propositions containing three terms arranged in four figures. Yet another was that a syllogism is a conditional proposition deduced from a set of axioms. There is even unclarity about what the basis of syllogistic validity consists in. Returning to Aristotle's text, and reading it in the light of commentary from late antiquity and the middle ages, we find a coherent and precise theory which shows all these claims to be based on a misunderstanding and misreading.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper finally concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
The Generality Problem is widely recognized to be a serious problem for reliabilist theories of justification. James R. Beebe's Statistical Solution is one of only a handful of attempted solutions that have garnered serious attention in the literature. In their recent response to Beebe, Julien Dutant and Erik J. Olsson successfully refute Beebe's Statistical Solution. This paper presents a New Statistical Solution that accommodates Dutant and Olsson's objections, dodges the serious problems that trouble rival solutions, and retains the theoretical virtues that made Beebe's solution so attractive in the first place. There indeed exists a principled, rigorous, conceptually sparse, and plausible solution to the Generality Problem: it is the New Statistical Solution.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
Transformed RAVAL NOTATION solves syllogism problems quickly and accurately. The method solves any categorical syllogism problem with the same ease and is as simple as ABC. In Transformed RAVAL NOTATION, each premise and conclusion is written in abbreviated form, and the conclusion is reached simply by connecting the abbreviated premises. NOTATION: statements (both premises and conclusions) are represented as follows: (a) All S are P: SS-P; (b) Some S are P: S-P; (c) Some S are not P: S / PP; (d) No S is P: SS / PP (where - means “are” and / means “are not”). “All” is represented by double letters; “Some” is represented by a single letter. “No S is P” implies “No P is S”, so its notation contains double letters on both sides. RULES: (1) Conclusions are reached by connecting notations. Two notations can be linked only through a common linking term. When the common linking term multiplies (becomes double from single), divides (becomes single from double), or remains double, a conclusion is arrived at between the terminal terms. (Aristotle’s rule: the middle term must be distributed at least once.) (2) If both linked statements have - signs, the resulting conclusion carries a - sign. (Aristotle’s rule: two affirmatives imply an affirmative.) (3) Whenever statements having - and / signs are linked, the resulting conclusion carries a / sign. (Aristotle’s rule: if one premise is negative, then the conclusion must be negative.) (4) A statement having a / sign cannot be linked with another statement having a / sign to derive any conclusion. (Aristotle’s rule: two negative premises imply no valid conclusion.) Syllogism conclusions by Transformed Raval’s Notation are in accordance with Aristotle’s rules. The method is visually very transparent, conclusions can be deduced at a glance, it solves syllogism problems with any number of statements, and it is the quickest of all available methods.
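The four linking rules above are mechanical enough to sketch in code. The following is a minimal illustrative sketch, not the author's own implementation: the tuple encoding, the function names, and the choice to represent a distributed term as a doubled letter are my assumptions, and the sketch implements only the four stated rules (it does not police fallacies the abstract leaves undiscussed, such as illicit process of a terminal term).

```python
# Illustrative sketch of the Raval-style linking rules (my own encoding,
# not the author's software): a statement is a tuple (left, sign, right),
# and a doubled letter marks a distributed term, so "All S are P" becomes
# ("SS", "-", "P") and "No S is P" becomes ("SS", "/", "PP").

def encode(form, s, p):
    """Encode one of the four categorical forms in Raval-style notation."""
    if form == "A":          # All S are P
        return (s * 2, "-", p)
    if form == "I":          # Some S are P
        return (s, "-", p)
    if form == "O":          # Some S are not P
        return (s, "/", p * 2)
    if form == "E":          # No S is P: double letters on both sides
        return (s * 2, "/", p * 2)
    raise ValueError("unknown form: " + form)

def link(major, minor, middle):
    """Connect two encoded premises through the common linking term.

    Returns the encoded conclusion between the terminal terms, or None
    when the stated rules yield no conclusion."""
    # Rule (4): two statements with / signs cannot be linked.
    if major[1] == "/" and minor[1] == "/":
        return None
    # Rule (1): the linking term must appear doubled (distributed) in at
    # least one premise.
    def middle_terms(stmt):
        return [t for t in (stmt[0], stmt[2]) if set(t) == {middle}]
    if not any(len(t) == 2 for t in middle_terms(major) + middle_terms(minor)):
        return None
    # Rules (2) and (3): the conclusion is negative iff a premise is.
    sign = "/" if "/" in (major[1], minor[1]) else "-"
    # Each terminal term keeps the multiplicity it had in its own premise.
    def terminal(stmt):
        return next(t for t in (stmt[0], stmt[2]) if set(t) != {middle})
    return (terminal(minor), sign, terminal(major))

# Barbara: All M are P, All S are M  =>  All S are P, i.e. SS-P
print(link(encode("A", "M", "P"), encode("A", "S", "M"), "M"))
# Celarent: No M is P, All S are M  =>  No S is P, i.e. SS/PP
print(link(encode("E", "M", "P"), encode("A", "S", "M"), "M"))
# Two negative premises: no conclusion
print(link(encode("E", "M", "P"), encode("O", "S", "M"), "M"))
```

On this encoding the valid first-figure moods fall out of the linking rules directly, and an undistributed middle (the linking term single in both premises) is blocked by rule (1), matching the Aristotelian rules the abstract pairs with each notation rule.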
With the new Raval method, solving categorical syllogisms is as simple as pronouncing ABC, and the method is a direct continuation of Aristotle’s work on the categorical syllogism. It is believed that Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, it is claimed that Aristotle's system could not deduce “No quadrangle that is a square is a rectangle that is a rhombus” from “No square that is a quadrangle is a rhombus that is a rectangle” or from “No rhombus that is a rectangle is a square that is a quadrangle”. The above conclusion is reached at a glance with Raval's Notations (Symbolic Aristotle’s syllogism rules). Premise: “No (square that is a quadrangle) is a (rhombus that is a rectangle)”. Raval's representation: S – Q, S – Q / Rh – Re, Rh – Re. Premise: “No (rhombus that is a rectangle) is a (square that is a quadrangle)”. Raval's representation: Rh – Re, Rh – Re / S – Q, S – Q. Conclusion: “No (quadrangle that is a square) is a (rectangle that is a rhombus)”. Raval’s representation: Q – S, Q – S / Re – Rh, Re – Rh. Since “Q – S” follows from “S – Q” and “Re – Rh” from “Rh – Re”, the given conclusion follows from the given premises. The author disregards the existential fallacy, as a subset of a null set must be a null set.
In a quantum universe with a strong arrow of time, it is standard to postulate that the initial wave function started in a particular macrostate—the special low-entropy macrostate selected by the Past Hypothesis. Moreover, there is an additional postulate about statistical mechanical probabilities according to which the initial wave function is a “typical” choice in the macrostate. Together, they support a probabilistic version of the Second Law of Thermodynamics: typical initial wave functions will increase in entropy. Hence, there are two sources of randomness in such a universe: the quantum-mechanical probabilities of the Born rule and the statistical mechanical probabilities of the Statistical Postulate. I propose a new way to understand time's arrow in a quantum universe. It is based on what I call the Thermodynamic Theories of Quantum Mechanics. According to this perspective, there is a natural choice for the initial quantum state of the universe, which is given not by a wave function but by a density matrix. The density matrix plays a microscopic role: it appears in the fundamental dynamical equations of those theories. The density matrix also plays a macroscopic / thermodynamic role: it is exactly the projection operator onto the Past Hypothesis subspace. Thus, given an initial subspace, we obtain a unique choice of the initial density matrix. I call this property “the conditional uniqueness” of the initial quantum state. The conditional uniqueness provides a new and general strategy to eliminate statistical mechanical probabilities in the fundamental physical theories, by which we can reduce the two sources of randomness to only the quantum mechanical one. I also explore the idea of an absolutely unique initial quantum state, in a way that might realize Penrose's idea of a strongly deterministic universe.
The purpose of this paper is to outline an alternative approach to introductory logic courses. Traditional logic courses usually focus on the method of natural deduction or introduce predicate calculus as a system. These approaches complicate the process of learning different techniques for dealing with categorical and hypothetical syllogisms, such as alternate notations or alternate forms of analyzing syllogisms. The author's approach takes up observations made by Dijkstra and assimilates them into a reasoning process based on modified notations. The author's model adopts a notation that addresses the essentials of a problem while remaining easily manipulated to serve other analytic frameworks. The author also discusses the pedagogical benefits of incorporating the model into introductory logic classes for topics ranging from syllogisms to predicate calculus. Since this method emphasizes the development of a clear and manipulable notation, students can worry less about issues of translation, can spend more energy solving problems in the terms in which they are expressed, and are better able to think in abstract terms.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
There are three main traditional accounts of vagueness: one takes it as a genuinely metaphysical phenomenon, one takes it as a phenomenon of ignorance, and one takes it as a linguistic or conceptual phenomenon. In this paper I first very briefly present these views, especially the epistemicist and supervaluationist strategies, and briefly point to some well-known problems that the views carry. I then examine a 'statistical epistemicist' account of vagueness that is designed to avoid precisely these problems – it will be a view that provides an account of the phenomenon of vagueness as coming from our linguistic practices, while insisting that meaning supervenes on use, and that our use of vague terms does yield sharp and precise meanings, of which we are ignorant, thus allowing bivalence to hold.
In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant.

A family of influential cases suggests standards of proof should not be interpreted numerically. These ‘proof paradoxes’ illustrate that purely statistical evidence can warrant high credence in a disputed fact without satisfying the relevant legal standard. In this essay I evaluate three influential attempts to explain why merely statistical evidence cannot satisfy legal standards.
Moti Mizrahi (2013) presented a putative counterexample to Hypothetical Syllogism (HS) for indicative conditionals aiming to succeed where previous attempts to refute HS have failed. Lee Walters (2014a) objected that Mizrahi’s putative counterexample results from an inadequate analysis of conditionals with embedded modals, but advanced new putative counterexamples to HS for subjunctive conditionals that are supposed to bypass this issue (Walters, 2014a; 2014b). It is argued that Walters’s analysis of embedded modals is unnecessary to prevent Mizrahi’s putative counterexample, since the example can be disarmed either as a fallacy of equivocation or as a contextual fallacy. Walters’s putative counterexamples to HS are also criticized as contextual fallacies. The conclusion is that neither Mizrahi nor Walters provided compelling reasons to reject HS.
Why does classical equilibrium statistical mechanics work? Malament and Zabell (1980) noticed that, for ergodic dynamical systems, the unique absolutely continuous invariant probability measure is the microcanonical. Earman and Rédei (1996) replied that systems of interest are very probably not ergodic, so that absolutely continuous invariant probability measures very distant from the microcanonical exist. In response I define the generalized properties of epsilon-ergodicity and epsilon-continuity, I review computational evidence indicating that systems of interest are epsilon-ergodic, I adapt Malament and Zabell’s defense of absolute continuity to support epsilon-continuity, and I prove that, for epsilon-ergodic systems, every epsilon-continuous invariant probability measure is very close to the microcanonical.
The question as to what makes a perfect Aristotelian syllogism a perfect one has long been discussed by Aristotelian scholars. G. Patzig was the first to point the way to a correct answer: it is the evidence of the logical necessity that is the special feature of perfect syllogisms. Patzig moreover claimed that the evidence of a perfect syllogism can be seen, for Barbara, in the transitivity of the a-relation. However, this explanation would give Barbara a different status from the other three first-figure syllogisms. I argue that, taking into account the role of the being-contained-as-in-a-whole formulation, transitivity can be seen to be present in all four first-figure syllogisms. Using this wording will put the negation sign with the predicate, similar to the notation in modern predicate calculus.
The paper takes a closer look at the role of knowledge and evidence in legal theory. In particular, the paper examines a puzzle arising from the evidential standard Preponderance of the Evidence and its application in civil procedure. Legal scholars have argued since at least the 1940s that the rule of the Preponderance of the Evidence gives rise to a puzzle concerning the role of statistical evidence in judicial proceedings, sometimes referred to as the Problem of Bare Statistical Evidence. While this puzzle has led to the development of a multitude of accounts and approaches in the legal literature, I argue here that the problem can be resolved fairly straightforwardly within a knowledge-first framework.
In an illuminating article, Claus Beisbart argues that the recently-popular thesis that the probabilities of statistical mechanics (SM) are Best System chances runs into a serious obstacle: there is no one axiomatization of SM that is robustly best, as judged by the theoretical virtues of simplicity, strength, and fit. Beisbart takes this 'no clear winner' result to imply that the probabilities yielded by the competing axiomatizations simply fail to count as Best System chances. In this reply, we express sympathy for the 'no clear winner' thesis. However, we argue that an importantly different moral should be drawn from this. We contend that the implication for Humean chances is not that there are no SM chances, but rather that SM chances fail to be sharp.
At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At its weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in a situation where the VC theorem can be applied for the purpose we want it to serve.
This paper offers a new angle on the common idea that the process of science does not support epistemic diversity. Under minimal assumptions on the nature of journal editing, we prove that editorial procedures, even when impartial in themselves, disadvantage less prominent research programs. This purely statistical bias in article selection further skews existing differences in the success rate and hence attractiveness of research programs, and exacerbates the reputation difference between the programs. After a discussion of the modeling assumptions, the paper ends with a number of recommendations that may help promote scientific diversity through editorial decision making.
Statistical evidence—say, that 95% of your co-workers badmouth each other—can never render resenting your colleague appropriate, in the way that other evidence (say, the testimony of a reliable friend) can. The problem of statistical resentment is to explain why. We put the problem of statistical resentment in several wider contexts: the context of the problem of statistical evidence in legal theory; the epistemological context—with problems like the lottery paradox for knowledge, epistemic impurism and doxastic wrongdoing; and the context of a wider set of examples of responses and attitudes that seem not to be appropriately groundable in statistical evidence. Regrettably, we do not come up with a fully general, fully adequate, fully unified account of all the phenomena discussed. But we give reasons to believe that no such account is forthcoming, and we sketch a somewhat messier account that may be the best that can be had here.
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
Standard statistical measures of strength of association, although pioneered by Pearson deliberately to be acausal, nowadays are routinely used to measure causal efficacy. But their acausal origins have left them ill suited to this latter purpose. I distinguish between two different conceptions of causal efficacy, and argue that: (1) both conceptions can be useful; (2) the statistical measures only attempt to capture the first of them; (3) they are not fully successful even at this; and (4) an alternative definition more squarely based on causal thinking not only captures the second conception, it can also capture the first one better.
Over almost a half-century, evidence law scholars and philosophers have contended with what have come to be called the "Proof Paradoxes." In brief, the following sort of paradox arises: Factfinders in criminal and civil trials are charged with reaching a verdict if the evidence presented meets a particular standard of proof—beyond a reasonable doubt, in criminal cases, and preponderance of the evidence, in civil trials. It seems that purely statistical evidence can suffice for just such a level of certainty in a variety of cases where our intuition is that it would nonetheless be wrong to convict the defendant, or find in favor of the plaintiff, on merely statistical evidence. So, we either have to convict with statistical evidence, in spite of an intuition that this is unsettling, or else explain what (dispositive) deficiency statistical evidence has.

Most scholars have tried to justify the resistance to relying on merely statistical evidence: by pointing to epistemic deficiencies in this kind of evidence; by relying on court practice; and by reference to the psychological literature. In fact, I argue, the epistemic deficiencies philosophers and legal scholars allege are suspect. And, I argue, while scholars often discuss unfairness to civil defendants, they ignore a long history of relying on statistical evidence in a variety of civil matters, including employment discrimination, toxic torts, and market share liability cases. Were the dominant arguments in the literature to prevail, it would be extremely difficult for plaintiffs to recover in a variety of cases.

The various considerations I advance lead to the conclusion that when it comes to naked statistical evidence, philosophers and legal scholars who argue for its insufficiency have been caught with their pants down.
There are two motivations commonly ascribed to historical actors for taking up statistics: to reduce complicated data to a mean value (e.g., Quetelet), and to take account of diversity (e.g., Galton). Different motivations will, it is assumed, lead to different methodological decisions in the practice of the statistical sciences. Karl Pearson and W. F. R. Weldon are generally seen as following directly in Galton’s footsteps. I argue for two related theses in light of this standard interpretation, based on a reading of several sources in which Weldon, independently of Pearson, reflects on his own motivations. First, while Pearson does approach statistics from this "Galtonian" perspective, he is, consistent with his positivist philosophy of science, utilizing statistics to simplify the highly variable data of biology. Weldon, on the other hand, is brought to statistics by a rich empiricism and a desire to preserve the diversity of biological data. Secondly, we have here a counterexample to the claim that divergence in motivation will lead to a corresponding separation in methodology. Pearson and Weldon, despite embracing biometry for different reasons, settled on precisely the same set of statistical tools for the investigation of evolution.
In Nicomachean Ethics VII Aristotle describes akrasia as a disposition. Taking into account that it is a disposition, I argue that akrasia cannot be understood on an epistemological basis alone, i.e., it is not merely a problem of knowledge that the akratic person acts the way he does; rather, one is akratic due to a certain kind of habituation, where the person is not able to activate the potential knowledge s/he possesses. To stress this point, I focus on the gap between potential knowledge and its activation, whereby I argue that the distinction between potential and actual knowledge is at the center of the problem of akrasia. I suggest that to elaborate on this gap, we must go beyond the limits of Nicomachean Ethics to Metaphysics IX, where we find Aristotle’s discussion of the distinction between potentiality and actuality. I further analyze the gap between potential and actual knowledge by means of Aristotle’s discussion of practical syllogism, where I argue that akrasia is a result of a conflict in practical reasoning. I conclude my paper by stressing that for the akratic person the action is determined with respect to the conclusion of the practical syllogism, where the conclusion is produced by means of a ‘conflict’ between the universal opinion, which is potential, and the particular opinion, which is appetitive.
The article focuses on the prosecutor's fallacy and the interrogator's fallacy, two kinds of flawed reasoning in inferring a suspect's guilt. The prosecutor's fallacy is the conflation of two conditional probabilities, an error encouraged by the prosecutor's interest in establishing strong evidence that will indict the defendant. The article provides a comprehensive discussion of Gerd Gigerenzer's treatment of a criminal case in Germany, explaining the perils of the prosecutor's fallacy in his application of probability to practical problems. It also discusses the interrogator's fallacy, introduced by Robert A. J. Matthews: the erroneous assumption that confessional evidence can never reduce the probability of guilt.
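To make the conflation concrete, here is a sketch in the spirit of Gigerenzer's frequency examples; the match probability and suspect-pool size below are illustrative assumptions of my own, not figures from the case the article discusses. The fallacy is to read P(match | innocent) as if it were P(innocent | match); Bayes' theorem keeps the two apart.

```python
# Hypothetical numbers: a forensic match with random-match probability 1 in
# 10,000, and a suspect drawn from a pool of 100,000 plausible sources.
def posterior_innocent(p_match_if_innocent, p_match_if_guilty, prior_guilty):
    """P(innocent | match) via Bayes' theorem."""
    prior_innocent = 1 - prior_guilty
    p_match = (p_match_if_guilty * prior_guilty
               + p_match_if_innocent * prior_innocent)
    return (p_match_if_innocent * prior_innocent) / p_match

p = posterior_innocent(1e-4, 1.0, 1e-5)
print(round(p, 3))  # 0.909: despite the "1 in 10,000" match, innocence remains probable
```

The prosecutor's-fallacy reading would report "a 1 in 10,000 chance the suspect is innocent"; with the assumed prior, the correct posterior probability of innocence is about 91%.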
The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference, with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. We suggest that rather than mere statistical reform, what is needed is a better understanding of the different modes of statistical inference and a better understanding of how statistical inference relates to scientific inference.
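The contrast between the two modes of inference can be made vivid with a worked example of my own (not from the paper): 60 heads in 100 coin flips, assessed first by a two-sided p-value against the null p = 0.5, then by a Bayes factor against a uniform alternative.

```python
# Frequentist vs. Bayesian readings of the same data: 60 heads in 100 flips.
from math import comb

n, k = 100, 60

def binom_pmf(n, k, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Frequentist: two-sided exact binomial p-value against H0: p = 0.5.
p_value = sum(binom_pmf(n, i) for i in range(n + 1)
              if abs(i - n / 2) >= abs(k - n / 2))

# Bayesian: Bayes factor for H0: p = 0.5 vs. H1: p ~ Uniform(0, 1).
# Under H1 the marginal likelihood of any count k is 1 / (n + 1).
bf_01 = binom_pmf(n, k) / (1 / (n + 1))

print(f"p-value = {p_value:.3f}")  # 0.057: borderline by conventional standards
print(f"BF_01   = {bf_01:.2f}")    # 1.10: essentially no evidence either way
```

The same data that look borderline "significant" to an error-control analysis register as almost perfectly neutral on the evidential (Bayes-factor) reading, which is one way the two approaches come apart.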
This is a trend study of school size, location and enrolment figures of junior secondary schools in Akwa Ibom State, Nigeria, covering 2008–2016, with implications for sustainable development. The study followed the ex-post facto research design. The study was a census; hence the entire population of 227 public secondary schools was used. Secondary quantitative data were obtained using the “School Size, Location and Enrolment Figures Checklist (SSLEFC)” and analysed using descriptive statistics, while line graphs and bar charts were used to illustrate the statistical trends. The hypotheses were tested using the independent t-test. Findings showed that higher rates of enrolment were recorded in large and urban schools than in small and rural schools respectively. The mean differences in the enrolment trend between urban and rural schools were statistically significant. It was concluded that there was an upward trend in enrolment in all the schools from 2008–2013 and a downward trend from 2015–2016. Based on this conclusion, implications were discussed, and it was recommended, among others, that infrastructural provisions and an adequate supply of qualified personnel be allocated evenly to urban and rural schools, to discourage rural–urban migration, promote active rural participation in education, and foster sustainability in schools.
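The independent t-test the study relies on can be sketched as follows; the enrolment figures below are hypothetical placeholders for illustration, not the study's data.

```python
# Pooled-variance independent-samples t statistic, computed from scratch.
from statistics import mean, variance

def t_statistic(a, b):
    """t statistic for two independent samples with pooled variance."""
    na, nb = len(a), len(b)
    # statistics.variance is the sample variance (n - 1 denominator).
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# Hypothetical mean yearly enrolments for five urban and five rural schools:
urban = [820, 910, 760, 880, 840]
rural = [430, 510, 390, 470, 450]
print(round(t_statistic(urban, rural), 2))  # 12.02: a large separation on these made-up figures
```

A value this far from zero (compared against the t distribution with n_a + n_b − 2 degrees of freedom) is what "statistically significant mean differences" amounts to in the study's analysis.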
Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of false positives be equal across the relevant groups. We might seek to ensure that algorithms satisfy all of these purported fairness criteria. But a series of impossibility results shows that this is impossible, unless base rates are equal across the relevant groups. What are we to make of these pessimistic results? I argue that none of the purported criteria, except for a calibration criterion, are necessary conditions for fairness, on the grounds that they can all be simultaneously violated by a manifestly fair and uniquely optimal predictive algorithm, even when base rates are equal. I conclude with some general reflections on algorithmic fairness. (shrink)
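A toy illustration of the tension the impossibility results trade on, with numbers of my own (not the paper's): when two groups have different base rates, a predictor that is calibrated in both groups will generally produce different false-positive rates. Each group is represented as score buckets of (score, count), where calibration means the fraction of actual positives in a bucket equals its score.

```python
# Calibration vs. equal false-positive rates under unequal base rates.
def base_rate(buckets):
    total = sum(n for _, n in buckets)
    return sum(s * n for s, n in buckets) / total

def fpr(buckets, threshold=0.5):
    """False-positive rate when predicting positive at score >= threshold.
    By calibration, a bucket of n people at score s contains (1 - s) * n negatives."""
    fp = sum((1 - s) * n for s, n in buckets if s >= threshold)
    negatives = sum((1 - s) * n for s, n in buckets)
    return fp / negatives

group_a = [(0.8, 50), (0.2, 50)]  # base rate 0.50
group_b = [(0.8, 10), (0.2, 90)]  # base rate 0.26

# Both groups are calibrated by construction, yet their FPRs differ:
print(fpr(group_a))           # 0.2
print(round(fpr(group_b), 3)) # 0.027
```

Group A's negatives are far more likely to receive a high score than Group B's, so equalizing false-positive rates would require abandoning calibration (or equalizing base rates), which is the shape of the impossibility results the abstract mentions.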
A question, long discussed by legal scholars, has recently provoked a considerable amount of philosophical attention: ‘Is it ever appropriate to base a legal verdict on statistical evidence alone?’ Many philosophers who have considered this question reject legal reliance on bare statistics, even when the odds of error are extremely low. This paper develops a puzzle for the dominant theories concerning why we should eschew bare statistics. Namely, there seem to be compelling scenarios in which there are multiple sources of incriminating statistical evidence. As we conjoin together different types of statistical evidence, it becomes increasingly incredible to suppose that a positive verdict would be impermissible. I suggest that none of the dominant views in the literature can easily accommodate such cases, and close by offering a diagnosis of my own.
Suppose that the word of an eyewitness makes it 80% probable that A committed a crime, and that B is drawn from a population in which the incidence rate of that crime is 80%. Many philosophers and legal theorists have held that if this is our only evidence against those parties, then (i) we may be justified in finding against A but not against B; but (ii) doing so incurs a loss in the accuracy of our findings. This paper argues against (ii). It argues that accuracy considerations can motivate taking different attitudes towards individualized and statistical evidence even across cases where they generate the same probability that the defendant is guilty.
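The accuracy claim in (ii), as standardly motivated, can be put in terms of expected verdict errors. The following sketch is my own illustration of that standard computation using the abstract's 0.8 figure, not the paper's argument against it; it assumes a finding counts as correct just in case it matches the fact of the matter.

```python
# Expected errors behind claim (ii): with P(guilt) = 0.8 in both cases,
# finding against a party errs with probability 0.2, while declining to
# find against errs with probability 0.8 (a liable party escapes the finding).
p_guilt = 0.8
err_if_find_against = 1 - p_guilt  # 0.2
err_if_no_finding = p_guilt        # 0.8

# The differential policy (find against A, not against B):
print(round(err_if_find_against + err_if_no_finding, 2))  # 1.0 expected errors per pair
# Finding against both:
print(round(2 * err_if_find_against, 2))                  # 0.4 expected errors per pair
```

On this naive accounting the differential policy looks accuracy-costly, which is exactly the claim the paper disputes by rethinking what accuracy considerations can motivate.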
The genesis of the early quantum theory, represented by Planck’s 1897–1906 papers, is considered. It is shown that the first quantum-theoretical schemes were constructed as hybrids, composed of ideal models and laws of Maxwellian electrodynamics, Newtonian mechanics, statistical mechanics and thermodynamics. Ludwig Boltzmann’s ideas and technique proved crucial. In deriving the black-body radiation law, Max Planck had to take the experimental evidence into account. It forced him not to deduce from phenomena but to use more theory instead. The experiments forced Planck to apply the statistical technique to radiation in increasing portions. Planck’s theories were in no way generalizations of existing experimental results. They represented the stages of an ambitious programme of reconciling Maxwellian electrodynamics and statistical mechanics.
According to the Rational Threshold View, a rational agent believes p if and only if her credence in p is equal to or greater than a certain threshold. One of the most serious challenges for this view is the problem of statistical evidence: statistical evidence is often not sufficient to make an outright belief rational, no matter how probable the target proposition is given such evidence. This indicates that rational belief is not as sensitive to statistical evidence as rational credence. The aim of this paper is twofold. First, we argue that, in addition to playing a decisive role in rationalizing outright belief, non-statistical evidence also plays a preponderant role in rationalizing credence. More precisely, when both types of evidence are present in a context, non-statistical evidence should receive a heavier weight than statistical evidence in determining rational credence. Second, based on this result, we argue that a modified version of the Rational Threshold View can avoid the problem of statistical evidence. We conclude by suggesting a possible explanation of the varying sensitivity to different types of evidence for belief and credence based on the respective aims of these attitudes.
Agent-causal libertarians maintain we are irreducible agents who, by acting, settle matters that aren’t already settled. This implies that the neural matters underlying the exercise of our agency don’t conform to deterministic laws, but it does not appear to exclude the possibility that they conform to statistical laws. However, Pereboom (Noûs 29:21–45, 1995; Living Without Free Will, Cambridge University Press, Cambridge, 2001; in: Nadelhoffer (ed.), The Future of Punishment, Oxford University Press, New York, 2013) has argued that, if these neural matters conform to either statistical or deterministic physical laws, the complete conformity of an irreducible agent’s settling of matters with what should be expected given the applicable laws would involve coincidences too wild to be credible. Here, I show that Pereboom’s argument depends on the assumption that, at times, the antecedent probability that certain behavior will occur applies on each of a number of occasions, and is incapable of changing as a result of what one does from one occasion to the next. There is, however, no evidence that this assumption is true. The upshot is that the wild-coincidence objection is an empirical objection lacking empirical support. Thus, it isn’t a compelling argument against agent-causal libertarianism.