What are people who disagree about logic disagreeing about? The paper argues that (in a wide range of cases) they are primarily disagreeing about how to regulate their degrees of belief. An analogy is drawn between beliefs about validity and beliefs about chance: both sorts of belief serve primarily to regulate degrees of belief about other matters, but in both cases the concepts have a kind of objectivity nonetheless.
Few of Kant’s distinctions have generated as much puzzlement and criticism as the one he draws in the Prolegomena between judgments of experience, which he describes as objectively and universally valid, and judgments of perception, which he says are merely subjectively valid. Yet the distinction between objective and subjective validity is central to Kant’s account of experience and plays a key role in his Transcendental Deduction of the categories. In this paper, I reject a standard interpretation of the distinction, according to which judgments of perception are merely subjectively valid because they are made without sufficient investigation. In its place, I argue that for Kant, judgments of perception are merely subjectively valid because they merely report sequences of perceptions had by a subject without judging that what is represented by the perceptions is connected in the objects the perceptions are of. Whereas the interpretation I criticize undercuts Kant’s strategy in the Deduction, I argue, my interpretation illuminates it.
Any theory of truth must find a way around Curry’s paradox, and there are well-known ways to do so. This paper concerns an apparently analogous paradox, about validity rather than truth, which JC Beall and Julien Murzi call the v-Curry. They argue that there are reasons to want a common solution to it and the standard Curry paradox, and that this rules out the solutions to the latter offered by most “naive truth theorists.” To this end they recommend a radical solution to both paradoxes, involving a substructural logic, in particular, one without structural contraction. In this paper I argue that substructuralism is unnecessary. Diagnosing the “v-Curry” is complicated because of a multiplicity of readings of the principles it relies on. But these principles are not analogous to the principles of naive truth, and taken together, there is no reading of them that should have much appeal to anyone who has absorbed the morals of both the ordinary Curry paradox and the second incompleteness theorem.
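For readers unfamiliar with the setup, the naive validity principles on which the v-Curry trades are standardly formulated as follows (this is the usual presentation from the literature on the paradox, not notation taken from the paper itself):

```latex
% Let Val(x, y) be an object-language validity predicate and
% \ulcorner\varphi\urcorner a name of the sentence \varphi.
%
% Validity Proof (VP): if \varphi \vdash \psi, then
\vdash \mathrm{Val}(\ulcorner\varphi\urcorner, \ulcorner\psi\urcorner)
%
% Validity Detachment (VD):
\varphi,\ \mathrm{Val}(\ulcorner\varphi\urcorner, \ulcorner\psi\urcorner) \vdash \psi
```

Given a self-referential sentence π equivalent to Val(⟨π⟩, ⟨⊥⟩), the principles VP and VD together with structural contraction yield ⊥, which is why the substructural response drops contraction.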
For semantic inferentialists, the basic semantic concept is validity. An inferentialist theory of meaning should offer an account of the meaning of "valid." If one tries to add a validity predicate to one's object language, however, one runs into problems like the v-Curry paradox. In previous work, I presented a validity predicate for a non-transitive logic that can adequately capture its own meta-inferences. Unfortunately, in that system, one cannot show of any inference that it is invalid. Here I extend the system so that it can capture invalidities.
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland has recently pointed out (2012, Validity as a primitive. Analysis 72: 421–30), no theory which interprets Q and is closed under the standard structural rules can define or even express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
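For reference, the T-Scheme figuring in the truth-theoretic results has the familiar form (standard notation, not the paper's own):

```latex
% T-Scheme: for each sentence \varphi of the language,
% with T a truth predicate and \ulcorner\varphi\urcorner a name of \varphi:
T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
```

The validity-theoretic analogue concerns a two-place predicate Val(⟨φ⟩, ⟨ψ⟩) intended to express that ψ is a logical consequence of φ.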
What accounts for how we know that certain rules of reasoning, such as reasoning by Modus Ponens, are valid? If our knowledge of validity must be based on some reasoning, then we seem to be committed to the legitimacy of rule-circular arguments for validity. This paper raises a new difficulty for the rule-circular account of our knowledge of validity. The source of the problem is that, contrary to traditional wisdom, a universal generalization cannot be inferred just on the basis of reasoning about an arbitrary object. I argue in favor of a more sophisticated constraint on reasoning by universal generalization, one which undermines a rule-circular account of our knowledge of validity.
Nontransitive responses to the validity Curry paradox face a dilemma that was recently formulated by Barrio, Rosenblatt and Tajer. It seems that, in the nontransitive logic ST enriched with a validity predicate, either you cannot prove that all derivable metarules preserve validity, or you can prove that instances of Cut that are not admissible in the logic preserve validity. I respond on behalf of the nontransitive approach. The paper argues, first, that we should reject the detachment principle for naive validity. Secondly, I show how to add a validity predicate to ST while avoiding the dilemma.
Definitions I presented in a previous article as part of a semantic approach in epistemology assumed that the concept of derivability from standard logic held across all mathematical and scientific disciplines. The present article argues that this assumption is not true for quantum mechanics (QM) by showing that concepts of validity applicable to proofs in mathematics and in classical mechanics are inapplicable to proofs in QM. Because semantic epistemology must include this important theory, revision is necessary. The one I propose also extends semantic epistemology beyond the ‘hard’ sciences. The article ends by presenting and then refuting some responses QM theorists might make to my arguments.
Beall and Murzi (2013) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules. As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (2017) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi, is a generalisation of Kripke’s notion of grounded truth, and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic, we take the notion of naïve validity to be a legitimate semantic notion that points to genuine expressive limitations of fully structural revisionary approaches.
It is one thing for a given proposition to follow or to not follow from a given set of propositions and it is quite another thing for it to be shown either that the given proposition follows or that it does not follow.* Using a formal deduction to show that a conclusion follows and using a countermodel to show that a conclusion does not follow are both traditional practices recognized by Aristotle and used down through the history of logic. These practices presuppose, respectively, a criterion of validity and a criterion of invalidity each of which has been extended and refined by modern logicians: deductions are studied in formal syntax (proof theory) and countermodels are studied in formal semantics (model theory). The purpose of this paper is to compare these two criteria to the corresponding criteria employed in Boole’s first logical work, The Mathematical Analysis of Logic (1847). In particular, this paper presents a detailed study of the relevant metalogical passages and an analysis of Boole’s symbolic derivations. It is well known, of course, that Boole’s logical analysis of compound terms (involving ‘not’, ‘and’, ‘or’, ‘except’, etc.) contributed to the enlargement of the class of propositions and arguments formally treatable in logic. The present study shows, in addition, that Boole made significant contributions to the study of deductive reasoning. He identified the role of logical axioms (as opposed to inference rules) in formal deductions, and he conceived of the idea of an axiomatic deductive system (which yields logical truths by itself and which yields consequences when applied to arbitrary premises). Nevertheless, surprisingly, Boole’s attempt to implement his idea of an axiomatic deductive system involved striking omissions: Boole does not use his own formal deductions to establish validity.
Boole does give symbolic derivations, several of which are vitiated by “Boole’s Solutions Fallacy”: the fallacy of supposing that a solution to an equation is necessarily a logical consequence of the equation. This fallacy seems to have led Boole to confuse equational calculi (i.e., methods for generating solutions) with deduction procedures (i.e., methods for generating consequences). The methodological confusion is closely related to the fact, shown in detail below, that Boole had adopted an unsound criterion of validity. It is also shown that Boole totally ignored the countermodel criterion of invalidity. Careful examination of the text does not reveal with certainty a test for invalidity which was adopted by Boole. However, we have isolated a test that he seems to use in this way and we show that this test is ineffectual in the sense that it does not serve to identify invalid arguments. We go beyond the simple goal stated above. Besides comparing Boole’s earliest criteria of validity and invalidity with those traditionally (and still generally) employed, this paper also investigates the framework and details of The Mathematical Analysis of Logic.
First, I characterize Simple Partial Logic (SPL) as the generalization and extension of a certain two-valued logic. Based on this characterization, I present two definitions of validity in SPL. Finally, I show that, given my characterization, these two definitions are more appropriate than other definitions that have been prevalent, since both have some desirable semantic properties that the others lack.
The following four theses all have some intuitive appeal: (I) There are valid norms. (II) A norm is valid only if justified by a valid norm. (III) Justification, on the class of norms, has an irreflexive proper ancestral. (IV) There is no infinite sequence of valid norms each of which is justified by its successor. However, at least one must be false, for (I)--(III) together entail the denial of (IV). There is thus a conflict between intuition and logical possibility. This paper, after distinguishing various conceptions of a norm, of validity and of justification, argues for the following position. (I) is true. (II) is false for legislative justification and true for epistemic justification. (III) is true for legislative and false for epistemic justification. (IV) is true for legislative justification; for epistemic justification (IV) is true or false depending on the conception taken of a norm. Our intuition in favour of (II) must therefore be abandoned where justification is conceived legislatively. Our intuition in favour of (III) must be abandoned, and our intuition in favour of (IV) qualified, where justification is conceived epistemically.
In the literature following Kelsen’s influential theory of law, the concept of validity has been used to refer to different properties of law (such as existence, membership, bindingness, and more), and so it is inherently ambiguous. More importantly, Kelsen’s equivalence between the existence and the validity of law prevents us from accounting satisfactorily for relevant aspects of our current legal practices, such as the phenomenon of “unlawful law.” This chapter addresses this ambiguity to argue that the most important function of the concept of validity is constituting the complex ontological paradigm of modern law as an institutional-normative practice. In this sense, validity is an artificial ontological status that supervenes on that of the existence of legal norms, thus allowing law to regulate its own creation and creating the logical space for the occurrence of “unlawful law.” This function, I argue in the last part, is crucial to understanding the relationship between the ontological and epistemic dimensions of the objectivity of law. Given the necessary practice-independence of legal norms, it is the epistemic accessibility of their creation that enables the law to fulfill its general action-guiding (and thus coordinating) function.
Background: Despite being often taken as the benchmark of quality for diagnostic and classificatory tools, 'validity' is acknowledged to be a poorly worked-out notion in psychiatric nosology. Objective: Here we aim to present a view that we believe does better justice to the significance of the notion of validity, and to explain away some misconceptions and inappropriate expectations regarding this attribute in the aforementioned context. Method: The notion of validity is addressed taking into account its role, the framework according to which it should be assessed and the specific contents to which it refers within psychiatric nosology. Results and Conclusions: The notion of validity has an epistemological thrust and its foremost role is distinguishing correct reasoning and truth from what is irrational or false. From this it follows not only that 'validity' always refers to elements of knowledge and rationality such as arguments, inferences and propositions, but also that the appropriate frameworks to assess 'validity' are logics and scientific methodology. When the validity of a psychiatric diagnostic category is at stake, the contents to which it refers are those relevantly related to the notion of 'diagnostic concept'. The consequences of our reading of the notion of 'validity' are discussed vis-à-vis the challenges faced by psychiatric nosology in order to have its diagnostic categories validated.
Dear Editor, in a previous paper we have tried to delve into what validity means in the context of psychiatric nosology, arguing for a pragmatic view of it. Here we want to briefly reassert the basic points of our analysis, make a few clarifications and address some issues raised by commentators.
This paper looks at the question of what it means for a psychological test to have construct validity. I approach this topic by way of an analysis of recent debates about the measurement of implicit social cognition. After showing that there is little theoretical agreement about implicit social cognition, and that the predictive validity of implicit tests appears to be low, I turn to a debate about their construct validity. I show that there are two questions at stake: First, what level of detail and precision does a construct have to possess such that a test can in principle be valid relative to it? And second, what kind of evidence needs to be in place such that a test can be regarded as validated relative to a given construct? I argue that construct validity is not an all-or-nothing affair. It can come in degrees, because both our constructs and our knowledge of the explanatory relation between constructs and data can vary in accuracy and level of detail, and a test can fail to measure all of the features associated with a construct. I conclude by arguing in favor of greater philosophical attention to processes of construct development.
Neuroscientific claims have a significant impact on traditional philosophy. This essay, focusing on the field of moral neuroscience, discusses how and why philosophy can contribute to neuroscientific progress. First, viewing the interactions between moral neuroscience and moral philosophy, it becomes clear that moral philosophy can and does contribute to moral neuroscience in two ways: as explanandum and as explanans. Next, it is shown that moral philosophy is well suited to contribute to moral neuroscience in both of these two ways in the context of the problem of ecological validity. Philosophy can play the role of an agent for ecological validity, since traditional philosophy shapes and reflects part of our social reality. Finally, based on these arguments, I tentatively sketch how a Kantian account of moral incentive can play this role.
We evaluated the reliability, validity, and differential item functioning (DIF) of a shorter version of the Defining Issues Test-1 (DIT-1), the behavioral DIT (bDIT), measuring the development of moral reasoning. 353 college students (81 males, 271 females, 1 not reported; age M = 18.64 years, SD = 1.20 years) who were taking introductory psychology classes at a public university in a suburban area of the Southern United States participated in the present study. First, we examined the reliability of the bDIT using Cronbach’s α and its concurrent validity with the original DIT-1 using disattenuated correlation. Second, we compared the test duration between the two measures. Third, we tested the DIF of each question between males and females. Findings showed that, first, the bDIT demonstrated acceptable reliability and good concurrent validity. Second, the test duration could be significantly shortened by employing the bDIT. Third, DIF results indicated that the bDIT items did not favour any gender. Practical implications of the present study based on the reported findings are discussed.
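Cronbach’s α, the reliability statistic used above, has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch (the function name and toy data are illustrative, not taken from the study):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (one score per item).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores),
    using sample variances throughout.
    """
    k = len(items[0])                    # number of items
    columns = list(zip(*items))          # per-item score columns
    item_var_sum = sum(variance(col) for col in columns)
    total_var = variance(sum(row) for row in items)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Two perfectly parallel items -> maximal internal consistency
scores = [[1, 1], [2, 2], [3, 3]]
print(cronbach_alpha(scores))  # 1.0
```

The disattenuated correlation mentioned alongside it is likewise simple: the observed correlation divided by the square root of the product of the two measures’ reliabilities.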
The existence of singularities signals that one of the highest priorities of a centennial perspective on general relativity should be a careful re-thinking of the validity domain of Einstein’s field equations. We address the problem of constructing distinguishable extensions of the smooth spacetime manifold model, which can incorporate singularities, while retaining the form of the field equations. The sheaf-theoretic formulation of this problem is tantamount to extending the algebra sheaf of smooth functions to a distribution-like algebra sheaf in which the former may be embedded, satisfying the pertinent cohomological conditions required for the coordinatization of all of the tensorial physical quantities, such that the form of the field equations is preserved. We present in detail the construction of these distribution-like algebra sheaves in terms of residue classes of sequences of smooth functions modulo the information of singular loci encoded in suitable ideals. Finally, we consider the application of these distribution-like solution sheaves in geometrodynamics by modeling topologically-circular boundaries of singular loci in three-dimensional space in terms of topological links. It turns out that the Borromean link represents higher-order wormhole solutions.
In this paper, I claim that two ways of defining validity for modal languages (“real-world” and “general” validity), rather than corresponding to a distinction between a correct and an incorrect way of defining modal validity, correspond instead to two substantive ways of conceiving modal truth. At the same time, I claim that the major logical manifestation of the real-world/general validity distinction in modal propositional languages with the actuality operator should not be taken seriously, but simply as a by-product of the way in which the semantics of such an operator is usually given.
Rodrigues and Banzato related the validity of diagnostic categories to their meaningfulness and I wish to explore this relation further without attempting to make criticisms. To commence, if a diagnostic category is to be valid, it must mean something.
This article argues for the formal validity of and the truth of the premises and conclusion of a version of Aquinas' "Third Way" that says: If each of the parts of nature is contingent, the whole of nature is contingent. Each of the parts of nature is contingent. Therefore, the whole of nature is contingent--where "contingent" means having a cause and not existing self-sufficiently.
The notion of validity for modal languages could be defined in two slightly different ways. The first is the original definition given by S. Kripke, for which a formula φ of a modal language L is valid if and only if it is true in every actual world of every interpretation of L. The second is the definition that has become standard in most textbook presentations of modal logic, for which a formula φ of L is valid if and only if it is true in every world in every interpretation of L. For simple modal languages, “Kripkean validity” and “Textbook validity” are extensionally equivalent. According to E. Zalta, however, Textbook validity is an “incorrect” definition of validity, because: (i) it is not in full compliance with Tarski’s notion of truth; (ii) in expressively richer languages, enriched by the actuality operator, some obviously true formulas count as valid only if the Kripkean notion is used. The purpose of this paper is to show that (i) and (ii) are not good reasons to favor Kripkean validity over Textbook validity. On the one hand, I will claim that the difference between the two should rather be seen as the result of two different conceptions of how a modal logic should be built from a non-modal basis; on the other, I will show the advantages, for the question at issue, of seeing the actuality operator as belonging to the family of two-dimensional operators.
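The two definitions contrasted above can be set side by side (a sketch in standard Kripke-model notation; the symbols are generic, not drawn from the paper):

```latex
% Kripkean ("real-world") validity: truth at the designated actual world
% of every interpretation \mathcal{M} = \langle W, w_{@}, R, V \rangle:
\models_{\mathrm{RW}} \varphi
  \iff \text{for every } \mathcal{M}:\ \mathcal{M}, w_{@} \models \varphi

% Textbook ("general") validity: truth at every world of every interpretation:
\models_{\mathrm{G}} \varphi
  \iff \text{for every } \mathcal{M} \text{ and every } w \in W:\ \mathcal{M}, w \models \varphi
```

With an actuality operator A the two come apart: Aφ ↔ φ is real-world valid, since A and plain truth coincide at the designated world, but not generally valid, since at a non-actual world w the operator A still looks back to w_@.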
VALIDITY is the first learning game developed to help students hone their skills in constructing proofs in both the propositional and first-order predicate calculi. It takes an autotelic (self-motivating) approach to learning skills and strategies of proof. The text of VALIDITY consists of a general introduction that describes earlier studies of autotelic learning games, paying particular attention to work done at the Law School of Yale University, called the ALL Project (Accelerated Learning of Logic). Following the introduction, the game of VALIDITY is described, first with reference to the propositional calculus, and then in connection with the first-order predicate calculus with identity. Sections of the text are devoted to discussions of the various rules of derivation employed in both calculi. Three appendices follow the main text; these provide a catalogue of sequents and theorems that have been proved for the propositional calculus and for the predicate calculus, and include suggestions for the classroom use of VALIDITY in university-level courses in mathematical logic.
Let’s begin by imagining a hypothetical psychotic illness called “Schneider’s Disease” (SD), recognized for over 100 years. Let’s assume there has been great controversy as regards the “most valid” set of diagnostic criteria for SD.
This study sought to replicate and extend Hall and colleagues’ (2014) work on developing and validating scales from the Psychopathic Personality Inventory (PPI) to index the triarchic psychopathy constructs of boldness, meanness, and disinhibition. This study also extended Hall et al.’s initial findings by including the PPI Revised (PPI–R). A community sample (n = 240) weighted toward subclinical psychopathy traits and a male prison sample (n = 160) were used for this study. Results indicated that PPI–Boldness, PPI–Meanness, and PPI–Disinhibition converged with other psychopathy, personality, and behavioral criteria in ways conceptually expected from the perspective of the triarchic psychopathy model, including showing very strong convergent and discriminant validity with their Triarchic Psychopathy Measure counterparts. These findings further enhance the utility of the PPI and PPI–R in measuring these constructs.
Four main forms of the Doomsday Argument (DA) exist: Gott’s DA, Carter’s DA, Grace’s DA and the Universal DA. All four forms use different probabilistic logic to predict that the end of human civilization will happen unexpectedly soon, based on our early location in human history. There are hundreds of publications about the validity of the Doomsday Argument. Most attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which applies the idea of logical uncertainty to the DA’s validity, estimated on the basis of a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for the “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that the DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts that there is a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know from the analysis of different technological risks.
This paper considers Rumfitt’s bilateral classical logic (BCL), which is proposed to counter Dummett’s challenge to classical logic. First, agreeing with several authors, we argue that Rumfitt’s notion of harmony, used to justify logical rules in a purely proof-theoretical manner, is not sufficient to justify the coordination rules in BCL purely proof-theoretically. For the central part of this paper, we propose a notion of proof-theoretical validity for BCL similar to Prawitz’s, and prove that BCL is sound and complete with respect to this notion of validity. The major difficulty in defining validity for BCL is that the validity of a positive +A appears to depend on that of the negative −A, and vice versa, so a straightforward inductive definition does not work because of this circular dependence. However, Knaster–Tarski’s fixed point theorem can resolve this circularity. Finally, we discuss the philosophical relevance of our work, in particular the impact of the use of the fixed point theorem and the issue of decidability.
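The fixed-point construction gestured at above rests on the Knaster–Tarski theorem, stated here in its standard textbook form (not the paper’s own formulation):

```latex
% Knaster-Tarski: if (L, \leq) is a complete lattice and f : L \to L is
% monotone (x \leq y \implies f(x) \leq f(y)), then the fixed points of f
% form a complete lattice; in particular f has least and greatest fixed points:
\mu f = \bigwedge \{\, x \in L \mid f(x) \leq x \,\}, \qquad
\nu f = \bigvee \{\, x \in L \mid x \leq f(x) \,\}
```

Defining the validity of +A and of −A simultaneously as a fixed point of a monotone operator on pairs of candidate validity assignments is one standard way to break exactly the kind of circular dependence the abstract describes.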
In this article, I examined the various ethical arguments raised to morally discount homosexuality. I found that so far no moral argument has provided adequate grounds to discount homoeroticism. However, I have developed the ‘Two-Way Test’ (TWT) by which the social acceptability of any sexual relation should be tested for moral validity. On this analysis, homosexuality was found to have failed the test; that is to say, homosexuality is not a morally valid act. Despite that, the immoral status of homosexuality does not constitute sufficient ground for its criminalization, because not all immoral acts are criminalized but only those that impede the rights or liberty of others. In conclusion, the paper submits that although homosexuality fails the TWT, this does not call for violence, criminalization or discrimination against persons of homosexual orientation; rather, homosexuals should be accommodated without deliberately discouraging heterosexual relationships. My advocacy for sexual tolerance is based on the grounds that ongoing biological research has so far shown the phenomenon to be a biological reality beyond the personal control of most homosexuals.
The logical-pragmatic perspective on psychiatric diagnosis presented by Rodrigues and Banzato contributes to and develops the existing conventional taxonomic framework. The latter is regarded as grounded in the epistemological prerequisites propounded by Carl Gustav Hempel in the late 1960s and adopted by the DSM task force of R. Spitzer in 1973.
Background: Moral Growth Mindset (MGM) is the belief that one can become a morally better person through effort. Prior research showed that MGM is positively associated with the promotion of moral motivation among adolescents and young adults. We developed and tested the English version of the MGM measure in this study with data collected from college student participants. Methods: In Study 1, we tested the reliability and validity of the MGM measure with two-wave data (N = 212, age M = 24.18 years, SD = 7.82 years). In Study 2, we retested the construct validity of the MGM measure and its association with other moral and positive psychological indicators to test its convergent and discriminant validity (N = 275, age M = 22.02 years, SD = 6.34 years). Results: Study 1 found the MGM measure to be reliable and valid. In Study 2, the results indicated that the MGM correlated well with other moral and positive psychological indicators, as expected. Conclusions: We developed and validated the English version of the MGM measure in the present study. The results from Studies 1 and 2 supported the reliability and validity of the MGM measure. Given this, the English version of the MGM measure can measure one’s MGM as intended.
True beliefs and truth-preserving inferences are, in some sense, good beliefs and good inferences. When an inference is valid, though, it is not merely truth-preserving, but truth-preserving in all cases. This motivates my question: I consider a Modus Ponens inference, and I ask what its validity in particular contributes to the explanation of why the inference is, in any sense, a good inference. I consider the question under three different definitions of ‘case’, and hence of ‘validity’: the orthodox definition given in terms of interpretations or models, a metaphysical definition given in terms of possible worlds, and a substitutional definition defended by Quine. I argue that the orthodox notion is poorly suited to explain what's good about a Modus Ponens inference. I argue that there is something good that is explained by a certain kind of truth across possible worlds, but the explanation is not provided by metaphysical validity in particular; nothing of value is explained by truth across all possible worlds. Finally, I argue that the substitutional notion of validity allows us to correctly explain what is good about a valid inference.
Logical pluralism is commonly described as the view that there is more than one correct logic. It has been claimed that, in order for that view to be interesting, there has to be at least a potential for rivalry between the correct logics. This paper offers a detailed assessment of this suggestion. I argue that an interesting version of logical pluralism is hard, if not impossible, to achieve. I first outline an intuitive understanding of the notions of rivalry and correctness. I then discuss a natural account of rivalry in terms of disagreement about validity claims and the argument from meaning variance that has been raised against it. I explore a more refined picture of the meaning of validity claims that makes use of the character-content distinction of classical two dimensional semantics. There are three ways in which pluralists can use that framework to argue for the view that different logics may be rivals but could nevertheless be equally correct. I argue that none of them is convincing.
Since the time of Aristotle's students, interpreters have considered Prior Analytics to be a treatise about deductive reasoning, more generally, about methods of determining the validity and invalidity of premise-conclusion arguments. People studied Prior Analytics in order to learn more about deductive reasoning and to improve their own reasoning skills. These interpreters understood Aristotle to be focusing on two epistemic processes: first, the process of establishing knowledge that a conclusion follows necessarily from a set of premises (that is, on the epistemic process of extracting information implicit in explicitly given information) and, second, the process of establishing knowledge that a conclusion does not follow. Despite the overwhelming tendency to interpret the syllogistic as formal epistemology, it was not until the early 1970s that it occurred to anyone to think that Aristotle may have developed a theory of deductive reasoning with a well worked-out system of deductions comparable in rigor and precision with systems such as propositional logic or equational logic familiar from mathematical logic. When modern logicians in the 1920s and 1930s first turned their attention to the problem of understanding Aristotle's contribution to logic in modern terms, they were guided both by the Frege-Russell conception of logic as formal ontology and at the same time by a desire to protect Aristotle from possible charges of psychologism. They thought they saw Aristotle applying the informal axiomatic method to formal ontology, not as making the first steps into formal epistemology. They did not notice Aristotle's description of deductive reasoning. Ironically, the formal axiomatic method (in which one explicitly presents not merely the substantive axioms but also the deductive processes used to derive theorems from the axioms) is incipient in Aristotle's presentation.
Partly in opposition to the axiomatic, ontically-oriented approach to Aristotle's logic and partly as a result of attempting to increase the degree of fit between interpretation and text, logicians in the 1970s working independently came to remarkably similar conclusions to the effect that Aristotle indeed had produced the first system of formal deductions. They concluded that Aristotle had analyzed the process of deduction and that his achievement included a semantically complete system of natural deductions including both direct and indirect deductions. Where the interpretations of the 1920s and 1930s attribute to Aristotle a system of propositions organized deductively, the interpretations of the 1970s attribute to Aristotle a system of deductions, or extended deductive discourses, organized epistemically. The logicians of the 1920s and 1930s take Aristotle to be deducing laws of logic from axiomatic origins; the logicians of the 1970s take Aristotle to be describing the process of deduction and in particular to be describing deductions themselves, both those deductions that are proofs based on axiomatic premises and those deductions that, though deductively cogent, do not establish the truth of the conclusion but only that the conclusion is implied by the premise-set. Thus, two very different and opposed interpretations had emerged, interestingly both products of modern logicians equipped with the theoretical apparatus of mathematical logic. The issue at stake between these two interpretations is the historical question of Aristotle's place in the history of logic and of his orientation in philosophy of logic. This paper affirms Aristotle's place as the founder of logic taken as formal epistemology, including the study of deductive reasoning. A by-product of this study of Aristotle's accomplishments in logic is a clarification of a distinction implicit in discourses among logicians: that between logic as formal ontology and logic as formal epistemology.
The perhaps most important criticism of the nontransitive approach to semantic paradoxes is that it cannot truthfully express exactly which metarules preserve validity. I argue that this criticism overlooks that the admissibility of metarules cannot be expressed in any logic that allows us to formulate validity-Curry sentences and that is formulated in a classical metalanguage. Hence, the criticism applies to all approaches that do their metatheory in classical logic. If we do the metatheory of nontransitive logics in a nontransitive logic, however, there is no reason to think that the argument behind the criticism goes through. In general, asking a logic to express its own admissible metarules may not be a good idea.
This article addresses two areas of continuing controversy about consent in clinical research: the question of when consent to low risk research is necessary, and the question of when consent to research is valid. The article identifies a number of considerations relevant to determining whether consent is necessary, chief of which is whether the study would involve subjects in ways that would (otherwise) infringe their rights. When consent is necessary, there is a further question of under what conditions consent is valid or successful in waiving a right. The most influential account of validity conditions is non-moralized, in the sense that the conditions make no essential reference to whether the researcher soliciting consent has obtained it in a way that wrongs the subject. The article examines the implications of this account, and compares it with recent accounts that moralize some of the validity conditions.
Experimental design is one aspect of a scientific method. A well-designed, properly conducted experiment aims to control variables in order to isolate and manipulate causal effects and thereby maximize internal validity, support causal inferences, and guarantee reliable results. Traditionally employed in the natural sciences, experimental design has become an important part of research in the social and behavioral sciences. Experimental methods are also endorsed as the most reliable guides to policy effectiveness. Through a discussion of some of the central concepts associated with experimental design, including controlled variation and randomization, this chapter will provide a summary of key ethical issues that tend to arise in experimental contexts. In addition, by exploring assumptions about the nature of causation and by analyzing features of causal relationships, systems, and inferences in social contexts, this chapter will summarize the ways in which experimental design can undermine the integrity of not only social and behavioral research but policies implemented on the basis of such research.
In order to decide whether or not a discursive product of human reason corresponds to the logical order, one must analyze it in terms of syntactic correctness, consistency, and validity. The first step in logical analysis is formalization, that is, the process by which logical forms of thoughts are represented in different formal languages or logical systems. Although each thought can be properly formalized in different ways, the formalization variants are not equally adequate. The adequacy of formalization seems to depend on several essential features: parsimony, accuracy, transparency, fertility, and reliability. Because there is a partial antinomy between these traits, it is impossible to find a perfectly adequate variant of formalization. However, it is possible and preferable to reach a reasonable compromise by choosing the variant of formalization which best satisfies all of these fundamental characteristics.
With his distinction between the "context of discovery" and the "context of justification", Hans Reichenbach gave the traditional difference between genesis and validity a modern standard formulation. Reichenbach's distinction is one of the well-known ways in which the expression "context" is used in the theory of science. My argument is that Reichenbach's concept is unsuitable and leads to contradictions in the semantic fields of genesis and validity. I would like to demonstrate this by examining the different meanings of Reichenbach's context distinction. My investigation also shows how the difference between genesis and validity precedes Reichenbach's context distinction and indicates approaches for meaningful applications of the concept of context to the phenomena designated by Reichenbach. I will begin by reconstructing the way in which Reichenbach introduces the distinction between discovery and justification as a difference of contexts (1). Drawing on the numerous meanings of the term "context", I will then emphasize some chief characteristics and review, through exemplification, the usage of this term. First of all, I turn to the context of discovery as the nonrational part of all scientific knowledge and show that this meaning cannot be defined consistently (1a). For the context of justification, one can distinguish two main cases: the context of justification is either contrasted with the context of discovery, or it forms a unit therewith. In the first case, the use of the context term becomes paradoxical, insofar as justification separated from scientific practice does not represent a field of reference which could be specifically contrasted with another field of reference (1b). In the second case, the unifying definitions contradict the contextual meaning of discovery and justification (1c).
In the last section, I point to a useful application of the concept of context which can be found in Reichenbach's argumentation and which refers to the practical conditions of justification (2).
The classical rule of Repetition says that if you take any sentence as a premise, and repeat it as a conclusion, you have a valid argument. It's a very basic rule of logic, and many other rules depend on the guarantee that repeating a sentence, or really, any expression, guarantees sameness of referent, or semantic value. However, Repetition fails for token-reflexive expressions. In this paper, I offer three ways that one might replace Repetition, and still keep an interesting notion of validity. Each is a fine way to go for certain purposes, but I argue that one in particular is to be preferred by the semanticist who thinks that there are token-reflexive expressions in natural languages.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
With his influence on the development of physiology, physics and geometry, Hermann von Helmholtz, like few scientists of the second half of the 19th century, is representative of the research in natural science in Germany. The development of his understanding of science is no less representative. Until the late sixties, he emphatically claimed the truth of science; later on, he began to see the conditions for the validity of scientific knowledge in relative terms, and this can, in summary, be referred to as hypothesizing. Already in the past century, Helmholtz made first approaches to an understanding of science which were incompatible with his own former position and which pointed to the modern age to an astonishingly large extent. A comparison with Karl R. Popper's logic of research will illustrate how closely he nevertheless approached the modern understanding of science. In Popper's logic of research, the hypothesizing of scientific knowledge is definitely much more advanced than in Helmholtz's theory of science. What begins vaguely to emerge with Helmholtz has already become an explicitly formulated programme with Popper. Although Helmholtz and Popper are not on a direct line of epistemological development and Popper refers to Helmholtz only rarely and casually, there are in fact surprising points of contact which have not been taken notice of so far and which appear above all if one looks at Helmholtz's understanding of science against the background of Popper's logic of research.
ABSTRACT: A detailed presentation of Stoic logic, part one, including their theories of propositions (or assertibles, Greek: axiomata), demonstratives, temporal truth, simple propositions, non-simple propositions (conjunction, disjunction, conditional), quantified propositions, logical truths, modal logic, and general theory of arguments (including definition, validity, soundness, classification of invalid arguments).
The need to distinguish between logical and extra-logical varieties of inference, entailment, validity, and consistency has played a prominent role in meta-ethical debates between expressivists and descriptivists. But, to date, the importance that matters of logical form play in these distinctions has been overlooked. That’s a mistake given the foundational place that logical form plays in our understanding of the difference between the logical and the extra-logical. This essay argues that descriptivists are better positioned than their expressivist rivals to provide the needed account of logical form, and so better able to capture the needed distinctions. This finding is significant for several reasons: First, it provides a new argument against expressivism. Second, it reveals that descriptivists can make use of this new argument only if they are willing to take a controversial, but plausible, stand on claims about the nature and foundations of logic.
Dummett’s justification procedures are revisited. They are used as background for the discussion of some conceptual and technical issues in proof-theoretic semantics, especially the role played by assumptions in proof-theoretic definitions of validity.
The system R, or more precisely its pure implicational fragment R→, is considered by the relevance logicians as the most important. Another central system of relevance logic has been the logic E of entailment, which was supposed to capture strict relevant implication. The next system of relevance logic is RM, or R-mingle. The question is whether adding the mingle axiom to R→ yields RM→, the pure implicational fragment of that system. As concerns the weak systems, there are at least two approaches to the problem. First of all, it is possible to restrict the validity of some theorems. In another approach we can investigate even weaker logics which have no theorems and are characterized only by rules of deducibility.
Werner Heisenberg made an important, and as yet insufficiently researched, contribution to the transformation of the modern conception of science. This transformation involved a reassessment of the status of scientific knowledge from certain to merely hypothetical, an assessment that is widely recognized today. I examine Heisenberg’s contribution in particular by taking as an example his conception of “closed theories”, according to which the established physical theories have no universal and exclusive validity, but only a restricted one. Firstly, I characterize the historical process of hypothetization of claims to validity. Then, secondly, I reconstruct Heisenberg’s conception, as far as it can be derived from his popular writings, relating it to the process of hypothetization. Finally, I touch on the history of its reception and compare it with conceptions of science that emphasize the significance of the hypothetical for the modern theories of natural sciences. Compared to these conceptions, Heisenberg’s contribution turns out to be rather independent.
I discuss Prawitz’s claim that a non-reliabilist answer to the question “What is a proof?” compels us to reject the standard Bolzano-Tarski account of validity, and to account for the meaning of a sentence in broadly verificationist terms. I sketch what I take to be a possible way of resisting Prawitz’s claim, one that concedes the anti-reliabilist assumption from which Prawitz’s argument proceeds.
We study the modal logic MLr of the countable random frame, which is contained in and 'approximates' the modal logic of almost sure frame validity, i.e. the logic of those modal principles which are valid with asymptotic probability 1 in a randomly chosen finite frame. We give a sound and complete axiomatization of MLr and show that it is not finitely axiomatizable. Then we describe the finite frames of that logic and show that it has the finite frame property and its satisfiability problem is in EXPTIME. All these results easily extend to temporal and other multi-modal logics. Finally, we show that there are modal formulas which are almost surely valid in the finite, yet fail in the countable random frame, and hence do not follow from the extension axioms. Therefore the analog of Fagin's transfer theorem for almost sure validity in first-order logic fails for modal logic.