Introduction: The Defining Issues Test (DIT) aimed to measure one’s moral judgment development in terms of moral reasoning. The Neo-Kohlbergian approach, an elaboration of Kohlbergian theory, focuses on the continuous development of postconventional moral reasoning and constitutes the theoretical basis of the DIT. However, very few studies have directly tested the internal structure of the DIT, which would indicate its construct validity. Objectives: Using the DIT-2, a later revision of the DIT, we examined whether a bi-factor model or a 3-factor CFA model showed a better model fit. The Neo-Kohlbergian theory of moral judgment development, which constitutes the theoretical basis for the DIT-2, proposes that moral judgment development occurs continuously and that it can be better explained with a soft-stage model. Given these assertions, we assumed that the bi-factor model, which includes a Schema-General Moral Judgment (SGMJ) factor, might be more consistent with Neo-Kohlbergian theory. Methods: We analyzed a large dataset collected from undergraduate students and performed confirmatory factor analysis (CFA) via weighted least squares. A 3-factor CFA based on the DIT-2 manual and a bi-factor model were compared for model fit. The three factors in the 3-factor CFA were labeled according to the moral development schemas of Neo-Kohlbergian theory (i.e., personal interests, maintaining norms, and postconventional schemas); the bi-factor model included the SGMJ in addition to the three factors. Results: In general, the bi-factor model showed a better fit than the 3-factor CFA model, although both models reported acceptable fit indices. Conclusion: Using both CFA and bi-factor models, we found the DIT-2 to be a valid measure of the internal structure of moral reasoning development. In addition, we conclude that the soft-stage model posited by the Neo-Kohlbergian approach to moral judgment development is better supported by the bi-factor model tested in the present study.
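Comparisons of nested structural equation models of this kind are commonly carried out with a χ² difference (likelihood-ratio) test alongside the fit indices. The sketch below is illustrative only: the fit statistics are invented, not taken from the study, and nesting of the 3-factor model within the bi-factor model is assumed.

```python
from scipy.stats import chi2

def chi_square_difference_test(chi2_restricted, df_restricted,
                               chi2_general, df_general):
    """Chi-square difference test for nested SEMs.

    The more general model (here, the bi-factor model) has more free
    parameters and hence fewer degrees of freedom."""
    delta_chi2 = chi2_restricted - chi2_general
    delta_df = df_restricted - df_general
    p_value = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
    return delta_chi2, delta_df, p_value

# Hypothetical fit statistics (NOT from the study):
# 3-factor CFA: chi2 = 512.4 on 186 df; bi-factor: chi2 = 440.1 on 168 df.
d, ddf, p = chi_square_difference_test(512.4, 186, 440.1, 168)
print(f"Δχ² = {d:.1f}, Δdf = {ddf}, p = {p:.4f}")
```

A significant result favors the more general (bi-factor) model, which is the pattern the abstract reports.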
Beall and Murzi (2013: 143–165) introduce an object-linguistic predicate for naïve validity, governed by intuitive principles that are inconsistent with the classical structural rules. As a consequence, they suggest that revisionary approaches to semantic paradox must be substructural. In response to Beall and Murzi, Field (2017: 1–19) has argued that naïve validity principles do not admit of a coherent reading and that, for this reason, a non-classical solution to the semantic paradoxes need not be substructural. The aim of this paper is to respond to Field’s objections and to point to a coherent notion of validity which underwrites a coherent reading of Beall and Murzi’s principles: grounded validity. The notion, first introduced by Nicolai and Rossi, is a generalisation of Kripke’s notion of grounded truth, and yields an irreflexive logic. While we do not advocate the adoption of a substructural logic, we take the notion of naïve validity to be a legitimate semantic notion that points to genuine expressive limitations of fully structural revisionary approaches.
What are people who disagree about logic disagreeing about? The paper argues that (in a wide range of cases) they are primarily disagreeing about how to regulate their degrees of belief. An analogy is drawn between beliefs about validity and beliefs about chance: both sorts of belief serve primarily to regulate degrees of belief about other matters, but in both cases the concepts have a kind of objectivity nonetheless.
This paper looks at the question of what it means for a psychological test to have construct validity. I approach this topic by way of an analysis of recent debates about the measurement of implicit social cognition. After showing that there is little theoretical agreement about implicit social cognition, and that the predictive validity of implicit tests appears to be low, I turn to a debate about their construct validity. I show that there are two questions at stake: First, what level of detail and precision does a construct have to possess such that a test can in principle be valid relative to it? And second, what kind of evidence needs to be in place such that a test can be regarded as validated relative to a given construct? I argue that construct validity is not an all-or-nothing affair. It can come in degrees, because both our constructs and our knowledge of the explanatory relation between constructs and data can vary in accuracy and level of detail, and a test can fail to measure all of the features associated with a construct. I conclude by arguing in favor of greater philosophical attention to processes of construct development.
Nontransitive responses to the validity Curry paradox face a dilemma that was recently formulated by Barrio, Rosenblatt and Tajer. It seems that, in the nontransitive logic ST enriched with a validity predicate, either you cannot prove that all derivable metarules preserve validity, or you can prove that instances of Cut that are not admissible in the logic preserve validity. I respond on behalf of the nontransitive approach. The paper argues, first, that we should reject the detachment principle for naive validity. Secondly, I show how to add a validity predicate to ST while avoiding the dilemma.
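The nontransitive logic ST at issue here can be illustrated with a small Strong Kleene evaluator. This is a sketch under simplifying assumptions of my own (a "liar-like" atom `lam` pinned to the middle value stands in for a paradoxical sentence; the encoding is not the paper's): premises are read strictly (value 1), conclusions tolerantly (value at least ½), and transitivity (Cut) fails.

```python
from itertools import product

# Strong Kleene values: 1 (true), 0.5 (indeterminate), 0 (false).
ATOMS = ["p", "q"]

def valuations():
    """All Strong Kleene valuations; lam is forced to 0.5 in each."""
    for vals in product([0, 0.5, 1], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        v["lam"] = 0.5  # the paradoxical sentence is always indeterminate
        yield v

def st_valid(premises, conclusion):
    """ST validity: no valuation makes every premise strictly true (= 1)
    while the conclusion fails to be at least tolerantly true (>= 0.5)."""
    return all(
        not (all(v[a] == 1 for a in premises) and v[conclusion] < 0.5)
        for v in valuations()
    )

# Cut fails: from |- lam and lam |- q we cannot conclude |- q.
print(st_valid([], "lam"))     # True: lam is always at least 0.5
print(st_valid(["lam"], "q"))  # True: lam is never strictly true
print(st_valid([], "q"))       # False: q can take value 0
```

This is exactly the shape of failure the dilemma trades on: both inputs to Cut are ST-valid, yet the cut inference is not.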
Any theory of truth must find a way around Curry’s paradox, and there are well-known ways to do so. This paper concerns an apparently analogous paradox, about validity rather than truth, which JC Beall and Julien Murzi call the v-Curry. They argue that there are reasons to want a common solution to it and the standard Curry paradox, and that this rules out the solutions to the latter offered by most “naive truth theorists.” To this end they recommend a radical solution to both paradoxes, involving a substructural logic, in particular, one without structural contraction. In this paper I argue that substructuralism is unnecessary. Diagnosing the “v-Curry” is complicated because of a multiplicity of readings of the principles it relies on. But these principles are not analogous to the principles of naive truth, and taken together, there is no reading of them that should have much appeal to anyone who has absorbed the morals of both the ordinary Curry paradox and the second incompleteness theorem.
Tarski's Undefinability of Truth Theorem comes in two versions: that no consistent theory which interprets Robinson's Arithmetic (Q) can prove all instances of the T-Scheme and hence define truth; and that no such theory, if sound, can even express truth. In this note, I prove corresponding limitative results for validity. While Peano Arithmetic already has the resources to define a predicate expressing logical validity, as Jeff Ketland has recently pointed out (2012, Validity as a primitive. Analysis 72: 421–30), no theory which interprets Q and is closed under the standard structural rules can define or even express validity, on pain of triviality. The results put pressure on the widespread view that there is an asymmetry between truth and validity, viz. that while the former cannot be defined within the language, the latter can. I argue that Vann McGee's and Hartry Field's arguments for the asymmetry view are problematic.
What accounts for how we know that certain rules of reasoning, such as reasoning by Modus Ponens, are valid? If our knowledge of validity must be based on some reasoning, then we seem to be committed to the legitimacy of rule-circular arguments for validity. This paper raises a new difficulty for the rule-circular account of our knowledge of validity. The source of the problem is that, contrary to traditional wisdom, a universal generalization cannot be inferred just on the basis of reasoning about an arbitrary object. I argue in favor of a more sophisticated constraint on reasoning by universal generalization, one which undermines a rule-circular account of our knowledge of validity.
Firstly I characterize Simple Partial Logic (SPL) as the generalization and extension of a certain two-valued logic. Based on the characterization I present two definitions of validity in SPL. Finally I show that given my characterization these two definitions are more appropriate than other definitions that have been prevalent, since both have some desirable semantic properties that the others lack.
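The abstract does not spell out the two SPL definitions, but the general phenomenon can be sketched with a hypothetical Kleene three-valued logic: two natural definitions of validity, truth preservation and non-falsity preservation, come apart on the same argument once sentences can be undefined. The encoding below is my own illustration, not the paper's.

```python
from itertools import product

T, N, F = 1, 0.5, 0   # true, undefined, false

def neg(a): return 1 - a
def disj(a, b): return max(a, b)
def conj(a, b): return min(a, b)

def valuations(atoms):
    for vals in product([T, N, F], repeat=len(atoms)):
        yield dict(zip(atoms, vals))

# Argument: p, therefore p AND (q OR NOT q).
def premise(v): return v["p"]
def conclusion(v): return conj(v["p"], disj(v["q"], neg(v["q"])))

# Definition 1: every valuation making the premise true makes the conclusion true.
truth_preserving = all(
    conclusion(v) == T for v in valuations(["p", "q"]) if premise(v) == T
)
# Definition 2: no valuation makes the premise true and the conclusion false.
non_falsity_preserving = all(
    not (premise(v) == T and conclusion(v) == F) for v in valuations(["p", "q"])
)
print(truth_preserving, non_falsity_preserving)  # False True
```

With v(p) true and v(q) undefined, the conclusion is undefined: not false, but not true either, so the two definitions disagree.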
It is one thing for a given proposition to follow or to not follow from a given set of propositions and it is quite another thing for it to be shown either that the given proposition follows or that it does not follow. Using a formal deduction to show that a conclusion follows and using a countermodel to show that a conclusion does not follow are both traditional practices recognized by Aristotle and used down through the history of logic. These practices presuppose, respectively, a criterion of validity and a criterion of invalidity, each of which has been extended and refined by modern logicians: deductions are studied in formal syntax (proof theory) and countermodels are studied in formal semantics (model theory). The purpose of this paper is to compare these two criteria to the corresponding criteria employed in Boole’s first logical work, The Mathematical Analysis of Logic (1847). In particular, this paper presents a detailed study of the relevant metalogical passages and an analysis of Boole’s symbolic derivations. It is well known, of course, that Boole’s logical analysis of compound terms (involving ‘not’, ‘and’, ‘or’, ‘except’, etc.) contributed to the enlargement of the class of propositions and arguments formally treatable in logic. The present study shows, in addition, that Boole made significant contributions to the study of deductive reasoning. He identified the role of logical axioms (as opposed to inference rules) in formal deductions, and he conceived of the idea of an axiomatic deductive system (which yields logical truths by itself and which yields consequences when applied to arbitrary premises). Nevertheless, surprisingly, Boole’s attempt to implement his idea of an axiomatic deductive system involved striking omissions: Boole does not use his own formal deductions to establish validity.
Boole does give symbolic derivations, several of which are vitiated by “Boole’s Solutions Fallacy”: the fallacy of supposing that a solution to an equation is necessarily a logical consequence of the equation. This fallacy seems to have led Boole to confuse equational calculi (i.e., methods for generating solutions) with deduction procedures (i.e., methods for generating consequences). The methodological confusion is closely related to the fact, shown in detail below, that Boole had adopted an unsound criterion of validity. It is also shown that Boole totally ignored the countermodel criterion of invalidity. Careful examination of the text does not reveal with certainty a test for invalidity which was adopted by Boole. However, we have isolated a test that he seems to use in this way and we show that this test is ineffectual in the sense that it does not serve to identify invalid arguments. We go beyond the simple goal stated above. Besides comparing Boole’s earliest criteria of validity and invalidity with those traditionally (and still generally) employed, this paper also investigates the framework and details of The Mathematical Analysis of Logic.
VALIDITY is the first learning game developed to help students develop and hone skills in constructing proofs in both the propositional and first-order predicate calculi. It comprises an autotelic (self-motivating) learning approach to assist students in developing skills and strategies of proof. The text of VALIDITY consists of a general introduction that describes earlier studies of autotelic learning games, paying particular attention to work done at the Law School of Yale University, called the ALL Project (Accelerated Learning of Logic). Following the introduction, the game of VALIDITY is described, first with reference to the propositional calculus, and then in connection with the first-order predicate calculus with identity. Sections of the text are devoted to discussions of the various rules of derivation employed in both calculi. Three appendices follow the main text; these provide a catalogue of sequents and theorems that have been proved for the propositional calculus and for the predicate calculus, and include suggestions for the classroom use of VALIDITY in university-level courses in mathematical logic.
For semantic inferentialists, the basic semantic concept is validity. An inferentialist theory of meaning should offer an account of the meaning of "valid." If one tries to add a validity predicate to one's object language, however, one runs into problems like the v-Curry paradox. In previous work, I presented a validity predicate for a non-transitive logic that can adequately capture its own meta-inferences. Unfortunately, in that system, one cannot show of any inference that it is invalid. Here I extend the system so that it can capture invalidities.
The following four theses all have some intuitive appeal: (I) There are valid norms. (II) A norm is valid only if justified by a valid norm. (III) Justification, on the class of norms, has an irreflexive proper ancestral. (IV) There is no infinite sequence of valid norms each of which is justified by its successor. However, at least one must be false, for (I)–(III) together entail the denial of (IV). There is thus a conflict between intuition and logical possibility. This paper, after distinguishing various conceptions of a norm, of validity and of justification, argues for the following position. (I) is true. (II) is false for legislative justification and true for epistemic justification. (III) is true for legislative and false for epistemic justification. (IV) is true for legislative justification; for epistemic justification (IV) is true or false depending on the conception taken of a norm. Our intuition in favour of (II) must therefore be abandoned where justification is conceived legislatively. Our intuition in favour of (III) must be abandoned, and our intuition in favour of (IV) qualified, where justification is conceived epistemically.
Definitions I presented in a previous article as part of a semantic approach in epistemology assumed that the concept of derivability from standard logic held across all mathematical and scientific disciplines. The present article argues that this assumption is not true for quantum mechanics (QM) by showing that concepts of validity applicable to proofs in mathematics and in classical mechanics are inapplicable to proofs in QM. Because semantic epistemology must include this important theory, revision is necessary. The one I propose also extends semantic epistemology beyond the ‘hard’ sciences. The article ends by presenting and then refuting some responses QM theorists might make to my arguments.
We evaluated the reliability, validity, and differential item functioning (DIF) of a shorter version of the Defining Issues Test-1 (DIT-1), the behavioral DIT (bDIT), measuring the development of moral reasoning. 353 college students (81 males, 271 females, 1 not reported; age M = 18.64 years, SD = 1.20 years) who were taking introductory psychology classes at a public university in a suburban area in the Southern United States participated in the present study. First, we examined the reliability of the bDIT using Cronbach’s α and its concurrent validity with the original DIT-1 using disattenuated correlation. Second, we compared the test duration between the two measures. Third, we tested the DIF of each question between males and females. Findings indicated that, first, the bDIT showed acceptable reliability and good concurrent validity. Second, the test duration could be significantly shortened by employing the bDIT. Third, DIF results indicated that the bDIT items did not favour any gender. Practical implications of the present study based on the reported findings are discussed.
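The two statistics named above, Cronbach's α and the disattenuated correlation, have standard closed forms. A minimal sketch with toy data (the numbers below are invented, not the study's):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha. items: one list of scores per item,
    aligned across the same respondents."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    total_var = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_vars / total_var)

def disattenuated_r(r_xy, rel_x, rel_y):
    """Correlation corrected for the unreliability of both measures."""
    return r_xy / (rel_x * rel_y) ** 0.5

# Toy data: 3 items, 4 respondents (illustrative only).
items = [[2, 4, 3, 5], [1, 4, 3, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 3))        # 0.946
print(round(disattenuated_r(0.6, 0.8, 0.9), 3))  # 0.707
```

The correction shows why a modest observed correlation between the bDIT and DIT-1 can still indicate good concurrent validity once measurement error in both tests is taken into account.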
The notion of validity for modal languages could be defined in two slightly different ways. The first is the original definition given by S. Kripke, for which a formula φ of a modal language L is valid if and only if it is true in every actual world of every interpretation of L. The second is the definition that has become standard in most textbook presentations of modal logic, for which a formula φ of L is valid if and only if it is true in every world in every interpretation of L. For simple modal languages, “Kripkean validity” and “Textbook validity” are extensionally equivalent. According to E. Zalta, however, Textbook validity is an “incorrect” definition of validity, because: (i) it is not in full compliance with Tarski’s notion of truth; (ii) in expressively richer languages, enriched by the actuality operator, some obviously true formulas count as valid only if the Kripkean notion is used. The purpose of this paper is to show that (i) and (ii) are not good reasons to favor Kripkean validity over Textbook validity. On the one hand, I will claim that the difference between the two should rather be seen as the result of two different conceptions on how a modal logic should be built from a non-modal basis; on the other, I will show the advantages, for the question at issue, of seeing the actuality operator as belonging to the family of two-dimensional operators.
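The two definitions can be made concrete with a toy model checker. The encoding below is my own illustration, not Zalta's: with an actuality operator `@`, the formula @p → p is valid in the Kripkean (real-world) sense but not in the Textbook sense, which is exactly the kind of divergence point (ii) turns on.

```python
from itertools import product

# Formulas: "p", ("not", f), ("impl", f, g), ("act", f)  -- "act" is @.
def truth(formula, model, world):
    worlds, actual, val = model
    if formula == "p":
        return val[world]
    op = formula[0]
    if op == "not":
        return not truth(formula[1], model, world)
    if op == "impl":
        return (not truth(formula[1], model, world)) or truth(formula[2], model, world)
    if op == "act":  # actuality: always evaluate at the designated actual world
        return truth(formula[1], model, actual)

def models():
    """All two-world models: a designated actual world and a valuation for p."""
    worlds = (0, 1)
    for actual in worlds:
        for vals in product([False, True], repeat=2):
            yield (worlds, actual, dict(zip(worlds, vals)))

def kripkean_valid(f):   # true at the actual world of every model
    return all(truth(f, m, m[1]) for m in models())

def textbook_valid(f):   # true at every world of every model
    return all(truth(f, m, w) for m in models() for w in m[0])

f = ("impl", ("act", "p"), "p")  # @p -> p
print(kripkean_valid(f), textbook_valid(f))  # True False
```

At the actual world @p and p always agree, so the implication holds there; at a non-actual world where p fails while p holds actually, it does not.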
Rodrigues and Banzato related the validity of diagnostic categories to their meaningfulness and I wish to explore this relation further without attempting to make criticisms. To commence, if a diagnostic category is to be valid, it must mean something.
This article argues for the formal validity of and the truth of the premises and conclusion of a version of Aquinas' "Third Way" that says: If each of the parts of nature is contingent, the whole of nature is contingent. Each of the parts of nature is contingent. Therefore, the whole of nature is contingent--where "contingent" means having a cause and not existing self-sufficiently.
In this paper, I claim that two ways of defining validity for modal languages (“real-world” and “general” validity), often taken to correspond to a distinction between a correct and an incorrect way of defining modal validity, correspond instead to two substantive ways of conceiving modal truth. At the same time, I claim that the major logical manifestation of the real-world/general validity distinction in modal propositional languages with the actuality operator should not be taken seriously, but simply as a by-product of the way in which the semantics of such an operator is usually given.
This study sought to replicate and extend Hall and colleagues’ (2014) work on developing and validating scales from the Psychopathic Personality Inventory (PPI) to index the triarchic psychopathy constructs of boldness, meanness, and disinhibition. This study also extended Hall et al.’s initial findings by including the PPI Revised (PPI–R). A community sample (n = 240) weighted toward subclinical psychopathy traits and a male prison sample (n = 160) were used for this study. Results indicated that PPI–Boldness, PPI–Meanness, and PPI–Disinhibition converged with other psychopathy, personality, and behavioral criteria in ways conceptually expected from the perspective of the triarchic psychopathy model, including showing very strong convergent and discriminant validity with their Triarchic Psychopathy Measure counterparts. These findings further enhance the utility of the PPI and PPI–R in measuring these constructs.
Following Kelsen’s influential theory of law, the concept of validity has been used in the literature to refer to different properties of law (such as existence, membership, bindingness, and more), and so it is inherently ambiguous. More importantly, Kelsen’s equivalence between the existence and the validity of law prevents us from accounting satisfactorily for relevant aspects of our current legal practices, such as the phenomenon of “unlawful law.” This chapter addresses this ambiguity to argue that the most important function of the concept of validity is constituting the complex ontological paradigm of modern law as an institutional-normative practice. In this sense, validity is an artificial ontological status that supervenes on that of the existence of legal norms, thus allowing law to regulate its own creation and creating the logical space for the occurrence of “unlawful law.” This function, I argue in the last part, is crucial to understanding the relationship between the ontological and epistemic dimensions of the objectivity of law. Given the necessary practice-independence of legal norms, it is the epistemic accessibility of their creation that enables the law to fulfill its general action-guiding (and thus coordinating) function.
This paper considers Rumfitt’s bilateral classical logic (BCL), which is proposed to counter Dummett’s challenge to classical logic. First, agreeing with several authors, we argue that Rumfitt’s notion of harmony, used to justify logical rules in a purely proof-theoretical manner, is not sufficient to justify the coordination rules of BCL purely proof-theoretically. For the central part of this paper, we propose a notion of proof-theoretical validity for BCL, similar to Prawitz’s, and prove that BCL is sound and complete with respect to this notion of validity. The major difficulty in defining validity for BCL is that the validity of a positive +A appears to depend on that of the negative −A, and vice versa. Thus, a straightforward inductive definition does not work because of this circular dependence. However, the Knaster–Tarski fixed point theorem can resolve this circularity. Finally, we discuss the philosophical relevance of our work, in particular the impact of the use of the fixed point theorem and the issue of decidability.
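The fixed-point move alluded to above can be illustrated in miniature. The operator below is a toy monotone map on a finite powerset lattice, not the paper's actual validity operator; Kleene iteration from the bottom element reaches its least fixed point, which is the standard way circular definitions of this shape are resolved.

```python
def least_fixed_point(operator, bottom):
    """Kleene iteration: starting from the bottom element, apply a
    monotone operator until it stabilizes (guaranteed to terminate
    on a finite lattice)."""
    current = bottom
    while True:
        nxt = operator(current)
        if nxt == current:
            return current
        current = nxt

# Toy monotone operator on subsets of {0, ..., 5}:
# the output always contains 0 and is closed under successor below 5.
def op(s):
    return frozenset({0} | {n + 1 for n in s if n < 5})

print(sorted(least_fixed_point(op, frozenset())))  # [0, 1, 2, 3, 4, 5]
```

The same Knaster–Tarski guarantee applies whenever the operator is monotone on a complete lattice, even when the "definition" of each element refers to the others.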
The existence of singularities alerts us that one of the highest priorities of a centennial perspective on general relativity should be a careful re-thinking of the validity domain of Einstein’s field equations. We address the problem of constructing distinguishable extensions of the smooth spacetime manifold model, which can incorporate singularities, while retaining the form of the field equations. The sheaf-theoretic formulation of this problem is tantamount to extending the algebra sheaf of smooth functions to a distribution-like algebra sheaf in which the former may be embedded, satisfying the pertinent cohomological conditions required for the coordinatization of all of the tensorial physical quantities, such that the form of the field equations is preserved. We present in detail the construction of these distribution-like algebra sheaves in terms of residue classes of sequences of smooth functions modulo the information of singular loci encoded in suitable ideals. Finally, we consider the application of these distribution-like solution sheaves in geometrodynamics by modeling topologically-circular boundaries of singular loci in three-dimensional space in terms of topological links. It turns out that the Borromean link represents higher order wormhole solutions.
As the technosciences, including genomics, develop into a global phenomenon, the question inevitably emerges whether and to what extent bioethics can and should become a globalised phenomenon as well. Could we somehow articulate a set of core principles or values that ought to be respected worldwide and that could serve as a universal guide or blueprint for bioethical regulations for embedding biotechnologies in various countries? This article considers one universal declaration, the UNESCO Declaration on Bioethics and Human Rights (2005a). General criticisms made in a recent special issue of Developing World Bioethics are that the concepts used in the Declaration are too general and vague to generate real commitment; that the so-called universal values are not universal; and, that UNESCO should not be engaged in producing such declarations which are the domain of professional bioethicists. This article considers these and other criticisms in detail and presents an example of an event in which the Declaration was used: the request by the Republic of Sakha, in Siberia, for a UNESCO delegation to advise on the initiation of a bioethics programme. The Declaration was intended to provide an adequate “framework of principles and procedures to guide states in the formulation of their legislation, policies and other instruments in the field of bioethics” (article 2a). The Declaration was produced, and principles agreed upon, in an interactive and deliberative manner with world-wide expert participation. We argue that the key issue is not whether the general principles can be exported worldwide (in principle they can), but rather how processes of implementation and institutionalisation should take shape in different social and cultural contexts. In particular, broader publics are not routinely involved in bioethical debate and policy-making processes worldwide.
Background: Despite being often taken as the benchmark of quality for diagnostic and classificatory tools, 'validity' is admitted as a poorly worked out notion in psychiatric nosology. Objective: Here we aim at presenting a view that we believe does better justice to the significance of the notion of validity, as well as at explaining away some misconceptions and inappropriate expectations regarding this attribute in the aforementioned context. Method: The notion of validity is addressed taking into account its role, the framework according to which it should be assessed and the specific contents to which it refers within psychiatric nosology. Results and Conclusions: The notion of validity has an epistemological thrust and its foremost role is distinguishing correct reasoning and truth from what is irrational or false. From it follows not only that 'validity' always refers to elements of knowledge and rationality such as arguments, inferences and propositions, but also that the appropriate frameworks to assess 'validity' are logics and scientific methodology. When the validity of a psychiatric diagnostic category is at stake, the contents to which it refers are those relevantly related to the notion of 'diagnostic concept'. The consequences of our reading of the notion of 'validity' are discussed vis-à-vis the challenges faced by psychiatric nosology in order to have its diagnostic categories validated.
Dear Editor, in a previous paper we have tried to delve into what validity means in the context of psychiatric nosology, arguing for a pragmatic view of it. Here we want to briefly reassert the basic points of our analysis, make a few clarifications and address some issues raised by commentators.
Neuroscience has studied deductive reasoning over the last 20 years under the assumption that deductive inferences are not only de jure but also de facto distinct from other forms of inference. The objective of this research is to verify whether logically valid deductions leave any cerebral electrical trait that is distinct from the trait left by non-valid deductions. Twenty-three subjects with an average age of 20.35 years were recorded with MEG and placed into a two-condition paradigm (100 trials for each condition) in which both conditions presented exactly the same relational complexity (same variables and content) but distinct logical complexity. Both conditions show the same electromagnetic components (P3, N4) in the early temporal window (250–525 ms) and P6 in the late temporal window (500–775 ms). The significant activity in both valid and invalid conditions is found in sensors from medial prefrontal regions, probably corresponding to the ACC or to the medial prefrontal cortex. The amplitude and intensity of valid deductions are significantly lower in both temporal windows (p = 0.0003). The reaction time was 54.37% slower in the valid condition. Validity leaves a minimal but measurable hypoactive electrical trait in brain processing. The minor electrical demand is attributable to the recursive and automatable character of valid deductions, suggesting a physical indicator of computational deductive properties. It is hypothesized that all valid deductions are recursive and hypoactive.
What is a valid measuring instrument? Recent philosophy has attended to the logic of justification of measures, such as construct validation, but not to the question of what it means for an instrument to be a valid measure of a construct. A prominent approach grounds validity in the existence of a causal link between the attribute and its detectable manifestations. Some of its proponents claim that, therefore, validity does not depend on pragmatics and research context. In this paper, I cast doubt on the possibility of a context-independent causal account of validity. I assess several versions, arguing that all of them fail to judge the validity of measuring instruments correctly. Because different research purposes require different properties from measuring instruments, no account of validity succeeds without referring to the specific research purpose that creates the need for measurement in the first place.
Let’s begin by imagining a hypothetical psychotic illness called “Schneider’s Disease” (SD), recognized for over 100 years. Let’s assume there has been great controversy as regards the “most valid” set of diagnostic criteria for SD.
The logical-pragmatic perspective on psychiatric diagnosis, presented by Rodrigues and Banzato, contributes to and develops the existing conventional taxonomic framework. The latter is regarded as grounded on the epistemological prerequisites proposed by Carl Gustav Hempel in the late 1960s and adopted by the DSM task force of R. Spitzer in 1973.
Few of Kant’s distinctions have generated as much puzzlement and criticism as the one he draws in the Prolegomena between judgments of experience, which he describes as objectively and universally valid, and judgments of perception, which he says are merely subjectively valid. Yet the distinction between objective and subjective validity is central to Kant’s account of experience and plays a key role in his Transcendental Deduction of the categories. In this paper, I reject a standard interpretation of the distinction, according to which judgments of perception are merely subjectively valid because they are made without sufficient investigation. In its place, I argue that for Kant, judgments of perception are merely subjectively valid because they merely report sequences of perceptions had by a subject without claiming that what is represented by the perceptions is connected in the objects the perceptions are of. Whereas the interpretation I criticize undercuts Kant’s strategy in the Deduction, I argue, my interpretation illuminates it.
Neuroscientific claims have a significant impact on traditional philosophy. This essay, focusing on the field of moral neuroscience, discusses how and why philosophy can contribute to neuroscientific progress. First, viewing the interactions between moral neuroscience and moral philosophy, it becomes clear that moral philosophy can and does contribute to moral neuroscience in two ways: as explanandum and as explanans. Next, it is shown that moral philosophy is well suited to contribute to moral neuroscience in both of these ways in the context of the problem of ecological validity. Philosophy can play the role of an agent for ecological validity, since traditional philosophy shapes and reflects part of our social reality. Finally, based on these arguments, I tentatively sketch how a Kantian account of moral incentive can play this role.
Four main forms of Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and Universal DA. All four forms use different probabilistic logic to predict that the end of the human civilization will happen unexpectedly soon based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which uses the idea of logical uncertainty over the DA’s validity estimated based on a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts that there is a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know based on analysis of different technological risks.
In this article, I examined the various ethical arguments raised to morally discount homosexuality. I found that so far no moral argument has provided adequate grounds to discount homoeroticism. However, I have developed the ‘Two-Way Test’ (TWT) by which the social acceptability of any sexual relation should be tested for moral validity. From the analysis, homosexuality was found to have failed the test. That is to say, homosexuality is not a morally valid act. Despite that, the immoral status of homosexuality did not constitute sufficient ground for its criminalization, because not all immoral acts are criminalized but only those that impede the rights or liberty of others. In conclusion, the paper submitted that although homosexuality had failed the TWT, it does not call for violence, criminalization and discrimination against persons of homosexual orientation, but that homosexuals should be accommodated without deliberately discouraging heterosexual relationships. My advocacy for sexual tolerance is based on the grounds that ongoing biological research has so far shown the phenomenon to be a biological reality beyond the personal control of most homosexuals.
True beliefs and truth-preserving inferences are, in some sense, good beliefs and good inferences. When an inference is valid, though, it is not merely truth-preserving, but truth-preserving in all cases. This motivates my question: I consider a Modus Ponens inference, and I ask what its validity in particular contributes to the explanation of why the inference is, in any sense, a good inference. I consider the question under three different definitions of ‘case’, and hence of ‘validity’: the orthodox definition given in terms of interpretations or models, a metaphysical definition given in terms of possible worlds, and a substitutional definition defended by Quine. I argue that the orthodox notion is poorly suited to explain what's good about a Modus Ponens inference. I argue that there is something good that is explained by a certain kind of truth across possible worlds, but the explanation is not provided by metaphysical validity in particular; nothing of value is explained by truth across all possible worlds. Finally, I argue that the substitutional notion of validity allows us to correctly explain what is good about a valid inference.
The perhaps most important criticism of the nontransitive approach to semantic paradoxes is that it cannot truthfully express exactly which metarules preserve validity. I argue that this criticism overlooks that the admissibility of metarules cannot be expressed in any logic that allows us to formulate validity-Curry sentences and that is formulated in a classical metalanguage. Hence, the criticism applies to all approaches that do their metatheory in classical logic. If we do the metatheory of nontransitive logics in a nontransitive logic, however, there is no reason to think that the argument behind the criticism goes through. In general, asking a logic to express its own admissible metarules may not be a good idea.
Goodman and Lederman (2020) argue that the traditional Fregean strategy for preserving the validity of Leibniz’s Law of substitution fails when confronted with apparent counterexamples involving proper names embedded under propositional attitude verbs. We argue, on the contrary, that the Fregean strategy succeeds and that Goodman and Lederman’s argument misfires.
Background: Moral Growth Mindset (MGM) is the belief that one can become a morally better person through effort. Prior research showed that MGM is positively associated with the promotion of moral motivation among adolescents and young adults. We developed and tested the English version of the MGM measure in this study with data collected from college student participants. Methods: In Study 1, we tested the reliability and validity of the MGM measure with two-wave data (N = 212, mean age = 24.18 years, SD = 7.82 years). In Study 2, we retested the construct validity of the MGM measure and examined its association with other moral and positive psychological indicators to test its convergent and discriminant validity (N = 275, mean age = 22.02 years, SD = 6.34 years). Results: Study 1 showed that the MGM measure was reliable and valid. In Study 2, the results indicated that the MGM was well correlated with other moral and positive psychological indicators, as expected. Conclusions: We developed and validated the English version of the MGM measure in the present study. The results from Studies 1 and 2 supported the reliability and validity of the MGM measure, indicating that the English version can measure one’s MGM as intended.
Since the time of Aristotle's students, interpreters have considered Prior Analytics to be a treatise about deductive reasoning, more generally, about methods of determining the validity and invalidity of premise-conclusion arguments. People studied Prior Analytics in order to learn more about deductive reasoning and to improve their own reasoning skills. These interpreters understood Aristotle to be focusing on two epistemic processes: first, the process of establishing knowledge that a conclusion follows necessarily from a set of premises (that is, on the epistemic process of extracting information implicit in explicitly given information) and, second, the process of establishing knowledge that a conclusion does not follow. Despite the overwhelming tendency to interpret the syllogistic as formal epistemology, it was not until the early 1970s that it occurred to anyone to think that Aristotle may have developed a theory of deductive reasoning with a well worked-out system of deductions comparable in rigor and precision with systems such as propositional logic or equational logic familiar from mathematical logic. When modern logicians in the 1920s and 1930s first turned their attention to the problem of understanding Aristotle's contribution to logic in modern terms, they were guided both by the Frege-Russell conception of logic as formal ontology and at the same time by a desire to protect Aristotle from possible charges of psychologism. They thought they saw Aristotle applying the informal axiomatic method to formal ontology, not as making the first steps into formal epistemology. They did not notice Aristotle's description of deductive reasoning. Ironically, the formal axiomatic method (in which one explicitly presents not merely the substantive axioms but also the deductive processes used to derive theorems from the axioms) is incipient in Aristotle's presentation.
Partly in opposition to the axiomatic, ontically-oriented approach to Aristotle's logic and partly as a result of attempting to increase the degree of fit between interpretation and text, logicians in the 1970s working independently came to remarkably similar conclusions to the effect that Aristotle indeed had produced the first system of formal deductions. They concluded that Aristotle had analyzed the process of deduction and that his achievement included a semantically complete system of natural deductions including both direct and indirect deductions. Where the interpretations of the 1920s and 1930s attribute to Aristotle a system of propositions organized deductively, the interpretations of the 1970s attribute to Aristotle a system of deductions, or extended deductive discourses, organized epistemically. The logicians of the 1920s and 1930s take Aristotle to be deducing laws of logic from axiomatic origins; the logicians of the 1970s take Aristotle to be describing the process of deduction and in particular to be describing deductions themselves, both those deductions that are proofs based on axiomatic premises and those deductions that, though deductively cogent, do not establish the truth of the conclusion but only that the conclusion is implied by the premise-set. Thus, two very different and opposed interpretations had emerged, interestingly both products of modern logicians equipped with the theoretical apparatus of mathematical logic. The issue at stake between these two interpretations is the historical question of Aristotle's place in the history of logic and of his orientation in philosophy of logic. This paper affirms Aristotle's place as the founder of logic taken as formal epistemology, including the study of deductive reasoning. A by-product of this study of Aristotle's accomplishments in logic is a clarification of a distinction implicit in discourses among logicians--that between logic as formal ontology and logic as formal epistemology.
The classical rule of Repetition says that if you take any sentence as a premise, and repeat it as a conclusion, you have a valid argument. It's a very basic rule of logic, and many other rules depend on the guarantee that repeating a sentence, or really, any expression, guarantees sameness of referent, or semantic value. However, Repetition fails for token-reflexive expressions. In this paper, I offer three ways that one might replace Repetition, and still keep an interesting notion of validity. Each is a fine way to go for certain purposes, but I argue that one in particular is to be preferred by the semanticist who thinks that there are token-reflexive expressions in natural languages.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been somewhat neglected. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
The need to distinguish between logical and extra-logical varieties of inference, entailment, validity, and consistency has played a prominent role in meta-ethical debates between expressivists and descriptivists. But, to date, the importance that matters of logical form play in these distinctions has been overlooked. That’s a mistake given the foundational place that logical form plays in our understanding of the difference between the logical and the extra-logical. This essay argues that descriptivists are better positioned than their expressivist rivals to provide the needed account of logical form, and so better able to capture the needed distinctions. This finding is significant for several reasons: First, it provides a new argument against expressivism. Second, it reveals that descriptivists can make use of this new argument only if they are willing to take a controversial—but plausible—stand on claims about the nature and foundations of logic.
This article provides a critical assessment of Habermas’s recent work on religion and its role in the public sphere by comparing it to Kant’s philosophy of religion on the one hand and that of Kierkegaard on the other. It is argued that although Habermas is in many ways a Kantian, he diverges from Kant when it comes to religion, by taking a position which comes closer to the Kierkegaardian view that religiousness belongs to private faith rather than philosophy. This has implications not just for the conception of religion but also for the very roles of communication, validity, rationality, and philosophy.
With his distinction between the "context of discovery" and the "context of justification", Hans Reichenbach gave the traditional difference between genesis and validity a modern standard formulation. Reichenbach's distinction is one of the well-known ways in which the expression "context" is used in the theory of science. My argument is that Reichenbach's concept is unsuitable and leads to contradictions in the semantic fields of genesis and validity. I would like to demonstrate this by examining the different meanings of Reichenbach's context distinction. My investigation also shows how the difference between genesis and validity precedes Reichenbach's context distinction and indicates approaches for meaningful applications of the concept of context to the phenomena designated by Reichenbach. I will begin by reconstructing the way in which Reichenbach introduces the distinction between discovery and justification as a difference of contexts (I). Drawing on the numerous meanings of the term "context", I will then emphasize some chief characteristics and review, through exemplification, the usage of this term. First of all, I turn to the context of discovery as the nonrational part of all scientific knowledge and show that this meaning cannot be defined consistently (Ia). For the context of justification, one can distinguish two main cases: the context of justification is either contrasted with the context of discovery, or it forms a unit therewith. In the first case, the use of the context term becomes paradoxical, insofar as justification separated from scientific practice does not represent a field of reference which could be specifically contrasted with another field of reference (Ib). In the second case, the unifying definitions contradict the contextual meaning of discovery and justification (Ic).
In the last section, I point to a useful application of the concept of context which can be found in Reichenbach's argumentation and which refers to the practical conditions of justification (2).
A detailed presentation of Stoic logic, part one, including their theories of propositions (or assertibles, Greek: axiomata), demonstratives, temporal truth, simple propositions, non-simple propositions (conjunction, disjunction, conditional), quantified propositions, logical truths, modal logic, and general theory of arguments (including definition, validity, soundness, classification of invalid arguments).
I discuss Prawitz’s claim that a non-reliabilist answer to the question “What is a proof?” compels us to reject the standard Bolzano-Tarski account of validity, and to account for the meaning of a sentence in broadly verificationist terms. I sketch what I take to be a possible way of resisting Prawitz’s claim---one that concedes the anti-reliabilist assumption from which Prawitz’s argument proceeds.
We study the modal logic MLr of the countable random frame, which is contained in and `approximates' the modal logic of almost sure frame validity, i.e. the logic of those modal principles which are valid with asymptotic probability 1 in a randomly chosen finite frame. We give a sound and complete axiomatization of MLr and show that it is not finitely axiomatizable. Then we describe the finite frames of that logic and show that it has the finite frame property and its satisfiability problem is in EXPTIME. All these results easily extend to temporal and other multi-modal logics. Finally, we show that there are modal formulas which are almost surely valid in the finite, yet fail in the countable random frame, and hence do not follow from the extension axioms. Therefore the analog of Fagin's transfer theorem for almost sure validity in first-order logic fails for modal logic.
Validity of physical laws for any aspect of brain activity and strict correlation of mental to physical states of the brain do not imply, with logical necessity, that a complete algorithmic theory of the mind-body relation is possible. A limit of decodability may be imposed by the finite number of possible analytical operations which is rooted in the finiteness of the world. It is considered as a fundamental intrinsic limitation of the scientific approach comparable to quantum indeterminacy and the theorems of logical undecidability. An analysis of these limits, applied to dispositions of future behaviour, suggests that limits of decodability of the psycho-physic relation may actually exist with respect to brain states with self-referential aspects, as they are involved in mental processes. Limits for an algorithmic theory of the mind-body problem suggested by this study are formally similar to other intrinsic limits of the scientific method such as quantum indeterminacy and mathematical undecidability which are also related to self-referential operations. At the metatheoretical level, hard sciences, despite their reliability, universality and objectivity, depend on metatheoretical presuppositions which allow for multiple philosophical interpretations.
The standard definition of “argument” is satisfied by any series of statements in which one (of the statements) is marked as the conclusion of the others. This leads to the counter-intuitive result that “I like cookies, therefore, all swans are white” is an argument, since “therefore” marks “all swans are white” as the conclusion of “I like cookies”. This objection is often disregarded by stating that, although the previous sequence is an argument, it fails to be a good one. However, when we compare our previous argument with a definitely bad argument like “this swan is white, therefore, all swans are white”, we see that there is an important difference between them. Whereas the former fails to fulfil our intuition of what an argument is, the latter does qualify as an argument, but as a bad one. In this talk, I will sketch a definition that better captures this feature of our intuition of what an argument is in three steps. Following Díez and Moulines (1999), I first reduce inductive validity to deductive validity through what we may call the method of deductivisation. Second, through epistemic predicates (cf. Thompson 2002), I introduce a broader concept of validity that accounts not only for deductive and inductive validity, but also for a weaker type of validity that may be called pseudo-validity. I show that these pseudo-valid arguments can also be deductivised with the help of the above-mentioned epistemic predicates. Finally, I re-define the concept of argument as any series of statements that is at least pseudo-valid, which leaves the “cookies argument” outside of this definition.