In this paper, I explore Bach's idea (Bach, 2000) that null appositives, intended as expanded qua-clauses, can resolve the puzzles of belief reports. These puzzles are crucial to understanding the semantics and pragmatics of belief reports and are presented in a dedicated section. I propose that Bach's strategy is not only a way of dealing with the puzzles, but also an ideal way of dealing with belief reports. I argue that even simple, unproblematic cases of belief reports are cases of pragmatic intrusion, involving null appositives, or to use Bach's words, 'qua-clauses'. The main difference between my pragmatic approach and Salmon's (1986) is that Salmon uses the notion of conversational implicature, whereas I use the notions of pragmatic intrusion and explicature. From my point of view, statements such as "John believes that Cicero is clever" and "John believes that Tully is clever" have distinct truth-values. In other words, I claim that belief reports in the default case inform the hearer about the mental life of the believer, which includes specific modes of presentation of the referents talked about. Furthermore, while in other pragmatic approaches it is mysterious how a mode of presentation is assumed to be the main filter of the believer's mental life, here I provide an explanatory account in terms of relevance, cognitive effects, and processing efforts. The most important part of the paper is devoted to showing that null appositives are required, in the case of belief reports, to explain certain anaphoric effects which would otherwise be mysterious. My examples show that null appositives are not necessitated at logical form, but only at the level of the explicature, in line with the standard assumptions by Carston and Recanati on pragmatic intrusion. I develop a potentially useful analysis of belief reports by exploiting syntactic and semantic considerations on presuppositional clitics in Romance.
In the Tractatus, Wittgenstein held that there are null sentences – prominently including logical truths and the truths of mathematics. He says that such sentences are without sense (sinnlos), that they say nothing; he also denies that they are nonsensical (unsinnig). Surely it is what a sentence says that is true or false. So if a sentence says nothing, how can it be true or false? The paper discusses the issue.
The meaning of an experimental result depends on the experiment's conceptual backdrop, particularly its null hypothesis. This observation provides the basis for a functional interpretation of belief in the base rate fallacy. On this interpretation, if the base rate fallacy is to be labelled a “myth,” then it should be recognized that this label is not necessarily a disparaging one.
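Since the base rate fallacy turns on neglecting prior probabilities, a worked example may help fix ideas. The following is a minimal sketch, not drawn from the paper; the prevalence, sensitivity, and false-positive numbers are hypothetical illustration values.

```python
# A minimal sketch of the base rate fallacy using Bayes' theorem.
# The numbers (1% prevalence, 90% sensitivity, 9% false-positive rate)
# are hypothetical illustration values, not taken from the paper.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(f"P(condition | positive) = {p:.3f}")  # ~0.092, far below 0.90
```

Even with a 90% sensitive test, the low base rate keeps the posterior under 10%; the fallacy consists in reporting something near 90% instead.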
In this article I give a critical evaluation of the use and limitations of null-model-based hypothesis testing as a research strategy in the biological sciences. According to this strategy, the null model based on a randomization procedure provides an appropriate null hypothesis stating that the existence of a pattern is the result of random processes or can be expected by chance alone, and proponents of other hypotheses should first try to reject this null hypothesis in order to demonstrate their own hypotheses. Using as an example the controversy over the use of null hypotheses and null models in species co-occurrence studies, I argue that null-model-based hypothesis testing fails to work as a proper analog to traditional statistical null-hypothesis testing as used in well-controlled experimental research, and that the random process hypothesis should not be privileged as a null hypothesis. Instead, the possible use of the null model resides in its role of providing a way to challenge scientists' commonsense judgments about how a seemingly unusual pattern could have come to be. Despite this possible use, null-model-based hypothesis testing still carries certain limitations, and it should not be regarded as an obligation for biologists who are interested in explaining patterns in nature to first conduct such a test before pursuing their own hypotheses.
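To make the strategy under critique concrete, here is a minimal sketch of null-model-based hypothesis testing for species co-occurrence, under my own simplifying assumptions (it is not the article's code): the null model shuffles each species' occurrences across sites while preserving occurrence totals, and an observed segregation statistic is compared against the randomized distribution.

```python
# A minimal sketch of a randomization null model for species
# co-occurrence (illustrative only). Rows = species, columns = sites;
# the null model permutes each species' occurrences across sites,
# preserving each species' total number of occurrences.
import numpy as np

rng = np.random.default_rng(0)

def segregated_pairs(matrix):
    """Count species pairs that never co-occur at any site."""
    n = matrix.shape[0]
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if not np.any(matrix[i] & matrix[j])
    )

obs = np.array([[1, 1, 0, 0],
                [0, 0, 1, 1],
                [1, 0, 1, 0]])  # hypothetical presence/absence data

observed = segregated_pairs(obs)
null_stats = []
for _ in range(999):
    randomized = np.array([rng.permutation(row) for row in obs])
    null_stats.append(segregated_pairs(randomized))

p = (1 + sum(s >= observed for s in null_stats)) / 1000
print(f"observed = {observed}, one-tailed p = {p:.3f}")
```

The article's point can be read off the setup: rejecting this null tells us only that the pattern is unlikely under one particular randomization scheme, not that any specific ecological process produced it.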
In this paper, I develop an essentialist model of the semantics of slurs. I defend the view that slurs are a species of kind terms: slur concepts encode mini-theories which represent an essence-like element that is causally connected to a set of negatively-valenced stereotypical features of a social group. The truth-conditional contribution of slur nouns can then be captured by the following schema: for a given slur S of a social group G and a person P, S is true of P iff P bears the "essence" of G—whatever this essence is—which is causally responsible for stereotypical negative features associated with G and predicated of P. Since there is no essence that is causally responsible for stereotypical negative features of a social group, slurs have null extension, and consequently, many sentences containing them are either meaningless or false. After giving a detailed outline of my theory, I show that it receives strong linguistic support. In particular, it can account for a wide range of linguistic cases that are regarded as challenging, central data for any theory of slurs. Finally, I show that my theory also receives convergent support from cognitive psychology and psycholinguistics.
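The schema in the abstract can be written out formally. The following LaTeX rendering is my own paraphrase; the predicate names (Essence, Cause, NegStereo, Bears) are illustrative, not the paper's notation.

```latex
% Hedged formalization of the abstract's truth-conditional schema;
% predicate names are illustrative, not the paper's own notation.
\[
\llbracket S \rrbracket(P) = 1 \;\iff\;
\exists e\,\big(\mathrm{Essence}(e, G)
  \wedge \mathrm{Cause}\big(e, \mathrm{NegStereo}(G)\big)
  \wedge \mathrm{Bears}(P, e)\big)
\]
```

Since no such essence e exists, the existential condition is never satisfied, which is how the null extension of slurs falls out of the schema.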
The ecologist today finds scarce ground safe from controversy. Decisions must be made about what combination of data, goals, methods, and theories offers them the foundations and tools they need to construct and defend their research. When push comes to shove, ecologists often turn to philosophy to justify why it is their approach that is scientific. Karl Popper's image of science as bold conjectures and heroic refutations is routinely enlisted to justify testing hypotheses over merely confirming them. One of the most controversial theories in contemporary science is the Neutral Theory of Ecology. Its chief developer and proponent, Stephen Hubbell, presents the neutral theory as a bold conjecture that has so far escaped refutation. Critics of the neutral theory claim that it already stands refuted, despite what the dogmatic neutralists say. We see the controversy through a Popperian lens. But Popper's is an impoverished philosophy of science that distorts contemporary ecology.

The controversy surrounding the neutral theory actually rests on a methodological fault. There is a strong but messy historical link between the concepts of being neutral and being null in biology, and Hubbell perpetuates this when he claims that the neutral theory supplies the appropriate null for testing alternative theories. What method is being followed here? There are three contenders: null hypothesis testing tests for whether there is a pattern to be explained; null modeling tests for whether a process is causally relevant to a pattern; baseline modeling apportions relative responsibility to multiple processes each relevant to a pattern.

Whether the neutral theory supplies an appropriate "null" depends upon whether null hypothesis testing, null modeling, or baseline modeling is intended. These methods prescribe distinct inference patterns. If it is null hypothesis testing or null modeling, the neutralists' reasoning is invalid. If it is baseline modeling, the justification of a crucial assumption remains opaque. Either way, the neutral-null connection is being exploited rhetorically to privilege the neutral theory over its rivals. Clarifying the reasoning immunizes us against the rhetoric and foregrounds the underlying virtues of the neutralist approach to ecology.

The Popperian lens distorts theoretical development as dogmatism. Lakatos's view of science as the development of research programmes clarifies the epistemology of the neutral theory. Focusing philosophical attention on the neutralist research programme illuminates (1) the synchronic uses of the neutral theory to make predictions and give descriptions and explanations; (2) its diachronic development in response to theoretical innovation and confrontation with data; (3) its complex relationships to alternative theories. For example, baseline modeling is now seen to be its primary explanatory heuristic. The justification for baseline modeling with the neutral theory, previously hidden from view, is seen in the logic of the neutralist research programme.
The purpose of this article is to present several immediate consequences of the introduction of a new constant called Lambda, representing the object "nothing" or "void", into a standard set theory. The use of Lambda will appear natural thanks to its role as a condition of possibility of sets. On a conceptual level, the use of Lambda leads to a legitimation of the empty set and to a redefinition of the notion of set. It also brings out clearly the distinction between the empty set, the nothing, and the ur-elements. On a technical level, we introduce the notion of pre-element and we suggest a formal definition of the nothing, distinct from that of the null-class. Among other results, we get a relative resolution of the anomaly of the intersection of a family free of sets and the possibility of building the empty set from "nothing". The theory is presented with equi-consistency results. On both conceptual and technical levels, the introduction of Lambda leads to a resolution of Russell's puzzle of the null-class.
This article consists of two parts that are complementary and autonomous at the same time.

In the first, we develop some surprising consequences of the introduction of a new constant called Lambda, representing the object "nothing" or "void", into a standard set theory. On a conceptual level, it allows us to see sets in a new light and to give a legitimacy to the empty set. On a technical level, it leads to a relative resolution of the anomaly of the intersection of a family free of sets.

In the second part, we show the interest of introducing an operator of potentiality into a standard set theory. Among other results, this operator allows us to prove the existence of a hierarchy of empty sets and to propose a solution to the puzzle of the "ubiquity" of the empty set.

Both theories are presented with equi-consistency results (model and interpretation).

Here is a declaration of intent: in each case, the starting point is a conceptual questioning; the technical tools come second.

Keywords: nothing, void, empty set, null-class, zero-order logic with quantifiers, potential, effective, ubiquity, hierarchy, equality, equality by the bottom, identity, identification.
The significance and use of the absence of a thing are highlighted just as much as its presence. The role of absence in easing conceptualization in various disciplines, including mathematics, physics, semiconductor electronics, computing, and the cognitive sciences, is discussed. The uses of the null set, the null vector, and the null matrix are also presented.
A collection of material on Husserl's Logical Investigations, and specifically on Husserl's formal theory of parts, wholes and dependence and its influence in ontology, logic and psychology. Includes translations of classic works by Adolf Reinach and Eugenie Ginsberg, as well as original contributions by Wolfgang Künne, Kevin Mulligan, Gilbert Null, Barry Smith, Peter M. Simons, Roger A. Simons and Dallas Willard. Documents work on Husserl's ontology arising out of early meetings of the Seminar for Austro-German Philosophy.
This article discusses various dangers that accompany the supposedly benign methods in behavioral evolutionary biology and evolutionary psychology that fall under the framework of "methodological adaptationism." A "Logic of Research Questions" is proposed that aids in clarifying the reasoning problems that arise due to the framework under critique. The live, and widely practiced, "evolutionary factors" framework is offered as the key comparison and alternative. The article goes beyond the traditional critique of Stephen Jay Gould and Richard C. Lewontin to present problems such as the disappearance of evidence, the mishandling of the null hypothesis, and failures in scientific reasoning, exemplified by a case from human behavioral ecology. In conclusion, the paper shows that "methodological adaptationism" does not deserve its benign reputation.
It is sometimes argued that certain sentences of natural language fail to express truth-conditional contents. Standard examples include 'Tipper is ready' and 'Steel is strong enough'. In this paper, we provide a novel analysis of truth-conditional meaning using the notion of a question under discussion. This account explains why these types of sentences are not, in fact, semantically underdetermined, provides a principled analysis of the process by which natural language sentences can come to have enriched meanings in context, and shows why various alternative views, e.g. so-called Radical Contextualism, Moderate Contextualism, and Semantic Minimalism, are partially right in their respective analyses of the problem, but also all ultimately wrong. Our analysis achieves this result using a standard truth-conditional and compositional semantics and without making any assumptions about enriched logical forms, i.e. logical forms containing phonologically null expressions.
The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two types of Type I error probability: the Neyman–Pearson long run Type I error rate and the Fisherian sample-specific Type I error probability. It is argued that the Neyman–Pearson Type I error rate is inapplicable in social science because it refers to a long run of exact replications, and social science deals with irreversible units that make exact replications impossible. Instead, the Fisherian sample-specific Type I error probability is recommended as a more meaningful way to conceptualize false positive results in social science because it can be applied to each sample-specific decision about rejecting the same substantive null hypothesis in a series of direct replications. It is concluded that the replication crisis may be partly due to researchers' unrealistic expectations about replicability based on their consideration of the Neyman–Pearson Type I error rate across a long run of exact replications.
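The contrast between the two error probabilities can be made concrete in a short simulation. The sketch below is illustrative only (it is not the article's analysis); it treats repeated draws from the same population as stand-ins for exact replications.

```python
# A minimal sketch contrasting the two Type I error concepts discussed
# in the abstract. Neyman-Pearson: the long-run rejection rate over
# exact replications of a true null. Fisher: the p-value attached to
# one specific sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, runs = 0.05, 30, 10_000

# Long-run rate: fraction of exact replications rejecting a true null.
rejections = 0
for _ in range(runs):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # null is true
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    rejections += p < alpha
print(f"long-run Type I error rate ~ {rejections / runs:.3f}")  # ~0.05

# Sample-specific probability: the p-value of this one sample.
one_sample = rng.normal(loc=0.0, scale=1.0, size=n)
_, p = stats.ttest_1samp(one_sample, popmean=0.0)
print(f"this sample's p-value = {p:.3f}")
```

The article's claim, on this picture, is that only the second quantity transfers to social science, where the loop over exact replications cannot be realized.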
This paper defends a realist account of the composition of Newtonian forces, dubbed 'residualism'. According to residualism, the resultant force acting on a body is identical to the component forces acting on it that do not prevent each other from bringing about its acceleration. Several reasons to favor residualism over alternative accounts of the composition of forces are advanced. (i) Residualism reconciles realism about component forces with realism about resultant forces while avoiding any threat of causal overdetermination. (ii) Residualism provides a systematic semantics for the term 'force' within Newtonian mechanics. (iii) Residualism allows us to precisely apportion the causal responsibility of each component force in the ensuing acceleration. (iv) Residualism handles special cases such as null forces, single forces, and antagonistic forces in a natural way. (v) Residualism provides a neat picture of the causal powers of forces: each force essentially has two causal powers, the power to bring about accelerations (sometimes together with other co-directional forces) and the power to prevent other forces from doing so, exactly one of which is manifested at a time. (vi) Residualism avoids commitment to unobservable effects of forces: forces cause either stresses (tensile or compressive) or accelerations.
The standard versions of predicativism are committed to the following two theses: proper names are count nouns in all their occurrences, and names do not refer to objects but express name-bearing properties. The main motivation for predicativism is to provide a uniform explanation of referential names and predicative names. According to predicativism, predicative names are fundamental and referential names are explained by appealing to a null determiner functioning like "the" or "that." This paper has two goals. The first is to reject the predicativists' explanation of the two types of names. I present three syntactic counterexamples to the predicativists' account of referential names: incorporation, modification, and measure phrase uses. The second goal is to present a novel strategy to explain the two types of names. I propose that referential names are fundamental but that there are null morphemes available for transforming a name into a count noun.
We argue that genuine modal realism can be extended, rather than modified, so as to allow for the possibility of nothing concrete, a possibility we term 'metaphysical nihilism'. The issue should be important to the genuine modal realist because, not only is metaphysical nihilism itself intuitively plausible, but also it is supported by an argument with pre-theoretically credible premises, namely, the subtraction argument. Given the soundness of the subtraction argument, we show that there are two ways that the genuine modal realist can accommodate metaphysical nihilism: (i) by allowing for worlds containing only spatiotemporal points and (ii) by allowing for a world containing nothing but the null individual. On methodological grounds, we argue that the genuine modal realist should reject the former way but embrace the latter way.
Research in ecology and evolutionary biology (evo-eco) often tries to emulate the "hard" sciences such as physics and chemistry, but to many of its practitioners feels more like the "soft" sciences of psychology and sociology. I argue that this schizophrenic attitude is the result of a lack of appreciation of the full consequences of the peculiarity of the evo-eco sciences as lying in between a-historical disciplines such as physics and completely historical ones like paleontology. Furthermore, evo-eco researchers have gotten stuck on mathematically appealing but philosophically simplistic concepts such as null hypotheses and p-values defined according to the frequentist approach in statistics, with the consequence of having been unable to fully embrace the complexity and subtlety of the problems with which ecologists and evolutionary biologists deal. I review and discuss some literature in ecology, philosophy of science, and psychology to show that a more critical methodological attitude can be liberating for the evo-eco scientist and can lead to a more fecund and enjoyable practice of ecology and evolutionary biology. With this aim, I briefly cover concepts such as the method of multiple hypotheses, Bayesian analysis, and strong inference.
The purpose of this paper was to examine institutional variables and the supervision of security in secondary schools in Cross River State. The study specifically sought to determine whether there was a significant influence of school population, school type, and school location on the supervision of security in public secondary schools in Cross River State. Three null hypotheses were formulated accordingly to guide the study. A sample of 360 students and 120 teachers, a total of 480 respondents, was used for the study. The instrument used for data collection was a questionnaire, while an independent t-test was used to analyze the data and test the hypotheses at the .05 level of significance using Microsoft Excel version 2013. The results of the findings revealed that school population, school type, and school location all have an influence on the supervision of security in public secondary schools of Cross River State. It was also revealed that lowly populated, mixed-gender, and urban public secondary schools were more efficient in the supervision of security than their highly populated, single-gender, and rural counterparts. Based on the findings of this study, conclusions were drawn and recommendations were made.
This thesis inquires into what it means to interpret non-relativistic quantum mechanics (QM), and the philosophical limits of this interpretation. In pursuit of a scientific-realist stance, a metametaphysical method is expanded and applied to evaluate rival interpretations of QM, based on the conceptual distinction between ontology and metaphysics, for objective theory choice in metaphysical discussions relating to QM. Three cases are examined in which this metametaphysical method is asked to indicate which alternatives for interpreting QM in metaphysical terms are wrong. In the first two cases it fails to do so, owing to different kinds of underdetermination. In the third case, unlike underdetermination, where there are many choices to be made, a "null-determination" is proposed, where there may be no metaphysical choices in the available metaphysical literature. Considering what has been discussed, an agnostic philosophical position is adopted concerning the possibility of interpreting QM from a scientific-realist point of view.
This study assessed quality assurance practices and students' performance evaluation in universities of South-South Nigeria using an SEM approach. Three null hypotheses guided the study. Based on a factorial research design, and using a stratified random sampling technique, a sample of 878 academic staff was drawn from a sampling frame of 15 universities in South-South Nigeria. The Quality Assurance Practices and Students' Performance Evaluation Scale (QAPSPES), with split-half reliability estimates ranging from .86 to .92, was used as the instrument for data collection. Multiple regression and Confirmatory Factor Analysis (CFA) were used for the analysis of data, model building, and testing of the hypotheses at the .05 alpha level. Findings showed a significant composite and relative influence (F=48.19, P<.05) of school management, staff, and students' quality assurance practices on students' performance evaluation. The results also indicated that there were positive and significant covariances between the four variables of this study, with the CFI, RMSEA, TLI, and SRMR values indicating a good model fit. It was recommended, based on the findings of this study, that each school should organize quality assurance orientation campaigns for new students and set up quality assurance committees at the school, faculty, and departmental levels for optimal performance in schools.
The study examined personnel management and corrupt academic practices in universities in Cross River State, Nigeria. In achieving this objective, two research questions and two null hypotheses were posed and formulated respectively to guide the study. The study adopted a factorial research design, while the population of the study included all the academic staff and students from the University of Calabar and Cross River University of Technology. A purposive sampling technique was employed to select 1200 students and 200 lecturers from both universities, resulting in a sample of 1400 respondents. The instrument used for data collection was a 25-item rating scale designed by the researcher to assess both students and lecturers respectively. The collected data were analyzed using descriptive statistics, while the null hypotheses were tested at the .05 alpha level using multiple regression analysis. The results of the analysis revealed that discipline and remuneration of lecturers influenced lecturers' corrupt academic practices in the universities, with remuneration having the most influence. The findings of the study also revealed that discipline and supervision of university students have a joint significant influence on university students' corrupt academic practices, with students' supervision having the most influence on corrupt academic practices. Based on these findings, it was recommended among other things that university lecturers should be properly remunerated through frequent payment of salaries and other wages in order to ensure that they do not lack food and other resources to manage and take proper care of their families; students should be supervised properly during examinations and in other academic and co-curricular activities of the universities.
The study was conducted to compare manual and computerized software techniques of data management and analysis in educational research. Specifically, the study investigated whether there was a significant difference in the results of Pearson correlation, independent t-test, and ANOVA obtained from using manual and computerized software techniques of data analysis. Three null hypotheses were formulated accordingly to guide the study. The study adopted a quasi-experimental research design where several data sets were generated by the researchers and analyzed using manual and computerized software techniques. The data were generated to suit the required data of each statistical method of analysis. A CASIO fx-991ES PLUS NATURAL DISPLAY scientific calculator and statistical tables were used for the manual analysis, while the data analysis tool pack of Microsoft Excel version 2013 was used for the computerized software analysis. The results of the analysis revealed that both manual and computerized software techniques yielded the same results for Pearson correlation, independent t-test, and ANOVA. It was concluded that though both manual and computerized techniques are reliable and dependable, the computerized technique is faster and more efficient in managing and analyzing data than the manual technique. It was recommended, among other things, that either technique may be used without fear when computing Pearson correlation, independent t-test, and one-way ANOVA, as the same results will be obtained.
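The study's central point, that a definitional formula computed by hand and a statistical package return identical values, can be illustrated in a few lines. The sketch below uses hypothetical data and Python in place of a calculator and Excel, so it mirrors the comparison only in spirit.

```python
# A minimal sketch of the abstract's point that manual formulas and
# statistical software agree: Pearson's r computed from the textbook
# formula versus scipy, on hypothetical data.
import numpy as np
from scipy import stats

x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 4.0, 8.0, 9.0])

# "Manual" computation from the definitional formula.
r_manual = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(
    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2)
)

# "Software" computation.
r_soft, _ = stats.pearsonr(x, y)

print(f"manual r = {r_manual:.6f}, scipy r = {r_soft:.6f}")  # identical
```

Both routes evaluate the same formula, so agreement is guaranteed up to floating-point rounding; the practical difference, as the study concludes, is speed and convenience.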
In what follows, I suggest that, against most theories of time, there really is an actual present, a now, but that such an eternal moment cannot be found before or after time. It may even be semantically incoherent to say that such an eternal present exists since "it" is changeless and formless (presumably a dynamic chaos without location or duration) yet with creative potential. Such a field of near-infinite potential energy could have had no beginning and will have no end, yet within it stirs the desire to experience that brings forth singularities, like the one that exploded into the Big Bang (experiencing itself through relative and relational spacetime). From the perspective of the eternal now of near-infinite possibilities (if such a sentence can be semantically parsed at all), there is only the timeless creative present, so the Big Bang did not happen some 13 billion years ago. Inasmuch as there is neither time past nor time future nor any time at all at the null point of forever, we must understand the Big Bang (and all other events) as taking place right here and now. In terms of the eternal now, the beginning is happening now and we just appeared (and are always just appearing) to witness it. The rest is all conscious construction; time and experience are so entangled, they need each other to exist.
This study was aimed at examining practicum exercise and the attitudes of pre-service educational administrators in Cross River State. Pre-service administrators' attitudes were assessed in the areas of self-discipline, time management, and record keeping. Three null hypotheses were formulated to offer direction to the study. The study adopted a quasi-experimental research design. Pre-service administrators with practicum experience were the experimental group, while those without practicum experience were the control group. Cluster and simple random sampling techniques were adopted in selecting 60 final-year students and 60 year-three students (or their equivalent) out of a population of 220 final-year and 208 year-three students (or their equivalent) from the NUC and CES programmes respectively. The instrument used for data collection was the Practicum Exercise, Study Habits, and Record-Keeping Abilities Questionnaire (PESDTMARKAQ). An independent t-test was used to test the null hypotheses at the .05 level of significance using the Microsoft Excel 2013 data analysis tool pack. The results of the study showed that practicum exercise had no effect on pre-service administrators' self-discipline and record-keeping attitudes; practicum exercise was found to affect the time management attitudes of pre-service educational administrators. Based on these findings, it was recommended, among several others, that pre-service administrators with practicum experience should make efforts to develop their level of self-discipline by enacting and obeying personal policies that are favorable to their academic growth and progress.
This study assessed students' personnel management and academic effectiveness in terms of punctuality to classes, time management, study habits, record keeping, attitudes during classes, note taking, attitudes towards assignments, examination results, and attitudes towards co-curricular activities in the Calabar Education Zone of Cross River State. Three null hypotheses were formulated accordingly to guide the study, following a descriptive survey research design. A proportionate sampling technique was employed in selecting a sample of 1,934 students (representing 20%) from a population of 9,672 students. The Students' Personnel Management and Academic Effectiveness Questionnaire (SPMAEQ) was used as the instrument for data collection. The instrument yielded reliability estimates of .86 and .93 for the independent and dependent variables using the split-half technique. The null hypotheses were all tested at the .05 level of significance using a Pearson correlation matrix with the aid of SPSS v25. Findings from the study revealed, among other things, that students' counseling, healthcare, and discipline management are each significantly related to students' academic effectiveness in terms of punctuality to classes, time management, study habits, record keeping, attitudes during classes, note taking, attitudes towards assignments, examination results, and attitudes towards co-curricular activities. It was concluded generally from the findings of this study that there is a moderate positive relationship, which is statistically significant, between students' personnel management and their academic effectiveness. It was recommended, amongst others, that there should be adequate employment and supply of professional guidance counselors to all secondary schools to boost the psychological levels of students and make them emotionally prepared for academic and co-curricular activities in the school.
Management of school-related variables and teachers' job effectiveness in secondary schools in Calabar South Local Government Area, Cross River State was the main thrust of this study. Four research questions were raised, and four hypotheses were formulated to direct the study. The descriptive survey design was adopted for the study, while the total population of 208 secondary school teachers in Calabar South Local Government Area was selected for the study using the census technique. A questionnaire titled "Management of School-Related Variables and Teachers' Job Effectiveness in Secondary School Questionnaire (MSRVTJESSQ)", designed by the researcher, was used as the instrument to collect data from the respondents. The null hypotheses were tested at the 0.05 level of significance using Pearson Product Moment Correlation, independent t-test, and one-way Analysis of Variance statistical techniques where applicable. It was found that managing class size, school management style, and school location each have a significant influence on teachers' job effectiveness in Calabar South Local Government Area of Cross River State. It was recommended, amongst others, that school principals should ensure that they adopt a more contingent management style where different situations will warrant the use of different techniques, and that the recommended teacher-pupil ratio of 1:35 should be maintained.
Neutral Theory is controversial in ecology. Ecologists and philosophers have diagnosed the source of the controversy as: its false assumption that individuals in different species within the same trophic level are ecologically equivalent, its conflict with Competition Theory and the adaptation of species, its role as a null hypothesis, and its status as a Lakatosian research programme. In this paper, I show why we should instead understand the conflict at the level of research programs, which involve more than theory. The Neutralist and Competitionist research programs borrow and construct theories, models, and experiments for various aims and given their home ecological systems. I present a holistic and pragmatic view of the controversy that foregrounds the interrelation between many kinds of practices and decisions in ecological research.
Assessment of students' attitudes towards test-taking in secondary schools in the Afikpo Education Zone of Ebonyi State, Nigeria was the main thrust of this study. The study was guided by four null hypotheses in line with the ex-post facto research design. A proportionate stratified random sampling technique was employed in selecting a sample of 1,276 respondents from a population of 12,763 students distributed across 43 public and 71 private secondary schools in the study area. The Students' Attitudes Towards Test-Taking Questionnaire (SATTQ), with a Cronbach's alpha reliability coefficient of .893, was used for data collection. The null hypotheses were all tested at the .05 level of significance using population t-test and independent t-test statistical methods. Emerging findings showed that the level of students' attitudes towards test-taking as an academic activity in secondary schools is significantly high. It was also shown that male students and students in urban and private schools significantly differ in their attitudes towards test-taking as an academic activity from female students and students in rural and public secondary schools. Based on these findings, it was recommended, amongst others, that all students, irrespective of gender, school type, and school location, should be properly counselled by both teachers and professional counsellors to develop positive attitudes towards taking tests in schools.
The rationale of this study was to examine the interactive effect of gender, test anxiety, and test item sequencing on academic performance in mathematics among SS3 students in the Calabar Education Zone, Cross River State. Two formulated null hypotheses directed the study. The study adopted a quasi-experimental design. A simple random sampling technique was used in drawing a sample of 474 students from a population of 8,549 SS3 students. A Mathematics Achievement Test (MAT) and a Test Anxiety Scale (TAS) were used primarily as the instruments for data collection. The reliability coefficients obtained for the instruments were .88 and .82 respectively. The data collected were analyzed using descriptive statistics such as the mean and standard deviation, while the null hypotheses were tested using Pearson product-moment correlation and Analysis of Covariance where applicable. Findings indicated that test anxiety contributes negatively to academic performance in Mathematics, and that there are significant interaction effects between item sequencing and gender, between item sequencing and test anxiety, and between gender and test anxiety on academic performance in Mathematics. Based on these findings, conclusions and recommendations were made.
The high rate of poor academic performance among mathematics education students in universities has become unbearable. In an attempt to proffer a solution to this menace, this study assessed the Poll Everywhere eLearning platform, test anxiety, and undergraduates' academic performance in Mathematics in Cross River State, Nigeria. The study adopted a quasi-experimental research design with one control group and one treatment group. The population of this study comprised all the full-time regular undergraduates offering Education Mathematics in the Department of Science Education, University of Calabar, Calabar, Nigeria. An accidental sampling technique was adopted in selecting a sample of 328 undergraduates who own smartphones from the population. A Test-Anxiety Inventory and a Mathematics Achievement Test (MAT) were used as the data collection instruments. The null hypotheses were tested using independent t-tests and simple linear regression. Findings revealed that the Poll Everywhere eLearning platform has a significant effect on students' test anxiety and academic performance in Mathematics respectively, and that there is a moderate negative and significant relationship between students' test anxiety and their academic performance in Mathematics. Based on the findings of this study, it was concluded that the Poll Everywhere eLearning platform could be used for the effective teaching and examination of learners in Mathematics and other science-related disciplines. This can be done through the design and integration of course modules into the Poll Everywhere online platform.
This study used a structural equation modelling approach to assess the association between employee work-life policies, psychological empowerment, and academic staff job commitment in universities in Cross River State, Nigeria. Three null hypotheses were formulated to guide the study, following a descriptive survey research design. A multistage sampling procedure was adopted in the selection of 315 academic staff from two universities in the study area. The "Work-Life Policies, Psychological Empowerment and Job Commitment Questionnaire (WPPEJCQ)" was used as the instrument for data collection. The construct validity of the instrument was ascertained through an Exploratory Factor Analysis (EFA) using Principal Component Analysis (PCA). A Kaiser-Meyer-Olkin value of .894 and a Bartlett coefficient of 7795.820 were obtained. Several fit indices of the Confirmatory Factor Analysis were used to accept the model, such as RMSEA=.031, TLI=.969, and CFI=.971, among others. The null hypotheses were all tested using path analysis. Findings revealed, among others, that there is a significant effect of work-life policies on the affective (β=.774, t=21.636, p<.05), continuance (β=.450, t=8.932, p<.05), and normative (β=.490, t=9.967, p<.05) dimensions of academic staff commitment; furthermore, psychological empowerment has a significant effect on the affective (β=.795, t=23.199, p<.05), continuance (β=.501, t=10.261, p<.05), and normative (β=.520, t=10.795, p<.05) dimensions of staff commitment; and there is a significant composite effect of work-life policies and psychological empowerment on the affective, continuance, and normative commitment levels of academic staff in universities. Based on these findings, conclusions and recommendations were made.
This research was carried out in order to verify Mendel's laws by simulation and to seek clarification, from the author's point of view, of the Mendel-Fisher controversy. It was demonstrated, from the experimental procedure and the first two steps of the Hardy-Weinberg law, that the null hypothesis in such experiments is absolutely and undeniably true. Consequently, repeating hybridizing experiments such as those showed by Mendel, it makes sense to expect a high coincidence between the observed and the expected cell frequencies. By simulation, 30 random samples were generated with size equal to the number of observations reported by Mendel for his single-trait trial, in this case seed shape, assuming complete dominance, with genes A and a; likewise, the results were simulated for the experiment with two traits segregating in separate chromosomes, in this case seed shape, as before, and albumen color, with genes B and b, both loci with complete dominance. In the case of a single trait, the data showed evidence for rejecting the null hypothesis (H0) in only 1/30 samples (P<0.05). In the case of the 30 samples of the two-trait experiment, H0 was rejected only 3/30 times, with α set at 0.05. In both simulations there was a high correspondence between the observed and expected cell frequencies, which is simply due to the fact that H0 is true, and under these conditions, that is what one would expect. It was concluded that Mendel had no reason to manipulate his data in order to make them coincide with his beliefs. Therefore, in experiments with a single trait, and in experiments with two traits assuming complete dominance, the segregation ratios are 3:1 and 9:3:3:1, respectively. Consequently, Mendel's laws, under the conditions described, are absolutely valid and universal.
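A minimal version of the reported simulation can be sketched as follows. The sample size of 7,324 matches Mendel's published seed-shape totals (5,474 round, 1,850 wrinkled); everything else here is my own illustrative reconstruction, not the author's script.

```python
# A minimal sketch of the kind of simulation the abstract reports:
# random monohybrid-cross offspring tested against the 3:1 ratio.
# The sample size matches Mendel's seed-shape totals; the code itself
# is an illustration, not the author's original script.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 7324  # Mendel's round + wrinkled seed count

# Each offspring draws one allele from each Aa parent; with complete
# dominance the recessive phenotype appears only for the aa genotype.
alleles = rng.integers(0, 2, size=(n, 2))  # 0 = A, 1 = a
recessive = np.all(alleles == 1, axis=1)
observed = np.array([np.sum(~recessive), np.sum(recessive)])
expected = np.array([0.75 * n, 0.25 * n])

chi2, p = stats.chisquare(observed, expected)
print(f"observed = {observed}, chi2 = {chi2:.3f}, p = {p:.3f}")
# With a true null, p < .05 should occur in only ~5% of such runs,
# matching the 1/30 and 3/30 rejection counts the abstract reports.
```

Close agreement between observed and expected frequencies is exactly what a true null predicts, which is the study's point against the data-manipulation charge.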
Dicke and Schiff established a framework for testing general relativity, including through null experiments and using the physics of space exploration, electronics, and condensed matter, such as the Pound-Rebka experiment and laser interferometry. Gravitational lens tests and the temporal delay of light are characterized by the parameter γ of the PPN formalism, equal to 1 for general relativity and taking different values in other theories. The BepiColombo mission aims to test the general theory of relativity by measuring the gamma and beta parameters of the PPN formalism. DOI: 10.13140/RG.2.2.25673.70240.
Transformed RAVAL NOTATION solves syllogism problems very quickly and accurately. This method solves any categorical syllogism problem with the same ease and is as simple as ABC. In Transformed RAVAL NOTATION, each premise and conclusion is written in abbreviated form, and the conclusion is then reached simply by connecting the abbreviated premises.

NOTATION: Statements (both premises and conclusions) are represented as follows:
a) All S are P: SS-P
b) Some S are P: S-P
c) Some S are not P: S / PP
d) No S is P: SS / PP
(- implies "are" and / implies "are not".) "All" is represented by double letters; "Some" is represented by a single letter. "No S is P" implies "No P is S", so its notation contains double letters on both sides.

RULES: (1) Conclusions are reached by connecting notations. Two notations can be linked only through common linking terms. When the common linking term multiplies (becomes double from single), divides (becomes single from double), or remains double, a conclusion is arrived at between the terminal terms. (Aristotle's rule: the middle term must be distributed at least once.)

(2) If both statements linked have - signs, the resulting conclusion carries a - sign. (Aristotle's rule: two affirmatives imply an affirmative.)

(3) Whenever statements having - and / signs are linked, the resulting conclusion carries a / sign. (Aristotle's rule: if one premise is negative, then the conclusion must be negative.)

(4) A statement having a / sign cannot be linked with another statement having a / sign to derive any conclusion. (Aristotle's rule: two negative premises imply no valid conclusion.)

Syllogism conclusions by Transformed RAVAL NOTATION are thus in accordance with Aristotle's rules. The notation is visually very transparent and conclusions can be deduced at a glance; moreover, it solves syllogism problems with any number of statements and is the quickest of all available methods. By the new RAVAL method, solving a categorical syllogism is as simple as pronouncing ABC, and it is a direct continuation of Aristotle's work on the categorical syllogism. It is believed that Boole's system could handle multi-term propositions and arguments, whereas Aristotle could handle only two-termed subject-predicate propositions and arguments. For example, it is claimed that Aristotle's system could not deduce "No quadrangle that is a square is a rectangle that is a rhombus" from "No square that is a quadrangle is a rhombus that is a rectangle" or from "No rhombus that is a rectangle is a square that is a quadrangle". The above conclusion is reached at a glance with RAVAL's notations (symbolic Aristotle's syllogism rules).
Premise: "No (square that is a quadrangle) is a (rhombus that is a rectangle)". RAVAL's representation: S – Q, S – Q / Rh – Re, Rh – Re.
Premise: "No (rhombus that is a rectangle) is a (square that is a quadrangle)". RAVAL's representation: Rh – Re, Rh – Re / S – Q, S – Q.
Conclusion: "No (quadrangle that is a square) is a (rectangle that is a rhombus)". RAVAL's representation: Q – S, Q – S / Re – Rh, Re – Rh.
As "Q – S" follows from "S – Q" and "Re – Rh" from "Rh – Re", the given conclusion follows from the given premises. The author disregards the existential fallacy, as a subset of a null set has to be a null set.
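The linking rules above are mechanical enough to sketch in code. The following is my simplified reading of the notation (treating a doubled term as "distributed"), not the author's implementation; it handles a single link between two premises.

```python
# A minimal sketch of the linking idea in the transformed RAVAL
# notation described above (my simplified reading, not the author's
# implementation). A doubled term is treated as "distributed"; two
# premises link through a shared middle term that is doubled in at
# least one premise, and a '/' premise makes the conclusion '/'.

def link(p1, p2):
    """Each premise: (subject, subj_doubled, sign, pred, pred_doubled)."""
    s1, d1, sign1, t1, e1 = p1
    s2, d2, sign2, t2, e2 = p2
    if sign1 == "/" and sign2 == "/":
        return None  # two negative premises: no conclusion (rule 4)
    if t1 != s2:
        return None  # no common linking term in this orientation
    if not (e1 or d2):
        return None  # middle term never doubled: not distributed (rule 1)
    sign = "/" if "/" in (sign1, sign2) else "-"  # rules 2 and 3
    return (s1, d1, sign, t2, e2)

# All S are P: SS-P ; All P are Q: PP-Q  ==>  SS-Q (All S are Q)
all_s_p = ("S", True, "-", "P", False)
all_p_q = ("P", True, "-", "Q", False)
print(link(all_s_p, all_p_q))  # ('S', True, '-', 'Q', False)
```

Running it on SS-P and PP-Q yields SS-Q, i.e. "All S are Q", the Barbara syllogism; feeding it two "/" premises returns None, mirroring the two-negative-premises rule.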
It is a consequence of both Kennedy and McNally's typology of the scale structures of gradable adjectives and Kennedy's economy principle that an object is clean just in case its degree of cleanness is maximal. So they jointly predict that the sentence 'Both towels are clean, but the red one is cleaner than the blue one' is a contradiction. Surely, one can account for the sentence's assertability by saying that the first instance of 'clean' is used loosely: since 'clean' pragmatically conveys the property of being close to maximally clean rather than the property of being maximally clean, the sentence as a whole conveys a consistent proposition. I challenge this semantics-pragmatics package by considering the sentence 'Mary believes that both towels are clean but that the red one is cleaner than the blue one'. We can certainly use this sentence to attribute a coherent belief to Mary: one of its readings says that she believes that the towels are clean by a contextually salient standard (e.g. the speaker's); the other says that she believes that the towels are clean by her own standard. I argue that Kennedy's semantics-pragmatics package can't deliver those readings, and propose that we drop the economy principle and account for those readings semantically by assigning to the belief sentence two distinct truth conditions. I consider two ways to deliver those truth conditions. The first posits world-variables in the sentence's logical form and analyzes those truth conditions as resulting from two binding possibilities for those variables. The second proposes that the threshold function introduced by the phonologically null morpheme pos is shiftable in belief contexts.
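For reference, a textbook degree-semantics entry for the null morpheme pos, in the style of Kennedy's work, is given below; the notation is illustrative rather than the paper's own, with s_c the context-supplied threshold function that the paper proposes is shiftable under belief operators.

```latex
% Kennedy-style entry for the null degree morpheme pos (illustrative
% notation, not the paper's own): g is a gradable-adjective meaning
% and s_c is the context-supplied standard (threshold) function.
\[
\llbracket \mathrm{pos} \rrbracket^{c}
  \;=\; \lambda g\,.\,\lambda x\,.\; g(x) \succeq s_{c}(g)
\]
```

On the paper's second proposal, as I read the abstract, the belief operator can shift s_c, so that 'clean' inside the complement clause is evaluated against either the speaker's or Mary's threshold, yielding the two readings.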
In a recent paper I proposed a novel relativity theory termed Information Relativity (IR). Unlike Einstein's relativity, which dictates as force majeure that relativity is a true state of nature, Information Relativity assumes that relativity results from differences in information about nature between observers who are in motion relative to each other. The theory is based on two axioms: 1. the laws of physics are the same in all inertial frames of reference (special relativity's first axiom); 2. all translations of information from one frame of reference to another are carried by light or by another carrier with equal velocity (the information-carrier axiom). For the case of constant relative velocities, I showed in the aforementioned paper that IR accounts successfully for a class of relativistic time results, including the Michelson-Morley "null" result, the Sagnac effect, and the neutrino velocities reported by OPERA and other collaborations. Here I apply the theory, with no alteration, to cosmology. I show that the theory is successful in accounting for several cosmological findings, including the pattern of recession velocity predicted by inflationary theories, the GZK energy suppression phenomenon at redshift z ≈ 1.6, and the amounts of matter and dark energy reported in recent ΛCDM cosmologies.
Surányi (2006) observed that Hungarian has a hybrid (strict + non-strict) negative concord system. This paper proposes a uniform analysis of that system within the general framework of Zeijlstra (2004, 2008) and, especially, Chierchia (2013), with the following new ingredients. Sentential negation NEM is the same full negation in the presence of both strict and non-strict concord items. Preverbal SENKI 'n-one' type negative concord items occupy the specifier position of either NEM 'not' or SEM 'nor'. The latter, SEM, spells out IS 'too, even' in the immediate scope of negation; it is a focus-sensitive head on the clausal spine. SEM can be seen as an overt counterpart of the phonetically null head that Chierchia dubs NEG; it is capable of invoking an abstract (disembodied) negation at the edge of its projection.
Does individual desert matter for distributive justice? Is it relevant, for purposes of justice, that the pattern of distribution of justice's "currency" (be it well-being, resources, preference-satisfaction, capabilities, or something else) is aligned in one or another way with the pattern of individual desert?

This paper examines the nexus between desert and distributive justice through the lens of individual claims. The concept of claims (specifically "claims across outcomes") is a fruitful way to flesh out the content of distributive justice so as to be grounded in the separateness of persons. A claim is a relation between a person and a pair of outcomes. If someone is better off in one outcome than a second, she has a claim in favor of the first. If she is equally well off in the two outcomes, she has a null claim between the two. In turn, whether one outcome is more just than a second depends upon the pattern of claims between them.

In prior work, I have elaborated the concept of claims across outcomes, and have used it to provide a unified defense of the Pareto and Pigou-Dalton axioms. Adding some further, plausible, axioms, we arrive at prioritarianism.

Here, I consider the possibility of desert-modulated claims—whereby the strength of an individual's claim between two outcomes is determined not only by her well-being levels in the two outcomes, and her well-being difference between them, but also by her desert. This generalization of the notion of claims suggests a new axiom of justice: Priority for the More Deserving, requiring that, as between two individuals at the same well-being level, a given increment in well-being be allocated to the more deserving one.

If individual desert is intrapersonally fixed, this new axiom, together with a desert-modulated version of the Pigou-Dalton principle, and the Pareto axioms, yields a desert-modulated prioritarian account of distributive justice. Trouble arises, however, if an individual's desert level can be different in different outcomes. In this case of intrapersonally variable desert, Priority for the More Deserving can conflict with the Pareto axioms (both Pareto indifference and strong Pareto).

This conflict, I believe, is sufficient reason to abandon the proposal to make claim strength a function of individual desert on top of well-being levels and differences. If distributive justice is truly sensitive to each individual's separate perspective—if the justice ranking of outcomes is built up from the totality of individual rankings—we should embrace the Pareto axioms as axioms of justice and reject Priority for the More Deserving. In short: desert-modulated prioritarianism is a nonstarter. Rawls was right to sever distributive justice from desert.
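For readers who want the destination stated symbolically: the prioritarian ranking that the Pareto, Pigou-Dalton, and further axioms yield has the standard form below (a textbook formulation, not the paper's own notation), with g strictly increasing and strictly concave so that a fixed well-being increment matters more for the worse off.

```latex
% Standard prioritarian social ranking: w_i(x) is individual i's
% well-being in outcome x; g is strictly increasing and strictly
% concave (textbook formulation, not the paper's notation).
\[
x \succsim y \;\iff\; \sum_{i=1}^{n} g\big(w_i(x)\big)
  \;\geq\; \sum_{i=1}^{n} g\big(w_i(y)\big)
\]
```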
This study assessed school hazards management and teachers' job effectiveness in secondary schools in Ikom Local Government Area of Cross River State. Four null hypotheses were formulated accordingly to guide the study. The design adopted for the study was the ex-post facto research design. The census technique was employed in selecting the entire population of 551 teachers in the area. The instruments used for data collection were the "School Hazards Management Questionnaire (SHMQ)" and the "Teachers' Job Effectiveness Questionnaire (TJEQ)". Collected data were analysed using descriptive statistics, while the null hypotheses were tested at the .05 level of significance using Product Moment Correlation Matrix Analysis. Findings from the study revealed a significant relationship between physical hazard management, psychological hazard management, environmental hazard management, and noise hazard management, respectively, and teachers' job effectiveness in terms of punctuality, classroom management, instructional delivery, lesson evaluation, dressing, and record keeping. It was generally concluded that school hazards management is significantly related to teachers' job effectiveness in secondary schools. Based on the findings of this study, recommendations were made.
This paper examined the relationship between tertiary students' social media management attitudes and their academic performance in Cross River State, with a specific focus on Facebook, WhatsApp, and Instagram. To achieve this purpose, three null hypotheses were formulated accordingly. The study adopted a correlational research design. Cluster and simple random sampling techniques were used to select a sample of 1000 students from the entire population. The instrument used for data collection was a questionnaire titled Tertiary Students' Social Media Management and Academic Performance Questionnaire (TESSMMAPQ). The reliability of the instrument was established through Cronbach's alpha, and an estimate of .93 was obtained, indicating that the instrument was internally consistent in measuring what it purports to measure. Pearson Product Moment Correlation analysis was used to analyze the data and to test the hypotheses at the .05 level of significance. Findings revealed that there is no significant relationship between tertiary students' Facebook management attitudes (calc. r = 0.032 < crit. r = 0.062), WhatsApp management attitudes (calc. r = 0.038 < crit. r = 0.062), and Instagram management attitudes (calc. r = -0.009 < crit. r = 0.062) respectively, and their academic performance. Based on these findings, it was recommended, among others, that tertiary students' social media usage should be regulated by smartphone producers and internet service providers, in such a way that a maximum monthly login interval (MMLI) is provided that restricts users to a limited number of login chances. This will ensure that students maintain focus when they have been barred.
Girolamo Saccheri (1667–1733) was an Italian Jesuit priest, scholastic philosopher, and mathematician. He earned a permanent place in the history of mathematics by discovering and rigorously deducing an elaborate chain of consequences of an axiom-set for what is now known as hyperbolic (or Lobachevskian) plane geometry. Reviewer's remarks: (1) On two pages of this book Saccheri refers to his previous and equally original book Logica demonstrativa (Turin, 1697), to which 14 of the 16 pages of the editor's "Introduction" are devoted. At the time of the first edition, 1920, the editor was apparently not acquainted with the secondary literature on Logica demonstrativa, which continued to grow in the period preceding the second edition [see D. J. Struik, in Dictionary of Scientific Biography, Vol. 12, 55–57, Scribner's, New York, 1975]. Of special interest in this connection is a series of three articles by A. F. Emch [Scripta Math. 3 (1935), 51–60; Zbl 10, 386; ibid. 3 (1935), 143–152; Zbl 11, 193; ibid. 3 (1935), 221–333; Zbl 12, 98]. (2) It seems curious that modern writers believe that demonstration of the "nondeducibility" of the parallel postulate vindicates Euclid, whereas at first Saccheri seems to have thought that demonstration of its "deducibility" is what would vindicate Euclid. Saccheri is perfectly clear in his commitment to the ancient (and now discredited) view that it is wrong to take as an "axiom" a proposition which is not a "primal verity", which is not "known through itself". So it would seem that Saccheri should think that he was convicting Euclid of error by deducing the parallel postulate. The resolution of this confusion is that Saccheri thought that he had proved, not merely that the parallel postulate was true, but that it was a "primal verity" and, thus, that Euclid was correct in taking it as an "axiom". As implausible as this claim about Saccheri may seem, the passage on p. 237, lines 3–15, seems to admit of no other interpretation. Indeed, Emch takes it this way. (3) As has been noted by many others, Saccheri was fascinated, if not obsessed, by what may be called "reflexive indirect deductions": indirect deductions which show that a conclusion follows from given premises by a chain of reasoning beginning with the given premises augmented by the denial of the desired conclusion and ending with the conclusion itself. It is obvious, of course, that this is simply a species of ordinary indirect deduction; a conclusion follows from given premises if a contradiction is deducible from those given premises augmented by the denial of the conclusion—and it is immaterial whether the contradiction involves one of the premises, the denial of the conclusion, or even, as often happens, intermediate propositions distinct from the given premises and the denial of the conclusion. Saccheri seemed to think that a proposition proved in this way was deduced from its own denial and, thus, that its denial was self-contradictory (p. 207). Inference from this mistake to the idea that propositions proved in this way are "primal verities" would involve yet another confusion. The reviewer gratefully acknowledges extensive communication with his former doctoral students J. Gasser and M. Scanlan. ADDED 14 March 2015: (1) Wikipedia reports that many of Saccheri's ideas have a precedent in the 11th-century Persian polymath Omar Khayyám's Discussion of Difficulties in Euclid, a fact ignored in most Western sources until recently.
It is unclear whether Saccheri had access to this work in translation or developed his ideas independently. (2) This book is another exemplification of the huge difference between indirect deduction and indirect reduction. Indirect deduction requires making an assumption that is inconsistent with the premises previously adopted. This means that the reasoner must perform a certain mental act of assuming a certain proposition. In case the premises are all known truths, indirect deduction, which would then be indirect proof, requires the reasoner to assume a falsehood. This fact has been noted by several prominent mathematicians, including Hardy, Hilbert, and Tarski. Indirect reduction requires no new assumption. Indirect reduction is simply a transformation of an argument in one form into another argument in a different form. In an indirect reduction, one proposition in the old premise set is replaced by the contradictory opposite of the old conclusion, and the new conclusion becomes the contradictory opposite of the replaced premise. Roughly and schematically, P, Q / R becomes P, ~R / ~Q or ~R, Q / ~P. Saccheri's work involved indirect deduction, not indirect reduction. (3) The distinction between indirect deduction and indirect reduction has largely slipped through the cracks, the cracks between medieval-oriented logic and modern-oriented logic. The medievalists have a heavy investment in reduction and, though they have heard of deduction, they think that deduction is a form of reduction, or vice versa, or in some cases they think that the word 'deduction' is the modern way of referring to reduction. The modernists have no interest in reduction, i.e. in the process of transforming one argument into another having exactly the same number of premises. Modern logicians, like Aristotle, are concerned with deducing a single proposition from a set of propositions. Some focus on deducing a single proposition from the null set, something difficult to relate to reduction.
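The contrast between the two operations can be made concrete in a proof assistant. Below is a minimal Lean 4 sketch (the proposition names P, Q, R are schematic, as in the review; this is an illustration of mine, not the reviewer's formalization). Indirect deduction involves a new assumption discharged by classical contradiction; indirect reduction merely rearranges an argument form:

```lean
-- Indirect deduction: to conclude R, assume ¬R as a new premise and
-- derive a contradiction (packaged here as hypothesis h). The mental
-- act of assuming ¬R shows up as the explicit lambda over hnr.
example (P Q R : Prop) (hpq : P ∧ Q) (h : P ∧ Q → ¬R → False) : R :=
  Classical.byContradiction (fun hnr => h hpq hnr)

-- The "reflexive" variant Saccheri favored: the chain from the premises
-- plus ¬R ends with R itself; the contradiction is then R against ¬R.
example (P Q R : Prop) (hpq : P ∧ Q) (h : P ∧ Q → ¬R → R) : R :=
  Classical.byContradiction (fun hnr => hnr (h hpq hnr))

-- Indirect reduction: no new assumption act at all, just a
-- transformation of the argument form P, Q / R into P, ¬R / ¬Q.
example (P Q R : Prop) (h : P → Q → R) : P → ¬R → ¬Q :=
  fun hp hnr hq => hnr (h hp hq)
```

The second example also makes the reviewer's point visible: the reflexive case is just ordinary indirect deduction in which the derived contradiction happens to involve the denied conclusion itself.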
A truth-preservation fallacy is using the concept of truth-preservation where some other concept is needed. For example, in certain contexts saying that consequences can be deduced from premises using truth-preserving deduction rules is a fallacy if it suggests that all truth-preserving rules are consequence-preserving. The arithmetic additive-associativity rule that yields 6 = (3 + (2 + 1)) from 6 = ((3 + 2) + 1) is truth-preserving but not consequence-preserving. As noted in James Gasser's dissertation, Leibniz has been criticized for using that rule in attempting to show that arithmetic equations are consequences of definitions.

A system of deductions is truth-preserving if each of its deductions having true premises has a true conclusion, and consequence-preserving if, for any given set of sentences, each deduction having premises that are consequences of that set has a conclusion that is a consequence of that set. Consequence-preserving amounts to: in each of its deductions the conclusion is a consequence of the premises. The same definitions apply to deduction rules considered as systems of deductions. Every consequence-preserving system is truth-preserving. It is not as well known that the converse fails: not every truth-preserving system is consequence-preserving. Likewise for rules: not every truth-preserving rule is consequence-preserving. There are many famous examples. In ordinary first-order Peano arithmetic, the induction rule yields the conclusion 'every number x is such that: x is zero or x is a successor', which is not a consequence of the null set, from two tautological premises, which are consequences of the null set, of course. The arithmetic induction rule is truth-preserving but not consequence-preserving. Truth-preserving rules that are not consequence-preserving are non-logical or extra-logical rules. Such rules are unacceptable to persons espousing traditional truth-and-consequence conceptions of demonstration: a demonstration shows its conclusion is true by showing that its conclusion is a consequence of premises already known to be true. The 1965 Preface in Benson Mates (1972, vii) contains the first occurrence of truth-preservation fallacies in the book.
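To spell out the associativity example with one added reasoning step (the model-theoretic gloss is mine, not quoted from the abstract), the rule and why it fails to preserve consequence can be displayed as follows:

```latex
% The additive-associativity rule, displayed as an inference:
\[
  \frac{6 = ((3 + 2) + 1)}{6 = (3 + (2 + 1))}
\]
% Truth-preservation: since + is in fact associative, whenever the
% premise is true the conclusion is true.
% Failure of consequence-preservation (gloss): the premise is trivially
% a consequence of the set containing just the premise itself, but the
% conclusion is not a logical consequence of that set, because an
% interpretation reading + as a non-associative operation can verify
% $6 = ((3+2)+1)$ while falsifying $6 = (3+(2+1))$.
```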
In this paper I aim to show that the philosophy of Mihai Şora can be seen both as a phenomenological treatment of being and as a general theory of being in its most rigorous sense. At the least, this philosophy could be designated as a phenomenological ontology which opens itself toward an originally metaphysical perspective based on a specific type of knowledge of the sort of "global disclosure". I will argue too that within Şora's philosophy one can take a twofold approach: the first starts from what we could call the phenomenology of "common givenness" ("ordinary/quotidian givenness") and then proceeds (in a phenomenological manner) to a general theory of being, which is represented in Şora's philosophy by the constitution of the model of the "sphere with the null ray"; the second commences with the treatment of the ontological model as such (the model, already formed, represents in the end the symbolic structure of reality), that is, with this "general theory of being" (or of the constitution of being), and then advances step by step towards various (phenomenological) applications of this ontological model to the multifaceted spheres of the domain of being (the sphere of language, of temporality, of ethics, of the social, of politics, etc.).
DOI: 10.1080/00031305.2018.1564697 When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussions both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, which is the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and other model assumptions. Although this is conceptually different from the probability of the null hypothesis being true given the sample, p-values nonetheless can provide evidential information toward making an inference about a parameter. Applying a 10,000-case simulation described in this article, the authors found that p-values' inferential signals to either reject or not reject a null hypothesis about the mean (α = 0.05) were consistent for almost 70% of the cases with the parameter's true location for the sampled-from population. Success increases if a hybrid decision criterion, minimum effect size plus p-value (MESP), is used. Here, rejecting the null also requires the difference of the observed statistic from the exact null to be meaningfully large or practically significant, in the researcher's judgment and experience. The simulation compares the performance of several methods, from p-value- and/or effect-size-based to confidence-interval-based, under various conditions of true location of the mean, test power, and comparative sizes of the meaningful distance and population variability. For any inference procedure that outputs a binary indicator, like flagging whether a p-value is significant, the output of one single experiment is not sufficient evidence for a definitive conclusion. Yet, if a tool like MESP generates a relatively reliable signal and is used knowledgeably as part of a research process, it can provide useful information.
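The MESP criterion is easy to sketch in code. The following Python simulation is a toy reconstruction under assumptions of my own (normal data, a one-sample t-test, and illustrative values for the minimum meaningful effect); it is not the authors' simulation design:

```python
# Toy comparison of a p-value-only rule against the hybrid MESP rule:
# reject H0 only if p < alpha AND the observed difference from the
# null mean is at least a researcher-chosen minimum effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, true_mu, sigma, n, runs = 0.0, 0.3, 1.0, 30, 10_000
alpha, min_effect = 0.05, 0.5

# The "right answer" for a binary signal: is the true mean meaningfully
# far from the null, by the same minimum-effect yardstick?
truly_meaningful = abs(true_mu - mu0) >= min_effect

p_hits = mesp_hits = 0
for _ in range(runs):
    sample = rng.normal(true_mu, sigma, n)
    _, p = stats.ttest_1samp(sample, mu0)
    reject_p = p < alpha
    reject_mesp = reject_p and abs(sample.mean() - mu0) >= min_effect
    p_hits += (reject_p == truly_meaningful)
    mesp_hits += (reject_mesp == truly_meaningful)

print(f"p-only agreement with truth: {p_hits / runs:.1%}")
print(f"MESP agreement with truth:   {mesp_hits / runs:.1%}")
```

With these illustrative settings the true mean is detectably but not meaningfully off the null, so the p-only rule frequently fires on a practically trivial difference while MESP mostly does not, which is the flavor of result the abstract reports.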
In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter-warnings are published periodically. But the core problem is not with p-values per se. A finding that "the p-value is less than α" could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ = x) we proceed as if "=" entails exact equality of the parameter with x. That standard is hard to meet, and it is not a standard expected for power calculations, where we are satisfied to reject a null hypothesis H0 if the result is merely "detectably" different from (exact) H0. This paper explores, with resampling (simulation) methods, the impacts on p-values, and alternatives, if the null hypothesis is defined as a thick or thin range of values. It also examines, empirically, the extent to which the p-value may or may not be a good predictor of the probability that H0 is true, given the distribution of the data.
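One way to make a "thick" null concrete: treat H0 as the interval [µ0 − δ, µ0 + δ] and compute a worst-case p-value over that interval, which for a location shift is governed by the edge nearest the observed mean. The Python sketch below is my own construction, not the paper's code; δ and the data are illustrative:

```python
# Bootstrap p-value against a thick null H0: mu lies in [mu0-d, mu0+d].
# For each edge of the interval, recenter the sample at that edge and
# estimate the chance of a sample mean at least as far outside the
# interval as the one observed; the decision rests on the nearest edge.
import numpy as np

rng = np.random.default_rng(1)

def thick_null_p(sample, mu0, d, reps=10_000):
    obs = sample.mean()
    lo, hi = mu0 - d, mu0 + d
    # Escape upward, judged from the upper edge:
    boot_hi = rng.choice(sample - obs + hi, (reps, sample.size)).mean(axis=1)
    p_up = (boot_hi >= obs).mean()
    # Escape downward, judged from the lower edge:
    boot_lo = rng.choice(sample - obs + lo, (reps, sample.size)).mean(axis=1)
    p_down = (boot_lo <= obs).mean()
    # Reject only if the observation is unlikely even from the nearest edge.
    return min(p_up, p_down)

data = rng.normal(0.6, 1.0, 50)               # illustrative sample
print(thick_null_p(data, mu0=0.0, d=0.25))    # thick null [-0.25, 0.25]
print(thick_null_p(data, mu0=0.0, d=0.0))     # thin (point) null
```

Note the design choice: an observation inside the interval yields a large p-value from both edges, so a thick null is strictly harder to reject than the corresponding point null, which is the contrast the paper investigates.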
Everything has a mathematically expressible value.

The null hypothesis is that nothing, zero, is a mathematical conception grounded in physical reality, one we can perceive as a state free of energy, matter, information, space, and time. It reveals itself as our common physical, mathematical, and philosophical origin: a reference point grounded in physical reality. I state that, in proportion to this physically grounded conception, everything has some kind of mathematically expressible value: space, time, information, energy, matter.
This article studies common aspects of and differences between Heidegger, the contemporary German philosopher, and the Persian philosopher Mulla Sadra (17th century CE, the 11th Islamic century). The point of origin for both Mulla Sadra and Heidegger is "existence". However, Mulla Sadra believes in the "unity of existence", whereas Heidegger tries to understand the concept of "existence" through a special being, i.e. the human being. Both hold that it is wrong to suppose a concept of the human without the world, both hold that the dichotomy of mind and being is wrong, and both hold that there is a unity between the objective and the subjective. Both reached being from things, but Heidegger via "phenomenology" and Mulla Sadra via "metaphysics". Both believe in the priority of existence over essence, or the merely virtual validity of essence. However, Heidegger considers existence only for that special being, the human, while Mulla Sadra believes the origin of existence is one and the same and that it extends to all beings, as against the null. Briefly, for Heidegger, as for Hegel, being as being has no value, whereas for Mulla Sadra this kind of being is the highest grade of existence. Heidegger turns to illumination, but his illumination is different from Mulla Sadra's: it is, like the Buddhist kind, "from human to human", not from a higher world to the human. According to Heidegger, changes in existence have cultural and historical meaning, whereas according to Mulla Sadra the concept of existence is common in meaning, and it is understood through illumination.
In what follows, I suggest that, against most theories of time, there really is an actual present, a now, but that such an eternal moment cannot be found before or after time. It may even be semantically incoherent to say that such an eternal present exists, since "it" is changeless and formless (presumably a dynamic chaos without location or duration) yet with creative potential. Such a field of near-infinite potential energy could have had no beginning and will have no end, yet within it stirs the desire to experience that brings forth singularities, like the one that exploded into the Big Bang (experiencing itself through relative and relational spacetime). From the perspective of the eternal now of near-infinite possibilities (if such a sentence can be semantically parsed at all), there is only the timeless creative present, so the Big Bang did not happen some 13 billion years ago. Inasmuch as there is neither time past nor time future nor any time at all at the null point of forever, we must understand the Big Bang (and all other events) as taking place right here and now. In terms of the eternal now, the beginning is happening now and we just appeared (and are always just appearing) to witness it. The rest is all conscious construction; time and experience are so entangled, they need each other to exist.
Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of "repeated sampling from the same population." The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant and irrelevant aspects of their tests and populations to be sure that their replications will be equivalent to one another. Pearson also interpreted the alpha level as a personal rule that guides researchers' behavior during hypothesis testing. However, this interpretation fails to acknowledge that the same researcher may use different alpha levels in different testing situations. Addressing this problem, Neyman proposed that the average alpha level adopted by a particular researcher can be viewed as an indicator of that researcher's typical Type I error rate. Researchers' average alpha levels may be informative from a metascientific perspective. However, they are not useful from a scientific perspective. Scientists are more concerned with the error rates of specific tests of specific hypotheses than with the error rates of their colleagues. It is concluded that neither Neyman nor Pearson adequately rebutted Fisher's "repeated sampling" criticism. Fisher's significance testing approach is briefly considered as an alternative to the Neyman-Pearson approach.