This chapter presents a typology of the different kinds of inductive inferences we can draw from our evidence, based on the explanatory relationship between evidence and conclusion. Drawing on the literature on graphical models of explanation, I divide inductive inferences into (a) downwards inferences, which proceed from cause to effect, (b) upwards inferences, which proceed from effect to cause, and (c) sideways inferences, which proceed first from effect to cause and then from that cause to an additional effect. I further distinguish between direct and indirect forms of downwards and upwards inferences. I then show how we can subsume canonical forms of inductive inference mentioned in the literature, such as inference to the best explanation, enumerative induction, and analogical inference, under this typology. Along the way, I explore connections with probability and confirmation, epistemic defeat, the relation between abduction and enumerative induction, the compatibility of IBE and Bayesianism, and theories of epistemic justification.
We are often justified in acting on the basis of evidential confirmation. I argue that such evidence supports belief in non-quantificational generic generalizations, rather than universally quantified generalizations. I show how this account supports, rather than undermines, a Bayesian account of confirmation. Induction from confirming instances of a generalization to belief in the corresponding generic is part of a reasoning instinct that is typically (but not always) correct, and allows us to approximate the predictions that formal epistemology would make.
An argument, different from the Newman objection, against the view that the cognitive content of a theory is exhausted by its Ramsey sentence is reviewed. The crux of the argument is that Ramsification may ruin inductive systematization between theory and observation. The argument also has some implications concerning the issue of underdetermination.
Theories of number concepts often suppose that the natural numbers are acquired as children learn to count and as they draw an induction based on their interpretation of the first few count words. In a bold critique of this general approach, Rips, Asmuth, and Bloomfield [Rips, L., Asmuth, J., & Bloomfield, A. (2006). Giving the boot to the bootstrap: How not to learn the natural numbers. Cognition, 101, B51–B60.] argue that such an inductive inference is consistent with a representational system that clearly does not express the natural numbers and that possession of the natural numbers requires further principles that make the inductive inference superfluous. We argue that their critique is unsuccessful. Provided that children have access to a suitable initial system of representation, the sort of inductive inference that Rips et al. call into question can in fact facilitate the acquisition of larger integer concepts without the addition of any further principles.
In this paper I adduce a new argument in support of the claim that IBE is an autonomous form of inference, based on a familiar yet surprisingly under-discussed problem for Hume’s theory of induction. I then use some insights thereby gleaned to argue for the claim that induction is really IBE, and draw some normative conclusions.
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
Views which deny that there are necessary connections between distinct existences have often been criticized for leading to inductive skepticism. If there is no glue holding the world together then there seems to be no basis on which to infer from past to future. However, deniers of necessary connections have typically been unconcerned. After all, they say, everyone has a problem with induction. But, if we look at the connection between induction and explanation, we can develop the problem of induction in a way that hits deniers of necessary connections, but not their opponents. The denier of necessary connections faces an `internal' problem with induction: skepticism about important inductive inferences naturally flows from their position in a way that it doesn't for those who accept necessary connections. This is a major problem, perhaps a fatal one, for the denial of necessary connections.
The hard problem of induction is to argue, without begging the question, that inductive inference, applied properly in the proper circumstances, is conducive to truth. A recent theorem seems to show that the hard problem has a deductive solution. The theorem, provable in ZFC, states that a predictive function M exists with the following property: whatever world we live in, M correctly predicts the world’s present state given its previous states at all times apart from a well-ordered subset. On the usual model of time a well-ordered subset is small relative to the set of all times. M’s existence therefore seems to provide a solution to the hard problem. My paper argues for two conclusions. First, the theorem does not solve the hard problem of induction. More positively though, it solves a version of the problem in which the structure of time is given modulo our choice of set theory.
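The abstract does not state the theorem formally; the following LaTeX rendering is a reconstruction from the abstract's own description (the identification with the Hardin–Taylor predictor theorem, and the notation, are my assumptions):

```latex
% Reconstruction: T is the set of times, S the set of world-states, and a
% "world" is a function w : T -> S. There is a predictor M such that, for
% every world w, the set of times at which M errs,
\[
  \mathrm{Err}(w) \;=\; \bigl\{\, t \in T \;:\;
    M\bigl(w \upharpoonright (-\infty, t)\bigr) \neq w(t) \,\bigr\},
\]
% is well-ordered by the ordering of T. On the usual model of time
% (T the real line), any well-ordered subset is countable, hence "small".
```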
In ‘Induction and Natural Kinds’, I proposed a solution to the problem of induction according to which our use of inductive inference is reliable because it is grounded in the natural kind structure of the world. When we infer that unobserved members of a kind will have the same properties as observed members of the kind, we are right because all members of the kind possess the same essential properties. The claim that the existence of natural kinds is what grounds reliable use of induction is based on an inference to the best explanation of the success of our inductive practices. As such, the argument for the existence of natural kinds employs a form of ampliative inference. But induction is likewise a form of ampliative inference. Given both of these facts, my account of the reliability of induction is subject to the objection that it provides a circular justification of induction, since it employs an ampliative inference to justify an ampliative inference. In this paper, I respond to the objection of circularity by arguing that what justifies induction is not the inference to the best explanation of its reliability. The ground of induction is the natural kinds themselves.
This essay presents a point of view for looking at `free will', with the purpose of interpreting where exactly the freedom lies. For, freedom is what we mean by it. It compares the exercise of free will with the making of inferences, which usually is predominantly inductive in nature. The making of inference and the exercise of free will both draw upon psychological resources that define our ‘selves’. I examine the constitution of the self of an individual, especially the involvement of personal beliefs, personal memories, affects, emotions, and the hugely important psychological value-system, all of which distinguish the self of one individual from that of another. The foundational position adopted in this essay is that all psychological processes are correlated with corresponding ones involving large scale neural aggregates in the brain, communicating with one another through wavelike modes of excitation and de-excitation. Of central relevance is the value-network around which the affect system is organized, the latter, in turn, being the axis around which is assembled the self, with all its emotional correlates. The self is a complex system. I include a brief outline of what complexity consists of. In reality all systems are complex, for complexity is ubiquitous, and certain parts of nature appear to us to be ‘simple’ only in certain specific contexts. It is in this background that the issue of determinism is viewed in this essay. Instead of looking at determinism as a grand principle entrenched in nature independent of our interpretation of it, I look at our ability to explain and to predict events and phenomena around us, which is made possible by the existence of causal links running in a complex course that set up correlations between diverse parts of nature, in this way putting the stamp of necessity on these events. However, the complexity of systems limits our ability to explain and to predict to within certain horizons defined by contexts. Our ability to explain and predict in matters relating to acts of free will is similarly limited by the operations of the self that remain hidden from our own awareness. The aspects of necessity and determinism appear to us in the form of reason and rationality that explain and predict only within a limited horizon, while the rest depends on the complex operation of self-linked psychological resources, where the latter appear as contingent in the context of the exercise of free will. The hallmark of complex systems is the existence of amplifying factors that operate as destabilizing ones, along with inhibiting or stabilizing factors that generally limit the destabilizing influences to local occurrences, while preserving the global integrity of a system. This complex interplay of destabilizing and stabilizing influences leads to the possibility of an enormous number of distinct modes of behavior that appear as emergent phenomena in complex systems. Looking at the particular case of the self of an individual that guides her actions and thoughts, it is the operation of our emotions, built around the psychological value-system, that provides for the amplifying and inhibiting factors mentioned above. The operation of these self-linked factors stamps our actions and thoughts as contingent ones that do not fit with our concepts of reason and rationality. And this is what provides the basis of our idea of free will.
Free will is not ‘free’ in virtue of exemption from causal links running through all our self-based processes (ones that remain hidden from our awareness), but is free of what is perceived to be ‘reason and rationality’ based on knowledge and the common pool of beliefs and principles associated with a shared world-view. When we speak of the choice involved in an act of exercise of free will, what we actually refer to is the commonly observed fact that various different individuals of a similar disposition respond differently when placed under similar circumstances. In other words, free will is ‘free’ precisely because it is not subject to constraints of a commonly accepted and shared set of principles, beliefs, and values. While it is possible that, in an exercise of free will, the self-linked psychological resources are brought into action when a number of alternatives are presented to the self by the operation of the ingredients of a commonly shared world-view, it seems likely that that set of alternatives is not of primary relevance in the final response that the mind produces, since the latter is primarily a product of the operation of the self-based psychological resources. What is of greater relevance here is the operation of emotion-driven processes, guided by the psychological value-system based on the activity of the so-called reward-punishment network in the brain. These processes lead the individual to a response that appears to be free from the shackles of determinism precisely because their mechanisms, which are hidden from us, do not conform to commonly accepted and shared rules and principles. In contrast, inductive inference is a process that is based on the ‘cognitive face’ of the self, where the self-based psychological resources play a supportive role to the commonly shared world-view of an individual. There is never any freedom from the all-pervading causal links representing correlations among all objects, entities, and events in nature. In the midst of all this, the closest thing to freedom that we can have in our life comes with self-examination and self-improvement. The possibility of self-examination appears in the form of specific conjunctions between our complex self-processes and the ceaseless changes of scenario in our external world. This actually makes the emergent phenomenon of self-examination a matter of chance, but one that keeps on appearing again and again in our life. Once realized, self-examination creates possibilities that would not be there in the absence of it, and these possibilities include the enhancement of further self-enrichment and further diversity in the exercise of our free will.
De Ray argues that relying on inference to the best explanation (IBE) requires the metaphysical belief that most phenomena have explanations. I object that, on the contrary, the metaphysical belief requires the use of IBE. De Ray himself uses IBE to establish theism, the thesis that God is the cause of the metaphysical belief, and thus he has the burden of establishing the metaphysical belief independently of using IBE. Naturalism, the thesis that the world is the cause of the metaphysical belief, is preferable to theism, contrary to what de Ray thinks.
This monograph is an in-depth and engaging discourse on the deeply cognitive roots of the human scientific quest. The process of making scientific inferences is continuous with the day-to-day inferential activity of individuals, and is predominantly inductive in nature. Inductive inference, which is fallible, exploratory, and open-ended, is of essential relevance in our incessant efforts at making sense of a complex and uncertain world around us, and covers a vast range of cognitive activities, among which scientific exploration constitutes the pinnacle. Inductive inference has a personal aspect to it, being rooted in the cognitive unconscious of individuals, which has recently been found to be of paramount importance in a wide range of complex cognitive processes. One other major aspect of the process of inference making, including the making of scientific inferences, is the role of a vast web of beliefs lodged in the human mind, as well as of a huge repertoire of heuristics, which constitute an important component of ‘unconscious intelligence’. Finally, human cognitive activity is dependent in large measure on emotions and affects, which operate mostly at an unconscious level. Of special importance in scientific inferential activity is the process of hypothesis making, which is examined in this book, along with the above aspects of inductive inference, at considerable depth. The book focuses on the inadequacy of the viewpoint of naive realism in understanding the contextual nature of scientific theories, where a cumulative progress towards an ultimate truth about Nature appears to be too simplistic a generalization. It poses a critique of the commonly perceived image of science, which is seen as the last word in logic and objectivity, the latter in the double sense of being independent of individual psychological propensities and, at the same time, approaching a correct understanding of the workings of a mind-independent nature. Adopting the naturalist point of view, it examines the essential tension between the cognitive endeavours of individuals and scientific communities, immersed in belief systems and cultures, on the one hand, and the engagement with a mind-independent reality on the other. In the end, science emerges as an interpretation of nature, which is perceived by us only contextually, as successively emerging cross-sections of limited scope and extent. Successive waves of theory building in science appear as episodic and kaleidoscopic changes in perspective as certain in-built borders are crossed, rather than as a cumulative progress towards some ultimate truth. While written in an informal and conversational style, the book raises a number of deep and intriguing questions located at the interface of cognitive psychology and philosophy of science, meant for both the general lay reader and the specialist. Of particular interest is the way it explores the role of belief (aided by emotions and affects) in making the process of inductive inference possible, since belief is a subtle though all-pervasive cognitive factor not adequately investigated in the current literature.
As our epistemic ambitions grow, common and scientific endeavours alike are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The paper asks how we justify the contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach ‘elsewheres’ in space and time or deploy ML models in non-benign environments. The paper argues that the only viable version of the contract can be based on optimality (instead of on reliability, which cannot be justified without circularity) and aligns this position with Schurz’s optimality justification. It is shown that when dealing with inaccessible/unstable ground-truths (‘elsewheres’ and non-benign targets), the optimality justification undergoes a slight change, which should reflect critically on our epistemic ambitions. Therefore, the study of ML robustness should involve not only heuristics that lead to acceptable accuracies on testing sets. The justification of human inductive predictions or generalisations about the uniformity between ML models and targets should be included as well. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
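As a concrete picture of the experimental paradigm described above, here is a minimal sketch of the train/test contract in Python; the dataset, model, library (scikit-learn), and acceptability threshold are illustrative assumptions of mine, not anything the paper specifies:

```python
# Minimal sketch of the ML experimental paradigm described above:
# split the data, train, and measure generalisation on the held-out set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# The split itself: the held-out test set stands in for "unseen samples".
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))

# The "a posteriori contract": if accuracy is acceptable, the model is
# deployed; the inductive leap from test set to target environment is
# exactly the generalisation whose justification the paper interrogates.
ACCEPTABLE = 0.9  # illustrative threshold, not from the paper
print(f"test accuracy = {test_accuracy:.3f}; "
      f"deploy = {test_accuracy >= ACCEPTABLE}")
```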
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious, and since we prefer to restrict the term “experience” to consciousness we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but the delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
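The chapter's Bayesian machinery is not reproduced in the abstract; as a generic sketch of what a Bayesian account of abduction looks like (my gloss on the standard form, not the authors' formulation), candidate explanations of the abnormal data D are compared by posterior:

```latex
% Comparing candidate explanations H_1, H_2 of abnormal data D:
\[
  \frac{\Pr(H_1 \mid D)}{\Pr(H_2 \mid D)}
  \;=\;
  \frac{\Pr(D \mid H_1)}{\Pr(D \mid H_2)} \cdot \frac{\Pr(H_1)}{\Pr(H_2)}
\]
% A delusional hypothesis can win this comparison when its likelihood
% advantage on the abnormal data outweighs its low prior; on the picture
% summarized above, the second (belief-evaluation) impairment helps
% explain why the comparison goes wrong.
```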
Applying good inductive rules inside the scope of suppositions leads to implausible results. I argue it is a mistake to think that inductive rules of inference behave anything like 'inference rules' in natural deduction systems. And this implies that it isn't always true that good arguments can be run 'off-line' to gain a priori knowledge of conditional conclusions.
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this paper, I argue that non-epistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.
In this essay we collect and put together a number of ideas relevant to the understanding of the phenomenon of creativity, confining our considerations mostly to the domain of cognitive psychology while we will, on a few occasions, hint at neuropsychological underpinnings as well. In this, we will mostly focus on creativity in science, since creativity in other domains of human endeavor has common links with scientific creativity while differing in numerous other specific respects. We begin by briefly introducing a few basic notions relating to cognition, among which the notion of ‘concepts’ is of basic relevance. The myriads of concepts lodged in our mind constitute a ‘conceptual space’ of an enormously complex structure, where concepts are correlated by beliefs that are themselves made up of concepts and are associated with emotions. The conceptual space, moreover, is perpetually in a state of dynamic evolution that is once again of a complex nature. A major component of the dynamic evolution is made up of incessant acts of inference, where an inference occurs essentially by means of a succession of correlations among concepts set up with beliefs and heuristics, the latter being beliefs of a special kind, namely, ones relatively free of emotional associations and possessed of a relatively greater degree of justification. Beliefs, along with heuristics, have been described as the ‘mind’s software’, and constitute important cognitive components of the self-linked psychological resources of an individual. The self is the psychological engine driving all our mental and physical activity, and is in a state of ceaseless dynamics resulting from one’s most intimate experiences of the world accumulating in the course of one’s journey through life. Many of our psychological resources are of a dual character, having both a self-linked and a shared character, the latter being held in common with larger groups of people and imbibed from cultural inputs. We focus on the privately held self-linked beliefs of an individual, since these are presumably of central relevance in making possible inductive inferences – ones in which there arises a fundamental need of adopting a choice or making a decision. Beliefs, decisions, and inferences all have a common link to the self of an individual and, in this, are fundamentally analogous to free will, where all of these have an aspect of non-determinism inherent in them. Creativity involves a major restructuring of the conceptual space where a sustained inferential process eventually links remote conceptual domains, thereby opening up the possibility of a large number of new correlations between remote concepts by a cascading process. Since the process of inductive inference depends crucially on decisions at critical junctures of the inferential chain, it becomes necessary to examine the basic mechanism underlying the making of decisions. In the framework that we attempt to build up for the understanding of scientific creativity, this role of decision making in the inferential process assumes central relevance. With this background in place, we briefly sketch the affect theory of decisions. Affect is an innate system of response to perceptual inputs received either from the external world or from the internal physiological and psychological environment whereby a positive or negative valence gets associated with a perceptual input.
Almost every situation faced by an individual, even one experienced tacitly, i.e., without overt awareness, elicits an affective response from him, carrying a positive or negative valence that underlies all sorts of decision making, including ones carried out unconsciously in inferential processes. As for the process of inferential exploration of the conceptual space that generates the possibility of correlations being established between remote conceptual domains, such exploration is guided and steered at every stage by the affect system, analogous to the way a complex computer program proceeds through junctures where the program ascertains whether specified conditions are met by way of generating appropriate numerical values – for instance, the program takes different routes, depending on whether some particular numerical value turns out to be positive or negative. The valence generated by the affect system in the process of adoption of a choice plays a similar role, which therefore is of crucial relevance in inferential processes, especially in the exploration of the conceptual space where remote domains need to be linked up – the affect system produces a response along a single value dimension, resembling a number with a sign and a magnitude. While the affect system plays a guiding role in the exploration of the conceptual space, the process of exploration itself consists of the establishment of correlations between concepts by means of beliefs and heuristics, the self-linked ones among the latter having a special role in making possible the inferential journey along alternative routes whenever the shared rules of inference become inadequate. A successful access to a remote conceptual domain, necessary for the creative solution of a standing problem or anomaly – one that could not be solved within the limited domain hitherto accessed – requires a phase of relatively slow cumulative search and then, at some stage, a rapid cascading process when a solution is in sight. Representing the conceptual space in the form of a complex network, the overall process can be likened to one of self-organized criticality commonly observed in the dynamical evolution of complex systems. In order that inferential access to remote domains may actually be possible, it is necessary that restrictions on the exploration process – necessary for setting the context in ordinary instances of inductive inference – be relaxed and a relatively free exploration in a larger conceptual terrain be made possible. This is achieved by the mind going into the default mode, where external constraints – ones imposed by shared beliefs and modes of exploration – are made inoperative. While explaining all these various aspects of the creative process, we underline the supremely important role that analogy plays in it. Broadly speaking, analogy is in the nature of a heuristic, establishing correlations between concepts. However, analogies are very special in that these are particularly effective in establishing correlations among remote concepts, since analogy works without regard to the contiguity of the concepts in the conceptual space. In establishing links between concepts, analogies have the power to light up entire terrains in the conceptual space when a rapid cascading of fresh correlations becomes possible.
The creative process occurs within the mind of a single individual or of a few closely collaborating individuals, but is then continued by an entire epistemic community, eventually resulting in a conceptual revolution. Such conceptual revolutions make possible the radical revision of scientific theories whereby the scope of an extant theory is broadened and a new theoretical framework makes its appearance. The emerging theory is characterized by a certain degree of incommensurability when compared with the earlier one – a feature that may appear strange at first sight. But incommensurability does not mean incompatibility, and the apparently contrary features of the relation between the successive theories may be traced to the multi-layered structure of the conceptual space, where concepts are correlated not by means of single links but by multiple ones, thereby generating multiple layers of correlation, among which some are retained and some created afresh in a conceptual restructuring. We conclude with the observation that creativity occurs on all scales. Analogous to correlations being set up across domains in the conceptual space and new domains being generated, processes with similar features can occur within the confines of a domain, where a new layer of inferential links may be generated, connecting up subdomains. In this context, insight can be looked upon as an instance of creativity within the confines of a domain of a relatively limited extent.
How induction was understood took a substantial turn during the Renaissance. At the beginning, induction was understood as it had been throughout the medieval period, as a kind of propositional inference that is stronger the more it approximates deduction. During the Renaissance, an older understanding, one prevalent in antiquity, was rediscovered and adopted. By this understanding, induction identifies defining characteristics using a process of comparing and contrasting. Important participants in the change were Jean Buridan, humanists such as Lorenzo Valla and Rudolph Agricola, Paduan Aristotelians such as Agostino Nifo, Jacopo Zabarella, and members of the medical faculty, writers on philosophy of mind such as the Englishman John Case, writers of reasoning handbooks, and Francis Bacon.
A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short run. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in the framework of Reichenbach, we show that speed-optimal convergence—a long-run success condition—induces dynamic coherence in the short run.
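For context, the inductive rule at issue in Reichenbach's framework is the "straight rule"; the following gloss is mine, not the paper's. Having observed outcomes e_1, ..., e_n, the rule posits that the limiting relative frequency equals the observed one:

```latex
% Reichenbach's straight rule: with e_i = 1 if the attribute occurred on
% trial i and 0 otherwise, after n observations posit
\[
  \lim_{m \to \infty} \frac{1}{m} \sum_{i=1}^{m} e_i
  \;=\; \frac{1}{n} \sum_{i=1}^{n} e_i ,
\]
% revising the posit as n grows. "Success" for the rule is convergence of
% these posits to the true limit; "speed-optimal" convergence, as in the
% abstract, concerns how quickly that convergence is achieved.
```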
Nickles (2017) advocates scientific antirealism by appealing to the pessimistic induction over scientific theories, the illusion hypothesis (Quoidbach, Gilbert, and Wilson, 2013), and Darwin’s evolutionary theory. He rejects Putnam’s (1975: 73) no-miracles argument on the grounds that it uses inference to the best explanation. I object that both the illusion hypothesis and evolutionary theory clash with the pessimistic induction and with his negative attitude towards inference to the best explanation. I also argue that Nickles’s positive philosophical theories are subject to Park’s (2017a) pessimistic induction over antirealists.
It is common to assume that the problem of induction arises only because of small sample sizes or unreliable data. In this paper, I argue that the piecemeal collection of data can also lead to underdetermination of theories by evidence, even if arbitrarily large amounts of completely reliable experimental and observational data are collected. Specifically, I focus on the construction of causal theories from the results of many studies (perhaps hundreds), including randomized controlled trials and observational studies, where the studies focus on overlapping, but not identical, sets of variables. Two theorems reveal that, for any collection of variables V, there exist fundamentally different causal theories over V that cannot be distinguished unless all variables are simultaneously measured. Underdetermination can result from piecemeal measurement, regardless of the quantity and quality of the data. Moreover, I generalize these results to show that, a priori, it is impossible to choose a series of small (in terms of number of variables) observational studies that will be most informative with respect to the causal theory describing the variables under investigation. This final result suggests that scientific institutions may need to play a larger role in coordinating differing research programs during inquiry.
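The two theorems are not reproduced in the abstract, but a toy case (mine, not the paper's) shows how piecemeal measurement can underdetermine structure: two models over V = {X, Y, Z} can agree on every two-variable study while differing in the full joint distribution.

```python
# Toy illustration (not the paper's theorem): two "theories" over {X, Y, Z}
# that agree on all pairwise distributions but differ on the joint.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Theory A: X, Y, Z are three independent fair coins.
xa, ya, za = rng.integers(0, 2, (3, N))

# Theory B: X, Y fair coins, Z = X XOR Y (a deterministic common effect).
xb, yb = rng.integers(0, 2, (2, N))
zb = xb ^ yb

def pair_dist(u, v):
    """Empirical joint distribution of a pair of binary variables."""
    return np.histogram2d(u, v, bins=2)[0] / len(u)

# Every pairwise (two-variable) study sees the same uniform distribution...
for (ua, va), (ub, vb) in zip(itertools.combinations([xa, ya, za], 2),
                              itertools.combinations([xb, yb, zb], 2)):
    assert np.allclose(pair_dist(ua, va), pair_dist(ub, vb), atol=0.01)

# ...but measuring all three at once separates the theories: Theory B
# only ever produces even-parity triples, Theory A does not.
print("A: parity always even?", bool(np.all((xa ^ ya ^ za) == 0)))  # False
print("B: parity always even?", bool(np.all((xb ^ yb ^ zb) == 0)))  # True
```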
In this paper, I explore a question about deductive reasoning: why am I in a position to immediately infer some deductive consequences of what I know, but not others? I show why the question cannot be answered in the most natural ways of answering it, in particular in Descartes’s way of answering it. I then go on to introduce a new approach to answering the question, an approach inspired by Hume’s view of inductive reasoning.
According to John D. Norton's Material Theory of Induction, all inductive inferences are justified by local facts, rather than their formal features or some grand principles of nature's uniformity. Recently, Richard Dawid (2015) has offered a challenge to this theory: in an adaptation of Norton's own celebrated "Dome" thought experiment, it seems that there are certain inductions that are intuitively reasonable, but which do not have any local facts that could serve to justify them in accordance with Norton's requirements. Dawid's suggestion is that “raw induction” might have a limited but important role for such inferences. I argue that the Material Theory can accommodate such inductions, because there are local facts concerning the combinatoric features of the induction’s target populations that can licence the inferences in an analogous way to existing examples of material induction. Since my arguments are largely independent of the details of the Dome, Norton's theory emerges as surprisingly robust against criticisms of excessive narrowness.
Laurence BonJour has recently proposed a novel and interesting approach to the problem of induction. He grants that it is contingent, and so not a priori, that our patterns of inductive inference are reliable. Nevertheless, he claims, it is necessary and a priori that those patterns are highly likely to be reliable, and that is enough to ground an a priori justification of induction. This paper examines an important defect in BonJour's proposal. Once we make sense of the claim that inductive inference is "necessarily highly likely" to be reliable, we find that it is not knowable a priori after all.
This paper responds to recent work in the philosophy of Homotopy Type Theory by James Ladyman and Stuart Presnell. They consider one of the rules for identity, path induction, and justify it along ‘pre-mathematical’ lines. I give an alternate justification based on the philosophical framework of inferentialism. Accordingly, I construct a notion of harmony that allows the inferentialist to say when a connective or concept is meaning-bearing, and this conception unifies most of the prominent conceptions of harmony through category theory. This categorical harmony is stated in terms of adjoints and says that any concept definable by iterated adjoints from general categorical operations is harmonious. Moreover, it has been shown that identity in a categorical setting is determined by an adjoint in the relevant way. Furthermore, path induction as a rule comes from this definition. Thus we arrive at an account of how path induction, as a rule of inference governing identity, can be justified on mathematically motivated grounds.
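For readers unfamiliar with the rule under discussion, here is a minimal sketch of path induction (the J rule) in Lean 4; the formalization choice is mine, and Lean's identity type is only an approximation of the HoTT setting the paper works in:

```lean
-- Path induction (the J rule): to prove C b p for every b and every
-- identification p : a = b, it suffices to handle the reflexivity case.
theorem pathInduction {A : Type} {a : A}
    (C : (b : A) → a = b → Prop) (base : C a rfl) :
    ∀ (b : A) (p : a = b), C b p := by
  intro b p
  cases p      -- identity elimination: reduces the goal to C a rfl
  exact base
```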
Disputants in the debate regarding whether Hume's argument on induction is descriptive or normative have by and large ignored Hume’s positive argument (that custom is what determines inferences to the unobserved), largely confining themselves to intricate debates within the negative argument (that inferences to the unobserved are not founded on reason). I believe that this is a mistake, for I think Hume’s positive argument has significant implications for the interpretation of his negative argument. In this paper, I will argue that Hume’s positive and negative arguments should be read as addressing the same issues, whether normative or causal. I will then focus on the Enquiry version of Hume’s positive argument, arguing that it carries a significant normative conclusion: there, Hume argues that custom plays a normative role in justifying our inductive inferences. Given that Hume’s positive argument should be read as addressing the same issues as his negative argument, we should correspondingly read Hume’s negative argument in the Enquiry as having a normative conclusion.
Hans Reichenbach’s pragmatic treatment of the problem of induction in his later works on inductive inference was, and still is, of great interest. However, it has been dismissed as a pseudo-solution and has been regarded as problematically obscure. This is, in large part, due to the difficulty in understanding exactly what Reichenbach’s solution is supposed to amount to, especially as it appears to offer no response to the inductive skeptic. For entirely different reasons, the significance of Bertrand Russell’s classic attempt to solve Hume’s problem is also both obscure and controversial. Russell accepted that Hume’s reasoning about induction was basically correct, but he argued that, given the centrality of induction in our cognitive endeavors, something must be wrong with Hume’s basic assumptions. What Russell effectively identified as Hume’s (and Reichenbach’s) failure was the commitment to a purely extensional empiricism. So, Russell’s solution to the problem of induction was to concede extensional empiricism and to accept that induction is grounded by accepting both a robust essentialism and a form of rationalism that allowed for a priori knowledge of universals. Neither of those doctrines, however, is without its critics. On the one hand, Reichenbach’s solution faces the charges of obscurity and of offering no response to the inductive skeptic. On the other hand, Russell’s solution looks to be objectionably ad hoc absent some non-controversial and independent argument that the universals that are necessary to ground the uniformity of nature actually exist and are knowable. This particular charge is especially likely to arise from those inclined towards purely extensional forms of empiricism. In this paper the significance of Reichenbach’s solution to the problem of induction will be made clearer via the comparison of these two historically important views about the problem of induction. The modest but important contention made here is that the comparison of Reichenbach’s and Russell’s solutions calls attention to the opposition between extensional and intensional metaphysical presuppositions in the context of attempts to solve the problem of induction. It will be shown that, in effect, what Reichenbach does is establish an important epistemic limitation of extensional empiricism. So, it will be argued here that there is nothing really obscure about Reichenbach’s thoughts on induction at all. He was simply working out the limits of extensional empiricism with respect to inductive inference, in opposition to the sort of metaphysics favored by Russell and like-minded thinkers.
The word ‘induction’ is derived from Cicero’s ‘inductio’, itself a translation of Aristotle’s ‘epagôgê’. In its traditional sense this denotes the inference of general laws from particular instances, but within modern philosophy it has usually been understood in a related but broader sense, covering any non-demonstrative reasoning that is founded on experience. As such it encompasses reasoning from observed to unobserved, both inference of general laws and of further particular instances, but it excludes those cases of reasoning in which the conclusion is logically implied by the premises, such as induction by complete enumeration.
It is usually accepted that deductions are non-informative and monotonic, inductions are informative and nonmonotonic, abductions create hypotheses but are epistemically irrelevant, and both deductions and inductions can’t provide new insights. In this article, I attempt to provide a more cohesive view of the subject with the following hypotheses: (1) the paradigmatic examples of deductions, such as modus ponens and hypothetical syllogism, are not inferential forms, but coherence requirements for inferences; (2) since any reasoner aims to be coherent, any inference must be deductive; (3) a coherent inference is an intuitive process where the premises should be taken as sufficient evidence for the conclusion, which in its turn should be viewed as a necessary evidence for the premises in some modal range; (4) inductions, properly understood, are abductions, but there are no abductions beyond the fact that in any inference the conclusion should be regarded as a necessary evidence for the premises; (5) monotonicity is not only compatible with the retraction of past inferences given new information, but is a requirement for it; (6) this explanation of inferences holds true for discovery processes, predictions, and trivial inferences.
The idea of the uniformity of nature, as a solution to the problem of induction, has at least two contemporary versions: natural kinds and natural necessity. There are, then, at least three alternative ontological ideas addressing the problem of induction. In this paper, I articulate how these ideas are used to justify the practice of inductive inference, and compare them, in terms of their applicability, to see whether any of them is to be preferred in addressing the problem of induction. Given the variety of contexts in which inductive inferences are made, from natural science to social science and to everyday thinking, I suggest that no single idea is absolutely preferred, and that a proper strategy is probably to welcome the plurality of ideas helpful to induction, and to take pragmatic considerations into account, in order to judge in every single case.
At its strongest, Hume's problem of induction denies the existence of any well-justified assumptionless inductive inference rule. At its weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in a situation where the VC theorem can be applied for the purpose we want it to serve.
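The abstract does not state the VC theorem; one standard textbook form of the uniform-convergence bound at issue (my gloss, not the paper's formulation) is:

```latex
% For a hypothesis class H of finite VC dimension d and an i.i.d. sample
% of size n, with probability at least 1 - \delta every h in H satisfies
\[
  \operatorname{err}(h) \;\le\; \widehat{\operatorname{err}}_n(h)
  \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}} ,
\]
% so empirical error tracks true error uniformly over H when d is finite.
% The paper's point is that whether such a situation obtains is not, in
% general, computable.
```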
In his recent book, John Norton has created a theory of inference to the best explanation, within the context of his "material theory of induction". I apply it to the problem of scientific explanations that are false: if we want the theories in our explanations to be true, then why do historians and scientists often say that false theories explained phenomena? I also defend Norton's theory against some possible objections.
This article offers a simple technical resolution to the problem of induction, which is to say that general facts are not always inferred from observations of particular facts, but are themselves sometimes defeasibly observed. The article suggests a holistic account of observation that allows for general statements in empirical theories to be interpreted as observation reports, in place of the common but arguably obsolete idea that observations are exclusively particular. Predictions and other particular statements about unobservable facts can then appear as deductive consequences of such general observation statements, rather than inductive consequences of other particular statements. This semantic shift resolves the problem by eliminating induction as a basic form of inference, and folding the justification of general beliefs into the more basic problem of perception.
The philosophical background important to Mill’s theory of induction has two major components: Richard Whately’s introduction of the uniformity principle into inductive inference and the loss of the idea of formal cause.
The limited aim here is to explain what John Dewey might say about the formulation of the grue example. Nelson Goodman’s problem of distinguishing good and bad inductive inferences is an important one, but the grue example misconstrues this complex problem for certain technical reasons, due to ambiguities that contemporary logical theory has not yet come to terms with. Goodman’s problem is a problem for the theory of induction and thus for logical theory in general. Behind the whole discussion of these issues over the last several decades is a certain view of logic hammered out by Russell, Carnap, Tarski, Quine, and many others. Goodman’s nominalism hinges in essential ways on a certain view of formal logic with an extensional quantification theory at its core. This raises many issues, but the one issue most germane here is the conception of predicates ensconced in this view of logic.
On the basis of the distinction between logical and factual probability, epistemic justification is distinguished from logical justification of induction. It is argued that, contrary to the accepted interpretation of Hume, Hume believes that inductive inferences are epistemically legitimate and justifiable. Hence the beliefs arrived at via (correct) inductive inferences are rational beliefs. According to this interpretation, Hume is not a radical skeptic about induction.
The “Game of the Rule” is easy enough: I give you the beginning of a sequence of numbers (say) and you have to figure out how the sequence continues, to uncover the rule by means of which the sequence is generated. The game depends on two obvious constraints, namely (1) that the initial segment uniquely identify the sequence, and (2) that the sequence be non-random. As it turns out, neither constraint can fully be met, among other reasons because the relevant notion of randomness is either vacuous or undecidable. This may not be a problem when we play for fun. It is, however, a serious problem when it comes to playing the game for real, i.e., when the player to issue the initial segment is not one of us but the world out there, the sequence consisting not of numbers (say) but of the events that make up our history. Moreover, when we play for fun we know exactly what initial segment to focus on, but when we play for real we don’t even know that. This is the core difficulty in the philosophy of the inductive sciences.
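To see concretely why constraint (1) cannot be met (my illustration, not the author's): any finite initial segment is consistent with infinitely many generating rules. For instance, a polynomial can match 1, 2, 4, 8, 16 and then continue anywhere we like:

```python
# Illustration: the initial segment 1, 2, 4, 8, 16 does not uniquely
# identify a sequence. Fit a degree-5 polynomial that matches 2**n on
# n = 0..4 but takes an arbitrary value at n = 5.
import numpy as np

segment = [1, 2, 4, 8, 16]             # looks like 2**n ...
xs = np.arange(6)
ys = np.array(segment + [100])         # ... but force p(5) = 100, not 32

p = np.polynomial.Polynomial.fit(xs, ys, deg=5)  # exact interpolation

print([round(p(x)) for x in range(6)])   # [1, 2, 4, 8, 16, 100]
print([2**n for n in range(6)])          # [1, 2, 4, 8, 16, 32]
```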
Medical diagnosis has traditionally been recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes' theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding the diagnosis by providing probabilities of different diagnostic hypotheses, given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) Medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called ‘abduction’ (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods are used, of both inductive and deductive nature: modus tollens, the hypothetico-deductive method, abduction; 3) Bayes’ theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetico-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise to its correctness; 5) Medical diagnosis, especially differential diagnosis, also uses a kind of “probabilistic modus tollens”, in that signs (symptoms or laboratory data) are taken as strong evidence for a given hypothesis not to be true: the focus is not on hypothesis confirmation, but instead on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, odds are between the hypothesis that is potentially being excluded and a vague “other”. This procedure has the advantage of providing a clue of what evidence to look for and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found. 6) Bayes’ theorem in the hypothesis-confirmation form can more faithfully, although idealistically, represent the medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes’ theorem is, however, indispensable in the case of litigation in order to assess the doctor’s responsibility for medical error, by taking into account the weight of the evidence at his disposal.
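For reference, the two forms contrasted in the abstract can be written out in standard notation (the rendering is mine):

```latex
% Hypothesis-confirmation form: posterior of diagnosis H given evidence
% E_1, ..., E_n, among candidate diagnoses H_1, ..., H_k.
\[
  \Pr(H \mid E_1,\dots,E_n)
  \;=\;
  \frac{\Pr(E_1,\dots,E_n \mid H)\,\Pr(H)}
       {\sum_{j=1}^{k} \Pr(E_1,\dots,E_n \mid H_j)\,\Pr(H_j)}
\]
% Refutation-oriented form ("probabilistic modus tollens"): the same
% machinery aimed at a hypothesis's negation,
\[
  \Pr(\neg H \mid E_1,\dots,E_n) \;=\; 1 - \Pr(H \mid E_1,\dots,E_n),
\]
% used to prune the candidate set when this probability approaches 1.
```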
The paper is a continuation of another paper published as Part I. Now, the case of “n=3” is inferred as a corollary from the Kochen and Specker theorem (1967): the eventual solutions of Fermat’s equation for “n=3” would correspond to an admissible disjunctive division of a qubit into two absolutely independent parts, contrary to the contextuality of any qubit implied by the Kochen–Specker theorem. Incommensurability (implied by the absence of hidden variables) is considered as dual to quantum contextuality. The relevant mathematical structure is Hilbert arithmetic in a wide sense, in the framework of which Hilbert arithmetic in a narrow sense and the qubit Hilbert space are dual to each other. A few cases involving set theory are possible: (1) only within the case “n=3” and implicitly, within any next level of “n” in Fermat’s equation; (2) the identification of the case “n=3” and the general case utilizing the axiom of choice rather than the axiom of induction. If the former is the case, the application of set theory and arithmetic can remain disjunctively divided: set theory, “locally”, within any level; and arithmetic, “globally”, to all levels. If the latter is the case, the proof is thoroughly within set theory. Thus, the relevance of Yablo’s paradox to the statement of Fermat’s last theorem is avoided in both cases. The idea of “arithmetic mechanics” is sketched: it might deduce the basic physical dimensions of mechanics (mass, time, distance) from the axioms of arithmetic after a relevant generalization. Furthermore, a future Part III of the paper is suggested: FLT by mediation of Hilbert arithmetic in a wide sense can be considered as another expression of Gleason’s theorem in quantum mechanics: the exclusions about (n = 1, 2) in both theorems, as well as the validity for all the rest of the values of “n”, can be unified after the theory of quantum information. The availability (respectively, non-availability) of solutions of Fermat’s equation can be proved as equivalent to the non-availability (respectively, availability) of a single probabilistic measure as to Gleason’s theorem.
Alice encounters at least three distinct problems in her struggles to understand and navigate Wonderland. The first arises when she attempts to predict what will happen in Wonderland based on what she has experienced outside of Wonderland. In many cases, this proves difficult: she fails to predict that babies might turn into pigs, that a grin could survive without a cat, or that playing cards could hold criminal trials. Alice's second problem involves her efforts to figure out the basic nature of Wonderland. So, for example, there is nothing Alice could observe that would allow her to prove whether Wonderland is simply a dream. The final problem is manifested by Alice's attempts to understand what the various residents of Wonderland mean when they speak to her. In Wonderland, "mock turtles" are real creatures and people go places with a "porpoise" (and not a purpose). All three of these problems concern Alice's attempts to infer information about unobserved events or objects from those she has observed. In philosophical terms, they all involve *induction*. In this essay, I will show how Alice's experiences can be used to clarify the relation between three more general problems related to induction. The first problem, which concerns our justification for beliefs about the future, is an instance of David Hume's classic *problem of induction*. Most of us believe that rabbits will not start talking tomorrow; the problem of induction challenges us to justify this belief. Even if we manage to solve Hume's puzzle, however, we are left with what W.V.O. Quine calls the problems of *underdetermination* and *indeterminacy*. The former problem asks us to explain how we can determine *what the world is really like* based on *everything that could be observed about the world*. So, for example, it seems plausible that nothing that Alice could observe would allow her to determine whether eating mushrooms causes her to grow or the rest of the world to shrink. The latter problem, which might remain even if we resolve the first two, casts doubt on our capacity to determine *what a certain person means* based on *which words that person uses*. This problem is epitomized in the Queen's interpretation of the Knave's letter. The obstacles that Alice faces in getting around Wonderland are thus, in an important sense, the same types of obstacles we face in our own attempts to understand the world. Her successes and failures should therefore be of real interest.
In a previous paper, an elementary and thoroughly arithmetical proof of Fermat’s last theorem by induction was demonstrated, granted that the case “n = 3” is proved only arithmetically (which has long been a fact), and furthermore in a way accessible to Fermat himself, though without being absolutely and precisely correct. The present paper elucidates the contemporary mathematical background from which an inductive proof of FLT can be inferred, since its proof for the case “n = 3” has been known for a long time. It needs “Hilbert mathematics”, which is inherently complete unlike the usual “Gödel mathematics”, and is based on “Hilbert arithmetic”, generalizing Peano arithmetic in a way that unifies it with the qubit Hilbert space of quantum information. An “epoché to infinity” (similar to Husserl’s “epoché to reality”) is necessary to map Hilbert arithmetic into Peano arithmetic in order to be relevant to Fermat’s age. Furthermore, the two linked semigroups originating from addition and multiplication and from the Peano axioms in the final analysis can be postulated algebraically as independent of each other in a “Hamilton” modification of arithmetic supposedly equivalent to Peano arithmetic. The inductive proof of FLT can be deduced absolutely precisely in that Hamilton arithmetic and then transferred as a corollary into standard Peano arithmetic, furthermore in a way accessible in Fermat’s epoch and thus, in principle, to Fermat himself. A future, second part of the paper is outlined, directed to an eventual proof of the case “n=3” based on the qubit Hilbert space and the Kochen-Specker theorem inferable from it.
Where does the necessity that seems to accompany causal inferences come from? “Why [do] we conclude that … particular causes must necessarily have such particular effects?” In 1.3.6 of the Treatise, Hume entertains the possibility that this necessity is a function of reason. However, he eventually dismisses this possibility, and this dismissal constitutes Hume’s “negative” argument concerning induction. This argument has received, and continues to receive, a tremendous amount of attention. How could causal inferences be justified if they are not justified by reason? If we believe that p causes q, isn’t it reason that allows us to conclude q when we see p with some assurance, i.e. with some necessity?
It is argued that two observers with the same information may rightly disagree about the probability of an event that they are both observing. This is a correct way of describing the view of a lottery outcome from the perspective of a winner and from the perspective of an observer not connected with the winner: the outcome is improbable for the winner and not improbable for the unconnected observer. This claim is both argued for and extended by developing a case in which a probabilistic inference is supported for one observer and not for another, though they relevantly differ only in perspective, not in any information that they have. It is pointed out, finally, that all probabilities are in this way dependent on perspective.
In The Rationality of Induction, David Stove presents an argument against scepticism about inductive inference -- where, for Stove, inductive inference is inference from the observed to the unobserved. Let $U$ be a finite collection of $n$ particulars such that each member of $U$ either has property F-ness or does not. If $s$ is a natural number less than $n$, define an $s$-fold sample of $U$ as $s$ observations of distinct members of $U$, each either having F-ness or not having F-ness. Let $p_U$ denote the proportion of members of $U$ that are Fs and, if $S$ is an $s$-fold sample of $U$, let $p_S$ denote the proportion of members of $S$ that are Fs. Call $S$ representative if and only if $|p_S - p_U| < 0.01$. Stove's argument against inductive scepticism is built on the following statistical fact:

As $s$ gets larger, the proportion of all possible $s$-fold samples of $U$ that are representative gets closer to 1 (regardless of the size of $U$ or of the value of $p_U$).

In this essay I subject Stove's argument to thorough scrutiny. I show that the argument -- as it stands -- is incomplete, and I illuminate the issues involved in trying to fill the gaps. Along the way I demonstrate that one of the commonest objections to Stove's argument misses the point.
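The statistical fact itself is easy to check numerically. Below is a minimal Monte Carlo sketch (not from Stove's book, which concerns exact enumeration over all possible samples): it estimates, by random sampling, the fraction of $s$-fold samples that are representative. The population size, the frequency of F, and the trial count are illustrative assumptions.

```python
import random

def proportion_representative(population, s, epsilon=0.01, trials=2_000):
    """Estimate, by random sampling, the fraction of s-fold samples of
    `population` (a list of booleans: True = has F-ness) whose sample
    frequency of F is within `epsilon` of the population frequency."""
    p_U = sum(population) / len(population)
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, s)  # s distinct members, no repetition
        p_S = sum(sample) / s
        if abs(p_S - p_U) < epsilon:
            hits += 1
    return hits / trials

# A population U of 10,000 particulars, 37% of which are F:
U = [True] * 3_700 + [False] * 6_300
for s in (100, 1_000, 4_000, 9_000):
    print(f"s = {s:>5}: ~{proportion_representative(U, s):.2f} representative")
```

Running this shows the estimated proportion climbing toward 1 as $s$ grows, exactly the trend Stove's argument trades on.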
There is good reason to believe that scientific realism requires a commitment to the objective modal structure of the physical world. Causality, equilibrium, laws of nature, and probability all feature prominently in scientific theory and explanation, and each one is a modal notion. If we are committed to the content of our best scientific theories, we must accept the modal nature of the physical world. But what does the scientific realist's commitment to physical modality require? We consider whether scientific realism is compatible with Humeanism about the laws of nature, and we conclude that it is not. We specifically identify three major problems for the best-systems account of lawhood: its central concept of strength cannot be formulated non-circularly, it cannot offer a satisfactory account of the laws of the special sciences, and it can offer no explanation of the success of inductive inference. In addition, Humeanism fails to be naturalistically motivated. For these reasons, we conclude that the scientific realist must embrace natural necessity.
The homeostatic property cluster (HPC) theory is widely influential for its ability to account for many natural-kind terms in the life sciences. However, the notion of a homeostatic mechanism has never been fully explicated. In 2009, Carl Craver interpreted the notion in the sense articulated in discussions of mechanistic explanation and pointed out that the HPC account equipped with such a notion invites interest-relativity. In this paper, we analyze two recent refinements of HPC: one that avoids any reference to the causes of the clustering of properties and one that replaces homeostatic mechanisms with causal networks represented by causal graphs. We argue that the former is too slender to account for some inductive inferences in science, and that the latter, thicker account invites interest-relativity, as the original HPC does. This suggests that human interest will be an ineliminable part of a satisfactory account of natural kindness. We conclude by discussing the implications of interest-relativity for the naturalness, reality, or objectivity of kinds, and by indicating an overlooked aspect of natural kinds that requires further study.
As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points specifically in the case of recidivism algorithms, arguing that contemporary debates concerning fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, thereby establishing the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.
In this paper, we set out to investigate the following question: if science relies heavily on induction, does philosophy of science rely heavily on induction as well? Using data mining and text analysis methods, we study a large corpus of philosophical texts mined from the JSTOR database (n = 14,199) in order to answer this question empirically. If philosophy of science relies heavily on induction, just as science supposedly does, then we would expect to find significantly more inductive arguments than deductive arguments and abductive arguments in the published works of philosophers of science. Using indicator words to classify arguments by type (namely, deductive, inductive, and abductive arguments), we search through our corpus to find patterns of argumentation. Overall, the results of our study suggest that philosophers of science do rely on inductive inference. But induction may not be as foundational to philosophy of science as it is thought to be for science, given that philosophers of science make significantly more deductive arguments than inductive arguments. Interestingly, our results also suggest that philosophers of science do not rely on abductive arguments all that much, even though philosophers of science consider abduction to be a cornerstone of scientific methodology.
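To make the method concrete, here is a toy sketch of indicator-word classification. The abstract does not give the authors' actual indicator lists, so the phrases below (and the three-sentence "corpus") are illustrative stand-ins only; the real study's lists and corpus are far larger.

```python
import re

# Hypothetical indicator-phrase lists -- stand-ins for the paper's actual lists.
INDICATORS = {
    "deductive": ["necessarily", "it follows that", "entails"],
    "inductive": ["probably", "in most cases", "usually"],
    "abductive": ["best explanation", "would explain", "most plausible"],
}

def count_argument_markers(text):
    """Count occurrences of each argument type's indicator phrases in `text`."""
    text = text.lower()
    return {
        kind: sum(len(re.findall(r"\b" + re.escape(phrase) + r"\b", text))
                  for phrase in phrases)
        for kind, phrases in INDICATORS.items()
    }

# Toy "corpus" standing in for the 14,199 JSTOR texts:
corpus = [
    "It follows that the hypothesis entails a contradiction.",
    "The effect is probably real; in most cases it replicates.",
    "Dark matter is the best explanation of the rotation curves.",
]
totals = {kind: 0 for kind in INDICATORS}
for doc in corpus:
    for kind, n in count_argument_markers(doc).items():
        totals[kind] += n
print(totals)  # {'deductive': 2, 'inductive': 2, 'abductive': 1}
```

Comparing the per-type totals across a corpus is the kind of evidence the study uses to judge whether deductive, inductive, or abductive argumentation predominates.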
The paper takes a novel approach to a classic problem -- Hempel's Raven Paradox. A standard approach to it supposes the solution to consist in bringing our inductive logic into "reflective equilibrium" with our intuitive judgements about which inductive inferences we should license. This approach leaves the intuitions as a kind of black box and takes it on faith that, whatever the structure of the intuitions inside that box might be, it is one for which we can construct an isomorphic formal edifice, a system of inductive logic. By popping open the box we can see whether that faith is misplaced. I aim, therefore, to characterize our pre-theoretical, intuitive understanding of generalizations like "ravens are black" and argue that, intuitively, we take them to mean, for instance: "ravens are black by some indeterminate yet characteristic means." I motivate and explicate this formulation and bring it to bear on Hempel's Problem.