Should we insist on prediction, i.e. on correctly forecasting the future? Or can we rest content with accommodation, i.e. empirical success only with respect to the past? I apply general considerations about this issue to the case of economics. In particular, I examine various ways in which mere accommodation can be sufficient, in order to see whether those ways apply to economics. Two conclusions result. First, an entanglement thesis: the need for prediction is entangled with the methodological role of orthodox economic theory. Second, a conditional predictivism: if we are not committed to orthodox economic theory, then we should demand prediction rather than accommodation – against most current practice.
Parkinson's Disease (PD) is a long-term degenerative disorder of the central nervous system that mainly affects the motor system. The symptoms generally come on slowly over time. Early in the disease, the most obvious are shaking, rigidity, slowness of movement, and difficulty with walking. The cause is unknown, and doctors find it difficult to diagnose the presence of Parkinson’s disease early. This paper presents an artificial neural network system with a back-propagation algorithm to help doctors identify PD. Previous research on predicting the presence of PD has reported accuracy rates of up to 93% [1]; however, prediction accuracy for small classes is reduced. The proposed design of the neural network system significantly increases robustness, and the network's recognition rate reached 100%.
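As an illustration of the kind of back-propagation classifier this abstract describes, here is a minimal sketch using scikit-learn's MLPClassifier on a Parkinson's voice-measurement table; the file name, column names, and layer sizes are assumptions, and the paper's own tool and architecture may differ.

```python
# Minimal sketch (not the paper's model): a back-propagation ANN for PD detection.
# Assumes a CSV with numeric acoustic features and a binary "status" column
# (1 = PD, 0 = healthy); file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

data = pd.read_csv("parkinsons.csv")                       # hypothetical file name
X = data.drop(columns=["status"]).select_dtypes("number")  # keep numeric features only
y = data["status"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# MLPClassifier trains by back-propagating the error gradient (Adam optimiser by default).
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```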
Shared activity is often simply willed into existence by individuals. This poses a problem. Philosophical reflection suggests that shared activity involves a distinctive, interlocking structure of intentions. But it is not obvious how one can form the intention necessary for shared activity without settling what fellow participants will do and thereby compromising their agency and autonomy. One response to this problem suggests that an individual can have the requisite intention if she makes the appropriate predictions about fellow participants. I argue that implementing this predictive strategy risks derailing practical reasoning and preventing one from forming the intention. My alternative proposal for reconciling shared activity with autonomy appeals to the idea of acting directly on another's intention. In particular, I appeal to the entitlement one sometimes has to another's practical judgment, and the corresponding authority the other sometimes has to settle what one is to do.
In his paper, Jakob Hohwy outlines a theory of the brain as an organ for prediction-error minimization, which he claims has the potential to profoundly alter our understanding of mind and cognition. One manner in which our understanding of the mind is altered, according to PEM, stems from the neurocentric conception of the mind that falls out of the framework, which portrays the mind as “inferentially-secluded” from its environment. This in turn leads Hohwy to reject certain theses of embodied cognition. Focusing on this aspect of Hohwy’s argument, we first outline the key components of the PEM framework such as the “evidentiary boundary,” before looking at why this leads Hohwy to reject certain theses of embodied cognition. We will argue that although Hohwy may be correct to reject specific theses of embodied cognition, others are in fact implied by the PEM framework and may contribute to its development. We present the metaphor of the “body as a laboratory” in order to highlight wha...
Courtesy of its free energy formulation, the hierarchical predictive processing theory of the brain (PTB) is often claimed to be a grand unifying theory. To test this claim, we examine a central case: activity of mesocorticolimbic dopaminergic (DA) systems. After reviewing the three most prominent hypotheses of DA activity—the anhedonia, incentive salience, and reward prediction error hypotheses—we conclude that the evidence currently vindicates explanatory pluralism. This vindication implies that the grand unifying claims of advocates of PTB are unwarranted. More generally, we suggest that the form of scientific progress in the cognitive sciences is unlikely to be a single overarching grand unifying theory.
Selectionist evolutionary theory has often been faulted for not making novel predictions that are surprising, risky, and correct. I argue that it in fact exhibits the theoretical virtue of predictive capacity in addition to two other virtues: explanatory unification and model fitting. Two case studies show the predictive capacity of selectionist evolutionary theory: parallel evolutionary change in E. coli, and the origin of eukaryotic cells through endosymbiosis.
A recent surge of work on prediction-driven processing models--based on Bayesian inference and representation-heavy models--suggests that the material basis of conscious experience is inferentially secluded and neurocentrically brain bound. This paper develops an alternative account based on the free energy principle. It is argued that the free energy principle provides the right basic tools for understanding the anticipatory dynamics of the brain within a larger brain-body-environment dynamic, viewing the material basis of some conscious experiences as extensive--relational and thoroughly world-involving.
Both mindreading and stereotyping are forms of social cognition that play a pervasive role in our everyday lives, yet too little attention has been paid to the question of how these two processes are related. This paper offers a theory of the influence of stereotyping on mental-state attribution that draws on hierarchical predictive coding accounts of action prediction. It is argued that the key to understanding the relation between stereotyping and mindreading lies in the fact that stereotypes centrally involve character-trait attributions, which play a systematic role in the action–prediction hierarchy. On this view, when we apply a stereotype to an individual, we rapidly attribute to her a cluster of generic character traits on the basis of her perceived social group membership. These traits are then used to make inferences about that individual’s likely beliefs and desires, which in turn inform inferences about her behavior.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP, in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
In the last two decades, philosophy of neuroscience has predominantly focused on explanation. Indeed, it has been argued that mechanistic models are the standards of explanatory success in neuroscience over, among other things, topological models. However, explanatory power is only one virtue of a scientific model. Another is its predictive power. Unfortunately, the notion of prediction has received comparatively little attention in the philosophy of neuroscience, in part because predictions seem disconnected from interventions. In contrast, we argue that topological predictions can and do guide interventions in science, both inside and outside of neuroscience. Topological models allow researchers to predict many phenomena, including diseases, treatment outcomes, aging, and cognition, among others. Moreover, we argue that these predictions also offer strategies for useful interventions. Topology-based predictions play this role regardless of whether they do or can receive a mechanistic interpretation. We conclude by making a case for philosophers to focus on prediction in neuroscience in addition to explanation alone.
Several authors have claimed that prediction is essentially impossible in the general theory of relativity, the case being particularly strong, it is said, when one fully considers the epistemic predicament of the observer. Each of these claims rests on the support of an underdetermination argument and a particular interpretation of the concept of prediction. I argue that these underdetermination arguments fail and depend on an implausible explication of prediction in the theory. The technical results adduced in these arguments can be related to certain epistemic issues, but can only be misleadingly or mistakenly characterized as related to prediction.
Testimony about the future dangerousness of a person has become a central staple of many judicial processes. In settings such as bail, sentencing, and parole decisions, in rulings about the civil confinement of the mentally ill, and in custody decisions in a context of domestic violence, the assessment of a person’s propensity towards physical or sexual violence is regarded as a deciding factor. These assessments can be based on two forms of expert testimony: actuarial or clinical. The purpose of this paper is to examine the scientific and epistemological basis of both methods of prediction or risk assessment. My analysis will reveal that this kind of expert testimony is scientifically baseless. The problems I will discuss will generate a dilemma for factfinders: on the one hand, given the weak predictive abilities of the branches of science involved, they should not admit expert clinical or actuarial testimony as evidence; on the other hand, there is a very strong tradition and a vast jurisprudence that supports the continued use of this kind of expert testimony. It is a clear case of the not so uncommon conflict between science and legal tradition.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the reasons for valuing prediction available. A third path is presented, Pluralist Instrumental Predictivism; PIP for short.
To succeed, political science usually requires either prediction or contextual historical work. Both of these methods favor explanations that are narrow-scope, applying to only one or a few cases. Because of the difficulty of prediction, the main focus of political science should often be contextual historical work. These epistemological conclusions follow from the ubiquity of causal fragility, under-determination, and noise. They tell against several practices that are widespread in the discipline: wide-scope retrospective testing, such as much large-n statistical work; lack of emphasis on prediction; and resources devoted to ‘pure theory’ divorced from frequent empirical application. I illustrate, via Donatella della Porta’s work on political violence, the important role that is still left for theory. I conclude by assessing the scope for political science to offer policy advice.
Recent theory suggests that action prediction relies on a motor emulation mechanism that works by mapping observed actions onto the observer's action system, so that predictions can be generated using the same predictive mechanisms that underlie action control. This suggests that action prediction may be more accurate when there is a more direct mapping between the stimulus and the observer. We tested this hypothesis by comparing prediction accuracy for two stimulus types: a mannequin stimulus, which contained information about the effectors used to produce the action, and a point stimulus, which contained identical dynamic information but no effector information. Prediction was more accurate for the mannequin stimulus. However, this effect was dependent on the observer having previous experience performing the observed action. This suggests that experienced and naïve observers might generate predictions in qualitatively different ways, which may relate to the presence of an internal representation of the action laid down through action performance.
Abstract: This research trains an artificial neural network to predict the miles-per-gallon (MPG) rate of present and forthcoming automobiles, aiming for a relatively accurate approximation of the actual value that can inform the later design and manufacture of vehicles. The ANN is trained to capture the relationship between the stated vehicle attributes and MPG, using the gradient descent algorithm and a normalized squared-error measure, together with parameter-norm, scaling-layer, and bounded-layer rules. The resulting system should be able to produce robust approximations that come close to the actual output estimate.
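To make the intended pipeline concrete, here is a minimal sketch of an ANN regressor for MPG trained by gradient descent with input scaling, in the spirit of the abstract; the file name and attribute list follow the classic Auto MPG layout and are assumptions, and the abstract's own setup (normalized squared error, scaling and bounding layers) is only approximated.

```python
# Minimal sketch (assumptions noted above): ANN regression of MPG from vehicle attributes.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

cars = pd.read_csv("auto_mpg.csv")                 # hypothetical file name
features = ["cylinders", "displacement", "horsepower", "weight",
            "acceleration", "model_year"]          # assumed attribute names
X, y = cars[features], cars["mpg"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# solver="sgd" fits the network by gradient descent on a squared-error loss;
# StandardScaler stands in for the scaling layer mentioned in the abstract.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), solver="sgd",
                 learning_rate_init=0.01, max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, model.predict(X_test)))
```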
Alan Hájek launches a formidable attack on the idea that deliberation crowds out prediction – that when we are deliberating about what to do, we cannot rationally accommodate evidence about what we are likely to do. Although Hájek rightly diagnoses the problems with some of the arguments for the view, his treatment falls short in crucial ways. In particular, he fails to consider the most plausible version of the view, the best argument for it, and why anyone would ever believe it in the first place. In doing so, he misses a deep puzzle about deliberation and prediction – a puzzle which all of us, as agents, face, and which we may be able to resolve by recognizing the complicated relationship between deliberation and prediction.
Abstract: Breast cancer is the most commonly identified cancer among women and a main reason for the increasing rate of mortality among women. Diagnosis of breast cancer takes time, and given the importance of the problem it is necessary to develop a system that can automatically diagnose breast cancer in its early stages. Many machine learning algorithms have been used for the detection of breast cancer. The Wisconsin Breast Cancer Dataset, which contains 699 samples and 10 features, has been used. The paper proposes an Artificial Neural Network model implemented with Just Neural Networks (JNN) using the dataset collected from the UCI machine learning repository. The proposed model has been trained and validated. The accuracy rate obtained from the proposed ANN model was 99.57%.
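For a runnable point of comparison, the following minimal sketch trains an ANN classifier on the Wisconsin diagnostic breast-cancer data bundled with scikit-learn (569 samples, 30 features); note this is a related but different version of the 699-sample, 10-feature dataset the abstract uses, and the JNN model itself is not reproduced.

```python
# Minimal sketch: ANN classifier for breast-cancer diagnosis on the scikit-learn
# Wisconsin *diagnostic* dataset (not the 699-sample dataset used in the paper).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))
```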
One of the leading and central figures in research on delusions, Max Coltheart, presents and summarises his work to date in a short text. Miyazono and Bortolotti present an interesting argument aimed at the charges against the doxastic concept of delusions. Adams, Brown and Friston showcase a predictive-Bayesian concept of delusions. Young criticizes the current changes in the two-factor account of delusions and argues that the role of experience should not be dismissed within it. Kapusta presents an interesting, phenomenological approach to delusions, rooted in the classic works of Karl Jaspers. In the last article, Carruthers takes a look at delusions from a different perspective. He uses them in order to show the weakness of the sense-of-agency concept as proposed by Wegner. The issue also contains an interview with Jakob Hohwy. In Hohwy’s still-recent book, we can find an interesting, predictive approach to delusions. Hohwy points towards the unobvious connections between delusions and illusions.
Abstract: The aim of analyzing the Goodreads dataset is to get a fair idea of the relationships between the multiple attributes a book might have, such as the aggregate rating of each book, the trend of authors over the years, and books with numerous languages. With over a hundred thousand ratings, some books simply keep becoming more popular by the day. We propose an Artificial Neural Network (ANN) model for predicting the overall rating of books. The prediction is based on the features (bookID, title, authors, isbn, language_code, isbn13, # num_pages, ratings_count, text_reviews_count), which were used as input variables, with (average_rating) as the output variable for our ANN model. Our model was created, trained, and validated in the JNN environment using the “Goodreads-books” dataset. Model evaluation showed that the ANN model is able to correctly predict 99.78% of the validation samples.
Abstract: In this research, an Artificial Neural Network (ANN) model was developed and tested to predict birth weight. A number of factors that may affect birth weight were identified. Factors such as smoking, race, age, weight (lbs) at the last menstrual period, hypertension, uterine irritability, and the number of physician visits in the first trimester, among others, were used as input variables for the ANN model. A model based on a multi-layer topology was developed and trained using data from birth cases in hospitals. Evaluation on the test dataset shows that the ANN model is capable of correctly predicting birth weight with 100% accuracy.
The establishment of transport infrastructure is usually preceded by an EIA procedure, which should determine amphibian breeding sites and migration routes. However, evaluation is very difficult due to the large number of habitats spread over a vast area and the limited time available for field work. An Artificial Neural Network (ANN) is proposed for predicting the presence of amphibian species near water reservoirs based on features obtained from GIS systems and satellite images. The dataset was collected from the UCI Machine Learning repository and poses a multi-label classification problem. After preprocessing the data, the proposed model was trained and evaluated. The accuracy of the proposed model for predicting the presence of amphibian species was 100%.
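Since the task is multi-label (one binary indicator per species), a sketch of that setup may help; the file name and species-column names below are assumptions, and the paper's own preprocessing is not reproduced.

```python
# Minimal sketch: multi-label ANN for amphibian presence near reservoirs.
# Label column names are assumed; MLPClassifier handles a binary label matrix directly.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("amphibians.csv")                        # hypothetical file name
label_cols = ["green_frogs", "brown_frogs", "common_toad", "fire_bellied_toad",
              "tree_frog", "common_newt", "great_crested_newt"]  # assumed names
X = df.drop(columns=label_cols).select_dtypes("number")   # GIS/satellite features
Y = df[label_cols]                                        # one binary column per species

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
model.fit(X_train, Y_train)
# score() reports exact-match (subset) accuracy for multi-label targets.
print("subset accuracy:", model.score(X_test, Y_test))
```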
Abstract: In this research, an Artificial Neural Network (ANN) model was developed and validated to predict the efficiency of antibiotics in treating various bacteria types. The attributes taken into account are organism name, specimen type, and antibiotic name as inputs, and susceptibility as the output. A model with one input layer, one hidden layer, and one output layer was developed and trained using data from the Queensland government's website. The evaluation shows that the proposed ANN model, built with the JNN tool, is capable of correctly predicting the susceptibility of organisms to antibiotics with 94.17% accuracy.
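Because all three inputs here are categorical, the usual move is to one-hot encode them before the network; a minimal sketch of that pipeline follows (column and file names are assumptions, and the paper used the JNN tool rather than scikit-learn).

```python
# Minimal sketch: ANN on purely categorical inputs via one-hot encoding.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("antibiograms.csv")                   # hypothetical file name
X = df[["organism", "specimen_type", "antibiotic"]]    # assumed column names
y = df["susceptibility"]                               # e.g. susceptible / resistant

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),            # categorical -> binary inputs
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```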
Abstract: Breast cancer is reported to be the most common cancer type among women worldwide, and it has the second-highest fatality rate among women across all cancer types. Notwithstanding all the progress made in prevention and early intervention, early prognosis and survival prediction rates are still not sufficient. In this paper, we propose an ANN model which outperforms all previous supervised learning methods by reaching 99.57% accuracy on the Wisconsin Breast Cancer dataset. Experimental results on Haberman’s Breast Cancer Survival dataset show the superiority of the proposed method, which reaches 88.24% accuracy. These are the best reported results obtained from an Artificial Neural Network in the JNN environment without any preprocessing of the dataset.
Integrating the concept of place meanings into protected area management has been difficult. Across a diverse body of social science literature, challenges in the conceptualization and application of place meanings continue to exist. However, focusing on relationships in the context of participatory planning and management allows protected area managers to bring place meanings into professional judgment and practice. This paper builds on work that has outlined objectives and recommendations for bringing place meanings, relationships, and lived experiences to the forefront of land-use planning and management. It proposes the next steps in accounting for people’s relationships with protected areas and their relationships with protected area managers. Our goals are to 1) conceptualize this relationship framework; 2) present a structure for application of the framework; and 3) demonstrate the application in a specific protected area context, using an example from Alaska. We identify three key target areas of information and knowledge that managers will need to sustain quality relationship outcomes at protected areas. These targets are recording stories or narratives, monitoring public trust in management, and identifying and prioritizing threats to relationships. The structure needed to apply this relationship-focused approach requires documenting and following individual relationships with protected areas in multiple ways. The goal of this application is not to predict relationships, but instead to gain a deeper understanding of how and why relationships develop and change over time. By documenting narratives of individuals, managers can understand how relationships evolve over time and the role they play in individuals’ lives. By understanding public trust, the shared values and goals of individuals and managers can be observed. By identifying and prioritizing threats, managers can pursue efforts that steward relationships while allowing for the protection of experiences and meanings. The collection and interpretation of these three information targets can then be integrated and implemented within planning and management strategies to achieve outcomes that are beneficial for resource protection, visitor experiences, and stakeholder engagement. By investing in this approach, agencies will gain greater understanding and usable knowledge towards the achievement of quality relationships. It represents an investment in both place relationships and public relations. By integrating such an approach into planning and management, protected area managers can represent the greatest diversity of individual place meanings and connections. Keywords: relationships, place meanings, trust, narratives, planning, protected areas.
The relative efficiency of an action is a central criterion in action control and can be used to predict others’ behavior. Yet, it is unclear when the ability to predict and reason about the efficiency of others’ actions develops. In three main and two follow-up studies, 3- to 6-year-old children (n = 242) were confronted with vignettes in which protagonists could take a short (efficient) path or a long path. Children predicted which path the protagonist would take and why the protagonist would take a specific path. The 3-year-olds did not take efficiency into account when making decisions even when there was an explicit goal, the task was simplified and made more salient, and children were questioned after exposure to the agent’s action. Four years is a transition age for rational action prediction, and the 5-year-olds reasoned about the efficiency of actions before relying on them to predict others’ behavior. Results are discussed within a representational redescription account.
According to Steiner (1998), in contemporary physics new important discoveries are often obtained by means of strategies which rely on purely formal mathematical considerations. In such discoveries, mathematics seems to have a peculiar and controversial role, which apparently cannot be accounted for by means of standard methodological criteria. M. Gell-Mann and Y. Ne'eman's prediction of the Ω− particle is usually considered a typical example of application of this kind of strategy. According to Bangu (2008), this prediction is apparently based on the employment of a highly controversial principle—what he calls the “reification principle”. Bangu himself takes this principle to be methodologically unjustifiable, but still indispensable to make the prediction logically sound. In the present paper I will offer a new reconstruction of the reasoning that led to this prediction. By means of this reconstruction, I will show that we do not need to postulate any “reificatory” role of mathematics in contemporary physics and I will contextually clarify the representative and heuristic role of mathematics in science.
Abstract: Four main forms of the Doomsday Argument (DA) exist—Gott’s DA, Carter’s DA, Grace’s DA and the Universal DA. All four forms use different probabilistic logic to predict that the end of human civilization will happen unexpectedly soon, based on our early location in human history. There are hundreds of publications about the validity of the Doomsday argument. Most of the attempts to disprove the Doomsday Argument have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which uses the idea of logical uncertainty over the DA’s validity, estimated on the basis of a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for the “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that the DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts that there is a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know from the analysis of different technological risks.
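For reference, Gott's version of the argument rests on a simple confidence interval: if our observation point is uniformly random within the total lifespan of a phenomenon, its remaining duration is bracketed by its past duration. A standard statement of that structure (not the meta-DA estimate discussed above):

```latex
% Gott's delta-t argument: with a uniformly random observation point,
%   t_past / 39 < t_future < 39 * t_past   holds with 95% confidence.
\[
  P\!\left( \tfrac{1}{39}\, t_{\mathrm{past}} \;<\; t_{\mathrm{future}} \;<\; 39\, t_{\mathrm{past}} \right) \;=\; 0.95,
\]
% where t_past is the phenomenon's age at the moment of observation and
% t_future is its remaining duration.
```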
Abstract: Prediction is an application of Artificial Neural Networks (ANNs). It is a supervised learning task, since the input and output attributes are predefined. A multi-layer ANN model is used for training, validating, and testing on the dataset. In this paper, a multi-layer ANN model was used to train and test the mushroom dataset to predict whether a mushroom is edible or poisonous. The Mushrooms dataset was prepared for training, and 8124 instances were used. The JNN tool was used for training and validating the dataset. The most important attributes of the dataset were identified, and the accuracy of predicting whether a mushroom is edible or poisonous was 100%.
Prediction Error Minimization theory (PEM) is one of the most promising attempts to model perception in the current science of mind, and it has recently been advocated by prominent philosophers such as Andy Clark and Jakob Hohwy. Briefly, PEM maintains that “the brain is an organ that on average and over time continually minimizes the error between the sensory input it predicts on the basis of its model of the world and the actual sensory input” (Hohwy 2014, p. 2). An interesting debate has arisen with regard to which is the more adequate epistemological interpretation of PEM. Indeed, Hohwy maintains that, given that PEM supports an inferential view of perception and cognition, PEM has to be considered as conveying an internalist epistemological perspective. Contrary to this view, Clark maintains that it would be incorrect to interpret in such a way the indirectness of the link between the world and our inner model of it, and that PEM may well be combined with an externalist epistemological perspective. The aim of this paper is to assess those two opposite interpretations of PEM. Moreover, it will be suggested that Hohwy’s position may be considerably strengthened by adopting Carlo Cellucci’s view on knowledge (2013).
I propose an account of the speech act of prediction that denies that the contents of prediction must be about the future and illuminates the relation between prediction and assertion. My account is a synthesis of two ideas: (i) that what is in the future in prediction is the time of discovery and (ii) that, as Benton and Turri recently argued, prediction is best characterized in terms of its constitutive norms.
Can purely predictive models be useful in investigating causal systems? I argue ‘yes’. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
An exciting theory in neuroscience is that the brain is an organ for prediction error minimization (PEM). This theory is rapidly gaining influence and is set to dominate the science of mind and brain in the years to come. PEM has extreme explanatory ambition, and profound philosophical implications. Here, I assume the theory, briefly explain it, and then I argue that PEM implies that the brain is essentially self-evidencing. This means it is imperative to identify an evidentiary boundary between the brain and its environment. This boundary defines the mind-world relation, opens the door to skepticism, and makes the mind transpire as more inferentially secluded and neurocentrically skull-bound than many would nowadays think. Therefore, PEM somewhat deflates contemporary hypotheses that cognition is extended, embodied and enactive; however, it can nevertheless accommodate the kinds of cases that fuel these hypotheses.
There is a long-standing disagreement in the philosophy of probability and Bayesian decision theory about whether an agent can hold a meaningful credence about an upcoming action, while she deliberates about what to do. Can she believe that it is, say, 70% probable that she will do A, while she chooses whether to do A? No, say some philosophers, for Deliberation Crowds Out Prediction (DCOP), but others disagree. In this paper, we propose a valid core for DCOP, and identify terminological causes for some of the apparent disputes.
The view that folk psychology is primarily mindreading beliefs and desires has come under challenge in recent years. I have argued that we also understand others in terms of individual properties such as personality traits and generalizations from past behavior, and in terms of group properties such as stereotypes and social norms (Andrews 2012). Others have also argued that propositional attitude attribution isn’t necessary for predicting others’ behavior, because this can be done in terms of taking Dennett’s Intentional Stance (Zawidzki 2013), appealing to social structures (Maibom 2007), shared norms (McGeer 2007) or via solution based heuristics for reaching equilibrium between social partners (Morton 2003). But it isn’t only prediction that can be done without thinking about what others think; we can explain and understand people in terms of their personality traits, habitual behaviors, and social practices as well. The decentering of propositional attitude attributions goes hand in hand with a move away from taking folk psychology to be primarily a predictive device. While experiments examining folk psychological abilities in children, infants, and other species still rest on asking subjects to predict behavior, theoretical investigations as to the evolutionary function of folk psychology have stressed the role of explanation (Andrews 2012) and regulative functions (McGeer 2007, Zawidzki 2013, Fenici 2011). In this paper I argue that an explanatory role for folk psychology is also a regulative role, and that language is not required for these regulative functions. I will start by drawing out the relationship between prediction, explanation, and regulation of behavior according to both mindreading approaches to folk psychology and the pluralistic account I defend. I will argue that social cognition does not take the form of causal reasoning so much as it does normative reasoning, and will introduce the folk psychological spiral. Then I will examine the cognitive resources necessary for participating in the folk psychological spiral, and I will argue that these cognitive resources can be had without language. There is preliminary evidence that some other species understand one another through a normative lens that, through looping effects, creates expectations that community members strive to live up to.
I argue that folk psychology does not serve the purpose of facilitating prediction of others' behaviour but of facilitating cooperative action. (See my subsequent book *The Importance of Being Understood*.)
From the early-1950s on, F.A. Hayek was concerned with the development of a methodology of sciences that study systems of complex phenomena. Hayek argued that the knowledge that can be acquired about such systems is, in virtue of their complexity (and the comparatively narrow boundaries of human cognitive faculties), relatively limited. The paper aims to elucidate the implications of Hayek’s methodology with respect to the specific dimensions along which the scientist’s knowledge of some complex phenomena may be limited. Hayek’s fallibilism was an essential (if not always explicit) aspect of his arguments against the defenders of both socialism ([1935] 1948, [1940] 1948) and countercyclical monetary policy ([1975] 1978); yet, despite the fact that his conceptions of both complex phenomena and the methodology appropriate to their investigation imply that ignorance might beset the scientist in multiple respects, he never explicated all of these consequences. The specificity of a scientific prediction depends on the extent of the scientist’s knowledge concerning the phenomena under investigation. The paper offers an account of the considerations that determine the extent to which a theory’s implications prohibit the occurrence of particular events in the relevant domain. This theory of “predictive degree” both expresses and – as the phenomena of scientific prediction are themselves complex in Hayek’s sense – exemplifies the intuition that the specificity of a scientific prediction depends on the relevant knowledge available.
To automate the examination of massive amounts of sequence data for biological function, it is important to computerize interpretation based on empirical knowledge of sequence-function relationships. For this purpose, we have been constructing an Artificial Neural Network (ANN) by organizing various experimental and computational observations as a collection of ANN models. Here we propose an ANN model, using a dataset from the UCI Machine Learning Repository, for predicting the localization sites of proteins. We collected data for 336 proteins with known localization sites and divided them into training and validation data. The accuracy rate for predicting protein localization sites in cells was found to be 92.11%. This indicates that the Artificial Neural Network approach is powerful and flexible enough to be used in predicting protein localization sites.
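As the task is multi-class prediction of a localization site from numeric, sequence-derived features, a minimal sketch of such a classifier is given below; the file and column names are assumptions, and the paper's own collection of ANN models is not reproduced.

```python
# Minimal sketch: multi-class ANN for protein localization sites.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("ecoli.csv")                              # hypothetical file name
X = df.drop(columns=["sequence_name", "site"]).select_dtypes("number")
y = df["site"]                                             # localization site label

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_test, y_test))
```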
Election prediction by means of opinion polling is a rare empirical success story for social science. I examine the details of a prominent case, drawing two lessons of more general interest: Methodology over metaphysics. Traditional metaphysical criteria were not a useful guide to whether successful prediction would be possible; instead, the crucial thing was selecting an effective methodology. Which methodology? Success required sophisticated use of case-specific evidence from opinion polling. The pursuit of explanations via general theory or causal mechanisms, by contrast, turned out to be precisely the wrong path—contrary to much recent philosophy of social science.
Abstract: An effective cancer prediction system helps people learn their cancer risk at low cost and take appropriate decisions based on their risk status. The dataset is collected from the data world website. In this paper, we propose an Artificial Neural Network for detecting whether lung cancer is present in the human body. Symptoms such as yellow fingers, anxiety, chronic disease, fatigue, allergy, wheezing, coughing, shortness of breath, swallowing difficulty, and chest pain, together with other information about the patient, were used as input variables for the proposed ANN model. The proposed model was trained and validated using the lung cancer dataset, then evaluated and tested. The resulting accuracy rate was 99.01%.
The problem of rational prediction, launched by Wesley Salmon, is without doubt the Achilles heel of the critical method defended by Popper. In this paper, I assess the responses given both by Popper and by the Popperian Alan Musgrave to this problem. Both responses are inadequate, and thus Salmon's conclusion is reinforced: without appeal to induction, there is no way to make practical prediction a rational action. Furthermore, the critical method needs to be vindicated if its application is to be suitable for choosing among hypotheses. I argue that the nature of this vindication is such that it may also be applied to induction. Thus, being a Popperian is also a good reason to be an inductivist.
The major competing statistical paradigms share a common remarkable but unremarked thread: in many of their inferential applications, different probability interpretations are combined. How this plays out in different theories of inference depends on the type of question asked. We distinguish four question types: confirmation, evidence, decision, and prediction. We show that Bayesian confirmation theory mixes what are intuitively “subjective” and “objective” interpretations of probability, whereas the likelihood-based account of evidence melds three conceptions of what constitutes an “objective” probability.
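To make the mixing concrete: in the Bayesian confirmation calculation the prior is usually read subjectively (a degree of belief) while the likelihood is usually treated as objective (a model-given probability). A schematic gloss, not the authors' own formalism:

```latex
% Bayes' theorem with its ingredients labelled by their usual interpretation.
\[
  \underbrace{P(H \mid E)}_{\text{posterior}}
  \;=\;
  \frac{\overbrace{P(E \mid H)}^{\text{``objective'' likelihood}}\;
        \overbrace{P(H)}^{\text{``subjective'' prior}}}
       {P(E)} .
\]
```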
Kristin Andrews proposes a new framework for thinking about folk psychology, which she calls Pluralistic Folk Psychology. Her approach emphasizes kinds of psychological prediction and explanation that don't rest on propositional attitude attribution. Here I review some elements of her theory and find that, although the approach is very promising, there's still work to be done before we can conclude that the manners of prediction and explanation she identifies don't involve implicit propositional attitude attribution.