I explore a challenge that idealisations pose to scientific realism and argue that the realist can best accommodate idealisations by capitalising on certain modal features of idealised models that are underwritten by laws of nature.
Michael Strevens’s book Depth is a great achievement. To say anything interesting, useful and true about explanation requires taking on fundamental issues in the metaphysics and epistemology of science. So this book not only tells us a lot about scientific explanation, it has a lot to say about causation, lawhood, probability and the relation between the physical and the special sciences. It should be read by anyone interested in any of those questions, which presumably includes the vast majority of readers of this journal. One of its many virtues is that it lets us see more clearly what questions about explanation, causation, lawhood and so on need answering, and frames those questions in perspicuous ways. I’m going to focus on one of these questions, what I’ll call the Goldilocks problem. As it turns out, I’m not going to agree with all the details of Strevens’s answer to this problem, though I suspect that something like his answer is right. At least, I hope something like his answer is right; if it isn’t, I’m not sure where else we can look.
This paper adds to the philosophical literature on mechanistic explanation by elaborating two related explanatory functions of idealisation in mechanistic models. The first function involves explaining the presence of structural/organizational features of mechanisms by reference to their role as difference-makers for performance requirements. The second involves tracking counterfactual dependency relations between features of mechanisms and features of mechanistic explanandum phenomena. To make these functions salient, we relate our discussion to an exemplar from systems biological research on the mechanism for countering heat shock—the heat shock response system—in Escherichia coli bacteria. This research also reinforces a more general lesson: ontic constraint accounts in the literature on mechanistic explanation provide insufficiently informative normative appraisals of mechanistic models. We close by outlining an alternative view on the explanatory norms governing mechanistic representation.
"Semantic dispositionalism" is the theory that a speaker's meaning something by a given linguistic symbol is determined by her dispositions to use the symbol in a certain way. According to an objection by Kripke, further elaborated in Kusch :156–163, 2005), semantic dispositionalism involves ceteris paribus-clauses and idealisations, such as unbounded memory, that deviate from standard scientific methodology. I argue that Kusch misrepresents both ceteris paribus-laws and idealisation, neither of which factually "approximate" the behaviour of agents or the course of (...) events, but, rather, identify and isolate nature's component parts and processes. An analysis of current results in cognitive psychology vindicates the idealisations involved and certain counterfactual assumptions in science generally. In particular, results suggest that there can be causal continuity between the dispositional structure of actual objects and that of highly idealised objects. I conclude by suggesting that we can assimilate ceteris paribus-laws with disposition ascriptions insofar as they involve identical idealising assumptions. (shrink)
The paper deals with the methodological clash between idealism and anti-idealism in political philosophy, and highlights its importance for public reason (PR) and public justification (PJ) theorising. Upon reviewing the broader context, which harks back to Rawls’s notion of a realistic utopia, we focus on two major recent contributions to the debate in the work of David Estlund (the prototypical utopian) and Gerald Gaus (the cautious anti-utopian). While Estlund presents a powerful case on behalf of ideal theorising, claiming that motivational incapacity and other non-ideal features of “human nature” – the so-called bad facts – do not normally refute the desirability of highly utopian theories of justice, we show that Gaus is correct in stressing the importance of feasibility considerations, including empirical knowledge about human societies. Because moral disagreement is to be expected even among cognitively and morally excellent reasoners, we argue that Estlund’s search for Truth about justice must idealise away normative diversity as just another bad fact. This methodological dispute has important ramifications for current debates about PR and PJ as the grounds of liberal legitimacy. Because consensual approaches rely on strong idealisation, which results in the exclusion of numerous comprehensive doctrines from consideration, we conclude that convergence-based liberal political theory has a distinct advantage as regards exploiting normative diversity to the advantage of everyone.
The notion of understanding occupies an increasingly prominent place in contemporary epistemology, philosophy of science, and moral theory. A central and ongoing debate about the nature of understanding is how it relates to the truth. In a series of influential contributions, Catherine Elgin has used a variety of familiar motivations for antirealism in philosophy of science to defend a non-factive theory of understanding. Key to her position are: (i) the fact that false theories can contribute to the upwards trajectory of scientific understanding, and (ii) the essential role of inaccurate idealisations in scientific research. Using Elgin’s arguments as a foil, I show that a strictly factive theory of understanding has resources with which to offer a unified response to both the problem of idealisations and the role of false theories in the upwards trajectory of scientific understanding. Hence, strictly factive theories of understanding are viable notwithstanding these forceful criticisms.
This dissertation examines aspects of the interplay between computing and scientific practice. The appropriate foundational framework for such an endeavour is real computability rather than classical computability theory. This is so because the physical sciences, engineering, and applied mathematics mostly employ functions defined on continuous domains. But, contrary to the case of computation over the natural numbers, there is no universally accepted framework for real computation; rather, there are two incompatible approaches, computable analysis and the BSS model, both claiming to formalise algorithmic computation and to offer foundations for scientific computing.

The dissertation consists of three parts. In the first part, we examine what notion of 'algorithmic computation' underlies each approach and how it is respectively formalised. It is argued that the very existence of the two rival frameworks indicates that 'algorithm' is not one unique concept in mathematics, but is used in more than one way. We test this hypothesis for consistency with mathematical practice as well as with key foundational works that aim to define the term. As a result, new connections between certain subfields of mathematics and computer science are drawn, and a distinction between 'algorithms' and 'effective procedures' is proposed.

In the second part, we focus on the second goal of the two rival approaches to real computation; namely, to provide foundations for scientific computing. We examine both frameworks in detail, what idealisations they employ, and how they relate to the floating-point arithmetic systems used in real computers. We explore the limitations and advantages of both frameworks, and answer questions about which one is preferable for computational modelling and which one for addressing general computability issues.

In the third part, analog computing and its relation to analogue (physical) modelling in science are investigated. Based on some paradigmatic cases of the former, a certain view about the nature of computation is defended, and the indispensable role of representation in it is emphasized and accounted for. We also propose a novel account of the distinction between analog and digital computation and, based on it, we compare analog computational modelling to physical modelling. It is concluded that the two practices, despite their apparent similarities, are orthogonal.
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
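For readers who want the distinction at issue stated explicitly, here is the standard textbook formulation (not specific to Savage's axiomatisation). Finite additivity requires that for disjoint events $A$ and $B$,

$$P(A \cup B) = P(A) + P(B),$$

while countable additivity strengthens this to every sequence of pairwise disjoint events $A_1, A_2, \ldots$:

$$P\Big(\bigcup_{n=1}^{\infty} A_n\Big) = \sum_{n=1}^{\infty} P(A_n).$$

Savage's system famously requires only the former; the paper's final part explores what happens when the latter is imposed.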
It is usually accepted that unconditional statements are clearer and less problematic than conditional ones. This article goes against this popular belief by advancing the contrarian hypothesis that all unconditional statements can be reduced to conditional ones due to the way our assumptions support our assertions. In fact, considering the coherentist process by which most of our different beliefs mutually support themselves, the only genuine examples of unconditional statements are cases of self-justified beliefs, but these examples are controversial and few and far between. The distinction between unconditional and conditional statements is similar to the distinction between assumptions and premises in that it is a largely conventional idealisation that results from our attempts to limit epistemic complexity.
Following Nancy Cartwright and others, I suggest that most (if not all) theories incorporate, or depend on, one or more idealizing assumptions. I then argue that such theories ought to be regimented as counterfactuals, the antecedents of which are simplifying assumptions. If this account of the logical form of theories is granted, then a serious problem arises for Bayesians concerning the prior probabilities of theories that have counterfactual form. If no such probabilities can be assigned, then the posterior probabilities will be undefined, as the latter are defined in terms of the former. I argue here that the most plausible attempts to address the problem of probabilities of conditionals fail to help Bayesians, and, hence, that Bayesians are faced with a new problem. In so far as these proposed solutions fail, I argue that Bayesians must give up Bayesianism or accept the counterintuitive view that no theories that incorporate any idealizations have ever really been confirmed to any extent whatsoever. Moreover, as it appears that the latter horn of this dilemma is highly implausible, we are left with the conclusion that Bayesianism should be rejected, at least as it stands.
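The dependence the abstract relies on is visible in Bayes' theorem (standard form; the notation is illustrative, not the author's). For a theory $T$ and evidence $E$,

$$P(T \mid E) = \frac{P(E \mid T)\,P(T)}{P(E)},$$

so if no prior $P(T)$ can be assigned to a counterfactual-form theory $T$, the posterior $P(T \mid E)$ inherits that undefinedness.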
Sterelny (2003) develops an idealised natural history of folk-psychological kinds. He argues that belief-like states are natural elaborations of simpler control systems, called detection systems, which map directly from environmental cue to response. Belief-like states exhibit robust tracking (sensitivity to multiple environmental states) and response breadth (occasioning a wider range of behaviours). The development of robust tracking and response breadth depends partly on properties of the informational environment. In a transparent environment the functional relevance of states of the world is directly detectable. Outside transparent environments, selection can favour decoupled representations. Sterelny maintains that these arguments do not generalise to desire. Unlike the external environment, the internal processes of an organism, he argues, are selected for transparency. Parts of a single organism gain nothing from deceiving one another, but gain significantly from accurate signalling of their states and needs. Key conditions favouring the development of belief-like states are therefore absent in the case of desires. Here I argue that Sterelny’s reasons for saying that his treatment of belief does not generalise to motivation (desires, or preferences) are insufficient. There are limits to the transparency that internal environments can achieve. Even if there were not, tracking the motivational salience of external states suggests possible gains for systematic tracking of outcome values in any system in which selection has driven the production of belief-like states.
I argue that normative formal epistemology (NFE) is best understood as modelling, in the sense that this is the reconstruction of its methodology on which NFE is doing best. I focus on Bayesianism and show that it has the characteristics of modelling. But modelling is a scientific enterprise, while NFE is normative. I thus develop an account of normative models on which they are idealised representations put to normative purposes. Normative assumptions, such as the transitivity of comparative credence, are characterised as modelling idealisations motivated normatively. I then survey the landscape of methodological options: what might formal epistemologists be up to? I argue the choice is essentially binary: modelling or theorising. If NFE is theorising it is doing very poorly: generating false claims with no clear methodology for separating out what is to be taken seriously. Modelling, by contrast, is a successful methodology precisely suited to the management of useful falsehoods. Regarding NFE as modelling is not costless, however. First, our normative inferences are less direct and are muddied by the presence of descriptive idealisations. Second, our models are purpose-specific and limited in their scope. I close with suggestions for how to adapt our practice.
Epistemologists often appeal to the idea that a normative theory must provide useful, usable guidance to argue for one normative epistemology over another. I argue that this is a mistake. Guidance considerations have no role to play in theory choice in epistemology. I show how this has implications for debates about the possibility and scope of epistemic dilemmas, the legitimacy of idealisation in Bayesian epistemology, uniqueness versus permissivism, sharp versus mushy credences, and internalism versus externalism.
Taken as a model for how groups should make collective judgments and decisions, the ideal of deliberative democracy is inherently ambiguous. Consider the idealised case where it is agreed on all sides that a certain conclusion should be endorsed if and only if certain premises are admitted. Does deliberative democracy recommend that members of the group debate the premises and then individually vote, in the light of that debate, on whether or not to support the conclusion? Or does it recommend that members individually vote on the premises, and then let their commitment to the conclusion be settled by whether or not the group endorses the required premises? Deliberative-democratic theory has not addressed this issue, and this is a problem. The discursive dilemma of my title--a generalisation of the doctrinal paradox from analytical jurisprudence--shows that the procedures distinguished can come apart. Thus deliberative democrats must make up their minds on where they stand in relation to the issue; they cannot sit on the fence. This paper is an attempt to address the issue and look at the grounds on which it may be resolved.
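A standard illustration of how the two procedures come apart (the textbook doctrinal-paradox case, not an example drawn from the paper itself): suppose a three-member group agrees that conclusion C should be endorsed if and only if premises P and Q both are.

                P     Q     C (= P and Q)
    Member 1    Yes   Yes   Yes
    Member 2    Yes   No    No
    Member 3    No    Yes   No
    Majority    Yes   Yes   No

Voting premise by premise, majorities accept both P and Q, committing the group to C; voting directly on the conclusion, a majority rejects C. The two procedures thus deliver opposite verdicts on the same profile of individual judgments.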
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on an optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
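As a toy illustration of the Pareto-frontier idea at the heart of the game, here is a minimal sketch, assuming candidate explanations scored on the three dimensions the abstract names; the candidate names, scores, and selection routine are invented for illustration and are not the authors' implementation.

    def dominates(a, b):
        """True if score-tuple a is at least as good as b on every
        dimension and strictly better on at least one."""
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_frontier(scored):
        """Return the explanations whose scores no rival dominates."""
        return [name for name, s in scored.items()
                if not any(dominates(t, s) for n, t in scored.items() if n != name)]

    # Hypothetical (accuracy, simplicity, relevance) scores in [0, 1].
    explanations = {
        "full causal model":   (0.95, 0.20, 0.60),
        "linear surrogate":    (0.70, 0.90, 0.70),
        "single-feature rule": (0.40, 0.99, 0.80),
        "noisy heuristic":     (0.35, 0.50, 0.30),  # dominated by the surrogate
    }

    print(pareto_frontier(explanations))
    # -> ['full causal model', 'linear surrogate', 'single-feature rule']

The undominated set is the frontier: each member embodies a trade-off (accuracy against simplicity against relevance) that no rival improves on across the board, which is the sense in which the game's players search an explanation surface rather than a single best explanation.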
The tendency to idealise artificial intelligence as independent from human manipulators, combined with the growing ontological entanglement of humans and digital machines, has created an “anthrobotic” horizon, in which data analytics, statistics and probabilities throw our agential power into question. How can we avoid the consequences of a reified definition of intelligence as universal operation becoming imposed upon our destinies? It is here argued that the fantasised autonomy of automated intelligence presents a contradistinctive opportunity for philosophical consciousness to understand itself anew as holistic and co-creative, beyond the recent “analytic” moment of the history of philosophy. Here we introduce the concept of “crealectic intelligence”, a meta-analytic and meta-dialectic aspect of consciousness. Intelligent behaviour may consist in distinguishing discrete familiar parts or reproducible functions in the midst of noise via an analytic process of segmentation; intelligence may also manifest itself in the constitution of larger wholes and dynamic unities through a dialectic process of association or assemblage. But, by contrast, crealectic intelligence co-creates realities in the image of an ideal or truth, taking into account the desiring agent imbued with a sense of possibility, in a relationship not only with the Real but also with the creative sublime or “Creal”.
Scientific progress is a hot topic in the philosophy of science. However, as yet we lack a comprehensive philosophical examination of scientific progress. First, the recent debate pays too much attention to the epistemic approach and the semantic approach. Shan’s new functional approach and Dellsén’s noetic approach are still insufficiently assessed. Second, there is little in-depth analysis of progress in the history of the sciences. Third, many related philosophical issues are still to be explored. For example, what are the implications of scientific progress for the scientific realism/antirealism debate? Is the incommensurability thesis a challenge to scientific progress? What role do aesthetic values play in scientific progress? Does idealisation impede scientific progress? This book fills this gap. It offers a new assessment of the four main approaches to scientific progress (Part I). It also features eight historical case studies to investigate the notion of progress in different disciplines: physics, chemistry, evolutionary biology, seismology, psychology, sociology, economics, and medicine respectively (Part II). It discusses some issues related to scientific progress: scientific realism, incommensurability, values in science, idealisation, scientific speculation, interdisciplinarity, and scientific perspectivalism (Part III).
It is increasingly common for philosophers to rely on the notion of an idealised listener when explaining how the semantic values of context-sensitive expressions are determined. Some have identified the semantic values of such expressions, as used on particular occasions, with whatever an appropriately idealised listener would take them to be. Others have argued that, for something to count as the semantic value, an appropriately idealised listener should be able to recover it. Our aim here is to explore the range of ways that such idealisation might be worked out, and then to argue that none of these results in a very plausible theory. We conclude by reflecting on what this negative result reveals about the nature of meaning and responsibility.
Based on three recently published books on climate justice, this article reviews the field of climate ethics in light of developments of international climate politics. The central problem addressed is how idealised normative theories can be relevant to the political process of negotiating a just distribution of the costs and benefits of mitigating climate change. I distinguish three possible responses, that is, three kinds of non-ideal theories of climate justice: focused on (1) the injustice of some agents not doing their part; (2) the policy process and aiming to be realistic; and (3) grievances related to the transition to a clean-energy economy. The methodological discussion underpinning each response is innovative and should be of interest more generally, even though it is still underdeveloped. The practical upshot, however, is unclear: even non-ideal climate justice may be too disconnected from the fast-moving and messy climate circus.
The standard representation theorem for expected utility theory tells us that if a subject’s preferences conform to certain axioms, then she can be represented as maximising her expected utility given a particular set of credences and utilities—and, moreover, that having those credences and utilities is the only way that she could be maximising her expected utility. However, the kinds of agents these theorems seem apt to tell us anything about are highly idealised, being always probabilistically coherent with infinitely precise degrees of belief and full knowledge of all a priori truths. Ordinary subjects do not look very rational when compared to the kinds of agents usually talked about in decision theory. In this paper, I will develop an expected utility representation theorem aimed at the representation of those who are neither probabilistically coherent, logically omniscient, nor expected utility maximisers across the board—that is, agents who are frequently irrational. The agents in question may be deductively fallible, have incoherent credences, limited representational capacities, and fail to maximise expected utility for all but a limited class of gambles.
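Schematically, for a finite state space $S$, the theorem-type in question says: if preferences $\succeq$ over acts satisfy the axioms, then there exist a credence function $P$ and a utility function $U$ such that

$$f \succeq g \iff \sum_{s \in S} P(s)\,U(f(s)) \ge \sum_{s \in S} P(s)\,U(g(s)),$$

with $P$ unique and $U$ unique up to positive affine transformation. (This is the generic textbook form, not the paper's own weakened theorem, whose point is precisely to relax the coherence conditions built into $P$ and the maximisation built into the biconditional.)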
In this article, I criticize Véronique Zanetti on the topic of moral compromise. As I understand Zanetti, a compromise could only be called a “moral compromise” if (i) it does not originate under coercive conditions, (ii) it involves conflict whose subject matter is moral, and (iii) “the parties support the solution found for what they take to be moral reasons rather than strategic interests.” I offer three criticisms of Zanetti. First, Zanetti ignores how some parties may not have reason to seek social peace at all. Zanetti’s claim that there is consensus on the aim of social peace can involve idealising away from disagreement in a manner that Zanetti accuses Rawls of. Second, even if there is consensus on the aim of seeking social peace, this leaves open the possibility of disagreement about which society different people should belong to. This idealises away from real-world conflict concerning borders. Indeed, Zanetti does not mention that her ‘central example’ of moral disagreement, the German Abortion compromise, was enacted in the wake of German reunification. Third, there are at least two things that can be called the ‘German Abortion compromise.’ The compromise that Zanetti speaks of was imposed by the German Federal Constitutional Court. The court declared unconstitutional a law passed in 1992 that had been negotiated in parliament. Zanetti does not dwell on this lack of democratic credentials. Even the substance of the court-imposed solution is itself a dubious example of a moral compromise between parties based on what is acceptable to their reason.
One of the philosophical discussions stimulated by the recent scientific study of psychopathy concerns the mental illness status of this construct. This paper contributes to this debate by recommending a way of approaching the problem at issue. By relying on and integrating the seminal work of the philosopher of psychiatry Bill Fulford, I argue that a mental illness is a harmful unified construct that involves failures of ordinary doing. Central to the present proposal is the idea that the notion of failure of ordinary doing, besides the first-personal experience of the patient, also has to be spelled out by referring to a normative account of idealised conditions of agency. This account would have to state, in particular, the conditions which are required for moral responsibility. I maintain that psychopathy is a unified enough construct that involves some harms. The question whether the condition also involves a failure of ordinary doing, as this notion is understood in this paper, is not investigated here.
There has been a growing trend to include non-causal models in accounts of scientific explanation. A worry addressed in this paper is that without a higher threshold for explanation there are no tools for distinguishing between models that provide genuine explanations and those that provide merely potential explanations. To remedy this, a condition is introduced that extends a veridicality requirement to models that are empirically underdetermined, highly idealised, or otherwise non-causal. This condition is applied to models of electroweak symmetry breaking beyond the Standard Model.
In his critical notice entitled ‘An Improved Whole Life Satisfaction Theory of Happiness?’, focusing on my article that was previously published in this journal, Fred Feldman raises an important objection to a suggestion I made about how best to formulate whole life satisfaction theories of happiness. According to my proposal, happiness is a matter of whether an idealised version of you would judge that your actual life corresponds to the life-plan which he or she has constructed for you on the basis of your cares and concerns. Feldman argues that either the idealised version will include in the relevant life-plan only actions that are possible for you to do, or he or she will also include actions and outcomes that are not available to you in the real world. He then uses examples to argue that both of these alternatives have implausible consequences. In response to this objection, I argue that what is included in the relevant life-plan depends on what you most fundamentally desire, and that this constraint is enough to deal with Feldman’s new cases.
Modeling is central to scientific inquiry. It also depends heavily upon the imagination. In modeling, scientists seem to turn their attention away from the complexity of the real world to imagine a realm of perfect spheres, frictionless planes and perfect rational agents. Modeling poses many questions. What are models? How do they relate to the real world? Recently, a number of philosophers have addressed these questions by focusing on the role of the imagination in modeling. Some have also drawn parallels between models and fiction. This chapter examines these approaches to scientific modeling and considers the challenges they face.
USKALI MÄKI (Helsinki, 1951) is a philosopher of science and a social scientist, and one of the forerunners of the strong wave of research on the philosophy and methodology of economics that has been expanding during the last three decades. His research interests and academic contributions cover many topics in the philosophy of economics, such as realism and realisticness, idealisation, scientific modelling, causation, explanation, rhetoric, the sociology and economics of economics, and the foundations of new institutional and Austrian economics. He is a coeditor of The handbook of economic methodology (1998), Economics and methodology: crossing boundaries (1998), and Rationality, institutions and economic methodology (1993), and the editor of two compilations of essays that have become highly influential in shaping the field: The economic world view: studies in the ontology of economics (2001), and Fact and fiction in economics: realism, models, and social construction (2002).
A difficult problem for contractualists is how to provide an interpretation of the contractual situation that is both subject to appropriately stringent constraints and yet also appropriately sensitive to certain features of us as we actually are. My suggestion is that we should embrace a model of contractualism that is structurally analogous to the “advice model” of the ideal observer theory famously proposed by Michael Smith (1994; 1995). An advice model of contractualism is appealing since it promises to deliver a straightforward solution to the so-called “conditional fallacy.” But it faces some formidable challenges. On the face of it, it seems to be straightforwardly conceptually incoherent. And it seems to deliver a solution to the conditional fallacy at the cost of being vulnerable to what I shall call “the concessional fallacy.” I shall consider how, if at all, these challenges are to be met. I shall then conclude by considering what this might mean for the so-called “ideal/non-ideal theory” issue.
Today professionals have to deal with more uncertainties in their field than before. We live in complex and rapidly changing environments. The British philosopher Ronald Barnett adds the term ‘supercomplexity’ to highlight the fact that ‘we can no longer be sure how even to describe the world that faces us’ (Barnett, 2004). Uncertainty is, nevertheless, not a highly appreciated notion. An obvious response to uncertainty is to reduce it—or even better, to wipe it away. The assumption of this approach is that uncertainty has no advantages. This assumption is, however, not correct, as several contemporary authors have argued. Rather than problematising uncertainty, I will investigate the pros and cons of embedding uncertainty in the educational practice of professional higher education. In order to thoroughly explore the possibilities and challenges that uncertainty poses in education, I will dwell on the radical ideas on uncertainty of the German philosopher Friedrich Nietzsche. In The Birth of Tragedy (1872) he recognises two forces: the Apollinian, that is, the pursuit of order and coherence, and the Dionysian, that is, the human tendency to nullify all systematisation and idealisation. Uncertainty is part of the Dionysian. I will argue that when educators take Nietzsche's plea to make room for the Dionysian to heart, they can better prepare students for an uncertain world. If, and only if, students are encouraged to deploy both tendencies—the Apollinian and the Dionysian—can they become professionals who are able to stand their ground in an uncertain and changing (professional) world.
This article considers attempts to include the issues of ageing and ill health in a Rawlsian framework. It first considers Norman Daniels’ Prudential Lifespan Account, which reduces intergenerational questions to issues of intrapersonal prudence from behind a Rawlsian veil of ignorance. This approach faces several problems of idealisation, including those raised by Hugh Lazenby, because it must assume that everyone will live to the same age, undermining its status as a prudential calculation. I then assess Lazenby's account, which applies Rawls’ general theory of justice more directly to healthcare. Lazenby suggests that we should apply Rawls’ difference principle – which claims that any inequalities in social goods must benefit the worst off – to conclude that we should significantly prioritise treatment of young patients. I argue first that the existence of young terminally ill patients undermines a number of Rawlsian arguments for the difference principle. I then argue that the structure of ageing undermines the Rawlsian decision mechanism of the ‘veil of ignorance’ on which Lazenby relies. I conclude that age and terminal illness present significant problems for any comprehensive Rawlsian account of justice.
The speculative passage from the category of the bad infinite to the true infinite remains one of the most important in the Science of Logic. As is well known, Hegel explains this passage through his theory of the ideality of the finite. Yet, owing to its complex structure, the emergence of the true infinite within the finite through idealisation can be taken for an abstract process consisting merely in suppressing the duality of infinity. This article therefore examines why the idealisation of the true infinite signifies neither a simple neutralisation of the category of finitude nor an external infinitisation of it, but a dynamic process that infinitises itself by suppressing the static opposition of the finite and the infinite.
In the paper, various notions of the logical semiotic sense of linguistic expressions – namely, syntactic and semantic, intensional and extensional – are considered and formalised on the basis of a formal-logical conception of any language L, characterised categorially in the spirit of certain of Husserl's ideas of pure grammar, the Leśniewski-Ajdukiewicz theory of syntactic/semantic categories and, in accordance with Frege's ontological canons, Bocheński's and some of Suszko's ideas of the language adequacy of expressions of L. The adequacy ensures their unambiguous syntactic and semantic senses and their mutual syntactic and semantic correspondence, guaranteed by the acceptance of a postulate of categorial compatibility of the syntactic and semantic categories of expressions of L. This postulate defines the unification of these three logical senses. Three principles of compositionality follow from this postulate: one syntactic and two semantic ones already known to Frege. They are treated as conditions of homomorphism of the partial algebra of L into algebraic models of L: syntactic, intensional and extensional. In the paper, they are applied to some expressions with quantifiers. The language adequacy connected with the logical senses described in this logical conception of language L is, obviously, an idealisation. The syntactic and semantic unambiguity of its expressions is not, of course, a feature of natural languages, but every syntactically and semantically ambiguous expression of such languages may be treated as a schema representing all of its interpretations that are unambiguous expressions.
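The homomorphism condition behind these compositionality principles can be written schematically (the notation here is illustrative, not the paper's own): for each syntactic operation $\circ$ of L and its counterpart operation $\ast$ in a given model,

$$\mu(\alpha \circ \beta) = \mu(\alpha) \ast \mu(\beta),$$

where $\mu$ assigns to every well-formed expression its value in the relevant model (syntactic, intensional or extensional), so that the value of a compound is determined by the values of its parts.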
This paper explores the question of what logic is not. It argues against the widespread assumptions that logic is: a model of reason; a model of correct reason; the laws of thought; or indeed is related to reason at all such that the essential natures of the two are crucially or essentially co-illustrative. I note that, due to such assumptions, our current understanding of the nature of logic itself is thoroughly entangled with the nature of reason. I show that most arguments for the presence of any sort of essential relationship between logic and reason face intractable problems and demands, and fall well short of addressing them. These arguments include those for the notion that logic is normative for reason (or that logic and correct reason are in some way the same thing), that logic is some sort of description of correct reason, and that logic is an abstracted or idealised version of correct reason. A strong version of logical realism is put forward as an alternative view, and is briefly explored.
What is special about the philosophy of history when the history is about science? I shall focus on an impasse between two perspectives — one seeking an ideal of rationality to guide scientific practices, and one stressing the contingency of the practices. They disagree on what this contingency means for scientific norms. Their impasse underlies some fractious relations within History and Philosophy of Science. Since the late 1960s, this interdisciplinary field has been described, variously, as an “intimate relationship or marriage of convenience”, a “marriage for the sake of reason”, a “troubling interaction”, a “precarious relationship”, or one at risk of “epistemological derangement”.

My paper has three aims. First, I characterise two idealised perspectives in this impasse: the sublimers and the subversives. Then I describe a curious dynamic in which each perspective accuses the other of, simultaneously, claiming too little and too much. Subliming is dismissed for being scholastic, and condemned for being imperialist. Subverting is disparaged for being merely sociological, then criticised for being irrational or relativist. Here I draw on and extend the analyses of science studies in Kitcher, Zammito and others. Second, I argue that sublimers and subversives often talk past each other. Their mutual dismissals and condemnations arise partly because each perspective misconstrues the other’s claims. Each fails to see that the other is interested in different problems and interprets concepts differently. Third, I suggest that my analysis clarifies some current disagreements and old debates about contingency in science. It may also apply to debates on the contingency of norms in non-scientific realms.
Most philosophical accounts of scientific models assume that models represent some aspect, or some theory, of reality. They also assume that interpretation plays only a supporting role. This paper challenges both assumptions. It proposes that models can be used in science to interpret reality. (a) I distinguish these interpretative models from representational ones. They find new meanings in a target system’s behaviour, rather than fitting its parts together. They are built through idealisation, abstraction and recontextualisation. (b) To show how interpretative models work, I offer a case study on the scientific controversy over foetal pain. It highlights how pain scientists use conflicting models to interpret the human foetus and its behaviour, and thereby to support opposing claims about whether the foetus can feel pain. (c) I raise a sceptical worry and a methodological challenge for interpretative models. To address the latter, I use my case study to compare how interpretative and representational models ought to be evaluated.
This thesis addresses the question of the finite and the infinite in Hegel's philosophy. Its aim is twofold. First, it traces the influence exerted by ancient philosophy (chiefly Plato and Aristotle) and by modern philosophy (essentially Kant and certain post-Kantians) on Hegel's elaboration of the categories of finitude and infinity. Second, it studies the systematic development of the logic of Hegelian infinity in the light of this influence. Through a historical and critical approach, the thesis examines how Hegel resolves the traditional opposition of the finite and the infinite with his theory of the two infinites. In the light of the conceptions of finite infinity (the bad infinite) and of truly infinite infinity (the true infinite), Hegel shows that the process of the determination of the finite is a process of idealisation that suppresses the contradiction of the finite and the infinite. The inquiry into the concepts of finitude and infinity thus reveals that speculative ideality is, for Hegel, a response not only to the traditional problem of their articulation but also, more generally, to the problems raised by the characterisation of ancient and modern idealisms.
This article draws upon the work of Timothy Morton and Slavoj Žižek in order to critically examine how mountain bike trail builders orientated themselves within nature relations. Beginning with a discussion of the key ontological differences between Morton’s object-oriented ontology and Žižek’s blend of Hegelian-Lacanianism, we explore how Morton’s dark ecology and Žižek’s account of the radical contingency of nature can offer parallel paths to achieving an ecological awareness that neither idealises nor mythologises nature, but instead acknowledges its strange and contingent form. Empirically, we support this theoretical approach with interviews with twenty mountain bike trail builders. These interviews depicted an approach to trail building that was ambivalently formed in/with the contingency of nature. In doing so, the trail builders acted with a sense of temporal awareness that accepted the radical openness of nature, presenting a ‘symbolic framework’ that was amiable to nature’s ambivalent, strange and contingent form. In conclusion, we argue that we should not lose sight of the ambivalences and strange surprises that emanate from our collective and unpredictable attempts to symbolize nature, and that such knowledge can coincide with Morton’s ‘dark ecology’ – an ecological awareness that remains radically open to our ecological existence.
Disagreement about how best to think of the relation between theories and the realities they represent has a longstanding and venerable history. We take up this debate in relation to the free energy principle (FEP) - a contemporary framework in computational neuroscience, theoretical biology and the philosophy of cognitive science. The FEP is very ambitious, extending from the brain sciences to the biology of self-organisation. In this context, some find apparent discrepancies between the map (the FEP) and the territory (target systems) a compelling reason to defend instrumentalism about the FEP. We take this to be misguided. We identify an important fallacy made by those defending instrumentalism about the FEP. We call it the literalist fallacy: this is the fallacy of inferring the truth of instrumentalism based on the claim that the properties of FEP models do not literally map onto real-world, target systems. We conclude that scientific realism about the FEP is a live and tenable option.
In this paper I will try to trace a discussion about the status of art in some recent theories, which pay special attention to the fact that artworks are the kind of things to which representational, expressive, and aesthetic properties are ascribed. First, I will briefly mention some already established criticisms—developed by Richard Wollheim—against the idea that artworks cannot be identified with physical objects. These criticisms have the further aim of providing an account of art experience that includes our perceiving representational and expressive properties as well as aesthetic ones in artworks. In Wollheim's view, a misconstruction of the nature of these properties and of our perception of them in artworks has been the main reason for the view he intensely criticizes. Thus, he offers a different understanding of these properties, which also involves a new account of perception that does not require an idealisation of our experience of art.
Corporate disclosure and reporting of information has become synonymous with transparency, which, in discourses idealising its value, is part of the rhetoric of good governance. This notion is overtly conveyed in principles and codes of corporate governance practice, which have proliferated globally over the last three decades. The possibility for transparency to conceal more than it reveals is considered with regard to corporate communication of information, with the consequence that power and real knowledge of the corporate behavioural agenda remain in corporate hands. Philosophically, the paradoxical and unintended outcome is that corporations are constrained by the norm of transparency in developing authentic moral behaviour, while also exercising power and control, since in the information society transparency does not, and indeed cannot, lead to revealing all. Transparency places greater emphasis on judging, or evaluating, existing organisational processes than on the assessment of (self-)learning processes based on what actually takes place when corporations make decisions.
"L'énigme majeure est la cause efficace" insiste Jean Largeault (1985). En effet, la physique contemporaine géométrise les phénomènes de la nature en transformant les causes efficientes en causalité formelle. Cependant elle nous invite aussi à accorder le structurel etle physique, le formel et le dynamique. Il convient donc de s'interroger sur le fondement du pouvoir créateur des mathématiques ou de leur caractère générateur en physique. Selon Charles de Koninck, la fécondité des mathématiques parait comme une intériorisation de la figure philosophique (...) de l'ange dans le cadre du projet d'une métamathématique comme angélologie thomiste. La raison poursuit un moyen de connaitre qui serait universel in repraesentando, par opposition à l'universel in praedicando. Cette tendance est essentielle à ce que l'on appelle le "mode platonicien". L'une des idées centrales de cet article consiste à établir une connexion profonde entre la notion de causalité en physique et cette manière platonicienne de connaitre. Plus précisément, l'idéalisation de la causalité efficace en physique mathématique est-il autre chose que ce mode dialectique de connaître abusivement porté à sa limite ? (shrink)
In a 1995 interview, contemporary American composer John Zorn stated: ‘I got involved in music because of film […] There’s a lot of film elements in my music’ (Duckworth, 1995, p. 451). Scholars and critics have since widely noted these cinematic elements, with emphasis being placed on Zorn’s genre of so-called ‘file card compositions’. Whilst these studies have primarily concentrated on how the arrangement of sound blocks – the disjointed segments of Zorn’s compositions – can be compared to cinematic montage, this article instead focusses on how sound blocks suggest the visual aspects of cinema. To delve deeper into the visuo-cinematic qualities of Zorn’s file card compositions, an idealised cinematic listener will be constructed, aided by various psychological, semiotic, philosophical and film theories. I will suggest that a listener occupying a hypnogogic state projects moving images, akin to those of cinema, onto what Bernard Lewin first called a ‘dream screen’. These projections occur due to intertextual associations made between file card compositions and the artistic figures to whom they are dedicated. These images combine with the sounds that brought them into being to form an audio-visual diegesis, which can be considered a type of half-imagined film. The cinematic listener then actively draws out of this diegesis a narrative, via the semi-conscious process Boris Eikhenbaum called ‘inner speech’. I will conclude by giving some broader justification for the methodology that brought this cinematic listener into being and suggest how the cinematic listener may be further utilised to provide musical analyses for file card works.