I argue that anarchist ideas for organising human communities could be a useful practical resource for Christian ethics. I demonstrate this firstly by introducing the main theological ideas underlying the ethics of Maximus the Confessor, a theologian respected and influential across a number of Christian denominations. I compare practical similarities in the way in which ‘love’ and ‘well-being’ are interpreted as the telos of Maximus’s and Peter Kropotkin’s ethics respectively. I further highlight these similarities by demonstrating them in action when it comes to attitudes towards property. I consequently suggest that there are enough similarities in practical aims for Kropotkin’s ideas for human organising to be useful to Christian ethicists.
This paper explores the cosmology of St Maximus the Confessor and its relevance for contemporary ethics. It takes as its starting point two papers on Maximus’ cosmology and environmental ethics (Bordeianu, 2009; Munteanu, 2010) and from there argues that we cannot consider environmental ethics in isolation from other ethical issues. This, as both Ware and Keselopoulos have also pointed out, is because the environmental crisis is actually a crisis in the human heart and in human attitudes toward everything about us. The paper goes through some key areas in Maximus’ cosmology according to his own formula of creation – movement – rest and considers at each stage the implications of this theology for the way the human should be living and treating other beings. The main sources for this exploration are Ambiguum 7, Ambiguum 41, and The Mystagogia, with special focus on the doctrine of the logoi and the divisions of nature. The paper concludes that Bordeianu and Munteanu are right to consider Maximus’ theology to be of ecological relevance, but that this relevance comes from the radical ethical statement being made about human activity. Maximus’ theology points the human toward becoming in the likeness of Christ, who unites heaven and earth through love. The love of Christ, when considered in an ethical context, stands as a formidable challenge to current attitudes and institutions that advocate the exploitation and destruction of human or non-human creation.
2021 marks Dylan's 80th birthday and his 60th year in the music world. It invites us to look back on his career and the multitudes that it contains. Is he a song and dance man? A political hero? A protest singer? A self-portrait artist who has yet to paint his masterpiece? Is he Shakespeare in the alley? The greatest living exponent of American music? An ironsmith? Internet radio DJ? Poet (who knows it)? Is he a spiritual and religious parking meter? Judas? The voice of a generation or a false prophet, jokerman, and thief? Dylan is all these and none. The essays in this book explore the Nobel laureate’s masks, collectively reflecting upon their meaning through time, change, movement, and age. They are written by a wonderful and diverse set of contributors, all here for his 80th birthday bash: celebrated Dylanologists like Michael Gray and Laura Tenschert; recording artists such as Robyn Hitchcock, Barb Jungr, Amy Rigby, and Emma Swift; and ‘the professors’ who all like his looks: David Boucher, Anne Margaret Daniel, Ray Monk, Galen Strawson, and more. Read it on your toaster!
Whilst much has been said about the implications of predictive processing for our scientific understanding of cognition, there has been comparatively little discussion of how this new paradigm fits with our everyday understanding of the mind, i.e. folk psychology. This paper aims to assess the relationship between folk psychology and predictive processing, which will first require making a distinction between two ways of understanding folk psychology: as propositional attitude psychology and as a broader folk psychological discourse. It will be argued that folk psychology in this broader sense is compatible with predictive processing, despite the fact that there is an apparent incompatibility between predictive processing and a literalist interpretation of propositional attitude psychology. The distinction between these two kinds of folk psychology allows us to accept that our scientific usage of folk concepts requires revision, whilst rejecting the suggestion that we should eliminate folk psychology entirely.
While we agree in broad strokes with the characterisation of rationalization as a “useful fiction,” we think that Fiery Cushman's claim remains ambiguous in two crucial respects: the reality of beliefs and desires (that is, the fictional status of folk-psychological entities), and the degree to which they should be understood as useful. Our aim is to clarify both points and explicate the rationale of rationalization.
In this paper we will demonstrate that a computational system can meet the criteria for autonomy laid down by classical enactivism. The two criteria that we will focus on are operational closure and structural determinism, and we will show that both can be applied to a basic example of a physically instantiated Turing machine. We will also address the question of precariousness, and briefly suggest that a precarious Turing machine could be designed. Our aim in this paper is to challenge the assumption that computational systems are necessarily heteronomous systems, to try to motivate in enactivism a more nuanced and less rigid conception of computational systems, and to demonstrate to computational theorists that they might find some interesting material within the enactivist tradition, despite its historical hostility towards computationalism.
Scientists, philosophers, and policymakers disagree about how to define microaggression. Here, we offer a taxonomy of existing definitions, clustering around (a) the psychological motives of perpetrators, (b) the experience of victims, and (c) the functional role of microaggression in oppressive social structures. We consider conceptual and epistemic challenges to each and suggest that progress may come from developing novel hybrid accounts of microaggression, combining empirically tractable features with sensitivity to the testimony of victims.
Recently, some authors have begun to raise questions about the potential unity of 4E (enactive, embedded, embodied, extended) cognition as a distinct research programme within cognitive science. Two tensions, in particular, have been raised: (i) that the body-centric claims of embodied cognition militate against the distributed tendencies of extended cognition and (ii) that the body/environment distinction emphasized by enactivism stands in tension with the world-spanning claims of extended cognition. The goal of this paper is to resolve tensions (i) and (ii). The proposal is that a form of ‘wide computationalism’ can be used to reconcile the two tensions and, in so doing, articulate a common theoretical core for 4E cognition.
Microaggressions are seemingly negligible slights that can cause significant damage to frequently targeted members of marginalized groups. Recently, Scott O. Lilienfeld challenged a key platform of the microaggression research project: what’s aggressive about microaggressions? To answer this challenge, Derald Wing Sue, the psychologist who has spearheaded the research on microaggressions, needs to theorize a spectrum of aggression that ranges from intentional assault to unintentional microaggressions. I suggest turning to Bonnie Mann’s “Creepers, Flirts, Heroes and Allies” for inspiration. Building from Mann’s richer theoretical framework will allow Sue to answer Lilienfeld’s objection and defend the legitimacy of the concept ‘microaggression’.
In this article, we aim to map out the complexities which characterise debates about the ethics of vaccine distribution, particularly those surrounding the distribution of the COVID-19 vaccine. In doing so, we distinguish three general principles which might be used to distribute goods and two ambiguities in how one might wish to spell them out. We then argue that we can understand actual debates around the COVID-19 vaccine – including those over prioritising vaccinating the most vulnerable – as reflecting disagreements over these principles. Finally, we shift our attention away from traditional discussions of distributive justice, highlighting the importance of concerns about risk imposition, special duties, and social roles in explaining debates over the COVID-19 vaccine. We conclude that the normative complexity this article highlights deepens the need for decision-making bodies to be sensitive to public input.
Decision theory and folk psychology both purport to represent the same phenomena: our belief-like and desire- and preference-like states. They also purport to do the same work with these representations: explain and predict our actions. But they do so with different sets of concepts. There's much at stake in whether one of these two sets of concepts can be accounted for with the other. Without such an account, we'd have two competing representations and systems of prediction and explanation, a dubious dualism. Folk psychology structures our daily lives and has proven fruitful in the study of mind and ethics, while decision theory is pervasive in various disciplines, including the quantitative social sciences, especially economics, and philosophy. My interest is in accounting for folk psychology with decision theory -- in particular, for believing and wanting, which decision theory omits. Many have attempted this task for belief. (The Lockean Thesis says that there is such an account.) I take up the parallel task for wanting, which has received far less attention. I propose necessary and sufficient conditions, stated in terms of decision theory, for when you're truly said to want; I give an analogue of the Lockean Thesis for wanting. My account is an alternative to orthodox accounts that link wanting to preference (e.g. Stalnaker (1984), Lewis (1986)), which I argue are false. I argue further that want ascriptions are context-sensitive. My account explains this context-sensitivity, makes sense of conflicting desires, and accommodates phenomena that motivate traditional theses on which 'want' has multiple senses (e.g. all-things-considered vs. pro tanto).
Literature in epistemology tends to suppose that there are three main types of understanding – propositional, atomistic, and objectual. By showing that all apparent instances of propositional understanding can be more plausibly explained as featuring one of several other epistemic states, this paper argues that talk of propositional understanding is unhelpful and misleading. The upshot is that epistemologists can do without the notion of propositional understanding.
I want to see the concert, but I don’t want to take the long drive. Both of these desire ascriptions are true, even though I believe I’ll see the concert if and only if I take the drive. Yet they, and strongly conflicting desire ascriptions more generally, are predicted incompatible by the standard semantics, given two standard constraints. There are two proposed solutions. I argue that both face problems because they misunderstand how what we believe influences what we desire. I then sketch my own solution: a coarse-worlds semantics that captures the extent to which belief influences desire. My semantics models what I call some-things-considered desire. Considering what the concert would be like, but ignoring the drive, I want to see the concert; considering what the drive would be like, but ignoring the concert, I don’t want to take the drive.
Traditionally, computational theory (CT) and dynamical systems theory (DST) have presented themselves as opposed and incompatible paradigms in cognitive science. There have been some efforts to reconcile these paradigms, mainly, by assimilating DST to CT at the expense of its anti-representationalist commitments. In this paper, building on Piccinini’s mechanistic account of computation and the notion of functional closure, we explore an alternative conciliatory strategy. We try to assimilate CT to DST by dropping its representationalist commitments, and by inviting CT to recognize the functionally closed nature of some computational systems.
Campbell Brown has recently argued that G.E. Moore's intrinsic value holism is superior to Jonathan Dancy's. I show that the advantage which Brown claims for Moore's view over Dancy's is illusory, and that Dancy's view may be superior.
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
Epistemic paternalism is the thesis that a paternalistic interference with an individual's inquiry is justified when it is likely to bring about an epistemic improvement in her. In this article I claim that in order to motivate epistemic paternalism we must first account for the value of epistemic improvements. I propose that the epistemic paternalist has two options: either epistemic improvements are valuable because they contribute to wellbeing, or they are epistemically valuable. I will argue that these options constitute the foundations of a dilemma: either epistemic paternalism collapses into general paternalism, or a distinctive project of justified epistemic paternalism is implausible.
At first glance, hate speech and microaggressions seem to have little overlap beyond being communicated verbally or in written form. Hate speech seems clearly macro-aggressive: an intentional, obviously harmful act lacking the ambiguity (and plausible deniability) of microaggressions. If we look back at historical discussions of hate speech, however, many of these assumed differences turn out to be points of similarity. The harmfulness of hate speech only became widely acknowledged after a concerted effort by critical race theorists, feminists, and other activists. Before the 1990s, slurs were widely considered socially acceptable behavior: mere jokes that weren’t intended to be harmful. Authors like Richard Delgado, Mari Matsuda, and Charles Lawrence pushed back against this dismissal. In this chapter, I show that their arguments for the serious harmfulness of hate speech prefigure and provide support for current debates about the serious harmfulness of microaggressions. Exploring resonances with the 1980s hate speech debate will allow us to explain why microaggressions fall below the cutoff for legal liability but remain apt targets for moral blame.
Many theorists have focused on Wittgenstein’s use of examples, but I argue that examples form only half of his method. Rather than continuing the disjointed style of his Cambridge lectures, Wittgenstein returns to the techniques he employed while teaching elementary school. Philosophical Investigations trains the reader as a math class trains a student—‘by means of examples and by exercises’ (§208). Its numbered passages, carefully arranged, provide a series of demonstrations and practice problems. I guide the reader through one such series, demonstrating how the exercises build upon one another and give us ample opportunity to hone our problem-solving skills. Through careful practice, we learn to pass the test Wittgenstein poses when he claims that something is ‘easy to imagine’ (§19). Whereas other critics have viewed the Investigations as merely a diagnosis of our philosophical delusions, I claim that Wittgenstein also writes a prescription for our disease: Do your exercises.
Both patients and clinicians frequently report problems around communicating and assessing pain. Patients express dissatisfaction with their doctors and doctors often find exchanges with chronic pain patients difficult and frustrating. This chapter thus asks how we could improve pain communication and thereby enhance outcomes for chronic pain patients. We argue that improving matters will require a better appreciation of the complex meaning of pain terms and of the variability and flexibility in how individuals think about pain. We start by examining the various accounts of the meaning of pain terms that have been suggested within philosophy and suggest that, while each of the accounts captures something important about our use of pain terms, none is completely satisfactory. We propose that pain terms should be viewed as communicating complex meanings, which may change across different communicative contexts, and this in turn suggests that we should view our ordinary thought about pain as similarly complex. We then sketch what a view taking seriously this variability in meaning and thought might look like, which we call the “polyeidic” view. According to this view, individuals tacitly occupy divergent stances across a range of different dimensions of pain, with one agent, for instance, thinking of pain in a much more “bodycentric” kind of way, while another thinks of pain in a much more “mindcentric” way. The polyeidic view attempts to expand the multidimensionality recognised in, e.g., biopsychosocial models in two directions: first, it holds that the standard triumvirate—dividing sensory/cognitive/affective factors—needs to be enriched in order to capture important distinctions within the social and psychological dimensions. Second, the polyeidic view attempts to explain why modulation of experience by these social and psychological factors is possible in the first place. It does so by arguing that because the folk concept of pain is complex, different weightings of the different parts of the concept can modulate pain experience in a variety of ways. Finally, we argue that adopting a polyeidic approach to the meaning of pain would have a range of measurable clinical outcomes.
Advocates of longtermism point out that interventions which focus on improving the prospects of people in the very far future will, in expectation, bring about a significant amount of good. Indeed, in expectation, such long-term interventions bring about far more good than their short-term counterparts. As such, longtermists claim we have compelling moral reason to prefer long-term interventions. In this paper, I show that longtermism is in conflict with plausible deontic scepticism about aggregation. I do so by demonstrating that, from both an ex-ante and ex-post perspective, longtermist interventions – and, in particular, those which aim to mitigate catastrophic risk – typically generate extremely weak claims of assistance from future people.
The question of where the knowledge we gain from thought experiments comes from is one of the most fundamental issues in debates over the epistemological status of thought experiments. In this regard, Pierre Duhem takes a skeptical attitude, stating that thought experiments cannot be evaluated on the same footing as real experiments, nor even accepted as an alternative to them. James R. Brown, on the other hand, states that thought experiments which are not based on new experimental evidence and are not logically derived from old data – so-called Platonic thought experiments – provide intuitive access to a priori knowledge. Unlike Brown, John D. Norton strictly criticizes the idea that thought experiments provide mysterious access to knowledge of the physical world, and states that thought experiments cannot provide knowledge that transcends empiricism. In the context of the Norton-Brown debate, this article supports Brown's stance on thought experiments by critically analyzing the views put forward on the subject.
Lockdown measures in response to the COVID-19 pandemic involve placing huge burdens on some members of society for the sake of benefiting other members of society. How should we decide when these policies are permissible? Many writers propose we should address this question using cost-benefit analysis, a broadly consequentialist approach. We argue for an alternative non-consequentialist approach, grounded in contractualist moral theorising. The first section sets up key issues in the ethics of lockdown, and sketches the apparent appeal of addressing these problems in a CBA frame. The second section argues that CBA fundamentally distorts the normative landscape in two ways: first, in principle, it allows that very many morally trivial preferences—say, for a coffee—might outweigh morally weighty life-and-death concerns; second, it is insensitive to the core moral distinction between victims and vectors of disease. The third section sketches our non-consequentialist alternative, grounded in Thomas Scanlon’s contractualist moral theory. On this account, the ethics of self-defence implies a strong default presumption in favour of a highly restrictive, universal lockdown policy: we then ask whether there are alternatives to such a policy which are justifiable to all affected parties, paying particular attention to the complaints of those most burdened by policy. In the fourth section, we defend our contractualist approach against the charge that it is impractical or counterintuitive, noting that actual CBAs face similar, or worse, challenges.
In this article, we redefine classical notions of theory reduction in such a way that model-theoretic preferential semantics becomes part of a realist depiction of this aspect of science. We offer a model-theoretic reconstruction of science in which theory succession or reduction is often better - or at a finer level of analysis - interpreted as the result of model succession or reduction. This analysis leads to 'defeasible reduction', defined as follows: The conjunction of the assumptions of a reducing theory T with the definitions translating the vocabulary of a reduced theory T' to the vocabulary of T, defeasibly entails the assumptions of reduced T'. This relation of defeasible reduction offers, in the context of additional knowledge becoming available, articulation of a more flexible kind of reduction in theory development than in the classical case. Also, defeasible reduction is shown to solve the problems of entailment that classical homogeneous reduction encounters. Reduction in the defeasible sense is a practical device for studying the processes of science, since it is about highlighting different aspects of the same theory at different times of application, rather than about naive dreams concerning a metaphysical unity of science.
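A minimal schematic rendering of the defeasible reduction relation described in the abstract above, using the symbol $\mid\!\sim$ for defeasible entailment and $\mathrm{Def}(T' \to T)$ for the translating definitions (both notations are assumed here for illustration, not necessarily the authors' own):

\[ T \;\wedge\; \mathrm{Def}(T' \to T) \;\mid\!\sim\; T' \]

where $T$ collects the assumptions of the reducing theory, $\mathrm{Def}(T' \to T)$ the definitions translating the vocabulary of the reduced theory $T'$ into the vocabulary of $T$, and $T'$ the assumptions of the reduced theory; unlike classical entailment, the relation $\mid\!\sim$ may be retracted when additional knowledge becomes available.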
I investigate a new understanding of realism in science, referred to as ‘interactive realism’, and I suggest the ‘evolutionary progressiveness’ of a theory as a novel criterion for this kind of realism. My basic claim is that we cannot be realists about anything except the progress effected by myriad science-reality interactions that are constantly moving on a continuum of increased ‘fitness’ determined according to empirical constraints. Moreover, to reflect this movement accurately, there is a corresponding continuum of verdicts about the status of the knowledge conveyed by theories – ranging from stark instrumentalism to full-blown realism. This view may sound like a pessimistic inductivist's dream, but actually this is so only if one evaluates it from within a traditional context where the ‘truth’ of a single theory is the exclusive criterion for realism. I, on the other hand, want to redefine the terms of realist debate in such a way that the units of assessment of realism are sequences of theories evaluated according to their ‘evolutionary progressiveness’. I unpack ‘interactive realism’ by defining my notion of ‘evolutionary progressiveness’, the notion of ‘truth-as-historied reference’ underpinning it, and the continuum of interaction between theories and aspects of reality it affects. I conclude that, although interactive realism is a radically non-standard kind of realism, it is at least more realistic about science as a fractured complex multi-faceted enterprise than most other kinds of realism on the block, because it shows coherence amidst fragmentation.
Recent discussions of cognitive enhancement often note that drugs and technologies that improve cognitive performance may do so at the risk of “cheapening” our resulting cognitive achievements (Arguing about bioethics, Routledge, London, 2012; Harris in Bioethics 25:102–111, 2011). While there are several possible responses to this worry, we will highlight what we take to be one of the most promising—one which draws on a recent strand of thinking in social and virtue epistemology to construct an integrationist defence of cognitive enhancement. According to such a line, there is—despite initial appearances to the contrary—no genuine tension between using enhancements to attain our goals and achieving these goals in a valuable way, provided the relevant enhancement is appropriately integrated into the agent’s cognitive architecture. In this paper, however, we show that the kind of integration recommended by such views will likely come at a high cost. More specifically, we highlight a dilemma for users of pharmacological cognitive enhancement: they can meet the conditions for cognitive integration at the significant risk of dangerous dependency, or remain free of such dependency while foregoing integration and the valuable achievements that such integration enables. After motivating and clarifying the import of this dilemma, we offer recommendations for how future cognitive enhancement research may offer potential routes for navigating past it.
‘Sentience’ sometimes refers to the capacity for any type of subjective experience, and sometimes to the capacity to have subjective experiences with a positive or negative valence, such as pain or pleasure. We review recent controversies regarding sentience in fish and invertebrates and consider the deep methodological challenge posed by these cases. We then present two ways of responding to the challenge. In a policy-making context, precautionary thinking can help us treat animals appropriately despite continuing uncertainty about their sentience. In a scientific context, we can draw inspiration from the science of human consciousness to disentangle conscious and unconscious perception (especially vision) in animals. Developing better ways to disentangle conscious and unconscious affect is a key priority for future research.
This chapter draws an analogy between computing mechanisms and autopoietic systems, focusing on the non-representational status of both kinds of system (computational and autopoietic). It will be argued that the role played by input and output components in a computing mechanism closely resembles the relationship between an autopoietic system and its environment, and in this sense differs from the classical understanding of inputs and outputs. The analogy helps to make sense of why we should think of computing mechanisms as non-representational, and might also facilitate reconciliation between computational and autopoietic/enactive approaches to the study of cognition.
Recent work in the philosophy of mind and cognitive science (e.g., Clark and Chalmers 1998; Clark 2010a; Clark 2010b; Palermos 2014) can help to explain why certain kinds of assertions—made on the basis of information stored in our gadgets rather than in biological memory—are properly criticisable in light of misleading implicatures, while others are not.
‘If you want to go to Harlem, you have to take the A train’ doesn’t look special. Yet a compositional account of its meaning, and the meaning of anankastic conditionals more generally, has proven an enigma. Semanticists have responded by assigning anankastics a unique status, distinguishing them from ordinary indicative conditionals. Condoravdi & Lauer (2016) maintain instead that “anankastic conditionals are just conditionals.” I argue that Condoravdi and Lauer don’t give a general solution to a well-known problem: the problem of conflicting goals. They rely on a special, “effective preference” interpretation for want on which an agent cannot want two things that conflict with her beliefs. A general solution, though, requires that the goals cannot conflict with the facts. Condoravdi and Lauer’s view fails. Yet they show, I believe, that previous accounts fail too. Anankastic conditionals are still a mystery.
This is a first tentative examination of the possibility of reinstating reduction as a valid candidate for presenting relations between mental and physical properties. Classical Nagelian reduction is undoubtedly contaminated in many ways, but here I investigate the possibility of adapting to problems concerning mental properties an alternative definition for theory reduction in philosophy of science. The definition I offer is formulated with the aid of non-monotonic logic, which I suspect might be a very interesting realm for testing notions concerning localized mental-physical reduction. The reason for this is that non-monotonic reasoning by definition is about appeals made not only to explicit observations, but also to an implicit selection of background knowledge containing heuristic information. The flexibility of this definition and the fact that it is not absolute, i.e. that the relation of reduction may be retracted or allowed to shift without fuss, add at least an interesting alternative factor to current materialist debates. South African Journal of Philosophy Vol. 25(2) 2006: 102-112.
Although expert consensus states that critical thinking (CT) is essential to enquiry, it doesn’t necessarily follow that by practising enquiry children are developing CT skills. Philosophy with children programmes around the world aim to develop CT dispositions and skills through a community of enquiry, and this study compared the impact of the explicit teaching of CT skills during an enquiry with The Philosophy Foundation's philosophical enquiry (PhiE) method alone (which had no explicit teaching of CT skills). Philosophy with children is also said to improve metacognitive (MC) skills, but there is little research into this claim. Following observable problems with ensuring genuine metacognition was happening in PhiE sessions – on a reasonably strong understanding of what metacognition is – a method has been developed and trialled in this study to bring together, in mutual support, the development of critical thinking and metacognitive skills. Based on the work of Peter Worley and Ellen Fridland (KCL), The Philosophy Foundation ran an experimental study with King's College London in Autumn 2017 and Autumn 2018 to compare the impact of teaching CT skills and MC skills against classes that just have philosophical enquiry. The approach developed and used for the study employs the explicit teaching of some CT and MC skills within the context of a philosophical enquiry (as opposed to stand-alone teaching of these skills) and yields some positive findings, both qualitative and quantitative. Both studies took place over one term (12 weeks) and a control and intervention group were used in each study. This report focuses on the second year of the study, with 220 ten and eleven-year-old children involved in eight classes across three state schools in South East London. Although there were limitations to the study, the results indicate that the explicit teaching of these skills during a philosophical enquiry can help children use CT and MC skills more successfully than philosophical enquiry alone.
In this article I give an overview of some recent work in philosophy of science dedicated to analysing the scientific process in terms of (conceptual) mathematical models of theories and the various semantic relations between such models, scientific theories, and aspects of reality. In current philosophy of science, the most interesting questions centre around the ways in which writers distinguish between theories and the mathematical structures that interpret them and in which they are true, i.e. between scientific theories as linguistic systems and their non-linguistic models. In the philosophy of science literature there are two main approaches to the structure of scientific theories: the statement or syntactic approach -- advocated by Carnap, Hempel and Nagel -- and the non-statement or semantic approach -- advocated, among others, by Suppes, the structuralists, Beth, Van Fraassen, Giere, and Wojcicki. In conclusion, I briefly review some of the usual realist-inspired questions about the possibility and character of relations between scientific theories and reality as implied by the various approaches I discuss in the course of the article. The models of a scientific theory should indeed be adequate to the phenomena, but if the theory is 'adequate' to (true in) its conceptual (mathematical) models as well, we have a model-theoretic realism that addresses the possible meaning and reference of 'theoretical entities' without relapsing into the metaphysics typical of the usual scientific realist approaches.
In this paper, I focus on the representations of Black women in contrast to Black men found within Frantz Fanon’s philosophical work Black Skin, White Masks. I propose that while Fanon’s racial dialectical work is very significant, he often lacks acknowledgement of the multidimensionality of the Black woman’s lived experience specifically. Drawing on the theory of intersectionality, coined by Kimberlé Crenshaw, I argue that Fanon does not recognize the different layers of oppression operating in Black women’s lives to the degree that he fails to include them within his framework of both liberation and resistance from racial oppression.
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
A model-theoretic realist account of science places linguistic systems and their corresponding non-linguistic structures at different stages or different levels of abstraction of the scientific process. Apart from the obvious problem of underdetermination of theories by data, philosophers of science are also faced with the inverse (and very real) problem of overdetermination of theories by their empirical models, which is what this article will focus on. I acknowledge the contingency of the factors determining the nature – and choice – of a certain model at a certain time, but in my terms, this is a matter about which we can talk and whose structure we can formalise. In this article a mechanism for tracing "empirical choices" and their particularized observational-theoretical entanglements will be offered in the form of Yoav Shoham's version of non-monotonic logic. Such an analysis of the structure of scientific theories may clarify the motivations underlying choices in favor of certain empirical models (and not others) in a way that shows that "disentangling" theoretical and observation terms is more deeply model-specific than theory-specific. This kind of analysis offers a method for getting an articulable grip on the overdetermination of theories by their models – implied by empirical equivalence – which Kuipers' structuralist analysis of the structure of theories does not offer.
In this inaugural lecture I offer, against the background of a discussion of knowledge representation and its tools, an overview of my research in the philosophy of science. I defend a relational model-theoretic realism as being the appropriate meta-stance most congruent with the model-theoretic view of science as a form of human engagement with the world. Making use of logics with preferential semantics within a model-theoretic paradigm, I give an account of science as process and product. I demonstrate the power of the full-blown employment of this paradigm in the philosophy of science by discussing the main applications of model-theoretic realism to traditional problems in the philosophy of science. I discuss my views of the nature of logic and of its role in the philosophy of science today. I also specifically offer a brief discussion on the future of cognitive philosophy in South Africa. My conclusion is a general look at the nature of philosophical inquiry and its significance for philosophers today. South African Journal of Philosophy Vol. 25(4) 2006: 275-289.
If the defenders of typical postmodern accounts of science (and their less extreme social-constructivist partners) are at one end of the scale in current philosophy of science, who shall we place at the other end? Old-style metaphysical realists? Neo-neo-positivists? ... Are the choices concerning realist issues as simple as being centered around either, on the one hand, whether it is the way reality is “constructed” in accordance with some contingent language game that determines scientific “truth”; or, on the other hand, whether it is the way things are in an independent reality that makes our theories true or false? If, in terms of realism, “strong” implies “metaphysical” in the traditional sense, and “weak” implies “non-absolutist” or “non-unique”, what – if anything – could realism after Rorty's shattering of the mirror of nature still entail? In accordance with my position as a model-theoretic realist, I shall show in this article the relevance of the assumption of an independent reality for postmodern (philosophy of) science – against Lyotard's dismissal of the necessity of this assumption for science which he interprets as a non-privileged game among many others. I shall imply that science is neither the “child” of positivist philosophy who has outgrown her mother, freeing herself from metaphysics and epistemology, nor is science, at the other end of the scale, foundationless and up for grabs. S. Afr. J. Philos. Vol. 22(3) 2003: 220–235.
One way in which to address the intriguing relations between science and reality is to work via the models (mathematical structures) of formal scientific theories which are interpretations under which these theories turn out to be true. The so-called 'statement approach' to scientific theories -- characteristic for instance of Nagel, Carnap, and Hempel -- depicts theories in terms of 'symbolic languages' and some set of 'correspondence rules' or 'definition principles'. The defenders of the oppositionist non-statement approach advocate an analysis where the language in which the theory is formulated plays a much smaller role. They hold that foundational problems in the various sciences can in general be better addressed by focusing on the models these sciences employ than by reformulating the products of these sciences in some appropriate language. My model-theoretic realist account of science lies decidedly within the non-statement context, although I retain the notion of a theory as a deductively closed set of sentences (expressed in some appropriate language). In this paper I shall focus -- against the background of a model-theoretic account of science -- on the approach to the reality-science dichotomy offered by Nancy Cartwright and briefly comment on a few aspects of Roy Bhaskar's transcendental realism. I shall, in conclusion, show how a model-theoretic approach such as mine can combine the best of these two approaches.
I am arguing that it is only by concentrating on the role of models in theory construction, interpretation and change, that one can study the progress of science sensibly. I define the level at which these models operate as a level above the purely empirical (consisting of various systems in reality) but also indeed below that of the fundamental formal theories (expressed linguistically). The essential multi-interpretability of the theory at the general, abstract linguistic level implies that it can potentially make claims about systems in reality other than the particular one which originally induced it. Any so-called correspondence relation between (systems in) reality and the entities and relations in some scientific theory thus consists of two jumps or interpretations: from the theory (linguistic level) to some model of it (constructural level); and from there to some system in reality. Clearly then, the level of fundamental theories cannot be ignored – à la Nancy Cartwright – in studying the relations between a theory and reality, because the particular features of the theory (the various systems in reality onto which the theory can be mapped) cannot be studied without the underlying knowledge that these systems have one common feature, namely that each of them is the range (or other pole) of a mapping of a context-specific model of the theory – which, in itself, is a mapping, or more specifically, an interpretation of the theory. I am also claiming that the nature of these levels and the relations between them necessitate an epistemological rather than an ontological notion of truth criteria, and a referential rather than a representational link between science and reality.
Recent thinking within philosophy of mind about the ways cognition can extend has yet to be integrated with philosophical theories of emotion, which give cognition a central role. We carve out new ground at the intersection of these areas and, in doing so, defend what we call the extended emotion thesis: the claim that some emotions can extend beyond skin and skull to parts of the external world.
While openmindedness is often cited as a paradigmatic example of an intellectual virtue, the connection between openmindedness and truth is tenuous. Several strategies for reconciling this tension are considered, and each is shown to fail; it is thus claimed that openmindedness, when intellectually virtuous, bears no interesting essential connection to truth. In the final section, the implication of this result is assessed in the wider context of debates about epistemic value.
The Brown vs. Board of Education decision of 1954 mandated school integration. The decision also failed to recognize that inequalities outside the schools, of both a class- and race-based nature, prevent equality in education. Today, the most prominent argument for integration is that disadvantaged students benefit from the financial, social, and cultural “capital” of middle class families when the children attend the same schools. This argument fails to recognize that disadvantaged students contribute to advantaged students’ educational growth, and sends demeaning messages to the disadvantaged students and messages of unwarranted superiority to the advantaged. Parents, teachers, and schools can adopt a justice perspective that avoids these deleterious aspects of the capital argument, and helps create a community of equals inside the integrated school. Struggles for educational justice must remain closely linked with struggles of both a class- and race-based nature for other forms of justice in the wider society.
What are economic exchanges? The received view has it that exchanges are mutual transfers of goods motivated by inverse valuations thereof. As a corollary, the standard approach treats exchanges of services as a subspecies of exchanges of goods. We raise two objections against this standard approach. First, it is incomplete, as it fails to take into account, among other things, the offers and acceptances that lie at the core of even the simplest cases of exchanges. Second, it ultimately fails to generalize to exchanges of services, in which neither inverse preferences nor mutual transfers hold true. We propose an alternative definition of exchanges, which treats exchanges of goods as a special case of exchanges of services and which builds in offers and acceptances. According to this theory: (i) The valuations motivating exchanges are propositional and convergent rather than objectual and inverse; (ii) All exchanges of goods involve exchanges of services/actions, but not the reverse; (iii) Offers and acceptances, together with the contractual obligations and claims they bring about, lie at the heart of all cases of exchange.
Animal welfare has a long history of disregard. While in recent decades the study of animal welfare has become a scientific discipline of its own, the difficulty of measuring animal welfare can still be vastly underestimated. There are three primary theories, or perspectives, on animal welfare - biological functioning, natural living and affective state. These come with their own diverse methods of measurement, each providing a limited perspective on an aspect of welfare. This paper describes a perspectival pluralist account of animal welfare, in which all three theoretical perspectives and their multiple measures are necessary to understand this complex phenomenon and provide a full picture of animal welfare. This in turn will offer us a better understanding of perspectivism and pluralism itself.
What is it like to be a bat? What is it like to be sick? These two questions are much closer to one another than has hitherto been acknowledged. Indeed, both raise a number of related, albeit very complex, philosophical problems. In recent years, the phenomenology of health and disease has become a major topic in bioethics and the philosophy of medicine, owing much to the work of Havi Carel (2007, 2011, 2018). Surprisingly little attention, however, has been given to the phenomenology of animal health and suffering. This omission shall be remedied here, laying the groundwork for the phenomenological evaluation of animal health and suffering.
Abstract: Entrepreneurship education is imperative in Nigeria, especially now that the unemployment rate has reached alarming proportions. Entrepreneurship education is not a straight-jacketed process in any nation. It is a function of many factors, which include good leadership, the creation of a business-friendly environment, and the drawing up of an academic curriculum that will inculcate skill acquisition. Against this backdrop, this study focuses on entrepreneurship education alongside trade subjects. The place of entrepreneurship training, government/institutional support, societal value re-orientation, and entrepreneurial career selection is also interrogated. The study relied heavily on the content analysis technique. The position of the paper is, inter alia: provision of financial assistance for self-employment through a properly articulated micro-credit scheme that would enable enterprising youths to obtain soft loans for the establishment of micro businesses, and the beefing up of electricity generation. Currently, 3,231 MW is generated as against 50,000 MW, which calls for serious rethinking by the Federal Government of Nigeria.
This paper seeks to recover the function of universal history, which was to place particulars into relation with universals. By the 20th century universal history was largely discredited because of an idealism that served to lend epistemic coherence to the overwhelming complexity arising from universal history's comprehensive scope. Idealism also attempted to account for history's being "open"--for the human ability to transcend circumstance. The paper attempts to recover these virtues without the idealism by defining universal history not by its scope but rather as a scientific method that provides an understanding of any kind of historical process, be it physical, biological or human. While this method is not new, it is in need of a development that offers a more robust historiography and warrant as a liberating historical consciousness. The first section constructs an ontology of process by defining matter as ontic probabilities rather than as closed entities. This is lent warrant in the next section through an appeal to contemporary physical science. The resulting conceptual frame and method is applied to the physical domain of existents, to the biological domain of social being and finally to the human domain of species being. It is then used to account for the emergence of human history's initial stage--the Archaic Socio-Economic Formation--and for history's stadial trajectory--its alternation of evolution and revolution.