Extended and distributed cognition theories argue that human cognitive systems sometimes include non-biological objects. On these views, the physical supervenience base of cognitive systems is thus not the biological brain or even the embodied organism, but an organism-plus-artifacts. In this paper, we provide a novel account of the implications of these views for learning, education, and assessment. We start by conceptualising how we learn to assemble extended cognitive systems by internalising cultural norms and practices. Having a better grip on how extended cognitive systems are assembled, we focus on the question: If our cognition extends, how should we educate and assess such extended cognitive systems? We suggest various ways to minimise possible negative effects of extending one's cognition and to efficiently find and organise (online) information by adopting a virtue epistemology approach. Educational and assessment implications are foregrounded, particularly in the case of Danish students' use of the Internet during exams.
Discussions of where the costs of climate change adaptation and mitigation should fall often focus on the 'polluter pays principle' or the 'ability to pay principle'. Simon Caney has recently defended a 'hybrid view', which includes versions of both of these principles. This article argues that Caney's view succeeds in overcoming several shortfalls of both principles, but is nevertheless subject to three important objections: first, it does not distinguish between those emissions which are hard to avoid and those which are easy to avoid; second, its only partial reference to all-things-considered justice means it cannot provide a full account even of climate justice; and third, it assigns to the poor very limited duties to meet climate change costs, even where they have created those costs, which may incentivise them to increase emissions. An alternative pluralistic account which avoids these objections is presented.
Luck egalitarianism is a family of egalitarian theories of distributive justice that aim to counteract the distributive effects of luck. This article explains luck egalitarianism's main ideas and the debates that have accompanied its rise to prominence. There are two main parts to the discussion. The first part sets out three key moves in the influential early statements of Dworkin, Arneson, and Cohen: the brute luck/option luck distinction, the specification of brute luck in everyday or theoretical terms, and the specification of advantage as resources, welfare, or some combination of these. The second part covers three later developments: the democratic egalitarian critique of luck egalitarianism, the luck egalitarian acceptance of pluralism, and luck egalitarian doubts about the significance of the brute luck/option luck distinction.
Several attempts have been made to apply the choice-sensitive theory of distributive justice, luck egalitarianism, in the context of health and healthcare. This article presents a framework for this discussion by highlighting different normative decisions to be made in such an application, some of the objections to which luck egalitarians must provide answers, and some of the practical implications associated with applying such an approach in the real world. It is argued that luck egalitarians should address distributions of health rather than healthcare, endorse an integrationist theory that combines health concerns with general distributive concerns, and be pluralist in their approach. It is further suggested that choice-sensitive policies need not be the result of applying luck egalitarianism in this context.
The aim of this article is to analyse the uses Werner Heisenberg made of Greek philosophy in his work. We seek to relate these uses not only to the internal argumentation of the German physicist's texts, but also to the historical context and to the conflicts and debates among the various interpretations of quantum theory during the first half of the twentieth century. We begin with a general presentation of quantum theory and of the presence of philosophy in Heisenberg's work, and then offer a case study of Heisenberg's appropriation of the thought of Leucippus, Democritus, Heraclitus, Plato, and Aristotle.
According to all-luck egalitarianism, the differential distributive effects of both brute luck, which defines the outcome of risks which are not deliberately taken, and option luck, which defines the outcome of deliberate gambles, are unjust. Exactly how to correct the effects of option luck is, however, a complex issue. This article argues that (a) option luck should be neutralized not just by correcting luck among gamblers, but among the community as a whole, because it would be unfair for gamblers as a group to be disadvantaged relative to non-gamblers by bad option luck; (b) individuals should receive the warranted expected results of their gambles, except insofar as individuals blamelessly lacked the ability to ascertain which expectations were warranted; and (c) where societal resources are insufficient to deliver expected results to gamblers, gamblers should receive a lesser distributive share which is in proportion to the expected results. Where all-luck egalitarianism is understood in this way, it allows risk-takers to impose externalities on non-risk-takers, which seems counterintuitive. This may, however, be an advantage as it provides a luck egalitarian rationale for assisting 'negligent victims'.
Many political philosophers maintain that beneficiaries of injustice are under special obligations to assist victims of injustice. However, the examples favoured by those who endorse this view equally support an alternative luck egalitarian view, which holds that special obligations should be assigned to those with good brute luck. From this perspective the distinguishing features of the benefiting view are (1) its silence on the question of whether to allocate special obligations to assist the brute luck worse off to those who are well off as a matter of brute luck but not as a result of injustice, and (2) its silence on the question of whether to allocate assistance to those who are badly off as a matter of brute luck but not as a result of injustice. In this new light, the benefiting view is harder to justify.
South Africa is a highly distributively unequal country, and its inequality continues to be largely along racial lines. Such circumstances call for assessment from the perspective of contemporary theories of distributive justice. Three such theories—Rawlsian justice, utilitarianism, and luck egalitarianism—are described and applied. Rawls' difference principle recommends that the worst off be made as well off as they can be, a standard which South Africa clearly falls short of. Utilitarianism recommends the maximization of overall societal well-being, a goal which South Africa again fails to achieve given its severe inequality and the fact of the diminishing marginal value of money—that a given amount of money tends to produce more utility for a poor person than it does for a rich person. The final theory, luck egalitarianism, aims to make distributions sensitive to individual exercises of responsibility. This view also objects to South Africa's inequality, this time on the basis that the poor are overwhelmingly worse off through no fault or choice of their own. These major theories of distributive justice therefore all propose large-scale redistribution to the benefit of the (predominantly black) poor. Perhaps more surprisingly, all three views also provide support for socio-economic affirmative action, as opposed to South Africa's race-based Black Economic Empowerment.
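The diminishing-marginal-value point can be put schematically (the notation here is mine, not the article's): with a concave utility-of-money function, a fixed sum raises utility more at low wealth than at high wealth.

```latex
% Schematic rendering (my notation): u is an assumed concave utility-of-money
% function, w_p < w_r are the wealth levels of a poorer and a richer person,
% and t > 0 is a fixed sum of money.
u''(w) < 0 \quad\text{and}\quad w_p < w_r
\;\Longrightarrow\;
u(w_p + t) - u(w_p) \;>\; u(w_r + t) - u(w_r)
```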
This article argues for an unconventional interpretation of Arthur O. Lovejoy's distinctive approach to method in the history of ideas. It is maintained that the value of the central concept of the 'unit-idea' has been misunderstood by friends and foes alike. The commonality of unit-ideas at different times and places is often defined in terms of familial resemblance. But such an approach must necessarily define unit-ideas as being something other than the smallest conceptual unit. It is therefore in tension with Lovejoy's methodological prescription and, more importantly, disregards a potentially important aspect of intellectual history – the smaller conceptual units themselves. In response to this, an alternative interpretation of unit-ideas as 'elemental' – as the smallest identifiable conceptual components – is put forward. Unlike the familial resemblance approach, the elemental approach can provide a plausible explanation for changes in ideas. These are construed as being either the creation of new unit-ideas, the disappearance of existing ones, or alterations in the groups of unit-ideas that compose idea-complexes. The focus on the movement of unit-ideas and idea-complexes through history can also be sensitive to contextual issues, carefully distinguishing the different meanings that single words may have, in much the way that both Lovejoy and his influential critic Quentin Skinner suggest.
David Miller has objected to the cosmopolitan argument that it is arbitrary and hence unfair to treat individuals differently on account of things for which they are not responsible. Such a view seems to require, implausibly, that individuals be treated identically even where (unchosen) needs differ. The objection is, however, inapplicable where the focus of cosmopolitan concern is arbitrary disadvantage rather than arbitrary treatment. This 'unfair disadvantage argument' supports a form of global luck egalitarianism. Miller also objects that cosmopolitanism is unable to accommodate special obligations generated by national membership. Cosmopolitanism can, however, accommodate many special obligations to compatriots. Those which it cannot accommodate are only morally compelling if we assume what the objection claims to prove – that cosmopolitanism is mistaken. Cosmopolitanism construed as global luck egalitarianism is therefore able to withstand both of Miller's objections, and has significant independent appeal on account of the unfair disadvantage argument.
I argue that there can be no such thing as a borderline case of the predicate 'phenomenally conscious': for any given creature at any given time, it cannot be vague whether that creature is phenomenally conscious at that time. I first defend the Positive Characterization Thesis, which says that for any borderline case of any predicate there is a positive characterization of that case that can show any sufficiently competent speaker what makes it a borderline case. I then appeal to the familiar claim that zombies are conceivable, and I argue that this claim entails that there can be no positive characterizations of borderline cases of 'phenomenally conscious'. By the Positive Characterization Thesis, it follows that 'phenomenally conscious' cannot have any borderline cases.
Discrimination might be considered unjust on account of the comparative disadvantage it imposes, the absolute disadvantage it imposes, the disrespect it shows, or the prejudice it shows. This article argues that each of these accounts overlooks some cases of unjust discrimination. In response to this state of affairs we might combine two or more of these accounts. A promising approach combines the comparative disadvantage and absolute disadvantage accounts.
Although the economic thought of Marshall and Pigou was united by ethical positions broadly considered utilitarian, differences in their intellectual milieu led to degrees of difference between their respective philosophical visions. This change in milieu includes the influence of the little-understood period of transition from the early idealist period in Great Britain, which provided the context for Marshall's intellectual formation, to the late British Idealist period, which provided the context for Pigou's intellectual formation. During this latter period, the pervading Hegelianism and the influences of naturalism arising from the ideas of Charles Darwin and Herbert Spencer were challenged by Hermann Lotze, a key transitional thinker influencing the Neo-Kantian movement, who recognised significant limits of naturalism, on the one hand, and of the metaphysical tenor of absolute idealism, on the other, and attempted to provide a balance between the two. The goal of this paper is to make the provisional case for the argument that Pigou's views on ethics were not only directly influenced by utilitarian thinkers like Mill and Sidgwick, but also indirectly influenced by Hermann Lotze, via the influence of the Neo-Kantian movement on late British idealism. To that end, Pigou's essays in The Problem of Theism (1908), including his sympathetic consideration of the ethics of Friedrich Nietzsche, reflect the influence of Lotze indirectly through the impact at Cambridge of: James Ward's critique of associationist psychology and consideration of the limits of naturalism, including the critique of evolutionary ethics; Bertrand Russell's rejection of neo-Hegelianism and, together with Alfred North Whitehead, the development of Logicism; and G.E. Moore's critique of utilitarian ethics on the basis of the naturalistic fallacy and the development of his own intuitionist system of ethics.
This article explores the Rawlsian goal of ensuring that distributions are not influenced by the morally arbitrary. It does so by bringing discussions of distributive justice into contact with the debate over moral luck initiated by Williams and Nagel. Rawls' own justice as fairness appears to be incompatible with the arbitrariness commitment, as it creates some equalities arbitrarily. A major rival, Dworkin's version of brute luck egalitarianism, aims to be continuous with ordinary ethics, and so (a) is sensitive to non-philosophical beliefs about free will and responsibility, and (b) allows inequalities to arise on the basis of option luck. But Dworkin does not present convincing reasons in support of continuity, and there are compelling moral reasons for justice to be sensitive to the best philosophical account of free will and responsibility, as is proposed by the revised brute luck egalitarianism of Arneson and Cohen. While Dworkinian brute luck egalitarianism admits three sorts of morally arbitrary disadvantaging which correspond to three forms of moral luck (constitutive, circumstantial, and option luck), revised brute luck egalitarianism does not disadvantage on the basis of constitutive or circumstantial luck. But it is not as sensitive to responsibility as it needs to be to fully extinguish the influence of the morally arbitrary, for persons under it may exercise their responsibility equivalently yet end up with different outcomes on account of option luck. It is concluded that egalitarians should deny the existence of distributive luck, which is luck in the levels of advantage that individuals are due.
In recent years, a number of theorists have claimed that beliefs about probability are transparent. To believe probably p is simply to have a high credence that p. In this paper, I prove a variety of triviality results for theses like the above. I show that such claims are inconsistent with the thesis that probabilistic modal sentences have propositions or sets of worlds as their meaning. Then I consider the extent to which a dynamic semantics for probabilistic modals can capture theses connecting belief, certainty, credence, and probability. I show that although a dynamic semantics for probabilistic modals does allow one to validate such theses, it can only do so at a cost. I prove that such theses can only be valid if probabilistic modals do not satisfy the axioms of the probability calculus.
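Schematically, and in my notation rather than the paper's, the transparency thesis at issue can be rendered as follows, with the threshold t an assumption of the sketch:

```latex
% Transparency of probabilistic belief (schematic; the threshold t is assumed):
% to believe "probably p" just is to have credence in p above t.
B(\mathit{probably}\ p) \;\longleftrightarrow\; Cr(p) > t
% The triviality results concern combining a thesis of this shape with a
% semantics on which "probably p" denotes a proposition (a set of worlds).
```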
This review article of Shlomi Segall's Health, Luck, and Justice (Princeton University Press, 2010) addresses three issues: first, Segall's claim that luck egalitarianism, properly construed, does not object to brute luck equality; second, Segall's claim that brute luck is properly construed as the outcome of actions that it would have been unreasonable to expect the agent to avoid; and third, Segall's account of healthcare and criticism of rival views. On the first two issues, a more conventional form of luck egalitarianism – that is, one which objects to brute luck even if it creates equality, and which construes brute luck as the inverse of agent responsibility – is defended. On the third issue, strengths and weaknesses in Segall's criticism of Rawlsian, democratic egalitarian, and all-luck egalitarian approaches to healthcare, and in his own luck egalitarian approach, are identified.
In this open peer commentary, we categorize the possible "neuroscience in national security" definitions of misuse of science and identify which, if any, are uniquely presented by advances in neuroscience. To define misuse, we first define what we would consider appropriate use: the application of reasonably safe and effective technology, based on valid and reliable scientific research, to serve a legitimate end. This definition presents distinct opportunities for assessing misuse: misuse is the application of invalid or unreliable science, or is the use of reliable scientific methods to serve illegitimate ends. Ultimately, we conclude that while national security is often a politicized issue, assessing the state of scientific progress should not be.
Today's technological-scientific prospect of posthumanity simultaneously evokes and defies historical understanding. On the one hand, it implies a historical claim of an epochal transformation concerning posthumanity as a new era. On the other, by postulating the birth of a novel, better-than-human subject for this new era, it eliminates the human subject of modern Western historical understanding. In this article, I attempt to understand posthumanity as measured against the story of humanity as the story of history itself. I examine the fate of humanity as the central subject of history in three consecutive steps: first, by exploring how classical philosophies of history achieved the integrity of the greatest historical narrative of history itself through the very invention of humanity as its subject; second, by recounting how this central subject came under heavy criticism by postcolonial and gender studies in the last half-century, targeting the universalism of the story of humanity as the greatest historical narrative of history; and third, by conceptualizing the challenge of posthumanity against both the story of humanity and its criticism. Whereas criticism fragmented history but retained the possibility of smaller-scale narratives, posthumanity does not doubt the feasibility of the story of humanity. Instead, it necessarily invokes humanity, if only in order to be able to claim its supersession by a better-than-human subject. In that, it represents a fundamental challenge to the modern Western historical condition and the very possibility of historical narratives – small-scale or large-scale, fragmented or universal.
Emissions grandfathering holds that a history of emissions strengthens an agent's claim for future emission entitlements. Though grandfathering appears to have been influential in actual emission control frameworks, it is rarely taken seriously by philosophers. This article presents an argument for thinking this an oversight. The core of the argument is that members of countries with higher historical emissions are typically burdened with higher costs when transitioning to a given lower level of emissions. According to several appealing views in political philosophy (utilitarianism, egalitarianism, prioritarianism, and sufficientarianism) they are therefore entitled to greater resources, including emission entitlements, than those in similar positions but with lower emissions. This grandfathering may play an especially important role in allocating emission entitlements among rich countries.
This article proposes that theories and principles of distributive justice be considered substantively egalitarian iff they satisfy each of three conditions: (1) they consider the bare fact that a person is in certain circumstances to be a conclusive reason for placing another relevantly identically entitled person in the same circumstances, except where this conflicts with other similarly conclusive reasons arising from the circumstances of other persons; (2) they can be stated as 'equality of x for all persons', making no explicit or implicit exclusion of persons or individuals and showing no greater concern and respect for some rather than others; and (3) they pursue equality in a dimension that is valuable to egalitarians. On this construal, prioritarianism and Dworkinian equality of resources, a view often identified as luck egalitarian, are not substantively egalitarian, but equality of opportunity, the standard form of luck egalitarianism, may be.
Many naive realists endorse a negative disjunctivist strategy in order to deal with the challenge presented by the possibility of phenomenologically indistinguishable hallucination. In the first part of this paper I argue that this approach is methodologically inconsistent because it undercuts the phenomenological motivation that underlies the appeal of naive realism. In the second part of the paper I develop an alternative to the negative disjunctivist account along broadly Meinongian lines. In the last section of this paper I consider and evaluate a somewhat similar but rival view of hallucination developed by Mark Johnston.
Idealist philosophers have traditionally tried to defend their views by appealing to the claim that nonmental reality is inconceivable. A standard response to this inconceivability claim is to try to show that it is only plausible if one blurs the fundamental distinction between consciousness and its object. I try to rehabilitate the idealistic argument by presenting an alternative formulation of the idealist's basic inconceivability claim. Rather than suggesting that all objects are inconceivable apart from consciousness, I suggest that it is impossible to conceive of any such object as genuinely existent. This thesis is lent credence by the fact that only in reflective self-consciousness is existence a phenomenological datum. Not only is it the case that we are not ever aware of an object as existing, we do not have a clear understanding of what it would be like to have such an awareness. If this is true, then we have reason to believe that while consciousness exists, the objects of consciousness cannot exist.
Contemporary discussions of egalitarian justice have often focused on the issue of expensive taste. G.A. Cohen has recently abandoned the view that all chosen disadvantages are non-compensable, now maintaining that chosen expensive judgmental tastes—those endorsed by valuational judgment—are compensable as it is unreasonable to expect persons not to develop them. But chosen expensive brute taste—the main type of non-compensable expensive taste on the new scheme—cannot be described in such a way that there is a normative difference between it and chosen expensive judgmental taste. As there are related problems with denying compensation for the other kind of expensive taste that might remain non-compensable, Cohen's position on taste appears to be either implausible or virtually indistinguishable from that of equality of welfare. However, compensation for valuational judgment-based expensive taste might be justified on grounds of responsibility.
Traditional outcome-orientated egalitarian principles require access to information about the size of individual holdings. Recent egalitarian political theory has sought to accommodate considerations of responsibility. Such a move may seem problematic, in that a new informational burden is thereby introduced, with no apparent decrease in the existing burden. This article uses a simple model with simulated data to examine the extent to which outcome egalitarianism and responsibility-sensitive egalitarianism ('luck egalitarianism') can be accurately applied where information is incomplete or erroneous. It is found that, while outcome egalitarianism tends to be more accurately applied, its advantage is not overwhelming, and in many prima facie plausible circumstances luck egalitarianism would be more accurately applied. This suggests that luck egalitarianism cannot be rejected as utopian. Furthermore, while some argue that, in practice, luck egalitarianism is best realized indirectly, by securing equality of outcome, our evidence suggests that a luck egalitarian rule of regulation offers a far more accurate implementation of the luck egalitarian ideal than does an outcome egalitarian rule of regulation.
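The article's model is not reproduced here, but a minimal sketch of the kind of comparison it describes might look as follows; the distributions, noise levels, allocation rules, and error metric are all illustrative assumptions of mine, not the article's.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# True (unobserved) quantities: each person's overall advantage, and the
# part of it attributable to their own responsible choices.
true_outcome = rng.normal(100.0, 20.0, N)
true_choice_part = rng.uniform(0.0, 10.0, N)

# Observed data: corrupted by measurement error; responsibility is assumed
# to be harder to measure accurately than outcomes.
obs_outcome = true_outcome + rng.normal(0.0, 5.0, N)
obs_choice_part = true_choice_part + rng.normal(0.0, 3.0, N)

# Luck egalitarian ideal: equalize the unchosen ("brute luck") component,
# leaving each person's chosen component intact.
ideal = (true_outcome - true_choice_part).mean() + true_choice_part

# Rule 1: outcome egalitarianism applied to observed data.
alloc_outcome = np.full(N, obs_outcome.mean())

# Rule 2: luck egalitarianism applied to observed data.
alloc_luck = (obs_outcome - obs_choice_part).mean() + obs_choice_part

for name, alloc in (("outcome rule", alloc_outcome), ("luck rule", alloc_luck)):
    print(f"{name}: mean absolute error vs. ideal = {np.abs(alloc - ideal).mean():.2f}")
```

With these assumed numbers the two rules come out comparably accurate, and the luck rule can come out ahead, mirroring the article's point that luck egalitarianism cannot be rejected as utopian on informational grounds alone.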
The central hypothesis of the collaboration between Language and Computing (L&C) and the Institute for Formal Ontology and Medical Information Science (IFOMIS) is that the methodology and conceptual rigor of a philosophically inspired formal ontology will greatly benefit software application ontologies. To this end LinKBase®, L&C's ontology, which is designed to integrate and reason across various external databases simultaneously, has been submitted to the conceptual demands of IFOMIS's Basic Formal Ontology (BFO). With this, we aim to move beyond the level of controlled vocabularies to yield an ontology with the ability to support reasoning applications.
Even where an act appears to be responsible, and satisfies all the conditions for responsibility laid down by society, the response to it may be unjust where that appearance is false, and where those conditions are insufficient. This paper argues that those who want to place considerations of responsibility at the centre of distributive and criminal justice ought to take this concern seriously. The common strategy of relying on what Susan Hurley describes as a 'black box of responsibility' has the advantage of not taking responsibility considerations to be irrelevant merely because some specific account of responsibility is mistaken. It can, furthermore, cope perfectly well with an absence of responsibility, even of the global sort implied by hard determinism and other strongly sceptical accounts. Problems for the black box view come in where responsibility is present, but in a form that is curtailed in one significant regard or another. The trick, then, is to open the box of responsibility just enough that its contents can be the basis for judgements of justice. I identify three 'moderately sceptical' forms of compatibilism that cannot ground judgements of justice, and are therefore expunged by the strongest 'grey box' view.
Software application ontologies have the potential to become the keystone in state-of-the-art information management techniques. It is expected that these ontologies will support the sort of reasoning power required to navigate large and complex terminologies correctly and efficiently. Yet, there is one problem in particular that continues to stand in our way. As these terminological structures increase in size and complexity, and the drive to integrate them inevitably swells, it is clear that the level of consistency required for such navigation will become correspondingly difficult to maintain. While descriptive semantic representations are certainly a necessary component to any adequate ontology-based system, so long as ontology engineers rely solely on semantic information, without a sound ontological theory informing their modeling decisions, this goal will surely remain out of reach. In this paper we describe how Language and Computing nv (L&C), along with The Institute for Formal Ontology and Medical Information Sciences (IFOMIS), are working towards developing and implementing just such a theory, combining the open software architecture of L&C's LinkSuiteTM with the philosophical rigor of IFOMIS's Basic Formal Ontology. In this way we aim to move beyond the more or less simple controlled vocabularies that have dominated the industry to date.
The collaboration of Language and Computing nv (L&C) and the Institute for Formal Ontology and Medical Information Science (IFOMIS) is guided by the hypothesis that quality constraints on ontologies for software application purposes closely parallel the constraints salient to the design of sound philosophical theories. The extent of this parallel has been poorly appreciated in the informatics community, and it turns out that importing the benefits of philosophical insight and methodology into application domains yields a variety of improvements. L&C's LinKBase® is one of the world's largest medical domain ontologies. Its current primary use pertains to natural language processing applications, but it also supports intelligent navigation through a range of structured medical and bioinformatics information resources, such as SNOMED-CT, Swiss-Prot, and the Gene Ontology (GO). In this report we discuss how and why philosophical methods improve both the internal coherence of LinKBase®, and its capacity to serve as a translation hub, improving the interoperability of the ontologies through which it navigates.
Palliative care serves both as an integrated part of treatment and as a last effort to care for those we cannot cure. The extent to which palliative care should be provided and our reasons for doing so have been curiously overlooked in the debate about distributive justice in health and healthcare. We argue that one prominent approach, the Rawlsian approach developed by Norman Daniels, is unable to provide such reasons and such care. This is because of a central feature in Daniels' account, namely that care should be provided to restore people's opportunities. Daniels' view is both unable to provide pain relief to those who need it as a supplement to treatment and without justice-based reasons to provide palliative care to those whose opportunities cannot be restored. We conclude that this makes Daniels' framework much less attractive.
A large proportion of humankind today lives in avoidable poverty. This article examines whether affluent individuals and governments have moral duties to change this situation. It is maintained that an alternative to the familiar accounts of transdomestic distributive justice and personal ethics put forward by writers such as Peter Singer, John Rawls, and Thomas Pogge is required, since each of these accounts fails to reflect the full range of relevant considerations. A better account would give some weight to overall utility, the condition of the worst off, and individual responsibility. This approach provides robust support to global poverty alleviation.
Emissions grandfathering maintains that prior emissions increase future emission entitlements. The view forms a large part of actual emission control frameworks, but is routinely dismissed by political theorists and applied philosophers as evidently unjust. A sympathetic theoretical reconsideration of grandfathering suggests that the most plausible version is moderate, allowing that other considerations should also influence emission entitlements, and is justified on instrumental grounds. The most promising instrumental justification defends moderate grandfathering on the basis that one extra unit of emission entitlements from a baseline of zero emissions increases welfare to a greater extent where it is assigned to a high emitter than where it is assigned to a low emitter. Moderate grandfathering can be combined with basic needs and ability to pay considerations to provide an attractive approach to allocating emission entitlements.
The central hypothesis of the collaboration between Language and Computing (L&C) and the Institute for Formal Ontology and Medical Information Science (IFOMIS) is that the methodology and conceptual rigor of a philosophically inspired formal ontology greatly benefits application ontologies. To this end LinKBase®, L&C's ontology, which is designed to integrate and reason across various external databases simultaneously, has been submitted to the conceptual demands of IFOMIS's Basic Formal Ontology (BFO). With this project we aim to move beyond the level of controlled vocabularies to yield an ontology with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase® are standardized in a framework of first-order logic. In this paper we describe how this standardization has already led to an improvement in the LinKBase® structure that allows for a greater degree of internal coherence than ever before possible. We then show the use of this philosophical standardization for the purpose of mapping external databases to one another, using LinKBase® as a translation hub, with a greater degree of success than has been possible hitherto. We demonstrate how this offers a genuine advance over other application ontologies that have not submitted themselves to the demands of philosophical scrutiny. LinKBase® is one of the world's largest applications-oriented medical domain ontologies, and BFO is one of the world's first philosophically driven reference ontologies. The collaboration of the two thus initiates a new phase in the quest to solve the so-called 'Tower of Babel' problem.
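As an illustration of what such first-order standardization can look like, the toy axioms below are invented for this purpose and are not drawn from LinKBase® or BFO:

```latex
% Invented examples, not actual LinKBase or BFO content.
% Subsumption ("is_a") rendered as a quantified conditional over instances:
\forall x\,\bigl(\mathit{inst}(x, \mathit{Pneumonia}) \rightarrow \mathit{inst}(x, \mathit{Disease})\bigr)
% Typing a relation, so that cross-database mappings can be checked:
\forall x \forall y\,\bigl(\mathit{located\_in}(x, y) \rightarrow \mathit{Continuant}(x) \wedge \mathit{Continuant}(y)\bigr)
```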
The theory and philosophy of history (just like philosophy in general) has established a dogmatic dilemma regarding the issue of language and experience: either you have an immediate experience separated from language, or you have language without any experiential basis. In other words, either you have an immediate experience that is and must remain mute and ineffable, or you have language and linguistic conceptualization that precedes experience, provides the condition of possibility of it, and thus, in a certain sense, produces it. Either you join forces with the few and opt for such mute experiences, or you go with the flow of narrative philosophy of history and the impossibility of immediacy. Either way, you end up postulating a mutual hostility between the nonlinguistic and language, and, more important, you remain unable to account for new insights and change. Contrary to this and in relation to history, I am going to talk about something nonlinguistic—historical experience—and about how such historical experience could productively interact with language in giving birth to novel historical representations. I am going to suggest that, under a theory of expression, a more friendly relationship can be established between experience and language: a relationship in which they are not hostile to but rather desperately need each other. To explain the occurrence of new insights and historiographical change, I will talk about a process of expression as sense-formation and meaning-constitution in history, and condense the theory into a struck-through "of," as the expression of historical experience.
The rapidly growing transdisciplinary enthusiasm about developing new kinds of Anthropocene stories is based on the shared assumption that the Anthropocene predicament is best made sense of by narrative means. Against this assumption, this article argues that the challenge we are facing today does not merely lie in telling either scientific, socio-political, or entangled Anthropocene narratives to come to terms with our current condition. Instead, the challenge lies in coming to grips with how the stories we can tell in the Anthropocene relate to the radical novelty of the Anthropocene condition about which no stories can be told. What we need to find are meaningful ways to reconcile an inherited commitment to narrativization and the collapse of storytelling as a vehicle of understanding the Anthropocene as our current predicament.
The historical sensibility of Western modernity is best captured by the phrase "acting upon a story that we can believe." Whereas the most famous stories of historians facilitated nation-building processes, philosophers of history told the largest possible story to act upon: history itself. When the rise of an overwhelming postwar skepticism about the modern idea of history discredited the entire enterprise, the historical sensibility of "acting upon a story that we can believe" fell apart into its constituents: action, story form, and belief in a feasible future outcome. Its constituent parts nevertheless still hold, either separately or in paired arrangements. First, believable stories are still told, but without an equally believable future outcome to facilitate. Second, in the shape of what I call the prospect of unprecedented change, there still is a feasible vision of a future (in prospects of technology and the Anthropocene), but it defies story form. And third, it is even possible to act upon that feasible future, but such action aims at avoiding worst-case scenarios instead of facilitating best outcomes. These are, I believe, features of an emerging postwar historical sensibility that the theory and philosophy of history has yet to understand.
In times of a felt need to justify the value of the humanities, the need to revisit and re-establish the public relevance of the discipline of history cannot come as a surprise. On the following pages I will argue that this need is unappeasable by scholarly proposals. The much desired revitalization of historical writing lies instead in reconciling ourselves with the dual meaning of the word history, in exploring the necessary interconnection between history understood as the course of events and as historical writing. Despite the general tendency of the last decades to forbid philosophizing about history in the former sense (at least in departments of history and philosophy), I think that to a certain extent we already do so without succumbing to substantive thought. We already have the sprouts of a speculative although only quasi-substantive philosophy of history that nevertheless takes seriously the postwar criticism of the substantive enterprise. In this essay I will first try to outline this quasi-substantive philosophy of history that attests to the historical sensibility of our times; and second, I will try to outline its consequences regarding history as historical writing. Finally, in place of a conclusion I will suggest that historical writing is not as much a contribution to public agendas as it is the very arena in which public life is at stake.
A brief examination of the self-negating quality of the precautionary principle within the context of environmental ethics, and its consequent failure, as an ethical guide, to justify large-scale regulation of atmospheric carbon dioxide emissions.
This article reviews Herbert Simon's theory of bounded rationality, with a view to deemphasizing his "satisficing" model and, by contrast, emphasizing his distinction between "procedural" and "substantive" rationality. The article also discusses a possible move by neo-classical economists to respond to Simon's criticisms, namely a reduction of bounded rationality to a special case of optimization, using Stigler's search theory. This move is eventually dismissed.
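For a toy illustration of that reductionist move, consider a fixed-sample-size search in Stigler's style; the uniform offer distribution and the cost figure below are my assumptions for the sketch, not the article's.

```python
# Toy Stigler-style fixed-sample search: draw n offers uniform on [0, 1],
# keep the best, and pay a constant cost c per draw. The "boundedly rational"
# stopping problem is thereby recast as choosing an optimal n.
c = 0.02

def net_benefit(n: int) -> float:
    expected_best = n / (n + 1)  # E[max of n U(0,1) draws]
    return expected_best - c * n

best_n = max(range(1, 50), key=net_benefit)
print(f"optimal number of searches: {best_n}")  # 6 with c = 0.02
```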
Neither Karl Popper, nor Frank Knight, nor Max Weber is cited or mentioned in Friedman's famous 1953 essay "On the methodology of positive economics" (F53). However, they play a crucial role in F53. Making their contribution explicit suggests that F53 has been seriously misread in the past. I will first show that there are several irritating statements in F53 that are, taken together, not compatible with any of the usual readings of F53. Second, I show that an alternative reading of F53 can be achieved if one takes seriously Friedman's reference to ideal types; "ideal type" is a technical term introduced by Max Weber. Friedman was familiar with Max Weber's work through Frank Knight, who was his teacher in Chicago. Given that in F53's view ideal types are fundamental building blocks of economic theory, it becomes clear why both instrumentalist and realist readings of F53 are inadequate. Third, the reading of F53 in terms of ideal types gives the role of elements from Popper's falsificationist methodology in F53 a somewhat different twist. Finally, I show that the irritating passages of F53 make good sense under the new reading, including the infamous "the more significant the theory, the more unrealistic the assumptions".
Simon Blackburn has not shied away from the use of vivid imagery in developing, over a long and prolific career, a large-scale philosophical vision. Here one might think, for instance, of 'Practical Tortoise Raising' or 'Ramsey's Ladder' or 'Frege's Abyss'. Blackburn develops a 'quasi-realist' account of many of our philosophical and everyday commitments, both theoretical (e.g., modality and causation) and practical (e.g., moral judgement and normative reasons). Quasi-realism aims to provide a naturalistic treatment of its targeted phenomena while earning the right to deploy all of the 'trappings' of realism—i.e., while eschewing any idea that our normal thought and talk about such phenomena are pervasively in error. The quasi-realist project is that of explaining how (as Huw Price puts it here) 'the folk come to "talk the realist talk" without committing ourselves—us theorists, as it were—to "walking the metaphysical walk"' (p. 136). Quasi-realism, too, can speak of truth, facts, properties, belief, knowledge, and so on. The imagery in this collection also abounds, though, in capturing a different view of quasi-realism: no fewer than three of the contributors picture Blackburn as wanting to have his cake and eat it too (Louise Antony asking, in addition, 'Who doesn't? It's cake' [p. 19]).
In his recent book 'Experiencing Time', Simon Prosser discusses a wide variety of topics relating to temporal experience, in a way that is accessible both to those steeped in the philosophy of mind and to those more familiar with the philosophy of time. He forcefully argues for the conclusion that the B-theorist of time can account for the temporal appearances. In this article, I offer a chapter-by-chapter response.
This chapter argues that Simon anticipated what has emerged as the consensus view about human cognition: embodied functionalism. According to embodied functionalism, cognitive processes appear at a distinctively cognitive level; types of cognitive processes (such as proving a theorem) are not identical to kinds of neural processes, because the former can take various physical forms in various individual thinkers. Nevertheless, the distinctive characteristics of such processes — their causal structures — are determined by fine-grained properties shared by various, often especially bodily related, physical processes that realize them. Simon's apparently anti-embodiment views are surveyed and are shown to be consistent with his many claims that lend themselves to an embodied interpretation and that, to a significant extent, helped to lay the groundwork for an embodied cognitive science.
Review of: Menachem Fisch; Simon Schaffer (Editors). William Whewell: A Composite Portrait. xiv + 403 pp., bibl., index. Oxford: Clarendon Press of Oxford University Press, 1991. $98.
Albert Lautman. Mathematics, Ideas and the Physical Real. Simon B. Duffy, trans. London and New York: Continuum, 2011. 978-1-4411-2344-2 (pbk); 978-1-4411-4656-4 (hbk); 978-1-4411-4433-1 (PDF e-bk); 978-1-4411-4654-0 (EPUB e-bk). Pp. xlii + 310.
One of the most celebrated mathematical physicists, Pierre-Simon Laplace is often remembered as the mathematician who showed that, despite appearances, the Solar System does conform to Newton's theories. Together with distinguished scholars Robert Fox and Ivor Grattan-Guinness, Charles Gillispie gives us a new perspective, showing that Laplace did not merely vindicate Newton's system, but had a uniquely creative and independent mind.