Both realist and anti-realist accounts of natural kinds possess prima facie virtues: realists can straightforwardly make sense of the apparent objectivity of the natural kinds, and anti-realists, their knowability. This paper formulates a properly anti-realist account designed to capture both merits. In particular, it recommends understanding natural kinds as ‘categorical bottlenecks,’ those categories that not only best serve us, with our idiosyncratic aims and cognitive capacities, but also those of a wide range of alternative agents. By endorsing an ultimately subjective categorical principle, this view sidesteps epistemological difficulties facing realist views. Yet, it nevertheless identifies natural kinds that are fairly, though not completely, stance-independent or objective.
This paper sketches a causal account of scientific explanation designed to sustain the judgment that high-level, detail-sparse explanations—particularly those offered in biology—can be at least as explanatorily valuable as lower-level counterparts. The motivating idea is that complete explanations maximize causal economy: they cite those aspects of an event’s causal run-up that offer the biggest bang for your buck, by costing less (in virtue of being abstract) and delivering more (in virtue of making the event stable or robust).
Though biologists identify individuals as ‘male’ or ‘female’ across a broad range of animal species, the particular traits exhibited by males and females can vary tremendously. This diversity has led some to conclude that cross-animal sexes (males, or females, of whatever animal species) have “little or no explanatory power” (Dupré 1986: 447) and, thus, are not natural kinds in any traditional sense. This essay will explore considerations for and against this conclusion, ultimately arguing that the animal sexes, properly understood, are “historical explanatory kinds”: groupings that can be scientifically significant even while their members differ radically in their current properties and particular histories. Whether this makes them full-fledged natural kinds is a question I take up at the very end.
The interventionist account of causal explanation, in the version presented by Jim Woodward, has recently been claimed capable of buttressing the widely felt—though poorly understood—hunch that high-level, relatively abstract explanations, of the sort provided by sciences like biology, psychology and economics, are in some cases explanatorily optimal. It is the aim of this paper to show that this is mistaken. Due to a lack of effective constraints on the causal variables at the heart of the interventionist causal-explanatory scheme, as presently formulated it is either unable to prefer high-level explanations to low-level ones, or systematically overshoots, recommending explanations at so high a level as to be virtually vacuous.
This paper critiques the new mechanistic explanatory program on grounds that, even when applied to the kinds of examples that it was originally designed to treat, it does not distinguish correct explanations from those that blunder. First, I offer a systematization of the explanatory account, one according to which explanations are mechanistic models that satisfy three desiderata: they must 1) represent causal relations, 2) describe the proper parts, and 3) depict the system at the right ‘level.’ Second, I argue that even the most developed attempts to fulfill these desiderata fall short by failing to appropriately constrain explanatorily apt mechanistic models. *This paper used to be called “The Emperor's New Mechanisms”.
Among the factors necessary for the occurrence of some event, which of these are selectively highlighted in its explanation and labeled as causes — and which are explanatorily omitted, or relegated to the status of background conditions? Following J. S. Mill, most have thought that only a pragmatic answer to this question was possible. In this paper I suggest we understand this ‘causal selection problem’ in causal-explanatory terms, and propose that explanatory trade-offs between abstraction and stability can provide a principled solution to it. After sketching that solution, it is applied to a few biological examples, including a debate concerning the ‘causal democracy’ of organismal development, with an anti-democratic (though not a gene-centric) moral.
The Tree of Life has traditionally been understood to represent the history of species lineages. Recently, however, researchers have suggested that it might be better interpreted as representing the history of cellular lineages, sometimes called the Tree of Cells. This paper examines and evaluates reasons offered against this cellular interpretation of the Tree of Life. It argues that some such reasons are bad reasons, based either on a false attribution of essentialism, on a misunderstanding of the problem of lineage identity, or on a limited view of scientific representation. I suggest that debate about the Tree of Cells and other successors to the traditional Tree of Life should be formulated in terms of the purposes these representations may serve. In pursuing this strategy, we see that the Tree of Cells cannot serve one purpose suggested for it: as an explanation for the hierarchical nature of taxonomy. We then explore whether, instead, the tree may play an important role in the dynamic modeling of evolution. As highly integrated complex systems, cells may influence which lineage components can successfully transfer into them and how they change once integrated. Only if they do in fact have a substantial role to play in this process might the Tree of Cells have some claim to be the Tree of Life.
Causal selection is the task of picking out, from a field of known causally relevant factors, some factors as elements of an explanation. The Causal Parity Thesis in the philosophy of biology challenges the usual ways of making such selections among different causes operating in a developing organism. The main target of this thesis is usually gene centrism, the doctrine that genes play some special role in ontogeny, which is often described in terms of information-bearing or programming. This paper is concerned with the attempt to confront the challenge posed by the Causal Parity Thesis by offering principles of causal selection that are spelled out in terms of an explicit philosophical account of causation, namely an interventionist account. I show that two such accounts that have been developed, although they contain important insights about causation in biology, nonetheless fail to provide an adequate reply to the Causal Parity challenge: Ken Waters's account of actual difference-making and Jim Woodward's account of causal specificity. A combination of the two doesn't do the trick either, nor does Laura Franklin-Hall's account of explanation (in this volume). We need additional conceptual resources. I argue that the resources we need consist in a special class of counterfactual conditionals, namely counterfactuals whose antecedents describe biologically normal interventions.
Philosophical theories of explanation characterize the difference between correct and incorrect explanations. While remaining neutral as to which of these ‘first-order’ theories is right, this paper asks the ‘meta-explanatory’ question: is the difference between correct and incorrect explanation real, i.e., objective or mind-independent? After offering a framework for distinguishing realist from anti-realist views, I sketch three distinct paths to explanatory anti-realism.
This study sought to replicate and extend Hall and colleagues’ (2014) work on developing and validating scales from the Psychopathic Personality Inventory (PPI) to index the triarchic psychopathy constructs of boldness, meanness, and disinhibition. This study also extended Hall et al.’s initial findings by including the PPI Revised (PPI–R). A community sample (n = 240) weighted toward subclinical psychopathy traits and a male prison sample (n = 160) were used for this study. Results indicated that PPI–Boldness, PPI–Meanness, and PPI–Disinhibition converged with other psychopathy, personality, and behavioral criteria in ways conceptually expected from the perspective of the triarchic psychopathy model, including showing very strong convergent and discriminant validity with their Triarchic Psychopathy Measure counterparts. These findings further enhance the utility of the PPI and PPI–R in measuring these constructs.
The aim of this article is to discuss the relation between indigenous and scientific kinds on the basis of contemporary ethnobiological research. I argue that ethnobiological accounts of taxonomic convergence-divergence patterns challenge common philosophical models of the relation between folk concepts and natural kinds. Furthermore, I outline a positive model of taxonomic convergence-divergence patterns that is based on Slater's [2014] notion of “stable property clusters” and Franklin-Hall's [2014] discussion of natural kinds as “categorical bottlenecks.” Finally, I argue that this model is not only helpful for understanding the relation between indigenous and scientific kinds but also makes substantial contributions to contemporary debates about natural kinds.
In his recent book The Idea of Justice, Amartya Sen suggests that political philosophy should move beyond the dominant, Rawls-inspired, methodological paradigm – what Sen calls ‘transcendental institutionalism’ – towards a more practically oriented approach to justice: ‘realization-focused comparison’. In this article, I argue that Sen's call for a paradigm shift in thinking about justice is unwarranted. I show that his criticisms of the Rawlsian approach are either based on misunderstandings, or correct but of little consequence, and conclude that the Rawlsian approach already delivers much of what Sen himself wants from a theory of justice.
• It would be a moral disgrace for God (if he existed) to allow the many evils in the world, in the same way it would be for a parent to allow a nursery to be infested with criminals who abused the children.
• There is a contradiction in asserting all three of the propositions: God is perfectly good; God is perfectly powerful; evil exists (since if God wanted to remove the evils and could, he would).
• The religious believer has no hope of getting away with excuses that evil is not as bad as it seems, or that it is all a result of free will, and so on.
Piper avoids mentioning the best solution so far put forward to the problem of evil. It is Leibniz’s theory that God does not create a better world because there isn’t one — that is, that (contrary to appearances) if one part of the world were improved, the ramifications would result in it being worse elsewhere, and worse overall. It is a “bump in the carpet” theory: push evil down here, and it pops up over there. Leibniz put it by saying this is the “Best of All Possible Worlds”. That phrase was a public relations disaster for his theory, suggesting as it does that everything is perfectly fine as it is. He does not mean that, but only that designing worlds is a lot harder than it looks, and determining the amount of evil in the best one is no easy matter. Though humour is hardly appropriate to the subject matter, the point of Leibniz’s idea is contained in the old joke, “An optimist is someone who thinks this is the best of all possible worlds, and a pessimist thinks…
This article provides a conceptual map of the debate on ideal and non‐ideal theory. It argues that this debate encompasses a number of different questions, which have not been kept sufficiently separate in the literature. In particular, the article distinguishes between the following three interpretations of the ‘ideal vs. non‐ideal theory’ contrast: full compliance vs. partial compliance theory; utopian vs. realistic theory; end‐state vs. transitional theory. The article advances critical reflections on each of these sub‐debates, and highlights areas for future research in the field.
A polemical account of Australian philosophy up to 2003, emphasising its unique aspects (such as commitment to realism) and the connections between philosophers' views and their lives. Topics include early idealism, the dominance of John Anderson in Sydney, the Orr case, Catholic scholasticism, Melbourne Wittgensteinianism, philosophy of science, the Sydney disturbances of the 1970s, Francofeminism, environmental philosophy, the philosophy of law and Mabo, ethics and Peter Singer. Realist theories especially praised are David Armstrong's on universals, David Stove's on logical probability and the ethical realism of Rai Gaita and Catholic philosophers. In addition to strict philosophy, the book treats non-religious moral traditions to train virtue, such as Freemasonry, civics education and the Greek and Roman classics.
Effective quantum field theories are effective insofar as they apply within a prescribed range of length-scales, but within that range they predict and describe with extremely high accuracy and precision. The effectiveness of EFTs is explained by identifying the features—the scaling behaviour of the parameters—that lead to effectiveness. The explanation relies on distinguishing autonomy with respect to changes in microstates, from autonomy with respect to changes in microlaws, and relating these, respectively, to renormalizability and naturalness. It is claimed that the effectiveness of EFTs is a consequence of each theory’s autonomy with respect to its microstates rather than its autonomy with respect to its microlaws.
It is commonly claimed that the universality of critical phenomena is explained through particular applications of the renormalization group. This article has three aims: to clarify the structure of the explanation of universality, to discuss the physics of such RG explanations, and to examine the extent to which universality is thus explained. The derivation of critical exponents proceeds via a real-space or a field-theoretic approach to the RG. Building on work by Mainwood, this article argues that these approaches ought to be distinguished: while the field-theoretic approach explains universality, the real-space approach fails to provide an adequate explanation.
Recent discussions of emergence in physics have focussed on the use of limiting relations, and often particularly on singular or asymptotic limits. We discuss a putative example of emergence that does not fit into this narrative: the case of phonons. These quasi-particles have some claim to be emergent, not least because the way in which they relate to the underlying crystal is almost precisely analogous to the way in which quantum particles relate to the underlying quantum field theory. But there is no need to take a limit when moving from a crystal lattice based description to the phonon description. Not only does this demonstrate that we can have emergence without limits, but also provides a way of understanding cases that do involve limits.
How were reliable predictions made before Pascal and Fermat's discovery of the mathematics of probability in 1654? What methods in law, science, commerce, philosophy, and logic helped us to get at the truth in cases where certainty was not attainable? The book examines how judges, witch inquisitors, and juries evaluated evidence; how scientists weighed reasons for and against scientific theories; and how merchants counted shipwrecks to determine insurance rates. Also included are the problem of induction before Hume, design arguments for the existence of God, and theories on how to evaluate scientific and historical hypotheses. It is explained how Pascal and Fermat's work on chance arose out of legal thought on aleatory contracts. The book interprets pre-Pascalian unquantified probability in a generally objective Bayesian or logical probabilist sense.
Philosophers of experiment have acknowledged that experiments are often more than mere hypothesis-tests, once thought to be an experiment's exclusive calling. Drawing on examples from contemporary biology, I make an additional amendment to our understanding of experiment by examining the way that ‘wide’ instrumentation can, for reasons of efficiency, lead scientists away from traditional hypothesis-directed methods of experimentation and towards exploratory methods.
In What Science Knows, the Australian philosopher and mathematician James Franklin explains in captivating and straightforward prose how science works its magic. The book offers a semipopular introduction to an objective Bayesian/logical probabilist account of scientific reasoning, arguing that inductive reasoning is logically justified (though actually existing science sometimes falls short). Its account of mathematics is Aristotelian realist.
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing; it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing; rather, blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
Philosophical discussions of species have focused on multicellular, sexual animals and have often neglected to consider unicellular organisms like bacteria. This article begins to fill this gap by considering what species concepts, if any, apply neatly to the bacterial world. First, I argue that the biological species concept cannot be applied to bacteria because of the variable rates of genetic transfer between populations, depending in part on which gene type is prioritized. Second, I present a critique of phylogenetic bacterial species, arguing that phylogenetic bacterial classification requires a questionable metaphysical commitment to the existence of essential genes. I conclude by considering how microbiologists have dealt with these biological complexities by using more pragmatic and not exclusively evolutionary accounts of species. I argue that this pragmatism is not borne of laziness but rather of the substantial conceptual problems in classifying bacteria based on any evolutionary standard.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
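The classical verdict that the paper's confirmation analysis must recover can be checked by simulation. Below is a minimal Python sketch, assuming the standard uniform randomization of the car's location and the contestant's pick (precisely the ‘lottery’-style assumptions the paper itself avoids relying on); the function name is illustrative.

    import random

    def monty_hall(trials=100_000):
        """Estimate win rates for 'stay' vs 'switch' in the classic Monty Hall game."""
        stay_wins = switch_wins = 0
        for _ in range(trials):
            car = random.randrange(3)    # door hiding the car
            pick = random.randrange(3)   # contestant's initial choice
            # The host opens a door that is neither the pick nor the car; when two
            # doors qualify, the choice between them cannot affect stay/switch rates.
            opened = next(d for d in range(3) if d not in (pick, car))
            switched = next(d for d in range(3) if d not in (pick, opened))
            stay_wins += (pick == car)
            switch_wins += (switched == car)
        return stay_wins / trials, switch_wins / trials

    stay, switch = monty_hall()
    print(f"stay: {stay:.3f}  switch: {switch:.3f}")  # approximately 1/3 vs 2/3

Switching wins about two-thirds of the time; the interest of the paper lies in deriving this verdict from confirmation alone, without the uniform priors the simulation hard-codes.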
Although the view that sees proper names as referential singular terms is widely considered orthodoxy, the view that proper names are predicates is growing in popularity. This is partly because the orthodoxy faces two anomalies that Predicativism can solve: on the one hand, proper names can have multiple bearers, but multiple bearerhood is a problem for the idea that proper names have just one individual as referent. On the other hand, as Burge noted, proper names can have predicative uses, and the view that proper names are singular terms arguably does not have the resources to deal with Burge’s cases. In this paper I argue that the Predicate View of proper names is mistaken. I first argue against the syntactic evidence used to support the view and against the predicativist’s methodology of inferring a semantic account for proper names based on incomplete syntactic data. I also show that Predicativism can neither explain the behaviour of proper names in full generality, nor claim the fundamentality of predicative names. In developing my own view, however, I accept the insight that proper names in some sense express generality. Hence I propose that proper names—albeit fundamentally singular referential terms—express generality in two senses. First, by being used as predicates, since then they are true of many individuals; and second, by being referentially related to many individuals. I respond to the problem of multiple bearerhood by proposing that proper names are polyreferential, and also explain the behaviour of proper names in light of the wider phenomenon I call category change, and show how Polyreferentialism can account for all uses of proper names.
Throughout history, almost all mathematicians, physicists and philosophers have been of the opinion that space and time are infinitely divisible. That is, it is usually believed that space and time do not consist of atoms, but that any piece of space and time of non-zero size, however small, can itself be divided into still smaller parts. This assumption is included in geometry, as in Euclid, and also in the Euclidean and non-Euclidean geometries used in modern physics. Of the few who have denied that space and time are infinitely divisible, the most notable are the ancient atomists, and Berkeley and Hume. All of these assert not only that space and time might be atomic, but that they must be. Infinite divisibility is, they say, impossible on purely conceptual grounds.
Just before the Scientific Revolution, there was a "Mathematical Revolution", heavily based on geometrical and machine diagrams. The "faculty of imagination" (now called scientific visualization) was developed to allow 3D understanding of planetary motion, human anatomy and the workings of machines. 1543 saw the publication of the heavily geometrical work of Copernicus and Vesalius, as well as the first Italian translation of Euclid.
Every day, thousands of polls, surveys, and rating scales are employed to elicit the attitudes of humankind. Given the ubiquitous use of these instruments, it seems we ought to have firm answers to what is measured by them, but unfortunately we do not. To help remedy this situation, we present a novel approach to investigate the nature of attitudes. We created a self-transforming paper survey of moral opinions, covering both foundational principles and current dilemmas hotly debated in the media. This survey used a magic trick to expose participants to a reversal of their previously stated attitudes, allowing us to record whether they were prepared to endorse and argue for the opposite view of what they had stated only moments ago. The result showed that the majority of the reversals remained undetected, and a full 69% of the participants failed to detect at least one of two changes. In addition, participants often constructed coherent and unequivocal arguments supporting the opposite of their original position. These results suggest a dramatic potential for flexibility in our moral attitudes, and indicate a clear role for self-attribution and post-hoc rationalization in attitude formation and change.
A problem for Aristotelian realist accounts of universals (neither Platonist nor nominalist) is the status of those universals that happen not to be realised in the physical (or any other) world. They perhaps include uninstantiated shades of blue and huge infinite cardinals. Should they be altogether excluded (as in D.M. Armstrong's theory of universals) or accorded some sort of reality? Surely truths about ratios are true even of ratios that are too big to be instantiated - what is the truthmaker of such truths? It is argued that Aristotelianism can answer the question, but only a semi-Platonist form of it.
Leibniz's best-of-all-possible worlds solution to the problem of evil is defended. Enlightenment misrepresentations are removed. The apparent obviousness of the possibility of better worlds is undermined by the much better understanding achieved in modern mathematical sciences of how global structure constrains local possibilities. It is argued that alternative views, especially standard materialism, fail to make sense of the problem of evil, by implying that evil does not matter, absolutely speaking. Finally, it is shown how ordinary religious thinking incorporates the essentials of Leibniz's view.
Reference to the state is ubiquitous in debates about global justice. Some authors see the state as central to the justification of principles of justice, and thereby reject their extension to the international realm. Others emphasize its role in the implementation of those principles. This chapter scrutinizes the variety of ways in which the state figures in the global-justice debate. Our discussion suggests that, although the state should have a prominent role in theorizing about global justice, contrary to what is commonly thought, acknowledging this role does not lead to anti-cosmopolitan conclusions, but to the defense of an “intermediate” position about global justice. From a justificatory perspective, we argue, the state remains a key locus for the application of egalitarian principles of justice, but is not the only one. From the perspective of implementation, we suggest that state institutions are increasingly fragile in a heavily interdependent world, and need to be supplemented—though not supplanted—with supranational authorities.
The distinction between the discrete and the continuous lies at the heart of mathematics. Discrete mathematics (arithmetic, algebra, combinatorics, graph theory, cryptography, logic) has a set of concepts, techniques, and application areas largely distinct from continuous mathematics (traditional geometry, calculus, most of functional analysis, differential equations, topology). The interaction between the two – for example in computer models of continuous systems such as fluid flow – is a central issue in the applicable mathematics of the last hundred years. This article explains the distinction and why it has proved to be one of the great organizing themes of mathematics.
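As a small illustration of that discrete-continuous interaction (my example, not the article's): Euler's method replaces a continuous system with a discrete recurrence, and the discrete model converges on the continuous answer as the step size shrinks.

    import math

    def euler(f, y0, t_end, steps):
        """Approximate y(t_end) for the continuous system y' = f(t, y), y(0) = y0,
        using the discrete recurrence of Euler's method."""
        h = t_end / steps          # discrete step size
        t, y = 0.0, y0
        for _ in range(steps):
            y += h * f(t, y)       # discrete update standing in for continuous flow
            t += h
        return y

    # The continuous problem y' = -y with y(0) = 1 has exact solution e^{-t}.
    print(euler(lambda t, y: -y, 1.0, 1.0, 1000))  # ~0.3677
    print(math.exp(-1.0))                          # ~0.3679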
Modern philosophy of mathematics has been dominated by Platonism and nominalism, to the neglect of the Aristotelian realist option. Aristotelianism holds that mathematics studies certain real properties of the world – mathematics is neither about a disembodied world of “abstract objects”, as Platonism holds, nor is it merely a language of science, as nominalism holds. Aristotle’s theory that mathematics is the “science of quantity” is a good account of at least elementary mathematics: the ratio of two heights, for example, is a perceivable and measurable real relation between properties of physical things, a relation that can be shared by the ratio of two weights or two time intervals. Ratios are an example of continuous quantity; discrete quantities, such as whole numbers, are also realised as relations between a heap and a unit-making universal. For example, the relation between foliage and being-a-leaf is the number of leaves on a tree, a relation that may equal the relation between a heap of shoes and being-a-shoe. Modern higher mathematics, however, deals with some real properties that are not naturally seen as quantity, so that the “science of quantity” theory of mathematics needs supplementation. Symmetry, topology and similar structural properties are studied by mathematics, but are about pattern, structure or arrangement rather than quantity.
The formal sciences - mathematical as opposed to natural sciences, such as operations research, statistics, theoretical computer science, systems engineering - appear to have achieved mathematically provable knowledge directly about the real world. It is argued that this appearance is correct.
Dispositions, such as solubility, cannot be reduced to categorical properties, such as molecular structure, without some element of dispositionality remaining. Democritus did not reduce all properties to the geometry of atoms - he had to retain the rigidity of the atoms, that is, their disposition not to change shape when a force is applied. So dispositions-not-to, like rigidity, cannot be eliminated. Neither can dispositions-to, like solubility.
When and why do socially constructed norms—including the laws of the land, norms of etiquette, and informal customs—generate moral obligations? I argue that the answer lies in the duty to respect others, specifically to give them what I call “agency respect.” This is the kind of respect that people are owed in light of how they exercise their agency. My central thesis is this: To the extent that (i) existing norms are underpinned by people’s commitments as agents and (ii) they do not conflict with morality, they place moral demands on us on agency-respect grounds. This view of the moral force of socially constructed norms, I suggest, is superior to views that deny the moral force of such norms, and it elegantly explains certain instances of wrongdoing that would otherwise remain unaccounted for.
Is democracy a requirement of justice or an instrument for realizing it? The correct answer to this question, I argue, depends on the background circumstances against which democracy is defended. In the presence of thin reasonable disagreement about justice, we should value democracy only instrumentally (if at all); in the presence of thick reasonable disagreement about justice, we should value it also intrinsically, as a necessary demand of justice. Since the latter type of disagreement is pervasive in real-world politics, I conclude that theories of justice designed for our world should be centrally concerned with democracy.
The winning entry in David Stove's Competition to Find the Worst Argument in the World was: “We can know things only as they are related to us/insofar as they fall under our conceptual schemes, etc., so, we cannot know things as they are in themselves.” That argument underpins many recent relativisms, including postmodernism, post-Kuhnian sociological philosophy of science, cultural relativism, sociobiological versions of ethical relativism, and so on. All such arguments have the same form as ‘We have eyes, therefore we cannot see’, and are equally invalid.
In this article, I develop a new account of the liberal view that principles of justice are meant to justify state coercion, and consider its implications for the question of global socioeconomic justice. Although contemporary proponents of this view deny that principles of socioeconomic justice apply globally, on my newly developed account this conclusion is mistaken. I distinguish between two types of coercion, systemic and interactional, and argue that a plausible theory of global justice should contain principles justifying both. The justification of interactional coercion requires principles regulating interstate interference; that of systemic coercion requires principles of global socioeconomic justice. I argue that the proposed view not only helps us make progress in the debate on global justice, but also offers an independently compelling and systematic account of the function and conditions of applicability of justice.
Einstein, like most philosophers, thought that there cannot be mathematical truths which are both necessary and about reality. The article argues against this, starting with prima facie examples such as "It is impossible to tile my bathroom floor with regular pentagonal tiles." Replies are given to objections based on the supposedly purely logical or hypothetical nature of mathematics.
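The core of the standard argument behind the pentagon example (textbook geometry, not spelled out in the article): the interior angle of a regular pentagon is

    \[
      \frac{(5-2)\cdot 180^\circ}{5} = 108^\circ,
      \qquad\text{and}\qquad
      \frac{360^\circ}{108^\circ} = \frac{10}{3} \notin \mathbb{N},
    \]

so no whole number of regular pentagons can fit around a point, and hence no tiling of the plane (or of a bathroom floor) is possible. That the impossibility follows from angle arithmetic alone is what makes it a candidate necessary truth about reality.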
The global/local contrast is ubiquitous in mathematics. This paper explains it with straightforward examples. It is possible to build a circular staircase that is rising at any point (locally) but impossible to build one that rises at all points and comes back to where it started (a global restriction). Differential equations describe the local structure of a process; their solution describes the global structure that results. The interplay between global and local structure is one of the great themes of mathematics, but rarely discussed explicitly.
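A worked example of the differential-equations point (my illustration, not the paper's): the equation below imposes only a local condition, one constraining behaviour in the neighbourhood of each point, yet integrating it fixes the global shape of every solution.

    \[
      y'(x) = y(x) \ \text{for all } x
      \quad\Longrightarrow\quad
      y(x) = y(0)\, e^{x}.
    \]

The circular staircase shows the converse possibility: the local condition ‘always rising’ can hold at every point of an arc, yet no staircase satisfying it can close up globally, since the height on returning to the start would have to exceed itself.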
The imperviousness of mathematical truth to anti-objectivist attacks has always heartened those who defend objectivism in other areas, such as ethics. It is argued that the parallel between mathematics and ethics is close and does support objectivist theories of ethics. The parallel depends on the foundational role of equality in both disciplines. Despite obvious differences in their subject matter, mathematics and ethics share a status as pure forms of knowledge, distinct from empirical sciences. A pure understanding of principles is possible because of the simplicity of the notion of equality, despite the different origins of our understanding of equality of objects in general and of the equality of the ethical worth of persons.
Political candidates often believe they must focus their campaign efforts on a small number of swing voters open for ideological change. Based on the wisdom of opinion polls, this might seem like a good idea. But do most voters really hold their political attitudes so firmly that they are unreceptive to persuasion? We tested this premise during the most recent general election in Sweden, in which a left- and a right-wing coalition were locked in a close race. We asked our participants to state their voter intention, and presented them with a political survey of wedge issues between the two coalitions. Using a sleight of hand we then altered their replies to place them in the opposite political camp, and invited them to reason about their attitudes on the manipulated issues. Finally, we summarized their survey score, and asked for their voter intention again. The results showed that no more than 22% of the manipulated replies were detected, and that a full 92% of the participants accepted and endorsed our altered political survey score. Furthermore, the final voter intention question indicated that as many as 48% (69.2%) were willing to consider a left-right coalition shift. This can be contrasted with the established polls tracking the Swedish election, which registered at most 10% of voters as open to a swing. Our results indicate that political attitudes and partisan divisions can be far more flexible than what is assumed by the polls, and that people can reason about the factual issues of the campaign with considerable openness to change.
Which standards should we employ to evaluate the global order? Should they be standards of justice or standards of legitimacy? In this article, I argue that liberal political theorists need not face this dilemma, because liberal justice and legitimacy are not distinct values. Rather, they indicate what the same value, i.e. equal respect for persons, demands of institutions under different sets of circumstances. I suggest that under real-world circumstances, characterized by conflicts and disagreements, equal respect demands basic-rights protection and democratic participation, which I here call ‘political justice’. I conclude the article by considering three possible configurations of the global order (the ‘democratic world-state’, ‘independent democratic states’, and ‘mixed’ models) and argue that a commitment to political justice speaks in favour of the last.