What is nature, or what could it be? These and related questions are fundamental to thinking about and engaging with nature. This textbook and study guide offers a historical-systematic and at the same time practice-oriented introduction to the philosophy of nature and its most important concepts. It takes the plural character of the perception of nature into philosophical view and is also well suited for self-study.
Biological ontologies are used to organize, curate, and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies (OBO) Foundry was created to address this by facilitating the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology’s compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goal of making data FAIR. Competing Interest Statement: The authors have declared no competing interest.
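The kind of automated principle check described above can be sketched roughly as follows. This is an illustrative outline only, not the OBO Foundry's actual implementation: the field names, the required-field set, and the license list are all hypothetical stand-ins for whatever the real operational rules specify.

```python
# Illustrative sketch of a metadata compliance check, of the general kind
# the abstract describes. Field names and license lists are hypothetical.

REQUIRED_FIELDS = {"id", "title", "license", "contact", "tracker"}
APPROVED_LICENSES = {"CC0-1.0", "CC-BY-3.0", "CC-BY-4.0"}

def check_ontology(record: dict) -> list:
    """Return a list of principle violations for one ontology's metadata record."""
    violations = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        violations.append("missing metadata: " + ", ".join(sorted(missing)))
    if record.get("license") not in APPROVED_LICENSES:
        violations.append("license is not an approved open license")
    return violations

# A dashboard would run such checks over every registered ontology and
# display each one's pass/fail status per principle.
record = {"id": "xo", "title": "Example Ontology", "license": "proprietary"}
print(check_ontology(record))
```

Encoding each principle as a small predicate like this is what makes compliance objectively evaluable: the same rule runs unchanged over every ontology in the registry, so disagreements shift from interpretation of prose to the content of the rules themselves.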
First, we explain the conception of trustworthiness that we employ. We model trustworthiness as a relation among a trustor, a trustee, and a field of trust defined and delimited by its scope. In addition, both potential trustors and potential trustees are modeled as being more or less reliable in signaling either their willingness to trust or their willingness to prove trustworthy in various fields in relation to various other agents. Second, following Alfano (forthcoming) we argue that the social scale of a potential trust relationship partly determines both explanatory and normative aspects of the relation. Most of the philosophical literature focuses on dyadic trust between a pair of agents (Baier 1986, Jones 1996, Jones 2012, McGeer 2008, Pettit 1995), but there are also small communities of trust (Alfano forthcoming) and trust in large institutions (Potter 2002, Govier 1997, Townley & Garfield 2013, Hardin 2002). The mechanisms that induce people to extend their trust vary depending on the size of the community in question, and the ways in which trustworthiness can be established and trusting warranted vary with these mechanisms. Mechanisms that work in dyads and small communities are often unavailable in the context of trusting an institution or branch of government. Establishing trust on this larger social scale therefore requires new or modified mechanisms. In the third section of the paper, we recommend three policies that – we argue – tend to make institutions more trustworthy and to reliably signal that trustworthiness to the public. First, they should ensure that their decision-making processes are as open and transparent as possible. Second, they should make efforts to engage stakeholders in dialogue with decision-makers such as managers, members of the C-Suite, and highly-placed policy-makers. Third, they should foster diversity – gender, ethnicity, age, socioeconomic background, disability, etc. 
– in their workforce at all levels, but especially in management and positions of power. We conclude by discussing the warrant for distrust in institutions that do not adopt these policies, which we contend is especially pertinent for people who belong to groups that have historically faced (and in many cases still do face) oppression.
One might be inclined to assume, given the mouse adorning its cover, that the behavior of interest in Nicole Nelson's book Model Behavior (2018) is that of organisms like mice that are widely used as “stand-ins” for investigating the causes of human behavior. Instead, Nelson's ethnographic study focuses on the strategies adopted by a community of rodent behavioral researchers to identify and respond to epistemic challenges they face in using mice as models to understand the causes of disordered human behaviors associated with mental illness. Although Nelson never explicitly describes the knowledge production activities in which her behavioral geneticist research subjects engage as “exemplary”, the question of whether or not these activities constitute “model behavior(s)”—generalizable norms for engaging in scientific research—is one of the many thought-provoking questions raised by her book. As a philosopher of science interested in this question, I take it up here.
Games occupy a unique and valuable place in our lives. Game designers do not simply create worlds; they design temporary selves. Game designers set what our motivations are in the game and what our abilities will be. Thus: games are the art form of agency. By working in the artistic medium of agency, games can offer a distinctive aesthetic value. They support aesthetic experiences of deciding and doing. And the fact that we play games shows something remarkable about us. Our agency is more fluid than we might have thought. In playing a game, we take on temporary ends; we submerge ourselves temporarily in an alternate agency. Games turn out to be a vessel for communicating different modes of agency, for writing them down and storing them. Games create an archive of agencies. And playing games is how we familiarize ourselves with different modes of agency, which helps us develop our capacity to fluidly change our own style of agency.
We offer an account of the generic use of the term “porn”, as seen in recent usages such as “food porn” and “real estate porn”. We offer a definition adapted from earlier accounts of sexual pornography. On our account, a representation is used as generic porn when it is engaged with primarily for the sake of a gratifying reaction, freed from the usual costs and consequences of engaging with the represented content. We demonstrate the usefulness of the concept of generic porn by using it to isolate a new type of such porn: moral outrage porn. Moral outrage porn consists of representations of moral outrage, engaged with primarily for the sake of the resulting gratification, freed from the usual costs and consequences of engaging with morally outrageous content. Moral outrage porn is dangerous because it encourages the instrumentalization of one’s empirical and moral beliefs, manipulating their content for the sake of gratification. Finally, we suggest that when using porn is wrong, it is often wrong because it instrumentalizes what ought not to be instrumentalized.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with their own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law, responsibility is not a single, unitary and generic concept, but rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks—at least one for each responsibility concept—and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. In a way, then, on my account neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points that I would rather highlight: first, neither neuroscience nor criminal responsibility is as unified as that; and second, the criminal law asks many different responsibility questions and not just one generic question.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
Our life with art is suffused with trust. We don’t just trust one another’s aesthetic testimony; we trust one another’s aesthetic actions. Audiences trust artists to have made it worth their while; artists trust audiences to put in the effort. Without trust, audiences would have little reason to put in the effort to understand difficult and unfamiliar art. I offer a theory of aesthetic trust, which highlights the importance of trust in aesthetic sincerity. We trust in another’s aesthetic sincerity when we rely on them to fulfill their commitments to act for aesthetic reasons — rather than for, say, financial, social, or political reasons. We feel most thoroughly betrayed by an artist, not when they make bad art, but when they sell out. This teaches us something about the nature of trust in general. According to many standard theories, trust involves thinking the trusted to be cooperative or good-natured. But trust in aesthetic sincerity is different. We trust artists to be true to their own aesthetic sensibility, which might involve selfishly ignoring their audience’s needs. Why do we care so much about an artist’s sincerity, rather than merely trusting them to make good art? We emphasize sincerity when we wish to encourage originality, rather than to demand success along predictable lines. And we ask for sincerity when our goal is to discover a shared sensibility. In moral life, we often try to force convergence through coordinated effort. But in aesthetic life, we often hope for the lovely discovery that our sensibilities were similar all along. And for that we need to ask for sincerity, rather than overt coordination.
Fred Adams and collaborators advocate a view on which empty-name sentences semantically encode incomplete propositions, but which can be used to conversationally implicate descriptive propositions. This account has come under criticism recently from Marga Reimer and Anthony Everett. Reimer correctly observes that their account does not pass a natural test for conversational implicatures, namely, that an explanation of our intuitions in terms of implicature should be such that we, upon hearing it, recognize it to be roughly correct. Everett argues that the implicature view provides an explanation of only some of our intuitions, and is in fact incompatible with others, especially those concerning the modal profile of sentences containing empty names. I offer a pragmatist treatment of empty names based upon the recognition that the Gricean distinction between what is said and what is implicated is not exhaustive, and argue that such a solution avoids both Everett’s and Reimer’s criticisms.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. And in response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, I criticize both positions and then argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – indeed, cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
What is a game? What are we doing when we play a game? What is the value of playing games? Several different philosophical subdisciplines have attempted to answer these questions using very distinctive frameworks. Some have approached games as something like a text, deploying theoretical frameworks from the study of narrative, fiction, and rhetoric to interrogate games for their representational content. Others have approached games as artworks and asked questions about the authorship of games, about the ontology of the work and its performance. Yet others, from the philosophy of sport, have focused on normative issues of fairness, rule application, and competition. The primary purpose of this article is to provide an overview of several different philosophical approaches to games and, hopefully, demonstrate the relevance and value of the different approaches to each other. Early academic attempts to cope with games tried to treat games as a subtype of narrative and to interpret games exactly as one might interpret a static, linear narrative. A faction of game studies, self-described as “ludologists,” argued that games were a substantially novel form and could not be treated with traditional tools for narrative analysis. In traditional narrative, an audience is told and interprets the story, whereas in a game, the player enacts and creates the story. Since that early debate, theorists have attempted to offer more nuanced accounts of how games might achieve similar ends to more traditional texts. For example, games might be seen as a novel type of fiction, which uses interactive techniques to achieve immersion in a fictional world. Alternately, games might be seen as a new way to represent causal systems, and so a new way to criticize social and political entities. Work from contemporary analytic philosophy of art has, on the other hand, asked whether games could be artworks and, if so, of what kind.
Much of this debate has concerned the precise nature of the artwork, and the relationship between the artist and the audience. Some have claimed that the audience is a cocreator of the artwork, and so games are a uniquely unfinished and cooperative art form. Others have claimed that, instead, the audience does not help create the artwork; rather, interacting with the artwork is how an audience member appreciates the artist's finished production. Other streams of work have focused less on the game as a text or work, and more on game play as a kind of activity. One common view is that game play occurs in a “magic circle.” Inside the magic circle, players take on new roles, follow different rules, and actions have different meanings. Actions inside the magic circle do not have their usual consequences for the rest of life. Enemies of the magic circle view have claimed that the view ignores the deep integration of game life with ordinary life, pointing to gambling, gold farming, and the status effects of sports. Philosophers of sport, on the other hand, have approached games with an entirely different framework. This has led to investigations about the normative nature of games—what guides the application of rules and how those rules might be applied, interpreted, or even changed. Furthermore, they have investigated games as social practices and as forms of life.
In this paper I argue that Beall and Restall's claim that there is one true logic of metaphysical modality is incompatible with the formulation of logical pluralism that they give. I investigate various ways of reconciling their pluralism with this claim, but conclude that none of the options can be made to work.
Benefit/cost analysis is a technique for evaluating programs, procedures, and actions; it is not a moral theory. There is significant controversy over the moral justification of benefit/cost analysis. When a procedure for evaluating social policy is challenged on moral grounds, defenders frequently seek a justification by construing the procedure as the practical embodiment of a correct moral theory. This has the apparent advantage of avoiding difficult empirical questions concerning such matters as the consequences of using the procedure. So, for example, defenders of benefit/cost analysis are frequently tempted to argue that this procedure just is the calculation of moral rightness – perhaps that what it means for an action to be morally right is just for it to have the best benefit-to-cost ratio given the accounts of “benefit” and “cost” that BCA employs. They suggest, in defense of BCA, that they have found the moral calculus – Bentham's “unabashed arithmetic of morals.” To defend BCA in this manner is to commit oneself to one member of a family of moral theories and, also, to the view that if a procedure is the direct implementation of a correct moral theory, then it is a justified procedure. Neither of these commitments is desirable, and so the temptation to justify BCA by direct appeal to a B/C moral theory should be resisted; it constitutes an unwarranted shortcut to moral foundations – in this case, an unsound foundation. Critics of BCA are quick to point out the flaws of B/C moral theories, and to conclude that these undermine the justification of BCA. But the failure to justify BCA by a direct appeal to B/C moral theory does not show that the technique is unjustified. There is hope for BCA, even if it does not lie with B/C moral theory.
It has become standard for feminist philosophers of language to analyze Catherine MacKinnon's claim in terms of speech act theory. Backed by the Austinian observation that speech can do things and the legal claim that pornography is speech, the claim is that the speech acts performed by means of pornography silence women. This turns upon the notion of illocutionary silencing, or disablement. In this paper I observe that the focus by feminist philosophers of language on the failure to achieve uptake for illocutionary acts serves to group together different kinds of illocutionary silencing which function in very different ways.
This paper is the written version of my contribution to the International Conference «30 years Logica modernorum», held in Amsterdam in November 1997 in honor of the late Prof. Lambertus M. de Rijk. At that time, research on Oresme’s modi rerum theory was in its first stage; now we can read the critical edition of Oresme’s Physics commentary, where modi are introduced and widely used. In this paper I shall consider Oresme’s polemical use of modi rerum, trying to set it in the larger context of both his ontology and his epistemology. Oresme’s challenge, by means of modi rerum, to both realist and terminist ontologies probably conceals an attack on William of Ockham; Oresme refers explicitly to Ockham concerning exclusive propositions, but I think that on many other occasions the polemical target of Oresme’s criticism can reasonably be identified as William of Ockham or some unnamed followers of the Venerabilis Inceptor. Some remarks are also devoted to the possible sources of Oresme’s modi rerum.
Modal collapse arguments are all the rage in certain philosophical circles of late. The arguments purport to show that classical theism entails the absurdly fatalistic conclusion that everything exists necessarily. My first aim in this paper is bold: to put an end to action-based modal collapse arguments against classical theism. To accomplish this, I first articulate the ‘Simple Modal Collapse Argument’ and then characterize and defend Tomaszewski’s criticism thereof. Second, I critically examine Mullins’ new modal collapse argument formulated in response to the aforementioned criticism. I argue that Mullins’ new argument does not succeed. Third, I critically examine a powers-based modal collapse argument against classical theism that has received much less attention in the literature. Fourth, I show why God’s being purely actual, as well as God’s being identical to each of God’s acts, simply cannot entail modal collapse given indeterministic causation. This, I take it, signals the death of modal collapse arguments. But not all hope is lost for proponents of modal collapse arguments—for the death is a fruitful one insofar as it paves the way for new inquiry into at least two new potential problems for classical theism. Showing this is my paper’s second aim.
This paper examines how people think about aiding others in a way that can inform both theory and practice. It uses data gathered from Kiva, an online, non-profit organization that allows individuals to aid other individuals around the world, to isolate intuitions that people find broadly compelling. The central result of the paper is that people seem to give more priority to aiding those in greater need, at least below some threshold. That is, the data strongly suggest incorporating both a threshold and a prioritarian principle into the analysis of what principles for aid distribution people accept. This conclusion should be of broad interest to aid practitioners and policy makers. It may also provide important information for political philosophers interested in building, justifying, and criticizing theories about meeting needs using empirical evidence.
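The combination of a threshold and a prioritarian principle can be made concrete with a toy valuation function. This is a hypothetical illustration, not a model from the paper: the threshold value and the linear weighting are arbitrary assumptions chosen only to show the structure of the view.

```python
# Toy threshold-prioritarian valuation (illustrative assumptions only):
# benefits to recipients below a need threshold are weighted more heavily,
# and more so the worse off the recipient is; above the threshold, a
# benefit counts at face value.

THRESHOLD = 2.0  # assumed welfare threshold, in arbitrary units

def aid_value(welfare: float, benefit: float) -> float:
    """Moral value of giving `benefit` to someone at level `welfare`."""
    if welfare >= THRESHOLD:
        return benefit                       # no extra priority above the threshold
    priority = 1.0 + (THRESHOLD - welfare)   # worse off => greater weight
    return priority * benefit

# The same benefit counts for more the needier the recipient is,
# but only below the threshold.
print(aid_value(0.5, 1.0), aid_value(1.5, 1.0), aid_value(3.0, 1.0))
```

Any function with this shape captures the paper's central result in outline: priority increases with need below some cutoff, while distributions among the sufficiently well-off are treated neutrally.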
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that each of these allegations can be satisfactorily met — the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible, and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized — but that doing so leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles; rather, it should be completely abandoned, since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law. They entail that no-fault systems are a form of social welfare rather than accident law systems, and that under such systems serious deprivation — and, to a lesser extent, causal responsibility — should be conditions of eligibility to claim benefits.
A philosophical exchange broadly inspired by the characters of Berkeley’s Three Dialogues. Hylas is the realist philosopher: the view he stands up for reflects a robust metaphysic that is reassuringly close to common sense, grounded on the twofold persuasion that the world comes structured into entities of various kinds and at various levels and that it is the task of philosophy, if not of science generally, to “bring to light” that structure. Philonous, by contrast, is the anti-realist philosopher (though not necessarily an idealist): his metaphysic is stark, arid, dishearteningly bone-dry, and stems from the conviction that a great deal of the structure that we are accustomed to attributing to the world out there lies, on closer inspection, in our head, in our “organizing practices”, in the complex system of concepts and categories that underlie our representation of experience and our need to represent it that way.
The question I wish to explore is this: Does idealism conflict with common sense? Unfortunately, the answer I give may seem like a rather banal one: it depends. What do we mean by ‘idealism’ and ‘common sense’? I distinguish three main varieties of idealism: absolute idealism, Berkeleyan idealism, and dualistic idealism. After clarifying what is meant by common sense, I consider whether our three idealisms run afoul of it. The first does, but the latter two don’t. I conclude that while Moore’s famous common sense critique is sound against external world skepticism, it is unavailing against Berkeleyan idealism and dualistic idealism.
What constitutes illocutionary silencing? This is the key question underlying much recent work on Catherine MacKinnon's claim that pornography silences women. In what follows I argue that the focus of the literature on the notion of audience `uptake' serves to mischaracterize the phenomena. I defend a broader interpretation of what it means for an illocutionary act to succeed, and show how this broader interpretation provides a better characterization of the kinds of silencing experienced by women.
We argue that there is a conflict among classical theism's commitments to divine simplicity, divine creative freedom, and omniscience. We start by defining key terms for the debate related to classical theism. Then we articulate a new argument, the Aloneness Argument, aiming to establish a conflict among these attributes. In broad outline, the argument proceeds as follows. Under classical theism, it's possible that God exists without anything apart from Him. Any knowledge God has in such a world would be wholly intrinsic. But there are contingent truths in every world, including the world in which God exists alone. So, it's possible that God contingently has wholly intrinsic knowledge. But whatever is contingent and wholly intrinsic is an accident. So, God possibly has an accident. This is incompatible with classical theism. Finally, we consider and rebut several objections.
Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
Because spaying/neutering animals involves the harming of some animals in order to prevent harm to others, some ethicists, like David Boonin, argue that the philosophy of animal rights is committed to the view that spaying/neutering animals violates the respect principle and that Trap Neuter Release (TNR) programs are thus impermissible. In response, I demonstrate that the philosophy of animal rights holds that, under certain conditions, it is justified, and sometimes even obligatory, to cause harm to some animals in order to prevent greater harm to others. As I will argue, causing lesser harm to some animals in order to prevent greater harm to others, as TNR programs do, is compatible with the recognition of the inherent value of the ones who are harmed. Indeed, we can, and do, spay/neuter cats while acknowledging that they have value in their own right.
In "Torts, Egalitarianism and Distributive Justice", Tsachi Keren-Paz presents an impressively detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, supporters of radical law reform proposals may interpret the complexity of the solution on offer as conclusive proof that tort law can only take adequate account of egalitarian aims at an unacceptably high cost.
Egalitarians must address two questions: (i) what should there be an equality of? — this concerns the currency of the ‘equalisandum’ — and (ii) how should this thing be allocated to achieve an equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that we alone will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck, and on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal. 
Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and it was precisely for this purpose that in 1981 Ronald Dworkin developed his ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. Recently, however, Daniel Markovits has cast doubt on Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched it up and used this patched-up version of Dworkin’s HIMAD, together with his own version of the TS argument, to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably, though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed. 
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will be his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
It is common for conservationists to refer to non-native species that have undesirable impacts on humans as “invasive”. We argue that the classification of any species as “invasive” constitutes wrongful discrimination. Moreover, we argue that its being wrong to categorize a species as invasive is perfectly compatible with it being morally permissible to kill animals—assuming that conservationists “kill equally”. It simply is not compatible with the double standard that conservationists tend to employ in their decisions about who lives and who dies.
It could be argued that tort law is failing, and arguably an example of this failure is the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants since they know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation.
However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure, then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that at least in toxic injury type cases the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly acquired wrongful risks as de facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’. The incidence of accidents should decrease as a result of improved deterrence, reducing the ‘suing fest’ and so resolving the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope.
However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the very least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles, then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions as it would only require incremental but not comprehensive legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions since both would require comprehensive legal reform.
Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper will critically examine the arguments for the above position, and it will argue that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support the conclusion that injurers should compensate their victims for their losses, then (perhaps surprisingly) nobody should be allowed to take out TPPI because doing so would frustrate justice.
The machine-organism analogy has played a pivotal role in the history of Western philosophy and science. Notwithstanding its apparent simplicity, it hides complex epistemological issues about the status of both organism and machine and the nature of their interaction. What is the real object of this analogy: organisms as a whole, their parts or, rather, bodily functions? How can the machine serve as a model for interpreting biological phenomena, cognitive processes, or more broadly the social and cultural transformations of the relations between individuals, and between individuals and the environments in which they live? Wired Bodies. New Perspectives on the Machine-Organism Analogy provides the reader with some of the latest perspectives on this vast debate, addressing three major topics: 1) the development of a ‘mechanistic’ framework in medicine and biology; 2) the methodological issues underlying the use of ‘simulation’ in cognitive science; 3) the interaction between humans and machines according to 20th-century epistemology.
Hermann Weyl was one of the most important figures involved in the early elaboration of the general theory of relativity and its fundamentally geometrical spacetime picture of the world. Weyl’s development of “pure infinitesimal geometry” out of relativity theory was the basis of his remarkable attempt at unifying gravitation and electromagnetism. Many interpreters have focused primarily on Weyl’s philosophical influences, especially the influence of Husserl’s transcendental phenomenology, as the motivation for these efforts. In this article, I argue both that these efforts are most naturally understood as an outgrowth of the distinctive mathematical-physical tradition in Göttingen and also that phenomenology has little to no constructive role to play in them.
In order to better understand the topic of hope, this paper argues that two separate theories are needed: one for hoping, and the other for hopefulness. This bifurcated approach is warranted by the observation that the word ‘hope’ is polysemous: it is sometimes used to refer to hoping and sometimes to feeling or being hopeful. Moreover, these two senses of 'hope' are distinct, as a person can hope for some outcome yet not simultaneously feel hopeful about it. I argue that this distinction between hoping and hopefulness is not always observed or fully appreciated in the literature and has consequently caused much confusion. This paper then sketches what theorizing about hope looks like in light of this clarification and discusses some of its implications.
Higher-order thought theories maintain that consciousness involves the having of higher-order thoughts about mental states. In response to these theories of consciousness, attempts are often made to show that nonhuman animals possess such consciousness, overlooking an alarming consequence: attributing higher-order thought to nonhuman animals might entail that they should be held morally accountable for their actions. I argue that moral responsibility requires more than higher-order thought: moral agency requires a specific higher-order thought which concerns a belief about the rightness or wrongness of affecting another’s mental states. This “moral thought” about rightness or wrongness has not yet been demonstrated in even the most intelligent nonhuman animals, so we should suspend our judgments about the “rightness” or “wrongness” of their actions while further questioning the recent insistence on developing an animal morality.
In recent discussions, it has been argued that a theory of animal rights is at odds with a liberal abortion policy. In response, Francione (1995) argues that the principles used in the animal rights discourse do not have implications for the abortion debate. I challenge Francione’s conclusion by illustrating that his own framework of animal rights, supplemented by a relational account of moral obligation, can address the moral issue of abortion. I first demonstrate that Francione’s animal rights position, which grounds moral consideration in sentience, is committed to the claim that a sentient fetus has a right to life. I then illustrate that a fully developed account of animal rights that recognizes the special obligations humans have to assist animals when we cause them to be dependent and vulnerable through our voluntary actions or omissions is committed to the following: a woman also has a special obligation to assist a sentient fetus when she causes it to be dependent and vulnerable through her voluntary actions or omissions. From these considerations, it will become evident that a fully developed and consistent animal rights ethic does in fact have implications for the abortion discussion.
In her BBC Reith Lectures on Trust, Onora O’Neill offers a short, but biting, criticism of transparency. People think that trust and transparency go together but in reality, says O'Neill, they are deeply opposed. Transparency forces people to conceal their actual reasons for action and invent different ones for public consumption. Transparency forces deception. I work out the details of her argument and worsen her conclusion. I focus on public transparency – that is, transparency to the public over expert domains. I offer two versions of the criticism. First, the epistemic intrusion argument: The drive to transparency forces experts to explain their reasoning to non-experts. But expert reasons are, by their nature, often inaccessible to non-experts. So the demand for transparency can pressure experts to act only in those ways for which they can offer public justification. Second, the intimate reasons argument: In many cases of practical deliberation, the relevant reasons are intimate to a community and not easily explicable to those who lack a particular shared background. The demand for transparency, then, pressures community members to abandon the special understanding and sensitivity that arises from their particular experiences. Transparency, it turns out, is a form of surveillance. By forcing reasoning into the explicit and public sphere, transparency roots out corruption — but it also inhibits the full application of expert skill, sensitivity, and subtle shared understandings. The difficulty here arises from the basic fact that human knowledge vastly outstrips any individual’s capacities. We all depend on experts, which makes us vulnerable to their biases and corruption. But if we try to wholly secure our trust — if we leash groups of experts to pursuing only the goals and taking only the actions that can be justified to the non-expert public — then we will undermine their expertise.
We need both trust and transparency, but they are in essential tension. This is a deep practical dilemma; it admits of no neat resolution, but only painful compromise.
“The universe is expanding, not contracting.” Many statements of this form appear unambiguously true; after all, the discovery of the universe’s expansion is one of the great triumphs of empirical science. However, the statement is time-directed: the universe expands towards what we call the future; it contracts towards the past. If we deny that time has a direction, should we also deny that the universe is really expanding? This article draws together and discusses what I call ‘C-theories’ of time — in short, philosophical positions that hold time lacks a direction — from different areas of the literature. I set out the various motivations, aims, and problems for C-theories, and outline different versions of antirealism about the direction of time.
Interactive theorem provers might seem particularly impractical in the history of philosophy. Journal articles in this discipline are generally not formalized. Interactive theorem provers involve a learning curve for which the payoffs might seem minimal. In this article I argue that interactive theorem provers have already demonstrated their potential as a useful tool for historians of philosophy; I do this by highlighting examples of work where this has already been done. Further, I argue that interactive theorem provers can continue to be useful tools for historians of philosophy in the future; this claim is defended through a more conceptual analysis of what historians of philosophy do that identifies argument reconstruction as a core activity of such practitioners. It is then shown that interactive theorem provers can assist in this core practice by a description of what interactive theorem provers are and can do. If this is right, then computer verification for historians of philosophy is in the offing.
Twitter makes conversation into something like a game. It scores our communication, giving us vivid and quantified feedback, via Likes, Retweets, and Follower counts. But this gamification doesn’t just increase our motivation to communicate; it changes the very nature of the activity. Games are more satisfying than ordinary life precisely because game-goals are simpler, cleaner, and easier to apply. Twitter is thrilling precisely because its goals have been artificially clarified and narrowed. When we buy into Twitter’s gamification, our values shift from the complex and pluralistic values of communication to the narrower quest for popularity and virality. Twitter’s gamification bears some resemblance to the phenomena of echo chambers and moral outrage porn. In all these phenomena, we are instrumentalizing our ends for hedonistic reasons. We have shifted our aims in an activity, not because the new aims are more valuable, but in exchange for extra pleasure.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
Because factory-farmed meat production inflicts gratuitous suffering upon animals and wreaks havoc on the environment, there are morally compelling reasons to become vegetarian. Yet industrial plant agriculture causes the death of many field animals, and this leads some to question whether consumers ought to get some of their protein from certain kinds of non-factory-farmed meat. Donald Bruckner, for instance, boldly argues that the harm principle implies an obligation to collect and consume roadkill and that strict vegetarianism is thus immoral. But this argument works only if the following claims are true: all humans have access to roadkill, roadkill would go to waste if those who happen upon it don’t themselves consume it, it’s impossible to harvest vegetables without killing animals, the animals who are killed in plant production are all-things-considered harmed by crop farming, and the best arguments for vegetarianism all endorse the harm principle. As I will argue in this paper, each claim is deeply problematic. Consequently, in most cases, humans ought to strictly eat plants and save the roadkill for cats.
This paper articulates and defends three interconnected claims: first, that the debate on the role of values for science misses a crucial dimension, the institutional one; second, that institutions occupy the intermediate level between scientific activities and values and that they are to be systematically integrated into the analysis; third, that the appraisal of the institutions of science with respect to values should be undertaken within the premises of a comparative approach rather than an ideal approach. Hence, I defend the view that the issue be framed in reference to the following question: "What kind of institutional rules should be in place in order for the scientific process to unfold in such a way that the values that we deem more important come to the fore?" Addressing this concern is equivalent to conducting a debate on institutions and their role for science.
What methodological approaches do research programs use to investigate the world? Elisabeth Lloyd’s Logic of Research Questions (LRQ) characterizes such approaches in terms of the questions that the researchers ask and the causal factors they consider. She uses the Logic of Research Questions Framework to criticize adaptationist programs in evolutionary biology for dogmatically assuming selection explanations of the traits of organisms. I argue that Lloyd’s general criticism of methodological adaptationism is an artefact of the impoverished LRQ. My Ordered Factors Proposal extends the LRQ to characterize approaches with sequences of questions and factors. I highlight the importance that ordering one’s investigation plays in approaches at the level of adaptationism by analyzing two research programs in community ecology: competitionists and neutralists. Competitionists and neutralists take opposed starting points and use explanatory and developmental heuristics to consider more factors in due time. On the Ordered Factors Proposal, these approaches are characterized not only by the ecological factors they are open to considering but also by the order in which they will consider them. My disagreement with Lloyd over how to characterize methodological approaches reflects different views about methodological monism and pluralism.
Games may seem like a waste of time, where we struggle under artificial rules for arbitrary goals. The author suggests that the rules and goals of games are not arbitrary at all. They are a way of specifying particular modes of agency. This is what makes games a distinctive art form. Game designers designate goals and abilities for the player; they shape the agential skeleton which the player will inhabit during the game. Game designers work in the medium of agency. Game-playing, then, illuminates a distinctive human capacity. We can take on ends temporarily for the sake of the experience of pursuing them. Game play shows that our agency is significantly more modular and more fluid than we might have thought. It also demonstrates our capacity to take on an inverted motivational structure. Sometimes we can take on an end for the sake of the activity of pursuing that end.
In The Case for Animal Rights, Tom Regan argues that although all subjects-of-a-life have equal inherent value, there are often differences in the value of lives. According to Regan, lives that have the highest value are lives which have more possible sources of satisfaction. Regan claims that the highest source of satisfaction, which is available to only rational beings, is the satisfaction associated with thinking impartially about moral choices. Since rational beings can bring impartial reasons to bear on decision making, Regan maintains that they have an additional possible source of satisfaction that nonrational beings do not have and, consequently, the lives of rational beings turn out to have greater value.
Canadian Environmental Philosophy is the first collection of essays to take up theoretical and practical issues in environmental philosophy today, from a Canadian perspective. The essays cover various subjects, including ecological nationalism, the legacy of Grey Owl, the meaning of “outside” to Canadians, the paradigm shift from mechanism to ecology in our understanding of nature, the meaning and significance of the Anthropocene, the challenges of biodiversity protection in Canada, the conservation status of crossbred species in the age of climate change, and the moral status of ecosystems. This wide range of topics is as diverse and challenging as the Canadian landscape itself. Given the extent of humanity's current impact on the biosphere – especially evident with anthropogenic climate change and the ongoing mass extinction – it has never been more urgent for us to confront these environmental challenges as Canadian citizens and citizens of the world. Canadian Environmental Philosophy galvanizes this conversation from the perspective of this place.
There seems to be a deep tension between two aspects of aesthetic appreciation. On the one hand, we care about getting things right. On the other hand, we demand autonomy. We want appreciators to arrive at their aesthetic judgments through their own cognitive efforts, rather than deferring to experts. These two demands seem to be in tension; after all, if we want to get the right judgments, we should defer to the judgments of experts. The best explanation, I suggest, is that aesthetic appreciation is something like a game. When we play a game, we try to win. But often, winning isn’t the point; playing is. Aesthetic appreciation involves the same flipped motivational structure: we aim at the goal of correctness, but having correct judgments isn’t the point. The point is the engaged process of interpreting, investigating, and exploring the aesthetic object. Deferring to aesthetic testimony, then, makes the same mistake as looking up the answer to a puzzle, rather than solving it for oneself. The shortcut defeats the whole point. This suggests a new account of aesthetic value: the engagement account. The primary value of the activity of aesthetic appreciation lies in the process of trying to generate correct judgments, and not in having correct judgments.
These essays draw on work in the history and philosophy of science, the philosophy of mind and language, the development of concepts in children, conceptual…
I propose to study one problem for epistemic dependence on experts: how to locate experts on what I will call cognitive islands. Cognitive islands are those domains for knowledge in which expertise is required to evaluate other experts. They exist under two conditions: first, that there is no test for expertise available to the inexpert; and second, that the domain is not linked to another domain with such a test. Cognitive islands are the places where we have the fewest resources for evaluating experts, which makes our expert dependences particularly risky. Some have argued that cognitive islands lead to the complete unusability of expert testimony: that anybody who needs expert advice on a cognitive island will be entirely unable to find it. I argue against this radical form of pessimism, but propose a more moderate alternative. I demonstrate that we have some resources for finding experts on cognitive islands, but that cognitive islands leave us vulnerable to an epistemic trap which I will call runaway echo chambers. In a runaway echo chamber, our inexpertise may lead us to pick out bad experts, which will simply reinforce our mistaken beliefs and sensibilities.