What is nature, or what could it be? These and related questions are fundamental to thinking about and engaging with nature. This textbook and study guide offers a historical-systematic and at the same time practice-oriented introduction to the philosophy of nature and its most important concepts. It brings the plural character of the perception of nature into philosophical view and is also well suited to self-study.
First, we explain the conception of trustworthiness that we employ. We model trustworthiness as a relation among a trustor, a trustee, and a field of trust defined and delimited by its scope. In addition, both potential trustors and potential trustees are modeled as being more or less reliable in signaling either their willingness to trust or their willingness to prove trustworthy in various fields in relation to various other agents. Second, following Alfano (forthcoming), we argue that the social scale of a potential trust relationship partly determines both explanatory and normative aspects of the relation. Most of the philosophical literature focuses on dyadic trust between a pair of agents (Baier 1986, Jones 1996, Jones 2012, McGeer 2008, Pettit 1995), but there are also small communities of trust (Alfano forthcoming) and trust in large institutions (Potter 2002, Govier 1997, Townley & Garfield 2013, Hardin 2002). The mechanisms that induce people to extend their trust vary depending on the size of the community in question, and the ways in which trustworthiness can be established and trusting warranted vary with these mechanisms. Mechanisms that work in dyads and small communities are often unavailable in the context of trusting an institution or branch of government. Establishing trust on this larger social scale therefore requires new or modified mechanisms. In the third section of the paper, we recommend three policies that – we argue – tend to make institutions more trustworthy and to reliably signal that trustworthiness to the public. First, they should ensure that their decision-making processes are as open and transparent as possible. Second, they should make efforts to engage stakeholders in dialogue with decision-makers such as managers, members of the C-suite, and highly placed policy-makers. Third, they should foster diversity – gender, ethnicity, age, socioeconomic background, disability, etc. – in their workforce at all levels, but especially in management and positions of power. We conclude by discussing the warrant for distrust in institutions that do not adopt these policies, which we contend is especially pertinent for people who belong to groups that have historically faced (and in many cases still do face) oppression.
One might be inclined to assume, given the mouse adorning its cover, that the behavior of interest in Nicole Nelson's book Model Behavior (2018) is that of organisms like mice that are widely used as “stand-ins” for investigating the causes of human behavior. Instead, Nelson's ethnographic study focuses on the strategies adopted by a community of rodent behavioral researchers to identify and respond to epistemic challenges they face in using mice as models to understand the causes of disordered human behaviors associated with mental illness. Although Nelson never explicitly describes the knowledge production activities in which her behavioral geneticist research subjects engage as “exemplary”, the question of whether or not these activities constitute “model behavior(s)” – generalizable norms for engaging in scientific research – is one of the many thought-provoking questions raised by her book. As a philosopher of science interested in this question, I take it up here.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with their own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law responsibility is not a single, unitary and generic concept, but rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks – at least one for each responsibility concept – and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. Thus, on my account neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points that I would rather highlight: one, neither neuroscience nor criminal responsibility is as unified as that; and two, the criminal law asks many different responsibility questions, not just one generic question.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. In response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – indeed, cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
Fred Adams and collaborators advocate a view on which empty-name sentences semantically encode incomplete propositions, but can be used to conversationally implicate descriptive propositions. This account has come under criticism recently from Marga Reimer and Anthony Everett. Reimer correctly observes that their account does not pass a natural test for conversational implicatures, namely, that an explanation of our intuitions in terms of implicature should be such that, upon hearing it, we recognize it to be roughly correct. Everett argues that the implicature view provides an explanation of only some of our intuitions, and is in fact incompatible with others, especially those concerning the modal profile of sentences containing empty names. I offer a pragmatist treatment of empty names based upon the recognition that the Gricean distinction between what is said and what is implicated is not exhaustive, and argue that such a solution avoids both Everett’s and Reimer’s criticisms.
In this paper I argue that Beall and Restall's claim that there is one true logic of metaphysical modality is incompatible with the formulation of logical pluralism that they give. I investigate various ways of reconciling their pluralism with this claim, but conclude that none of the options can be made to work.
It has become standard for feminist philosophers of language to analyze Catherine MacKinnon's claim that pornography silences women in terms of speech act theory. Backed by the Austinian observation that speech can do things and the legal claim that pornography is speech, the claim is that the speech acts performed by means of pornography silence women. This turns upon the notion of illocutionary silencing, or disablement. In this paper I observe that the focus by feminist philosophers of language on the failure to achieve uptake for illocutionary acts serves to group together different kinds of illocutionary silencing which function in very different ways.
This thesis considers two allegations which conservatives often level at no-fault systems – namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that although each of these allegations can be satisfactorily met – the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible, and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized – doing so leads in a direction that is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not merely be reformed in line with no-fault’s principles; rather, it should be completely abandoned, since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law. They entail that no-fault systems are a form of social welfare and not accident law systems, and that under these systems serious deprivation – and, to a lesser extent, causal responsibility – should be a condition of eligibility to claim benefits.
This paper examines how people think about aiding others in a way that can inform both theory and practice. It uses data gathered from Kiva, an online, non-profit organization that allows individuals to aid other individuals around the world, to isolate intuitions that people find broadly compelling. The central result of the paper is that people seem to give more priority to aiding those in greater need, at least below some threshold. That is, the data strongly suggest incorporating both a threshold and a prioritarian principle into the analysis of what principles for aid distribution people accept. This conclusion should be of broad interest to aid practitioners and policy makers. It may also provide important information for political philosophers interested in building, justifying, and criticizing theories about meeting needs using empirical evidence.
Egalitarians must address two questions: i. What should there be an equality of, which concerns the currency of the ‘equalisandum’; and ii. How should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that only we will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck, and on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal.
Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and it was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. However, recently Daniel Markovits has cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched it up and used this patched-up version of Dworkin’s HIMAD, together with his own version of the TS argument, to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably, though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
In "Torts, Egalitarianism and Distributive Justice", Tsachi Keren-Paz presents an impressively detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, supporters of radical law reform proposals may interpret the complexity of the solution offered as conclusive proof that tort law can take adequate account of egalitarian aims only at an unacceptably high cost.
It could be argued that tort law is failing, and arguably an example of this failure is the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants since they know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation.
However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would ever occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure, then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that at least in toxic injury type cases the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly-acquired wrongful risks as de-facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’. The incidence of accidents should decrease as a result of improved deterrence, reducing the ‘suing fest’ and so resolving the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope.
However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles, then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions as it would require only incremental, not comprehensive, legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions, since both would require comprehensive legal reform.
Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI, because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper will critically examine the arguments for the above position, and it will argue that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support the conclusion that injurers should compensate their victims for their losses, then (perhaps surprisingly) nobody should be allowed to take out TPPI, because doing so would frustrate justice.
What constitutes illocutionary silencing? This is the key question underlying much recent work on Catherine MacKinnon's claim that pornography silences women. In what follows I argue that the focus of the literature on the notion of audience `uptake' serves to mischaracterize the phenomena. I defend a broader interpretation of what it means for an illocutionary act to succeed, and show how this broader interpretation provides a better characterization of the kinds of silencing experienced by women.
Benefit/cost analysis is a technique for evaluating programs, procedures, and actions; it is not a moral theory. There is significant controversy over the moral justification of benefit/cost analysis. When a procedure for evaluating social policy is challenged on moral grounds, defenders frequently seek a justification by construing the procedure as the practical embodiment of a correct moral theory. This has the apparent advantage of avoiding difficult empirical questions concerning such matters as the consequences of using the procedure. So, for example, defenders of benefit/cost analysis are frequently tempted to argue that this procedure just is the calculation of moral rightness – perhaps that what it means for an action to be morally right is just for it to have the best benefit-to-cost ratio given the accounts of “benefit” and “cost” that BCA employs. They suggest, in defense of BCA, that they have found the moral calculus – Bentham's “unabashed arithmetic of morals.” To defend BCA in this manner is to commit oneself to one member of a family of moral theories and, also, to the view that if a procedure is the direct implementation of a correct moral theory, then it is a justified procedure. Neither of these commitments is desirable, and so the temptation to justify BCA by direct appeal to a B/C moral theory should be resisted; it constitutes an unwarranted short cut to moral foundations – in this case, an unsound foundation. Critics of BCA are quick to point out the flaws of B/C moral theories, and to conclude that these undermine the justification of BCA. But the failure to justify BCA by a direct appeal to B/C moral theory does not show that the technique is unjustified. There is hope for BCA, even if it does not lie with B/C moral theory.
Background. Drawing on social identity theory and positive psychology, this study investigated women’s responses to the social environment of physics classrooms. It also investigated STEM identity and gender disparities in academic achievement and flourishing in an undergraduate introductory physics course for STEM majors. 160 undergraduate students enrolled in an introductory physics course were administered a baseline survey with self-report measures on course belonging, physics identification, flourishing, and demographics at the beginning of the course and a post-survey at the end of the academic term. Students also completed force concept inventories, and physics course grades were obtained from the registrar. Results. Women reported less course belonging and less physics identification than men. Physics identification and grades evidenced a longitudinal bidirectional relationship for all students (regardless of gender) such that, when controlling for baseline physics knowledge: (a) students with higher physics identification were more likely to earn higher grades; and (b) students with higher grades evidenced more physics identification at the end of the term. Men scored higher on the force concept inventory than women, although no gender disparities emerged for course grades. For women, higher (versus lower) physics identification was associated with more positive changes in flourishing over the course of the term. Men showed the opposite pattern: negative change in flourishing was more strongly associated with high identification than with low identification. Conclusions. Overall, this study underlines gender disparities in physics both in terms of belonging and physics knowledge. It suggests that strong STEM identity may be associated with academic performance and flourishing in undergraduate physics courses at the end of the term, particularly for women. A number of avenues for future research are discussed.
A philosophical exchange broadly inspired by the characters of Berkeley’s Three Dialogues. Hylas is the realist philosopher: the view he stands up for reflects a robust metaphysic that is reassuringly close to common sense, grounded on the twofold persuasion that the world comes structured into entities of various kinds and at various levels and that it is the task of philosophy, if not of science generally, to “bring to light” that structure. Philonous, by contrast, is the anti-realist philosopher (though not necessarily an idealist): his metaphysic is stark, arid, dishearteningly bone-dry, and stems from the conviction that a great deal of the structure that we are accustomed to attributing to the world out there lies, on closer inspection, in our head, in our “organizing practices”, in the complex system of concepts and categories that underlie our representation of experience and our need to represent it that way.
Recent efforts of the United States and England to withdraw from international institutions, along with recent challenges to human rights courts from Poland and Hungary, have been described as part of a growing global populist backlash against the liberal international order. Several scholars have even identified the recent threat of mass withdrawal of African states from the International Criminal Court (ICC) as part of this global populist backlash. Are the African challenges to the ICC part of a global populist movement developing in Africa? More fundamentally, how are the African challenges to the ICC examples of populism, if at all? In this paper, I show that, while there is considerable overlap between the strategies used by particular African leaders to challenge the ICC and those typically considered populist, as well as a discernible thin populist ideology to sustain them, there is insufficient evidence of a larger anti-ICC populist movement in Africa. Although Africa is not as united against the ICC as the populist narrative suggests, the recent challenges from African states nonetheless pose a significant threat to the Court, as the institution is still in the early stages of building its legitimacy.
This paper examines whether American parents legally violate their children’s privacy rights when they share embarrassing images of their children on social media without their children’s consent. My inquiry is motivated by recent reports that French authorities have warned French parents that they could face fines and imprisonment for such conduct, if their children sue them once they turn 18. Where French privacy law is grounded in respect for dignity, thereby explaining the French concerns about parental “over-sharing,” I show that there are three major legal roadblocks for such a case to succeed in US law. First, US privacy tort law largely protects a person’s image only where the person has a commercial interest in his or her image. Second, privacy tort laws are subject to constitutional constraints respecting the freedom of speech and press. Third, American courts are reluctant to erode parental authority, except in cases where extraordinary threats to children’s welfare exist. I argue that while existing privacy law in the US is inadequate to offer children legal remedy if their parents share embarrassing images of them without their consent, the dignity-based concerns of the French should not be neglected. I consider a recent proposal to protect children’s privacy by extending to them the “right to be forgotten” online, but I identify problems in this proposal and argue it is not a panacea to the over-sharing problem. I conclude by emphasizing our shared social responsibilities to protect children by teaching them about the importance of respecting one another’s privacy and dignity in the online context, and by setting examples as responsible users of internet technologies.
In this article, I contribute to the debate between two philosophical traditions—the Kantian and the Aristotelian—on the requirements of criminal responsibility and the grounds for excuse by taking this debate to a new context: international criminal law. After laying out broadly Kantian and Aristotelian conceptions of criminal responsibility, I defend a quasi-Aristotelian conception, which affords a central role to moral development, and especially to the development of moral perception, for international criminal law. I show that an implication of this view is that persons who are substantially and non-culpably limited in their capacity for ordinary moral perception warrant an excuse for engaging in unlawful conduct. I identify a particular set of conditions that trigger this excuse, and then I systematically examine it as applied to the controversial case of the former child soldier turned leader of the Lord’s Resistance Army, Dominic Ongwen, who is currently on trial at the International Criminal Court.
The machine-organism analogy has played a pivotal role in the history of Western philosophy and science. Notwithstanding its apparent simplicity, it hides complex epistemological issues about the status of both organism and machine and the nature of their interaction. What is the real object of this analogy: organisms as a whole, their parts or, rather, bodily functions? How can the machine serve as a model for interpreting biological phenomena, cognitive processes, or more broadly the social and cultural transformations of the relations between individuals, and between individuals and the environments in which they live? Wired Bodies. New Perspectives on the Machine-Organism Analogy provides the reader with some of the latest perspectives on this vast debate, addressing three major topics: 1) the development of a ‘mechanistic’ framework in medicine and biology; 2) the methodological issues underlying the use of ‘simulation’ in cognitive science; 3) the interaction between humans and machines according to 20th-century epistemology.
Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
Using placebos in day-to-day practice is an ethical problem. This paper summarises the available epidemiological evidence to support this difficult decision. Based on these data we propose to differentiate between placebo and “knowledge framing”. While the use of placebo should be confined to experimental settings in clinical trials, knowledge framing — which is only conceptually different from placebo — is a desired, expected and necessary component of any doctor-patient encounter. Examples from daily practice demonstrate both the need to investigate the effects of knowledge framing and its impact on ethical, medical, economic and legal decisions.
Kendall Walton argues that photographs, like mirrors and microscopes, meet sufficient conditions to be considered a kind of prosthesis for seeing. Well aware of the controversiality of this claim, he offers three criteria for perception met by photographs, like other perceptual aids, which make them transparent – that is, we see through them. (II) Jonathan Cohen and Aaron Meskin attempt to refute the transparency thesis by arguing that photographs cannot be genuine prostheses for seeing because they fail to meet another necessary condition, namely that of egocentric spatial information (ESI). Only devices that belong to a process type that carries ESI are, in principle, genuine prostheses for seeing. (III) I will offer a two-part refutation of the proposed disqualification of photographs by 1) offering an example of a case where another instance of the process type to which photographs belong carries ESI, establishing the reliability of the process type that allegedly precluded photographs from qualifying, (IV) and 2) offering another example to illustrate how photographs can meet the ESI condition. (V)
We offer an account of the generic use of the term “porn”, as seen in recent usages such as “food porn” and “real estate porn”. We offer a definition adapted from earlier accounts of sexual pornography. On our account, a representation is used as generic porn when it is engaged with primarily for the sake of a gratifying reaction, freed from the usual costs and consequences of engaging with the represented content. We demonstrate the usefulness of the concept of generic porn by using it to isolate a new type of such porn: moral outrage porn. Moral outrage porn is representations of moral outrage, engaged with primarily for the sake of the resulting gratification, freed from the usual costs and consequences of engaging with morally outrageous content. Moral outrage porn is dangerous because it encourages the instrumentalization of one’s empirical and moral beliefs, manipulating their content for the sake of gratification. Finally, we suggest that when using porn is wrong, it is often wrong because it instrumentalizes what ought not to be instrumentalized.
There seems to be a deep tension between two aspects of aesthetic appreciation. On the one hand, we care about getting things right. On the other hand, we demand autonomy. We want appreciators to arrive at their aesthetic judgments through their own cognitive efforts, rather than deferring to experts. These two demands seem to be in tension; after all, if we want to get the right judgments, we should defer to the judgments of experts. The best explanation, I suggest, is that aesthetic appreciation is something like a game. When we play a game, we try to win. But often, winning isn’t the point; playing is. Aesthetic appreciation involves the same flipped motivational structure: we aim at the goal of correctness, but having correct judgments isn’t the point. The point is the engaged process of interpreting, investigating, and exploring the aesthetic object. Deferring to aesthetic testimony, then, makes the same mistake as looking up the answer to a puzzle, rather than solving it for oneself. The shortcut defeats the whole point. This suggests a new account of aesthetic value: the engagement account. The primary value of the activity of aesthetic appreciation lies in the process of trying to generate correct judgments, and not in having correct judgments.
I propose to study one problem for epistemic dependence on experts: how to locate experts on what I will call cognitive islands. Cognitive islands are those domains for knowledge in which expertise is required to evaluate other experts. They exist under two conditions: first, that there is no test for expertise available to the inexpert; and second, that the domain is not linked to another domain with such a test. Cognitive islands are the places where we have the fewest resources for evaluating experts, which makes our expert dependences particularly risky.

Some have argued that cognitive islands lead to the complete unusability of expert testimony: that anybody who needs expert advice on a cognitive island will be entirely unable to find it. I argue against this radical form of pessimism, but propose a more moderate alternative. I demonstrate that we have some resources for finding experts on cognitive islands, but that cognitive islands leave us vulnerable to an epistemic trap which I will call runaway echo chambers. In a runaway echo chamber, our inexpertise may lead us to pick out bad experts, which will simply reinforce our mistaken beliefs and sensibilities.
Games occupy a unique and valuable place in our lives. Game designers do not simply create worlds; they design temporary selves. Game designers set what our motivations are in the game and what our abilities will be. Thus: games are the art form of agency. By working in the artistic medium of agency, games can offer a distinctive aesthetic value. They support aesthetic experiences of deciding and doing.

And the fact that we play games shows something remarkable about us. Our agency is more fluid than we might have thought. In playing a game, we take on temporary ends; we submerge ourselves temporarily in an alternate agency. Games turn out to be a vessel for communicating different modes of agency, for writing them down and storing them. Games create an archive of agencies. And playing games is how we familiarize ourselves with different modes of agency, which helps us develop our capacity to fluidly change our own style of agency.
Games may seem like a waste of time, where we struggle under artificial rules for arbitrary goals. The author suggests that the rules and goals of games are not arbitrary at all. They are a way of specifying particular modes of agency. This is what makes games a distinctive art form. Game designers designate goals and abilities for the player; they shape the agential skeleton which the player will inhabit during the game. Game designers work in the medium of agency. Game-playing, then, illuminates a distinctive human capacity. We can take on ends temporarily for the sake of the experience of pursuing them. Game play shows that our agency is significantly more modular and more fluid than we might have thought. It also demonstrates our capacity to take on an inverted motivational structure. Sometimes we can take on an end for the sake of the activity of pursuing that end.
Most theories of trust presume that trust is a conscious attitude that can be directed only at other agents. I sketch a different form of trust: the unquestioning attitude. What it is to trust, in this sense, is not simply to rely on something, but to rely on it unquestioningly. It is to rely on a resource while suspending deliberation over its reliability. To trust, then, is to set up open pipelines between yourself and parts of the external world — to permit external resources to stand to one in much the same relationship as one’s internal cognitive faculties. This creates efficiency, but at the price of exquisite vulnerability. We must trust in this way because we are cognitively limited beings in a cognitively overwhelming world. Crucially, we can hold the unquestioning attitude towards objects. When I trust my climbing rope, I climb while putting questions of its reliability out of mind. Many people now trust, in this sense, complex technologies such as search algorithms and online calendars. But, one might worry, how could one ever hold such a normatively loaded attitude as trust towards mere objects? How could it ever make sense to feel betrayed by an object? Such betrayal is grounded, not in considerations of inter-agential cooperation, but in considerations of functional integration. Trust is our engine for expanding and outsourcing our agency — for binding external processes into our practical selves. Thus, we can be betrayed by our smartphones in the same way that we can be betrayed by our memory. When we trust, we try to make something a part of our agency, and we are betrayed when our part lets us down. This suggests a new form of gullibility: agential gullibility, which occurs when agents too hastily and carelessly integrate external resources into their own agency.
Persons think. Bodies, time-slices of persons, and brains might also think. They have the necessary neural equipment. Thus, there seems to be more than one thinker in your chair. Critics assert that this is too many thinkers and that we should reject ontologies that allow more than one thinker in your chair. I argue that cases of multiple thinkers are innocuous and that there is not too much thinking. Rather, the thinking shared between, for example, persons and their bodies is exactly what we should expect at the intersection of part sharing and the supervenience of the mental on the physical. I end by responding to the overcrowding objection, the personhood objection, the personal-pronoun reference problem and the epistemic objection.
This theory of "conceptual pragmatism" takes into account both modern philosophical thought and modern mathematics, with stimulating discussions of metaphysics, the a priori, philosophic method, and much more.
The conspicuous similarities between interpretive strategies in classical statistical mechanics and in quantum mechanics may be grounded on their employment of common implementations of probability. The objective probabilities which represent the underlying stochasticity of these theories can be naturally associated with three of their common formal features: initial conditions, dynamics, and observables. Various well-known interpretations of the two theories line up with particular choices among these three ways of implementing probability. This perspective has significant application to debates on primitive ontology and to the quantum measurement problem.
In this paper, I defend teleological theories of belief against the exclusivity objection. I argue that despite the exclusive influence of truth in doxastic deliberation, multiple epistemic aims interact when we consider what to believe. This is apparent when we focus on the processes involved in specific instances (or concrete cases) of doxastic deliberation, in which the propositions under consideration are specified. First, I outline a general schema for weighing aims. Second, I discuss recent attempts to defend the teleological position in relation to this schema. And third, I develop and defend my proposal that multiple epistemic aims interact in doxastic deliberation—a possibility which, as of yet, has received no serious attention in the literature.
An adequate theory of rights ought to forbid the harming of animals (human or nonhuman) to promote trivial interests of humans, as is often done in the animal-user industries. But what should the rights view say about situations in which harming some animals is necessary to prevent intolerable injustices to other animals? I develop an account of respectful treatment on which, under certain conditions, it’s justified to intentionally harm some individuals to prevent serious harm to others. This can be compatible with recognizing the inherent value of the ones who are harmed. My theory has important implications for contemporary moral issues in nonhuman animal ethics, such as the development of cultured meat and animal research.
In this paper I propose an interpretation of classical statistical mechanics that centers on taking seriously the idea that probability measures represent complete states of statistical mechanical systems. I show how this leads naturally to the idea that the stochasticity of statistical mechanics is associated directly with the observables of the theory rather than with the microstates (as traditional accounts would have it). The usual assumption that microstates are representationally significant in the theory is therefore dispensable, a consequence which suggests interesting possibilities for developing non-equilibrium statistical mechanics and investigating inter-theoretic answers to the foundational questions of statistical mechanics.
Games have a complex, and seemingly paradoxical structure: they are both competitive and cooperative, and the competitive element is required for the cooperative element to work out. They are mechanisms for transforming competition into cooperation. Several contemporary philosophers of sport have located the primary mechanism of conversion in the mental attitudes of the players. I argue that these views cannot capture the phenomenological complexity of game-play, nor the difficulty and moral complexity of achieving cooperation through game-play. In this paper, I present a different account of the relationship between competition and cooperation. My view is a distributed view of the conversion: success depends on a large number of features. First, the players must achieve the right motivational state: playing for the sake of the struggle, rather than to win. Second, successful transformation depends on a large number of extra-mental features, including good game design, and social and institutional features.
In The Great Endarkenment, Elijah Millgram argues that the hyper-specialization of expert domains has led to an intellectual crisis. Each field of human knowledge has its own specialized jargon, knowledge, and form of reasoning, and each is mutually incomprehensible to the next. Furthermore, says Millgram, modern scientific practical arguments are draped across many fields. Thus, there is no person in a position to assess the success of such a practical argument for themselves. This arrangement virtually guarantees that mistakes will accrue whenever we engage in cross-field practical reasoning. Furthermore, Millgram argues, hyper-specialization makes intellectual autonomy extremely difficult. Our only hope is to provide better translations between the fields, in order to achieve intellectual transparency. I argue against Millgram’s pessimistic conclusion about intellectual autonomy, and against his suggested solution of translation. Instead, I take his analysis to reveal that there are actually several very distinct forms of intellectual autonomy that are significantly in tension. One familiar kind is direct autonomy, where we seek to understand arguments and reasons for ourselves. Another kind is delegational autonomy, where we seek to find others to invest with our intellectual trust when we cannot understand. A third is management autonomy, where we seek to encapsulate fields, in order to manage their overall structure and connectivity. Intellectual transparency will help us achieve direct autonomy, but many intellectual circumstances require that we exercise delegational and management autonomy. However, these latter forms of autonomy require us to give up on transparency.
What is a game? What are we doing when we play a game? What is the value of playing games? Several different philosophical subdisciplines have attempted to answer these questions using very distinctive frameworks. Some have approached games as something like a text, deploying theoretical frameworks from the study of narrative, fiction, and rhetoric to interrogate games for their representational content. Others have approached games as artworks and asked questions about the authorship of games, about the ontology of the work and its performance. Yet others, from the philosophy of sport, have focused on normative issues of fairness, rule application, and competition. The primary purpose of this article is to provide an overview of several different philosophical approaches to games and, hopefully, demonstrate the relevance and value of the different approaches to each other. Early academic attempts to cope with games tried to treat games as a subtype of narrative and to interpret games exactly as one might interpret a static, linear narrative. A faction of game studies, self-described as “ludologists,” argued that games were a substantially novel form and could not be treated with traditional tools for narrative analysis. In traditional narrative, an audience is told and interprets the story, whereas in a game, the player enacts and creates the story. Since that early debate, theorists have attempted to offer more nuanced accounts of how games might achieve similar ends to more traditional texts. For example, games might be seen as a novel type of fiction, which uses interactive techniques to achieve immersion in a fictional world. Alternately, games might be seen as a new way to represent causal systems, and so a new way to criticize social and political entities. Work from contemporary analytic philosophy of art has, on the other hand, asked whether games could be artworks and, if so, what kind.
Much of this debate has concerned the precise nature of the artwork, and the relationship between the artist and the audience. Some have claimed that the audience is a cocreator of the artwork, and so games are a uniquely unfinished and cooperative art form. Others have claimed that, instead, the audience does not help create the artwork; rather, interacting with the artwork is how an audience member appreciates the artist's finished production. Other streams of work have focused less on the game as a text or work, and more on game play as a kind of activity. One common view is that game play occurs in a “magic circle.” Inside the magic circle, players take on new roles, follow different rules, and actions have different meanings. Actions inside the magic circle do not have their usual consequences for the rest of life. Enemies of the magic circle view have claimed that the view ignores the deep integration of game life with ordinary life, and point to gambling, gold farming, and the status effects of sports. Philosophers of sport, on the other hand, have approached games with an entirely different framework. This has led to investigations about the normative nature of games—what guides the applications of rules and how those rules might be applied, interpreted, or even changed. Furthermore, they have investigated games as social practices and as forms of life.
A philosophical theory of explanation should provide solutions to a series of problems, both descriptive and normative. The aim of this essay is to establish the claim that this can be best done if one theorizes in terms of explanatory games rather than focusing on the explication of the concept of explanation. The position that is adopted is that of an explanatory pluralism and it is elaborated in terms of the rules that incorporate the normative standards that guide the processes of discovery and justification of explanations as well as the modes of their communication, dissemination, and adoption. They constitute the rules of the explanatory game that the participants are playing. The philosophical project consists in describing and normatively appraising the rules that constitute these games.
The current debate over aesthetic testimony typically focuses on cases of doxastic repetition — where an agent, on receiving aesthetic testimony that p, acquires the belief that p without qualification. I suggest that we broaden the set of cases under consideration. I consider a number of cases of action from testimony, including reconsidering a disliked album based on testimony, and choosing an artistic educational institution from testimony. Such uses of aesthetic testimony seem intuitively acceptable. But this cannot simply be explained by supposing that testimony is usable for action, but unusable for doxastic repetition. I consider a new asymmetry in the usability of aesthetic testimony. Consider the following cases: we seem unwilling to accept somebody hanging a painting in their bedroom based merely on testimony, but entirely willing to accept hanging a painting in a museum based merely on testimony. The switch in intuitive acceptability seems to track, in some complicated way, the line between public life and private life. These new cases weigh against a number of standing theories of aesthetic testimony. I suggest that we look further afield, and that something like a sensibility theory, in the style of John McDowell and David Wiggins, will prove to be the best fit for our intuitions for the usability of aesthetic testimony. I propose the following explanation for the new asymmetry: we are willing to accept testimony about whether a work merits being found beautiful; but we are unwilling to accept testimony about whether something actually is beautiful.
Charles Peirce's diagrammatic logic — the Existential Graphs — is presented as a tool for illuminating how we know necessity, in answer to Benacerraf's famous challenge that most ‘semantics for mathematics’ do not ‘fit an acceptable epistemology’. It is suggested that necessary reasoning is in essence a recognition that a certain structure has the particular structure that it has. This means that, contra Hume and his contemporary heirs, necessity is observable. One just needs to pay attention, not merely to individual things but to how those things are related in larger structures, certain aspects of which relations force certain other aspects to be a certain way.
What could ground normative restrictions concerning cultural appropriation which are not grounded by independent considerations such as property rights or harm? We propose that such restrictions can be grounded by considerations of intimacy. Consider the familiar phenomenon of interpersonal intimacy. Certain aspects of personal life and interpersonal relationships are afforded various protections in virtue of being intimate. We argue that an analogous phenomenon exists at the level of large groups. In many cases, members of a group engage in shared practices that contribute to a sense of common identity, such as wearing certain hair or clothing styles or performing a certain style of music. Participation in such practices can generate relations of group intimacy, which can ground certain prerogatives in much the same way that interpersonal intimacy can. One such prerogative is making what we call an appropriation claim. An appropriation claim is a request from a group member that non-members refrain from appropriating a given element of the group’s culture. Ignoring appropriation claims can constitute a breach of intimacy. But, we argue, just as for the prerogatives of interpersonal intimacy, in many cases there is no prior fact of the matter about whether the appropriation of a given cultural practice constitutes a breach of intimacy. It depends on what the group decides together.
Neuroenhancement involves the use of neurotechnologies to improve cognitive, affective or behavioural functioning, where these are not judged to be clinically impaired. Questions about enhancement have become one of the key topics of neuroethics over the past decade. The current study draws on in-depth public engagement activities in ten European countries giving a bottom-up perspective on the ethics and desirability of enhancement. This informed the design of an online contrastive vignette experiment that was administered to representative samples of 1000 respondents in the ten countries and the United States. The experiment investigated how the gender of the protagonist, his or her level of performance, the efficacy of the enhancer and the mode of enhancement affected support for neuroenhancement in both educational and employment contexts. Of these, higher efficacy and lower performance were found to increase willingness to support enhancement. A series of commonly articulated claims about the individual and societal dimensions of neuroenhancement were derived from the public engagement activities. Underlying these claims, multivariate analysis identified two social values. The Societal/Protective highlights counter-normative consequences and opposes the use of enhancers. The Individual/Proactionary highlights opportunities and supports use. For most respondents these values are not mutually exclusive. This suggests that for many, neuroenhancement is viewed simultaneously as a source of both promise and concern.