I. EXECUTIVE SUMMARY

The MRCT Center Post-trial Responsibilities: Continued Access to an Investigational Medicine Framework outlines a case-based, principled, stakeholder approach to evaluating and guiding ethical responsibilities to provide continued access to an investigational medicine at the conclusion of a patient’s participation in a clinical trial. The Post-trial Responsibilities (PTR) Framework includes this Guidance Document as well as the accompanying Toolkit. A 41-member international multi-stakeholder Workgroup convened by the Multi-Regional Clinical Trials Center of Brigham and Women’s Hospital and Harvard University (MRCT Center) developed this Guidance and Toolkit.

Project Motivation

A number of international organizations have discussed the responsibilities stakeholders have to provide continued access to investigational medicines. The World Medical Association, for example, addressed post-trial access to medicines in Paragraph 34 of the Declaration of Helsinki (WMA, 2013): “In advance of a clinical trial, sponsors, researchers and host country governments should make provisions for post-trial access for all participants who still need an intervention identified as beneficial in the trial. This information must also be disclosed to participants during the informed consent process.” This paragraph and other international guidance documents converge on several consensus points:
• Post-trial access (hereafter referred to as “continued access” in this Framework; for terminology clarification, see definitions) is the responsibility of sponsors, researchers, and host country governments;
• The plan for continued access should be determined before the trial begins, and before any individual gives their informed consent;
• The protocol should delineate continued access plans; and
• The plan should be transparent to potential participants and explained during the informed consent process.
However, there is no guidance on how to fulfill these responsibilities (i.e., on linking specific responsibilities with specific stakeholders, conditions, and durations). To fill this gap, the MRCT Center convened a working group in September 2014 to develop a framework to guide stakeholders with identified responsibilities. The resultant Framework sets forth applicable principles, approaches, recommendations, and ethical rationales for PTR regarding continued access to investigational medicines for research participants.
In “What’s So Bad about Scientism?” (Mizrahi 2017), I argue that Weak Scientism, the view that “Of all the knowledge we have, scientific knowledge is the best knowledge” (Mizrahi 2017, 354; emphasis in original), is a defensible position. That is to say, Weak Scientism “can be successfully defended against objections” (Mizrahi 2017, 354). In his response to Mizrahi (2017), Christopher Brown (2017) provides more objections against Weak Scientism, and thus another opportunity for me to show that Weak Scientism is a defensible position, which is what I will do in this reply. In fact, I think that I have already addressed Brown’s (2017) objections in Mizrahi (2017), so I will simply highlight those arguments here.
§1.1 What motivated the perpetrators of the Holocaust? Christopher Browning and Daniel Goldhagen differ in their analysis of Reserve Police Battalion 101 (Browning 1992, Goldhagen 1996). The battalion consisted of around 500 ‘ordinary’ Germans who, during the period 1942–44, killed around 40,000 Jews and deported as many to the death camps. Browning and Goldhagen differ over the motivation with which the men killed. I want to comment on a central aspect of this debate.
This paper asks whether a necessity can be the source of necessity. According to an influential argument due to Simon Blackburn, it cannot. This paper argues that although Blackburn fails to show that a necessity cannot be the source of necessity, extant accounts fail to establish that it is, with particular focus on Bob Hale’s essentialist theory and Christopher Peacocke’s ‘principle-based’ theory of modality. However, the paper makes some positive suggestions for what a satisfactory answer to the challenge must look like.
Bernard Wills (2018) joins Christopher Brown (2017, 2018) in criticizing my defense of Weak Scientism (Mizrahi 2017a, 2017b, 2018a). Unfortunately, it seems that Wills did not read my latest defense of Weak Scientism carefully, nor does he cite any of the other papers in my exchange with Brown.
Intellectual attention, like perceptual attention, is a special mode of mental engagement with the world. When we attend intellectually, rather than making use of sensory information we make use of the kind of information that shows up in occurrent thought, memory, and the imagination (Chun, Golomb, & Turk-Browne, 2011). In this paper, I argue that reflecting on what it is like to comprehend memory demonstratives speaks in favour of the view that intellectual attention is required to understand memory demonstratives. Moreover, I argue that this is a line of thought endorsed by Gareth Evans in his Varieties of Reference (1982). In so doing, I improve on interpretations of Evans that have been offered by Christopher Peacocke (1984) and by Christoph Hoerl & Teresa McCormack (a co-authored piece, 2005), and I also improve on McDowell’s (1990) criticism of Peacocke’s interpretation of Evans. Like McDowell, I believe that Peacocke may overemphasize the role that “memory-images” play in Evans’ account of comprehending memory demonstratives. But unlike McDowell, I provide a positive characterization of how Evans described the phenomenology of comprehending memory demonstratives.
The choices we make in our daily lives have consequences that span the oceans: many consumers are unaware that some of the most exotic foods on our breakfast plates every single day, such as coffee or chocolate, have a profound impact on the lives of many people. In Western societies, we are used to eating and consuming fresh ingredients which sprout on a different continent, yet, as a consequence of a global supply chain, we are unable to see the very hands that carry a thing as simple as a banana to our tables. This alienation from the places and people involved in the supply chain leads consumers to ignore the impact of producing some foods and enabling them to travel all the way to one’s table. What is regarded as a simple commodity is, in fact, the result of the labour and exploitation of many families and crops on the other side of the ocean. Modern slavery comes in many guises and is often obscured by the alienation of modern consumers from their products, one example being the slave system that holds many people in bondage behind our food chains. As consumers, we unconsciously become commissioners of a system of inequality and exploitation which we ignore. This includes many ‘fair-trade’ certified products, which are employed by multinationals as a psychological marketing tactic. This phenomenon is described by the cultural anthropologist Richard Robbins (2013) as the ‘commodification of morality’, whereby even commitments to just, fair or sustainable practices have been monopolised by economic agents. Within this framework, our moral choices are put on the market at a price which rarely returns or reflects the true cost of such products. This article begins by defining modern slavery, proceeding with a particular focus on forced labour under the current neoliberal regime. This is then contextualised in the case study of bananas as one of the most consumed, yet furthest-grown, items of Western diets.
The article then analyses the ethical backdrop of economic practices, using the fair-trade movement as a synecdoche of the moral economy of our day. The main question raised within this analysis is to what extent our moral choices can contribute to exploitation or to social change, and how our way of eating can oppose the great inequalities that still exist in the present context.
Within the classical tripartition of powers, courts and tribunals have always held the most marginal role, limited to the interpretation of laws. In recent decades, however, judiciaries have been increasingly addressed with the task of resolving moral issues and political questions, drawing power away from representative institutions. The intention of this essay is to analyse the judicialisation of politics and how this emergent phenomenon is slowly reshaping the skeleton of political structures and mutating the political environment, particularly within the United Kingdom. In order to provide beneficial outcomes, this phenomenon should be accompanied by an attempt to embrace more democratic principles, seeking to promote a more inclusive space in the light of greater political responsibility, dialogue, and plurality of opinion.
Decision theory and folk psychology both purport to represent the same phenomena: our belief-like and desire- and preference-like states. They also purport to do the same work with these representations: explain and predict our actions. But they do so with different sets of concepts. There's much at stake in whether one of these two sets of concepts can be accounted for with the other. Without such an account, we'd have two competing representations and systems of prediction and explanation, a dubious dualism. Folk psychology structures our daily lives and has proven fruitful in the study of mind and ethics, while decision theory is pervasive in various disciplines, including the quantitative social sciences, especially economics, and philosophy. My interest is in accounting for folk psychology with decision theory -- in particular, for believing and wanting, which decision theory omits. Many have attempted this task for belief. (The Lockean Thesis says that there is such an account.) I take up the parallel task for wanting, which has received far less attention. I propose necessary and sufficient conditions, stated in terms of decision theory, for when you're truly said to want; I give an analogue of the Lockean Thesis for wanting. My account is an alternative to orthodox accounts that link wanting to preference (e.g. Stalnaker (1984), Lewis (1986)), which I argue are false. I argue further that want ascriptions are context-sensitive. My account explains this context-sensitivity, makes sense of conflicting desires, and accommodates phenomena that motivate traditional theses on which 'want' has multiple senses (e.g. all-things-considered vs. pro tanto).
I want to see the concert, but I don’t want to take the long drive. Both of these desire ascriptions are true, even though I believe I’ll see the concert if and only if I take the drive. Yet they, and strongly conflicting desire ascriptions more generally, are predicted incompatible by the standard semantics, given two standard constraints. There are two proposed solutions. I argue that both face problems because they misunderstand how what we believe influences what we desire. I then sketch my own solution: a coarse-worlds semantics that captures the extent to which belief influences desire. My semantics models what I call some-things-considered desire. Considering what the concert would be like, but ignoring the drive, I want to see the concert; considering what the drive would be like, but ignoring the concert, I don’t want to take the drive.
Campbell Brown has recently argued that G.E. Moore's intrinsic value holism is superior to Jonathan Dancy's. I show that the advantage which Brown claims for Moore's view over Dancy's is illusory, and that Dancy's view may be superior.
The self-worth of political communities is often understood to be an expression of their position in a hierarchy of power; if so, then the desire for self-worth is a source of competition and conflict in international relations. In early modern German natural law theories, one finds the alternative view, according to which duties of esteem toward political communities should reflect the degree to which they fulfill the functions of civil government. The present article offers a case study, examining the views concerning confederation rights and the resulting duties of esteem in diplomatic relations developed by Christoph Besold (1577–1638). Besold defends the view that confederations including dependent communities—such as the Hanseatic League—could fulfill a stabilizing political function. He also uses sixteenth-century conceptions concerning the acquisition of sovereignty rights through prescription of immemorial time. Both strands of argument lead to the conclusion that the envoys of dependent communities can have the right to be recognized as ambassadors, with all the duties of esteem that follow from this recognition.
In Modal Logic as Metaphysics, Timothy Williamson claims that the possibilism-actualism (P-A) distinction is badly muddled. In its place, he introduces a necessitism-contingentism (N-C) distinction that he claims is free of the confusions that purportedly plague the P-A distinction. In this paper I argue first that the P-A distinction, properly understood, is historically well-grounded and entirely coherent. I then look at the two arguments Williamson levels at the P-A distinction and find them wanting and show, moreover, that, when the N-C distinction is broadened (as per Williamson himself) so as to enable necessitists to fend off contingentist objections, the P-A distinction can be faithfully reconstructed in terms of the N-C distinction. However, Williamson’s critique does point to a genuine shortcoming in the common formulation of the P-A distinction. I propose a new definition of the distinction in terms of essential properties that avoids this shortcoming.
The question of where the knowledge gained from thought experiments comes from has been one of the most fundamental issues discussed regarding the epistemological position of thought experiments. In this regard, Pierre Duhem shows a skeptical attitude on the subject, stating that thought experiments cannot be evaluated as real experiments and cannot even be accepted as an alternative to real experiments. James R. Brown, on the other hand, states that thought experiments which are not based on new experimental evidence or logically derived from old data, called Platonic thought experiments, provide intuitive access to a priori knowledge. Unlike Brown, John D. Norton strictly criticizes the idea that thought experiments provide mysterious access to the knowledge of the physical world, and states that thought experiments cannot provide knowledge that transcends empiricism. In the context of the Norton-Brown debate, this article supports Brown's stance on thought experiments by critically analyzing the thoughts put forward on the subject.
In higher education, interdisciplinarity involves the design of subjects that offer the opportunity to experience ‘different ways of knowing’ from students’ core or preferred disciplines. Such an education is increasingly important in a global knowledge economy. Many universities have begun to introduce interdisciplinary studies or subjects to meet this perceived need. This chapter explores some of the issues inherent in moves towards interdisciplinary higher education. Definitional issues associated with the term ‘academic discipline’, as well as other terms, including ‘multidisciplinary’, ‘cross-disciplinary’, ‘pluridisciplinarity’, ‘transdisciplinarity’ and ‘interdisciplinary’ are examined. A new nomenclature is introduced to assist in clarifying the subtle distinctions between the various positions. The chapter also outlines some of the pedagogical and epistemological considerations which are involved in any move from a conventional form of educational delivery to an interdisciplinary higher education, and recommends caution in any implementation of an interdisciplinary curriculum.
‘Sentience’ sometimes refers to the capacity for any type of subjective experience, and sometimes to the capacity to have subjective experiences with a positive or negative valence, such as pain or pleasure. We review recent controversies regarding sentience in fish and invertebrates and consider the deep methodological challenge posed by these cases. We then present two ways of responding to the challenge. In a policy-making context, precautionary thinking can help us treat animals appropriately despite continuing uncertainty about their sentience. In a scientific context, we can draw inspiration from the science of human consciousness to disentangle conscious and unconscious perception (especially vision) in animals. Developing better ways to disentangle conscious and unconscious affect is a key priority for future research.
‘If you want to go to Harlem, you have to take the A train’ doesn’t look special. Yet a compositional account of its meaning, and the meaning of anankastic conditionals more generally, has proven an enigma. Semanticists have responded by assigning anankastics a unique status, distinguishing them from ordinary indicative conditionals. Condoravdi & Lauer (2016) maintain instead that “anankastic conditionals are just conditionals.” I argue that Condoravdi and Lauer don’t give a general solution to a well-known problem: the problem of conflicting goals. They rely on a special, “effective preference” interpretation of ‘want’ on which an agent cannot want two things that conflict with her beliefs. A general solution, though, requires that the goals cannot conflict with the facts. Condoravdi and Lauer’s view fails. Yet they show, I believe, that previous accounts fail too. Anankastic conditionals are still a mystery.
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
Ectogestation involves the gestation of a fetus in an ex utero environment. The possibility of this technology raises a significant question for the abortion debate: Does a woman’s right to end her pregnancy entail that she has a right to the death of the fetus when ectogestation is possible? Some have argued that it does not (Mathison & Davis). Others claim that, while a woman alone does not possess an individual right to the death of the fetus, the genetic parents have a collective right to its death (Räsänen). In this paper, I argue that the possibility of ectogestation will radically transform the problem of abortion. The argument that I defend purports to show that, even if it is not a person, there is no right to the death of a fetus that could be safely removed from a human womb and gestated in an artificial womb, because there are competent people who are willing to care for and raise the fetus as it grows into a person. Thus, given the possibility of ectogestation, the moral status of the fetus plays no substantial role in determining whether there is a right to its death.
Contemporary recognition theory has developed powerful tools for understanding a variety of social problems through the lens of misrecognition. It has, however, paid somewhat less attention to how to conceive of appropriate responses to misrecognition, usually making the tacit assumption that the proper societal response is adequate or proper affirmative recognition. In this paper I argue that, although affirmative recognition is one potential response to misrecognition, it is not the only such response. In particular, I make the case for derecognition in some cases: derecognition through the systematic deinstitutionalization or uncoupling of various reinforcing components of social institutions, components whose tight combination in one social institution has led to the misrecognition in the first place. I make the case through the example of recent United States debates over marriage, especially but not only with respect to gay marriage. I argue that the proper response to the misrecognition of sexual minorities embodied in exclusively heterosexual marriage codes is not affirmative recognition of lesbian and gay marriages, but rather the systematic derecognition of legal marriage as currently understood. I also argue that the systematic misrecognition of women that occurs under the contemporary institution of marriage would likewise best be addressed through legal uncoupling of the heterogeneous social components embodied in the contemporary social institution of marriage.
This paper argues that political civility is actually an illusionistic ideal and that, as such, realism counsels that we acknowledge both its promise and peril. Political civility is, I will argue, a tension-filled ideal. We have good normative reasons to strive for and encourage more civil political interactions, as they model our acknowledgement of others as equal citizens and facilitate high-quality democratic problem-solving. But we must simultaneously be attuned to civility’s limitations, its possible pernicious side-effects, and its potential for strategic manipulation and oppressive abuse, particularly in contemporary, pluralistic and heterogeneous societies.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
The Brown v. Board of Education decision of 1954 mandated school integration. The decision also failed to recognize that inequalities outside the schools, of both a class- and race-based nature, prevent equality in education. Today, the most prominent argument for integration is that disadvantaged students benefit from the financial, social, and cultural “capital” of middle-class families when the children attend the same schools. This argument fails to recognize that disadvantaged students contribute to advantaged students’ educational growth, and it sends demeaning messages to the disadvantaged students and messages of unwarranted superiority to the advantaged. Parents, teachers, and schools can adopt a justice perspective that avoids these deleterious aspects of the capital argument and helps create a community of equals inside the integrated school. Struggles for educational justice must remain closely linked with struggles of both a class- and race-based nature for other forms of justice in the wider society.
When people want to identify the causes of an event, assign credit or blame, or learn from their mistakes, they often reflect on how things could have gone differently. In this kind of reasoning, one considers a counterfactual world in which some events are different from their real-world counterparts and considers what else would have changed. Researchers have recently proposed several probabilistic models that aim to capture how people do (or should) reason about counterfactuals. We present a new model and show that it accounts better for human inferences than several alternative models. Our model builds on the work of Pearl (2000), and extends his approach in a way that accommodates backtracking inferences and that acknowledges the difference between counterfactual interventions and counterfactual observations. We present six new experiments and analyze data from four experiments carried out by Rips (2010), and the results suggest that the new model provides an accurate account of both mean human judgments and the judgments of individuals.
Animal welfare has a long history of disregard. While in recent decades the study of animal welfare has become a scientific discipline of its own, the difficulty of measuring animal welfare can still be vastly underestimated. There are three primary theories, or perspectives, on animal welfare - biological functioning, natural living and affective state. These come with their own diverse methods of measurement, each providing a limited perspective on an aspect of welfare. This paper describes a perspectival pluralist account of animal welfare, in which all three theoretical perspectives and their multiple measures are necessary to understand this complex phenomenon and provide a full picture of animal welfare. This in turn will offer us a better understanding of perspectivism and pluralism itself.
In Between Facts and Norms (1992), Habermas set out a theory of law and politics that is linked both to our high normative expectations and to the realities consequent upon the practices and institutions meant to put them into effect. The article discusses Hugh Baxter’s Habermas: The Discourse Theory of Law and Democracy and the drawbacks he finds in Habermas’ theory. It focuses on raising questions about and objections to some of the author’s leading claims.
What is it like to be a bat? What is it like to be sick? These two questions are much closer to one another than has hitherto been acknowledged. Indeed, both raise a number of related, albeit very complex, philosophical problems. In recent years, the phenomenology of health and disease has become a major topic in bioethics and the philosophy of medicine, owing much to the work of Havi Carel (2007, 2011, 2018). Surprisingly little attention, however, has been given to the phenomenology of animal health and suffering. This omission shall be remedied here, laying the groundwork for the phenomenological evaluation of animal health and suffering.
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about the outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction, as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
Political epistemology is the intersection of political philosophy and epistemology. This paper develops a political 'hinge' epistemology. Political hinge epistemology draws on the idea that all belief systems have fundamental presuppositions which play a role in the determination of reasons for belief and other attitudes. It uses this core idea to understand and tackle political epistemological challenges, like political disagreement, polarization, political testimony, political belief, ideology, and biases, among other possibilities. I respond to two challenges facing the development of a political hinge epistemology. The first concerns the nature and demarcation of political hinges, while the second concerns rational deliberation over political hinges. I then use political hinge epistemology to analyze ideology, dealing with the challenge of how an agent's ideology 'masks' or distorts their understanding of social reality, along with the challenge of how ideology critique can change the beliefs of agents who adhere to dominant ideologies, if agents only have their own or the competing ideology to rely on (see Haslanger 2017). I then explore how political hinge epistemology might be extended to further our understanding of political belief polarization.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
The aim of the paper is to develop general criteria of argumentative validity and adequacy for probabilistic arguments on the basis of the epistemological approach to argumentation. In this approach, as in most other approaches to argumentation, probabilistic arguments have been neglected somewhat. Nonetheless, criteria for several special types of probabilistic arguments have been developed, in particular by Richard Feldman and Christoph Lumer. In the first part (sects. 2-5) the epistemological basis of probabilistic arguments is discussed. With regard to the philosophical interpretation of probabilities a new subjectivist, epistemic interpretation is proposed, which identifies probabilities with tendencies of evidence (sect. 2). After drawing the conclusions of this interpretation with respect to the syntactic features of the probability concept, e.g. one variable referring to the data base (sect. 3), the justification of basic probabilities (priors) by judgements of relative frequency (sect. 4) and the justification of derivative probabilities by means of the probability calculus are explained (sect. 5). The core of the paper is the definition of '(argumentatively) valid derivative probabilistic arguments', which provides exact conditions for epistemically good probabilistic arguments, together with conditions for the adequate use of such arguments for the aim of rationally convincing an addressee (sect. 6). Finally, some measures for improving the applicability of probabilistic reasoning are proposed (sect. 7).
I am going to argue for a robust realism about magnitudes, as irreducible elements in our ontology. This realistic attitude, I will argue, gives a better metaphysics than the alternatives. It suggests some new options in the philosophy of science. It also provides the materials for a better account of the mind’s relation to the world, in particular its perceptual relations.
This paper seeks to recover the function of universal history, which was to place particulars into relation with universals. By the 20th century universal history was largely discredited because of an idealism that served to lend epistemic coherence to the overwhelming complexity arising from universal history's comprehensive scope. Idealism also attempted to account for history's being "open"--for the human ability to transcend circumstance. The paper attempts to recover these virtues without the idealism by defining universal history not by its scope but rather as a scientific method that provides an understanding of any kind of historical process, be it physical, biological or human. While this method is not new, it is in need of a development that offers a more robust historiography and warrant as a liberating historical consciousness. The first section constructs an ontology of process by defining matter as ontic probabilities rather than as closed entities. This is lent warrant in the next section through an appeal to contemporary physical science. The resulting conceptual frame and method is applied to the physical domain of existents, to the biological domain of social being and finally to the human domain of species being. It is then used to account for the emergence of human history's initial stage--the Archaic Socio-Economic Formation--and for history's stadial trajectory--its alternation of evolution and revolution.
In Fallibilism: Evidence and Knowledge, Jessica Brown identifies a number of problems for the so-called knowledge view of justification. According to this view, we cannot justifiably believe what we do not know. Most epistemologists reject this view on the grounds that false beliefs can be justified if, say, supported by the evidence or produced by reliable processes. We think this is a mistake and that many epistemologists are classifying beliefs as justified because they have properties that indicate that something should be excused. Brown thinks that previous attempts to make this case have been unsuccessful. While the difficulties Brown points to are genuine, I think they show that attempts to explain a classificatory judgment haven't been successful. Still, I would argue that the classification is correct. We need a better explanation of this classificatory judgment. I will try to clarify the justification-excuse distinction and explain why it's a mistake to insist that beliefs that violate epistemic norms might be justified. Just as it's possible for a rational agent to act without justification in spite of her best intentions, it's possible that a rational thinker who follows the evidence and meets our expectations might nevertheless believe without sufficient justification. If our justified beliefs are supposed to guide us in deciding what to do, we probably should draw on discussions from morality and the law about the justification/excuse distinction to inform our understanding of the epistemic case.
Professor Christopher Stead was Ely Professor of Divinity from 1971 until his retirement in 1980 and one of the great contributors to the Oxford Patristic Conferences for many years. In this paper I reflect on his work in Patristics, and I attempt to understand how his interests diverged from those of the other major contributors in the same period, and how they were formed by his philosophical milieu and the spirit of the age. As a case study to illustrate and diagnose his approach, I shall focus on a debate between Stead and Rowan Williams about the significance of the word idios in Arius's theology (in the course of which I also make some suggestions of my own about the issue).
Blaming (construed broadly to include both blaming-attitudes and blaming-actions) is a puzzling phenomenon. Even when we grant that someone is blameworthy, we can still sensibly wonder whether we ought to blame him. We sometimes choose to forgive and show mercy, even when it is not asked for. We are naturally led to wonder why we shouldn’t always do this. Wouldn’t it be better to wholly reject the punitive practices of blame, especially in light of their often undesirable effects, and embrace an ethic of unrelenting forgiveness and mercy? In this paper I seek to address these questions by offering an account of blame that provides a rationale for thinking that to wholly forswear blaming blameworthy agents would be deeply mistaken. This is because, as I will argue, blaming is a way of valuing; it is “a mode of valuation.” I will argue that among the minimal standards of respect generated by valuable objects, notably persons, is the requirement to redress disvalue with blame. It is not just that blame is something additional we are required to do in properly valuing; rather, blame is part of what it is to properly value. Blaming, given the existence of blameworthy agents, is a mode of valuation required by the standards of minimal respect. To forswear blame would be to fail to value what we ought to value.
A central tension shaping metaethical inquiry is that normativity appears to be subjective yet real, where it’s difficult to reconcile these aspects. On the one hand, normativity pertains to our actions and attitudes. On the other, normativity appears to be real in a way that precludes it from being a mere figment of those actions and attitudes. In this paper, I argue that normativity is indeed both subjective and real. I do so by way of treating it as a special sort of artifact, where artifacts are mind-dependent yet nevertheless can carve at the joints of reality. In particular, I argue that the properties of being a reason and being valuable for are grounded in attitudes yet are still absolutely structural.
Though the realm of biology has long been under the philosophical rule of the mechanistic magisterium, recent years have seen a surprisingly steady rise in the usurping prowess of process ontology. According to its proponents, theoretical advances in the contemporary science of evo-devo have afforded that ontology a particularly powerful claim to the throne: in that increasingly empirically confirmed discipline, emergently autonomous, higher-order entities are the reigning explanantia. If we are to accept the election of evo-devo as our best conceptualisation of the biological realm with metaphysical rigour, must we depose our mechanistic ontology for failing to properly “carve at the joints” of organisms? In this paper, I challenge the legitimacy of that claim: not only can the theoretical benefits offered by a process ontology be had without it, they cannot be sufficiently grounded without the metaphysical underpinning of the very mechanisms which processes purport to replace. The biological realm, I argue, remains one best understood as under the governance of mechanistic principles.
We discuss the social-epistemic aspects of Catherine Elgin’s theory of reflective equilibrium and understanding and argue that it yields an argument for the view that a crucial social-epistemic function of epistemic authorities is to foster understanding in their communities. We explore the competences that enable epistemic authorities to fulfil this role and argue that among them is an epistemic virtue we call “epistemic empathy”.
A community, for ecologists, is a unit for discussing collections of organisms. It refers to collections of populations, which consist (by definition) of individuals of a single species. This is straightforward. But communities are unusual kinds of objects, if they are objects at all. They are collections consisting of other diverse, scattered, partly-autonomous, dynamic entities (that is, animals, plants, and other organisms). They often lack obvious boundaries or stable memberships, as their constituent populations not only change but also move in and out of areas, and in and out of relationships with other populations. Familiar objects have identifiable boundaries, for example, and if communities do not, maybe they are not objects. Maybe they do not exist at all. The question this possibility suggests, of what criteria there might be for identifying communities, and for determining whether such communities exist at all, has long been discussed by ecologists. This essay addresses this question as it has recently been taken up by philosophers of science, by examining answers to it which appeared a century ago and which have framed the continuing discussion.
Emotional states of consciousness, or what are typically called emotional feelings, are traditionally viewed as being innately programmed in subcortical areas of the brain, and are often treated as different from cognitive states of consciousness, such as those related to the perception of external stimuli. We argue that conscious experiences, regardless of their content, arise from one system in the brain. On this view, what differs in emotional and non-emotional states is the kind of inputs that are processed by a general cortical network of cognition, a network essential for conscious experiences. Although subcortical circuits are not directly responsible for conscious feelings, they provide non-conscious inputs that coalesce with other kinds of neural signals in the cognitive assembly of conscious emotional experiences. In building the case for this proposal, we defend a modified version of what is known as the higher-order theory of consciousness.
It may be true that we are epistemically in the dark about various things. Does this fact ground the truth of fallibilism? No. Still, even the most zealous skeptic will probably grant that it is not clear that one can be incognizant of their own occurrent phenomenal conscious mental goings-on. Even so, this does not entail infallibilism. Philosophers who argue that occurrent conscious experiences play an important epistemic role in the justification of introspective knowledge assume that there are occurrent beliefs. But this assumption is false. This paper argues that there are no occurrent beliefs. And it considers the epistemic consequences this result has for views that attempt to show that at least some phenomenal beliefs are infallible.
This paper argues that we should replace the common classification of theories of welfare into the categories of hedonism, desire theories, and objective list theories. The tripartite classification is objectionable because it is unduly narrow and it is confusing: it excludes theories of welfare that are worthy of discussion, and it obscures important distinctions. In its place, the paper proposes two independent classifications corresponding to a distinction emphasised by Roger Crisp: a four-category classification of enumerative theories (about which items constitute welfare), and a four-category classification of explanatory theories (about why these items constitute welfare).
The advent of contemporary evolutionary theory ushered in the eventual decline of Aristotelian Essentialism (Æ) – for it is widely assumed that essence does not, and cannot have any proper place in the age of evolution. This paper argues that this assumption is a mistake: if Æ can be suitably evolved, it need not face extinction. In it, I claim that if that theory’s fundamental ontology consists of dispositional properties, and if its characteristic metaphysical machinery is interpreted within the framework of contemporary evolutionary developmental biology, an evolved essentialism is available. The reformulated theory of Æ offered in this paper not only fails to fall prey to the typical collection of criticisms, but is also independently both theoretically and empirically plausible. The paper contends that, properly understood, essence belongs in the age of evolution.
What is philosophy of science? Numerous manuals, anthologies or essays provide carefully reconstructed vantage points on the discipline that have been gained through expert and piecemeal historical analyses. In this paper, we address the question from a complementary perspective: we target the content of one major journal of the field—Philosophy of Science—and apply unsupervised text-mining methods to its complete corpus, from its start in 1934 until 2015. By running topic-modeling algorithms over the full-text corpus, we identified 126 key research topics that span across 82 years. We also tracked their evolution and fluctuating significance over time in the journal articles. Our results concur with and document known and lesser-known episodes of the philosophy of science, including the rise and fall of logic and language-related topics, the relative stability of a metaphysical and ontological questioning (space and time, causation, natural kinds, realism), the significance of epistemological issues about the nature of scientific knowledge as well as the rise of a recent philosophy of biology and other trends. These analyses exemplify how computational text-mining methods can be used to provide an empirical large-scale and data-driven perspective on the history of philosophy of science that is complementary to other current historical approaches.
Nearly all defences of the agent-causal theory of free will portray the theory as a distinctively libertarian one — a theory that only libertarians have reason to accept. According to what I call ‘the standard argument for the agent-causal theory of free will’, the reason to embrace agent-causal libertarianism is that libertarians can solve the problem of enhanced control only if they furnish agents with the agent-causal power. In this way it is assumed that there is only reason to accept the agent-causal theory if there is reason to accept libertarianism. I aim to refute this claim. I will argue that the reasons we have for endorsing the agent-causal theory of free will are nonpartisan. The real reason for going agent-causal has nothing to do with determinism or indeterminism, but rather with avoiding reductionism about agency and the self. As we will see, if there is reason for libertarians to accept the agent-causal theory, there is just as much reason for compatibilists to accept it. It is in this sense that I contend that if anyone should be an agent-causalist, then everyone should be an agent-causalist.
Although they are continually compositionally reconstituted and reconfigured, organisms nonetheless persist as ontologically unified beings over time – but in virtue of what? A common answer is: in virtue of their continued possession of the capacity for morphological invariance which persists through, and in spite of, their mereological alteration. While we acknowledge that organisms’ capacity for the “stability of form” – homeostasis – is an important aspect of their diachronic unity, we argue that this capacity is derived from, and grounded in, a more primitive one – namely, the homeodynamic capacity for the “specified variation of form”. In introducing a novel type of causal power – a ‘structural power’ – we claim that it is the persistence of their dynamic potential to produce a specified series of structurally adaptive morphologies which grounds organisms’ privileged status as metaphysically “one over many” over time.