Aesthetic hedonism is the view that to be aesthetically good is to please. For most aesthetic hedonists, aesthetic normativity is hedonic normativity. This paper argues that Kant's third critique contains resources for a non-hedonic account of aesthetic normativity as sourced in autonomy as self-legislation. A case is made that the account is also Kant's because it ties his aesthetics into a key theme of his larger philosophy.
This article discusses Heidegger’s interpretation of Parmenides given in his last public lecture, ‘The Principle of Identity’, in 1957. The aim of the piece is to illustrate just how original and significant Heidegger’s reading of Parmenides and the principle of identity is within the history of philosophy. Thus the article will examine the traditional metaphysical interpretation of Parmenides and consider G.W.F. Hegel’s and William James’s accounts of the principle of identity in light of this. It will then consider Heidegger’s contribution, his return to and re-interpretation of Parmenides in his last lecture. Heidegger will, through the Parmenidean claim that ‘Thinking and Being are one’, deconstruct the traditional metaphysical understanding of the principle of identity, and in its place offer a radically different conception of how our relationship, our ‘belonging together’ with Being, can be understood.
Vision often dominates other perceptual modalities both at the level of experience and at the level of judgment. In the well-known McGurk effect, for example, one’s auditory experience is consistent with the visual stimuli but not the auditory stimuli, and naïve subjects’ judgments follow their experience. Structurally similar effects occur for other modalities (e.g. rubber hand illusions). Given the robustness of this visual dominance, one might not be surprised that visual imagery often dominates imagery in other modalities. One might be surprised, however, that visual imagery often dominates perception in other modalities. This more controversial claim is motivated both by empirical data and by introspection. Some think of perception-perception visual dominance as epistemically good, holding that cases in which visual dominance misleads us (e.g. McGurk and rubber hand illusions) are cases in which the perceptual system resolves conflicts according to principles that are generally reliable. Here, we explore support for the more controversial claim that imagery-perception visual dominance is epistemically good. We suggest that, when the task is richly spatial, requiring for optimal performance the all-at-once identification of macro-spatial and allocentric properties (e.g. identifying the shape or location of a felt or heard object), the visual, whether perception or imagination, should dominate other modalities. Put another way, when identifying objects, one should go and look or, short of that, visually imagine candidate objects, and then follow the visual, even against conflicting perceptions from other modalities. For this broadly-typed category of cognitive-perceptual task, vision does dominate, and it should.
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
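To make the dominance relation invoked above concrete, here is a minimal Python sketch of a first-order stochastic dominance test for discrete gambles. The payoffs and probabilities are illustrative assumptions, not the paper's own examples, and the paper's central result, which turns on unbounded, heavy-tailed background uncertainty, cannot be exhibited on a finite grid like this one.

```python
import numpy as np

def cdf_on_grid(values, probs, grid):
    """Cumulative distribution function of a discrete lottery at each grid point."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return np.array([probs[values <= x].sum() for x in grid])

def first_order_dominates(v1, p1, v2, p2):
    """True iff lottery 1 first-order stochastically dominates lottery 2:
    its CDF is everywhere <= lottery 2's, and strictly lower somewhere."""
    grid = np.union1d(v1, v2)
    c1 = cdf_on_grid(v1, p1, grid)
    c2 = cdf_on_grid(v2, p2, grid)
    return bool(np.all(c1 <= c2 + 1e-12) and np.any(c1 < c2 - 1e-12))

# A risky, expectation-superior gamble vs. a safe option (illustrative numbers):
risky_v, risky_p = [0.0, 100.0], [0.5, 0.5]   # expected value 50
safe_v,  safe_p  = [40.0], [1.0]              # expected value 40

print(first_order_dominates(risky_v, risky_p, safe_v, safe_p))     # False

# A pair where dominance does hold, for contrast:
better_v, better_p = [0.0, 100.0], [0.4, 0.6]
print(first_order_dominates(better_v, better_p, risky_v, risky_p))  # True
```

The first check returns False because the expectation-superior gamble fails to dominate the safe option "in a vacuum"; the second pair shows what a genuine dominance relation looks like.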
This paper offers a fine-grained analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what was originally intended.
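For orientation, Savage's P2 can be stated in the standard textbook form below; the notation (acts f, g, f′, g′, an event E, and a preference relation over acts) is the conventional one rather than a quotation from the paper.

```latex
% Savage's P2 (sure-thing principle), standard textbook formulation.
% For acts f, g, f', g' and an event E:
\[
\left.
\begin{aligned}
f(s) &= f'(s), \quad g(s) = g'(s) && \text{for all } s \in E\\
f(s) &= g(s), \quad f'(s) = g'(s) && \text{for all } s \notin E
\end{aligned}
\right\}
\quad \Longrightarrow \quad
\bigl( f \succsim g \iff f' \succsim g' \bigr).
\]
```

Informally: if two acts agree with each other outside E, the preference between them depends only on what they yield on E.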
In the first wave of the COVID-19 pandemic, healthcare workers in some countries were forced to make distressing triaging decisions about which individual patients should receive potentially life-saving treatment. Much of the ethical discussion prompted by the pandemic has concerned which moral principles should ground our response to these individual triage questions. In this paper we aim to broaden the scope of this discussion by considering the ethics of broader structural allocation decisions raised by the COVID-19 pandemic. More specifically, we consider how nations ought to distribute a scarce life-saving resource across healthcare regions in a public health emergency, particularly in view of regional differences in projected need and existing capacity. We call this the regional triage question. Using the case study of ventilators in the COVID-19 pandemic, we show how the moral frameworks that we might adopt in response to individual triage decisions do not translate straightforwardly to this regional-level triage question. Having outlined what we take to be a plausible egalitarian approach to the regional triage question, we go on to propose a novel way of operationalising the ‘save the most lives’ principle in this context. We claim that the latter principle ought to take some precedence in the regional triage question, but also note important limitations to the extent of the influence that it should have in regional allocation decisions.
This article offers a novel, conservative account of material constitution, one that incorporates sortal essentialism and features a theory of dominant sortals. It avoids coinciding objects, temporal parts, relativizations of identity, mereological essentialism, anti-essentialism, denials of the reality of the objects of our ordinary ontology, and other departures from the metaphysic implicit in ordinary ways of thinking. Defenses of the account against important objections are found in Burke 1997, 2003, and 2004, as well as in the often neglected six paragraphs that conclude section V of this article.
Thomas Aquinas embraces a controversial claim about the way in which parts of a substance depend on the substance’s substantial form. On his metaphysics, a ‘substantial form’ is not merely a relation among already existing things, in virtue of which (for example) the arrangement or configuration of those things would count as a substance. The substantial form is rather responsible for the identity or nature of the parts of the substance such a form constitutes. Aquinas’ controversial claim can be roughly put as the view that things are members of their kind in virtue of their substantial form. To put it simply, Aquinas’ claim results in the implication that every time the xs come to compose a y, those xs have to undergo a change in kind membership.

This has been called the “homonymy principle,” and it follows from Aquinas’ view of substantial forms, and specifically from the position that substantial forms inform prime matter, rather than substance-parts. The aim of this paper is to show that the Thomistic claim that substantial forms account for the determinate actuality of every part of a substance is plausible and coherent. After defending the Thomistic account, I propose that approaching problems of material composition as a Thomist has a significant, oft-overlooked advantage of involving a thorough-going naturalistic methodology that resolves such problems by appeal to empirical considerations.
[Encyclopedia entry] Born in Italy in 1225, and despite a relatively short career that ended around 50 years later in 1274, Thomas Aquinas went on to become one of the most influential medieval thinkers on political and legal questions. Aquinas was educated at both Cologne and Paris, later taking up (after some controversy) a chair as regent master in theology at the University of Paris, where he taught during two separate periods (1256-1259, 1269-1272). In the intermediate period he helped establish a studium for his Order in Rome, beginning work on the Summa Theologiae, the masterwork for which he is still well-known. Subsequent to an experience (traditionally, a vision) on the feast of St. Nicholas in 1273, Aquinas intentionally refrained from further work on that text, so that it remained incomplete at the time of his death a year later in 1274. Apart from his own contributions, the Thomistic school – including followers within and without the Dominican Order to which Aquinas belonged – has had profound and far-reaching influence upon the history of legal thought in the West. Many of the classical developments of Thomistic thought are written as commentaries on the Summa Theologiae (hereafter, ST), including the works of Tommaso de Vio (Cajetan) and those of Domingo Báñez, Francisco de Vitoria, Bartolomé de las Casas, and the other highly influential members of the School of Salamanca, who are noteworthy for developing Aquinas’ political and legal thought in the 16th century. Specifically, the ways in which Aquinas synthesized classical political and legal themes around the law, morality, and the common good provided a touchstone for what has come to be called ‘natural law jurisprudence.’ Natural law thinkers, in short, appeal to objective facts about what is good for human beings, and the social or political nature of the kind of creatures that we are, as a standard against which we measure the legal and social institutions that human beings create. What is crucial here is that facts about human beings as social animals constitute reasons for individuals and groups to act or be structured in certain ways, such that ‘nature’ is the proper source for jurisprudential and political principles.
John Henry Newman's theory of heresiology evolved over the course of his life, accentuating certain Christological characteristics of heresy. He began with the study of the Arian heresy, progressing through the Sabellian and Apollinarian, and ending with the Monophysite. The theory of heresy and orthodoxy finally developed in the Development of Doctrine reflects this struggle to find common features of orthodoxy corresponding to principles governing Christology in the early Church Fathers. As a consequence, Newman's heresiology, in its final stage, holds that faith is inherently Christological, as it depends on the doctrine of the Incarnation to secure the authority of the divine voice of Christ to which one submits in faith.
There has been recent epistemological interest as to whether knowledge is “transmitted” by testimony from the testifier to the hearer, where a hearer acquires knowledge “second-hand.” Yet there is a related area in the epistemology of testimony which raises a distinct epistemological problem: the relation of understanding to testimony. In what follows, I am interested in one facet of this relation: whether, and how, a hearer can receive testimonial knowledge without fully understanding the content of the testimony. I use Thomas Aquinas to motivate a case where, in principle, the content of received testimony cannot be understood but nevertheless constitutes knowledge. Aquinas not only argues that we can receive testimonial knowledge without understanding the content of that testimony, but that we have duties to do so in certain cases.
English-speaking research on morally right decisions in a healthcare context over the past three decades has been dominated by two major perspectives, namely, the Four Principles, of which the principle of respect for autonomy has been most salient, and the ethic of care, often presented as a rival to not only a focus on autonomy but also a reliance on principles more generally. In my contribution, I present a novel ethic applicable to bioethics, particularly as it concerns human procreation, that I argue is a promising alternative to these two approaches. According to this new moral theory, an act is right just insofar as it treats people’s capacity to commune with respect, where communing is a matter of identifying with others and exhibiting solidarity with them. This ethic is inspired by relational ideals of communion and harmony from the African philosophical tradition, but is shown to be attractive to a broad, indeed global, audience, with regard to its implications for the morality of reproduction.
Theophrastus' treatise "Metaphysics" contains a compact and critical reconstruction of unsolved systematic problems of classical Greek philosophy. It is primarily concerned with fundamental problems of ontology and natural philosophy, such as the question of the interdependence of principles and perceptible phenomena, or the plausibility of teleology as a methodological principle for the explanation of nature. The aim of the critical Greek-German edition (with introduction and commentary) is to make visible the systematic significance of Theophrastus' critique of metaphysics.
Background: Community-based education (CBE) involves educating the head (cognitive), heart (affective), and the hand (practical) by utilizing tools that enable us to broaden and interrogate our value systems. This article reports on the use of virtue ethics (VE) theory for understanding the principles that create, maintain and sustain a socially accountable community placement programme for undergraduate medical students. Our research questions driving this secondary analysis were: what are the goods internal to the successful practice of CBE in medicine, and what are the virtues that are likely to promote and sustain them?

Methods: We conducted a secondary theoretically informed thematic analysis of the primary data, based on MacIntyre’s virtue ethics theory as the conceptual framework.

Results: Virtue ethics is an ethical approach that emphasizes the role of character and virtue in shaping moral behavior; when individuals engage in practices (such as CBE), goods internal to those practices (such as a collaborative attitude) strengthen the practices themselves, but also augment those individuals’ virtues and those of their community (such as empathy). We identified several goods that are internal to the practice of CBE, and accompanying virtues, as important for the development, implementation and sustainability of a socially accountable community placement programme. A service-oriented mind-set, a deep understanding of community needs, a transformed mind, and a collaborative approach emerged as goods internal to the practice of a socially accountable CBE. The virtues needed to sustain the identified internal goods included empathy and compassion, connectedness, accountability, engagement (sustained relationship), cooperation, perseverance, and willingness to be an agent of change.

Conclusion: This study found that MacIntyre’s virtue ethics theory provided a useful theoretical lens for understanding the principles that create, maintain and sustain CBE practice.
One of the main issues that dominates Neoplatonism in late antique philosophy of the 3rd–6th centuries A.D. is the nature of the first principle, called the ‘One’. From Plotinus onward, the principle is characterized as the cause of all things, since it produces the plurality of intelligible Forms, which in turn constitute the world’s rational and material structure. Given this, the tension that faces Neoplatonists is that the One, as the first cause, must transcend all things that are characterized by plurality—yet because it causes plurality, the One must anticipate plurality within itself. This becomes the main motivation for this study’s focus on two late Neoplatonists, Proclus (5th cent. A.D.) and Damascius (late 5th–early 6th cent. A.D.): both attempt to address this tension in two rather different ways. Proclus’ attempted solution is to posit intermediate principles (the ‘henads’) that mirror the One’s nature, as ‘one’, but directly cause plurality. This makes the One only a cause of unity, while its production of plurality is mediated by the henads that it produces. Damascius, while appropriating Proclus’ framework, thinks that this is not enough: if the One is posited as a cause of all things, it must be directly related to plurality, even if its causality is mediated through the henads. Damascius then splits Proclus’ One into two entities: (1) the Ineffable as the first ‘principle’, which is absolutely transcendent and has no causal relation; and (2) the One as the first ‘cause’ of all things, which is only relatively transcendent under the Ineffable.

Previous studies that compare Proclus and Damascius tend to focus either on the Ineffable or on a skeptical shift in epistemology, but little work has been done on the causal framework which underlies both figures’ positions. Thus, this study proposes to focus on the causal frameworks behind each figure: why and how does Proclus propose to assert that the One is a cause, at the same time that it transcends its final effect? And what leads Damascius to propose a notion of the One’s causality that no longer makes it transcendent in the way that a higher principle, like the Ineffable, is? The present work will answer these questions in two parts. In the first, Proclus’ and Damascius’ notions of causality will be examined, insofar as they apply to all levels of being. In the second part, the One’s causality will be examined for both figures: for Proclus, the One’s causality in itself and the causality of its intermediate principles; for Damascius, the One’s causality, and how the Ineffable is needed to explain the One. The outcome of this study will show that Proclus’ framework results in an inner tension that Damascius is responding to with his notion of the One. While Damascius’ own solution implies its own tension, he at least solves a difficulty in Proclus—and in so doing, partially returns to a notion of the One much like Iamblichus’ and Plotinus’ One.
In his new book, Knowledge: The Philosophical Quest in History, Steve Fuller returns to core themes of his program of social epistemology that he first outlined in his 1988 book, Social Epistemology. He develops a new, unorthodox theology and philosophy building upon his testimony in Kitzmiller v. Dover Area School District in defense of intelligent design, leading to a call for maximal human experimentation. Beginning from the theological premise rooted in the Abrahamic religious tradition that we are created in the image of God, Fuller argues that the spark of the divine within us distinguishes us from animals. I argue that Fuller’s recent work takes us away from key insights of his original work. In contrast, I advocate for a program of social epistemology rooted in evolutionary science rather than intelligent design, emphasize a precautionary and ecological approach rather than a proactionary approach that favors risky human experimentation, and attend to our material and sociological embeddedness rather than a transhumanist repudiation of the body.
In this article I develop an elementary system of axioms for Euclidean geometry. On the one hand, the system is based on the symmetry principles which express our a priori ignorant approach to space: all places are the same to us, all directions are the same to us, and all units of length we use to create geometric figures are the same to us. On the other hand, through the process of algebraic simplification, this system of axioms directly yields Weyl’s system of axioms for Euclidean geometry. The system of axioms, together with its a priori interpretation, offers new views to the philosophy and pedagogy of mathematics: it supports the thesis that Euclidean geometry is a priori; it supports the thesis that in modern mathematics Weyl’s system of axioms is dominant over Euclid’s system because it reflects the a priori underlying symmetries; and it gives a new and promising approach to learning geometry which, through Weyl’s system of axioms, leads from the essential geometric symmetry principles directly to modern mathematics.
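For readers unfamiliar with it, Weyl's system characterizes Euclidean geometry via a set of points acted on by a real inner-product space of translation vectors. The rendering below is the standard textbook form, not a quotation of the article's own axiom list.

```latex
% Weyl's axioms (standard form): a Euclidean space is a pair (P, V),
% with V a real inner-product space acting on the point set P via
% + : P x V -> P, such that:
\[
\begin{aligned}
&\text{(W1)}\quad p + 0 = p &&\text{for all } p \in P;\\
&\text{(W2)}\quad (p + u) + v = p + (u + v) &&\text{for all } p \in P,\ u, v \in V;\\
&\text{(W3)}\quad \text{for all } p, q \in P \text{ there is exactly one } v \in V \text{ with } p + v = q.
\end{aligned}
\]
% Distance is recovered as d(p, q) = ||v|| for the unique v with p + v = q.
```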
The vacuum energy density of a free scalar quantum field Φ in a Rindler distributional space-time with distributional Levi-Cività connection is considered. It has been widely believed that, except in very extreme situations, the influence of acceleration on quantum fields should amount to just small, sub-dominant contributions. Here we argue that this belief is wrong by showing that in a Rindler distributional background space-time with distributional Levi-Cività connection the vacuum energy of free quantum fields is forced, by the very same distributional background space-time, to become dominant over any classical energy density component. This semiclassical gravity effect finds its roots in the singular behavior of quantum fields on a Rindler distributional space-time with distributional Levi-Cività connection. In particular we obtain that the vacuum fluctuations ⟨Φ²⟩ behave singularly at the Rindler horizon, diverging as the proper acceleration a → ∞. Therefore a sufficiently strongly accelerated observer burns up near the Rindler horizon. Thus Polchinski’s account does not violate the Einstein equivalence principle.
At the present time there is a boom in the use of pharmacological cognitive enhancers (PCEs), particularly within academic and labor contexts. Numerous objections to the use of these medicines arise in the context of neuroethics, one of the most important being the principle of justice. Among the most prevalent arguments put forward are the disturbance of distributive justice and of competitive fairness.

Succinctly, it is claimed that hypothetical PCEs without adverse effects could promote social fragmentation by favoring economically dominant classes. However, it has been experimentally observed that the benefits of PCEs follow an inverted-U phenomenon: the cognitive benefits given by these medicines are not dose-dependent and depend on baseline performance, producing greater benefits in individuals whose initial performance was worse. In this way the use of PCEs, assuming a context of open access, could contribute to social equity and distributive justice.
People with the kind of preferences that give rise to the St. Petersburg paradox are problematic---but not because there is anything wrong with infinite utilities. Rather, such people cannot assign the St. Petersburg gamble any value that any kind of outcome could possibly have. Their preferences also violate an infinitary generalization of Savage's Sure Thing Principle, which we call the *Countable Sure Thing Principle*, as well as an infinitary generalization of von Neumann and Morgenstern's Independence axiom, which we call *Countable Independence*. In violating these principles, they display foibles like those of people who deviate from standard expected utility theory in more mundane cases: they choose dominated strategies, pay to avoid information, and reject expert advice. We precisely characterize the preference relations that satisfy Countable Independence in several equivalent ways: a structural constraint on preferences, a representation theorem, and the principle we began with, that every prospect has a value that some outcome could have.
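For reference, the St. Petersburg gamble mentioned above pays 2^n units if the first heads in a sequence of fair coin flips occurs on flip n, so its expected value diverges; this is the standard calculation behind the claim that no outcome-sized value can be assigned to the gamble:

```latex
\[
\mathbb{E}[\text{St.\ Petersburg}]
\;=\; \sum_{n=1}^{\infty} \underbrace{\tfrac{1}{2^{n}}}_{\text{prob.}} \cdot \underbrace{2^{n}}_{\text{payoff}}
\;=\; \sum_{n=1}^{\infty} 1
\;=\; \infty .
\]
```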
Although expected utility theory has proven a fruitful and elegant theory in the finite realm, attempts to generalize it to infinite values have resulted in many paradoxes. In this paper, we argue that the use of John Conway's surreal numbers can provide a firm mathematical foundation for transfinite decision theory. To that end, we prove a surreal representation theorem and show that our surreal decision theory respects dominance reasoning even in the case of infinite values. We then bring our theory to bear on one of the more venerable decision problems in the literature: Pascal's Wager. Analyzing the wager showcases our theory's virtues and advantages. To that end, we analyze two objections against the wager: Mixed Strategies and Many Gods. After formulating the two objections in the framework of surreal utilities and probabilities, our theory correctly predicts that (1) the pure Pascalian strategy beats all mixed strategies, and (2) what one should do in a Pascalian decision problem depends on what one's credence function is like. Our analysis therefore suggests that although Pascal's Wager is mathematically coherent, it does not deliver what it purports to: a rationally compelling argument that people should lead a religious life regardless of how confident they are in theism and its alternatives.
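The classic payoff matrix behind the wager may help situate the two objections; this is the standard presentation with a single infinite payoff, whereas the paper's surreal-valued utilities and credences are not reproduced here.

```latex
% Pascal's Wager, classic payoff matrix (f1, f2, f3 finite):
\[
\begin{array}{l|cc}
 & \text{God exists } (p > 0) & \text{God does not exist } (1 - p)\\
\hline
\text{wager for God} & \infty & f_1\\
\text{wager against God} & f_2 & f_3
\end{array}
\]
% On standard expected utility theory, any mixed strategy that puts positive
% weight on wagering for God also has infinite expected utility -- which is
% exactly the mixed-strategies objection the paper addresses.
```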
Let’s say that you regard two things as on a par when you don’t prefer one to the other and aren’t indifferent between them. What does rationality require of you when choosing between risky options whose outcomes you regard as on a par? According to Prospectism, you are required to choose the option with the best prospects, where an option’s prospects are given by a probability distribution over its potential outcomes. In this paper, I argue that Prospectism violates a dominance principle—which I call The Principle of Predominance—because it sometimes requires you to do something that’s no better than the alternatives and might be worse. I argue that this undermines the strongest argument that’s been given in favor of Prospectism.
Within the United States, the most prominent justification for criminal punishment is retributivism. This retributivist justification for punishment maintains that punishment of a wrongdoer is justified for the reason that she deserves something bad to happen to her just because she has knowingly done wrong—this could include pain, deprivation, or death. For the retributivist, it is the basic desert attached to the criminal’s immoral action alone that provides the justification for punishment. This means that the retributivist position is not reducible to consequentialist considerations, nor in justifying punishment does it appeal to wider goods such as the safety of society or the moral improvement of those being punished. A number of sentencing guidelines in the U.S. have adopted desert as their distributive principle, and it is increasingly given deference in the “purposes” section of state criminal codes, where it can be the guiding principle in the interpretation and application of the code’s provisions. Indeed, the American Law Institute recently revised the Model Penal Code so as to set desert as the official dominant principle for sentencing. And courts have identified desert as the guiding principle in a variety of contexts, as with the Supreme Court’s enthroning retributivism as the “primary justification for the death penalty.” While retributivism provides one of the main sources of justification for punishment within the criminal justice system, there are good philosophical and practical reasons for rejecting it. One such reason is that it is unclear that agents truly deserve to suffer for the wrongs they have done in the sense required by retributivism. In the first section, I explore the retributivist justification of punishment and explain why it is inconsistent with free will skepticism. In the second section, I then argue that even if one is not convinced by the arguments for free will skepticism, there remains a strong epistemic argument against causing harm on retributivist grounds that undermines both libertarian and compatibilist attempts to justify it. I maintain that this argument provides sufficient reason for rejecting the retributive justification of criminal punishment. I conclude in the third section by briefly sketching my public health-quarantine model, a non-retributive alternative for addressing criminal behavior that draws on the public health framework and prioritizes prevention and social justice. I argue that the model is not only consistent with free will skepticism and the epistemic argument against retributivism, it also provides the most justified, humane, and effective way of dealing with criminal behavior.
Orthodox decision theory gives no advice to agents who hold two goods to be incommensurate in value because such agents will have incomplete preferences. According to standard treatments, rationality requires complete preferences, so such agents are irrational. Experience shows, however, that incomplete preferences are ubiquitous in ordinary life. In this paper, we aim to do two things: (1) show that there is a good case for revising decision theory so as to allow it to apply non-vacuously to agents with incomplete preferences, and (2) identify one substantive criterion that any such non-standard decision theory must obey. Our criterion, Competitiveness, is a weaker version of a dominance principle. Despite its modesty, Competitiveness is incompatible with prospectism, a recently developed decision theory for agents with incomplete preferences. We spend the final part of the paper showing why Competitiveness should be retained, and prospectism rejected.
This paper has three interdependent aims. The first is to make Reichenbach’s views on induction and probabilities clearer, especially as they pertain to his pragmatic justification of induction. The second aim is to show how his view of pragmatic justification arises out of his commitment to extensional empiricism and moots the possibility of a non-pragmatic justification of induction. Finally, and most importantly, a formal decision-theoretic account of Reichenbach’s pragmatic justification is offered in terms both of the minimax principle and the dominance principle.
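A schematic decision matrix conveys the dominance-style reconstruction at issue; this is the common textbook rendering of Reichenbach's pragmatic argument, offered for orientation rather than as the paper's own formalism.

```latex
\[
\begin{array}{l|cc}
 & \text{nature is uniform} & \text{nature is not uniform}\\
\hline
\text{use induction} & \text{success} & \text{failure}\\
\text{use some other method} & \text{success or failure} & \text{failure}
\end{array}
\]
% Induction weakly dominates: if any method succeeds in the long run,
% induction does too, so one loses nothing by positing it.
```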
In The Birthright Lottery, Ayelet Shachar subjects the institution of birthright citizenship to close scrutiny by applying to citizenship the historical and philosophical critique of hereditary ownership built up over four centuries of liberal and democratic theory, and proposing compelling alternatives drawn from the theory of private law to the usual modes of conveyance of membership. Nonetheless, there are some difficulties with this critique. First, the analogy between entailed property and birthright citizenship is not as illustrative as Shachar intends it to be; second, the mechanism of the birthright privilege levy is insufficient for addressing structural impediments to growth; and third, the principle of ius nexi, while an important corrective to currently dominant principles of nationality, will likely have effects both unnecessary and insufficient to correct the injustices that Shachar identifies. In the end, the most significant improvements in the lives of the neediest persons on the planet are more likely advanced through conventional arguments for the lowering of barriers to the circulation of goods, labor, and capital. This shift in attention from opening borders to extending citizenship risks being a distraction from more effective means of addressing the injustices associated with global inequality.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
In the light of Halliday's Ideational Grammatical Metaphor, Rhetoric and Critical Discourse Analysis, the major objectives of this study are to investigate and analyze Barack Obama's five speeches from 2012, which amount to 19,383 words, with respect to the frequency and functions of nominalization, rhetorical strategies, passivization and modality, so as to grasp the effective and dominant principles and tropes utilized in political discourse. Fairclough’s Critical Discourse Analysis framework, based on a Hallidayan perspective, is used to depict the orator’s deft and clever use of these strategies in the speeches, which are bound up with his overall political purposes. The results show that nominalization, parallelism, unification strategies and modality dominate in his speeches. There are some antitheses and expletive devices, as well as passive voices, in these texts. Accordingly, in terms of nominalization, some implications are drawn for political writing and reading, and for translators and instructors engaged in reading and writing pedagogy.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what would possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
The dominant view on the ethics of cognitive enhancement (CE) is that CE is beholden to the principle of autonomy. However, this principle does not seem to reflect commonly held ethical judgments about enhancement. Is the principle of autonomy at fault, or should common judgments be adjusted? Here I argue for the first, and show how common judgments can be justified as based on a principle of service.
This paper offers an account of human dignity based on a discussion of Kant's moral and political philosophy and then shows its relevance for articulating and developing in a fresh way some normative dimensions of Marx’s critique of capitalism as involving exploitation, domination, and alienation, and the view of socialism as involving a combination of freedom and solidarity. What is advanced here is not Kant’s own conception of dignity, but an account that partly builds on that conception and partly criticizes it. The same is the case with the account of socialism in relation to Marx’s work. As articulated, Kantian dignity and Marxian socialism turn out to be quite appealing and mutually supportive.
The literature on counterfactuals is dominated by strict accounts (SA) and variably strict accounts (VSA). Counterexamples to the principle of Antecedent Strengthening were thought to be fatal to SA; but it has been shown that by adding dynamic resources to the view, such examples can be accounted for. We broaden the debate between VSA and SA by focusing on a new strengthening principle, Strengthening with a Possibility. We show that dynamic SA classically validates this principle. We give a counterexample to it and show that extra dynamic resources cannot help SA. We then show that VSA accounts for the counterexample if it allows for orderings on worlds that are not almost-connected, and that such an ordering naturally falls out of a Kratzerian ordering source semantics. We conclude that the failure of Strengthening with a Possibility tells strongly against Dynamic SA and in favor of an ordering source-based version of VSA.
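For orientation, Antecedent Strengthening is the inference pattern whose counterexamples originally told against SA; the schema and stock counterexample below are textbook material, distinct from the paper's new principle.

```latex
% Antecedent Strengthening (invalid on variably strict accounts):
\[
A > C \;\;\therefore\;\; (A \wedge B) > C
\]
% Stock counterexample: "If I strike this match, it will light" can be
% true while "If I strike this match and it is wet, it will light" is false.
```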
We seek to elucidate the philosophical context in which one of the most important conceptual transformations of modern mathematics took place, namely the so-called revolution in rigor in infinitesimal calculus and mathematical analysis. Some of the protagonists of the said revolution were Cauchy, Cantor, Dedekind, and Weierstrass. The dominant current of philosophy in Germany at the time was neo-Kantianism. Among its various currents, the Marburg school (Cohen, Natorp, Cassirer, and others) was the one most interested in matters scientific and mathematical. Our main thesis is that Marburg neo-Kantian philosophy formulated a sophisticated position towards the problems raised by the concepts of limits and infinitesimals. The Marburg school neither clung to the traditional approach of logically and metaphysically dubious infinitesimals, nor whiggishly subscribed to the new orthodoxy of the “great triumvirate” of Cantor, Dedekind, and Weierstrass that declared infinitesimals conceptus non grati in mathematical discourse. Rather, following Cohen’s lead, the Marburg philosophers sought to clarify Leibniz’s principle of continuity, and to exploit it in making sense of infinitesimals and related concepts.
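The emblem of the rigorization in question is the Weierstrassian epsilon-delta definition of the limit, which removes infinitesimals from the official formulation of the calculus; the standard form is added here for orientation.

```latex
\[
\lim_{x \to a} f(x) = L
\;\;:\!\iff\;\;
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x \;
\bigl( 0 < |x - a| < \delta \;\Rightarrow\; |f(x) - L| < \varepsilon \bigr).
\]
```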
In this paper, I argue that the principle of respect for autonomy can serve as the basis for laws that significantly limit conduct, including orders mandating isolation and quarantine. This thesis is fundamentally at odds with an overwhelming consensus in contemporary bioethics that the principle of respect for autonomy, while important in everyday clinical encounters, must be 'curtailed', 'constrained', or 'overridden' by other principles in times of crisis. I contend that bioethicists have embraced an indefensibly 'thin' notion of autonomy that uproots the concept from its foundations in Kantian ethics. According to this thin conception, respect for autonomy, if unconditioned by competing principles (beneficence, justice, non-maleficence), would give competent adults the right to do anything they desired to do so long as they satisfied certain baseline psychological conditions. I argue that the dominant 'principlist' model of bioethical reasoning depends on this thin view of autonomy and show how it deprives us of powerful analytical tools that would help us to think seriously about the foundations of human rights, justice, and law. Then, I offer a brief sketch of a 'thick', historically grounded notion of autonomy and show what we could gain by taking it seriously.
Desire has not been at the center of recent preoccupations in the philosophy of mind. Consequently, the literature has settled into several dogmas. The first part of this introduction presents these dogmas and invites readers to scrutinize them. The main dogma is that desires are motivational states. This approach contrasts with the other dominant conception: desires are positive evaluations. But there are at least four other dogmas: the world should conform to our desires (world-to-mind direction of fit), desires involve a positive evaluation (the “guise of the good”), we cannot desire what we think is actual (the “death of desire” principle), and, in neuroscience, the idea that the reward system is the key to understanding desire. The second part of the introduction summarizes the contributions to this volume. The hope is to contribute to the emergence of a fruitful debate on this neglected, albeit crucial, aspect of the mind.
An exciting theory in neuroscience is that the brain is an organ for prediction error minimization (PEM). This theory is rapidly gaining influence and is set to dominate the science of mind and brain in the years to come. PEM has extreme explanatory ambition and profound philosophical implications. Here, I assume the theory, briefly explain it, and then argue that PEM implies that the brain is essentially self-evidencing. This means it is imperative to identify an evidentiary boundary between the brain and its environment. This boundary defines the mind-world relation, opens the door to skepticism, and makes the mind appear more inferentially secluded and neurocentrically skull-bound than many would nowadays think. Therefore, PEM somewhat deflates contemporary hypotheses that cognition is extended, embodied and enactive; however, it can nevertheless accommodate the kinds of cases that fuel these hypotheses.
Which of the two dominant arguments for duties to alleviate global poverty, supposing their premises were generally accepted, would be more likely to produce their desired outcome? I take Pogge's argument for obligations grounded in principles of justice, a "contribution" argument, and Campbell's argument for obligations grounded in principles of humanity, an "assistance" argument, to be prototypical. Were people to accept the premises of Campbell's argument, how likely would they be to support governmental reform in policies for international aid, or to make individual contributions to international aid organizations? And I ask the same question, mutatis mutandis, for Pogge's argument.
For aggregative theories of moral value, it is a challenge to rank worlds that each contain infinitely many valuable events. And, although there are several existing proposals for doing so, few provide a cardinal measure of each world's value. This raises the even greater challenge of ranking lotteries over such worlds—without a cardinal value for each world, we cannot apply expected value theory. How then can we compare such lotteries? To date, we have just one method for doing so (proposed separately by Arntzenius, Bostrom, and Meacham), which is to compare the prospects for value at each individual location, and to then represent and compare lotteries by their expected values at each of those locations. But, as I show here, this approach violates several key principles of decision theory and generates some implausible verdicts. I propose an alternative—one which delivers plausible rankings of lotteries, which is implied by a plausible collection of axioms, and which can be applied alongside almost any ranking of infinite worlds.
In this article I articulate and defend an African moral theory, i.e., a basic and general principle grounding all particular duties that is informed by sub-Saharan values commonly associated with talk of "ubuntu" and cognate terms that signify personhood or humanness. The favoured interpretation of ubuntu (as of 2007) is the principle that an action is right insofar as it respects harmonious relationships, ones in which people identify with, and exhibit solidarity toward, one another. I maintain that this is the most defensible moral theory with an African pedigree at the time, and that it should be developed further with an eye to rivalling dominant Western theories such as utilitarianism and Kantianism.
One of the main challenges faced by realists in political philosophy is that of offering an account of authority that is genuinely normative and yet does not consist of a moralistic application of general, abstract ethical principles to the practice of politics. Political moralists typically start by devising a conception of justice based on their pre-political moral commitments; authority would then be legitimate only if political power is exercised in accordance with justice. As an alternative to that dominant approach I put forward the idea that upturning the relationship between justice and legitimacy affords a normative notion of authority that does not depend on a pre-political account of morality, and thus avoids some serious problems faced by mainstream theories of justice. I then argue that the appropriate purpose of justice is simply to specify the implementation of an independently grounded conception of legitimacy, which in turn rests on a context- and practice-sensitive understanding of the purpose of political power.
In this volume Axel Honneth deepens and develops his highly influential theory of recognition, showing how it enables us both to rethink the concept of justice and to offer a compelling account of the relationship between social reproduction and individual identity formation. Drawing on his reassessment of Hegel’s practical philosophy, Honneth argues that our conception of social justice should be redirected from a preoccupation with the principles of distributing goods to a focus on the measures for creating symmetrical relations of recognition. This theoretical reorientation has far-reaching implications for the theory of justice, as it obliges this theory to engage directly with problems concerning the organization of work and with the ideologies that stabilize relations of domination. In the final part of this volume Honneth shows how the theory of recognition provides a fruitful and illuminating way of exploring the relation between social reproduction and identity formation. Rather than seeing groups as regressive social forms that threaten the autonomy of the individual, Honneth argues that the ‘I’ is dependent on forms of social recognition embodied in groups, since neither self-respect nor self-esteem can be maintained without the supportive experience of practising shared values in the group. This important new book by one of the leading social philosophers of our time will be of great interest to students and scholars in philosophy, sociology, politics and the humanities and social sciences generally.
Respect for autonomy and beneficence are frequently regarded as the two essential principles of medical ethics, and the potential for these two principles to come into conflict is often emphasised as a fundamental problem. On the one hand, we have the value of beneficence, the driving force of medicine, which demands that medical professionals act to protect or promote the wellbeing of patients or research subjects. On the other, we have a principle of respect for autonomy, which demands that we respect the self-regarding decisions of individuals. As well as routinely coming into opposition with the demands of beneficence in medicine, the principle of respect for autonomy in medical ethics is often seen as providing protection against beneficial coercion (i.e. paternalism) in medicine. However, these two values are not as straightforwardly opposed as they may appear on the surface. In fact, the way that we understand autonomy can lead us to implicitly sanction a great deal of paternalistic action, or can smuggle in paternalistic elements under the guise of respect for autonomy.

This paper is dedicated to outlining three ways in which the principle of respect for autonomy, depending on how we understand the concept of autonomy, can sanction or smuggle in paternalistic elements. As the specific relationship between respect for autonomy and beneficence will depend on how we conceive of autonomy, I begin by outlining two dominant conceptions of autonomy, both of which have great influence in medical ethics. I then turn to the three ways in which how we understand or employ autonomy can increase or support paternalism: firstly, when we equate respect for autonomy with respect for persons; secondly, when our judgements about what qualifies as an autonomous action contain intersubjective elements; and thirdly, when we expect autonomy to play an instrumental role, that is, when we expect people, when they are acting autonomously, to act in a way that promotes or protects their own wellbeing. I then provide a proposal for how we might work to avoid this. I will suggest that it may be impossible to fully separate paternalistic elements out from judgements about autonomy. Instead, we are better off looking at why we are motivated to use judgements about autonomy as a means of restricting the actions of patients or research subjects. I will argue that this is a result of discomfort about speaking directly about our beneficent motivations in medical ethics. Perhaps we can reduce the incentive to smuggle in these beneficent motivations under the guise of autonomy by talking directly about beneficent motivations in medicine. This will also force us to recognise paternalistic motivations in medicine when they appear, and to justify paternalism where it occurs.
Physics has been slowly and reluctantly beginning to address the role and fundamental basis of the 'observer', which has until now been considered metaphysical and beyond the mandate of empirical rigor. It is suggested that the fundamental premise of the currently dominant view of cognitive theory, "Mind Equals Brain", is erroneous, and that the associated belief that the Planck scale, 'the so-called basement level of reality', is an appropriate arena from which to model psycho-physical bridging is also in error. In this paper we delineate a simple, inexpensive experimental design to 'crack the so-called cosmic egg', thereby opening the door to large-scale extra dimensions (LSXD) tantamount to the regime of the unified field and thus awareness. The methodology surmounts the quantum uncertainty principle in a manner violating quantum electrodynamics (QED), a cornerstone of modern theoretical physics, by spectrographic analysis of newly theorized tight-bound state (TBS) Bohr orbits in 'continuous-state' transition frequencies of atomic hydrogen. If one wonders why QED violation in the spectra of atomic hydrogen relates to solving the mind-body (observer) problem, consider this a first wrench in a forthcoming toolbox of Unified Field Mechanics (UF) that will soon enough in retrospect cause the current tools of classical and quantum mechanics to appear as stone axes. Max Planck is credited as the founder of quantum mechanics with his 1900 quantum hypothesis that energy is radiated and absorbed discretely, by the formulation E = hν. Empirically implementing this next paradigm shift, utilizing parameters of the long-sought associated 'new physics' of the 3rd regime (classical-quantum-unified), allows access to LSXD of space, thus pragmatically opening the domain of mental action for the first time in history. This rendering constitutes a massive paradigm shift to Unified Field Theory, creating a challenge for both the writer and the reader!
On Jaeggi’s reading, the immanent and progressive features of ideology critique are rooted in the connection between its explanatory and its normative tasks. I argue that this claim can be cashed out in terms of the mechanisms involved in a functional explanation of ideology and that stability plays a crucial role in this connection. On this reading, beliefs can be said to be ideological if (a) they have the function of supporting existing social practices, (b) they are the output of systematically distorted processes of belief formation, (c) the conditions in which distorting mechanisms trigger can be traced back to structural causal factors shaped by the social practice their outputs are designed to support. Functional problems thus turn out to be interlocked with normative problems because ideology fails to provide principles to regulate cooperation that would be accepted under conditions of non-domination, hence failing to anchor a stable cooperative scheme. By explaining ideology as parasitic on domination, ideology critique points to the conditions under which cooperation stabilizes as those of a practice whose principles are accepted without coercion. Thus, it seems to entail a conception of justice whose principles are articulated as part of a theory of social cooperation.
The Upanishads reveal that in the beginning, nothing existed: “This was but non-existence in the beginning. That became existence. That became ready to be manifest” (Chandogya Upanishad 3.15.1). Creation began from this state of non-existence or non-duality, a state comparable to (0). One can add any number of zeros to (0), but there will be nothing except a big (0), because (0) is a neutral number. If we take (0) as Nirguna Brahman (God without any form or attributes), then from where, and how, did the universe come into existence?

The neutral power (0) cannot produce anything without having an element of duality in it. Although Nirguna Brahman is neutral, it has a positive, a negative, and a neutral pole, constituting its Prakriti or nature. Prakriti has three latent Gunas (modes or qualities): Satva, Rajas, and Tamas. They are related to Gyana Shakti (the power of knowledge), Sankalpa Shakti (the power of ideation), and Kriya Shakti (the power of action). Science holds that the atom is the basic unit from which the universe evolved, and that the atom has three fundamental constituents: the electron, the proton, and the neutron. Satva, Rajas, and Tamas in Indian spirituality are, on this view, mystical names for these constituents of the atom.

According to the Bhagavata Purana, Prakriti is also constituted of the elements of Time, Karma (action/destiny), and Swabhava (innate nature). Time disturbs the equilibrium of the Gunas. From the aspect of Karma is produced an entity called Mahat, in the form of intelligence. Mahat is dominated by Satva and Rajas. From Mahat manifests the next evolute, dominated by Tamas, with three predominant qualities: Dravya (substance), Kriya (action), and intelligence. It forms the Ego principle (Ahamkara).

Ahamkara has three modes: Satvic, Rajasic, and Tamasic. The Satvic is Jnana-oriented, the Rajasic action-oriented, and the Tamasic Dravya (substance)-oriented. From the Tamasic Ahamkara the five gross elements are produced (ether, air, fire, water, and earth: Akash, Vayu, Agni, Jal, and Prithvi). From the Satvic Ahamkara come the guardian deities (the sun and moon, and the deities of the sense organs and organs of action), and from the Rajasic Ahamkara are produced the ten senses (Indriyas), namely the five senses of perception and the five organs of action, together with the faculty of intellect and Prana (life breath).

The Bhagavata says that since these energies, elements, and faculties remained disassociated, they were combined to form a Cosmic Egg. The egg floats in the primal waters for a thousand years. Then God enters this Cosmic Egg and manifests himself as the Cosmic Purusha. He is the first nucleus, the God particle, equivalent to the number (1), which is the embodiment of everything in the universe. The concept of creation and dissolution in Hinduism can be compared to waves in an ocean that appear and disappear incessantly. The Manvantaras are successive episodes of creation emerging from the Cosmic Person, the Manu, who is embodied God-Consciousness, God himself. These episodes of creation are measured in Hinduism in terms of Manvantaras, the epochs of the Manus.

Manu is the ‘First-Born’ (1) of God, the Cosmic Purusha from whom the world has originated. This Cosmic Person (Purusha) has been described as having fourteen biospheres (heavens) that are inhabited by various life forms in the order of the evolution of consciousness. If we begin from the human biosphere (Bhuloka), there are ten astral biospheres that a man must transcend to attain liberation, or Mukti, from the cycle of births and deaths.
The number (1) produces the primary numbers up to (9) through the transmutation of the creational energies and qualities. The process can be related to the descent and cyclical evolution of (1) through ten spiritual stages, finally merging with the non-dual (0). The number (10) marks the merger of (1) with (0). The Hindu worldview is that all life forms in an episode of creation must evolve, in a cyclical process, to become one with the non-dual state (0) and so attain liberation from the cycle of births and deaths.
Kantian ethics today is dominated by followers of Rawls, many of them his former students. Following Rawls, they interpret Kant as a moral constructivist who defines the good in terms of the reasonable. Such readings give priority to the first formulation of the categorical imperative and argue that the other two formulations are (ontologically or definitionally) dependent upon it. In contrast, the aim of my paper is to show that Kant should be interpreted firstly as a moral idealist and secondly as, at least in a certain sense, a particularist who takes morality to involve the exercise of recognitional capacities rather than the following of principles or rules. In claiming that Kant is a moral idealist, we do not mean to imply that he is an anti-realist; indeed, we believe that he is a realist. Rather, by ‘moral idealism’ we mean the position that to be moral is to instantiate an ideal. So understood, moral idealism offers an alternative to both constructivism and utilitarianism.
Hume views the passions as having both intentionality and qualitative character, which, in light of his Separability Principle, seemingly contradicts their simplicity. I reject the dominant solution to this puzzle, which claims that intentionality is an extrinsic property of the passions, arguing that a number of Hume’s claims regarding the intentionality of the passions (pride and humility in particular) give reason to think that an intrinsic account of their intentionality is required. Instead, I propose to resolve this tension by appealing to Hume’s treatment of the ‘distinctions of reason’, as explained by Garrett (1997).
Margaret MacDonald (1907–56) was a central figure in the history of early analytic philosophy in Britain, owing both to her editorial work and to her own writings. While her later work on aesthetics and political philosophy has recently received attention, her early writings of the 1930s present a coherent and, for their time, strikingly original blend of common-sense and scientific philosophy. In these papers, MacDonald tackles the central problems of the philosophy of her day: verification, the problem of induction, and the relationship between philosophical and scientific method. MacDonald’s philosophy of science starts from the principle that we should carefully analyse the elements of scientific practice (particularly its temporal features) and the ways that scientists describe that practice. That is, she applies the techniques of ordinary language philosophy to actual scientific language. MacDonald shows how ‘scientific common-sense’ is inconsistent with both of the dominant schools of philosophy of her day. Bringing MacDonald back into the story of analytic philosophy corrects the impression that in early analytic philosophy there are fundamental dichotomies between the style of Moore and Wittgenstein, on the one hand, and the Vienna Circle, on the other.
Starting from two observations regarding nursing ethics research in the past two decades, namely the dominant influence of both empirical methods and the principles approach, we present the cornerstones of a foundational, argument-based nursing ethics framework. First, we briefly outline the general philosophical-ethical background from which we develop our framework, which rests on three aspects: lived experience, interpretative dialogue, and normative standard. Against this background, we identify and explore three key concepts (vulnerability, care, and dignity) that must be observed in an ethical approach to nursing. Based on these concepts, we argue that the ethical essence of nursing is the provision of care in response to the vulnerability of a human being, in order to maintain, protect, and promote his or her dignity as much as possible.