There is a long-standing disagreement in the philosophy of probability and Bayesian decision theory about whether an agent can hold a meaningful credence about an upcoming action while she deliberates about what to do. Can she believe that it is, say, 70% probable that she will do A, while she chooses whether to do A? No, say some philosophers, for Deliberation Crowds Out Prediction (DCOP), but others disagree. In this paper, we propose a valid core for DCOP, and identify terminological causes for some of the apparent disputes.
Can an agent deliberating about an action A hold a meaningful credence that she will do A? 'No', say some authors, for 'Deliberation Crowds Out Prediction' (DCOP). Others disagree, but we argue here that such disagreements are often terminological. We explain why DCOP holds in a Ramseyian operationalist model of credence, but show that it is trivial to extend this model so that DCOP fails. We then discuss a model due to Joyce, and show that Joyce's rejection of DCOP rests on terminological choices about terms such as 'intention', 'prediction', and 'belief'. Once these choices are in view, they reveal underlying agreement between Joyce and the DCOP-favouring tradition that descends from Ramsey. Joyce's Evidential Autonomy Thesis (EAT) is effectively DCOP in different terminological clothing. Both principles rest on the so-called 'transparency' of first-person present-tensed reflection on one's own mental states.
The surface grammar of reports such as ‘I have a pain in my leg’ suggests that pains are objects which are spatially located in parts of the body. We show that the parallel construction is not available in Mandarin. Further, four philosophically important grammatical features of such reports cannot be reproduced. This suggests that arguments and puzzles surrounding such reports may be tracking artefacts of English, rather than philosophically significant features of the world.
In the philosophy of mind, revelation is the claim that the nature of qualia is revealed in phenomenal experience. In the literature, revelation is often thought of as intuitive but in tension with physicalism. While mentions of revelation are frequent, there is room for further discussion of how precisely to formulate the thesis of revelation and what exactly it amounts to. Drawing on the work of David Lewis, this paper provides a detailed discussion of how the thesis of revelation, as well as its incompatibility with physicalism, is to be understood.
(Recipient of the 2020 Everett Mendelsohn Prize.) This article revisits the development of the protoplasm concept as it originally arose from critiques of the cell theory, and examines how the term “protoplasm” transformed from a botanical term of art in the 1840s to the so-called “living substance” and “the physical basis of life” two decades later. I show that there were two major shifts in biological materialism that needed to occur before protoplasm theory could be elevated to equal status with cell theory in the nineteenth century. First, I argue that biologists had to accept that life could inhere in matter alone, regardless of form. Second, I argue that in the 1840s, ideas of what formless, biological matter was capable of changed dramatically: from a “coagulation paradigm” that had existed since Theophrastus to a more robust conception of matter that was itself capable of movement and self-maintenance. In addition to revisiting Schleiden and Schwann’s original writings on cell theory, this article looks especially closely at Hugo von Mohl’s definition of the protoplasm concept in 1846 and how it differed from his primordial utricle theory of cell structure two years earlier. This article draws on Lakoff and Johnson’s theory of “ontological metaphors” to show that the cell, primordial utricle, and protoplasm can be understood as material container, object, and substance, and that these overlapping distinctions help explain the chaotic and confusing early history of cell theory.
This paper offers a fine-grained analysis of different versions of the well-known sure-thing principle. We show that Savage's formal formulation of the principle, i.e., his second postulate (P2), is strictly stronger than what was originally intended.
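For orientation, here is one common textbook rendering of Savage's P2, in generic notation (a sketch, not necessarily the formulation analysed in the paper): for acts f, g, f′, g′ and an event E,

\[
\left.
\begin{aligned}
& f(s)=f'(s), \quad g(s)=g'(s) && \text{for all } s \in E,\\
& f(s)=g(s), \quad f'(s)=g'(s) && \text{for all } s \notin E
\end{aligned}
\;\right\}
\;\Longrightarrow\;
\bigl(\, f \succsim g \iff f' \succsim g' \,\bigr).
\]

Informally: two acts that agree outside E are ranked without regard to the common consequences they share outside E.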
Most theories of slurs fall into one of two families: those which understand slurring terms to involve special descriptive/informational content (however conveyed), and those which understand them to encode special emotive/expressive content. Our view is that both offer essential insights, but that part of what sets slurs apart is use-theoretic content. In particular, we urge that slurring words belong at the intersection of a number of categories in a sociolinguistic register taxonomy, one that usually includes [+slang] and [+vulgar] and always includes [-polite] and [+derogatory]. Thus, e.g., what distinguishes ‘Chinese’ from ‘chink’ is neither a peculiar sort of descriptive nor emotional content, but rather the fact that ‘chink’ is lexically marked as belonging to different registers than ‘Chinese’. It is, moreover, partly such facts which make slurring ethically unacceptable.
Philosophers debate over the truth of the Doctrine of Doing and Allowing, the thesis that there is a morally significant difference between doing harm and merely allowing harm to happen. Deontologists tend to accept this doctrine, whereas consequentialists tend to reject it. A robust defence of this doctrine would require a conceptual distinction between doing and allowing that both matches our ordinary use of the concepts in a wide range of cases and enables a justification for the alleged moral difference. In this article, I argue not only that a robust defence of this doctrine is available, but also that it is available within a consequentialist framework.
This paper is an attempt to improve the practical argument for belief in God. Some theists, most famously Kant and William James, called our attention to a particular set of beliefs, the Jamesian-type beliefs, which are justified by virtue of their practical significance, and these theists tried to justify theistic beliefs on the exact same ground. I argue, contra the Jamesian tradition, that theistic beliefs are different from the Jamesian-type beliefs and thus cannot be justified on the same ground. I also argue that the practical argument, as it stands, faces a problem of self-defeat. I then construct a new practical argument that avoids both problems. According to this new argument, theistic beliefs are rational to accept because such beliefs best supply us with motivation strong enough to carry out demanding moral tasks.
Today, the lipid bilayer structure is nearly ubiquitous, taken for granted in even the most rudimentary introductions to cell biology. Yet the image of the lipid bilayer, built out of lipids with heads and tails, went from having obscure origins deep in colloid chemical theory in 1924 to being “obvious to any competent physical chemist” by 1935. This chapter examines how this schematic, strictly heuristic explanation of the idea of molecular orientation was developed within colloid physical chemistry, and how the image was transformed into a reflection of the reality and agency of lipid molecules in the biological microworld. Whereas in physical and colloid chemistry these images were considered secondary to instrumental measurement and mathematical modeling of surface phenomena, in biology the manipulable image of the lipid on paper became an essential tool for the molecularization of the cell.
A frequent caveat in online dating profiles – “No fats, femmes, or Asians” – caused an LGBT activist to complain about the bias against Asians in the American gay community, which he called “racial looksism”. In response, he was asked why, if he himself would not date a fat person, he should find others not dating Asians so upsetting. This response embodies a popular attitude that personal preferences or tastes are simply personal matters – they are not subject to moral evaluation. In this paper, I argue, against this popular attitude, that a personal preference like racial looksism is indeed wrong. A preference like racial looksism is wrong because it is an overgeneralization that disrespects individuality by treating people as exchangeable tokens of one type, and such disrespect denies its objects the appreciation that their dignity entitles them to. As it turns out, there is on my account a relevant moral difference between racial looksism and simple looksism.
In his classic book “The Foundations of Statistics”, Savage developed a formal system of rational decision making. The system is based on (i) a set of possible states of the world, (ii) a set of consequences, (iii) a set of acts, which are functions from states to consequences, and (iv) a preference relation over the acts, which represents the preferences of an idealized rational agent. The goal and the culmination of the enterprise is a representation theorem: Any preference relation that satisfies certain arguably acceptable postulates determines a (finitely additive) probability distribution over the states and a utility assignment to the consequences, such that the preferences among acts are determined by their expected utilities. Additional problematic assumptions are, however, required in Savage's proofs. First, there is a Boolean algebra of events (sets of states) which determines the richness of the set of acts. The probabilities are assigned to members of this algebra. Savage's proof requires that this be a σ-algebra (i.e., closed under countably infinite unions and intersections), which makes for an extremely rich preference relation. On Savage's view we should not require subjective probabilities to be σ-additive. He therefore finds the insistence on a σ-algebra peculiar and is unhappy with it. But he sees no way of avoiding it. Second, the assignment of utilities requires the constant act assumption: for every consequence there is a constant act, which produces that consequence in every state. This assumption is known to be highly counterintuitive. The present work contains two mathematical results. The first, and the more difficult one, shows that the σ-algebra assumption can be dropped. The second states that, as long as utilities are assigned to finite gambles only, the constant act assumption can be replaced by the more plausible and much weaker assumption that there are at least two non-equivalent constant acts. The second result also employs a novel way of deriving utilities in Savage-style systems -- without appealing to von Neumann-Morgenstern lotteries. The paper discusses the notion of “idealized agent” that underlies Savage's approach, and argues that the simplified system, which is adequate for all the actual purposes for which the system is designed, involves a more realistic notion of an idealized agent.
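For reference, the conclusion of the representation theorem just described can be written, in generic notation (a sketch, not the paper's own formulation), as the existence of a finitely additive probability P on the algebra of events and a utility function u on consequences such that, for all acts f and g,

\[
f \succsim g \quad\Longleftrightarrow\quad \int_S u\bigl(f(s)\bigr)\,dP(s) \;\ge\; \int_S u\bigl(g(s)\bigr)\,dP(s).
\]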
This book examines some possible ethical principles for resolving moral dilemmas involving water. Existing problems in current water management practices are discussed in light of these principles. Transformation of human water ethics has the potential to be far more effective, cheaper, and more acceptable than some existing means of “regulation”, but transformation of personal and societal ethics needs time, because changes to ethical values are slow.
Background: Previous literature suggests that those with reading disability (RD) have more pronounced deficits during semantic processing in reading as compared to listening comprehension. This discrepancy has been supported by recent neuroimaging studies showing abnormal activity in RD during semantic processing in the visual but not in the auditory modality. Whether effective connectivity between brain regions in RD could also show this pattern of discrepancy has not been investigated. Methodology/Principal Findings: Children (8- to 14-year-olds) were given a semantic task in the visual and auditory modalities that required an association judgment as to whether two sequentially presented words were associated. Effective connectivity was investigated using Dynamic Causal Modeling (DCM) on functional magnetic resonance imaging (fMRI) data. Bayesian Model Selection (BMS) was used separately for each modality to find a winning family of DCM models, separately for typically developing (TD) and RD children. BMS yielded the same winning family, with modulatory effects on bottom-up connections from the input regions to the middle temporal gyrus (MTG) and inferior frontal gyrus (IFG), and with inconclusive evidence regarding top-down modulations. Bayesian Model Averaging (BMA) was thus conducted across models in this winning family and compared across groups. The bottom-up effect from the fusiform gyrus (FG) to MTG, rather than the top-down effect from IFG to MTG, was stronger in TD compared to RD for the visual modality. The stronger bottom-up influence in TD was only evident for related word pairs, not for unrelated pairs. No group differences were noted in the auditory modality. Conclusions/Significance: This study revealed a modality-specific deficit for children with RD in bottom-up effective connectivity from orthographic to semantic processing regions. There were no group differences in connectivity from frontal regions, suggesting that the core deficit in RD is not in top-down modulation.
Causalists and Evidentialists can agree about the right course of action in an (apparent) Newcomb problem, if the causal facts are not as they initially seem. If declining $1,000 causes the Predictor to have placed $1m in the opaque box, CDT agrees with EDT that one-boxing is rational. This creates a difficulty for Causalists. We explain the problem with reference to Dummett's work on backward causation and Lewis's on chance and crystal balls. We show that the possibility that the causal facts might be properly judged to be non-standard in Newcomb problems leads to a dilemma for Causalism. One horn embraces a subjectivist understanding of causation, in a sense analogous to Lewis's own subjectivist conception of objective chance. In this case the analogy with chance reveals a terminological choice point, such that either (i) CDT is completely reconciled with EDT, or (ii) EDT takes precedence in the cases in which the two theories give different recommendations. The other horn of the dilemma rejects subjectivism, but now the analogy with chance suggests that it is simply mysterious why causation so construed should constrain rational action.
My task in this paper is to study Sartre’s ontology as a godless theology. The urgency of defending freedom and responsibility in the face of determinism called for an overarching first principle, a role that God used to play. I first show why such a principle is important and how Sartre filled the void that God had left with a solipsist consciousness. I then characterize Sartre’s ontology of this consciousness as a “dualist monism”, explaining how it supports his radical conception of freedom. Next, by assessing Sartre’s dualist monism through a theological lens, I disclose an inconsistency in his thought concerning the idea that the in-itself is a deterministic plenitude, which presumes a theos different from consciousness and hence threatens monism. Finally, I argue that this inconsistency originates in the finitude of Sartre’s first principle and analyze this finitude by examining the modes of temporality it implies. The entire trajectory problematizes the practice of theo-logy: the idea that a theos stands at the origin of the “logic” (organization or intelligibility) of everything, such that all must be conceived under the logos of the theos. While Sartre forcefully criticized the theology of the infinite, his was nonetheless a theology of finitude.
Savage's framework of subjective preference among acts provides a paradigmatic derivation of rational subjective probabilities within a more general theory of rational decisions. The system is based on a set of possible states of the world, and on acts, which are functions that assign to each state a consequence. The representation theorem states that the given preference between acts is determined by their expected utilities, based on uniquely determined probabilities (assigned to sets of states) and numeric utilities assigned to consequences. Savage's derivation, however, is based on a well-known, highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a "constant act" which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous -- including in simple scenarios suggested by Savage as natural cases for applying his theory. We propose a simplification of the system which yields the representation theorem without the constant act assumption; we need only postulates P1-P6. This is done at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but includes the scenarios that should be handled by a system that models human decisions.
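In generic notation (a sketch, not the paper's own formulation), the constant act assumption mentioned above says that for every consequence c in the set of consequences C there is an act f_c that yields c in every state:

\[
\forall c \in C \;\; \exists f_c \in A \quad \text{such that} \quad f_c(s) = c \ \text{ for all } s \in S.
\]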
Imperatives occur ubiquitously in natural languages. They produce forces which change the addressee’s cognitive state and regulate her actions accordingly. In real life we often receive conflicting orders, typically issued by various authorities with different ranks. A new update semantics is proposed in this paper to formalize this idea. The general properties of this semantics, as well as its background ideas, are discussed extensively. In addition, we compare our framework with other approaches to deontic logic in the context of normative conflicts.
The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or posttranslational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.
The notion of comparative probability defined in Bayesian subjectivist theory stems from the intuitive idea that, for a given pair of events, one event may be considered “more probable” than the other. Yet it is conceivable that there are cases where it is indeterminate which event is more probable, due, for example, to a lack of robust statistical information. We take it that these cases involve indeterminate comparative probabilities. This paper provides a Savage-style decision-theoretic foundation for indeterminate comparative probabilities.
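In generic notation (a sketch, not the paper's own formalism), a comparative probability judgment is a binary relation ⪰ on events, with A ⪰ B read as ‘A is at least as probable as B’; an indeterminate comparison corresponds to a gap in the relation:

\[
A \text{ and } B \text{ are comparatively indeterminate} \quad\text{iff}\quad \neg (A \succsim B) \ \wedge\ \neg (B \succsim A).
\]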
This paper addresses the issue of finite versus countable additivity in Bayesian probability and decision theory -- in particular, Savage's theory of subjective expected utility and personal probability. I show that Savage's reason for not requiring countable additivity in his theory is inconclusive. The assessment leads to an analysis of various highly idealised assumptions commonly adopted in Bayesian theory, where I argue that a healthy dose of what I call conceptual realism is often helpful in understanding the interpretational value of sophisticated mathematical structures employed in applied sciences like decision theory. In the last part, I introduce countable additivity into Savage's theory and explore some technical properties in relation to other axioms of the system.
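For reference, the two additivity conditions at issue are standardly defined as follows (these are the general textbook definitions, not anything specific to this paper): a probability P is finitely additive if P(A ∪ B) = P(A) + P(B) for all disjoint events A and B, and countably additive if in addition

\[
P\Bigl(\,\bigcup_{i=1}^{\infty} A_i\Bigr) \;=\; \sum_{i=1}^{\infty} P(A_i)
\qquad \text{for every sequence of pairwise disjoint events } A_1, A_2, \ldots
\]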
This commentary focuses on explaining the intuition of revelation, an issue that Chalmers (2018) raises in his paper. I first sketch how the truth of revelation provides an explanation for the intuition of revelation, and then assess a physicalist proposal to explain the intuition, one that appeals to Derk Pereboom’s (2011, 2016, 2019) qualitative inaccuracy hypothesis.
It is widely held that the first-order part of Frege's Begriffsschrift is complete. However, there does not seem to have been a formal verification of this received claim. The general concern is that Frege's system is one axiom short in the first-order predicate calculus compared to what is, by now, the standard first-order theory. Yet Frege has one extra inference rule in his system. The question, then, is whether Frege's first-order calculus is still deductively sufficient as far as first-order completeness is concerned. In this short note we confirm that the missing axiom is derivable from his stated axioms and inference rules, and hence that the logical system in the Begriffsschrift is indeed first-order complete.
This paper defends an approach to modeling and models in science that is opposed to model fictionalism of a recent stripe (the “new fictionalism” that takes models to be abstract entities analogous to works of fiction). It further argues that there is a version of fictionalism about models to which my approach is neutral, but that this version only makes sense if one adopts a special sort of antirealism (e.g. constructive empiricism). Otherwise, my approach strongly suggests that one stay away from fictionalism and embrace realism directly.
This paper begins with Thomas Nagel's (1970) investigation of the possibility of altruism in order to examine further how to motivate altruism. When the pursuit of the gratification of one's own desires generally has an immediate causal efficacy, how can one also be motivated to care for others and to act towards the well-being of others? A successful motivational theory of altruism must explain how altruism is possible under all these motivational interferences. The paper starts with an exposition of Nagel's proposal and shows where it is insufficient with regard to this further issue. It then introduces the views of Zhang Zai and Wang Fuzhi, and considers which one could offer a better motivational theory of altruism. All three philosophers offer different insights into the role of human reason/reflection and human sentiments in moral motivation. The paper ends with a proposal for a socioethical moral program that incorporates both moral reason and moral sentiments as motivation.
This short paper has two parts. First, we prove a generalisation of Aumann's surprising impossibility result in the context of rational decision making. In the second part, we discuss the interpretational meaning of some formal setups of epistemic models, and we do so by presenting an interesting puzzle in epistemic logic. The aim is to highlight certain problematic aspects of these epistemic systems concerning the first/third-person asymmetry that underlies both parts of the story. This asymmetry, we argue, reveals certain limits of what epistemic models can be.
The paradox of pain refers to the idea that the folk concept of pain is paradoxical, treating pains as simultaneously mental states and bodily states (e.g. Hill 2005, 2017; Borg et al. 2020). By taking a close look at our pain terms, this paper argues that there is no paradox of pain. The air of paradox dissolves once we recognise that pain terms are polysemous and that there are two separate but related concepts of pain rather than one.
Recently, Christian List and Laura Valentini have attempted to develop a new concept of freedom, criticizing those of the liberal and republican traditions. Their strategy is to find a concept of freedom satisfying the robustness and non-moralized conditions, and to argue that the liberal and republican conceptions are not plausible. However, my view is that List and Valentini do not fairly criticize the republican conception championed by Philip Pettit. In other words, they do not see the real problem of republican freedom, and so commit a straw man fallacy. The real issue for republican freedom is the problem of political legitimacy, not the non-moralized condition. In this paper, I examine the arguments from List and Valentini to explain why the real problem of republican freedom is the problem of political legitimacy. I also explain that, if we take this issue seriously, we can go a step further in understanding the relationship between political freedom and institutions.
Slurs are derogatory, and theories of slurs aim at explaining their “derogatory force”. This paper draws a distinction between the type derogatory force and the token derogatory force of slurs. To explain the type derogatory force is to explain why a slur is a derogatory word. By contrast, to explain the token derogatory force is to explain why an utterance of a slur is derogatory. This distinction will be defended by examples in which the type and the token derogatory force come apart. Because of the distinction, an adequate theory of slurs must be plausible for both the type and the token derogatory force. However, I will argue that many theories fail to be plausible for both. In particular, Hom’s combinatorial externalism and the conventional implicature theory offer implausible accounts of the token derogatory force, whereas the prohibitionist theory is insufficient to explain the type derogatory force.
In this paper I argue against the deflationist view that, as representational vehicles, symbols and models do their jobs in essentially the same way. I argue that symbols are conventional vehicles whose chief function is denotation, while models are epistemic vehicles whose chief function is showing what their targets are like in the relevant respects. It is further pointed out that models usually do not rely on similarity or some such relation to relate to their targets. For that referential relation they rely instead on the symbols (names and labels) given to them and their parts. A Goodmanian view of pictures of fictional characters reveals the distinction between symbolic and model representations.
Since the early nineteenth century a membrane or wall has been central to the cell’s identity as the elementary unit of life. Yet the literally and metaphorically marginal status of the cell membrane made it the site of clashes over the definition of life and the proper way to study it. In this article I show how the modern cell membrane was conceived of by analogy to the first “artificial cell,” invented in 1864 by the chemist Moritz Traube (1826–1894), and reimagined by the plant physiologist Wilhelm Pfeffer (1845–1920) as a precision osmometer. Pfeffer’s artificial cell osmometer became the conceptual and empirical basis for the law of dilute solutions in physical chemistry, but his use of an artificial analogue to theorize the existence of the plasma membrane as distinct from the cell wall prompted debate over whether biology ought to be more closely unified with the physical sciences, or whether it must remain independent as the science of life. By examining how the histories of plant physiology and physical chemistry intertwined through the artificial cell, I argue that modern biology relocated vitality from protoplasmic living matter to nonliving chemical substances—or, in broader cultural terms, that the disenchantment of life was accompanied by the (re)enchantment of ordinary matter.
In a recent paper, Reuter, Seinhold and Sytsma put forward an implicature account to explain the intuitive failure of the pain-in-mouth argument. They argue that utterances such as ‘There is tissue damage / a pain / an inflammation in my mouth’ carry the conversational implicature that there is something wrong with the speaker’s mouth. Appealing to new empirical data, this paper argues against the implicature account and for the entailment account, according to which pain reports using locative locutions, such as ‘There is a pain in my mouth’, are intuitively understood as entailing corresponding predicative locutions, such as ‘My mouth hurts.’ On this latter account, the pain-in-mouth argument seems invalid because the conclusion is naturally understood as entailing something which cannot be inferred from the premisses. Implications for the philosophical debate about pain are also drawn.
Review of: Sophia Roosth, Synthetic: How Life Got Made (University of Chicago Press, 2017); and Andrew S. Balmer, Katie Bulpin, and Susan Molyneux-Hodgson, Synthetic Biology: A Sociology of Changing Practices (Palgrave Macmillan, 2016).
Weakly Aggregative Modal Logic (WAML) is a collection of disguised polyadic modal logics with n-ary modalities whose arguments are all the same. WAML has interesting applications in epistemic logic and the logic of games, so we study some basic model-theoretic aspects of WAML in this paper. Specifically, we give a van Benthem-Rosen characterization theorem for WAML based on an intuitive notion of bisimulation, and show that each basic WAML system Kn lacks Craig interpolation.
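For orientation (a sketch of the usual presentation, which may differ from the paper's own notation), the basic system Kn weakens the aggregation axiom □p ∧ □q → □(p ∧ q) of normal modal logic to an n-ary analogue:

\[
\bigl(\Box \varphi_1 \wedge \cdots \wedge \Box \varphi_{n+1}\bigr) \;\rightarrow\; \Box \bigvee_{1 \le i < j \le n+1} \bigl(\varphi_i \wedge \varphi_j\bigr).
\]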
This review paper is about the security of biometric templates in cloud databases. Biometrics has proved to be the best authentication method. The main concern, however, is the security of the biometric template as it is extracted and then stored in a database alongside many others. Many techniques and methods have already been proposed to secure templates, but each comes with its own pros and cons. This paper provides a critical overview of these issues and solutions.
Biomedical ontologies are emerging as critical tools in genomic and proteomic research, where complex data in disparate resources need to be integrated. A number of ontologies exist that describe the properties that can be attributed to proteins; for example, protein functions are described by the Gene Ontology, while human diseases are described by the Disease Ontology. There is, however, a gap in the current set of ontologies—one that describes the protein entities themselves and their relationships. We have designed a PRotein Ontology (PRO) to facilitate protein annotation and to guide new experiments. The components of PRO extend from the classification of proteins on the basis of evolutionary relationships to the representation of the multiple protein forms of a gene (products generated by genetic variation, alternative splicing, proteolytic cleavage, and other post-translational modifications). PRO will allow the specification of relationships between PRO, GO and other OBO Foundry ontologies. Here we describe the initial development of PRO, illustrated using human proteins from the TGF-beta signaling pathway.
Research has indicated that microRNAs (miRNAs), a special class of non-coding RNAs (ncRNAs), play important roles in different biological and pathological processes. miRNAs’ functions are realized by regulating their respective target genes (targets). It is thus critical to identify and analyze miRNA-target interactions for a better understanding and delineation of miRNAs’ functions. However, conventional knowledge discovery and acquisition methods have many limitations. Fortunately, semantic technologies based on domain ontologies can render great assistance in this regard. In our previous investigations, we developed a miRNA domain-specific application ontology, the Ontology for MIcroRNA Target (OMIT), to provide the community with common data elements and data exchange standards for miRNA research. This paper describes (1) our continuing efforts in the OMIT ontology development and (2) the application of the OMIT to enable a semantic approach for knowledge capture of miRNA-target interactions.
Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’, and the Legal Development, Displacement or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies.
Past work has shown systematic differences between Easterners' and Westerners' intuitions about the reference of proper names. Understanding when these differences emerge in development will help us understand their origins. In the present study, we investigate the referential intuitions of English- and Chinese-speaking children and adults in the U.S. and China. Using a truth-value judgment task modeled on Kripke's classic Gödel case, we find that the cross-cultural differences are already in place at age seven. Thus, these differences cannot be attributed to later education or enculturation. Instead, they must stem from differences that are present in early childhood. We consider alternative theories of reference that are compatible with these findings and discuss the possibility that the cross-cultural differences reflect differences in perspective-taking strategies.
In a recent article, Xiaofei Liu seeks to defend, from the standpoint of consequentialism, the Doctrine of Doing and Allowing (DDA). While there are various conceptions of DDA, Liu understands it as the view that it is more difficult to justify doing harm than allowing harm. Liu argues that a typical doing of harm involves the production of one more evil and one less good than a typical allowing of harm. Thus, prima facie, it takes a greater amount of good to justify doing a certain harm than it does to justify allowing that same harm. In this reply, I argue that Liu fails to show, from within a consequentialist framework, that there is an asymmetry between the evils produced by doing and allowing harm. I conclude with some brief remarks on what may establish such an asymmetry.
The present paper offers a libertarian reading of one of the most important Chinese novels of the twentieth century, The Travels of Laocan, written by Chinese entrepreneur Liu E between 1903 and 1906. I start with an exposition of the ideas associated with the concept of “Asian values,” the evident cultural unviability of this notion, and how “Asian authoritarianism” has been rationalized and justified on the basis of a Hobbesian conception of human nature. Next, I examine Liu E’s life and career as an entrepreneur in a highly interventionist society. Finally, I focus on his magnum opus, The Travels of Laocan, a fictionalized autobiography that explains Liu E’s philosophical and libertarian ideas.
A question arising from the COVID-19 crisis is whether the merits of cases for climate policies have been affected. This article focuses on carbon pricing, in the form of either carbon taxes or emissions trading. It discusses the extent to which the relative costs and benefits of introducing carbon pricing may have changed in the context of COVID-19, during both the crisis and the recovery period to follow. In several ways, the case for introducing a carbon price is stronger during the COVID-19 crisis than under normal conditions. Oil prices are lower than normal, so we would expect less harm to consumers than under normal conditions. Governments have an immediate need for diversified new revenue streams in light of both decreased tax receipts and greater use of social safety nets. Finally, supply and demand shocks have already destabilized supply-side activities, and carbon pricing would allow this destabilization to equilibrate around greener production for the long term. The strengthening of the case for introducing carbon pricing now is highly relevant to discussions about recovery measures, especially in the context of policy announcements from the European Union and the United States House of Representatives. Key Policy Insights:
• Persistently low oil prices mean that consumers will face lower pain from carbon pricing than under normal conditions.
• Many consumers are more price-sensitive in the COVID-19 context, which suggests that a greater relative burden from carbon prices would fall upon producers, as opposed to consumers, than under normal conditions.
• Carbon prices in the COVID-19 context can introduce new revenue streams, assisting with fiscal holes or with other green priorities.
• Carbon pricing would contribute to a more sustainable COVID-19 recovery period, since many of the costs of revamping supply chains are already being felt, while idled labor capacity can be incorporated into firms with lower carbon intensity.
Ontologies, as the term is used in informatics, are structured vocabularies composed of human- and computer-interpretable terms and relations that represent entities and relationships. Within informatics fields, ontologies play an important role in knowledge and data standardization, representation, integration, sharing and analysis. They have also become a foundation of artificial intelligence (AI) research. In what follows, we outline the Coronavirus Infectious Disease Ontology (CIDO), which covers multiple areas in the domain of coronavirus diseases, including etiology, transmission, epidemiology, pathogenesis, diagnosis, prevention, and treatment. We emphasize CIDO development relevant to COVID-19.
Host-microbiome interactions (HMIs) are critical for the modulation of biological processes and are associated with several diseases, and extensive HMI studies have generated large amounts of data. We propose that the logical representation of the knowledge derived from these data, together with the standardized representation of experimental variables and processes, can foster integration of data and reproducibility of experiments and thereby further HMI knowledge discovery. A community-based Ontology of Host-Microbiome Interactions (OHMI) was developed following the OBO Foundry principles. OHMI leverages established ontologies to create logically structured representations of microbiomes, microbial taxonomy, host species, host anatomical entities, and HMIs under different conditions, as well as associated study protocols, types of data analysis, and experimental results.
In a number of papers, Liu Qingping has critiqued Confucianism’s commitment to “consanguineous affection” or filial values, claiming it to be excessive and indefensible. Many have taken issue with his textual readings and interpretive claims, but these responses do little to undermine the force of his central claim that filial values cause widespread corruption in Chinese society. This is not an interpretive claim but an empirical one. If true, it merits serious consideration. But is it true? How can we know? I survey the empirical evidence and argue that there is no stable or direct relationship between filial values and corruption. Instead, other cultural dimensions are more robust predictors of corruption. As it happens, China ranks very high in these other cultural dimensions. I conclude that if the empirical research is correct then Liu’s claims lack support.
This essay briefly evaluates the ongoing controversy between LIU Qingping and GUO Qiyong (and their followers) about the “moral heart” of Confucianism, in order to draw a comparison with Islamic ethics for mutual illumination of the two traditions.
Unlike quantum mechanics, whose foundations have always been at the centre of an uninterrupted debate, the conceptual aspects of statistical mechanics have not attracted such broad interest; among the exceptions, we cite the fine book by Emch and Liu. In this short contribution we discuss some conceptual problems of statistical mechanics, in particular the role of chaos and the emergence of collective properties that appear when the number of particles in the system is very large.
It is our great pleasure to announce that the recipient of the 2020 Everett Mendelsohn Prize is Daniel Liu, whose essay, “The Cell and Protoplasm as Container, Object, and Substance, 1835–1861,” appeared in the Journal of the History of Biology, Volume 50, 4 (2017), pp. 889–925.