We make some remarks on the mathematics and metaphysics of the hole argument, in response to a recent article in this journal by Weatherall ([2018]). Broadly speaking, we defend the mainstream philosophical literature from the claim that correct usage of the mathematics of general relativity `blocks' the argument.
This paper reviews the hole argument as an argument against spacetime substantivalism. After a careful presentation of the argument itself, I critically review possible responses.
I present an account of the passage of time and the present in relativistic spacetimes, and I defend these views against recent criticism by Oliver Pooley and Craig Callender.
In this paper I argue, based on a comparison of Spinoza's and Descartes's discussions of error, that beliefs are affirmations of the content of imagination, which is not false in itself but only in relation to its object. This interpretation improves on both the winning-ideas reading and the interpretation reading of beliefs. Contrary to the winning-ideas reading, it can explain belief revision concerning the same representation. It also does not need the assumption that I misinterpret my otherwise correct ideas, as the interpretation reading would have it. In the first section I provide a brief overview of the notion of inherence and its role in Spinoza's discussion of the status of finite minds. Then, by examining the relation between Spinoza's and Descartes's distinctions of representations and attitudes, I show that affirmation can be identified with belief in Spinoza. Next, I take a closer look at the identification of intellect and will, and argue that Spinoza identifies the two because he sees both as the active aspect of the mind. After that, I analyze Spinoza's comments on the different scopes of will and intellect, and argue that beliefs are affirmations of the imaginative content of the idea. Finally, through Spinoza's example of the utterance of a mathematical error, I present my solution to the problem of the inherence of false beliefs.
Ontologies, as the term is used in informatics, are structured vocabularies comprised of human- and computer-interpretable terms and relations that represent entities and relationships. Within informatics fields, ontologies play an important role in knowledge and data standardization, representation, integration, sharing and analysis. They have also become a foundation of artificial intelligence (AI) research. In what follows, we outline the Coronavirus Infectious Disease Ontology (CIDO), which covers multiple areas in the domain of coronavirus diseases, including etiology, transmission, epidemiology, pathogenesis, diagnosis, prevention, and treatment. We emphasize CIDO development relevant to COVID-19.
In this paper, through a close reading of Spinoza's use of common notions I argue for the role of experiential and experimental knowledge in Spinoza's epistemology.
The immediate occasion for this special issue was Christia Mercer’s influential paper “The Contextualist Revolution in Early Modern Philosophy”. In her paper, Mercer clearly demarcates two methodologies of the history of early modern philosophy. She argues that there has been a silent contextualist revolution in the past decades, and the reconstructivist methodology has been abandoned. One can easily get the impression that ‘reconstructivist’ has become a pejorative label that everyone outright rejects. Mercer’s examples of reconstructivist historians of philosophy are deceased (P. F. Strawson, Margaret J. Osler, Richard Watson, and Bernard Williams), anonymous (the fans of philosopher* mentioned by Lisa Downing), or authors whose more recent work followed a contextualist methodology (Jonathan Bennett). The reconstructivist camp seems to have turned into a shadowy group, largely extinct by now, not unlike the Death Eaters in the fictional universe of Harry Potter. There are some figures who previously belonged to this group, but they have since reformed their ways – or so it seems. Sometimes it is rumoured that certain people may still have secret allegiance to it, but no one dares to fly its banner openly. We decided to create this special issue because we believe that reconstructivist methodology does not deserve this shadowy existence.
In this paper I offer an argument against one important version of panentheism, that is, mereological panentheism. Although panentheism has proven difficult to define, I provide a working definition of the view, and proceed to argue that given this way of thinking about the doctrine, mereological accounts of panentheism have serious theological drawbacks. I then explore some of these theological drawbacks. In a concluding section I give some reasons for thinking that the classical theistic alternative to panentheism is preferable, all things considered.
Oliver Marchart constructs an elaborate ontologization of the political that builds on theories developed by the Essex School while relying on Heideggerianism and Hegelianism. This original thought is a powerful and convincing attempt to think the ontology of the political without lapsing into a celebration of essentialist grounding or complete groundlessness, which are equally metaphysical and mutually supporting positions. Tensions arise within Marchart’s own thought when the notion of instrumentality appears to be inscribed solely on the side of politics or the ontic. I suggest that a theory of practical judgment that is inchoate in Marchart’s own position can resolve the tensions toward constructing a genuinely materialist ontology.
Spinoza’s account of memory has not received enough attention, even though it is relevant for his theory of consciousness. Recent literature has studied the “pancreas problem.” This paper argues that there is an analogous problem for memories: if memories are in the mind, why is the mind not conscious of them? I argue that Spinoza’s account of memory can be better reconstructed in the context of Descartes’s account to show that Spinoza responded to these views. Descartes accounted for the preservation of memories by holding that they are brain states without corresponding mental states, and that the mind is able to interpret perception either as new experience or as memory. Spinoza has none of these conceptual resources because of his substance monism. Spinoza accounts for memories as the mind’s ability to generate ideas according to the order of images. This ability consists in the connection of ideas, which is not an actual property, but only a dispositional one and thus not conscious. It is, however, grounded in the actual property of parts of the body, of which ideas are conscious.
In this paper, I investigate whether Spinoza's theory of intellect can be considered an Averroistic, Themistian or Alexandrian theory of intellect. I identify key doctrines of these theories that are argumentatively and theoretically independent of Aristotelian hylomorphism and thus can be accepted by someone rejecting hylomorphism. Next, I argue that the textual evidence is inconclusive: depending on the reading of Spinoza's philosophy one accepts, Spinoza's theory of intellect can or cannot be considered an Averroistic theory.
In this paper I examine the question of whether Spinoza can account for the necessity of death. I argue that he cannot, because within his ethical intellectualist system the subject cannot understand the cause of her death, since understanding it would render it harmless. I then argue that Spinoza could not solve these difficulties because of deeper commitments of his system. At the end I draw a historical parallel to the problem from medieval philosophy.
This paper uses a schema for infinite regress arguments to provide a solution to the problem of the infinite regress of justification. The solution turns on the falsity of two claims: that a belief is justified only if some belief is a reason for it, and that the reason relation is transitive.
Binding specificity is a centrally important concept in molecular biology, yet it has received little philosophical attention. Here I aim to remedy this by analyzing binding specificity as a causal property. I focus on the concept’s role in drug design, where it is highly prized and hence directly studied. From a causal perspective, understanding why binding specificity is a valuable property of drugs contributes to an understanding of causal selection—of how and why scientists distinguish between causes, not just causes from noncauses. In particular, the specificity of drugs is precisely what underwrites their value as experimental interventions on biological processes.
Empirical philosophers of science aim to base their philosophical theories on observations of scientific practice. But since there is far too much science to observe it all, how can we form and test hypotheses about science that are sufficiently rigorous and broad in scope, while avoiding the pitfalls of bias and subjectivity in our methods? Part of the answer, we claim, lies in the computational tools of the digital humanities, which allow us to analyze large volumes of scientific literature. Here we advocate for the use of these methods by addressing a number of large-scale, justificatory concerns—specifically, about the epistemic value of journal articles as evidence for what happens elsewhere in science, and about the ability of DH tools to extract this evidence. Far from ignoring the gap between scientific literature and the rest of scientific practice, effective use of DH tools requires critical reflection about these relationships.
This article argues that we can construct a complex interpretation of the nature of time by linking Aciman’s gnostic thread to aspects of twentieth-century theory, from philosophy and psychoanalysis. In brief, it attempts to demonstrate the roles of dislocation, deferral, and Otherness in constituting human temporality. The essay begins by surveying the conceptual history of time, touching on key ideas put forward by Augustine and Bergson. The second section takes a psychoanalytic turn after exploring Homo Irrealis to describe the significance of desire and fantasy. Thirdly, we develop a unique and temporal application of difference and deferral, building on Deleuze and Derrida. The fourth section considers how the psychoanalytic concept of the Other is inhered within time. We conclude that an Acimanic analysis of time is the means by which we can understand existence not as a series of moments, but as a rich progression of dissimilitude and Otherness, defined more by its lack of cohesion and directness of being than by a unified and self-identifying subject.
Two objections are raised against Oliver and Smiley’s analysis of the collective–distributive opposition in their 2016 book: They take it as a basic premise that the collective reading of ‘baked a cake’ corresponds to a predicate different from its distributive reading, and the same applies to all predicate expressions that admit both a collective and a distributive interpretation. At the same time, however, they argue that inflectional forms of the same lexeme reveal a univocity that should be preserved in a formal representation of English. These two assumptions sit uneasily. In developing their analysis, Oliver and Smiley come to the conclusion that even a singular predication such as ‘Tom baked a cake’ must be regarded as ambiguous between a collective and a distributive reading. This is so artificial that it hardly makes sense, and yet there seems to be no way out of the difficulty unless we are prepared to give up the basic premise just mentioned.
Jacques Monod (1971) argued that certain molecular processes rely critically on the property of chemical arbitrariness, which he claimed allows those processes to “transcend the laws of chemistry”. It seems natural, as some philosophers have done, to interpret this in modal terms: a biological relationship is chemically arbitrary if it is possible, within the constraints of chemical “law”, for that relationship to have been otherwise than it is. But while modality is certainly important for understanding chemical arbitrariness, understanding its biological role also requires an account of the concrete causal-functional features that distinguish arbitrary from non-arbitrary phenomena. In this paper I elaborate on this under-emphasised aspect by offering a general account of these features: arbitrary relations are instantiated by mechanisms that involve molecular adapters, which causally couple two properties or processes which would otherwise be uncorrelated. Additionally, adapters work by acting as intermediate rather than cooperating causes.
The following four theses all have some intuitive appeal: (I) There are valid norms. (II) A norm is valid only if justified by a valid norm. (III) Justification, on the class of norms, has an irreflexive proper ancestral. (IV) There is no infinite sequence of valid norms each of which is justified by its successor. However, at least one must be false, for (I)--(III) together entail the denial of (IV). There is thus a conflict between intuition and logical possibility. This paper, after distinguishing various conceptions of a norm, of validity and of justification, argues for the following position. (I) is true. (II) is false for legislative justification and true for epistemic justification. (III) is true for legislative and false for epistemic justification. (IV) is true for legislative justification; for epistemic justification (IV) is true or false depending on the conception taken of a norm. Our intuition in favour of (II) must therefore be abandoned where justification is conceived legislatively. Our intuition in favour of (III) must be abandoned, and our intuition in favour of (IV) qualified, where justification is conceived epistemically.
The way we answer the question ‘what ought I to do?’ shows what we believe about our life and the way to live that life. However we answer the question, we are prescribing a mode of action, and this action has a direct bearing on other people and on our society at large. So the moral question has a direct connection with what society becomes: if we answer rightly, the impact on our society will be salutary, but if wrongly, the impact will be harmful. What we do thus influences society for good or ill. In this book, the meaning of morality and ethics, and how these can help in personal and national development, will be examined.
We introduce a family of operators to combine Description Logic concepts. They aim to characterise complex concepts that apply to instances that satisfy "enough" of the concept descriptions given. For instance, an individual might not have any tusks, but still be considered an elephant. To formalise the meaning of "enough", the operators take a list of weighted concepts as arguments, and a certain threshold to be met. We commence a study of the formal properties of these operators, and study some variations. The intended applications concern the representation of cognitive aspects of classification tasks: the interdependencies among the attributes that define a concept, the prototype of a concept, and the typicality of the instances.
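The accumulate-weights-and-compare semantics this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' formalism: the predicates, weights, and threshold are hypothetical, and individuals are modelled as plain dictionaries rather than Description Logic interpretations.

```python
# Sketch of a weighted-threshold ("enough") concept: an individual
# satisfies the combined concept when the summed weights of the
# component concepts it satisfies meet the threshold.

def satisfies_threshold(individual, weighted_concepts, threshold):
    """weighted_concepts: list of (predicate, weight) pairs."""
    score = sum(w for pred, w in weighted_concepts if pred(individual))
    return score >= threshold

# A toy "elephant" concept (weights and attributes are invented):
# an individual lacking tusks can still qualify if enough other
# weighted attributes are satisfied.
elephant = [
    (lambda x: x.get("has_trunk", False), 0.5),
    (lambda x: x.get("has_tusks", False), 0.2),
    (lambda x: x.get("is_large", False), 0.3),
]

tuskless = {"has_trunk": True, "has_tusks": False, "is_large": True}
print(satisfies_threshold(tuskless, elephant, 0.7))  # True: 0.5 + 0.3 >= 0.7
```

Unlike a plain conjunction, no single attribute is mandatory here; the threshold controls how many (weighted) attributes may be missed, which is what lets the tuskless individual still count as an elephant.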
Amidst the constantly augmenting gastronomic capital of celebrity chefs, this study scrutinizes from a critical discourse analytic angle how Jamie Oliver has managed to carve a global brand identity through a process that is termed (dis)placed branding. A roadmap is furnished as to how Italy as place brand and Italianness are discursively articulated, (dis)placed and appropriated in Jamie Oliver’s travelogues which are reflected in his global brand identity. By enriching the CDA methodological toolbox with a deconstructive reading strategy, it is shown that Oliver’s celebrity equity ultimately boils down to supplementing the localized meaning of place of origin with a simulacral, hyperreal place of origin. In this manner, the celebrity’s recipes become more original than the original or doubly original. The (dis)placed branding process that is outlined in the face of Oliver’s global branding strategy is critically discussed with reference to the employed discursive strategies, lexicogrammatical and multimodal choices.
Lack of consent is valorized within popular culture to the point that sexual assault has become a spectator sport and creepshot entertainment on social media. Indeed, the valorization of nonconsensual sex has reached the extreme where sex with unconscious girls, especially accompanied by photographs as trophies, has become a goal of some boys and men.
We discuss the role of perceptron (or threshold) connectives in the context of Description Logic, and in particular their possible use as a bridge between statistical learning of models from data and logical reasoning over knowledge bases. We prove that such connectives can be added to the language of most forms of Description Logic without increasing the complexity of the corresponding inference problem. We show, with a practical example over the Gene Ontology, how even simple instances of perceptron connectives are expressive enough to represent learned, complex concepts derived from real use cases. This opens up the possibility to import concepts learnt from data into existing ontologies.
Throughout the biological and biomedical sciences, prescriptive ‘minimum information’ (MI) checklists specifying the key information to include when reporting experimental results are beginning to find favor with experimentalists, analysts, publishers and funders alike. Such checklists aim to ensure that methods, data, analyses and results are described to a level sufficient to support the unambiguous interpretation, sophisticated search, reanalysis and experimental corroboration and reuse of data sets, facilitating the extraction of maximum value from them. However, such checklists are usually developed independently by groups working within particular biologically- or technologically-delineated domains. Consequently, an overview of the full range of checklists can be difficult to establish without intensive searching, and even tracking the individual evolution of single checklists may be a non-trivial exercise. Checklists are also inevitably partially redundant when measured one against another, and where they overlap is far from straightforward. Furthermore, conflicts in scope and arbitrary decisions on wording and sub-structuring make integration difficult. These issues present significant difficulties for the users of checklists, especially those in areas such as systems biology, who routinely combine information from multiple biological domains and technology platforms. To address all of the above, we present MIBBI (Minimum Information for Biological and Biomedical Investigations); a web-based communal resource for such checklists, designed to act as a ‘one-stop shop’ for those exploring the range of extant checklist projects, and to foster collaborative, integrative development and ultimately promote gradual integration of checklists.
Many aspects of how humans form and combine concepts are notoriously difficult to capture formally. In this paper, we focus on the representation of three particular such aspects, namely overextension, underextension, and dominance. Inspired in part by the work of Hampton, we treat concepts as given through a prototype view and consider the interdependencies between the attributes that define a concept. To approach this formally, we employ a recently introduced family of operators that enrich Description Logic languages. These operators aim to characterise complex concepts by collecting those instances that apply, in a finely controlled way, to ‘enough’ of the concept’s defining attributes. Here, the meaning of ‘enough’ is technically realised by accumulating weights of satisfied attributes and comparing with a given threshold that needs to be met.
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
In this retrospective for Ethics, I discuss H.M. Oliver’s “Established Expectations and American Economic Policies.” This article, by a then-modestly-famous economist, has been ignored (no citations) since its 1940 publication. Yet it bears directly on a normative problem at the intersection of ethics and economics that challenges today’s policymakers but has received comparatively little philosophical attention: how should we balance potentially desirable institutional change against the disruption of established expectations? Oliver details how the principle of fulfilling established expectations cuts across political lines. Conservatives, he observes, criticized inflation for disrupting expectations, and demanded the protection of established corporations. New Deal progressives achieved “the safeguarding of the economic positions of certain important sections of the American people” (104) via statutes designed to protect income and homeownership status. And labor leaders lobbied for the preservation of occupational status. Oliver criticizes these demands on two grounds. First, they are noncompossible: they can’t simultaneously be fulfilled. Second, they are economically inefficient. He concludes that “in a modern dynamic economy, the preservation of status is not and cannot be a feasible criterion of economic justice” (107). I argue that Oliver accurately recognizes both the wide endorsement and the moral ill-foundedness of fulfilling expectations. However, I criticize Oliver’s belief in the noncompossibility of expectations. The established expectations of the wealthy, middle-class homeowners and retirees, and current workers can all be maintained, but at the price of constricting the opportunities of new graduates, immigrants, and the poor—all groups yet to develop settled expectations. This insight renders the protection of expectations not merely inefficient but also unjust.
When people combine concepts, these are often characterised as “hybrid”, “impossible”, or “humorous”. However, when simply considering them in terms of extensional logic, the novel concepts understood as conjunctive concepts will often lack meaning, having an empty extension (consider “a tooth that is a chair”, “a pet flower”, etc.). Still, people use different strategies to produce new non-empty concepts: additive or integrative combination of features, alignment of features, instantiation, etc. All these strategies involve the ability to deal with conflicting attributes and the creation of new (combinations of) properties. We here consider in particular the case where a Head concept has superior ‘asymmetric’ control over steering the resulting concept combination (or hybridisation) with a Modifier concept. Specifically, we propose a dialogical approach to concept combination and discuss an implementation based on axiom weakening, which models the cognitive and logical mechanics of this asymmetric form of hybridisation.
Climate change increases the frequency and intensity of certain kinds of natural hazard events in alpine areas. This interdisciplinary study addresses the hypothetical possibility of relocating the residents of three alpine areas in Austria: the Sölk valleys, the Johnsbach valley, and the St. Lorenzen/Schwarzenbach valleys. Our particular focus is on these residents’ expectations about such relocations. We find that (1) many residents expect that in the next decades the state will provide them with a level of natural hazards protection, aid, and relief that allows them to continue to live in these valleys; (2) this expectation receives some legal protection but only when it is associated with fundamental rights; and (3) the expectation is morally significant, i.e., it ought to be considered in assessing the moral rightness or justness of relocation policies. These results suggest legal changes and likely extend to many other (Austrian) alpine areas as well.
For the past thirty years, the late Tom Regan bucked the trend among secular animal rights philosophers and spoke patiently and persistently to the best angels of religious ethics in a stream of publications that enjoins religious scholars, clergy, and lay people alike to rediscover the resources within their traditions for articulating and living out an animal ethics that is more consistent with their professed values of love, mercy, and justice. My aim in this article is to showcase some of the wealth of insight offered in this important but under-utilized archive of Regan’s work to those of us, religious or otherwise, who wish to challenge audiences of faith to think and do better by animals.
We present an algorithm for concept combination inspired and informed by research in cognitive and experimental psychology. Dealing with concept combination requires, from a symbolic AI perspective, coping with competing needs: the need for compositionality and the need to account for typicality effects. Building on our previous work on weighted logic, the proposed algorithm can be seen as a step towards the management of both these needs. More precisely, following a proposal of Hampton [1], it combines two weighted Description Logic formulas, each defining a concept, using the following general strategy. First, it selects all the features needed for the combination, based on the logical distinction between necessary and impossible features. Second, it determines the threshold and assigns new weights to the features of the combined concept, trying to preserve the relevance and the necessity of the features. We illustrate how the algorithm works by exploiting some paradigmatic examples discussed in the cognitive literature.
Ontologies represent principled, formalised descriptions of agents’ conceptualisations of a domain. For a community of agents, these descriptions may differ among agents. We propose an aggregative view of the integration of ontologies based on Judgement Aggregation (JA). Agents may vote on statements of the ontologies, and we aim at constructing a collective, integrated ontology that reflects the individual conceptualisations as much as possible. As several results in JA show, many attractive and widely used aggregation procedures are prone to return inconsistent collective ontologies. We propose to solve the possible inconsistencies in the collective ontology by applying suitable weakenings of axioms that cause inconsistencies.
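The failure mode this abstract alludes to can be illustrated with a minimal discursive-dilemma-style sketch. This is purely illustrative, not the authors' framework: the axioms, agents, and votes are invented, and axioms are treated as opaque strings rather than reasoner-checked statements.

```python
# Three agents vote on three ontology axioms. Each agent's own view
# is consistent (each rejects one axiom), and each axiom wins a
# strict majority, yet the majority set is jointly inconsistent:
# together the axioms entail both Flies and not-Flies for penguins.

AXIOMS = ["Penguin ⊑ Bird", "Bird ⊑ Flies", "Penguin ⊑ ¬Flies"]

votes = {
    "agent1": {AXIOMS[0]: True,  AXIOMS[1]: True,  AXIOMS[2]: False},
    "agent2": {AXIOMS[0]: True,  AXIOMS[1]: False, AXIOMS[2]: True},
    "agent3": {AXIOMS[0]: False, AXIOMS[1]: True,  AXIOMS[2]: True},
}

def majority(votes, axioms):
    """Accept an axiom iff a strict majority of agents accept it."""
    n = len(votes)
    return [a for a in axioms
            if sum(v[a] for v in votes.values()) > n / 2]

collective = majority(votes, AXIOMS)
print(collective)  # all three axioms pass with 2 of 3 votes each
```

This is why axiom-wise majority voting alone cannot be trusted to return a consistent collective ontology, and why a repair step such as the axiom weakening the abstract proposes is needed afterwards.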
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis made over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
In an interdisciplinary discussion with an international group of experts, we address the question of why faces matter so much. We approach the issue from different academic, technological and artistic perspectives and integrate these different perspectives in an open dialogue in order to raise awareness about the importance of faces at a time when we are hiding them more than ever, be it in “facing” other human beings or in “facing” digital technology.
Driven by the use cases of PubChemRDF and SCAIView, we have developed a first community-based clinical trial ontology (CTO) by following the OBO Foundry principles. CTO uses the Basic Formal Ontology (BFO) as the top-level ontology and reuses many terms from existing ontologies. CTO has also defined many clinical trial-specific terms. The general CTO design pattern is based on the PICO framework, which we demonstrate with two applications. First, the PubChemRDF use case demonstrates how the drug Gleevec is linked to multiple clinical trials investigating Gleevec’s related chemical compounds. Second, the SCAIView text mining engine shows how the use of CTO terms in its search algorithm can identify publications referring to COVID-19-related clinical trials. Future opportunities and challenges are discussed.
We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target a model-agnostic distillation approach exemplified with these two frameworks, secondly, to study how these two frameworks interact on a theoretical level, and, thirdly, to investigate use-cases in ML and AI in a comparative manner. Specifically, we envision that user-studies will help determine human understandability of explanations generated using these two frameworks.