The Protein Ontology (PRO) provides a formal, logically-based classification of specific protein classes including structured representations of protein isoforms, variants and modified forms. Initially focused on proteins found in human, mouse and Escherichia coli, PRO now includes representations of protein complexes. The PRO Consortium works in concert with the developers of other biomedical ontologies and protein knowledge bases to provide the ability to formally organize and integrate representations of precise protein forms so as to enhance accessibility to results of protein research. PRO (http://pir.georgetown.edu/pro) is part of the Open Biomedical Ontologies (OBO) Foundry.
Biomedical ontologies are emerging as critical tools in genomic and proteomic research where complex data in disparate resources need to be integrated. A number of ontologies exist that describe the properties that can be attributed to proteins; for example, protein functions are described by Gene Ontology, while human diseases are described by Disease Ontology. There is, however, a gap in the current set of ontologies—one that describes the protein entities themselves and their relationships. We have designed a PRotein Ontology (PRO) to facilitate protein annotation and to guide new experiments. The components of PRO extend from the classification of proteins on the basis of evolutionary relationships to the representation of the multiple protein forms of a gene (products generated by genetic variation, alternative splicing, proteolytic cleavage, and other post-translational modification). PRO will allow the specification of relationships between PRO, GO and other OBO Foundry ontologies. Here we describe the initial development of PRO, illustrated using human proteins from the TGF-beta signaling pathway.
The Protein Ontology (PRO; http://proconsortium.org) formally defines protein entities and explicitly represents their major forms and interrelations. Protein entities represented in PRO corresponding to single amino acid chains are categorized by level of specificity into family, gene, sequence and modification metaclasses, and there is a separate metaclass for protein complexes. All metaclasses also have organism-specific derivatives. PRO complements established sequence databases such as UniProtKB, and interoperates with other biomedical and biological ontologies such as the Gene Ontology (GO). PRO relates to UniProtKB in that PRO’s organism-specific classes of proteins encoded by a specific gene correspond to entities documented in UniProtKB entries. PRO relates to the GO in that PRO’s representations of organism-specific protein complexes are subclasses of the organism-agnostic protein complex terms in the GO Cellular Component Ontology. The past few years have seen growth and changes to the PRO, as well as new points of access to the data and new applications of PRO in immunology and proteomics. Here we describe some of these developments.
The Protein Ontology (PRO; http://purl.obolibrary.org/obo/pr) formally defines and describes taxon-specific and taxon-neutral protein-related entities in three major areas: proteins related by evolution; proteins produced from a given gene; and protein-containing complexes. PRO thus serves as a tool for referencing protein entities at any level of specificity. To enhance this ability, and to facilitate the comparison of such entities described in different resources, we developed a standardized representation of proteoforms using UniProtKB as a sequence reference and PSI-MOD as a post-translational modification reference. We illustrate its use in facilitating an alignment between PRO and Reactome protein entities. We also address issues of scalability, describing our first steps into the use of text mining to identify protein-related entities, the large-scale import of proteoform information from expert curated resources, and our ability to dynamically generate PRO terms. Web views for individual terms are now more informative about closely-related terms, including for example an interactive multiple sequence alignment. Finally, we describe recent improvements in semantic utility, with PRO now represented in OWL and as a SPARQL endpoint. These developments will further support the anticipated growth of PRO and facilitate the discoverability and aggregation of data relating to protein entities.
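The proteoform representation described above (a UniProtKB accession as the sequence reference, plus PSI-MOD terms for positioned modifications) can be sketched as a toy data model. This is an illustrative sketch only, not PRO's official proteoform syntax; the accession P04637 (human p53) and term MOD:00046 (O-phospho-L-serine) are real identifiers but are used here purely as examples.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proteoform:
    """Toy proteoform model: a UniProtKB sequence reference plus
    positioned PSI-MOD modifications. Illustrative only -- this is
    not PRO's official proteoform syntax."""
    uniprot_acc: str    # sequence reference, e.g. "P04637" (human p53)
    mods: tuple = ()    # ((position, "MOD:..."), ...) modification reference

    def label(self) -> str:
        """Render a compact, human-readable label for the proteoform."""
        if not self.mods:
            return self.uniprot_acc
        mod_str = ", ".join(f"{mod}@{pos}" for pos, mod in self.mods)
        return f"{self.uniprot_acc} [{mod_str}]"

# Two proteoforms of the same gene product differ only in modification state,
# so they compare as distinct entities:
unmodified = Proteoform("P04637")
phospho_s15 = Proteoform("P04637", ((15, "MOD:00046"),))
print(unmodified.label())    # P04637
print(phospho_s15.label())   # P04637 [MOD:00046@15]
```

Anchoring the sequence to UniProtKB and the modification vocabulary to PSI-MOD is what lets two resources (here, PRO and Reactome) decide whether they are talking about the same proteoform.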
The Protein Ontology (PRO) web resource provides an integrative framework for protein-centric exploration and enables specific and precise annotation of proteins and protein complexes based on PRO. Functionalities include browsing, searching and retrieving terms, displaying selected terms in OBO or OWL format, and supporting URIs. In addition, the PRO website offers multiple ways for the user to request, submit, or modify terms and/or annotation. We will demonstrate the use of these tools for protein research and annotation.
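The OBO flat-file format mentioned above is line-oriented: each class is a `[Term]` stanza of `tag: value` pairs. A minimal parser sketch is shown below; it handles only single-valued `id`/`name` tags, not the full OBO specification. The sample stanza uses PR:000000001 ("protein"), PRO's root term, purely as an illustration.

```python
def parse_obo_terms(text: str) -> dict:
    """Minimal parser for [Term] stanzas of an OBO flat file.
    Keeps only the first value seen per tag -- a sketch, not a
    full OBO 1.4 parser (no escapes, headers, or multi-valued tags)."""
    terms, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line == "[Term]":
            current = {}                     # start a new stanza
        elif not line and current is not None:
            if "id" in current:              # blank line ends a stanza
                terms[current["id"]] = current
            current = None
        elif current is not None and ":" in line:
            tag, _, value = line.partition(":")
            current.setdefault(tag.strip(), value.strip())
    if current and "id" in current:          # flush a trailing stanza
        terms[current["id"]] = current
    return terms

sample = """\
[Term]
id: PR:000000001
name: protein
"""
terms = parse_obo_terms(sample)
print(terms["PR:000000001"]["name"])   # protein
```

The same term rendered in OWL would carry the IRI http://purl.obolibrary.org/obo/PR_000000001; the OBO and OWL views the website offers are two serializations of the same class.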
Research has indicated that microRNAs (miRNAs), a special class of non-coding RNAs (ncRNAs), can perform important roles in different biological and pathological processes. miRNAs’ functions are realized by regulating their respective target genes (targets). It is thus critical to identify and analyze miRNA-target interactions for a better understanding and delineation of miRNAs’ functions. However, conventional knowledge discovery and acquisition methods have many limitations. Fortunately, semantic technologies that are based on domain ontologies can render great assistance in this regard. In our previous investigations, we developed a miRNA domain-specific application ontology, Ontology for MIcroRNA Target (OMIT), to provide the community with common data elements and data exchange standards in miRNA research. This paper describes (1) our continuing efforts in the OMIT ontology development and (2) the application of the OMIT to enable a semantic approach for knowledge capture of miRNA-target interactions.
Biological ontologies are used to organize, curate, and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies Foundry was created to address this by facilitating the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology’s compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goals of making data FAIR.
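The principle-as-operational-rule approach described above can be illustrated with a toy check: encode each principle as a predicate over an ontology's metadata record, then report pass/fail per rule, dashboard-style. The rule names and metadata field names below ("license", "contact", "version_iri") are hypothetical illustrations, not the actual OBO registry schema or dashboard checks.

```python
# Toy version of principle-as-rule validation: each "principle" is a
# named predicate over an ontology metadata record. Field names are
# illustrative, not the real OBO registry schema.
RULES = {
    "open-license": lambda m: bool(m.get("license")),
    "named-contact": lambda m: bool(m.get("contact")),
    "versioned": lambda m: bool(m.get("version_iri")),
}

def check_compliance(metadata: dict) -> dict:
    """Evaluate every rule against one record; return {rule: passed}."""
    return {name: rule(metadata) for name, rule in RULES.items()}

record = {"id": "pr", "license": "CC-BY 4.0", "contact": "pro@example.org"}
report = check_compliance(record)
print(report)  # {'open-license': True, 'named-contact': True, 'versioned': False}
```

The point of the encoding is that the same rule set runs unchanged over every ontology in the federation, turning a subjective reading of each principle into a reproducible, comparable verdict.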
Identification of non-coding RNAs (ncRNAs) has been significantly enhanced due to the rapid advancement in sequencing technologies. On the other hand, semantic annotation of ncRNA data lags behind their identification, and there is a great need to effectively integrate discovery from relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a precisely defined ncRNA controlled vocabulary, which can fill a specific and highly needed niche in unification of ncRNA biology.
In recent years, sequencing technologies have enabled the identification of a wide range of non-coding RNAs (ncRNAs). Unfortunately, annotation and integration of ncRNA data has lagged behind their identification. Given the large quantity of information being obtained in this area, there emerges an urgent need to integrate what is being discovered by a broad range of relevant communities. To this end, the Non-Coding RNA Ontology (NCRO) is being developed to provide a systematically structured and precisely defined controlled vocabulary for the domain of ncRNAs, thereby facilitating the discovery, curation, analysis, exchange, and reasoning of data about structures of ncRNAs, their molecular and cellular functions, and their impacts upon phenotypes. The goal of NCRO is to serve as a common resource for annotations of diverse research in a way that will significantly enhance integrative and comparative analysis of the myriad resources currently housed in disparate sources. It is our belief that the NCRO ontology can perform an important role in the comprehensive unification of ncRNA biology and, indeed, fill a critical gap in both the Open Biological and Biomedical Ontologies (OBO) Library and the National Center for Biomedical Ontology (NCBO) BioPortal. Our initial focus is on the ontological representation of small regulatory ncRNAs, which we see as the first step in providing a resource for the annotation of data about all forms of ncRNAs. The NCRO ontology is free and open to all users.
Representing species-specific proteins and protein complexes in ontologies that are both human and machine-readable facilitates the retrieval, analysis, and interpretation of genome-scale data sets. Although existing protein-centric informatics resources provide the biomedical research community with well-curated compendia of protein sequence and structure, these resources lack formal ontological representations of the relationships among the proteins themselves. The Protein Ontology (PRO) Consortium is filling this informatics resource gap by developing ontological representations and relationships among proteins and their variants and modified forms. Because proteins are often functional only as members of stable protein complexes, the PRO Consortium, in collaboration with existing protein and pathway databases, has launched a new initiative to implement logical and consistent representation of protein complexes. We describe here how the PRO Consortium is meeting the challenge of representing species-specific protein complexes, how protein complex representation in PRO supports annotation of protein complexes and comparative biology, and how PRO is being integrated into existing community bioinformatics resources. The PRO resource is accessible at http://pir.georgetown.edu/pro/.
Identification of non-coding RNAs (ncRNAs) has been significantly improved over the past decade. On the other hand, semantic annotation of ncRNA data is facing critical challenges due to the lack of a comprehensive ontology to serve as common data elements and data exchange standards in the field. We developed the Non-Coding RNA Ontology (NCRO) to handle this situation. By providing a formally defined ncRNA controlled vocabulary, the NCRO aims to fill a specific and highly needed niche in semantic annotation of large amounts of ncRNA biological and clinical data.
The Protein Ontology (PRO) is designed as a formal and principled Open Biomedical Ontologies (OBO) Foundry ontology for proteins. The components of PRO extend from a classification of proteins on the basis of evolutionary relationships at the homeomorphic level to the representation of the multiple protein forms of a gene, including those resulting from alternative splicing, cleavage and/or posttranslational modifications. Focusing specifically on the TGF-beta signaling proteins, we describe the building, curation, usage and dissemination of PRO. PRO provides a framework for the formal representation of protein classes and protein forms in the OBO Foundry. It is designed to enable data retrieval and integration and machine reasoning at the molecular level of proteins, thereby facilitating cross-species comparisons, pathway analysis, disease modeling and the generation of new hypotheses.
The Protein Ontology (PRO) provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
Many epistemological problems can be solved by the objective Bayesian view that there are rationality constraints on priors, that is, inductive probabilities. But attempts to work out these constraints have run into such serious problems that many have rejected objective Bayesianism altogether. I argue that the epistemologist should borrow the metaphysician’s concept of naturalness and assign higher priors to more natural hypotheses.
Sometimes we learn what the world is like, and sometimes we learn where in the world we are. Are there any interesting differences between the two kinds of cases? The main aim of this article is to argue that learning where we are in the world brings into view the same kind of observation selection effects that operate when sampling from a population. I will first explain what observation selection effects are (Section 1) and how they are relevant to learning where we are in the world (Section 2). I will show how measurements in the Many Worlds Interpretation of quantum mechanics can be understood as learning where you are in the world via some observation selection effect (Section 3). I will apply a similar argument to the Sleeping Beauty Problem (Section 4) and explain what I take the significance of the analogy to be (Section 5). Finally, I will defend the Restricted Principle of Indifference on which some of my arguments depend (Section 6).
Jürgen Habermas has argued that carrying out pre-natal germline enhancements would be inimical to the future child's autonomy. In this article, I suggest that many of the objections that have been made against Habermas' arguments by liberals in the enhancement debate misconstrue his claims. To explain why, I begin by explaining how Habermas' view of personal autonomy confers particular importance to the agent's embodiment and social environment. In view of this, I explain that it is possible to draw two arguments against germline enhancements from Habermas' thought. I call these arguments ‘the argument from negative freedom’ and ‘the argument from natality’. Although I argue that many of the common liberal objections to Habermas are not applicable when his arguments are properly understood, I go on to suggest ways in which supporters of enhancement might appropriately respond to Habermas' arguments.
Sober and Elgin defend the claim that there are a priori causal laws in biology. Lange and Rosenberg take issue with this on Humean grounds, among others. I will argue that Sober and Elgin don’t go far enough – there are a priori causal laws in many sciences. Furthermore, I will argue that this thesis is compatible with a Humean metaphysics and an empiricist epistemology.
Since the publication of Kenneth Howard’s 2017 article, “The Religion Singularity: A Demographic Crisis Destabilizing and Transforming Institutional Christianity,” there has been an increasing demand to understand the root causes and historical foundations for why institutional Christianity is in a state of de-institutionalization. In response to Howard’s research, a number of authors have sought to provide a contextual explanation for why the religion singularity is currently happening, including studies in epistemology, church history, psychology, anthropology, and church ministry. The purpose of this article is to offer a brief survey and response to these interactions with Howard’s research, identifying the overall implications of each researcher’s perspective for understanding the religion singularity phenomenon. It explores factors relating to denominational switching in Jeshua Branch’s research, social memory in John Lingelbach’s essay, religious politics in Kevin Seybold’s survey, scientific reductionism in Jack David Eller’s position paper, and institutional moral failure in Brian McLaren’s article.
Cian Dorr (2002) gives an argument for the 1/3 position in Sleeping Beauty. I argue this is based on a mistake about Sleeping Beauty's epistemic position.
The main argument given for relevant alternatives theories of knowledge has been that they answer scepticism about the external world. I will argue that relevant alternatives also solve two other problems that have been much discussed in recent years, a) the bootstrapping problem and b) the apparent conflict between semantic externalism and armchair self-knowledge. Furthermore, I will argue that scepticism and Mooreanism can be embedded within the relevant alternatives framework.
An introductory logic textbook, with (more than) a pinch of philosophy of logic, produced as a revised and expanded version of the book Forallx: Calgary. This is the version of May 5, 2022. Comments, criticisms, corrections, and suggestions are very welcome.
Carrie Jenkins (2005, 2008) has developed a theory of the a priori that she claims solves the problem of how justification regarding our concepts can give us justification regarding the world. She claims that concepts themselves can be justified, and that beliefs formed by examining such concepts can be justified a priori. I object that we can have a priori justified beliefs with unjustified concepts if those beliefs have no existential import. I then argue that only beliefs without existential import can be justified a priori on the widely held conceptual approach. This limits the scope of the a priori and undermines arguments for essentialism.
In this paper I argue that whether or not a computer can be built that passes the Turing test is a central question in the philosophy of mind. Then I show that the possibility of building such a computer depends on open questions in the philosophy of computer science: the physical Church-Turing thesis and the extended Church-Turing thesis. I use the link between the issues identified in philosophy of mind and philosophy of computer science to respond to a prominent argument against the possibility of building a machine that passes the Turing test. Finally, I respond to objections against the proposed link between questions in the philosophy of mind and philosophy of computer science.
What if your peers tell you that you should disregard your perceptions? Worse, what if your peers tell you to disregard the testimony of your peers? How should we respond if we get evidence that seems to undermine our epistemic rules? Several philosophers have argued that some epistemic rules are indefeasible. I will argue that all epistemic rules are defeasible. The result is a kind of epistemic particularism, according to which there are no simple rules connecting descriptive and normative facts. I will argue that this type of particularism is more plausible in epistemology than in ethics. The result is an unwieldy and possibly infinitely long epistemic rule — an Uber-rule. I will argue that the Uber-rule applies to all agents, but is still defeasible — one may get misleading evidence against it and rationally lower one’s credence in it.
The notion that there existed a distinction between so-called “Alexandrian” and “Antiochene” exegesis in the ancient church has become a common assumption among theologians. The typical belief is that Alexandria promoted an allegorical reading of Scripture, whereas Antioch endorsed a literal approach. However, church historians have long since recognized that this distinction is neither wholly accurate nor helpful to understanding ancient Christian hermeneutics. Indeed, neither school of interpretation sanctioned the practice of just one exegetical method. Rather, both Alexandrian and Antiochene theologians were expedient hermeneuts, meaning they utilized whichever exegetical practice (allegory, typology, literal, historical) would supply them with their desired theology or interpretive conclusion. The difference between Alexandria and Antioch was not exegetical; it was theological. In other words, it was their respective theological paradigms that dictated their exegetical practices, allowing them to utilize whichever hermeneutical method was most expedient for their theological purposes. Ultimately, neither Alexandrian nor Antiochene exegetes possessed a greater respect for the biblical text over the other, nor did they adhere to modern-day historical-grammatical hermeneutics as theologians would like to believe.
How do temporal and eternal beliefs interact? I argue that acquiring a temporal belief should have no effect on eternal beliefs for an important range of cases. Thus, I oppose the popular view that new norms of belief change must be introduced for cases where the only change is the passing of time. I defend this position from the purported counter-examples of the Prisoner and Sleeping Beauty. I distinguish two importantly different ways in which temporal beliefs can be acquired and draw some general conclusions about their impact on eternal beliefs.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
There is a widely shared belief that the higher-level sciences can provide better explanations than lower-level sciences. But there is little agreement about exactly why this is so. It is often suggested that higher-level explanations are better because they omit details. I will argue instead that the preference for higher-level explanations is just a special case of our general preference for informative, logically strong, beliefs. I argue that our preference for informative beliefs entirely accounts for why higher-level explanations are sometimes better—and sometimes worse—than lower-level explanations. The result is a step in the direction of the unity of science hypothesis. Outline: 1 Introduction; 2 Background: Is Omitting Details an Explanatory Virtue? (2.1 Anti-reductionist arguments; 2.2 Reductionist argument; 2.3 Logical strength); 3 Bases, Links and Logical Strength; 4 Functionalism and Fodor’s Argument; 5 Two Generalizations; 6 Should the Base Really Be Maximally Strong?; 7 Anti-reductionist Arguments Regarding the Base; 8 Should the Antecedent of the Link Really Be Maximally Weak?
Self-locating beliefs cause a problem for conditionalization. Miriam Schoenfield offers a solution: that on learning E, agents should update on the fact that they learned E. However, Schoenfield is not explicit about whether the fact that they learned E is self-locating. I will argue that if the fact that they learned E is self-locating then the original problem has not been addressed, and if the fact that they learned E is not self-locating then the theory generates implausible verdicts which Schoenfield explicitly rejects.
Many who take a dismissive attitude towards metaphysics trace their view back to Carnap’s ‘Empiricism, Semantics and Ontology’. But the reason Carnap takes a dismissive attitude to metaphysics is a matter of controversy. I will argue that no reason is given in ‘Empiricism, Semantics and Ontology’, and this is because his reason for rejecting metaphysical debates was given in ‘Pseudo-Problems in Philosophy’. The argument there assumes verificationism, but I will argue that his argument survives the rejection of verificationism. The root of his argument is the claim that metaphysical statements cannot be justified; the point is epistemic, not semantic. I will argue that this remains a powerful challenge to metaphysics that has yet to be adequately answered.
This article defends the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. It will argue that all four problems have the same structure, and it gives a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. The article will argue that the troublesome feature of all these cases is not self-location but selection effects.
In this article I rethink death and mortality on the basis of birth and natality, drawing on the work of the Italian feminist philosopher Adriana Cavarero. She understands birth to be the corporeal event whereby a unique person emerges from the mother’s body into the common world. On this basis Cavarero reconceives death as consisting in bodily dissolution and re-integration into cosmic life. This impersonal conception of death coheres badly with her view that birth is never exclusively material but always has ontological significance as the appearance of someone new and singular in the world of relations with others. This view of birth calls for a relational conception of death, which I develop in this article. On this conception, death is always collective, affecting all those with whom the one who dies has maintained relations: As such, our different deaths shade into one another. Moreover, because each person is unique in virtue of consisting of a unique web of relations with others, death always happens to persons as webs of relations. Death is relational in this way as a corporeal, and specifically biological, phenomenon, to which we are subject as bodily beings and as interdependent living organisms. I explore this with reference to Simone de Beauvoir’s memoir of her mother’s death from cancer. Finally I argue that, on this relational conception, death is something to be feared.
What does logic tell us about how we ought to reason? If P entails Q, and I believe P, should I believe Q? I will argue that we should embed the issue in an independently motivated contextualist semantics for ‘ought’, with parameters for a standard and set of propositions. With the contextualist machinery in hand, we can defend a strong principle expressing how agents ought to reason while accommodating conflicting intuitions. I then show how our judgments about blame and guidance can be handled by this machinery.
The essay reflects on how Hans-Georg Gadamer and Karl Barth view interpretation of the Christian Bible. It proceeds in three main sections. The first contends that Gadamer secularizes Christian theology, and that this has drawbacks for the sort of reading his hermeneutic can give to Christian Scripture. The second part turns to Barth, arguing that the whole structure of his approach to the Bible factors in theological commitment, with benefits for the readings he can deliver. The final part makes a case that contemporary reflection on interpretation can nonetheless glean important insights from Gadamer, especially regarding the readerly reception of texts, because his perspective has a certain sort of richness that Barth’s cannot match. The overall suggestion emerging from the interrogation of these two thinkers is that phenomenology and theology might learn from one another, that they each contribute something valuable to discussions of biblical interpretation.
The ethics of biological procreation has received a great deal of attention in recent years. Yet, as I show in this paper, much of what has come to be called procreative ethics is conducted in a strangely abstract, impersonal mode, one which stands little chance of speaking to the practical perspectives of any prospective parent. In short, the field appears to be flirting with a strange sort of practical irrelevance, wherein its verdicts are answers to questions that no-one is asking. I go on to articulate a theory of what I call existential grounding, a notion which explains the role that prospective children play in the lives of many would-be parents. Procreative ethicists who want their work to have real practical relevance must, I claim, start to engage with this markedly first-personal kind of practical consideration.
There is a divide in epistemology between those who think that, for any hypothesis and set of total evidence, there is a unique rational credence in that hypothesis, and those who think that there can be many rational credences. Schultheis offers a novel and potentially devastating objection to Permissivism, on the grounds that Permissivism permits dominated credences. I will argue that Permissivists can plausibly block Schultheis' argument. The issue turns on getting clear about whether we should be certain whether our credences are rational.
In Bradley, I offered an analysis of Sleeping Beauty and the Everettian interpretation of quantum mechanics. I argued that one can avoid a kind of easy confirmation of EQM by paying attention to observation selection effects, that halfers are right about Sleeping Beauty, and that thirders cannot avoid easy confirmation for the truth of EQM. Wilson agrees with my analysis of observation selection effects in EQM, but goes on to, first, defend Elga’s thirder argument on Sleeping Beauty and, second, argue that the analogy I draw between Sleeping Beauty and EQM fails. I will argue that neither point succeeds. Outline: 1 Introduction; 2 Background; 3 Wilson’s Argument for ⅓ in Sleeping Beauty; 4 Reply: Explaining Away the Crazy; 5 Wilson’s Argument for the Breakdown of the Analogy; 6 Reply: The Irrelevance of Chance; 7 Conclusion.
Colin Howson (1995) offers a counter-example to the rule of conditionalization. I will argue that the counter-example doesn't hit its target. The problem is that Howson mis-describes the total evidence the agent has. In particular, Howson overlooks how the restriction that the agent learn 'E and nothing else' interacts with the de se evidence 'I have learnt E'.
A critical overview of the latest discussion of anti-natalism, with particular reference to David Benatar's work and three additional rationales for anti-natalism that differ from Benatar's.
Arendt claims that our natality (i.e., our condition of being born) is the "source" or "root" of our capacity to begin (i.e., of our capacity to initiate something new). But she does not fully explain this claim. How does the capacity to begin derive from the condition of birth? That Arendt does not immediately and unambiguously provide an answer to this question can be seen in the fact that her notion of natality has received very different interpretations. In the present paper, I seek to clarify the notion. I bring together and examine Arendt's scattered remarks about natality and propose a new interpretation that responds to the stated question. Along the way, I show how the various existing interpretations have arisen and argue that, in view of that question, they are inadequate.
Jonathan Weisberg (2010) argues that, given that life exists, the fact that the universe is fine-tuned for life does not confirm the design hypothesis. And if the fact that life exists confirms the design hypothesis, fine-tuning is irrelevant. So either way, fine-tuning has nothing to do with confirming design. I will defend a design argument that survives Weisberg's critique: the fact that life exists supports the design hypothesis, but it only does so given fine-tuning.
The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks, but a side effect of these advances has been the adoption of a "bigger is always better" paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of this paradigm as applied to large training datasets crawled from pre-existing text on the Web, we extend the critique to challenge datasets custom-created by crowdworkers. We present several sets of criticisms, where ethical and scientific issues in language model research reinforce each other: labour injustices in crowdwork, dataset quality and inscrutability, inequities in the research community, and centralized corporate control of the technology. We also present a new type of tool for researchers to use in examining large datasets when evaluating them for quality.
How should we update our beliefs when we learn new evidence? Bayesian confirmation theory provides a widely accepted and well understood answer – we should conditionalize. But this theory has a problem with self-locating beliefs, beliefs that tell you where you are in the world, as opposed to what the world is like. To see the problem, consider your current belief that it is January. You might be absolutely, 100%, sure that it is January. But you will soon believe it is February. This type of belief change cannot be modelled by conditionalization. We need some new principles of belief change for this kind of case, which I call belief mutation. In part 1, I defend the Relevance-Limiting Thesis, which says that a change in a purely self-locating belief of the kind that results in belief mutation should not shift your degree of belief in a non-self-locating belief, which can only change by conditionalization. My method is to give detailed analyses of the puzzles which threaten this thesis: Duplication, Sleeping Beauty, and The Prisoner. This also requires giving my own theory of observation selection effects. In part 2, I argue that when self-locating evidence is learnt from a position of uncertainty, it should be conditionalized on in the normal way. I defend this position by applying it to various cases where such evidence is found. I defend the Halfer position in Sleeping Beauty, and I defend the Doomsday Argument and the Fine-Tuning Argument.
This work, in English "Struggle for Power, Organic Crisis and Judicial Independence", originates in academic research at the IAEN carried out to provide expert advice to the Inter-American Court of Human Rights in the case Quintana and others (Supreme Court of Justice) vs. the State of Ecuador. The research concerns the nature and evolution of the Ecuadorian state: the dynamics of its institutions, its players, parties, laws, its factors of instability, the way rights have been deployed since the return to democracy in 1979, those who have benefited from neoliberal democracy and those who have paid its costs, and where in all this the Supreme Court of Justice stood. The Introduction examines this origin of the work and its intellectual import, provides a brief characterization of the Inter-American System of Human Rights Protection and how the Quintana case fits within it, and gives an overview of the chapters and their logic. The first chapter analyzes the institutional and sociological factors that have contributed to Ecuador's historically unstable state, rooted primarily in an antagonistic struggle for power between the Executive and Legislative branches. The second chapter analyzes the economic and political factors that provoked an organic crisis, or crisis of hegemony, in Ecuador between 1997 and 2007. The third chapter analyzes the evolution of the ways in which judges are selected in Europe, the outcomes these systems have generated, and how they compare with the Ecuadorian systems of judge selection in terms of securing judicial independence.
The fourth chapter develops a theory of judicial independence in democracy and analyzes how Ecuador measures up to that standard: from when it became a neoliberal minimal democracy in 1981, to when it became a neoliberal false democracy as a consequence of the installation, under the guise of the 1998 Constitution and its system of judge selection, of a system evidently lacking popular support, to the new democratic horizons set by the 2008 Constitution and the judicial system that springs from it.
Pessimism is, roughly, the view that life is not worth living. In chapter 46 of the second volume of The World as Will and Representation, Arthur Schopenhauer provides an oft-neglected argument for this view. The argument is that a life is worth living only if it does not contain any uncompensated evils; but since all our lives happen to contain such evils, none of them are worth living. The now standard interpretation of this argument (endorsed by Kuno Fischer and Christopher Janaway) proceeds from the claim that the value—or rather valuelessness—of life's goods makes compensation impossible. But this interpretation is neither philosophically attractive nor faithful to the text. In this paper, I develop and defend an alternative interpretation (suggested by Wilhelm Windelband and Mark Migotti) according to which it is instead the actual temporal arrangement of life's goods and evils that makes compensation impossible.