This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics in the field and to identify relations between information and communication technology concepts on the one hand and ethical concepts on the other. To produce the term map, a data set of over a thousand articles published in leading journals and conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified in the titles and abstracts of the articles and co-occurrence frequencies of these key terms were calculated. Based on the co-occurrence frequencies, the term map was constructed using a computer program called VOSviewer. The term map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three main concepts, namely privacy, ethics, and the Internet.
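The co-occurrence counting step described in this abstract can be sketched in a few lines. This is a minimal illustration only, not the paper's actual pipeline or VOSviewer's algorithm; the simple substring matching, the toy corpus, and all names are assumptions made for the example.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(documents, key_terms):
    """Count, for each unordered pair of key terms, how many
    documents (e.g. title + abstract) contain both terms."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        present = sorted(t for t in key_terms if t in text)
        for pair in combinations(present, 2):  # each pair counted once per doc
            counts[pair] += 1
    return counts

# Toy corpus echoing the field's three central concepts
docs = [
    "Privacy and ethics on the Internet",
    "Internet privacy after Web 2.0",
    "Computer ethics as a discipline",
]
terms = ["privacy", "ethics", "internet"]
print(cooccurrence_counts(docs, terms))
```

A map layout program then takes such pair frequencies as input and places frequently co-occurring terms close together; the counting itself is the only step sketched here.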
In recent years, academics and educators have begun to use software mapping tools for a number of education-related purposes. Typically, the tools are used to help impart critical and analytical skills to students, to enable students to see relationships between concepts, and also as a method of assessment. The common feature of all these tools is the use of diagrammatic relationships of various kinds in preference to written or verbal descriptions. Pictures and structured diagrams are thought to be more comprehensible than just words, and a clearer way to illustrate understanding of complex topics. Variants of these tools are available under different names: “concept mapping”, “mind mapping” and “argument mapping”. Sometimes these terms are used synonymously. However, as this paper will demonstrate, there are clear differences in each of these mapping tools. This paper offers an outline of the various types of tool available and their advantages and disadvantages. It argues that the choice of mapping tool largely depends on the purpose or aim for which the tool is used and that the tools may well be converging to offer educators as yet unrealised and potentially complementary functions.
Anyone familiar with Russell’s work on the multiple-relation theory of judgment will at some point have puzzled over the map of the five-term understanding complex at the end of Chapter 1, Part II of his Theory of Knowledge (1913). Russell presents the map with the intention of clarifying what goes on when a subject S understands the “proposition” that A and B are similar. But the map raises more questions than it answers. In this paper I present and develop some of the central issues that arise from Russell’s map, and I offer an interpretation of it that reflects his evolving views in the manuscript. I argue that multiple lines in the map are not meant to represent many relations, but rather one comprehensive multiple relation of understanding. And I argue that such a relation relates in a complex way due to the distinctive nature of its relata.
A common way to understand memory structures in the cognitive sciences is as a cognitive map. Cognitive maps are representational systems organized by dimensions shared with physical space. The appeal to these maps begins literally: as an account of how spatial information is represented and used to inform spatial navigation. Invocations of cognitive maps, however, are often more ambitious; cognitive maps are meant to scale up and provide the basis for our more sophisticated memory capacities. The extension is not meant to be metaphorical, but the way in which these richer mental structures are supposed to remain map-like is rarely made explicit. Here we investigate this missing link, asking: how do cognitive maps represent non-spatial information? We begin with a survey of foundational work on spatial cognitive maps and then provide a comparative review of alternative, non-spatial representational structures. We then turn to several cutting-edge projects that are engaged in the task of scaling up cognitive maps so as to accommodate non-spatial information: first, the spatial-isometric approach, encoding content that is non-spatial but in some sense isomorphic to spatial content; second, the abstraction approach, encoding content that is an abstraction over first-order spatial information; and third, the embedding approach, embedding non-spatial information within a spatial context, a prominent example being the Method-of-Loci. Putting these cases alongside one another reveals the variety of options available for building cognitive maps, and the distinctive limitations of each. We conclude by reflecting on where these results take us in terms of understanding the place of cognitive maps in memory.
An argument map visually represents the structure of an argument, outlining its informal logical connections and informing judgments as to its worthiness. Argument mapping can be augmented with dedicated software that aids the mapping process. Empirical evidence suggests that semester‐length subjects using argument mapping along with dedicated software can produce remarkable increases in students’ critical thinking abilities. Introducing such specialised subjects, however, is often practically and politically difficult. This study ascertains student perceptions of the use of argument mapping in two large, regular, semester‐length classes in a Business and Economics Faculty at the University of Melbourne. Unlike the semester‐length expert‐led trials in prior research, in our study only one expert‐led session was conducted at the beginning of the semester, followed by class practice. Results of a survey conducted at the end of the semester show that, with reservations, even this minimalist, ‘one‐shot inoculation’ of argument mapping is effective in terms of students’ perceptions of improvements in their critical thinking skills.
Twenty-two years since the arrival of the first consumer digital camera (Tatsuno 36), Western culture is now characterised by ubiquitous photography. The disappearance of the camera inside the mobile phone has ensured that even the most banal moments of the day can become a point of photographic reverie, potentially shared instantly. Supported by the increased affordability of computers, digital storage and access to broadband, consumers are provided with new opportunities for the capture and transmission of images, particularly online, where snapshot photography is being transformed from an individual to a communal activity. As the digital image proliferates online and becomes increasingly delivered via networks, numerous practices emerge surrounding the image’s transmission, encoding, ordering and reception. Informing these practices is a growing cultural shift towards a conception of the Internet as a platform for sharing and collaboration, supported by a mosaic of technologies termed Web 2.0.
Background: Ontologies and taxonomies are among the most important computational resources for molecular biology and bioinformatics. A series of recent papers has shown that the Gene Ontology (GO), the most prominent taxonomic resource in these fields, is marked by flaws of certain characteristic types, which flow from a failure to address basic ontological principles. As yet, no methods have been proposed which would allow ontology curators to pinpoint flawed terms or definitions in ontologies in a systematic way. Results: We present computational methods that automatically identify terms and definitions which are defined in a circular or unintelligible way. We further demonstrate the potential of these methods by applying them to isolate a subset of 6001 problematic GO terms. By automatically aligning GO with other ontologies and taxonomies we were able to propose alternative synonyms and definitions for some of these problematic terms. This allows us to demonstrate that these other resources do not contain definitions superior to those supplied by GO. Conclusion: Our methods provide reliable indications of the quality of terms and definitions in ontologies and taxonomies. Further, they are well suited to drawing ontology curators’ attention to those terms that are ill-defined. We have further shown the limitations of ontology mapping and alignment in assisting ontology curators in rectifying problems, thus pointing to the need for manual curation.
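The simplest check of the kind this abstract describes — flagging a definition that reuses the very term it defines — can be sketched as follows. This is a toy heuristic for illustration only, not the authors' actual method; the example term/definition pairs are invented.

```python
def find_circular_definitions(definitions):
    """Flag terms whose definition text contains the defined term
    itself — a simple heuristic for one kind of circularity."""
    flagged = []
    for term, definition in definitions.items():
        if term.lower() in definition.lower():
            flagged.append(term)
    return flagged

# Invented examples in the style of ontology term/definition pairs
defs = {
    "cell growth": "The process by which cell growth occurs in a tissue.",
    "apoptosis": "A form of programmed cell death.",
}
print(find_circular_definitions(defs))
```

Real curation tools would also need tokenization, synonym handling, and checks for cycles through chains of definitions; the sketch shows only the direct-reuse case.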
After coining the term “philopsychy” to describe a “soul-loving” approach to philosophical practice, especially when it welcomes a creative synthesis of philosophy and psychology, this article identifies a system of geometrical figures (or “maps”) that can be used to stimulate reflection on various types of perspectival differences. The maps are part of the author’s previously established mapping methodology, known as the Geometry of Logic. As an illustration of how philosophy can influence the development of psychology, Immanuel Kant’s table of twelve categories and Carl Jung’s theory of psychological types are shown to share a common logical structure. Just as Kant proposes four basic categories, each expressed in terms of three subordinate categories, Jung proposes four basic personality functions, each having three possible manifestations. The concluding section presents four scenarios illustrating how such maps can be used in philosophical counseling sessions to stimulate philopsychic insight.
In contemporary human brain mapping, it is commonly assumed that the “mind is what the brain does”. Based on that assumption, task-based imaging studies of the last three decades measured differences in brain activity that are thought to reflect the exercise of human mental capacities (e.g., perception, attention, memory). With the advancement of resting state studies, tractography and graph theory in the last decade, however, it became possible to study human brain connectivity without relying on cognitive tasks or constructs. It is therefore currently an open question whether the assumption that “the mind is what the brain does” is an indispensable working hypothesis in human brain mapping. This paper argues that the hypothesis is, in fact, dispensable. If it is dropped, researchers can “meet the brain on its own terms” by searching for new, more adequate concepts to describe human brain organization. Neuroscientists can establish such concepts by conducting exploratory experiments that do not test particular cognitive hypotheses. The paper provides a systematic account of exploratory neuroscientific research that would allow researchers to form new concepts and formulate general principles of brain connectivity, and to combine connectivity studies with manipulation methods to identify neural entities in the brain. These research strategies would be most fruitful if applied to the mesoscopic scale of neuronal assemblies, since the organizational principles at this scale are currently largely unknown. This could help researchers to link microscopic and macroscopic evidence to provide a more comprehensive understanding of the human brain. The paper concludes by comparing this account of exploratory neuroscientific experiments to recent proposals for large-scale, discovery-based studies of human brain connectivity.
It is suggested that the impetus to generate models is probably the most fundamental point of connection between mysticism and psychology. In their concern with the relation between ‘unseen’ realms and the ‘seen’, mystical maps parallel cognitive models of the relation between ‘unconscious’ and ‘conscious’ processes. The map or model constitutes an explanation employing terms current within the respective canon. The case of language mysticism is examined to illustrate the premise that cognitive models may benefit from an understanding of the kinds of experiences gained, and explanatory concepts advanced, within mystical traditions. Language mysticism is of particular interest on account of the central role thought to be played by language in relation to self and the individual's construction of reality. The discussion focuses on traditions of language mysticism within Judaism, in which emphasis is placed on the deconstruction of language into primary elements and the overarching significance of the divine Name. Analysis of the detailed techniques used suggests ways in which multiple associations to any given word/concept were consciously explored in an altered state. It appears that these mystics were consciously engaging with what are normally preconscious cognitive processes, whereby schematic associations to sensory images or thoughts are activated. The testimony from their writings implies that these mystics experienced distortions of the sense of self, which may suggest that, in the normal state, ‘I’ is constructed in relation to the preconscious system of associations. Moreover, an important feature of Hebrew language mysticism is its emphasis on embodiment: specific associations were deemed to exist between the letters and each structure of the body. Implications, first, for the relationship between language and self, and, second, for the role of embodiment in relation to self are discussed.
The continual emphasis on the Name of God throughout the linguistic practices may have provided a means for effectively replacing the cognitive indexing function hypothesized here to be normally played by ‘I’ with a more transpersonal cognitive index, especially in relation to memory.
Any logic can be represented as a certain collection of well-orderings, which may or may not admit some algebraic structure such as a generalized lattice. Universal logic should then refer to the class of all subclasses of all well-orderings. One can construct a mapping between Hilbert space and the class of all logics. Thus there exists a correspondence between universal logic and the world if the latter is considered a collection of wave functions, as which the points in Hilbert space can be interpreted. The correspondence can be further extended to the foundation of mathematics by set theory and arithmetic, and thus to all mathematics.
In this paper, we examine the extent to which the concept of emergence can be applied to questions about the nature and moral justification of territorial borders. Although the term is used with many different senses in philosophy, the concept of “weak emergence”—advocated by, for example, Sawyer (2002, 2005) and Bedau (1997)—is especially applicable, since it forces a distinction between prediction and explanation that connects with several issues in the discussion of territory. In particular, we argue, weak emergentism about borders allows us to distinguish between (a) using a theory of territory to say where a border should be drawn, and (b) looking at an existing border and saying whether or not it is justified (Miller, 2012; Nine, 2012; Stilz, 2011). Many authors conflate these two factors, or identify them by claiming that having one without the other is in some sense incoherent. But on our account—given the concept of emergence—one might unproblematically be able to have (b) without (a); at the very least, the distinction between these two issues is much more significant than has often been recognised, and more importantly gives us some reason to prefer “statist” as opposed to “cultural” theories of territorial borders. We conclude with some further reflections on related matters concerning, firstly, the apparent causal powers of borders, and secondly, the different ways in which borders are physically implemented (e.g., land vs. water).
One can construct a mapping between Hilbert space and the class of all logics if the latter is defined as the set of all well-orderings of some relevant set (or class). That mapping can be further interpreted as a mapping between all states of all quantum systems, on the one hand, and all logics, on the other hand. The collection of all states of all quantum systems is equivalent to the world (the universe) as a whole. Thus that mapping establishes a fundamentally philosophical correspondence between the physical world and universal logic by the mediation of a special and fundamental structure, that of Hilbert space, and therefore, between quantum mechanics and logic by mathematics. Furthermore, Hilbert space can be interpreted as the free variable of “quantum information”, and any point in it as a value of that variable once “bound” by the axiom of choice.
that can serve as a foundation for more refined ontologies in the field of proteomics. Standard data sources classify proteins in terms of just one or two specific aspects. Thus SCOP (Structural Classification of Proteins) is described as classifying proteins on the basis of structural features; SWISSPROT annotates proteins on the basis of their structure and of parameters like post-translational modifications. Such data sources are connected to each other by pairwise term-to-term mappings. However, there are obstacles which stand in the way of combining them together to form a robust meta-classification of the needed sort. We discuss some formal ontological principles which should be taken into account within the existing data sources in order to make such a meta-classification possible, taking into account also the Gene Ontology (GO) and its application to the annotation of proteins.
The free energy principle is notoriously difficult to understand. In this paper, we relate the principle to a framework that philosophers of biology are familiar with: Ruth Millikan's teleosemantics. We argue that: (i) systems that minimise free energy are systems with a proper function; and (ii) Karl Friston's notion of *implicit modelling* can be understood in terms of Millikan's notion of *mapping relations*. Our analysis reveals some surprising formal similarities between the two frameworks, and suggests interesting lines of future research. We hope this will aid further philosophical evaluation of the free energy principle.
In contemporary literature, the fact that there is negative causation is the primary motivation for rejecting the physical connection view, and arguing for alternative accounts of causation. In this paper we insist that such a conclusion is too fast. We present two frameworks, which help the proponent of the physical connection view to resist the anti-connectionist conclusion. According to the first framework, there are positive causal claims, which co-refer with at least some negative causal claims. According to the second framework, negative causal claims are generated from mapping and comparing different scenarios, which can fully be accounted for in purely positive terms. Since the positive causal claims evoked by both frameworks pose no obvious difficulties for the physical connection view, these frameworks make it possible for the connectionists to accommodate negative causal claims into their theory. Once these strategies are available, the connectionists become able to render all the arguments starting from the observation that there are negative causal claims in our causal discourse inconclusive with regard to the viability of the physical connection view.
Event concepts are unstructured atomic concepts that apply to event types. A paradigm example of such an event type would be that of diaper changing, and so a putative example of an atomic event concept would be DADDY'S-CHANGING-MY-DIAPER. I will defend two claims about such concepts. First, the conceptual claim that it is in principle possible to possess a concept such as DADDY'S-CHANGING-MY-DIAPER without possessing the concept DIAPER. Second, the empirical claim that we actually possess such concepts and that they play an important role in our cognitive lives. The argument for the empirical claim has the form of an inference to the best explanation and is aimed at those who are already willing to attribute concepts and beliefs to infants and nonhuman animals. Many animals and prelinguistic infants seem capable of re-identifying event-types in the world, and they seem to store information about things happening at particular times and places. My account offers a plausible model of how such organisms are able to do this without attributing linguistically structured mental states to them. And although language allows adults to form linguistically structured mental representations of the world, there is no good reason to think that such structured representations necessarily replace the unstructured ones. There is also no good reason for a philosopher who is willing to explain the behavior of an organism by appealing to atomic concepts of individuals or kinds to not use a similar form of explanation when explaining the organism's capacity to recognize events. We can form empirical concepts of individuals, kinds, properties, event-types, and states of affairs, among other things, and I assume that such concepts function like what François Recanati calls ‘mental files’ or what Ruth Millikan calls ‘substance concepts’ (Recanati 2012; Millikan 1999, 2000, 2017).
To possess such a concept one must have a reliable capacity to re-identify the object in question, but this capacity of re-identification does not fix the reference of the concept. Such concepts allow us to collect and utilize useful information about things that we re-encounter in our environment. We can distinguish between a perception-action system and a perception-belief system, and I will argue that empirical concepts, including atomic event concepts, can play a role in both systems. The perception-action system involves the application of concepts in the service of (often skilled) action. We can think of the concept as a mental file containing motor-plans that can be activated once the individual recognizes that they are in a certain situation. In this way, recognizing something (whether an object or an event) as a token of a type plays a role in guiding immediate action. The perception-belief system, in contrast, allows for the formation of beliefs that can play a role in deliberation and planning and in the formation of expectations. I distinguish between two particular types of belief which I call where-beliefs and when-beliefs, and I argue that we can model the formation of such perceptual beliefs in nonlinguistic animals and human infants in terms of the formation of a link between an empirical concept and a position on a cognitive map. According to the account offered, seemingly complex beliefs, such as a baby's belief that Daddy changed her diaper in the kitchen earlier, will not be linguistically structured. If we think that prelinguistic infants possess such concepts and are able to form such beliefs, it is likely that adults do too. The ability to form such beliefs does not require the capacity for public language, and we can model them in nonlinguistic terms; thus, we have no good reason to think of such beliefs as propositional attitudes.
Of course, we can use sentences to refer to such beliefs, and thus it is possible to think of such beliefs as somehow relations to propositions. But it is not clear to me what is gained by this, as we have a perfectly good way to think about the structure of such beliefs that does not involve any appeal to language.
Two hundred and sixty-three subjects each gave examples for one of five geographic categories: geographic features, geographic objects, geographic concepts, something geographic, and something that could be portrayed on a map. The frequencies of various responses were significantly different, indicating that the basic ontological terms feature, object, etc., are not interchangeable but carry different meanings when combined with adjectives indicating geographic or mappable. For all of the test phrases involving geographic, responses were predominantly natural features such as mountain, river, lake, ocean, hill. Artificial geographic features such as town and city were listed hardly at all for geographic categories, an outcome that contrasts sharply with the disciplinary self-understanding of academic geography. However, geographic artifacts and fiat objects, such as roads, cities, boundaries, countries, and states, were frequently listed by the subjects responding to the phrase something that could be portrayed on a map. In this paper, we present the results of these experiments in visual form, and provide interpretations and implications for further research.
This paper is about knowledge construction in music listening. It argues for an experiential approach to music cognition, stressing the dynamic-vectorial field of meaning rather than the symbolic field. Starting from the conceptual framework of deixis and indexical devices, it elaborates on the concept of pointing as a heuristic guide for sense-making which allows the listener to conceive of perceptual elements in terms of salience, valence and semantic weight. As such, the act of (mental) pointing can be predicative, either in a nominalistic or processual way, giving a description of the temporal evolution of a situation as against episodic nominalizations that refer to just one single instance of the process. The latter, especially, are characterized by distancing and polarization between the listener and the music, allowing him/her to deal with the music also at a level of mental imagery and to construct a mental or cognitive map of its unfolding.
The paper argues that the depiction of the Mediterranean coast of Africa in Ptolemy’s Geography was based on a source similar to the Stadiasmus of the Great Sea. Ptolemy’s and the Stadiasmus’ toponymy and distances between major points are mostly in good agreement. Ptolemy’s place names overlap with those of the Stadiasmus by 80%, and the total length of the coastline from Alexandria to Utica on Ptolemy’s map deviates from the Stadiasmus data by only 1% or 1.5%. A number of serious disagreements between Ptolemy’s map and the Stadiasmus regarding the length of particular coastal stretches can be explained by assuming that Ptolemy had to tailor the distance data derived from periploi to his other sources, especially, to the longitudes of the key reference points, such as Cape Phyces, Cyrena, Berenica, Aesporis, Thena, and Carthage. A notable stretching and the subsequent contraction of the coast between Alexandria and Cyrenaica, as are exhibited by Ptolemy’s map relative to the Stadiasmus’ data, can be explained by assuming that several points on this coast were tied to the position of Crete, which was moved to the west being pushed by the westward shift of Rhodes. A sharp contraction of the two coastal stretches of the Great Sirte, oriented along the north-south direction, can be explained by Ptolemy’s erroneously underestimated value for the circumference of the Earth. The analysis of this contraction, as well as of the east-west stretching of the coast between Alexandria and Cyrene in angular terms relative to the modern map, makes it possible to assess the magnitude of Ptolemy’s error and to determine the length of his stade. This analysis shows that Ptolemy’s value for the circumference of the Earth must have been underestimated by approximately 20–27%, and Ptolemy’s stade must have been approximately 175–185 m in length.
Comparison of the Stadiasmus distance data with the modern map shows that the average length of the stade was close to 179 m or to the “common” stade of 185 m for the stretch between Alexandria and Berenice.
Ontology is a burgeoning field, involving researchers from the computer science, philosophy, data and software engineering, logic, linguistics, and terminology domains. Many ontology-related terms with precise meanings in one of these domains have different meanings in others. Our purpose here is to initiate a path towards disambiguation of such terms. We draw primarily on the literature of biomedical informatics, not least because the problems caused by unclear or ambiguous use of terms have been there most thoroughly addressed. We advance a proposal resting on a distinction of three levels too often run together in biomedical ontology research: 1. the level of reality; 2. the level of cognitive representations of this reality; 3. the level of textual and graphical artifacts. We propose a reference terminology for ontology research and development that is designed to serve as common hub into which the several competing disciplinary terminologies can be mapped. We then justify our terminological choices through a critical treatment of the ‘concept orientation’ in biomedical terminology research.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Does geometry constitute a core set of intuitions present in all humans, regardless of their language or schooling? We used two non-verbal tests to probe the conceptual primitives of geometry in the Munduruku, an isolated Amazonian indigene group. Our results provide evidence for geometrical intuitions in the absence of schooling, experience with graphic symbols or maps, or a rich language of geometrical terms.
This paper demarcates a theoretically interesting class of "evaluational adjectives." This class includes predicates expressing various kinds of normative and epistemic evaluation, such as predicates of personal taste, aesthetic adjectives, moral adjectives, and epistemic adjectives, among others. Evaluational adjectives are distinguished, empirically, in exhibiting phenomena such as discourse-oriented use, felicitous embedding under the attitude verb `find', and sorites-susceptibility in the comparative form. A unified degree-based semantics is developed: What distinguishes evaluational adjectives, semantically, is that they denote context-dependent measure functions ("evaluational perspectives")—context-dependent mappings to degrees of taste, beauty, probability, etc., depending on the adjective. This perspective-sensitivity characterizing the class of evaluational adjectives cannot be assimilated to vagueness, sensitivity to an experiencer argument, or multidimensionality; and it cannot be demarcated in terms of pretheoretic notions of subjectivity, common in the literature. I propose that certain diagnostics for "subjective" expressions be analyzed instead in terms of a precisely specified kind of discourse-oriented use of context-sensitive language. I close by applying the account to `find x PRED' ascriptions.
"Understanding Scientific Progress constitutes a potentially enormous and revolutionary advancement in philosophy of science. It deserves to be read and studied by everyone with any interest in or connection with physics or the theory of science. Maxwell cites the work of Hume, Kant, J.S. Mill, Ludwig Boltzmann, Pierre Duhem, Einstein, Henri Poincaré, C.S. Peirce, Whitehead, Russell, Carnap, A.J. Ayer, Karl Popper, Thomas Kuhn, Imre Lakatos, Paul Feyerabend, Nelson Goodman, Bas van Fraassen, and numerous others. He lauds Popper for advancing beyond (...) verificationism and Hume’s problem of induction, but faults both Kuhn and Popper for being unable to show that and how their work could lead nearer to the truth." —Dr. LLOYD EBY teaches philosophy at The George Washington University and The Catholic University of America, in Washington, DC "Maxwell's aim-oriented empiricism is in my opinion a very significant contribution to the philosophy of science. I hope that it will be widely discussed and debated." – ALAN SOKAL, Professor of Physics, New York University "Maxwell takes up the philosophical challenge of how natural science makes progress and provides a superb treatment of the problem in terms of the contrast between traditional conceptions and his own scientifically-informed theory—aim-oriented empiricism. This clear and rigorously-argued work deserves the attention of scientists and philosophers alike, especially those who believe that it is the accumulation of knowledge and technology that answers the question."—LEEMON McHENRY, California State University, Northridge "Maxwell has distilled the finest essence of the scientific enterprise. Science is about making the world a better place. Sometimes science loses its way. The future depends on scientists doing the right things for the right reasons. 
Maxwell's Aim-Oriented Empiricism is a map to put science back on the right track."—TIMOTHY McGETTIGAN, Professor of Sociology, Colorado State University - Pueblo "Maxwell has a great deal to offer with these important ideas, and deserves to be much more widely recognised than he is. Readers with a background in philosophy of science will appreciate the rigour and thoroughness of his argument, while more general readers will find his aim-oriented rationality a promising way forward in terms of a future sustainable and wise social order."—David Lorimer, Paradigm Explorer, 2017/2 "This is a book about the very core problems of the philosophy of science. The idea of replacing Standard Empiricism with Aim-Oriented Empiricism is understood by Maxwell as the key to the solution of these central problems. Maxwell handles his main tool masterfully, producing a fascinating and important reading for his colleagues in the field. However, Nicholas Maxwell is much more than just a philosopher of science. In the closing part of the book he lets the reader know about his deep concern about, and possible solutions to, the biggest problems humanity is facing."—Professor PEETER MÜÜRSEPP, Tallinn University of Technology, Estonia “For many years, Maxwell has been arguing that fundamental philosophical problems about scientific progress, especially the problem of induction, cannot be solved granted standard empiricism (SE), a doctrine which, he thinks, most scientists and philosophers of science take for granted. A key tenet of SE is that no permanent thesis about the world can be accepted as a part of scientific knowledge independent of evidence. For a number of reasons, Maxwell argues, we need to adopt a rather different conception of science which he calls aim-oriented empiricism (AOE). 
This holds that we need to construe physics as accepting, as a part of theoretical scientific knowledge, a hierarchy of metaphysical theses about the comprehensibility and knowability of the universe, which become increasingly insubstantial as we go up the hierarchy. In his book “Understanding Scientific Progress: Aim-Oriented Empiricism”, Maxwell gives a concise and excellent illustration of this view and the arguments supporting it… Maxwell’s book is a potentially important contribution to our understanding of scientific progress and philosophy of science more generally. Maybe it is time for scientists and philosophers to acknowledge that science has to make metaphysical assumptions concerning the knowability and comprehensibility of the universe. Fundamental philosophical problems about scientific progress, which cannot be solved granted SE, may be solved granted AOE.” Professor SHAN GAO, Shanxi University, China. (shrink)
Faultless disagreement and faultless retraction have been taken to motivate relativism for predicates of personal taste, like ‘tasty’. Less attention has been devoted to the question of what aspect of their meaning underlies this relativist behavior. This paper illustrates these same phenomena with a new category of expressions: appearance predicates, like ‘tastes vegan’ and ‘looks blue’. Appearance predicates and predicates of personal taste both fall into the broader category of experiential predicates. Approaching predicates of personal taste from this angle suggests (...) that their relativist behavior is due to their experience-sensitivity, rather than their evaluative meaning. Furthermore, appearance predicates hold interest beyond what they can teach us about predicates of personal taste. Examination of a variety of uses of appearance predicates reveals that they give rise to relativist behavior for a variety of reasons—including some that apply also to other types of expressions, such as epistemic modals and comparative terms. This paper thus serves both to probe the source of relativist behavior in discourse about personal taste and to map out this kind of behavior in the rich and under-explored discourse about appearances. (shrink)
We have a variety of different ways of dividing up, classifying, mapping, sorting and listing the objects in reality. The theory of granular partitions presented here seeks to provide a general and unified basis for understanding such phenomena in formal terms that is more realistic than existing alternatives. Our theory has two orthogonal parts: the first is a theory of classification; it provides an account of partitions as cells and subcells; the second is a theory of reference or intentionality; it (...) provides an account of how cells and subcells relate to objects in reality. We define a notion of well-formedness for partitions, and we give an account of what it means for a partition to project onto objects in reality. We continue by classifying partitions along three axes: (a) in terms of the degree of correspondence between partition cells and objects in reality; (b) in terms of the degree to which a partition represents the mereological structure of the domain it is projected onto; and (c) in terms of the degree of completeness with which a partition represents this domain. (shrink)
Many empirically minded philosophers have used neuroscientific data to argue against the multiple realization of cognitive functions in existing biological organisms. I argue that neuroscientists themselves have proposed a biologically based concept of multiple realization as an alternative to interpreting empirical findings in terms of one‐to‐one structure‐function mappings. I introduce this concept and its associated research framework and also show how some of the main neuroscience‐based arguments against multiple realization go wrong. *Received October 2009; revised December 2009. †To contact the author, (...) please write to: Department of Philosophy, 260 English‐Philosophy Building, University of Iowa, Iowa City, IA 52242; e‐mail: carrie‐[email protected] (shrink)
Three classic distinctions specify that truths can be necessary versus contingent, analytic versus synthetic, and a priori versus a posteriori. The philosopher reading this article knows very well both how useful and ordinary such distinctions are in our conceptual work and that they have been subject to many detailed debates, especially the last two. In the following pages, I do not wish to discuss how far they may be tenable. I shall assume that, if they are reasonable and unproblematic (...) in some ordinary cases, then they can be used in order to understand what kind of knowledge the maker’s knowledge is. By this I mean the sort of knowledge that Alice enjoys when she holds the information that Bob’s coffee is sweetened because she just put two spoons of sugar in it herself. The maker’s knowledge tradition is quite important but it is not mainstream in modern and analytic epistemology and lacks grounding in terms of exactly what sort of knowledge one is talking about. My suggestion is that this grounding can be provided by a minimalist approach, based on an information-theoretical analysis. 
In the article, I argue that we need to decouple a fourth distinction, namely informative versus uninformative, from the previous three and, in particular, from its implicit association with analytic versus synthetic and a priori versus a posteriori; such a decoupling facilitates, and is facilitated by, moving from a monoagent to a multiagent approach: the distinctions qualify a proposition, a message, or some information not just in themselves but relationally, with respect to an informational agent; the decoupling and the multiagent approach enable a re-mapping of currently available positions in epistemology on these four dichotomies; within such a re-mapping, two positions, capturing the nature of a witness’ knowledge and of a maker’s knowledge, can best be described as contingent, synthetic, a posteriori, and uninformative and as contingent, synthetic, weakly a priori, and uninformative respectively. In the conclusion, I indicate why the analysis of the maker’s knowledge has important consequences in all those cases in which the poietic intervention on a system determines the truth of the model of that system. (shrink)
The Morris water maze has been put forward in the philosophy of neuroscience as an example of an experimental arrangement that may be used to delineate the cognitive faculty of spatial memory (e.g., Craver and Darden, Theory and method in the neurosciences, University of Pittsburgh Press, Pittsburgh, 2001; Craver, Explaining the brain: Mechanisms and the mosaic unity of neuroscience, Oxford University Press, Oxford, 2007). However, in the experimental and review literature on the water maze throughout the history of its use, (...) we encounter numerous responses to the question of “what” phenomenon it circumscribes ranging from cognitive functions (e.g., “spatial learning”, “spatial navigation”), to representational changes (e.g., “cognitive map formation”) to terms that appear to refer exclusively to observable changes in behavior (e.g., “water maze performance”). To date, philosophical analyses of the water maze have not been directed either at sorting out what phenomenon the device delineates or at the sources of the different answers to the question of what. I undertake both of these tasks in this paper. I begin with an analysis of Morris’s first published research study using the water maze and demonstrate that he emerged from it with an experimental learning paradigm that at best circumscribed a discrete set of observable changes in behavior. However, it delineated neither a discrete set of representational changes nor a discrete cognitive function. I cite this in combination with a reductionist-oriented research agenda in cellular and molecular neurobiology dating back to the 1980s as two sources of the lack of consistency across the history of the experimental and review literature as to what is under study in the water maze. (shrink)
In this paper we propose a formal theory of partitions (ways of dividing up or sorting or mapping reality) and we show how the theory can be applied in the geospatial domain. We characterize partitions at two levels: as systems of cells (theory A), and in terms of their projective relation to reality (theory B). We lay down conditions of well-formedness for partitions and we define what it means for partitions to project truly onto reality. We continue by classifying well-formed (...) partitions along three axes: (a) degree of correspondence between partition cells and objects in reality; (b) degree to which a partition represents the mereological structure of the domain it is projected onto; and (c) degree of completeness and exhaustiveness with which a partition represents reality. This classification is used to characterize three types of partitions that play an important role in spatial information science: cadastral partitions, categorical coverages, and the partitions involved in folk categorizations of the geospatial domain. (shrink)
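The two-level characterization of partitions can be given a small computational illustration. The following Python sketch is my own toy example, not the authors' formal apparatus; the names (`partition`, `projection`, `well_formed`) and the two conditions checked are simplifying assumptions standing in for the paper's richer notions of well-formedness and true projection. It represents a partition as a tree of cells (theory A) and its projective relation to reality as a mapping from cells to sets of objects (theory B):

```python
# A partition as a tree of cells: each cell maps to its list of subcells.
partition = {
    "USA": ["Iowa", "Texas"],  # subcells of the root cell
    "Iowa": [],
    "Texas": [],
}

# The projective relation: each cell maps to the objects it projects onto.
projection = {
    "USA": {"des_moines", "austin", "houston"},
    "Iowa": {"des_moines"},
    "Texas": {"austin", "houston"},
}

def well_formed(partition, projection):
    """Check two simplified well-formedness conditions: (1) each subcell
    projects onto a subset of what its parent cell projects onto, and
    (2) sister subcells project onto disjoint sets of objects."""
    for parent, children in partition.items():
        for child in children:
            if not projection[child] <= projection[parent]:
                return False
        for i, a in enumerate(children):
            for b in children[i + 1:]:
                if projection[a] & projection[b]:
                    return False
    return True

print(well_formed(partition, projection))  # True for this toy example
```

On this sketch, a cadastral-style partition satisfies both conditions by construction, while a folk categorization of the geospatial domain might fail the disjointness check, which is one way of reading the paper's third classificatory axis.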
Unless one embraces activities as foundational, understanding activities in mechanisms requires an account of the means by which entities in biological mechanisms engage in their activities—an account that does not merely explain activities in terms of more basic entities and activities. Recent biological research on molecular motors exemplifies such an account, one that explains activities in terms of free energy and constraints. After describing the characteristic “stepping” activities of these molecules and mapping the stages of those steps onto the stages (...) of the motors’ hydrolytic cycles, researchers pieced together from images of the molecules in different hydrolyzation states accounts of how the chemical energy in ATP is transformed in the constrained environments of the motors into the characteristic activities of the motors. We argue that New Mechanism’s standard set of analytic categories—entities, activities, and organization—should be expanded to include constraints and energetics. Not only is such an expansion required descriptively to capture research on molecular motors but, more importantly from a philosophical point of view, it enables a non-regressive account of activities in mechanisms. In other words, this expansion enables a philosophical account of mechanistic explanation that avoids a regress of entities and activities “all the way down.” Rather, mechanistic explanation bottoms out in constraints and energetics. (shrink)
Discussions over whether natural kinds exist, what the nature of their existence is, and whether natural kinds are themselves natural kinds aim not only to characterize the kinds of things that exist in the world, but also to establish what knowledge of these categories can provide. Although philosophically critical, much of the past discussion of natural kinds has answered these questions in a way that is unresponsive to, or has actively avoided, discussion of the empirical use of natural kinds and (...) what I dub “activities of natural kinding” and “natural kinding practices”. The natural kinds of a particular discipline are those entities, events, mechanisms, processes, relationships, and concepts that delimit investigation within it—but we might reasonably ask: How are these natural kinds discovered? How are they made? Are they revisable? Where do they come from? A turn to natural kinding practices reveals a new set of questions open for investigation: How do natural kinds explain through practice? What are natural kinding practices and classifications, and why should we care? What is the nature of natural kinds viewed as a set of activities? How do practice approaches to natural kinds shape and reconfigure scientific disciplines? Natural kinds have traditionally been discussed in terms of how they classify the contents of the world. The metaphysical project has been one which identifies essences, laws, sameness relations, fundamental properties, and clusters of family resemblances, and how these map out the ontological space of the world. But how this is actually done has been less important in the discussion than the resultant categories that are produced. I aim to rectify these omissions and suggest a new metaphysical project investigating kinds in practice. (shrink)
The term “Complex Systems Biology” was introduced a few years ago [Kaneko, 2006] and, although not yet in widespread use, it seems particularly well suited to indicate an approach to biology which is well rooted in complex systems science. Although broad generalizations are always dangerous, it is safe to state that mainstream biology has been largely dominated by a gene-centric view in recent decades, due to the success of molecular biology. So the one gene–one trait approach, (...) which has often proved to be effective, has been extended to cover even complex traits. This simplifying view has been appropriately criticized, and the movement called systems biology has taken off. Systems biology [Noble, 2006] emphasizes the presence of several feedback loops in biological systems, which severely limit the range of validity of explanations based upon linear causal chains (e.g. gene → behaviour). Mathematical modelling is one of the favourite tools of systems biologists for analyzing the possible effects of competing negative and positive feedback loops which can be observed at several levels (from molecules to organelles, cells, tissues, organs, organisms, ecosystems). Systems biology is by now a well-established field, as can be inferred from the rapid growth in the number of conferences and journals devoted to it, as well as from the existence of several grants and funded projects. Systems biology is mainly focused upon the description of specific biological items, such as specific organisms, specific organs in a class of animals, or specific genetic-metabolic circuits. It therefore leaves open the issue of the search for general principles of biological organization, which apply to all living beings or at least to broad classes. We know indeed that there are some principles of this kind, biological evolution being the most famous one. The theory of cellular organization also qualifies as a general principle. 
But the main focus of biological research has been that of studying specific cases, with some reluctance to accept (and perhaps a limited interest in) broad generalizations. This may however change, and this is indeed the challenge of complex systems biology: looking for general principles in biological systems, in the spirit of complex systems science, which searches for similar features and behaviours in various kinds of systems. The hope of finding such general principles appears well founded, and I will show in Section 2 that there are indeed data which provide support for this claim. Besides data, there are also general ideas and models concerning the way in which biological systems work. The strategy, in this case, is that of introducing simplified models of biological organisms or processes, and looking for their generic properties: this term, borrowed from statistical physics, is used for those properties which are shared by a wide class of systems. In order to model these properties, the most effective approach has so far been that of using ensembles of systems, where each member can be different from the others, and looking for those properties which are widespread. This approach was introduced many years ago [Kauffman, 1971] in modelling gene regulatory networks. At that time very little information was available about the way in which the expression of a given gene affects that of other genes, apart from the fact that this influence is real and can be studied in a few selected cases (e.g. the lactose metabolism in E. coli). Today, after many years of triumphs of molecular biology, much more has been discovered; however, the possibility of describing a complete map of gene-gene interactions in a moderately complex organism is still out of reach. Therefore the goal of fully describing a network of interacting genes in a real organism could not be (and still cannot be) achieved. 
But a different approach has proven very fruitful: that of asking what the typical properties of such a set of interacting genes are. Making some plausible hypotheses and introducing some simplifying assumptions, Kauffman was able to address some important problems. In particular, he drew attention to the fact that a dynamical system of interacting genes displays self-organizing properties which explain some key aspects of life, most notably the existence of a limited number of cellular types in every multicellular organism (these numbers are of the order of a few hundred, while the number of theoretically possible types, absent interactions, would be far larger than the number of protons in the universe). In Section 3 I will describe the ensemble-based approach in the context of gene regulatory networks, and I will show that it can describe some important experimental data. Finally, in Section 4 I will discuss some methodological aspects. (shrink)
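The ensemble approach described above can be sketched in a few lines of code. The following Python toy model (my own illustration in the spirit of Kauffman's random Boolean networks, not code from the paper; the parameters N = 12 genes, K = 2 regulators per gene, and 200 random initial states are arbitrary choices) builds one random member of the ensemble and counts the distinct attractors its dynamics fall into, the analogue of cellular types:

```python
import random

def random_boolean_network(n_genes, k, rng):
    """Build one ensemble member: each gene gets k random regulators
    and a random Boolean update rule (a lookup table over the 2**k
    possible regulator states). Returns the synchronous update map."""
    inputs = [rng.sample(range(n_genes), k) for _ in range(n_genes)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
              for _ in range(n_genes)]

    def step(state):
        new = []
        for g in range(n_genes):
            idx = 0
            for regulator in inputs[g]:
                idx = (idx << 1) | state[regulator]
            new.append(tables[g][idx])
        return tuple(new)

    return step

def find_attractor(step, state):
    """Iterate the deterministic dynamics until a state repeats and
    return the cycle the trajectory falls into, as a frozenset."""
    seen = {}
    t = 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    cycle_start = seen[state]
    return frozenset(s for s, when in seen.items() if when >= cycle_start)

rng = random.Random(0)
n, k = 12, 2
step = random_boolean_network(n, k, rng)
attractors = set()
for _ in range(200):
    init = tuple(rng.randint(0, 1) for _ in range(n))
    attractors.add(find_attractor(step, init))
print(len(attractors))  # distinct attractors ("cell types") found
```

The generic property at issue is visible even in this toy run: the number of attractors found is tiny compared with the 2**12 = 4096 possible states, mirroring the contrast between a few hundred cell types and the astronomically large space of theoretically possible ones.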
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...) counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted. (shrink)
Information-theoretic approaches to formal logic analyse the "common intuitive" concept of propositional implication (or argumental validity) in terms of information content of propositions and sets of propositions: one given proposition implies a second if the former contains all of the information contained by the latter; an argument is valid if the conclusion contains no information beyond that of the premise-set. This paper locates information-theoretic approaches historically, philosophically and pragmatically. Advantages and disadvantages are identified by examining such approaches in themselves and (...) by contrasting them with standard transformation-theoretic approaches. Transformation-theoretic approaches analyse validity (and thus implication) in terms of transformations that map one argument onto another: a given argument is valid if no transformation carries it onto an argument with all true premises and false conclusion. Model-theoretic, set-theoretic, and substitution-theoretic approaches, which dominate current literature, can be construed as transformation-theoretic, as can the so-called possible-worlds approaches. Ontic and epistemic presuppositions of both types of approaches are considered. Attention is given to the question of whether our historically cumulative experience applying logic is better explained from a purely information-theoretic perspective or from a purely transformation-theoretic perspective or whether apparent conflicts between the two types of approaches need to be reconciled in order to forge a new type of approach that recognizes their basic complementarity. (shrink)
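The information-theoretic criterion of validity can be made concrete with a brute-force truth-table check. The sketch below is my own textbook-style illustration, not the paper's formal apparatus; the function names `models` and `valid` are assumptions for the example. On the information-theoretic reading, the conclusion contains no information beyond the premise-set exactly when every truth-value assignment satisfying all premises also satisfies the conclusion:

```python
from itertools import product

def models(formula, atoms):
    """All truth-value assignments (tuples over the given atoms) that
    satisfy the formula, given as a function of an assignment dict."""
    return {vals for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))}

def valid(premises, conclusion, atoms):
    """Information-theoretic validity: the conclusion adds no
    information beyond the premise-set, i.e. every assignment
    satisfying all premises satisfies the conclusion."""
    premise_models = set(product([False, True], repeat=len(atoms)))
    for p in premises:
        premise_models &= models(p, atoms)
    return premise_models <= models(conclusion, atoms)

atoms = ["p", "q"]
# p, p -> q  |-  q : valid (q's information is contained in the premises)
print(valid([lambda v: v["p"], lambda v: (not v["p"]) or v["q"]],
            lambda v: v["q"], atoms))  # True
# q  |-  p : invalid (p carries information beyond q)
print(valid([lambda v: v["q"]], lambda v: v["p"], atoms))  # False
```

Note that the same check can be read transformation-theoretically: an argument is invalid exactly when some assignment (a "transformation" of the interpretation) makes all premises true and the conclusion false, which is the complementarity the paper examines.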
Sensibility, in any of its myriad realms – moral, physical, aesthetic, medical and so on – seems to be a paramount case of a higher-level, intentional property, not a basic property. Diderot famously made the bold and attributive move of postulating that matter itself senses, or that sensibility (perhaps better translated ‘sensitivity’ here) is a general or universal property of matter, even if he at times took a step back from this claim and called it a “supposition.” Crucially, sensibility is (...) here playing the role of a ‘booster’: it enables materialism to provide a full and rich account of the phenomena of conscious, sentient life, contrary to what its opponents hold: for if matter can sense, and sensibility is not a merely mechanical process, then the loftiest cognitive plateaus are accessible to materialist analysis, or at least belong to one and the same world as the rest of matter. This was noted by the astute anti-materialist critic, the Abbé Lelarge de Lignac, who, in his 1751 Lettres à un Amériquain, criticized Buffon for “granting to the body [la machine, a common term for the body at the time] a quality which is essential to minds, namely sensibility.” This view, here attributed to Buffon and definitely held by Diderot, was comparatively rare. If we look for the sources of this concept, the most notable ones are physiological and medical treatises by prominent figures such as Robert Whytt, Albrecht von Haller and the Montpellier vitalist Théophile de Bordeu. 
We then have, or so I shall try to sketch out, an intellectual landscape in which new – or newly articulated – properties such as irritability and sensibility are presented either as an experimental property of muscle fibers, that can be understood mechanistically (Hallerian irritability, as studied recently by Hubert Steinke and Dominique Boury) or a property of matter itself (whether specifically living matter as in Bordeu and his fellow montpelliérains Ménuret and Fouquet, or matter in general, as in Diderot). I am by no means convinced that it is one and the same ‘sensibility’ that is at issue in debates between these figures (as when Bordeu attacks Haller’s distinction between irritability and sensibility and claims that ‘his own’ property of sensibility is both more correct and more fundamental in organic beings), but I am interested in mapping out a topography of the problem of sensibility as property of matter or as vital force in mid-eighteenth-century debates – not an exhaustive cartography of all possible positions or theories, but an attempt to understand the ‘triangulation’ of three views: a vitalist view in which sensibility is fundamental, matching up with a conception of the organism as the sum of parts conceived as little lives (Bordeu et al.); a mechanist, or ‘enhanced mechanist’ view in which one can work upwards, step by step from the basic property of irritability to the higher-level property of sensibility (Haller); and, more eclectic, a materialist view which seeks to combine the mechanistic, componential rigour and explanatory power of the Hallerian approach, with the monistic and metaphysically explosive potential of the vitalist approach (Diderot). It is my hope that examining Diderot in the context of this triangulated topography of sensibility as property sheds light on his famous proclamation regarding sensibility as a universal property of matter. (shrink)
Multiple realizability has been at the heart of debates about whether the mind reduces to the brain, or whether the items of a special science reduce to the items of a physical science. I analyze the two central notions implied by the concept of multiple realizability: "multiplicity," otherwise known as property variability, and "realizability." Beginning with the latter, I distinguish three broad conceptual traditions. The Mathematical Tradition equates realization with a form of mapping between objects. Generally speaking, x realizes (or (...) is the realization of) y because elements of y map onto elements of x. The Logico-Semantic Tradition translates realization into a kind of intentional or semantic notion. Generally speaking, x realizes (or is the realization of) a term or concept y because x can be interpreted to meet the conditions for satisfying y. The Metaphysical Tradition views realization as a species of determination between objects. Generally speaking, x realizes (or is the realization of) y because x brings about or determines y. I then turn to the subject of property variability and define it in a formal way. I then conclude by discussing some debates over property identity and scientific theory reduction where the resulting notion of multiple realizability has played a central role, for example, whether the nonreductive consequences of multiple realizability can be circumvented by scientific theories framed in terms of narrow domain-specific properties, or wide disjunctive properties. (shrink)
The classical holism-reductionism debate, which has been of major importance to the development of ecological theory and methodology, is an epistemological patchwork. At any moment, there is a risk of it slipping into an incoherent, chaotic Tower of Babel. Yet philosophy, like the sciences, requires that words and their correlative concepts be used rigorously and univocally. The prevalent use of everyday language in the holism-reductionism issue may give a false impression regarding its underlying clarity and coherence. In reality, the conceptual (...) categories underlying the debate have yet to be accurately defined and consistently used. There is a need to map out a clear conceptual, logical and epistemological framework. To this end, we propose a minimalist epistemological foundation. The issue is easier to grasp if we keep in mind that holism generally represents the ontological background of emergentism, but does not necessarily coincide with it. We therefore speak in very loose terms of the “holism-reductionism” debate, although it would really be better characterised by the terms emergentism and reductionism. The confrontation between these antagonistic paradigms unfolds at various semantic and operational levels. In definitional terms, there is not just emergentism and reductionism, but various kinds of emergentisms and reductionisms. (shrink)
The widely accepted two-dimensional circumplex model of emotions posits that most instances of human emotional experience can be understood within the two general dimensions of valence and activation. Currently, this model is facing some criticism, because complex emotions in particular are hard to define within only these two general dimensions. The present theory-driven study introduces an innovative analytical approach that works outside the conventional two-dimensional paradigm. The main goal was to map and project semantic emotion space in (...) terms of mutual positions of various emotion prototypical categories. Participants (N = 187; 54.5% females) judged 16 discrete emotions in terms of valence, intensity, controllability and utility. The results revealed that these four dimensional input measures were uncorrelated. This implies that valence, intensity, controllability and utility represented clearly different qualities of discrete emotions in the judgments of the participants. Based on these data, we constructed a 3D hypercube-projection and compared it with various two-dimensional projections. This contrasting enabled us to detect several sources of bias when working with the traditional, two-dimensional analytical approach. Contrasting two-dimensional and three-dimensional projections revealed that the 2D models provided biased insights about how emotions are conceptually related to one another along multiple dimensions. The results of the present study point out the reductionist nature of the two-dimensional paradigm in the psychological theory of emotions and challenge the widely accepted circumplex model. (shrink)
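The uncorrelatedness claim above has a simple computational form. The Python sketch below uses synthetic ratings, not the study's data; the uniform draws, the seed, and the function name `pearson` are my assumptions for illustration. It computes pairwise Pearson correlations among mean ratings of 16 discrete emotions on the four dimensions, which is the precondition for treating them as genuinely different qualities:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

dimensions = ["valence", "intensity", "controllability", "utility"]
rng = random.Random(42)
# Hypothetical mean ratings of 16 discrete emotions on each dimension,
# drawn independently as a stand-in for the study's judgment data.
ratings = {d: [rng.uniform(1, 7) for _ in range(16)] for d in dimensions}

for i in range(len(dimensions)):
    for j in range(i + 1, len(dimensions)):
        r = pearson(ratings[dimensions[i]], ratings[dimensions[j]])
        print(f"{dimensions[i]} vs {dimensions[j]}: r = {r:+.2f}")
```

When all six pairwise correlations are near zero, collapsing the four dimensions into a two-dimensional plane necessarily discards independent variance, which is one way of stating the paper's objection to the circumplex model.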
Kant is the philosophical tradition's arch-anti-consequentialist – if anyone insists that intentions alone make an action what it is, it is Kant. This chapter takes up Kant's account of the relation between intention and action, aiming both to lay it out and to understand why it might appeal. The chapter first maps out the motivational architecture that Kant attributes to us. We have wills that are organized to action by two parallel and sometimes competing motivational systems. One determines us by way of motives that are sensuous, natural and given from without, the other by motives that are intellectual, rational, and given from within. Each set of motives belongs to a system of laws – natural motives to the laws of nature, rational motives to the laws of freedom. For Kant, all things, including actions, are what they are in virtue of the laws governing them; actions, qua actions, are always governed by laws that govern individual wills. These laws are Kantian maxims, or 'subjective practical laws.' Maxims, for Kant, thus make actions the actions they are. The chapter then maps out the implications of this motivational architecture for Kant's theory of value. Maxims always advert to or 'contain' both ends and means. Ends are always specifications of one of two ultimate ends. Actions have the moral value they have depending on which of two ultimate ends the maxim adverts to. The possibilities are 'happiness,' or gratification of desires with sensuous origins, and 'duty,' or accord with the moral demand to will in ways that respect free rational agency wherever it is found. Only actions aimed at the latter – actions with rational motives – have moral value. Actions aimed at the former – actions with natural motives – though not immoral in themselves, become so when their pursuit works against rational motives.
For Kant, actions aimed at happiness are ultimately allied with efforts to sustain our 'animal' existence, and so are governed by terms and conditions given by the natural world. Actions aimed at duty, in contrast, are ultimately allied with efforts to impose a rational form on nature, to make it over, so to speak, according to values not given by nature itself. Actions aimed at duty, therefore, create a specifically moral world, one in which mores and norms, formal and informal arrangements, institutions, policies, and so on, realize, harmonize, and promote free rational agency itself. Finally, the chapter addresses motivations for Kant's view. The architecture of will and the theories of action and value he proposes allow Kant to accommodate a host of intuitions and commitments. His view makes room for metaphysically free agency, and for the lived experience of motivational freedom from ever-changing natural desires. It makes room for conflicts within the will while still holding out hope that resolution is possible. It accommodates views that the best human lives engage 'higher' faculties in sustained ways. It identifies a stable, necessary, universal end amidst the evident contingency, pluralism, and instability of most ends. It makes us, and not God or nature, the authors of our moral lives. In the end, Kant's 'anti-consequentialism,' his focus on intentions, is a way of insisting on actions that take their character and value from what should matter most to us, namely individual and collective free rational agency, rather than only and always taking the character of reactive responses to circumstance.
In this chapter I provide resources for assessing the charge that post-secondary students are self-censoring. The argument is advanced in three broad steps. First, I argue that both a duality at the heart of the concept of self-censorship and the term’s negative lay connotation should incline us to limit the charge of self-censorship to a specific subset of its typical extension. I argue that in general we ought to use the neutral term “refrainment from speech,” reserving the more normatively charged “self-censorship” for cases of bad refrainment. In the second step of the argument, I seek to narrow down what counts as bad refrainment by mapping broad categories of possible reasons for and consequences of refrainment from speech. I argue that in general refrainment from speech is only bad if it is for bad (or what I will later term vicious) reasons or has pernicious consequences. When considering pernicious consequences, I argue that we should be concerned in particular about systems that perpetuate the coercive silencing of marginalized voices. I draw on Kristie Dotson’s work to describe two means by which marginalized voices are systemically silenced: testimonial quieting and testimonial smothering. After considering these types of silencing, I circle back to the post-secondary context to assess whether there is cause for concern if, as some reports suggest, US college students are refraining from speech within the educational context.
A huge amount of data is being generated every day through different transactions in industries, social networking, communication systems, etc. Big data is a term that represents vast volumes of high-speed, complex and variable data that require advanced procedures and technologies to enable the capture, storage, management, and analysis of the data. Big data analysis is the capacity to derive useful information from these large datasets. Due to characteristics like volume, veracity, and velocity, big data analysis is becoming one of the most challenging research problems. Semantic analysis is a method to better understand the implied or practical meaning of the input dataset. It is mostly applied with ontologies to analyze content, mainly in web resources. This field of research combines text analysis and Semantic Web technologies. Semantic knowledge is used to aid sentiment analysis of queries like emotion mining, popularity analysis, recommendation systems, user profiling, etc. A new method has been proposed to extract semantic relationships between different data attributes of big data, which can be applied to a decision system.
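The abstract does not specify the proposed extraction method, but the general idea of finding relationships between data attributes can be illustrated with a minimal co-occurrence count: attribute-value pairs that frequently appear together across records are candidate semantic relationships. The function name and the toy transaction data below are hypothetical, not taken from the paper.

```python
from itertools import combinations
from collections import Counter

def attribute_cooccurrence(records):
    """Count how often pairs of attribute values appear together across
    records; frequent pairs hint at a candidate semantic relationship."""
    counts = Counter()
    for record in records:
        items = sorted(record.items())  # stable ordering so pairs are canonical
        for pair in combinations(items, 2):
            counts[pair] += 1
    return counts

# Toy transaction data (illustrative only).
records = [
    {"product": "phone", "region": "EU"},
    {"product": "phone", "region": "EU"},
    {"product": "laptop", "region": "US"},
]
pairs = attribute_cooccurrence(records)
```

In this sketch the pair (product=phone, region=EU) is counted twice, suggesting a stronger association than the single laptop/US record; a real system would normalise these counts into an association measure before feeding a decision system.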
A consumer health information system must be able to comprehend both expert and non-expert medical vocabulary and to map between the two. We describe an ongoing project to create a new lexical database called Medical WordNet (MWN), consisting of medically relevant terms used by and intelligible to non-expert subjects and supplemented by a corpus of natural-language sentences that is designed to provide medically validated contexts for MWN terms. The corpus derives primarily from online health information sources targeted to consumers, and involves two sub-corpora, called Medical FactNet (MFN) and Medical BeliefNet (MBN), respectively. The former consists of statements accredited as true on the basis of a rigorous process of validation, the latter of statements which non-experts believe to be true. We summarize the MWN / MFN / MBN project, and describe some of its applications.
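The core mapping task the abstract describes, translating between consumer and expert vocabulary, can be sketched as a bidirectional synonym table. The term pairs below are common lay/expert equivalents chosen for illustration; they are not drawn from the actual MWN data, and the function name is hypothetical.

```python
# Hypothetical lay-to-expert synonym pairs (illustrative, not MWN data).
LAY_TO_EXPERT = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
    "stroke": "cerebrovascular accident",
}
# Invert the table so expert input can be mapped back to lay terms.
EXPERT_TO_LAY = {expert: lay for lay, expert in LAY_TO_EXPERT.items()}

def normalise(term):
    """Map a consumer term to its expert synonym, or return it unchanged
    when no mapping is known (so unknown vocabulary passes through)."""
    t = term.lower().strip()
    return LAY_TO_EXPERT.get(t, t)
```

A real lexical database would of course handle inflection, multi-word variants, and ambiguity, but the pass-through default shown here matters even in a sketch: a system that silently drops unmapped consumer terms would lose exactly the non-expert vocabulary MWN is meant to capture.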
The goal of the OBO (Open Biomedical Ontologies) Foundry initiative is to create and maintain an evolving collection of non-overlapping interoperable ontologies that will offer unambiguous representations of the types of entities in biological and biomedical reality. These ontologies are designed to serve non-redundant annotation of data and scientific text. To achieve these ends, the Foundry imposes strict requirements upon the ontologies eligible for inclusion. While these requirements are not met by most existing biomedical terminologies, the latter may nonetheless support the Foundry’s goal of consistent and non-redundant annotation if appropriate mappings of data annotated with their aid can be achieved. To construct such mappings in reliable fashion, however, it is necessary to analyze terminological resources from an ontologically realistic perspective in such a way as to identify the exact import of the ‘concepts’ and associated terms which they contain. We propose a framework for such analysis that is designed to maximize the degree to which legacy terminologies and the data coded with their aid can be successfully used for information-driven clinical and translational research.
We advance the understanding of the philosophy and psychology of curiosity by operationalizing and constructing an empirical measure of Nietzsche’s conception of inquisitive curiosity, expressed by the German terms Wissbegier (“thirst for knowledge” or “need/impetus to know”) and Neugier (“curiosity” or “inquisitiveness”). First, we show that existing empirical measures of curiosity do not tap the construct of inquisitive curiosity, though they may tap related constructs such as idle curiosity and phenomenological curiosity. Next, we map the concept of inquisitive curiosity and connect it to related concepts, such as open-mindedness and intellectual humility. The bulk of the paper reports four studies: an Anglophone exploratory factor analysis, an Anglophone confirmatory factor analysis, an informant study, and a Germanophone exploratory and confirmatory factor analysis.
Mirror neuron research has come a long way since the early 1990s, and many theorists are now stressing the heterogeneity and complexity of the sensorimotor properties of fronto-parietal circuits. However, core aspects of the initial ‘mirror mechanism’ theory, i.e. the idea of a symmetric encapsulated mirroring function translating sensory action perceptions into motor formats, still appear to be shaping much of the debate. This article challenges the empirical plausibility of the sensorimotor segregation implicit in the original mirror metaphor. It is proposed instead that the teleological organization found in the broader fronto-parietal circuits might be inherently sensorimotor. Thus the idea of an independent ‘purely perceptual’ goal understanding process is questioned. Further, it is hypothesized that the often asymmetric, heterogeneous and contextually modulated mirror and canonical neurons support a function of multisensory mapping and tracking of the perceiving agent’s affordance space. Such a shift in the interpretative framework offers a different theoretical handle on how sensorimotor processes might ground various aspects of intentional action choice and social cognition. Under the proposed “social affordance model”, mirror neurons would be seen as dynamic parts of larger circuits, which support tracking of currently shared and competing action possibilities. These circuits support action selection processes—but also our understanding of the options and action potentials that we and perhaps others have in the affordance space. In terms of social cognition, ‘mirror’ circuits might thus help us understand not only the intentional actions others are actually performing—but also what they could have done, did not do and might do shortly.
Dieter Henrich’s “Notion of a Deduction” (1989) opened up approaches to both Deductions in terms of legal as opposed to syllogistic reasoning. Since the CpR is shot through with juridical metaphors and analogies, many points of connection suggest themselves. In this paper, I extend and modify Henrich’s approach, in order to extract a particular logic of evidence. I argue that the three syntheses of the A-Deduction correspond to parts of a deductive procedure, and that their names have been chosen to indicate this connection to the reader. Nonetheless, the principal aim of the paper is not to develop and defend these historiographical claims, but to explicate the structure of the logic of evidence in question and link it to Kant’s intended refutation of Hume. Since the procedures Kant describes are part of the law of evidence of many nations and are equally well at work in contemporary information theory, a precise reconstruction can map directly onto contemporary problems in philosophy, physics, and informatics, without any loss of historical accuracy.
We designed a new protocol requiring French adult participants to group a large number of Munsell colour chips into three or four groups. On one, relativist, view, participants would be expected to rely on their colour lexicon in such a task. In this framework, the resulting groups should be more similar to French colour categories than to the categories of other languages. On another, universalist, view, participants would be expected to rely on universal features of perception. In this second framework, the resulting groups should match the colour categories of languages with three or four basic colour terms. In this work, we first collected data to build an accurate map of French colour term categories. We then tested how native French speakers spontaneously sorted a set of randomly presented coloured chips and, in line with the relativist prediction, we found that the resulting colour groups were more similar to French colour categories than to those of languages with three or four basic colour terms. However, the same results were obtained in a verbal interference condition, suggesting that participants rely on language-specific, and nevertheless perceptual, colour categories. Collectively, these results suggest that the universalist/relativist dichotomy is too narrow.