The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic counterfactual definition of AI as an umbrella term for a range of techniques that can be used to make machines complete tasks in a way that would be considered intelligent were they to be completed by a human. Automation of this nature could offer great opportunities for the improvement of healthcare services and ultimately patients’ health by significantly improving human clinical capabilities in diagnosis, drug discovery, epidemiology, personalised medicine, and operational efficiency. However, if these AI solutions are to be embedded in clinical practice, then at least three issues need to be considered: the technical possibilities and limitations; the ethical, regulatory and legal framework; and the governance framework. In this article, we report on the results of a systematic analysis designed to provide a clear overview of the second of these elements: the ethical, regulatory and legal framework. We find that ethical issues arise at six levels of abstraction (individual, interpersonal, group, institutional, sectoral, and societal) and can be categorised as epistemic, normative, or overarching. We conclude by stressing how important it is that the ethical challenges raised by implementing AI in healthcare settings are tackled proactively rather than reactively and map the key considerations for policymakers to each of the ethical concerns highlighted.
In the tripartite psychology of the Republic, Plato characterizes the “spirited” part of the soul as the “ally of reason”: like the auxiliaries of the just city, whose distinctive job is to support the policies and judgments passed down by the rulers, spirit’s distinctive “job” in the soul is to support and defend the practical decisions and commands of the reasoning part. This includes not only defense against external enemies who might interfere with those commands, but also, and most importantly, defense against unruly appetites within the individual’s own soul. Spirit, according to this picture, is by nature reason’s faithful auxiliary in the soul, while appetite is always a potential enemy to be watched.
Since 2016, there has been an explosion of academic work and journalism that fixes its subject matter using the terms ‘fake news’ and ‘post-truth’. In this paper, I argue that this terminology is not up to scratch, and that academics and journalists ought to stop using the terms ‘fake news’ and ‘post-truth’ entirely. I set out three arguments for abandonment. First, ‘fake news’ and ‘post-truth’ do not have stable public meanings, entailing that they are either nonsense, context-sensitive, or contested. Second, these terms are unnecessary, because we already have a rich vocabulary for thinking about epistemic dysfunction. Third, I observe that ‘fake news’ and ‘post-truth’ have propagandistic uses, meaning that using them legitimates anti-democratic propaganda and runs the risk of smuggling bad ideology into conversations.
It has been widely believed since the nineteenth century that modern science provides a serious challenge to religion, but there has been less agreement as to the reason. One main complication is that whenever there has been broad consensus for a scientific theory that challenges traditional religious doctrines, one finds religious believers endorsing the theory or even formulating it. As a result, atheists who argue for the incompatibility of science and religion often go beyond the religious implications of individual scientific theories, arguing that the sciences taken together provide a comprehensive challenge to religious belief. Scientific theories, on this view, can be integrated to form a general vision of humans and our place in nature, one that excludes the existence of supernatural phenomena to which many religious traditions refer. The most common name given to this general vision is the scientific worldview. The purpose of my paper is to argue that the relation of a worldview to science is more complex and ambiguous than this position allows, drawing upon recent work in the history and philosophy of science. While there are other ways to complicate the picture, this paper will focus on differing views that scientists and philosophers have on the proper scope and limits of scientific inquiry. I will identify two different types of science—Baconian and Cartesian—that have different ambitions with respect to scientific theories, and thus different answers about the possibility of a scientific worldview. The paper will conclude by showing how their differing intuitions about scientific inquiry are evident in contemporary debates about reductionism, drawing upon the work of two physicists, Steven Weinberg and John Polkinghorne. History is more complex than this simple schema allows, of course, but these types provide a useful first approximation to the ambiguities of modern science.
Normally this is not how we think material objects work. I, for example, am a material object that is located in multiple places: this place to my left where my left arm is, and this, distinct, place to my right, where my right arm is. But I am only partially located in each place. My left arm is a part of me that fills exactly the place to my left, and my right arm is a distinct part of me that fills exactly the place to my right. I am located in multiple places by virtue of having distinct parts in those places. So entension is not happening to me — I do not entend.
Paper begins: I have two gloves, a left glove and a right glove. I can fit the left glove onto my left hand, but not the right glove. Why? Because the right glove is the wrong shape to go on my left hand. So the two gloves are different shapes….
The problem of imperative consequence consists in the fact that theses (i) through (iii) are inconsistent, and yet all three are attractive (for the reasons sketched above). A solution to the problem consists in the denial of one of the three theses; I describe solutions as belonging to type 1, type 2, or type 3, depending on which thesis they deny. For the purposes of this paper, I would like to focus on a certain variety of type 3 solution – a solution that offers a revised criterion of validity of a particular kind.
It’s now commonplace — since Korsgaard (1996) — in ethical theory to distinguish between two distinctions: on the one hand, the distinction between the value an object has in virtue of its intrinsic properties vs. the value it has in virtue of all its properties, intrinsic or extrinsic; and on the other hand, the distinction between the value an object has as an end vs. the value it has as a means to something else. I’ll call the former the distinction between intrinsic and nonintrinsic value; the latter is the distinction between value as-an-end and instrumental value.
As Gibson (1982) correctly points out, despite Quine’s brief flirtation with a “mitigated phenomenalism” (Gibson’s phrase) in the late 1940s and early 1950s, Quine’s ontology of 1953 (“On Mental Entities”) and beyond left no room for non-physical sensory objects or qualities. Anyone familiar with the contemporary neo-dualist qualia-freak-fest might wonder why Quinean lessons were insufficiently transmitted to the current generation.
Clark acknowledges but resists the indirect mind–world relation inherent in prediction error minimization (PEM). But directness should also be resisted. This creates a puzzle, which calls for reconceptualization of the relation. We suggest that a causal conception captures both aspects. With this conception, aspects of situated cognition, social interaction and culture can be understood as emerging through precision optimization.
Science frequently gives us multiple, compatible ways of solving the same problem or formulating the same theory. These compatible formulations change our understanding of the world, despite providing the same explanations. According to what I call "conceptualism," reformulations change our understanding by clarifying the epistemic structure of theories. I illustrate conceptualism by analyzing a typical example of symmetry-based reformulation in chemical physics. This case study poses a problem for "explanationism," the rival thesis that differences in understanding require ontic explanatory differences. To defend conceptualism, I consider how prominent accounts of explanation might accommodate this case study. I argue that either they do not succeed, or they generate a skeptical challenge.
Reformulating a scientific theory often leads to a significantly different way of understanding the world. Nevertheless, accounts of both theoretical equivalence and scientific understanding have neglected this important aspect of scientific theorizing. This essay provides a positive account of how reformulating theories changes our understanding. My account simultaneously addresses a serious challenge facing existing accounts of scientific understanding. These accounts have failed to characterize understanding in a way that goes beyond the epistemology of scientific explanation. By focusing on cases where we have differences in understanding without differences in explanation, I show that understanding cannot be reduced to explanation.
This article gives a brief history of chance in the Christian tradition, from casting lots in the Hebrew Bible to the discovery of laws of chance in the modern period. I first discuss the deep-seated skepticism towards chance in Christian thought, as shown in the work of Augustine, Aquinas, and Calvin. The article then describes the revolution in our understanding of chance—when contemporary concepts such as probability and risk emerged—that occurred a century after Calvin. The modern ability to quantify chance has transformed ideas about the universe and human nature, separating Christians today from their predecessors, but has received little attention from Christian historians and theologians.
As part of Timothy Williamson’s inquiry into how we gain knowledge from thought experiments, he submits various ways of representing the argument underlying Gettier cases in modal and counterfactual terms. But all of these ways run afoul of the problem of deviance - that there are cases that might satisfy the descriptions given by a Gettier text but still fail to be counterexamples to the justified true belief model of knowledge. Problematically, this might mean either that it is too hard to know the truth of the premises of the arguments Williamson presents or that the relevant premises might be false. I argue that the Gettier-style arguments can make do with weaker premises (and a slightly weaker conclusion) that suffice to show that “necessarily, if one justifiably believes some true proposition p, then one knows p” is not true. The modified version of the argument is preferable because it is not troubled by the existence of deviant Gettier cases.
Some theorists approach the Gordian knot of consciousness by proclaiming its inherent tangle and mystery. Others draw out the sword of reduction and cut the knot to pieces. Philosopher Thomas Metzinger, in his important new book, Being No One: The Self-Model Theory of Subjectivity, instead attempts to disentangle the knot one careful strand at a time. The result is an extensive and complex work containing almost 700 pages of philosophical analysis, phenomenological reflection, and scientific data. The text offers a sweeping and comprehensive tour through the entire landscape of consciousness studies, and it lays out Metzinger's rich and stimulating theory of the subjective mind. Metzinger's skilled integration of philosophy and neuroscience provides a valuable framework for interdisciplinary research on consciousness.
What happens when a psychologist who’s spent the last 30 years developing a method of introspective sampling and a philosopher whose central research project is casting skeptical doubt on the accuracy of introspection write a book together? The result, Hurlburt & Schwitzgebel’s thought-provoking Describing Inner Experience?, is both encouraging and disheartening. Encouraging, because the book is a fine example of fruitful and open-minded interdisciplinary engagement; disheartening, because it makes clear just how difficult it is to justify the accuracy of introspective methods in psychology and philosophy. And since debates in consciousness studies largely turn on fine points of introspective detail, this is no minor methodological stumbling block.
Many feminists (e.g. T. Bettcher and B. R. George) argue for a principle of first person authority (FPA) about gender, i.e. that we should (at least) not disavow people's gender self-categorisations. However, there is a feminist tradition resistant to FPA about gender, which I call “radical feminism”. Feminists in this tradition define gender-categories via biological sex, thus denying non-binary and trans self-identifications. Using a taxonomy by B. R. George, I begin to demystify the concept of gender. We are also able to use the taxonomy to model various feminist approaches. It becomes easier to see how conceptualisations of gender which allow for FPA often do not allow for understanding female subjugation as being rooted in reproductive biology. I put forward a conceptual scheme: radical FPA feminism. If we accept FPA, but also radical feminist concerns, radical FPA feminism is an attractive way of conceptualising gender.
Rational decision change can happen without information change. This is a problem for standard views of decision theory, on which linguistic intervention in rational decision-making is captured in terms of information change. The standard view gives us no way to model interventions involving expressions that have only attentional effects on conversational contexts. How are expressions with non-informational content - like epistemic modals - used to intervene in rational decision making? We show how to model rational decision change without information change: replace a standard conception of value (on which the value of a set of worlds reduces to values of individual worlds in the set) with one on which the value of a set of worlds is determined by a selection function that picks out a generic member world. We discuss some upshots of this view for theorizing in philosophy and formal semantics.
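To make the contrast concrete, here is a minimal schematic of the two conceptions of value, in illustrative notation of my own (the symbols V, P, and f are not the authors'):

\[ V(A) \;=\; \sum_{w \in A} P(w \mid A)\, V(w) \qquad \text{(standard: the value of a set reduces to the values of its member worlds)} \]
\[ V(A) \;=\; V\bigl(f(A)\bigr), \quad f(A) \in A \qquad \text{(alternative: a selection function } f \text{ picks a generic member of } A\text{)} \]

On the second conception, a shift in which member world is selected as generic can change V(A) — and hence the rational decision — without removing any worlds from A, i.e., without information change.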
This paper began life as a short section of a more general paper about non-classical mereologies. In that paper I had a mereological theory that I wanted to show could be applied to all sorts of different metaphysical positions — notably, to those positions that believe in mereological vagueness in re — in “vague individuals”. To do that I felt I first had to dispatch the leading rival theory of vague individuals, which is due to Peter van Inwagen, and holds that the part-whole relation admits of degrees. It seemed to me that this theory had a serious technical problem, or at best a serious gap. I sat down to write a paragraph or two highlighting the gap, preferably showing that it couldn’t be filled. This paper is the result.
Truthmaker theorists typically claim not only that all truths have truthmakers (Truthmaker Maximalism), but also that there is some enlightening metaphysical theory to be given of the nature of those truthmakers (e.g. that they are Armstrongian states of affairs, or tropes, or concrete individuals). Call this latter thesis the "Material Theory Thesis" (it is the thesis that there is some true material theory of truthmakers). I argue that the Material Theory Thesis is inconsistent with Truthmaker Maximalism.
An imperative conditional is a conditional in the imperative mood (by analogy with “indicative conditional”, “subjunctive conditional”). What, in general, are the meaning and illocutionary effect of an imperative conditional? I survey four answers: the answer that imperative conditionals are commands to the effect that an indicative conditional be true; two versions of the answer that imperative conditionals express irreducibly conditional commands; and finally, the answer that imperative conditionals express a kind of hybrid speech act between command and assertion.
Analytic philosophers usually think about modality in terms of possible worlds. According to the possible worlds framework, a proposition is necessary if it is true according to all possible worlds; it is possible if it is true according to some possible world. There are as many possible worlds as there are ways the actual world might be. Only one world is actual.
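The two clauses just stated have a standard textbook formalization, added here for reference:

\[ \Box p \ \text{is true} \iff p \ \text{is true according to every possible world}, \]
\[ \Diamond p \ \text{is true} \iff p \ \text{is true according to some possible world}, \]

from which necessity and possibility come out as duals: \( \Diamond p \equiv \neg \Box \neg p \).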
This logic has a standard one-dimensional possible worlds semantics with an accessibility relation (I will call this, for short, the accessibility semantics for KDDc4, contrasting with the prescription semantics given in “Command and consequence”). In the accessibility semantics, a sentence is evaluated at a single world (rather than at a pair of worlds).
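For reference, the relational truth clauses such an accessibility semantics standardly uses, with the usual frame conditions for the axioms the name ‘KDDc4’ suggests (reading Dc as the converse of D is an assumption about the intended naming):

\[ w \Vdash \Box p \iff \forall v\,(wRv \Rightarrow v \Vdash p), \qquad w \Vdash \Diamond p \iff \exists v\,(wRv \wedge v \Vdash p). \]
% D:  \Box p \to \Diamond p   — valid iff R is serial (every world has an R-successor)
% Dc: \Diamond p \to \Box p   — valid iff R is partial-functional (at most one R-successor)
% 4:  \Box p \to \Box\Box p   — valid iff R is transitive

If that reading is right, a KDDc4-frame gives each world exactly one successor, so \( \Box p \) and \( \Diamond p \) coincide at every point of evaluation.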
“The truth,” Quine says, “is that you can bathe in the same river twice, but not in the same river stage. You can bathe in two river stages which are stages of the same river, and this is what constitutes bathing in the same river twice. A river is a process through time, and the river stages are its momentary parts.” (Quine 1953, p. 65) Quine’s view is four-dimensionalism, and that is what Theodore Sider’s book is about. In Sider’s usage, four-dimensionalism is the view that, necessarily, anything in space and time has a distinct temporal part, or stage, corresponding to each time at which it exists (p. 59).
Montague and Kaplan began a revolution in semantics, which promised to explain how a univocal expression could make distinct truth-conditional contributions in its various occurrences. The idea was to treat context as a parameter at which a sentence is semantically evaluated. But the revolution has stalled. One salient problem comes from recurring demonstratives: "He is tall and he is not tall". For the sentence to be true at a context, each occurrence of the demonstrative must make a different truth-conditional contribution. But this difference cannot be accounted for by standard parameter sensitivity. Semanticists, consoled by the thought that this ambiguity would ultimately be needed anyhow to explain anaphora, have been too content to posit massive ambiguities in demonstrative pronouns. This article aims to revive the parameter revolution by showing how to treat demonstrative pronouns as univocal while providing an account of anaphora that doesn't end up re-introducing the ambiguity.
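Schematically (standard compositional notation; this rendering is mine, not the article's): if the context parameter c alone fixes the value of each occurrence of ‘he’, then both occurrences denote the same individual, and the sentence is unsatisfiable:

\[ \llbracket \text{he is tall and he is not tall} \rrbracket^{c,w} = 1 \iff \mathrm{tall}_w\bigl(\llbracket \text{he} \rrbracket^{c}\bigr) \wedge \neg\,\mathrm{tall}_w\bigl(\llbracket \text{he} \rrbracket^{c}\bigr), \]

whence the pressure either to posit ambiguity in the pronoun or to enrich what a context supplies.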
The book is well structured to support practical skills development in understanding DSGE modelling, with exercises that build user knowledge of macroeconomic applications relevant to policy decisions through the use of scientific programs like DYNARE/IRIS, appropriate for use with MatLab/Octave. The author also provides useful references for the more inquisitive reader or practitioner wishing to pursue further knowledge in the macroeconomic management of a state (Jackson, 2016). On the basis of the relevance of its contents to the theoretical application of macroeconomic policy and the management of an economy, I strongly recommend this book to anyone preparing for graduate courses in Economics and related areas like Econometrics and Economic Policy Management, and also to the practitioner-researcher engaged in macroeconomic model construction and policy formulation.
This paper begins with a response to Josh Gert’s challenge that ‘on a par with’ is not a sui generis fourth value relation beyond ‘better than’, ‘worse than’, and ‘equally good’. It then explores two further questions: can parity be modeled by an interval representation of value? And what should one rationally do when faced with items on a par? I argue that an interval representation of value is incompatible with the possibility that items are on a par (a mathematical proof is given in the appendix). I also suggest that there are three senses of ‘rationally permissible’ which, once distinguished, show that parity does distinctive practical work that cannot be done by the usual trichotomy of relations or by incomparability. In this way, we have an additional argument for parity from the workings of practical reason.
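For orientation, here is one common way to spell out an interval representation (the notation is mine, a sketch rather than the paper's own definitions): assign each item x an interval \([l(x), u(x)]\) of real values, and read the comparative relations off interval position:

\[ x \succ y \iff l(x) > u(y), \qquad x \prec y \iff u(x) < l(y), \]
\[ \text{otherwise (overlapping intervals): } x \text{ and } y \text{ are neither better nor worse than one another.} \]

Since overlap is the only further verdict such a representation can deliver, it leaves no obvious room for a fourth relation distinct from equality and incomparability, which is where the incompatibility with parity gets its grip.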
In this paper, I argue against the claim recently defended by Josh Weisberg that a certain version of the self-representational approach to phenomenal consciousness cannot avoid a set of problems that have plagued higher-order approaches. These problems arise specifically for theories that allow for higher-order misrepresentation or—in the domain of self-representational theories—self-misrepresentation. In response to Weisberg, I articulate a self-representational theory of phenomenal consciousness according to which it is contingently impossible for self-representations tokened in the context of a conscious mental state to misrepresent their objects. This contingent infallibility allows the theory to both acknowledge the (logical) possibility of self-misrepresentation and avoid the problems of self-misrepresentation. Expanding further on Weisberg’s work, I consider and reveal the shortcomings of three other self-representational models—put forward by Kriegel, Van Gulick, and Gennaro—in order to show that each indicates the need for this sort of infallibility. I then argue that contingent infallibility is in principle acceptable on naturalistic grounds only if we attribute (1) a neo-Fregean kind of directly referring, indexical content to self-representational mental states and (2) a certain ontological structure to the complex conscious mental states of which these indexical self-representations are a part. In these sections I draw on ideas from the work of Perry and Kaplan to articulate the context-dependent semantic structure of inner-representational states.
This is my reply to Josh Weisberg, Robert Van Gulick, and William Seager, published in JCS vol 20, 2013. This symposium grew out of an author-meets-critics session at the Central APA conference in 2013 on my 2012 book THE CONSCIOUSNESS PARADOX (MIT Press). Topics covered include higher-order thought (HOT) theory, my own "wide intrinsicality view," the problem of misrepresentation, targetless HOTs, conceptualism, introspection, and the transitivity principle.
Marcus William Hunt argues that when co-parents disagree over whether to raise their child (or children) as a vegan, they should reach a compromise as a gift given by one parent to the other out of respect for his or her authority. Josh Milburn contends that Hunt’s proposal of parental compromise over veganism is unacceptable on the ground that it overlooks respect for animal rights, which bars compromising. However, he contemplates the possibility of parental compromise over ‘unusual eating’ of animal-based foods obtained without the violation of animal rights. I argue for zero parental compromise, rejecting a rights-oriented approach, and propose a policy that an ethical vegan parent and a non-vegan co-parent should follow to determine how to raise their children.
This is a review of *Necessary Existence* (by Alex Pruss and Josh Rasmussen). The review outlines a response to the main line of argument that is developed in the book.
Trends in experimental philosophy have provided new and compelling results that are cause for re-evaluations in contemporary discussions of free will. In this paper, I argue for one such re-evaluation by criticizing Robert Kane’s well-known views on free will. I argue that Kane’s claims about pre-theoretical intuitions are not supported by empirical findings, on two counts. First, it is unclear that either incompatibilism or compatibilism is more intuitive to nonphilosophers, as different ways of asking about free will and responsibility reveal different answers. Second, I discuss how a study by Josh May supporting a cluster concept of free will may provide ethicists with a reason to give up a definitional model, and I discuss a direction future work might take. Both of these objections come from a larger project concerned with understanding the cognitive mechanisms that people employ when they make judgments about agency and responsibility—a project that promises not only to challenge contemporary philosophy, but to inform it.
The aim of this paper is to argue that Moore’s paradox stands for Essential Indexicality, since it occurs only when self-reference appears, and thus to contend that, in the case of Moore’s paradox, it is not possible to construct the kind of Frege counterpart that Herman Cappelen and Josh Dever assert as a counterexample to John Perry’s Essential Indexical. Moore’s paradox is widely regarded as a typical example of the peculiarity and irremovability of the first-person, but curiously, Cappelen and Dever did not address Moore’s paradox in their discussions that deny the philosophical significance of the first-person. With this in mind, I would like to show in this paper that Moore’s paradox is a counterexample to their argument.
Informally speaking, a truthmaker is something in the world in virtue of which the sentences of a language can be made true. This fundamental philosophical notion plays a central role in applied ontology. In particular, a recent nonorthodox formulation of this notion proposed by the philosopher Josh Parsons, which we label weak truthmaking, has been shown to be extremely useful in addressing a number of classical problems in the area of Conceptual Modeling. In this paper, after revisiting the classical notion of truthmaking, we conduct an in-depth analysis of Parsons’ account of weak truthmaking. By doing so, we expose some difficulties in his original formulation. As the main contribution of this paper, we propose solutions to address these issues, which are then integrated in a new precise interpretation of truthmaking that is harmonizable with the classical notion.
In a widely read essay, “For the Law, Neuroscience Changes Nothing and Everything,” Joshua Greene and Jonathan Cohen argue that the advance of neuroscience will result in the widespread rejection of free will, and with it – of retributivism. They go on to propose that consequentialist reforms are in order, and they predict such reforms will take place. We agree that retributivism should be rejected, and we too are optimistic that rejected it will be. But we don’t think that such a development will have much to do with neuroscience – it won’t, because neuroscience is unlikely to show that we have no free will. We have two main aims in this paper. The first is to rebut various aspects of the case against free will. The second is to examine the case for consequentialist reforms. We take Greene and Cohen’s essay as a hobbyhorse, but our criticisms are applicable to neurodeterministic anti-free-willism in general. We first suggest that Greene and Cohen take proponents of free will to be committed to an untenable homuncular account of agency. But proponents of free will can dispense with such a commitment. In fact, we argue, it is Greene and Cohen who work with an overly simple account of free will. We sketch a more nuanced conception. We then turn to the proposal for consequentialist reforms. We argue that retributivism will fall out of favor not as a consequence of neuroscience-driven rejection of free will, but rather as a result of a familiar feature of moral progress – the expanding circle of concern. In short, retributivism can and must die, but neuroscience will not kill it – humanity will.