Odd and memorable examples are a distinctive feature of Charles Travis's work: cases involving squash balls, soot-covered kettles, walls that emit poison gas, faces turning puce, ties made of freshly cooked linguine, and people grunting when punched in the solar plexus all figure in his arguments. One of Travis's examples, involving a pair of situations in which the leaves of a Japanese maple tree are painted green, has even spawned its own literature consisting of attempts to explain the context-sensitivity of color adjectives (e.g., "green"). For Travis, these examples play a central role in his arguments for occasion-sensitivity, which he takes to be a pervasive feature of how we understand natural language. But how, exactly, do these examples work? My aims in this paper are to put Travis's examples under the microscope, using recent experimental studies of Travis-style cases to raise worries about aspects of the way Travis's cases are informally presented, and then to show how his examples can be redesigned to respond to these doubts.
According to Charles Travis, our language is occasion-sensitive. The truth-conditions of all our sentences, and their correctness-conditions more generally, vary depending on the occasions on which they are used. This is part of a broader view of language as unshadowed. This paper develops objections Travis has made from this viewpoint against Michael Dummett's anti-realism. It aims to show that the arguments are suggestive but inconclusive. For all it shows, unshadowed anti-realism remains a possibility.
I argue, in this thesis, that proper name reference is a wholly pragmatic phenomenon. The reference of a proper name is neither constitutive of, nor determined by, the semantic content of that name, but is determined, on an occasion of use, by pragmatic factors. The majority of views in the literature on proper name reference claim that reference is in some way determined by the semantics of the name, either because their reference simply constitutes their semantics (which generally requires a very fine-grained individuation of names), or because names have an indexical-like semantics that returns a referent given certain specific contextual parameters. I discuss and criticize these views in detail, arguing, essentially, in both cases, that there can be no determinate criteria for reference determination—a claim required by both types of semantic view. I also consider a less common view on proper name reference: that it is determined wholly by speakers' intentions. I argue that the most plausible version of this view—a strong neo-Gricean position whereby all utterance content is determined by the communicative intentions of the speaker—is implausible in light of psychological data. In the positive part of my thesis, I develop a pragmatic view of proper name reference that is influenced primarily by the work of Charles Travis. I argue that the reference of proper names can only be satisfactorily accounted for by claiming that reference occurs not at the level of word meaning, but at the pragmatic level, on an occasion of utterance. I claim that the contextual mechanisms that determine the reference of a name on an occasion are the same kinds of thing that determine the truth-values of utterances according to Travis. Thus, names are, effectively, occasion sensitive in the way that Travis claims predicates and sentences (amongst other expressions) are.
Finally, I discuss how further research might address the ways my pragmatic view of reference bears on traditional issues in the literature on names, and the consequences of the view for the semantics of names.
Charles Travis has been forcefully arguing, for more than two decades now, that meaning does not determine truth-conditions. To this end, he has devised ingenious examples whereby different utterances of the same prima facie non-ambiguous and non-indexical expression type have different truth-conditions depending on the occasion on which they are delivered. However, Travis does not argue that meaning varies with circumstances; only that truth-conditions do. He assumes that meaning is a stable feature of both words and sentences. After surveying some of the explanations that semanticists and pragmaticians have produced in order to account for Travis cases, I propose a view which differs substantially from all of them. I argue that the variability in the truth-conditions that an utterance type can have is due to meaning facts alone. To support my argument, I suggest that we think about the meanings of words (in particular, the meanings of nouns) as rich conceptual structures; so rich that the way in which a property concept applies to an object concept is not determined.
It's sometimes thought that context-invariant linguistic meaning must be a character (a function from context types to contents), i.e., that linguistic meaning must determine how the content of an expression is fixed in context. This is thought because, if context-invariant linguistic meaning were not a character, then communication would not be possible. In this paper, I explain how communication could proceed even if context-invariant linguistic meaning were not a character.
It is increasingly common for philosophers to rely on the notion of an idealised listener when explaining how the semantic values of context-sensitive expressions are determined. Some have identified the semantic values of such expressions, as used on particular occasions, with whatever an appropriately idealised listener would take them to be. Others have argued that, for something to count as the semantic value, an appropriately idealised listener should be able to recover it. Our aim here is to explore the range of ways that such idealisation might be worked out, and then to argue that none of these results in a very plausible theory. We conclude by reflecting on what this negative result reveals about the nature of meaning and responsibility.
The evaluation of labour markets and of particular jobs ought to be sensitive to a plurality of benefits and burdens of work. We use the term 'the goods of work' to refer to those benefits of work that cannot be obtained in exchange for money and that can be enjoyed mostly or exclusively in the context of work. Drawing on empirical research and various philosophical traditions of thinking about work, we identify four goods of work: 1) attaining various types of excellence; 2) making a social contribution; 3) experiencing community; and 4) gaining social recognition. Our account of the goods of work can be read as unpacking the ways in which work can be meaningful. The distribution of the goods of work is a concern of justice for two conjoint reasons: First, they are part of the conception of the good of a large number of individuals. Second, in societies without an unconditional income and in which most people are not independently wealthy, paid work is non-optional and workers have few, if any, occasions to realize these goods outside their job. Taking into account the plurality of the goods of work and their importance for justice challenges the theoretical and political status quo, which focuses mostly on justice with regard to the distribution of income. We defend this account against the libertarian challenge that a free labour market gives individuals sufficient options to realise the goods of work important to them, and discuss the challenge from state neutrality. In the conclusion, we hint at possible implications for today's labour markets.
The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. 'Truth-conditional pragmatics' is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically 'freeing up' the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts fall into this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression e can enter into the compositional processes that determine the meaning of more complex expressions which contain e. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework to pursue the project of truth-conditional pragmatics.
Sterelny (2003) develops an idealised natural history of folk-psychological kinds. He argues that belief-like states are natural elaborations of simpler control systems, called detection systems, which map directly from environmental cue to response. Belief-like states exhibit robust tracking (sensitivity to multiple environmental states) and response breadth (occasioning a wider range of behaviours). The development of robust tracking and response breadth depends partly on properties of the informational environment. In a transparent environment the functional relevance of states of the world is directly detectable. Outside transparent environments, selection can favour decoupled representations. Sterelny maintains that these arguments do not generalise to desire. Unlike the external environment, the internal processes of an organism, he argues, are selected for transparency. Parts of a single organism gain nothing from deceiving one another, but gain significantly from accurate signalling of their states and needs. Key conditions favouring the development of belief-like states are therefore absent in the case of desires. Here I argue that Sterelny's reasons for saying that his treatment of belief does not generalise to motivation (desires, or preferences) are insufficient. There are limits to the transparency that internal environments can achieve. Even if there were not, tracking the motivational salience of external states suggests possible gains for systematic tracking of outcome values in any system in which selection has driven the production of belief-like states.
Clinical guidelines are special types of plans realized by collective agents. We provide an ontological theory of such plans that is designed to support the construction of a framework in which guideline-based information systems can be employed in the management of workflow in health care organizations. The framework we propose allows us to represent in formal terms how clinical guidelines are realized through the actions of individuals organized into teams. We provide various levels of implementation representing different levels of conformity on the part of health care organizations. Implementations built in conformity with our framework are marked by two dimensions of flexibility that are designed to make them more likely to be accepted by health care professionals than standard guideline-based management systems. They do justice to the facts that 1) responsibilities within a health care organization are widely shared, and 2) health care professionals may on different occasions be non-compliant with guidelines for a variety of well-justified reasons. The advantage of the framework lies in its built-in flexibility, its sensitivity to clinical context, and its ability to use inference tools based on a robust ontology. One disadvantage lies in its complicated implementation.
Semantic essentialism holds that any scientific term that appears in a well-confirmed scientific theory has a fixed kernel of meaning. Semantic essentialism cannot make sense of the strategies scientists use to argue for their views. Newton's central optical expression "light ray" suggests a context-sensitive view of scientific language. On different occasions, Newton's expression could refer to different things depending on his particular argumentative goals - a visible beam, an irreducibly smallest section of propagating light, or a traveling particle of light. Essentialist views are too crude to account for the richness and subtleties present in actual episodes of scientific debate and theory-change.
During the Transfiguration, the apostles on Tabor "indeed saw the same grace of the Spirit which would later dwell in them". The light of grace "illuminated from outside those who worthily approached it, sending the illumination to the soul through the sensitive eyes; but today, because it is confounded with us and exists in us, it illuminates the soul from within". The opposition between knowledge that comes from outside - a human and purely symbolic knowledge - and "intellectual" knowledge, which comes from within, already exists, Meyendorff says, in Pseudo-Dionysius: "For it is not from without that God stirs them toward the divine. Rather he does so via the intellect and from within and he willingly enlightens them with a ray that is pure and immaterial". To the assertions of the Calabrian philosopher about a "unique knowledge", common to both Christians and Hellenes and pursuing the same goal, the hesychast theologian opposes the reality of two kinds of knowledge, with two distinct purposes and based on two different instruments of perception: "Palamas admitted the authenticity of natural knowledge; the latter, however, is opposed to revealed wisdom, which is why it does not, by itself, provide salvation". Therefore, the light of the Trinity begins to shine in the purified human intellect. Purity also depends on the return of the intellect to itself. In this way, we see how the true knowledge of God is an internal meeting or "inner retrieval" of the whole being of man. As in the Syrian mystic, on several occasions we have to distinguish between the contemplative ways of knowledge: intellection illuminated by grace and spiritual vision without any conceptual or symbolic meaning. For example, Robert Beulay shows that "the term of 'intellection', first of all, is employed by John of Dalyatha to be applied to operations caused by grace".
Modal knowledge accounts that are based on standard possible-worlds semantics face well-known problems when it comes to knowledge of necessities. Beliefs in necessities are trivially sensitive and safe and, therefore, trivially constitute knowledge according to these accounts. In this paper, I will first argue that existing solutions to this necessity problem, which accept standard possible-worlds semantics, are unsatisfactory. In order to solve the necessity problem, I will utilize an unorthodox account of counterfactuals, as proposed by Nolan, on which we also consider impossible worlds. Nolan's account of counterpossibles delivers the intuitively correct result for sensitivity, i.e., S's belief is sensitive in intuitive cases of knowledge of necessities and insensitive in intuitive cases of knowledge failure. However, we acquire the same plausible result for safety only if we reject his strangeness of impossibility condition and accept the modal closeness of impossible worlds. In this case, the necessity problem can be analogously solved for sensitivity and safety. For some, such non-moderate accounts might come at too high a cost. In this respect, sensitivity is better off than safety when it comes to knowing necessities.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper finally concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
Sensitivity has sometimes been thought to be a highly epistemologically significant property, serving as a proxy for a kind of responsiveness to the facts that ensures that the truth of our beliefs isn't just a lucky coincidence. But it's an imperfect proxy: there are various well-known cases in which sensitivity-based anti-luck conditions return the wrong verdicts. And as a result of these failures, contemporary theorists often dismiss such conditions out of hand. I show here, though, that a sensitivity-based understanding of epistemic luck can be developed that respects what was attractive about sensitivity-based approaches in the first place but that's immune to these failures.
Vogel, Sosa, and Huemer have all argued that sensitivity is incompatible with knowing that you do not believe falsely, and that the sensitivity condition must therefore be false. I show that this objection misses its mark because it fails to take account of the basis of belief. Moreover, if the objection is modified to account for the basis of belief, then it collapses into the more familiar objection that sensitivity is incompatible with closure.
Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that 'disembody' the values embedded in them. To address this, we propose a threefold modified VSD approach: 1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; 2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good; and 3) extending the VSD process to encompass the whole life cycle of an AI technology in order to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.
Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design approach to technology design, this paper extends its application to care robots by integrating the values of care, values that are specific to AI, and higher-scale values such as the United Nations Sustainable Development Goals. The ethical issues specific to care robots for the elderly are discussed at length alongside examples of specific design requirements that work to ameliorate these ethical concerns.
A number of prominent epistemologists claim that the principle of sensitivity "play[s] a starring role in the solution to some important epistemological problems". I argue that traditional sensitivity accounts fail to explain even the most basic data that are usually considered to constitute their primary motivation. To establish this result I develop Gettier and lottery cases involving necessary truths. Since beliefs in necessary truths are sensitive by default, the resulting cases give rise to a serious explanatory problem for the defenders of sensitivity accounts. It is furthermore argued that attempts to modally strengthen traditional sensitivity accounts to avoid the problem must appeal to a notion of safety—the primary competitor of sensitivity in the literature. The paper concludes that the explanatory virtues of sensitivity accounts are largely illusory. In the framework of modal epistemology, it is safety rather than sensitivity that does the heavy explanatory lifting with respect to Gettier cases, lottery examples, and other pertinent cases.
In a recent paper, Michael Pardo argues that the epistemic property that is legally relevant is the one called Safety, rather than Sensitivity. In the process, he argues against our Sensitivity-related account of statistical evidence. Here we revisit these issues, partly in order to respond to Pardo, and partly in order to make general claims about legal epistemology. We clarify our account, we show how it adequately deals with counterexamples and other worries, we raise suspicions about Safety's value here, and we revisit our general skepticism about the role that epistemological considerations should play in determining legal policy.
In a recent paper, Melchior pursues a novel argumentative strategy against the sensitivity condition. His claim is that sensitivity suffers from a 'heterogeneity problem': although some higher-order beliefs are knowable, other, very similar, higher-order beliefs are insensitive and so not knowable. Similarly, the conclusions of some bootstrapping arguments are insensitive, but others are not. In reply, I show that sensitivity does not treat different higher-order beliefs differently in the way that Melchior states, and that while genuine bootstrapping arguments have insensitive conclusions, the cases that Melchior describes as sensitive 'bootstrapping' arguments don't deserve the name, since they are a perfectly good way of getting to know their conclusions. In sum, sensitivity doesn't have a heterogeneity problem.
Two studies demonstrate that a dispositional proneness to disgust ("disgust sensitivity") is associated with intuitive disapproval of gay people. Study 1 was based on previous research showing that people are more likely to describe a behavior as intentional when they see it as morally wrong (see Knobe, 2006, for a review). As predicted, the more disgust sensitive participants were, the more likely they were to describe an agent whose behavior had the side effect of causing gay men to kiss in public as having intentionally encouraged gay men to kiss publicly—even though most participants did not explicitly think it wrong to encourage gay men to kiss in public. No such effect occurred when subjects were asked about heterosexual kissing. Study 2 used the Implicit Association Test (IAT; Nosek, Banaji, & Greenwald, 2006) as a dependent measure. The more disgust sensitive participants were, the more they showed...
The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge – and Sensitivity, and indeed epistemology in general – are of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
In this paper, I defend the heterogeneity problem for sensitivity accounts of knowledge against an objection that has been recently proposed by Wallbridge in Philosophia. I have argued (2015, 479–496) that sensitivity accounts of knowledge face a heterogeneity problem when it comes to higher-level knowledge about the truth of one's own beliefs. Beliefs in weaker higher-level propositions are insensitive, but beliefs in stronger higher-level propositions are sensitive. The resulting picture that we can know the stronger propositions without being in a position to know the weaker propositions is too heterogeneous to be plausible. Wallbridge objects that there is no heterogeneity problem because beliefs in the weaker higher-level propositions are also sensitive. I argue against Wallbridge that the heterogeneity problem is not solved but only displaced. Only some beliefs in the weaker higher-level propositions are sensitive. I conclude that the heterogeneity problem is one of a family of instability problems that sensitivity accounts of knowledge face, and that Wallbridge's account raises a further problem of this kind.
Some contextually sensitive expressions are such that their context independent conventional meanings need to be in some way supplemented in context for the expressions to secure semantic values in those contexts. As we'll see, it is not clear that there is a paradigm here, but 'he' used demonstratively is a clear example of such an expression. Call expressions of this sort supplementives in order to highlight the fact that their context independent meanings need to be supplemented in context for them to have semantic values relative to the context. Many philosophers and linguists think that there is a lot of contextual sensitivity in natural language that goes well beyond the pure indexicals and supplementives like 'he'. Constructions/expressions that are good candidates for being contextually sensitive include: quantifiers, gradable adjectives including "predicates of personal taste", modals, conditionals, possessives and relational expressions taking implicit arguments. It would appear that in none of these cases does the expression/construction in question have a context independent meaning that when placed in context suffices to secure a semantic value for the expression/construction in the context. In each case, some sort of supplementation is required to do this. Hence, all these expressions are supplementives in my sense. For a given supplementive, the question arises as to what the mechanism is for supplementing its conventional meanings in context so as to secure a semantic value for it in context. That is, what form does the supplementation take? The question also arises as to whether different supplementives require different kinds of supplementation. Let us call an account of what, in addition to its conventional meaning, secures a semantic value for a supplementive in context a metasemantics for that supplementive.
So we can put our two questions thus: what is the proper metasemantics for a given supplementive; and do all supplementives have the same metasemantics? In the present work, I sketch the metasemantics I formulated for demonstratives in earlier work. Next, I briefly consider a number of other supplementives that I think the metasemantics I propose plausibly applies to and explain why I think that. Finally, I consider the prospects for extending the account to all supplementives. In so doing, I take up arguments due to Michael Glanzberg to the effect that supplementives are governed by two different metasemantics and attempt to respond to them.
In this paper, I argue that Contextualist theories of semantics are not undermined by their purported failure to explain the practice of indirect reporting. I adopt Cappelen & Lepore's test for context sensitivity to show that the scope of context sensitivity is much broader than Semantic Minimalists are willing to accept. The failure of their arguments turns on their insistence that the content of indirect reports is semantically minimal.
This paper puts forward an argument for a systematic, technical approach to formulation in verbal interaction. I see this as a kind of expansion of Sacks' membership categorization analysis, and as something that is not offered (at least not in a fully developed form) by sequential analysis, the currently dominant form of conversation analysis. In particular, I suggest a technique for the study of "occasioned semantics," that is, the study of structures of meaningful expressions in actual occasions of conversation. I propose that meaning and rhetoric be approached through consideration of various dimensions or operations or properties, including, but not limited to, contrast and co-categorization, generalization and specification, scaling, and marking. As illustration, I consider a variety of cases, focused on generalization and specification. The paper can be seen as a return to some classical concerns with meaning, as illuminated by more recent insights into indexicality, social action, and interaction in recorded talk.
This paper argues that if knowledge is defined in terms of probabilistic tracking then the benefits of epistemic closure follow without the addition of a closure clause. (This updates my definition of knowledge in Tracking Truth 2005.) An important condition on this result is found in "Closure Failure and Scientific Inquiry" (2017).
It has been argued that an advantage of the safety account over the sensitivity account is that the safety account preserves epistemic closure, while the sensitivity account implies epistemic closure...
Historically, the focus of moral decision-making in games has been narrow, mostly confined to challenges of moral judgement (deciding right and wrong). In this paper, we look to moral psychology to get a broader view of the skills involved in ethical behaviour and how these skills can be employed in games. Following the Four Component Model of Rest and colleagues, we identify four "lenses" – perspectives for considering moral gameplay in terms of focus, sensitivity, judgement and action – and describe the design problems raised by each. To conclude, we analyse two recent games, The Walking Dead and Papers, Please, and show how the lenses give us insight into important design differences between these games.
Sosa, Pritchard, and Vogel have all argued that there are cases in which one knows something inductively but does not believe it sensitively, and that sensitivity therefore cannot be necessary for knowledge. I defend sensitivity by showing that inductive knowledge is sensitive.
Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy, and justice. This paper argues that the VSD methodology, particularly applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value-investigations of VSD are deconstructed to illustrate both its strengths and weaknesses. This paper also provides possible modalities for the strengthening of the VSD methodology, particularly through the application of moral imagination, and argues that moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory and that there is no confirmation measure that renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points at pluralism about notions of confirmation.
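The measure-sensitivity point can be made concrete with a small toy sketch (my own illustration, not taken from the paper, with invented probability values): two standard Bayesian confirmation measures, the difference measure and the log-ratio measure, can rank the same pair of evidential situations in opposite orders.

```python
# Toy illustration of measure sensitivity: the same probabilities,
# evaluated by two common confirmation measures, yield opposite rankings.
import math

def difference(p_h_given_e, p_h):
    # Difference measure: d(H, E) = P(H|E) - P(H)
    return p_h_given_e - p_h

def log_ratio(p_h_given_e, p_h):
    # Log-ratio measure: r(H, E) = log[P(H|E) / P(H)]
    return math.log(p_h_given_e / p_h)

# Case 1: a modest absolute boost to an already-likely hypothesis.
d1, r1 = difference(0.9, 0.5), log_ratio(0.9, 0.5)

# Case 2: a large relative boost to an initially unlikely hypothesis.
d2, r2 = difference(0.1, 0.01), log_ratio(0.1, 0.01)

# The difference measure says case 1 involves stronger confirmation;
# the log-ratio measure says case 2 does.
print(d1 > d2)  # True: difference measure favors case 1
print(r2 > r1)  # True: log-ratio measure favors case 2
```

Since an argument's validity may turn on which ranking is correct, no choice of measure is innocent — this is the kind of divergence the "problem of measure sensitivity" trades on.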
The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology, or instrumentally as a tool. The various parts of VSD’s principled approach would then aim to discern the various policy requirements that any given technological artifact under consideration would implicate. Yet little to no consideration has been given to how laws, regulations, policies and social norms engage within VSD practices, or to how the interactive nature of the VSD approach can, in turn, influence those directives. This is exacerbated when we consider machine ethics policies that have global consequences outside their development spheres. What constructs and models will position AI designers to engage in policy concerns? How can the design of AI policy be integrated with technical design? How might VSD be used to develop AI policy? How might law, regulations, social norms, and other kinds of policy regarding AI systems be engaged within value sensitive design? This chapter takes VSD as its starting point and aims to determine how laws, regulations and policies come to influence how value trade-offs can be managed within VSD practices. It shows that the iterative and interactional nature of VSD both permits and encourages existing policies to be integrated both early on and throughout the design process. The chapter concludes with some potential future research programs.
In the Essay Concerning Human Understanding, Locke insists that all knowledge consists in perception of the agreement or disagreement of ideas. However, he also insists that knowledge extends to outer reality, claiming that perception yields ‘sensitive knowledge’ of the existence of outer objects. Some scholars have argued that Locke did not really mean to restrict knowledge to perceptions of relations within the realm of ideas; others have argued that sensitive knowledge is not strictly speaking a form of knowledge for Locke. This chapter argues that Locke’s conception of sensitive knowledge is in fact compatible with his official definition of knowledge, and discusses his treatment of the problem of skepticism, both in the Essay and in the correspondence with Stillingfleet.
In this paper, the notion of degree of inconsistency is introduced as a tool to evaluate the sensitivity of the Full Bayesian Significance Test (FBST) value of evidence with respect to changes in the prior or reference density. To that end, both the definition of the FBST, a possibilistic approach to hypothesis testing based on Bayesian probability procedures, and the use of bilattice structures, as introduced by Ginsberg and Fitting in paraconsistent logics, are reviewed. The computational and theoretical advantages of using the proposed degree-of-inconsistency-based sensitivity evaluation as an alternative to traditional statistical power analysis are also discussed.
This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes involved in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should take away the importance of a proactive design approach to intelligent agents.
Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators who continue to work alongside and symbiotically with these technologies bring with them novel ethical issues. Therefore, how to design these technologies for human values becomes a critical area of intervention. This paper takes up the case study of robotic AI-based assistance systems to explore the potential value implications that emerge from current design practices and use. The design methodology known as Value Sensitive Design (VSD) is proposed as a sufficient starting point for designing these technologies for human values in order to address these issues.
In this paper I argue that Locke takes sensitive knowledge (i.e. knowledge from sensation) to be genuine knowledge that material objects exist. Samuel Rickless has recently argued that, for Locke, sensitive knowledge is merely an “assurance”, or a highly probable judgment that falls short of certainty. In reply, I show that Locke sometimes uses “assurance” to describe certain knowledge, and so the use of the term “assurance” to describe sensitive knowledge does not entail that it is less than certain. Further, I show that sensitive knowledge includes the perception of a relation between ideas, and thus it satisfies Locke’s definition of knowledge. He also repeatedly claims that sensitive knowledge is certain. So, despite recent challenges to this interpretation raised in the secondary literature, Locke really does take sensitive knowledge to be certain knowledge.
In this paper I explore the relationship between skill and sensitivity to reasons for action. I want to know to what degree we can explain the fact that the skilled agent is very good at performing a cluster of actions within some domain in terms of the fact that the skilled agent has a refined sensitivity to the reasons for action common to that cluster. The picture is a little complex. While skill can be partially explained by sensitivity to reasons – a sensitivity often produced by rational practice – the skilled human agent, because imperfect, must navigate a trade-off between full sensitivity and a capacity to succeed.
This paper develops a question-sensitive theory of intention. We show that this theory explains some puzzling closure properties of intention. In particular, it can be used to explain why one is rationally required to intend the means to one’s ends, even though one is not rationally required to intend all the foreseen consequences of one’s intended actions. It also explains why rational intention is not always closed under logical implication, and why one can only intend outcomes that one believes to be under one’s control.
It has recently been argued that a sensitivity theory of knowledge cannot account for intuitively appealing instances of higher-order knowledge. In this paper, we argue that it can, once careful attention is paid to the methods or processes by which we typically form higher-order beliefs. We base our argument on what we take to be a well-motivated and commonsensical view of how higher-order knowledge is typically acquired, and we show how higher-order knowledge is possible in a sensitivity theory once this view is adopted.
Offering a solution to the skeptical puzzle is a central aim of Nozick's sensitivity account of knowledge. It is well known that this account faces serious problems. However, because of its simplicity and its explanatory power, the sensitivity principle has remained attractive and has been subject to numerous modifications, leading to a variety of sensitivity accounts. I will object to these accounts, arguing that sensitivity accounts of knowledge face two problems. First, they deliver a far too heterogeneous picture of higher-level beliefs about the truth or falsity of one's own beliefs. Second, this problem carries over to bootstrapping and Moorean reasoning. Some beliefs formed via bootstrapping or Moorean reasoning are insensitive, but some closely related beliefs in even stronger propositions are sensitive. These heterogeneous results regarding sensitivity do not fit with our intuitions about bootstrapping and Moorean reasoning. Thus, neither Nozick's sensitivity account of knowledge nor any of its modified versions can provide the basis for an argument that bootstrapping and Moorean reasoning are flawed, or for an explanation of why they seem to be flawed.
The study aimed to identify strategic sensitivity and its impact on enhancing the creative behavior of Palestinian NGOs in the Gaza Strip. The study used the descriptive analytical approach, with a questionnaire as the main tool for collecting data from employees of associations working in the Gaza Strip governorates. The cluster sampling method was used; the sample size reached 343 individuals, of whom 298 returned questionnaires. The following results were reached: the relative weight of strategic sensitivity was 79.22%, and the relative weight of creative behavior was 78.99%; there is a statistically significant relationship between strategic sensitivity and creative behavior, and a statistically significant effect of strategic sensitivity on creative behavior; there are statistically significant differences in the scale dimensions attributable to the gender variable, in favor of females; there are no statistically significant differences in strategic sensitivity attributable to age or educational qualification; and there are no statistically significant differences in creative behavior according to gender, age, educational qualification, or specialization. The study presented a set of recommendations, the most important of which are: the need for civil organizations in the Gaza Strip to seek external funding in order to provide self-generated income that allows associations to face crises and gives them the independence to carry out their role in society; and the need to follow up the strategic plans of civil organizations using e-mail, as this paves the way to excellence and creativity in the field of work.
In this paper, I develop a theory of how claims about an agent’s normative reasons are sensitive to the epistemic circumstances of this agent, which preserves the plausible ideas that reasons are facts and that reasons can be discovered in deliberation and disclosed in advice. I argue that a plausible theory of this kind must take into account the difference between synchronic and diachronic reasons, i.e. reasons for acting immediately and reasons for acting at some later point in time. I provide a general account of the relation between synchronic and diachronic reasons, demonstrate its implications for the evidence-sensitivity of reasons and finally present and defend an argument for my view.
In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.
Robert Nozick (1981, 172) offers the following analysis of knowledge (where S stands for subject and p for proposition): (D1) S knows that p =df (1) S believes p, (2) p is true, (3) if p weren’t true, S wouldn’t believe that p (variation condition), and (4) if p were true, S would believe it (adherence condition). Jointly, Nozick refers to conditions 3 and 4 as the sensitivity condition, for they require that the belief be sensitive to the truth-value of the proposition: such that if the proposition were false, the subject would not have believed it, and if the proposition remains true in a slightly different situation, the subject would have still believed it. In other words, they ask us to consider the status of the belief in close possible situations (those that obtain in close possible worlds); specifically, in situations that would obtain if the proposition were false, and in those in which it remains true. Condition 3 specifies how belief should vary with the truth of what is believed, while condition 4 specifies how belief shouldn’t vary when the truth of the belief does not vary. I will discuss some notable problem cases for Nozick’s analysis and then look at why the sensitivity condition he proposes fails in these cases.
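Conditions (3) and (4) can be pictured with a toy possible-worlds model. The sketch below is my own illustration, not Nozick's formalism: worlds are hand-built records, "closeness" is a stipulated distance from the actual world, and the variation and adherence checks are evaluated over that list.

```python
# Toy possible-worlds model of Nozick's sensitivity condition.
# Each world records whether p is true and whether S believes p,
# plus a stipulated distance from the actual world (index 0).
worlds = [
    {"p": True,  "believes_p": True,  "dist": 0},  # the actual world
    {"p": True,  "believes_p": True,  "dist": 1},  # a close p-world
    {"p": False, "believes_p": False, "dist": 2},  # the closest not-p world
]

def sensitive(worlds):
    actual = worlds[0]
    # Conditions (1) and (2): S believes p, and p is true.
    if not (actual["p"] and actual["believes_p"]):
        return False
    # Condition (3), variation: in the closest worlds where p is false,
    # S does not believe p.
    not_p = [w for w in worlds if not w["p"]]
    closest = min(w["dist"] for w in not_p)
    variation = all(not w["believes_p"] for w in not_p if w["dist"] == closest)
    # Condition (4), adherence: in the (close) p-worlds in the model,
    # S still believes p.
    adherence = all(w["believes_p"] for w in worlds if w["p"])
    return variation and adherence

print(sensitive(worlds))  # True: this belief is sensitive in the toy model
```

Flipping the closest not-p world to `"believes_p": True` models the classic failure case: the subject would have believed p even if it were false, so condition (3) fails and the belief no longer counts as knowledge on this analysis.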
Time sensitivity seems to affect our intuitive evaluation of the reasonable risk of fallibility in testimonies. All things being equal, we tend to be less demanding in accepting time sensitive testimonies as opposed to time insensitive testimonies. This paper considers this intuitive response to testimonies as a strategy of acceptance. It argues that the intuitive strategy, which takes time sensitivity into account, is epistemically superior to two adjacent strategies that do not: the undemanding strategy adopted by non-reductionists and the cautious strategy adopted by reductionists. The paper demonstrates that in adopting the intuitive strategy of acceptance, one is likely to form more true beliefs and fewer false beliefs. Also, in following the intuitive strategy, the listener will be fulfilling his epistemic duties more efficiently.
This paper defends the claim that although ‘Superman is Clark Kent and some people who believe that Superman flies do not believe that Clark Kent flies’ is a logically inconsistent sentence, we can still utter this sentence, while speaking literally, without asserting anything false. The key idea is that the context-sensitivity of attitude reports can be - and often is - resolved in different ways within a single sentence.
Previous Responsible Innovation (RI) research has provided valuable insights on the value conflicts inherent to societally desirable innovation. By observing the responses of firms to these conflicts, Value-sensitive Absorptive Capacity (VAC) captures the organizational capabilities to become sensitive to these value conflicts and thus innovate more responsibly. In this article, we construct a survey instrument to assess VAC, based on previous work by CSR and RI scholars. The construct and concurrent validity of the instrument were tested in an empirical study including 109 employees of 30 food manufacturing firms. The results from the survey were then compared with the conceptual VAC dimensions. With this comparison, we not only contribute to the substantiation of the VAC construct, but also show how inductive and deductive approaches can be combined to build theory regarding RI in a transdisciplinary manner.