Modal knowledge accounts that are based on standard possible-worlds semantics face well-known problems when it comes to knowledge of necessities. Beliefs in necessities are trivially sensitive and safe and, therefore, trivially constitute knowledge according to these accounts. In this paper, I will first argue that existing solutions to this necessity problem, which accept standard possible-worlds semantics, are unsatisfactory. In order to solve the necessity problem, I will utilize an unorthodox account of counterfactuals, as proposed by Nolan, on which we also consider impossible worlds. Nolan's account of counterpossibles delivers the intuitively correct result for sensitivity, i.e., S's belief is sensitive in intuitive cases of knowledge of necessities and insensitive in intuitive cases of knowledge failure. However, we acquire the same plausible result for safety only if we reject his strangeness of impossibility condition and accept the modal closeness of impossible worlds. In this case, the necessity problem can be analogously solved for sensitivity and safety. For some, such non-moderate accounts might come at too high a cost. In this respect, sensitivity is better off than safety when it comes to knowing necessities.
Recent attempts to resolve the Paradox of the Gatecrasher rest on a now familiar distinction between individual and bare statistical evidence. This paper investigates two such approaches, the causal approach to individual evidence and a recently influential (and award-winning) modal account that explicates individual evidence in terms of Nozick's notion of sensitivity. This paper offers counterexamples to both approaches, explicates a problem concerning necessary truths for the sensitivity account, and argues that either view is implausibly committed to the impossibility of no-fault wrongful convictions. The paper finally concludes that the distinction between individual and bare statistical evidence cannot be maintained in terms of causation or sensitivity. We have to look elsewhere for a solution to the Paradox of the Gatecrasher.
Sensitivity has sometimes been thought to be a highly epistemologically significant property, serving as a proxy for a kind of responsiveness to the facts that ensures that the truth of our beliefs isn't just a lucky coincidence. But it's an imperfect proxy: there are various well-known cases in which sensitivity-based anti-luck conditions return the wrong verdicts. And as a result of these failures, contemporary theorists often dismiss such conditions out of hand. I show here, though, that a sensitivity-based understanding of epistemic luck can be developed that respects what was attractive about sensitivity-based approaches in the first place but that's immune to these failures.
Vogel, Sosa, and Huemer have all argued that sensitivity is incompatible with knowing that you do not believe falsely, and that the sensitivity condition must therefore be false. I show that this objection misses its mark because it fails to take account of the basis of belief. Moreover, if the objection is modified to account for the basis of belief, then it collapses into the more familiar objection that sensitivity is incompatible with closure.
Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML may lead to AI systems adapting in ways that ‘disembody’ the values embedded in them. To address this, we propose a threefold modified VSD approach: 1) integrating a known set of VSD principles (AI4SG) as design norms from which more specific design requirements can be derived; 2) distinguishing between values that are promoted and respected by the design to ensure outcomes that not only do no harm but also contribute to good; and 3) extending the VSD process to encompass the whole life cycle of an AI technology in order to monitor unintended value consequences and redesign as needed. We illustrate our VSD for AI approach with an example use case of a SARS-CoV-2 contact tracing app.
Healthcare is becoming increasingly automated with the development and deployment of care robots. There are many benefits to care robots, but they also pose many challenging ethical issues. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. Using the value sensitive design approach to technology design, this paper extends its application to care robots by integrating the values of care, values that are specific to AI, and higher-scale values such as the United Nations Sustainable Development Goals. The ethical issues specific to care robots for the elderly are discussed at length alongside examples of specific design requirements that work to ameliorate these ethical concerns.
A number of prominent epistemologists claim that the principle of sensitivity “play[s] a starring role in the solution to some important epistemological problems”. I argue that traditional sensitivity accounts fail to explain even the most basic data that are usually considered to constitute their primary motivation. To establish this result I develop Gettier and lottery cases involving necessary truths. Since beliefs in necessary truths are sensitive by default, the resulting cases give rise to a serious explanatory problem for the defenders of sensitivity accounts. It is furthermore argued that attempts to modally strengthen traditional sensitivity accounts to avoid the problem must appeal to a notion of safety—the primary competitor of sensitivity in the literature. The paper concludes that the explanatory virtues of sensitivity accounts are largely illusory. In the framework of modal epistemology, it is safety rather than sensitivity that does the heavy explanatory lifting with respect to Gettier cases, lottery examples, and other pertinent cases.
In a recent paper, Michael Pardo argues that the epistemic property that is legally relevant is the one called Safety, rather than Sensitivity. In the process, he argues against our Sensitivity-related account of statistical evidence. Here we revisit these issues, partly in order to respond to Pardo, and partly in order to make general claims about legal epistemology. We clarify our account, we show how it adequately deals with counterexamples and other worries, we raise suspicions about Safety's value here, and we revisit our general skepticism about the role that epistemological considerations should play in determining legal policy.
In a recent paper, Melchior pursues a novel argumentative strategy against the sensitivity condition. His claim is that sensitivity suffers from a ‘heterogeneity problem’: although some higher-order beliefs are knowable, other, very similar, higher-order beliefs are insensitive and so not knowable. Similarly, the conclusions of some bootstrapping arguments are insensitive, but others are not. In reply, I show that sensitivity does not treat different higher-order beliefs differently in the way that Melchior states and that while genuine bootstrapping arguments have insensitive conclusions, the cases that Melchior describes as sensitive ‘bootstrapping’ arguments don't deserve the name, since they are a perfectly good way of getting to know their conclusions. In sum, sensitivity doesn't have a heterogeneity problem.
Two studies demonstrate that a dispositional proneness to disgust (“disgust sensitivity”) is associated with intuitive disapproval of gay people. Study 1 was based on previous research showing that people are more likely to describe a behavior as intentional when they see it as morally wrong (see Knobe, 2006, for a review). As predicted, the more disgust sensitive participants were, the more likely they were to describe an agent whose behavior had the side effect of causing gay men to kiss in public as having intentionally encouraged gay men to kiss publicly—even though most participants did not explicitly think it wrong to encourage gay men to kiss in public. No such effect occurred when subjects were asked about heterosexual kissing. Study 2 used the Implicit Association Test (IAT; Nosek, Banaji, & Greenwald, 2006) as a dependent measure. The more disgust sensitive participants were, the more they showed...
The law views with suspicion statistical evidence, even evidence that is probabilistically on a par with direct, individual evidence that the law is in no way suspicious of. But it has proved remarkably hard either to justify this suspicion or to debunk it. In this paper, we connect the discussion of statistical evidence to broader epistemological discussions of similar phenomena. We highlight Sensitivity – the requirement that a belief be counterfactually sensitive to the truth in a specific way – as a way of epistemically explaining the legal suspicion towards statistical evidence. Still, we do not think of this as a satisfactory vindication of the reluctance to rely on statistical evidence. Knowledge – and Sensitivity, and indeed epistemology in general – are of little, if any, legal value. Instead, we tell an incentive-based story vindicating the suspicion towards statistical evidence. We conclude by showing that the epistemological story and the incentive-based story are closely and interestingly related, and by offering initial thoughts about the role of statistical evidence in morality.
In this paper, I defend the heterogeneity problem for sensitivity accounts of knowledge against an objection that has been recently proposed by Wallbridge in Philosophia. I have argued (2015, 479–496) that sensitivity accounts of knowledge face a heterogeneity problem when it comes to higher-level knowledge about the truth of one's own beliefs. Beliefs in weaker higher-level propositions are insensitive, but beliefs in stronger higher-level propositions are sensitive. The resulting picture that we can know the stronger propositions without being in a position to know the weaker propositions is too heterogeneous to be plausible. Wallbridge objects that there is no heterogeneity problem because beliefs in the weaker higher-level propositions are also sensitive. I argue against Wallbridge that the heterogeneity problem is not solved but only displaced. Only some beliefs in the weaker higher-level propositions are sensitive. I conclude that the heterogeneity problem is one of a family of instability problems that sensitivity accounts of knowledge face and that Wallbridge's account raises a further problem of this kind.
Some contextually sensitive expressions are such that their context independent conventional meanings need to be in some way supplemented in context for the expressions to secure semantic values in those contexts. As we'll see, it is not clear that there is a paradigm here, but ‘he’ used demonstratively is a clear example of such an expression. Call expressions of this sort supplementives in order to highlight the fact that their context independent meanings need to be supplemented in context for them to have semantic values relative to the context. Many philosophers and linguists think that there is a lot of contextual sensitivity in natural language that goes well beyond the pure indexicals and supplementives like ‘he’. Constructions/expressions that are good candidates for being contextually sensitive include: quantifiers, gradable adjectives including “predicates of personal taste”, modals, conditionals, possessives and relational expressions taking implicit arguments. It would appear that in none of these cases does the expression/construction in question have a context independent meaning that when placed in context suffices to secure a semantic value for the expression/construction in the context. In each case, some sort of supplementation is required to do this. Hence, all these expressions are supplementives in my sense. For a given supplementive, the question arises as to what the mechanism is for supplementing its conventional meanings in context so as to secure a semantic value for it in context. That is, what form does the supplementation take? The question also arises as to whether different supplementives require different kinds of supplementation. Let us call an account of what, in addition to its conventional meaning, secures a semantic value for a supplementive in context a metasemantics for that supplementive. So we can put our two questions thus: what is the proper metasemantics for a given supplementive; and do all supplementives have the same metasemantics? In the present work, I sketch the metasemantics I formulated for demonstratives in earlier work. Next, I briefly consider a number of other supplementives that I think the metasemantics I propose plausibly applies to and explain why I think that. Finally, I consider the prospects for extending the account to all supplementives. In so doing, I take up arguments due to Michael Glanzberg to the effect that supplementives are governed by two different metasemantics and attempt to respond to them.
In this paper, I argue that Contextualist theories of semantics are not undermined by their purported failure to explain the practice of indirect reporting. I adopt Cappelen & Lepore's test for context sensitivity to show that the scope of context sensitivity is much broader than Semantic Minimalists are willing to accept. The failure of their arguments turns on their insistence that the content of indirect reports is semantically minimal.
It has been argued that an advantage of the safety account over the sensitivity account is that the safety account preserves epistemic closure, while the sensitivity account implies epistemic closure failure.
This paper argues that if knowledge is defined in terms of probabilistic tracking then the benefits of epistemic closure follow without the addition of a closure clause. (This updates my definition of knowledge in Tracking Truth 2005.) An important condition on this result is found in "Closure Failure and Scientific Inquiry" (2017).
Historically the focus of moral decision-making in games has been narrow, mostly confined to challenges of moral judgement (deciding right and wrong). In this paper, we look to moral psychology to get a broader view of the skills involved in ethical behaviour and how these skills can be employed in games. Following the Four Component Model of Rest and colleagues, we identify four “lenses” – perspectives for considering moral gameplay in terms of focus, sensitivity, judgement and action – and describe the design problems raised by each. To conclude, we analyse two recent games, The Walking Dead and Papers, Please, and show how the lenses give us insight into important design differences between these games.
Sosa, Pritchard, and Vogel have all argued that there are cases in which one knows something inductively but does not believe it sensitively, and that sensitivity therefore cannot be necessary for knowledge. I defend sensitivity by showing that inductive knowledge is sensitive.
Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy, and justice. This paper argues that the VSD methodology, particularly applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both its strengths and weaknesses. This paper also provides possible modalities for strengthening the VSD methodology, particularly through the application of moral imagination, and shows how moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.
Fitelson (1999) demonstrates that the validity of various arguments within Bayesian confirmation theory depends on which confirmation measure is adopted. The present paper adds to the results set out in Fitelson (1999), expanding on them in two principal respects. First, it considers more confirmation measures. Second, it shows that there are important arguments within Bayesian confirmation theory such that no single confirmation measure renders them all valid. Finally, the paper reviews the ramifications that this "strengthened problem of measure sensitivity" has for Bayesian confirmation theory and discusses whether it points toward pluralism about notions of confirmation.
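To make measure sensitivity concrete, here is a minimal sketch in Python of how three familiar confirmation measures (the difference measure d, the log-ratio measure r, and the log-likelihood-ratio measure l) can disagree about which of two evidence situations confirms more strongly. The priors and likelihoods below are made-up illustrative numbers, not figures from the paper.

```python
import math

# Two hypothesis/evidence pairs on which common Bayesian confirmation
# measures disagree about which body of evidence confirms more strongly.
def measures(prior, like_h, like_not_h):
    """Return (d, r, l): difference, log-ratio, and log-likelihood-ratio."""
    p_e = prior * like_h + (1 - prior) * like_not_h   # total probability of E
    posterior = prior * like_h / p_e                  # P(H|E) via Bayes
    d = posterior - prior                             # difference measure
    r = math.log(posterior / prior)                   # log-ratio measure
    l = math.log(like_h / like_not_h)                 # log-likelihood ratio
    return d, r, l

case_a = measures(prior=0.5, like_h=0.99, like_not_h=0.20)
case_b = measures(prior=0.1, like_h=0.90, like_not_h=0.30)

for name, (a, b) in zip(("d", "r", "l"), zip(case_a, case_b)):
    verdict = "A > B" if a > b else "B > A"
    print(f"{name}: case A = {a:.3f}, case B = {b:.3f}  ->  {verdict}")
# d and l rank case A as better confirmed, while r ranks case B higher.
```

Since d and l rank case A above case B while r reverses the order, an argument that turns on comparative confirmation claims can be valid under one measure and invalid under another, which is exactly the measure-sensitivity problem.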
In this paper I explore the relationship between skill and sensitivity to reasons for action. I want to know to what degree we can explain the fact that the skilled agent is very good at performing a cluster of actions within some domain in terms of the fact that the skilled agent has a refined sensitivity to the reasons for action common to the cluster. The picture is a little bit complex. While skill can be partially explained by sensitivity to reasons – a sensitivity often produced by rational practice – the skilled human agent, because imperfect, must navigate a trade-off between full sensitivity and a capacity to succeed.
In the Essay Concerning Human Understanding, Locke insists that all knowledge consists in perception of the agreement or disagreement of ideas. However, he also insists that knowledge extends to outer reality, claiming that perception yields ‘sensitive knowledge’ of the existence of outer objects. Some scholars have argued that Locke did not really mean to restrict knowledge to perceptions of relations within the realm of ideas; others have argued that sensitive knowledge is not strictly speaking a form of knowledge for Locke. This chapter argues that Locke's conception of sensitive knowledge is in fact compatible with his official definition of knowledge, and discusses his treatment of the problem of skepticism, both in the Essay and in the correspondence with Stillingfleet.
The value sensitive design (VSD) approach to designing transformative technologies for human values is taken as the object of study in this chapter. VSD has traditionally been conceptualized as another type of technology or instrumentally as a tool. The various parts of VSD's principled approach would then aim to discern the various policy requirements that any given technological artifact under consideration would implicate. Yet little to no consideration has been given to how laws, regulations, policies and social norms engage within VSD practices, or to how the interactive nature of the VSD approach can, in turn, influence those directives. This is exacerbated when we consider machine ethics policies that have global consequences outside their development spheres. What constructs and models will position AI designers to engage in policy concerns? How can the design of AI policy be integrated with technical design? How might VSD be used to develop AI policy? How might law, regulations, social norms, and other kinds of policy regarding AI systems be engaged within value sensitive design? This chapter takes VSD as its starting point and aims to determine how laws, regulations and policies come to influence how value trade-offs can be managed within VSD practices. It shows that the iterative and interactional nature of VSD both permits and encourages existing policies to be integrated both early on and throughout the design process. The chapter concludes with some potential future research programs.
Although artificial intelligence has been given an unprecedented amount of attention in both the public and academic domains in the last few years, its convergence with other transformative technologies like cloud computing, robotics, and augmented/virtual reality is predicted to exacerbate its impacts on society. The adoption and integration of these technologies within industry and manufacturing spaces is a fundamental part of what is called Industry 4.0, or the Fourth Industrial Revolution. The impacts of this paradigm shift on the human operators who continue to work alongside and symbiotically with these technologies in the industry bring with them novel ethical issues. Therefore, how to design these technologies for human values becomes the critical area of intervention. This paper takes up the case study of robotic AI-based assistance systems to explore the potential value implications that emerge due to current design practices and use. The design methodology known as Value Sensitive Design (VSD) is proposed as a sufficient starting point for designing these technologies for human values to address these issues.
In this paper I argue that Locke takes sensitive knowledge (i.e. knowledge from sensation) to be genuine knowledge that material objects exist. Samuel Rickless has recently argued that, for Locke, sensitive knowledge is merely an “assurance”, or a highly probable judgment that falls short of certainty. In reply, I show that Locke sometimes uses “assurance” to describe certain knowledge, and so the use of the term “assurance” to describe sensitive knowledge does not entail that it is less than certain. Further, I show that sensitive knowledge includes the perception of a relation between ideas, and thus it satisfies Locke's definition of knowledge. He also repeatedly claims that sensitive knowledge is certain. So, despite recent challenges to this interpretation raised in the secondary literature, Locke really does take sensitive knowledge to be certain knowledge.
This chapter proposes a novel design methodology called Value-Sensitive Design and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a framework that has the potential to embed stakeholder values and incorporate current design methods. The reader should begin to take away the importance of a proactive design approach to intelligent agents.
In this paper, the notion of degree of inconsistency is introduced as a tool to evaluate the sensitivity of the Full Bayesian Significance Test (FBST) value of evidence with respect to changes in the prior or reference density. For that, both the definition of the FBST, a possibilistic approach to hypothesis testing based on Bayesian probability procedures, and the use of bilattice structures, as introduced by Ginsberg and Fitting in paraconsistent logics, are reviewed. The computational and theoretical advantages of using the proposed degree-of-inconsistency-based sensitivity evaluation as an alternative to traditional statistical power analysis are also discussed.
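For orientation, the FBST's value of evidence (the e-value) for a sharp null is the posterior mass lying outside the tangential set of points whose density exceeds the null's maximum density. Below is a minimal Monte Carlo sketch under an illustrative normal posterior; the parameters and the point null are assumptions for the demo, not taken from the paper.

```python
import numpy as np

# Minimal Monte Carlo sketch of the FBST e-value (evidence in favor of H0).
# Assumes a sharp null H0: theta = theta0 and a posterior we can both sample
# from and evaluate, here an illustrative Normal(m, s^2).
rng = np.random.default_rng(0)

m, s = 0.8, 0.5          # illustrative posterior mean and sd
theta0 = 0.0             # sharp null hypothesis H0: theta = 0

def posterior_pdf(theta):
    return np.exp(-0.5 * ((theta - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Tangential set T = {theta : f(theta) > f*}, where f* is the posterior
# density maximized over H0 (for a point null, simply f(theta0)).
f_star = posterior_pdf(theta0)

draws = rng.normal(m, s, size=100_000)                # posterior samples
ev_against = np.mean(posterior_pdf(draws) > f_star)   # posterior mass of T
ev = 1.0 - ev_against                                 # FBST e-value for H0
print(f"e-value in favor of H0: {ev:.3f}")
```

The paper's degree of inconsistency then asks how much this e-value moves when the prior or reference density is perturbed; the sketch above is only the inner computation that such a sensitivity analysis would repeat under each perturbation.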
This paper develops a question-sensitive theory of intention. We show that this theory explains some puzzling closure properties of intention. In particular, it can be used to explain why one is rationally required to intend the means to one's ends, even though one is not rationally required to intend all the foreseen consequences of one's intended actions. It also explains why rational intention is not always closed under logical implication, and why one can only intend outcomes that one believes to be under one's control.
Offering a solution to the skeptical puzzle is a central aim of Nozick's sensitivity account of knowledge. It is well-known that this account faces serious problems. However, because of its simplicity and its explanatory power, the sensitivity principle has remained attractive and has been subject to numerous modifications, leading to a family of sensitivity accounts. I will object to these accounts, arguing that sensitivity accounts of knowledge face two problems. First, they deliver a far too heterogeneous picture of higher-level beliefs about the truth or falsity of one's own beliefs. Second, this problem carries over to bootstrapping and Moorean reasoning. Some beliefs formed via bootstrapping or Moorean reasoning are insensitive, but some closely related beliefs in even stronger propositions are sensitive. These heterogeneous results regarding sensitivity do not fit with our intuitions about bootstrapping and Moorean reasoning. Thus, neither Nozick's sensitivity account of knowledge nor any of its modified versions can provide the basis for an argument that bootstrapping and Moorean reasoning are flawed or for an explanation why they seem to be flawed.
It has recently been argued that a sensitivity theory of knowledge cannot account for intuitively appealing instances of higher-order knowledge. In this paper, we argue that it can once careful attention is paid to the methods or processes by which we typically form higher-order beliefs. We base our argument on what we take to be a well-motivated and commonsensical view on how higher-order knowledge is typically acquired, and we show how higher-order knowledge is possible in a sensitivity theory once this view is adopted.
The study aimed to identify strategic sensitivity and its impact on enhancing the creative behavior of Palestinian NGOs in the Gaza Strip. The study used the descriptive analytical approach, with a questionnaire as the main tool for collecting data from employees of associations working in the Gaza Strip governorates. The cluster sampling method was used; the sample size reached 343 individuals, and 298 questionnaires were retrieved. The following results were reached: the relative weight of strategic sensitivity was 79.22%, and the relative weight of creative behavior was 78.99%; there was a statistically significant relationship between strategic sensitivity and creative behavior, and a statistically significant effect of strategic sensitivity on creative behavior; there were statistically significant differences in the scale dimensions attributable to the gender variable, in favor of females; there were no statistically significant differences in the averages of strategic sensitivity due to the age variable or educational qualification; and there were no statistically significant differences in creative behavior according to the gender, age, educational qualification, or specialization variables. The study presented a set of recommendations, the most important of which are: the need for civil organizations in the Gaza Strip to seek funding from external countries in order to provide self-generated income for associations to face crises and give them the independence to carry out their role in society, and the need to follow up the strategic plan of civil organizations using e-mail, as this paves the way to excellence and creativity in the field of work.
In this paper, I develop a theory of how claims about an agent's normative reasons are sensitive to the epistemic circumstances of this agent, which preserves the plausible ideas that reasons are facts and that reasons can be discovered in deliberation and disclosed in advice. I argue that a plausible theory of this kind must take into account the difference between synchronic and diachronic reasons, i.e. reasons for acting immediately and reasons for acting at some later point in time. I provide a general account of the relation between synchronic and diachronic reasons, demonstrate its implications for the evidence-sensitivity of reasons and finally present and defend an argument for my view.
In medical ethics, business ethics, and some branches of political philosophy (multi-culturalism, issues of just allocation, and equitable distribution) the literature increasingly combines insights from ethics and the social sciences. Some authors in medical ethics even speak of a new phase in the history of ethics, hailing "empirical ethics" as a logical next step in the development of practical ethics after the turn to "applied ethics." The name empirical ethics is ill-chosen because of its associations with "descriptive ethics." Unlike descriptive ethics, however, empirical ethics aims to be both descriptive and normative. The first question on which I focus is what kind of empirical research is used by empirical ethics and for which purposes. I argue that the ultimate aim of all empirical ethics is to improve the context-sensitivity of ethics. The second question is whether empirical ethics is essentially connected with specific positions in meta-ethics. I show that in some kinds of meta-ethical theories, which I categorize as broad contextualist theories, there is an intrinsic need for connecting normative ethics with empirical social research. But context-sensitivity is a goal that can be aimed for from any meta-ethical position.
Robert Nozick (1981, 172) offers the following analysis of knowledge (where S stands for subject and p for proposition): D1. S knows that p =df (1) S believes p, (2) p is true, (3) if p weren't true, S wouldn't believe that p (variation condition), and (4) if p were true, S would believe it (adherence condition). Jointly, Nozick refers to conditions 3 and 4 as the sensitivity condition: for they require that the belief be sensitive to the truth-value of the proposition—such that if the proposition were false, the subject would not have believed it, and if the proposition remained true in a slightly different situation, the subject would still have believed it. In other words, they ask us to consider the status of the belief in close possible situations (those that obtain in close possible worlds); specifically, in situations that would obtain if the proposition were false, and in those in which it remains true. Condition 3 specifies how belief should vary with the truth of what is believed, while condition 4 specifies how belief shouldn't vary when the truth of the belief does not vary. I will discuss some notable problem cases for Nozick's analysis and then look at why the sensitivity condition he proposes fails in these cases.
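As a reading aid, here is a compact restatement of D1 in standard subjunctive-conditional notation. The macro \cf for the counterfactual conditional ("if ... were true, ... would be") is a local definition, since LaTeX has no canonical symbol for it.

```latex
% Nozick's four tracking conditions; \cf is the counterfactual conditional.
\documentclass{article}
\usepackage{amsmath,amssymb}
\newcommand{\cf}{\mathrel{\Box\kern-2pt\rightarrow}}
\begin{document}
\[
K(S,p) \iff B(S,p) \wedge p
\wedge \bigl(\neg p \cf \neg B(S,p)\bigr)   % (3) variation
\wedge \bigl(p \cf B(S,p)\bigr)             % (4) adherence
\]
\end{document}
```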
Time sensitivity seems to affect our intuitive evaluation of the reasonable risk of fallibility in testimonies. All things being equal, we tend to be less demanding in accepting time sensitive testimonies as opposed to time insensitive testimonies. This paper considers this intuitive response to testimonies as a strategy of acceptance. It argues that the intuitive strategy, which takes time sensitivity into account, is epistemically superior to two adjacent strategies that do not: the undemanding strategy adopted by non-reductionists and the cautious strategy adopted by reductionists. The paper demonstrates that in adopting the intuitive strategy of acceptance, one is likely to form more true beliefs and fewer false beliefs. Also, in following the intuitive strategy, the listener will be fulfilling his epistemic duties more efficiently.
This paper defends the claim that although ‘Superman is Clark Kent and some people who believe that Superman flies do not believe that Clark Kent flies’ is a logically inconsistent sentence, we can still utter this sentence, while speaking literally, without asserting anything false. The key idea is that the context-sensitivity of attitude reports can be - and often is - resolved in different ways within a single sentence.
Keith DeRose has argued that the two main problems facing subject-sensitive invariantism come from the appropriateness of certain third-person denials of knowledge and the inappropriateness of now you know it, now you don't claims. I argue that proponents of SSI can adequately address both problems. First, I argue that the debate between contextualism and SSI has failed to account for an important pragmatic feature of third-person denials of knowledge. Appealing to these pragmatic features, I show that straightforward third-person denials are inappropriate in the relevant cases. And while there are certain denials that are appropriate, they pose no problems for SSI. Next, I offer an explanation, compatible with SSI, of the oddity of now you know it, now you don't claims. To conclude, I discuss the intuitiveness of purism, whose rejection is the source of many problems for SSI. I propose to explain away the intuitiveness of purism as a side-effect of the narrow focus of previous epistemological inquiries.
Decision-making regarding healthcare expenditure hinges heavily on an individual's health status and the certainty about the future. This study uses data on the propensity for general health exam (GHE) spending to show that despite the debate on the necessity of GHE, its objective is clear—to obtain more information and certainty about one's health so as to minimise future risks. Most studies on this topic, however, focus only on factors associated with GHE uptake and overlook the shifts in behaviours and attitudes regarding different levels of cost. To fill the gap, this study analyses a dataset of 2068 subjects collected from Hanoi (Vietnam) and its vicinities using the baseline-category logit method. We evaluate the sensitivity of Vietnamese healthcare consumers to two groups of factors (demographic and socioeconomic-cognitive) regarding payment for periodic GHE, which is not covered by insurance. Our study shows that uninsured, married and employed individuals are less sensitive to cost than their counterparts because they value the information in reducing future health uncertainty. The empirical results challenge the objections to periodic health screening by highlighting its utility. The relevance of behavioural economics is further highlighted through a look at the bounded rationality of healthcare consumers and private insurance companies in using and providing the service, respectively.
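As a sketch of the kind of analysis the study reports, the following code fits a baseline-category (multinomial) logit with statsmodels. The variable names, the three-level outcome, and the simulated data are illustrative stand-ins, not the authors' Hanoi dataset; only the sample size of 2068 is taken from the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative baseline-category logit. Outcome: a simulated 3-level
# cost-tolerance category (0 = low, the baseline; 1 = medium; 2 = high).
# Predictors: hypothetical demographic/socioeconomic dummies.
rng = np.random.default_rng(1)
n = 2068
X = pd.DataFrame({
    "insured":  rng.integers(0, 2, n),
    "married":  rng.integers(0, 2, n),
    "employed": rng.integers(0, 2, n),
})
logits = np.column_stack([
    np.zeros(n),                                  # baseline category
    0.4 * X["married"] - 0.3 * X["insured"],      # made-up effects
    0.6 * X["employed"] - 0.5 * X["insured"],
])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())  # coefficients are log-odds relative to category 0
```

In a baseline-category logit, each non-baseline category gets its own coefficient vector, so the fitted model directly expresses how, say, being insured shifts the odds of the medium- and high-tolerance categories relative to the low-tolerance baseline, which is the comparison the study's cost-sensitivity claims rest on.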
The sensitivity condition on knowledge says that one knows that P only if one would not believe that P if P were false. Difficulties for this condition are now well documented. Keith DeRose has recently suggested a revised sensitivity condition that is designed to avoid some of these difficulties. We argue, however, that there are decisive objections to DeRose's revised condition. Yet rather than simply abandoning his proposed condition, we uncover a rationale for its adoption, a rationale which suggests a further revision that avoids our objections as well as others. The payoff is considerable: along the way to our revision, we learn lessons about the epistemic significance of certain explanatory relations, about how we ought to envisage epistemic closure principles, and about the epistemic significance of methods of belief formation.
John MacFarlane has recently argued that his brand of truth relativism – Assessment Sensitivity – provides the best solution to the puzzle of future contingents: statements about the future that are metaphysically neither necessary nor impossible. In this paper, we show that even if we grant all of the metaphysical, semantic and pragmatic assumptions in terms of which MacFarlane sets and solves the puzzle, Assessment Sensitivity is ultimately self-refuting.
Do philosophers use intuitions? Should philosophers use intuitions? Can philosophical methods (where intuitions are concerned) be improved upon? In order to answer these questions we need to have some idea of how we should go about answering them. I defend a way of going about methodology of intuitions: a metamethodology. I claim the following: (i) we should approach methodological questions about intuitions with a thin conception of intuitions in mind; (ii) we should carve intuitions finely; and, (iii) we should carve to a grain to which we are sensitive in our everyday philosophising. The reason is that, unless we do so, we don't get what we want from philosophical methodology. I argue that what we want is information that will aid us in formulating practical advice concerning how to do philosophy responsibly/well/better.
Previous Responsible Innovation (RI) research has provided valuable insights on the value conflicts inherent to societally desirable innovation. By observing the responses of firms to these conflicts, Value-sensitive Absorptive Capacity (VAC) captures the organizational capabilities to become sensitive to these value conflicts and, thus, innovate more responsibly. In this article, we construct a survey instrument to assess VAC, based on previous work by CSR and RI scholars. The construct and concurrent validity of the instrument were tested in an empirical study including 109 employees of 30 food manufacturing firms. The results from the survey were then compared with the conceptual VAC dimensions. With this comparison, we not only contribute to the substantiation of the VAC construct, but also show how inductive and deductive approaches can be combined to build theory regarding RI in a transdisciplinary manner.
I examine the role of time-sensitivity in science by drawing on a discussion between Kevin Elliott and Daniel McKaughan, on one side, and Daniel Steel, on the other, on the role of non-epistemic values in theory assessment and the epistemic status of speed of inference. I argue that: 1) speed supervenes on ease of use in the cases they discuss, 2) speed is an epistemic value, and 3) Steel's account of values doesn't successfully distinguish extrinsically epistemic from non-epistemic values. Finally, I propose an account of time-sensitivity.
With emotional motivation the organism prepares the body to obtain a goal. There is an anticipatory sensitization of the sensory systems in the body and the brain. Presynaptic facilitation of the sensory afference in the spinal cord is probably involved. In a second stage the higher centers develop an action image/plan to realize the goal, modifying the initial preparations in the body. The subject experiences the changes in the body as a feeling. Three empirical studies supporting this description are summarized. This description of how feelings develop from emotion circuits is discussed from a phenomenological viewpoint.
Previous theories of early vision have assumed that visual search is based on simple two-dimensional aspects of an image, such as the orientation of edges and lines. It is shown here that search can also be based on three-dimensional orientation of objects in the corresponding scene, provided that these objects are simple convex blocks. Direct comparison shows that image-based and scene-based orientation are similar in their ability to facilitate search. These findings support the hypothesis that scene-based properties are represented at preattentive levels in early vision.
The majority of studies on absorptive capacity underscore the importance of absorbing technological knowledge from other firms to create economic value. However, to preserve moral legitimacy and create social value, firms must also discern and adapt to societal values. A comparative case study of eight firms in the food industry reveals how organizations prioritize and operationalize the societal value health in product innovation while navigating inter- and intravalue conflicts. The value-sensitive framework induced in this article extends absorptive capacity by explaining how technically savvy, economic value–creating firms diverge in their receptivity, articulation, and reflexivity of societal values.
Permissivism about rationality is the view that there is sometimes more than one rational response to a given body of evidence. In this paper I discuss the relationship between permissivism, deference to rationality, and peer disagreement. I begin by arguing that—contrary to popular opinion—permissivism supports at least a moderate version of conciliationism. I then formulate a worry for permissivism. I show that, given a plausible principle of rational deference, permissive rationality seems to become unstable and to collapse into unique rationality. I conclude with a formulation of a way out of this problem on behalf of the permissivist.
Jason Stanley has argued recently that Epistemic Contextualism and Subject‐Sensitive Invariantism are explanatorily on a par with regard to certain data arising from modal and temporal embeddings of ‘knowledge’‐ascriptions. This paper argues against Stanley that EC has a clear advantage over SSI in the discussed field and introduces a new type of linguistic datum strongly suggesting the falsity of SSI.