The essay proceeds from two assumptions: for an economic/political integration group to succeed, first, its participants’ motives should ideally be as alike as possible and not oppose one another and, second, their expectations of integration should correspond to the organisation’s capabilities. In light of these assumptions, the study endeavours to assess the Eurasian Economic Union’s (EAEU) potential for stability and development. First, the author analyses the key motives driving its member states’ decisions to enter the organisation, compares them with one another, and discusses how the countries’ motives influence their conduct in the union. Second, the author weighs those motives against the EAEU’s activities and the general logic of interstate politics in the post-Soviet space to assess whether the bloc’s capabilities fit the expectations of its member countries. Finally, based on that discussion, the author considers how the divergence or convergence of EAEU member states’ goals, as well as the (in)feasibility of their expectations, affects the organisation’s development.
A paradox, it is claimed, is a radical form of contradiction, one that produces gaps in meaning. In order to approach this idea, two senses of “separation” are distinguished: separation by something and separation by nothing. The latter does not refer to nothing in an ordinary sense, however, since in that sense what’s intended is actually less than nothing. Numerous ordinary nothings in philosophy as well as in other fields are surveyed so as to clarify the contrast. Then follows the suggestion that philosophies which one would expect to have room for paradoxes actually tend either to exclude them altogether or to dull them. There is a clear alternative, however, one that fully recognizes paradoxes and yet also strives to overcome them.
Gender issues are well-researched in the general management literature, particularly in studies on new ventures. Unfortunately, gender issues have been largely ignored in the dynamic capabilities literature. We address this gap by analyzing the effects of gender diversity on dynamic capabilities among micro firms. We consider the gender of managers and personnel in 124 Ukrainian tourism micro firms. We examine how a manager’s gender affects the firm’s sensing capacities and investigate how it moderates team gender diversity’s impact on sensing capacities. We also investigate how personnel composition impacts seizing and reconfiguration capacities. We find that female managers have several shortcomings concerning a firm’s sensing capacity but that personnel gender diversity increases this capacity. Team gender diversity has positive effects on a firm’s seizing and reconfiguration abilities. Our study advances research on gender diversity and its impact on firm capabilities and illustrates its relevance for staffing practices in micro firms.
Sparrow argues that military robots capable of making their own decisions would be independent enough to absolve us of responsibility for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: a person of sufficiently high standing could accept responsibility for the actions of autonomous robotic devices—even if that person could not be causally linked to those actions besides this prior agreement. The basic intuition behind our proposal is that humans can impute relations even when no other form of contact can be established. The missed alternative we want to highlight, then, would consist in an exchange: social prestige in the occupation of a given office would come at the price of signing away part of one's freedoms to a contingent and unpredictable future guided by another agency.
Given existing commitments to climate change mitigation, global warming is likely to exceed 2°C and to trigger irreversible and harmful threshold effects. The difference between the reductions necessary to keep within the 2°C limit and the reductions countries have currently committed to is called the ‘emissions gap’. I argue that capable states not only have a moral duty to make voluntary contributions to bridge that gap, but that complying states ought to make up for the failures of some other states to comply with this duty. While defecting or doing less than one’s fair share can be a good move in certain circumstances, it would be morally wrong in this situation. In order to bridge the emissions gap, willing states ought to take up the slack left by others. The paper rejects the unfairness objection, namely that it is wrong to require agents to take on additional costs to discharge duties that are not primarily theirs. Sometimes what is morally right is simply unfair.
Artificial Intelligence (AI) systems are ubiquitous. From social media timelines, video recommendations on YouTube, and the kinds of adverts we see online, AI, in a very real sense, filters the world we see. More than that, AI is being embedded in agent-like systems, which might prompt certain reactions from users. Specifically, we might find ourselves feeling frustrated if these systems do not meet our expectations. In normal situations, this might be fine, but with the ever-increasing sophistication of AI systems, this might become a problem. While it seems unproblematic to realize that being angry at your car for breaking down is unfitting, can the same be said for AI systems? In this paper, therefore, I investigate the so-called “reactive attitudes” and their important link to our responsibility practices. I then show how within this framework there exist exemption and excuse conditions, and test whether our adopting the “objective attitude” toward agential AI is justified. I argue that such an attitude is appropriate in the context of three distinct senses of responsibility (answerability, attributability, and accountability), and that, therefore, AI systems do not undermine our responsibility ascriptions.
Much attention has recently been paid to the idea, which I label ‘External World Acquaintance’ (EWA), that the phenomenal character of perceptual experience is partially constituted by external features. One motivation for EWA which has received relatively little discussion is its alleged ability to help deal with the ‘Explanatory Gap’ (e.g. Fish 2008, 2009, Langsam 2011, Allen 2016). I provide a reformulation of this general line of thought, which makes clearer how and when EWA could help to explain the specific phenomenal nature of visual experience. In particular, I argue that by focusing on the different kinds of perceptual actions that are available in the case of visual spatial vs. colour perception, we get a natural explanation for why we should expect the specific nature of colour phenomenology to remain less readily intelligible than the specific nature of visual spatial phenomenology.
Hyperbole is traditionally understood as exaggeration. Instead, in this paper, we define it not just in terms of its form, but in terms of its effects and its purpose. Specifically, we characterize its form as a shift of magnitude along a scale of measurement. In terms of its effect, it uses this magnitude shift to make the target property more salient. The purpose of hyperbole is to express with colour and force that the target property is either greater or lesser than expected or desired. This purpose is well suited to hyperbolic expression, because hyperbole naturally draws a contrast between two points: how things are versus how they were expected to be. We also consider compound figures involving hyperbole. When it combines with other figures, hyperbole operates by magnifying the specific effects of the figure it operates on. We shall see that sometimes hyperbole works as an input for irony, and at other times it builds on a metaphor to increase the effects of that metaphor.
Many philosophers think the distinctive function of deontic evaluation is to guide action. This idea is used in arguments for a range of substantive claims. In this paper, we entirely do one completely destructive thing and partly do one not entirely constructive thing. The first thing: we argue that there is an unrecognized gap between the claim that the function of deontic evaluation is to guide action and attempts to put that claim to use. We consider and reject four arguments intended to bridge this gap. The interim conclusion is thus that arguments starting with the claim that the function of deontic evaluation is to guide action have a lacuna. The second thing: we consider a different tack for making arguments of this sort work. We sketch a methodology one could accept that would do the trick. Unfortunately, as we’ll explain, although this methodology would bridge the gap in arguments that put claims about the function of deontic evaluation to work, it would do so in a way that vitiates any interest we might have in such arguments. As an aside, we’ll also point out how epistemologists, who have recently become interested in the function of epistemic evaluation, appear to already recognize this fact. The conclusion is hence a dilemma: either arguments from deontic function to substance have a lacuna or such arguments lack teeth.
Trudy Govier (FR) argues for “conditional unforgivability,” yet avers that we should never give up on a human being. She not only says it is justifiable to take a “hopeful and respectful attitude” toward one’s wrongdoers; she indicates that it is wrong not to. She says it is objectionable to adopt an attitude that any individual is “finally irredeemable” or “could never change,” because such an attitude “anticipates and communicates the worst” (137). Govier’s recommendation to hold a hopeful attitude seems to follow from one’s knowing that an appropriate object of unforgivability is also an agent capable of moral transformation. I appeal to Blake Myers-Schultz’s and Eric Schwitzgebel’s account of knowledge without belief, and Schwitzgebel’s account of attitudes, to argue that a victim’s knowledge that a wrongdoer has the capacities of a moral agent does not entail belief in the possibility that the wrongdoer will exercise those moral capacities, nor does knowledge of a wrongdoer’s moral capacities entail hopeful attitudes toward the prospects of an individual wrongdoer’s moral transformation. I conclude that what victims can hope for should not be that which victims are held to as a moral minimum.
Different anesthetics are known to modulate different types of membrane-bound receptors. Their common mechanism of action is expected to alter the mechanism for consciousness. Consciousness is hypothesized as the integral of all the units of internal sensations induced by reactivation of inter-postsynaptic membrane functional LINKs during mechanisms that lead to oscillating potentials. The thermodynamics of the spontaneous lateral curvature of lipid membranes induced by lipophilic anesthetics can lead to the formation of non-specific inter-postsynaptic membrane functional LINKs by different mechanisms. These include direct membrane contact by excluding the inter-membrane hydrophilic region and readily reversible partial membrane hemifusion. The constant reorganization of the lipid membranes at the lateral edges of the postsynaptic terminals (dendritic spines) resulting from AMPA receptor-subunit vesicle exocytosis and endocytosis can favor the effect of anesthetic molecules on lipid membranes at this location. Induction of a large number of non-specific LINKs can alter the conformation of the integral of the units of internal sensations that maintain consciousness. Anesthetic requirement is reduced in the presence of dopamine, which causes enlargement of dendritic spines. Externally applied pressure can transduce from the middle ear through the perilymph, cerebrospinal fluid, and the recently discovered glymphatic pathway to the extracellular matrix space, and finally to the paravenular space. The pressure gradient reduces solubility and displaces anesthetic molecules from the membranes into the paravenular space, explaining the pressure reversal of anesthesia. Changes in membrane composition and the conversion of membrane hemifusion to fusion due to defects in the checkpoint mechanisms can lead to cytoplasmic content mixing between neurons and cause neurodegenerative changes. The common mechanism of anesthetics presented here can operate along with the known specific actions of different anesthetics.
Since our moral and legal judgments are focused on our decisions and actions, one would expect information about the neural underpinnings of human decision-making and action-production to have a significant bearing on those judgments. However, despite the wealth of empirical data, and the public attention it has attracted in the past few decades, the results of neuroscientific research have had relatively little influence on legal practice. It is here argued that this is due, at least partly, to the discussion on the relationship of the neurosciences and law mixing up a number of separate issues that have different degrees of relevance to our moral and legal judgments. The approach here is hierarchical; more and less feasible ways in which neuroscientific data could inform such judgments are separated from each other. The neurosciences and other physical views on human behavior and decision-making do have the potential to impact our legal reasoning. However, this happens in various ways, and too often an appeal to any neural data is assumed to be automatically relevant to shaping our moral and legal judgments. Our physicalist intuitions easily favor neural-level explanations over mental-level ones. But even if one were to subscribe to some reductionist variant of physicalism, it would not follow that all neural data are automatically relevant to our moral and legal reasoning. However, the neurosciences can give us indirect evidence for reductive physicalism, which can then lead us to challenge the very idea of free will. Such a development can, ultimately, also have repercussions for law and legal practice.
Are groups ever capable of bearing responsibility, over and above their individual members? This chapter discusses and defends the view that certain organized collectives – namely, those that qualify as group moral agents – can be held responsible for their actions, and that group responsibility is not reducible to individual responsibility. The view has important implications. It supports the recognition of corporate civil and even criminal liability in our legal systems, and it suggests that, by recognizing group agents as loci of responsibility, we may be able to avoid “responsibility gaps” in some cases of collectively caused harms for which there is a shortfall of individual responsibility. The chapter further asks whether the view that certain groups are responsible agents commits us to the view that those groups should also be given rights of their own and gives a qualified negative answer.
As stochastic independence is essential to the mathematical development of probability theory, it seems that any foundational work on probability should be able to account for this property. Bayesian decision theory appears to be wanting in this respect. Savage’s postulates on preferences under uncertainty entail a subjective expected utility representation, and this asserts only the existence and uniqueness of a subjective probability measure, regardless of its properties. What is missing is a preference condition corresponding to stochastic independence. To fill this significant gap, the article axiomatizes Bayesian decision theory afresh and proves several representation theorems in this novel framework.
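The gap the abstract describes can be stated compactly in standard textbook notation (this is a sketch of the familiar formulations, not the article’s own axiom system): Savage’s theorem pins down a unique subjective probability P, but says nothing about whether P treats any events as independent.

```latex
% Savage's representation: preferences over acts f, g admit a subjective
% expected utility form with a unique subjective probability P and utility u:
f \succsim g \iff \int_S u\bigl(f(s)\bigr)\,\mathrm{d}P(s) \;\ge\; \int_S u\bigl(g(s)\bigr)\,\mathrm{d}P(s)

% Stochastic independence of events A and B under P -- the property the
% representation theorem by itself leaves unconstrained:
P(A \cap B) = P(A)\,P(B)
```

A preference condition "corresponding to" independence would be an axiom on the relation \(\succsim\) alone that forces the represented P to satisfy the product rule.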
We present two defeasible logics of norm-propositions (statements about norms) that (i) consistently allow for the possibility of normative gaps and normative conflicts, and (ii) map each premise set to a sufficiently rich consequence set. In order to meet (i), we define the logic LNP, a conflict- and gap-tolerant logic of norm-propositions capable of formalizing both normative conflicts and normative gaps within the object language. Next, we strengthen LNP within the adaptive logic framework for non-monotonic reasoning in order to meet (ii). This results in the adaptive logics LNPr and LNPm, which interpret a given set of premises in such a way that normative conflicts and normative gaps are avoided ‘whenever possible’. LNPr and LNPm are equipped with a preferential semantics and a dynamic proof theory.
Artificial intelligence is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
Do we rightly expect a perfectly loving God to bring it about that, right now, we reasonably believe that He exists? It seems so. For love at its best desires the well-being of the beloved, not from a distance, but up close, explicitly participating in her life in a personal fashion, allowing her to draw from that relationship what she may need to flourish. But why suppose that we would be significantly better off were God to engage in an explicit, personal relationship with us? Well, first, there would be broadly moral benefits. We would be able to draw on the resources of that relationship to overcome seemingly ever-present flaws in our character. And we would be more likely to emulate the self-giving love with which we were loved. So loved, we would be more likely to flourish as human beings. Secondly, there would be experiential benefits. We would be, for example, more likely to experience peace and joy stemming from the strong conviction that we were properly related to our Maker, security in suffering knowing that, ultimately, all shall be well, and there would be the sheer pleasure of God's loving presence. As a consequence of these moral and experiential benefits, our relationships with others would likely improve. Thirdly, to be personally related to God is intrinsically valuable, indeed, according to the Christian tradition, the greatest intrinsic good. In these ways our well-being would be enhanced if God were to relate personally to us. Moreover, the best love does not seek a personal relationship only for the sake of the beloved. As Robert Adams rightly notes, "It is an abuse of the word 'love' to say that one loves a person, or any other object, if one does not care, except instrumentally, about one's relation to that object."1 Thus, God would want a personal relationship with us not only for the benefit we would receive from it but for its own sake as well.
So, if a perfectly loving God exists, He wants a personal relationship with us or, more accurately, with every capable creature: those cognitively and affectively equipped to relate personally with Him.
Coordination is a key problem for addressing goal–action gaps in many human endeavors. We define interpersonal coordination as a type of communicative action characterized by low interpersonal belief conflict and goal conflict. Such situations are particularly well described as having collectively “intelligent”, “common good” solutions, viz., ones that almost everyone would agree constitute social improvements. Coordination is useful across the spectrum of interpersonal communication—from isolated individuals to organizational teams. Much attention has been paid to coordination in teams and organizations. In this paper we focus on the looser interpersonal structures we call active support networks (ASNs), and on technology that meets their needs. We describe two needfinding investigations focused on social support, which examined four application areas for improving coordination in ASNs: academic coaching, vocational training, early learning intervention, and volunteer coordination; and existing technology relevant to ASNs. We find a thus-far unmet need for personal task management software that allows smooth integration with an individual’s active support network. Based on identified needs, we then describe an open architecture for coordination that has been developed into working software. The design includes a set of capabilities we call “social prompting”, as well as templates for accomplishing multi-task goals, and an engine that controls coordination in the network. The resulting tool is currently available and in continuing development. We explain its use in ASNs with an example. Follow-up studies are underway in which the technology is being applied in existing support networks.
Many have expected that understanding the evolution of norms should, in some way, bear on our first-order normative outlook: how norms evolve should shape which norms we accept. But recent philosophy has not done much to shore up this expectation. Most existing discussions of evolution and norms either jump headlong into the is/ought gap or else target meta-ethical issues, such as the objectivity of norms. My aim in this paper is to sketch a different way in which evolutionary considerations can feed into normative thinking—focusing on stability. I discuss two forms of argument that utilize information about social stability drawn from evolutionary models and employ it to assess claims in political philosophy. One such argument treats stability as a feature of social states that may be taken into account alongside other features. The other uses stability as a constraint on the realization of social ideals, via a version of the ought-implies-can maxim. These forms of argument are not new; indeed, they have a history going back at least to early modern philosophy. But their marriage with evolutionary information is relatively recent, has a significantly novel character, and has received little attention in recent moral and political philosophy.
Vigorous debate over the moral propriety of cognitive enhancement exists, but the views of the public have been largely absent from the discussion. To address this gap in our knowledge, four experiments were carried out with contrastive vignettes in order to obtain quantitative data on public attitudes towards cognitive enhancement. The data collected suggest that the public is sensitive to and capable of understanding the four cardinal concerns identified by neuroethicists, and tends to cautiously accept cognitive enhancement even while recognizing its potential perils. The public is biopolitically moderate, endorses both meritocratic principles and the intrinsic value of hard work, and appears to be sensitive to the salient moral issues raised in the debate. Taken together, these data suggest that public attitudes toward enhancement are sufficiently sophisticated to merit inclusion in policy deliberations, especially if we seek to align public sentiment and policy.
This paper considers questions about continuity and discontinuity between life and mind. It begins by examining such questions from the perspective of the free energy principle (FEP). The FEP is becoming increasingly influential in neuroscience and cognitive science. It says that organisms act to maintain themselves in their expected biological and cognitive states, and that they can do so only by minimizing their free energy given that the long-term average of free energy is entropy. The paper then argues that there is no singular interpretation of the FEP for thinking about the relation between life and mind. Some FEP formulations express what we call an independence view of life and mind. One independence view is a cognitivist view of the FEP. It turns on information processing with semantic content, thus restricting the range of systems capable of exhibiting mentality. Other independence views exemplify what we call an overly generous non-cognitivist view of the FEP, and these appear to go in the opposite direction. That is, they imply that mentality is nearly everywhere. The paper proceeds to argue that non-cognitivist FEP, and its implications for thinking about the relation between life and mind, can be usefully constrained by key ideas in recent enactive approaches to cognitive science. We conclude that the most compelling account of the relationship between life and mind treats them as strongly continuous, and that this continuity is based on particular concepts of life (autopoiesis and adaptivity) and mind (basic and non-semantic).
Reductive intellectualists (e.g., Stanley & Williamson 2001; Stanley 2011a; 2011b; Brogaard 2008; 2009; 2011) hold that knowledge-how is a kind of knowledge-that. If this thesis is correct, then we should expect the defeasibility conditions for knowledge-how and knowledge-that to be uniform—viz., that the mechanisms of epistemic defeat which undermine propositional knowledge will be equally capable of imperilling knowledge-how. The goal of this paper is twofold: first, against intellectualism, we will show that knowledge-how is in fact resilient to being undermined by the very kinds of traditional (propositional) epistemic defeaters which clearly defeat the items of propositional knowledge which intellectualists identify with knowledge-how. Second, we aim to fill an important lacuna in the contemporary debate, which is to develop an alternative way in which epistemic defeat for knowledge-how could be modelled within an anti-intellectualist framework.
A naturalistic theory of intentionality is proposed that differs from previous evolutionary and tracking theories. Full-blown intentionality is constructed through a series of evolvable refinements. A first, minimal version of intentionality originates from a conjectured internal process that estimates an organism’s own fitness and that continually modifies the organism. This process produces the directedness of intentionality. The internal estimator can be parsed into intentional components that point to components of the process that produces fitness. It is argued that such intentional components can point to mistaken or non-existing entities. Different Fregean senses of the same reference correspond to different components that have different roles in the estimator. Intentional components that point to intentional components in other organisms produce directedness towards semi-abstract entities. Finally, adding a general, population-wide means of communication enables intentional components that point to fully abstract entities. Intentionality thus naturalized has all of its expected properties: being directed; potentially making errors; possibly pointing to non-existent, abstract, or rigid entities; capable of pointing many-to-one and one-to-many; distinguishing sense and reference; having perspective and grain; and having determinate content. Several examples, such as ‘swampman’ and ‘brain-in-a-vat’, illustrate how the theory can be applied.
Priority setting in health care is ubiquitous, and health authorities are increasingly recognising the need for priority setting guidelines to ensure efficient, fair, and equitable resource allocation. While cost-effectiveness concerns seem to dominate many policies, the tension between utilitarian and deontological concerns is salient to many, and various severity criteria appear to fill this gap. Severity, then, must be subjected to rigorous ethical and philosophical analysis. Here we first give a brief history of the path to today’s severity criteria in Norway and Sweden. The Scandinavian perspective on severity might be instructive for the international discussion, given severity’s long-standing use there as a priority setting criterion, even though the two countries have so far reached rather different conclusions. We then argue that severity can be viewed as a multidimensional concept, drawing on accounts of need, urgency, fairness, duty to save lives, and human dignity. Such concerns will often be relative to local mores, and the weighting placed on the various dimensions cannot be expected to be fixed. Thirdly, we present what we think are the most pertinent questions to answer about severity in order to facilitate decision making in the coming years of increased scarcity, and to further the understanding of underlying assumptions and values that go into these decisions. We conclude that severity is poorly understood, and that the topic needs substantial further inquiry; thus we hope this article may set a challenging and important research agenda.
Beginning with the Untimely Meditations (1873) and continuing until his final writings of 1888-9, Nietzsche refers to humility (Demuth or a cognate) in fifty-two passages and to modesty (Bescheidenheit or a cognate) in one hundred and four passages, yet there are only four passages that refer to both terms. Moreover, perhaps surprisingly, he often speaks positively of modesty, especially in epistemic contexts. These curious facts might be expected to lead scholars to explore what Nietzsche thinks of humility and modesty, but to date there have been no systematic analyses of Nietzsche’s reflections on these dispositions. In this chapter, I fill that gap in the literature using semantic network analysis and systematically guided close reading. In so doing, I show that Nietzsche sharply distinguished between humility and modesty, considering the former a vice (for certain types of people in certain contexts) and the latter a virtue (again, for certain types of people in certain contexts).
Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how does partial compliance and the consequent strategic behavior of decision subjects affect the allocation outcomes? If k% of employers were to voluntarily adopt a fairness-promoting intervention, should we expect k% progress (in aggregate) towards the benefits of universal adoption, or will the dynamics of partial compliance wash out the hoped-for benefits? How might adopting a global (versus local) perspective impact the conclusions of an auditor? In this paper, we propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics. Our key findings are that at equilibrium: (1) partial compliance by k% of employers can result in far less than proportional (k%) progress towards the full compliance outcomes; (2) the gap is more severe when fair employers match global (vs local) statistics; (3) choices of local vs global statistics can paint dramatically different pictures of the performance vis-a-vis fairness desiderata of compliant versus non-compliant employers; (4) partial compliance based on local parity measures can induce extreme segregation. Finally, we discuss implications for auditors and insights concerning the design of regulatory frameworks.
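The partial-compliance question can be made concrete with a toy market simulation. The sketch below is illustrative only: the hiring rule, score distributions, and parameters are hypothetical stand-ins, not the authors' model.

```python
import random

def simulate(num_employers, compliance_rate, applicants_per_group=50, seed=0):
    """Toy employment market. Each employer hires half of a pool drawn from
    two groups, A and B, where B's scores are shifted downward to stand in
    for a structural disadvantage. 'Compliant' employers enforce a local
    parity rule (equal hires from each group); the rest hire purely by score.
    Returns the overall share of hires that go to group B."""
    rng = random.Random(seed)
    hired_b = 0
    total_hired = 0
    for _ in range(num_employers):
        pool = [("A", rng.gauss(0.6, 0.1)) for _ in range(applicants_per_group)]
        pool += [("B", rng.gauss(0.4, 0.1)) for _ in range(applicants_per_group)]
        n_hire = applicants_per_group  # hire half the pool
        if rng.random() < compliance_rate:
            # parity rule: top scorers from each group, equal numbers
            top_a = sorted((s for g, s in pool if g == "A"), reverse=True)[: n_hire // 2]
            top_b = sorted((s for g, s in pool if g == "B"), reverse=True)[: n_hire // 2]
            hires = [("A", s) for s in top_a] + [("B", s) for s in top_b]
        else:
            # score-only rule: top scorers overall
            hires = sorted(pool, key=lambda p: p[1], reverse=True)[:n_hire]
        hired_b += sum(1 for g, _ in hires if g == "B")
        total_hired += len(hires)
    return hired_b / total_hired
```

Sweeping `compliance_rate` from 0 to 1 shows how B's aggregate hiring share moves between the score-only outcome and the parity outcome, which is the kind of "is k% compliance worth k% progress" question the paper studies (the full model adds strategic behavior by decision subjects, which this sketch omits).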
The debate on the epistemology of disagreement has so far focused almost exclusively on cases of disagreement between individual persons. Yet, many social epistemologists agree that at least certain kinds of groups are equally capable of having beliefs that are open to epistemic evaluation. If so, we should expect a comprehensive epistemology of disagreement to accommodate cases of disagreement between group agents, such as juries, governments, companies, and the like. However, this raises a number of fundamental questions concerning what it means for groups to be epistemic peers and to disagree with each other. In this paper, we explore what group peer disagreement amounts to given that we think of group belief in terms of List and Pettit’s ‘belief aggregation model’. We then discuss how the so-called ‘equal weight view’ of peer disagreement is best accommodated within this framework. The account that seems most promising to us says, roughly, that the parties to a group peer disagreement should adopt the belief that results from applying the most suitable belief aggregation function for the combined group to all members of the combined group. To motivate this view, we test it against various intuitive cases, derive some of its notable implications, and discuss how it relates to the equal weight view of individual peer disagreement.
Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolution. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor: robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; robots can assess situations more quickly and do so without emotion, reducing the likelihood of fatal mistakes due to human error; and sending robots to war protects our own soldiers from harm. However, these arguments only point in favor of autonomous weapons systems, failing to demonstrate why such systems need be made *lethal*. In this paper I argue that if one grants all of the proponents' points in favor of LAWS, then, contrary to what might be expected, this leads to the conclusion that it would be both immoral and illegal to deploy *lethal* autonomous weapons, because the many features that speak in favor of them also undermine the need for them to be programmed to take lives. In particular, I argue that such systems, if lethal, would violate the moral and legal principle of necessity, which forbids the use of weapons that impose superfluous injury or unnecessary harm. I conclude by highlighting that the argument is not against autonomous weapons per se, but only against *lethal* autonomous weapons.
The purpose of this paper is to defend Langsam’s Theory of Appearing (TA) against Djukic et al.’s critique. In strengthening Langsam’s defense of TA, I adopt some of Le Morvan's arguments in defense of Direct Realism. TA states that experiences are relations between material objects and minds, and that phenomenal features are appearances of relations held between material objects and minds. Djukic objects to TA on three grounds: Hallucination, the Causal Principle (CP), and the Time-Gap. First, Djukic objects to TA on the ground that perceptions and hallucinations are phenomenally indistinguishable, so the phenomenal features (or properties) instantiated in perception may not be relations either, and thus TA could fail. In defending TA, Langsam argues that indistinguishability does not entail that perception and hallucination instantiate the same appearance. Moreover, the disjunctivist conception of experience supports TA in that a phenomenal feature is either a relation between a material object and a mind or something else (as in cases of hallucination). I aim to show that the sense-data (or similar) theories of perception that Djukic favors as superior would fail Djukic's own scrutiny in cases of hallucination, in addition to running against common sense. The second objection is that perception and hallucination must have the same cause because they are indistinguishable, and CP requires that the same causes produce the same effects. But hallucination and perception are different experiences, and hence TA fails CP. In response to the CP objection, "same cause, same effect" applies only to intrinsic changes, and intrinsic changes are changes in intrinsic properties and relations between intrinsic properties. Third, TA is opposed because, given the Time-Gap, we cannot experience objects as they are (were) at the time of our perception.
TA defeats this objection because it does not claim that we can now experience the no-longer-existent object as it is now, but only that we can now experience the once-existent object as it used to be. Fourth, to further strengthen TA, I raise objections to TA from the vantage points of the Durability, Perceptual Relativity, Illusion, and Partial Perception arguments, and respond to these objections accordingly. To explicate TA, I argue from the vantage points of common sense, realistic physical and biological considerations, and non-miraculous expectations of any theory of perception, including TA.
Self-determination theory, like other psychological theories that study eudaimonia, focuses on general processes of growth and self-realization. An aspect that tends to be sidelined in the relevant literature is virtue. We propose that special focus needs to be placed on moral virtue and its development. We review different types of moral motivation and argue that morally virtuous behavior is regulated through integrated regulation. We describe the process of moral integration and how it relates to the development of moral virtue. We then discuss what morally virtuous individuals are like and what shape their internal moral system is expected to take, and introduce moral self-concordance. We consider why morally virtuous individuals are expected to experience eudaimonic well-being. Finally, we address the current gap in self-determination theory research on eudaimonia.
The Protein Ontology (PRO) provides terms for and supports annotation of species-specific protein complexes in an ontology framework that relates them both to their components and to species-independent families of complexes. Comprehensive curation of experimentally known forms and annotations thereof is expected to expose discrepancies, differences, and gaps in our knowledge. We have annotated the early events of innate immune signaling mediated by Toll-Like Receptor 3 and 4 complexes in human, mouse, and chicken. The resulting ontology and annotation data set has allowed us to identify species-specific gaps in experimental data and possible functional differences between species, and to employ inferred structural and functional relationships to suggest plausible resolutions of these discrepancies and gaps.
Values-based practice (VBP), developed as a partner theory to evidence-based medicine (EBM), takes into explicit consideration patients’ and clinicians’ values, preferences, concerns and expectations during the clinical encounter in order to make decisions about proper interventions. VBP takes seriously the importance of life narratives, as well as how such narratives fundamentally shape patients’ and clinicians’ values. It also helps to explain difficulties in the clinical encounter as conflicts of values. While we believe that VBP adds an important dimension to the clinician’s reasoning and decision-making procedures, we argue that it ignores the degree to which values can shift and change, especially in the case of psychiatric disorders. It does so in three respects. First, it does not appropriately engage with the fact that a person’s values can change dramatically in light of major life events. Second, it does not acknowledge certain changes in the way people value, or in their modes of valuing, that occur in cases of severe psychiatric disorder. And third, it does not acknowledge the fact that certain disorders can even alter the degree to which one is capable of valuing anything at all. We believe that ignoring such changes limits the degree to which VBP can be effectively applied to clinical treatment and care. We conclude by considering a number of possible remedies to this issue, including the use of proxies and written statements of value generated through interviews and discussions between patient and clinician.
Multiple authors; please see the attached paper. AI systems have seen dramatic advancement in recent years, bringing many applications that pervade our everyday life. However, we are still mostly seeing instances of narrow AI: many of these recent developments are typically focused on a very limited set of competencies and goals, e.g., image interpretation, natural language processing, classification, prediction, and many others. We argue that a better study of the mechanisms that allow humans to have these capabilities can help us understand how to imbue AI systems with these competencies. We focus especially on D. Kahneman’s theory of thinking fast and slow, and we propose a multi-agent AI architecture (called SOFAI, for SlOw and Fast AI) in which incoming problems are solved either by system-1 ("fast") agents, also called "solvers", which react by exploiting only past experience, or by system-2 ("slow") agents, which are deliberately activated when there is a need to reason and search for optimal solutions beyond what can be expected from the system-1 agent.
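The fast/slow dispatch the abstract describes can be caricatured in a few lines. Everything below (the class names, the confidence threshold, using a cache for system 1 and brute-force search for system 2) is an illustrative assumption of mine, not the SOFAI architecture's actual API:

```python
class FastSolver:
    """System-1-style agent: answers only from past experience (a cache)."""
    def __init__(self):
        self.memory = {}                     # problem -> (solution, confidence)

    def propose(self, problem):
        return self.memory.get(problem, (None, 0.0))

    def learn(self, problem, solution):
        self.memory[problem] = (solution, 1.0)

class SlowSolver:
    """System-2-style agent: deliberate search (here, brute force)."""
    def solve(self, problem):
        target, numbers = problem
        # Search for a pair summing to target; stands in for costly reasoning.
        for i, x in enumerate(numbers):
            for y in numbers[i + 1:]:
                if x + y == target:
                    return (x, y)
        return None

def metacognition(problem, fast, slow, threshold=0.8):
    """Route to the fast agent when its confidence clears the threshold;
    otherwise activate the slow agent and cache its answer for next time."""
    solution, confidence = fast.propose(problem)
    if confidence >= threshold:
        return solution, "system 1"
    solution = slow.solve(problem)
    fast.learn(problem, solution)
    return solution, "system 2"

fast, slow = FastSolver(), SlowSolver()
problem = (9, (2, 4, 5, 7))                  # find two numbers summing to 9
print(metacognition(problem, fast, slow))    # first encounter: system 2 solves
print(metacognition(problem, fast, slow))    # repeat: system 1 recalls
```

The design point the sketch tries to capture is that the "metacognition" layer, not the solvers themselves, decides when deliberate reasoning is worth its cost.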
The paper investigates whether it is plausible to hold the late-stage demented criminally responsible for past actions. The concern is based on the fact that policy makers in the United States and in Britain are starting to wonder what to do with prison inmates in the later stages of dementia who no longer remember their crimes. The problem can be expected to become more urgent as the population ages and the number of dementia patients increases. This paper argues that the late-stage demented should not be punished for past crimes. Applicable theories of punishment, especially theories with an appropriate expressivist or communicative element, fail to justify the imprisonment of the late-stage demented. Further imprisonment would require a capacity for comprehension on the part of the punished and, under certain narrowly specified conditions, even a capacity to be at least in principle able to recall the crime.
Current technology and surveillance practices make behaviors traceable to persons in unprecedented ways. This causes a loss of anonymity and of many privacy measures relied on in the past. These de facto privacy losses are seen by many as problematic for individual psychology, intimate relations and democratic practices such as free speech and free assembly. I share most of these concerns but propose that an even more fundamental problem might be that our very ability to act as autonomous and purposive agents relies on some degree of privacy, perhaps particularly as we act in public and semi-public spaces. I suggest that basic issues concerning action choices have been left largely unexplored, due to a series of problematic theoretical assumptions at the heart of privacy debates. One such assumption has to do with the influential conceptualization of privacy as pertaining to personal intimate facts belonging to a private sphere as opposed to a public sphere of public facts. As Helen Nissenbaum has pointed out, the notion of privacy in public sounds almost like an oxymoron given this traditional private-public dichotomy. I discuss her important attempt to defend privacy in public through her concept of ‘contextual integrity.’ Context is crucial, but Nissenbaum’s descriptive notion of existing norms seems to fall short of a solution. Here I agree with Joel Reidenberg’s recent worries regarding any approach that relies on ‘reasonable expectations.’ The problem is that in many current contexts we have no such expectations. Our contexts have already lost their integrity, so to speak. By way of a functional and more biologically inspired account, I analyze the relational and contextual dynamics of both privacy needs and harms. Through an understanding of action choice as situated, and of options and capabilities as relational, a more consequence-oriented notion of privacy begins to appear. I suggest that privacy needs, harms and protections are relational.
Privacy might have less to do with seclusion and absolute transactional control than hitherto thought. It might instead hinge on capacities to limit the social consequences of our actions through knowing and shaping our perceptible agency and social contexts of action. To act with intent we generally need the ability to conceal during exposure. If this analysis is correct, then relational privacy is an important condition for autonomous, purposive and responsible agency, particularly in public space. Overall, this chapter offers a first stab at a reconceptualization of our privacy needs as relational to contexts of action. In terms of ‘rights to privacy’ this means that we should expand our view from the regulation and protection of the information of individuals to questions about the kinds of contexts we are creating. I am here particularly interested in what I call ‘unbounded contexts’, i.e. cases of context collapse, hidden audiences and even unknowable future agents.
The EDPS Ethics Advisory Group (EAG) has carried out its work against the backdrop of two significant socio-political moments: a growing interest in ethical issues, both in the public and in the private spheres, and the imminent entry into force of the General Data Protection Regulation (GDPR) in May 2018. For some, this may nourish a perception that the work of the EAG represents a challenge to data protection professionals, particularly to lawyers in the field, as well as to companies struggling to adapt their processes and routines to the requirements of the GDPR. What is the purpose of a report on digital ethics, if the GDPR already provides all regulatory requirements to protect European citizens with regard to the processing of their personal data? Does the existence of this EAG mean that a new normative ethics of data protection will be expected to fill regulatory gaps in data protection law with more flexible, and thus less easily enforceable, ethical rules? Does the work of the EAG signal a weakening of the foundations of legal doctrine, such as the rule of law, the theory of justice, or the fundamental values supporting human rights, and a strengthening of a more cultural approach to data protection? Not at all. The reflections of the EAG contained in this report are not intended as the continuation of policy by other means. The report neither supersedes nor supplements the law or the work of legal practitioners. Its aims and means are different. On the one hand, it seeks to map and analyse current and future paradigm shifts, which are characterised by a general shift from an analogue experience of human life to a digital one. On the other hand, and in light of this shift, it seeks to re-evaluate our understanding of the fundamental values most crucial to the well-being of people, those taken for granted in a data-driven society and those most at risk.
The objective of this report is thus not to generate definitive answers, nor to articulate new norms for present and future digital societies, but to identify and describe the most crucial questions for the urgent conversation to come. This requires a conversation between legislators and data protection experts, but also society at large, because the issues identified in this report concern us all, not only as citizens but also as individuals. They concern us in our daily lives, whether at home or at work, and there is no place we could travel to where they would cease to concern us as members of the human species.
In this article, instead of taking a particular method as translation, we ask: what does one expect to do with a translation? The answer to this question reveals that none of the first-order methods is capable of fully representing the required transference of ontological commitments. Lastly, we show that this view of translation considerably enlarges the scope of translatable, and therefore ontologically comparable, theories.
In their book Evaluating Critical Thinking, Stephen Norris and Robert Ennis say: “Although it is tempting to think that certain [unstated] assumptions are logically necessary for an argument or position, they are not. So do not ask for them.” Numerous writers of introductory logic texts, as well as various highly visible standardized tests (e.g., the LSAT and GRE), presume that the Norris/Ennis view is wrong; the presumption is that many arguments have (unstated) necessary assumptions and that readers and test takers can reasonably be expected to identify such assumptions. This paper proposes and defends criteria for determining the necessary assumptions of arguments. Both theoretical and empirical considerations are brought to bear.
There has been much discussion on the two-envelope paradox. Clark and Shackel (2000) have proposed a solution to the paradox, which has been refuted by Meacham and Weisberg (2003). Surprisingly, however, the literature still contains no axiomatic justification for the claim that one should be indifferent between the two envelopes before opening one of them. According to Meacham and Weisberg, "decision theory does not rank swapping against sticking [before opening any envelope]" (p. 686). To fill this gap in the literature, we present a simple axiomatic justification for indifference, avoiding any expectation reasoning, which is often considered problematic in infinite cases. Although the two-envelope paradox assumes an expectation-maximizing agent, we show that analogous paradoxes arise for agents using different decision principles such as maximin and maximax, and that our justification for indifference before opening applies here too.
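The indifference claim the authors axiomatize can at least be checked numerically for a bounded prior. The sketch below is my own illustration, not the paper's argument: one envelope holds x, the other 2x, the agent is handed one at random, and sticking and swapping turn out to have the same expected payoff.

```python
import random

def play(strategy, trials=100_000, seed=1):
    """Simulate the two-envelope setup with a bounded prior on the smaller
    amount x: one envelope holds x, the other 2x, and the agent is handed
    one of them uniformly at random. 'stick' keeps the envelope in hand;
    'swap' takes the other. Returns the average payoff."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = rng.randint(1, 100)              # bounded prior over amounts
        envelopes = [x, 2 * x]
        pick = rng.randrange(2)
        total += envelopes[1 - pick] if strategy == "swap" else envelopes[pick]
    return total / trials

# Both strategies average 1.5 * E[x] = 75.75 here, matching indifference.
print(play("stick"), play("swap"))
```

The paradoxical "expected 5/4 gain from swapping" reasoning fails here because it conditions on the observed amount while ignoring the prior over x; with unbounded priors the relevant expectations can diverge, which is one reason the paper's justification avoids expectation reasoning altogether.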
The purpose of this article is to fill an interpretive gap in L. Wittgenstein’s Tractatus Logico-Philosophicus concerning something overlooked by most scholars of the Austrian philosopher: the possible influences he absorbed during his Mechanical Engineering studies, which are reflected directly in his philosophy, especially those arising from the field of Physics. Owing to the length restrictions of a scientific article, it will not be possible to present here what we believe to be the influences of L. Boltzmann’s thought on Wittgenstein's Tractatus, which will remain for future work. However, we present the influences of H. Hertz’s The Principles of Mechanics on at least three fundamental themes of Wittgenstein’s Tractatus: on the ontological formalism of Tractarian objects, on the picture theory of language, and on the conception of science in that work. It is expected that such clarifications will support a new and important understanding of this seminal work of the 20th century, this time from the perspective of the relationship between Philosophy and Physics in Wittgenstein.
The issue of justice after catastrophe poses an enormous challenge to contemporary theories of distributive justice. In the past three decades, the controversy over distributive justice has centered on the ideal of equality. One intensely debated issue concerns what is often called the “equality of what,” on which there are three primary views: welfarism, resourcism, and the capabilities approach. Another major point of dispute can be termed the “equality or another,” over which three positions contend: egalitarianism, prioritarianism, and sufficientarianism. On these topics of distributive justice, authors are concerned with the current difference between the better-off and the worse-off or the present situation of the badly-off. By contrast, it is essential to take account of the past distribution of well-being as well as the present situation in order to explore questions of post-catastrophe justice. Without looking at the pre-disaster distribution of income, preference satisfaction, or basic capabilities among affected people, no present assessment of the damage caused by the disaster could be correct and no proposed remedy adequate. It is true that luck egalitarians assess the current distribution among people by referring to the decisions that each individual made. Yet they pay scant attention to the situations people were in in the past. Therefore, we can legitimately say that most theorists of distributive justice, including luck egalitarians, have failed to give consideration to the past state of each person. To fill this gap in the literature, the present article explores philosophical questions that arise when we take account of each person’s past and present situations in discussing distributive justice regarding public compensation and assistance to survivors and families of victims of natural and industrial disasters.
In addressing these novel questions, I develop and refine various concepts, ideas, and arguments that have been presented in the study of distributive justice in normal settings. I tackle two tasks, the first of which is to explore the foundation and scope of luck egalitarianism. Despite the moral appeal it has in many cases, luck egalitarianism has attracted the so-called harshness objection. Some luck egalitarians attempt to avoid this objection in a pragmatic way by combining the luck egalitarian doctrine with the principle of basic needs satisfaction. However, they do not provide any systematic rationale for this combination. In contrast with such pragmatic responses, I seek to offer a principled argument for holding individuals responsible for their choices only when their basic needs are met, by invoking the ideas of respect for human voluntariness and rescue of human vulnerability. Based on this argument, I propose a form of responsibility-sensitive theory, which takes the pre-disaster distribution of well-being as a default position. The second task I take on is to refine sufficientarianism in the context of post-catastrophe justice. Luck egalitarianism with boundaries set by the basic needs principle seems to point toward sufficientarianism. But major proponents of this view embrace the welfarist assumption, a considerably high standard of well-being, and a controversial treatment of persons below the threshold, all of which seem problematic in the post-disaster situation. I try to construct a new version of sufficientarianism by replacing these features with more robust ones.
The INBIOSA project brings together a group of experts across many disciplines who believe that science requires a revolutionary transformative step in order to address many of the vexing challenges presented by the world. It is INBIOSA’s purpose to enable the focused collaboration of an interdisciplinary community of original thinkers. This paper sets out the case for support for this effort. The focus of the transformative research program proposal is biology-centric. We admit that biology to date has been more fact-oriented and less theoretical than physics. However, the key leverageable idea is that careful extension of the science of living systems can be more effectively applied to some of our most vexing modern problems than the prevailing scheme, derived from abstractions in physics. While these have some universal application and demonstrate computational advantages, they are not theoretically mandated for the living. A new set of mathematical abstractions derived from biology can now be similarly extended. This is made possible by leveraging new formal tools to understand abstraction and enable computability. [The latter has a much expanded meaning in our context from the one known and used in computer science and biology today, that is, "by rote algorithmic means", since it is not known whether a living system is computable in this sense (Mossio et al., 2009).] Two major challenges constitute the effort. The first challenge is to design an original general system of abstractions within the biological domain. The initial issue is descriptive, leading to the explanatory. There has not yet been a serious formal examination of the abstractions of the biological domain. What is used today is an amalgam; much is inherited from physics (via the bridging abstractions of chemistry) and there are many new abstractions from advances in mathematics (incentivized by the need for more capable computational analyses).
Interspersed are abstractions, concepts and underlying assumptions “native” to biology and distinct from the mechanical language of physics and computation as we know them. A pressing agenda should be to single out the most concrete and at the same time the most fundamental process-units in biology and to recruit them into the descriptive domain. Therefore, the first challenge is to build a coherent formal system of abstractions and operations that is truly native to living systems. Nothing will be thrown away, but many common methods will be philosophically recast, just as in physics relativity subsumed and reinterpreted Newtonian mechanics. This step is required because we need a comprehensible, formal system to apply in many domains. Emphasis should be placed on the distinction between multi-perspective analysis and synthesis and on what could be the basic terms or tools needed. The second challenge is relatively simple: the actual application of this set of biology-centric ways and means to cross-disciplinary problems. In its early stages, this will seem to be a “new science”. This White Paper sets out the case for continuing support of Information and Communication Technology (ICT) for transformative research in biology and information processing centered on paradigm changes in the epistemological, ontological, mathematical and computational bases of the science of living systems. Today, curiously, living systems cannot be said to be anything more than dissipative structures organized internally by genetic information. There is not anything substantially different from abiotic systems other than the empirical nature of their robustness. We believe that there are other new and unique properties and patterns comprehensible at this bio-logical level. The report lays out a fundamental set of approaches to articulate these properties and patterns, and is composed as follows.
Sections 1 through 4 (preamble, introduction, motivation and major biomathematical problems) are introductory. Section 5 describes the issues affecting Integral Biomathics, and Section 6 the aspects of the Grand Challenge we face with this project. Section 7 contemplates the effort to formalize a General Theory of Living Systems (GTLS) from what we have today. The goal is to have a formal system equivalent to that which exists in the physics community. Here we define how to perceive the role of time in biology. Section 8 describes the initial efforts to apply this general theory of living systems in many domains, with special emphasis on cross-disciplinary problems and multiple domains spanning both “hard” and “soft” sciences. The expected result is a coherent collection of integrated mathematical techniques. Section 9 discusses the first two test cases, project proposals, of our approach. They are designed to demonstrate the ability of our approach to address “wicked problems” that span physics, chemistry, biology, societies and societal dynamics. The solutions require integrated, measurable results at multiple levels, known as “grand challenges” to existing methods. Finally, Section 10 issues an appeal for action, advocating further long-term support of the INBIOSA program. The report concludes with a preliminary, non-exclusive list of challenging research themes to address, as well as required administrative actions. The efforts described in the ten sections of this White Paper will proceed concurrently. Collectively, they describe a program that can be managed and measured as it progresses.
Understanding cooperative human behaviour depends on insights into the biological basis of human altruism, as well as into socio-cultural development. In terms of evolutionary theory, kinship and reciprocity are well established as underlying cooperativeness. Reasons will be given suggesting an additional source: the capability for a cognition-based empathy that may have evolved as a by-product of strategic thought. An assessment of the range, the intrinsic limitations, and the conditions for activation of human cooperativeness would profit from a systems approach combining biological and socio-cultural aspects. However, this is not yet the prevailing attitude among contemporary social and biological scientists, who often hold prejudiced views of each other's notions. It is therefore worth noticing that the desirable integration of aspects has already been attempted, in remarkable and encouraging ways, in the history of thought on human nature. I will exemplify this with the ideas of the fourteenth-century Arab-Muslim historian Ibn Khaldun. He set out to explicate human cooperativeness, "asabiyah", as having a biological basis in common descent, but as being extendable far beyond it within social systems, though in a relatively unstable and attenuated fashion. He combined psychological and material factors in a dynamical theory of the rise and decline of political rulership, and related general social phenomena to basic features of human behaviour influenced by kinship, expectation of reciprocity, and empathic emotions.
This article explores the potential for “trust in technology” to make a productive conceptual contribution to the ethical evaluation of technology, complementing the concepts of “acceptance” and “acceptability” already established in technology assessment. It shows that for digital technologies in particular, “trust” can better address aspects of security against attacks, as it allows concepts of IT security to be integrated. Furthermore, “trustworthy technology” allows for a better inclusion of lay perspectives, since rationally justified trust, in the sense of risk expectations, can be mediated interpersonally by experts. Especially for the evaluation of digital technologies, “trust in technology” can thus bridge a conceptual gap between acceptance and acceptability.
This book studies the technognomies of memory in scripto, as in texts, lists, dictionaries and databases, and less the technognomies of memory in vivo (as in remembering). There are of course some relations between these two kinds of memory, memory-in-scripto being a development parallel to the development of written language. We notice that historical presentation is built upon both forms of memory. We notice that historical explanation is tied to the concrete experience of persons belonging to a culture. In the history of memory, then, it is necessary to distinguish two important aspects: the development of spoken memory and the development of written memory. The essential characteristic of written memory is its muteness. Muteness is also associated with spatiality and with stability. On the other hand, audial presentations are inseparable from the notions of movement and the passing of time. According to Don Ihde, the spheres of the invisible and the silent limit the spheres of the visual and the audial. These two spheres overlap partially in visual presentations that are also audial presentations; however, their natural being is to be independent of each other. [Ihde, Don. Listening and Voice: Phenomenologies of Sound; State University Press; 2007; pp. 50-51.] In our time, which is also the time of the globalization and digitalization of culture, a new philosophical paradigm is emerging, characterized by the fragmentation of experience. This fragmentation does not allow an overview of the totality of a field of experience, which can only be reduced to singular analytical moments. The fragmentation of experience is the result of a new leap in the capability for concretion. Inside this new paradigm, the world becomes multistable. The multistability of the world creates a gap between intention and implementation that distinguishes “full” history as Natural history from “broken” history as Cultural history.
The analysis focuses on the passage of Gorgias (506c–507c) in which Plato’s Socrates holds a dialogue with himself. Socrates is talking to someone who, better than any other partner in discussion, is capable of discerning the truth; this is an extraordinary way for Plato to express philosophical views. It suggests that in this passage Plato is considering questions of primary importance. There are also other signs, both in the structure of the text and in the comments made by Socrates regarding the form of the discussion, which confirm that in this passage we are dealing with the most important matters. The analyzed passage concerns justice. First, Plato summarizes the views on justice of soul developed in the earlier parts of Gorgias. Then he talks about justice of deeds as consisting in doing what is fitting as regards men. This is an addition to the earlier discussion, and the expression of this view can therefore be recognized as an important reason why Socrates’ dialogue with himself was constructed. Attention is paid to a certain gap in the argumentation. At first glance it is not clear why the thesis that justice of soul is based on order and harmony among its elements justifies the thesis that justice of deeds consists in doing what is fitting as regards men. This gap can be filled in by Plato’s so-called unwritten teaching on the good in itself. According to this teaching, the good in itself is the one (the unity) which from its very nature gives existence and life by doing what is fitting, in the sense of being advantageous to something. Order and harmony in the soul are the basis of its unity, and therefore of its similarity to the good in itself and of the similarity of the soul’s activity to the way the good acts. Acting justly, doing what is fitting in the sense of being advantageous to others, is the best thing a man or a woman can do; it is something which fulfills a person and makes him or her happy.
Wisdom, and knowledge generally speaking, is clearly regarded as a means and not as an aim in itself. An individual, and not an abstract idea, is the primary object of the activity which fulfills a rational being.
Intellectual humility can be broadly construed as being conscious of the limits of one’s existing knowledge and capable of acquiring more knowledge, which makes it a key virtue of the information age. However, the claim “I am (intellectually) humble” seems paradoxical in that someone who has the disposition in question would not typically volunteer it. There is an explanatory gap between the meaning of the sentence and the meaning the speaker expresses by uttering it. We therefore suggest analyzing intellectual humility semantically, using a psycholexical approach that focuses on both synonyms and antonyms of ‘intellectual humility’. We present a thesaurus-based methodology to map the semantic space of intellectual humility and the vices it opposes as a heuristic to support philosophical and psychological analysis of this disposition. We performed the mapping in both English and German in order to test for possible cultural differences in the understanding of intellectual humility. In both languages, we find basically the same three semantic dimensions of intellectual humility (sensibility, discreetness, and knowledge dimensions) as well as three dimensions of its related vices (self-overrating, other-underrating and dogmatism dimensions). The resulting semantic clusters were validated in an empirical study with English (n=276) and German (n=406) participants. We find medium to high correlations (0.54–0.72) between thesaurus similarity and perceived similarity, and we can validate the labels of the three dimensions identified in the study. But we also find indications of the limitations of the thesaurus methodology in terms of cluster plausibility. We conclude by discussing the importance of these findings for constructing psychometric scales for intellectual humility.
Over the past few years we have seen an increasing number of legal proceedings related to inappropriately implemented technology. At the same time, career paths have diverged from the foundation of statistics out to Data Scientist, Machine Learning and AI, all of these new branches being fundamentally branches of statistics and mathematics. This has meant that formal training has struggled to keep up with what is required in the plethora of new roles. Mathematics as a taught subject is still based on decades-old teaching specifications and has not been updated centrally as a curriculum to include new technologies, coding or ethics. The subject area is firmly split between ICT and Mathematics in secondary school, continuing on to be split between Computer Science and Mathematics at university. As we move forwards with technology, we see these once separate fields becoming increasingly intertwined. We propose that provision for concepts such as ethics and societal responsibility in analysis currently exists but has not been incorporated into the mainstream curriculum of school or university. This is due partly to the split between fields in an educational setting, and partly to the speed with which education is able to keep up with industry and its requirements. Introducing principles and frameworks of socially responsible modelling at school level means that ethics and real-life modelling are encountered much earlier than normal. Integrating these concepts with philosophical principles of society and politics ensures a suitable background for future modellers and users of technology to draw on. Modelling is currently undertaken in technical sciences at university, but the Subject Benchmark Statements are not current (Subject Benchmark Statements describe the nature of study and the academic standards expected of graduates in specific subject areas; they show what graduates might reasonably be expected to know, do and understand at the end of their studies).
Even in 2019 the UK did not yet have Benchmark Statements discussing the learning to be done in higher education around AI and advanced machine learning. Where there is provision for AI and Data Science within degree courses, ethics is not generally highlighted as a key concept. As such there can be a lack of focus on the teaching of modelling or ethics as specific skills. The skills required to use a basic statistical model, for example, would not be sufficient to build from scratch an ethical model reflecting real-world scenarios with which to inform policy or organisational decision making. This is a skill in itself and includes such aspects as awareness of data quality, ethics, user implementation problems, context and an understanding of the environment in which the model is being created. Because the field of analytics has progressed so quickly in the last decade, modules such as those covering AI have simply been bolted on to Maths or Computing degrees rather than being fully detailed as key areas of study or driving new, more integrated courses of study. This paper posits that gaps at primary, secondary and tertiary educational levels need to be addressed. Implementing and integrating key concepts from school level is essential so that areas such as assumptions, caveats, quality assurance and answering the right questions with constructive challenge become a cultural fixture. This helps not only developers of technology but also users of rapidly developing technology. In addition, leadership and soft skills as part of this education will ensure that a cultural shift can take place and promote continuous improvement in analysis within organisations. The addition of key concepts throughout the educational system and the updating of potentially outdated curricula is key to ensuring a functional society: one where every citizen is a user of technology, and where those who develop it can be ethical and socially responsible in its development.
The hard problem of consciousness arises in most incarnations of present-day physicalism. Why should certain physical processes necessarily be accompanied by experience? One possible response is that physicalism itself should be modified in order to accommodate experience: but modified how? In the present work, we investigate whether an ontology derived from quantum field theory can help resolve the hard problem. We begin with the assumption that experience cannot exist without being accompanied by a subject of experience (SoE). While people well versed in Indian philosophy will not find that statement problematic, it is still controversial in the analytic tradition. Luckily for us, Strawson has elaborately defended the notion of a thin subject—an SoE which exhibits a phenomenal unity with different types of content (sensations, thoughts etc.) occurring during its temporal existence. Next, following Stoljar, we invoke our ignorance of the true physical as the reason for the explanatory gap between present-day physical processes (events, properties) and experience. We are therefore permitted to conceive of thin subjects as related to the physical via a new, yet to be elaborated relation. While this is difficult to conceive under most varieties of classical physics, we argue that this may not be the case under certain quantum field theory ontologies. We suggest that the relation binding an SoE to the physical is akin to the relation between a particle and a (quantum) field. In quantum field theory, a particle is conceived as a coherent excitation of a field. Under the right set of circumstances, a particle coalesces out of a field and dissipates. We suggest that an SoE can be conceived as akin to a particle—a selfon—which coalesces out of physical fields, persists for a brief period of time and then dissipates, in a manner similar to the phenomenology of a thin subject.
Experiences are physical properties of selfons, with the constraint (specified by a similarity metric) that selfons belonging to the same natural kind will have similar experiences. While it is odd at first glance to conceive of subjects of experience as akin to particles, the spatial and temporal unity exhibited by particles as opposed to fields, together with the expectation that selfons are new kinds of particles, paves the way for cementing this notion. Next, we detail the various no-go theorems in most versions of quantum field theory and discuss their impact on the existence of selfons. Finally, we argue that the time is ripe for a rejuvenated Indian philosophy to begin tackling the three-way relationship between SoEs (which may become equivalent to jivas in certain Indian frameworks), phenomenal content and the physical world. With analytic philosophy still struggling to come to terms with the complex worlds of quantum field theory, and with the relative inexperience of the Western world in arguing the jiva–world relation, there is a clear and present opportunity for Indian philosophy to make a world-centric contribution to the hard problem of experience.