Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
It is often claimed that epistemic bubbles and echo chambers foster post-truth by filtering our access to information and manipulating our epistemic attitudes. In this paper, I try to add a further level of analysis by adding the issue of belief formation. Building on work in cognitive psychology, I argue for a dual-system theory according to which beliefs derive from a default system and a critical system. One produces beliefs in a quasi-automatic, effortless way, the other in a slow, effortful way. I also argue that digital socio-epistemic environments tend to inculcate disvalues in their agents' epistemic identities, a process that causes the cognitive short circuits typical of conspiracy theories.
Social epistemologists should be well-equipped to explain and evaluate the growing vulnerabilities associated with filter bubbles, echo chambers, and group polarization in social media. However, almost all social epistemology has been built for social contexts that involve merely a speaker-hearer dyad. Filter bubbles, echo chambers, and group polarization all presuppose much larger and more complex network structures. In this paper, we lay the groundwork for a properly social epistemology that gives the role and structure of networks their due. In particular, we formally define epistemic constructs that quantify the structural epistemic position of each node within an interconnected network. We argue for the epistemic value of a structure that we call the (m,k)-observer. We then present empirical evidence that (m,k)-observers are rare in social media discussions of controversial topics, which suggests that people suffer from serious problems of epistemic vulnerability. We conclude by arguing that social epistemologists and computer scientists should work together to develop minimal interventions that improve the structure of epistemic networks.
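The abstract leaves the formal definition of an (m,k)-observer to the paper itself. Purely as an illustrative sketch, suppose a node counts as an (m,k)-observer when it receives information from at least m sources spanning at least k distinct communities; this reading, and all names and data below, are assumptions for illustration, not the paper's definition:

```python
# Illustrative sketch only: we assume (this is not the paper's formal
# definition) that a node is an (m,k)-observer when it has at least m
# information sources drawn from at least k distinct communities.

def is_mk_observer(node, neighbors, community, m, k):
    """neighbors: dict mapping node -> set of source nodes;
    community: dict mapping source node -> community label."""
    sources = neighbors.get(node, set())
    if len(sources) < m:
        return False
    return len({community[s] for s in sources}) >= k

# Toy network: node "a" hears from three sources in two communities.
neighbors = {"a": {"b", "c", "d"}, "b": set()}
community = {"b": 1, "c": 1, "d": 2}

print(is_mk_observer("a", neighbors, community, m=3, k=2))  # True
print(is_mk_observer("a", neighbors, community, m=3, k=3))  # False
```

On this toy reading, the rarity result reported in the abstract would amount to most real nodes failing the check for even modest m and k.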
The internet has become a ubiquitous epistemic source. However, it comes with several drawbacks. For instance, the world wide web seems to foster filter bubbles and echo chambers and includes search results that promote bias and spread misinformation. Richard Heersmink suggests online intellectual virtues to combat these epistemically detrimental effects. These are general epistemic virtues applied to the online environment based on our background knowledge of this online environment. I argue that these online intellectual virtues also demand a particular view of cognitive integration. Online intellectual virtues are incompatible with a popular conception of extended minds proposed by Andy Clark and David Chalmers. I suggest that if we want to hold on to both a conception of online intellectual virtues and some conception of the extended mind, we have to accept a more gradual theory of cognitive integration along the lines of second-wave theories of the extended mind.
Many scholars agree that the Internet plays a pivotal role in self-radicalization, which can lead to behaviours ranging from lone-wolf terrorism to participation in white nationalist rallies to mundane bigotry and voting for extremist candidates. However, the mechanisms by which the Internet facilitates self-radicalization are disputed; some fault the individuals who end up self-radicalized, while others lay the blame on the technology itself. In this paper, we explore the role played by technological design decisions in online self-radicalization in its myriad guises, encompassing extreme as well as more mundane forms. We begin by characterizing the phenomenon of technological seduction. Next, we distinguish between top-down seduction and bottom-up seduction. We then situate both forms of technological seduction within the theoretical model of dynamical systems theory. We conclude by articulating strategies for combatting online self-radicalization.
The fact that Internet companies may record our personal data and track our online behavior for commercial or political purposes has emphasized aspects related to online privacy. This has also led to the development of search engines that promise no tracking and privacy. Search engines also have a major role in spreading low-quality health information such as that of anti-vaccine websites. This study investigates the relationship between search engines' approach to privacy and the scientific quality of the information they return. We analyzed the first 30 webpages returned searching "vaccines autism" in English, Spanish, Italian, and French. The results show that not only "alternative" search engines but also other commercial engines often return more anti-vaccine pages (10–53%) than Google (0%). Some localized versions of Google, however, returned more anti-vaccine webpages (up to 10%) than Google.com. Health information returned by search engines has an impact on public health and, specifically, on the acceptance of vaccines. The issue of information quality when seeking information for making health-related decisions also impacts the ethical aspect represented by the right to informed consent. Our study suggests that designing a search engine that is privacy-savvy and avoids the filter bubbles that can result from user-tracking is necessary but insufficient; mechanisms should be developed to test search engines from the perspective of information quality (particularly for health-related webpages) before they can be deemed trustworthy providers of public health information.
“Filter bubble”, “echo chambers”, “information diet” – the metaphors to describe today’s information dynamics on social media platforms are fairly diverse. People use them to describe the impact of the viral spread of fake, biased or purposeless content online, as witnessed during the recent race for the US presidency or the latest outbreak of the Ebola virus (in the latter case a tasteless racist meme was drowning out any meaningful content). This unravels the potential envisioned to arise from emergent activities of human collectives on the World Wide Web, as exemplified by the Arab Spring mass movements or digital disaster response supported by the Ushahidi tool suite.
This paper will identify three central dialectics within cloud services. These constitute defining positions regarding the nature of cloud services in terms of privacy, ethical responsibility, technical architecture and economics, and they form the main frameworks within which ethical discussions of cloud services occur. The first dialectic concerns the question of whether it is essential that personal privacy be reduced in order to deliver personalised cloud services. I shall evaluate the main arguments in favour of the view that it is. To contrast this, I shall review Langheinrich’s Principles of Privacy-Aware Ubiquitous Systems [24]. This offers a design strategy which maintains functionality while embedding privacy protection into the architecture and operation of cloud services. The second dialectic is concerned with the degree to which people who design or operate cloud services are ethically responsible for the consequences of the actions of those systems, sometimes known as the “responsibility gap.” I shall briefly review two papers which argue that no one is ethically responsible for such software, then contrast them with two papers which make strong arguments for responsibility. I shall show how claims for no responsibility rest on very narrow definitions of responsibility combined with questionable conceptions of technology itself. The third dialectic is the tension between open and closed systems that dominates the current shape of cloud services. I shall show how this is reflected in architecture, standards and organisational models. I will then examine alternatives to the current state of affairs, including recent developments in support of alternative business models at government level, such as the House of Lords call for the Internet to be treated as a public utility (The Select Committee on Digital Skills, 2015).
In this chapter we argue that emotions are mediated in an incomplete way in online social media because of the heavy reliance on textual messages, which fosters a rationalistic bias and an inclination towards less nuanced emotional expressions. This incompleteness can take the form of obscuring emotions, showing less than the original intensity, misinterpreting emotions, or eliciting emotions without feedback and context. Online interactions and deliberations tend to contribute to rather than overcome stalemates and informational bubbles, partly due to the prevalence of anti-social emotions. It is tempting to see emotions as the cause of the problem of online verbal aggression and bullying. However, we argue that social media are actually designed in a predominantly rationalistic way, because of the reliance on text-based communication, thereby filtering out social emotions and leaving space for easily expressed antisocial emotions. Based on research on emotions that sees these as key ingredients of moral interaction and deliberation, as well as on research on text-based versus non-verbal communication, we propose a richer understanding of emotions, requiring different designs of online deliberation platforms. We propose that such designs should move away from text-centred designs and should find ways to incorporate the complete expression of the full range of human emotions so that these can play a constructive role in online deliberations.
The use of Quality-Adjusted Life Years (QALYs) in healthcare allocation has been criticized as discriminatory against people with disabilities. This article considers a response to this criticism from Nick Beckstead and Toby Ord. They say that even if QALYs are discriminatory, attempting to avoid discrimination – when coupled with other central principles that an allocation system should favour – sometimes leads to irrationality in the form of cyclic preferences. I suggest that while Beckstead and Ord have identified a problem, it is a misdiagnosis to lay it at the feet of an anti-discrimination principle. The problem in fact comes from a basic tension between respecting reasonable patient preferences and other ways of ranking treatment options. As such, adopting a QALY system does not solve the problem they identify.
I first distinguish four types of objection to paternalism and argue that only one – the principled objection – amounts to a substantive and distinct normative doctrine. I then argue that this doctrine should be understood as preventing certain facts from playing the role of reasons they would otherwise play. I explain how this filter approach makes antipaternalism independent of several philosophical controversies: on the role reasons play, on what reasons there are, and on how reasons are related to values. I go on to contrast the filter approach with the competing and dominant action-focused approach, which understands objections to paternalism in terms of paternalistic action, behavior, law, policy and the like. Seana Shiffrin and Peter de Marneffe are singled out as prominent recent proponents of this approach. By engaging with their definitions of paternalism, I explain how the action-focused approach makes antipaternalism dependent on the sorting of actions into paternalistic and nonpaternalistic according to what reasons support them. Because one and the same action can be supported by many different reasons, and by different sorts of reasons, such sorting is very difficult. The upshot is that antipaternalism on the action-focused account fails to provide the precise normative implications of the filter approach that I favor.
The purpose of this paper is to suggest that we are in the midst of a Cantorian bubble, just as, for example, there was a dot-com bubble in the late 1990s.
Epistemic closure refers to the assumption that humans are able to recognize what entails or contradicts what they believe and know, or more accurately, that humans' epistemic states are closed under logical inference. Epistemic closure is part of a larger theory-of-mind ability, which is arguably crucial for downstream NLU tasks such as inference, QA and conversation. In this project, we introduce a new automatically constructed natural language inference dataset that tests inferences related to epistemic closure. We test and further fine-tune the model RoBERTa-large-mnli on the new dataset, with limited positive results.
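The abstract does not reproduce the dataset's templates. As a hypothetical illustration of what an automatically constructed epistemic-closure NLI item might look like (the template, names, and label scheme below are invented here, not the project's actual data), consider closure under conjunction elimination:

```python
# Invented illustration (not the project's actual templates): generate
# NLI items testing closure under conjunction elimination -- if S
# believes "P and Q", closure predicts S also believes "P" (and "Q").

def make_closure_items(subject, p, q):
    premise = f"{subject} believes that {p} and {q}."
    return [
        {"premise": premise,
         "hypothesis": f"{subject} believes that {p}.",
         "label": "entailment"},
        {"premise": premise,
         "hypothesis": f"{subject} believes that {q}.",
         "label": "entailment"},
        {"premise": premise,
         "hypothesis": f"{subject} believes that neither {p} nor {q}.",
         "label": "contradiction"},
    ]

items = make_closure_items("Ada", "it is raining", "the match is cancelled")
print(len(items))  # 3
```

Template generation of this kind is what makes the dataset "automatically constructed": each (subject, P, Q) triple yields several labeled premise-hypothesis pairs.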
The idea of dreams being mere internal artifacts of the mind does not seem to be essential to externalism and extended mind theories, which seem as if they would function as well without this additional assumption. The Many Bubble Interpretation could provide a simpler rationale for externalist theories, one that may be even simpler if the assumption that dreams have no worthwhile content outside the mind is omitted.
In his 1993 article George Bealer offers three separate arguments that are directed against the internal coherence of empiricism, specifically against Quine’s version of empiricism. One of these arguments is the starting points argument (SPA) and it is supposed to show that Quinean empiricism is incoherent. We argue here that this argument is deeply flawed, and we demonstrate how a Quinean may successfully defend his views against Bealer’s SPA. Our defense of Quinean empiricism against the SPA depends on showing (1) that Bealer is, in an important sense, a foundationalist, and (2) that Quine is, in an important sense, a coherentist. Having established these two contentions we show that Bealer’s SPA begs the question against Quinean empiricists.
The scientists propose a new model with dark energy in which our universe rides on an expanding bubble in an extra dimension, the whole universe accommodated on the edge of this expanding bubble. The thought experiment in this paper involves a bubble universe that is assumed to be rising in the direction of its north bubble pole. The rising is modeled by comparison with a parcel of air as observed in meteorology: a parcel of air moves on the basis of a pressure mismatch, whereas here the bubble's movement is taken to be along the direction of its north bubble pole. If such bubbles are assumed to be moving upward like parcels of air, each in the direction of its respective north bubble pole, they could have collisions. The assumption that bubbles have poles implies that their movements and the expansion and cooling of the universe are all correlated.
How promising is Bitcoin as a currency? This paper discusses four claims on the advantages of Bitcoin: a more stable currency than state-backed ones; a secure and efficient payment system; a credible alternative to the central management of money; and a better protection of transaction privacy. We discuss these arguments by relating them to their philosophical roots in libertarian and neoliberal theories, and assess whether Bitcoin can effectively meet these expectations. We conclude that despite its advocates’ enthusiasm, there are good reasons to doubt that Bitcoin can fulfill its promises and act as a functioning currency, rather than as a mere speculative asset.
Universities and funders in many countries have been using the Journal Impact Factor (JIF) as an indicator for research and grant assessment despite its controversial nature as a statistical representation of scientific quality. This study investigates how the changes of JIF over the years can affect its role in research evaluation and science management, using JIF data from annual Journal Citation Reports (JCR) to illustrate the changes. The descriptive statistics reveal an increase in the median JIF for the top 50 journals in the JCR, from 29.300 in 2017 to 33.162 in 2019. Moreover, elite journal families have up to 27 journals in the top 50. In the group of journals with a JIF lower than 1, the proportion shrank by 14.53% in the 2015–2019 period. The findings suggest a potential ‘JIF bubble period’; science policymakers, universities, public fund managers, and other stakeholders should pay closer attention to JIF as a criterion for quality assessment to ensure more efficient science management.
Collaborative filtering is being used within organizations and in community contexts for knowledge management and decision support as well as the facilitation of interactions among individuals. This article analyzes rhetorical and technical efforts to establish trust in the constructions of individual opinions, reputations, and tastes provided by these systems. These initiatives have some important parallels with early efforts to support quantitative opinion polling and construct the notion of “public opinion.” The article explores specific ways to increase trust in these systems, albeit a “guarded trust” in which individuals actively seek information about system foibles and analyze the reputations of participants.
Goffman’s (1959) dramaturgical identity theory requires modification when theorising about presentations of self on social media. This chapter contributes to these efforts, refining a conception of digital identities by differentiating them from ‘corporatised identities’. Armed with this new distinction, I ultimately argue that social media platforms’ production of corporatised identities undermines their users’ autonomy and digital well-being. This follows from the disentanglement of several commonly conflated concepts. Firstly, I distinguish two kinds of presentation of self that I collectively refer to as ‘expressions of digital identity’. These digital performances (boyd 2007) and digital artefacts (Hogan 2010) are distinct, but often confused. Secondly, I contend this confusion results in the subsequent conflation of corporatised identities – poor approximations of actual digital identities, inferred and extrapolated by algorithms from individuals’ expressions of digital identity – with digital identities proper. Finally, and to demonstrate the normative implications of these clarifications, I utilise MacKenzie’s (2014, 2019) interpretation of relational autonomy to propose that designing social media sites around the production of corporatised identities, at the expense of encouraging genuine performances of digital identities, has undermined multiple dimensions of this vital liberal value. In particular, the pluralistic range of authentic preferences that should structure flourishing human lives are being flattened and replaced by commercial, consumerist preferences. For these reasons, amongst others, I contend that digital identities should once again come to drive individuals’ actions on social media sites. Only upon doing so can individuals’ autonomy, and control over their digital identities, be rendered compatible with social media.
One of the explanations for the Great Crisis of 2007-2008 was that financial authorities should have issued stricter regulations to prevent the housing bubble. However, according to Alan Greenspan, Chairman of the Federal Reserve System (FED) from 1987 to 2006, this is to judge with hindsight. No one can guess when a “bubble” begins, nor when it ends; bubbles happen because of the “irrational exuberance” in investors’ behavior, which causes boom and bust cycles. Regulators are not in a better position for assessing risks, though: since market participants supposedly know their own risks better than the regulator (a kind of informational asymmetry), an intervention (except to ensure law enforcement) would imply unjustified paternalism. However, a regulator does not have to be conceived as a paternalistic authority. We sketch an objection to Greenspan's argument, arguing that crises don’t require defective reasoning such as “irrational exuberance” – our usual bounded rationality might be enough to produce the kind of “self-fulfilling prophecy” observed in the rise and fall of bubble assets’ value. Given the possibility of grave externalities, authorities are justified in adopting measures to ensure investors behave in a prudent way, even if they supposedly know their own risks better.
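The claim that ordinary bounded rationality can generate boom-and-bust dynamics can be illustrated with a deliberately simple toy model (entirely hypothetical, not from the paper): traders extrapolate the most recent price change, while a weaker, lagged force pulls prices back toward fundamental value.

```python
# Hypothetical illustration of the abstract's claim: trend extrapolation
# (bounded rationality -- no "irrational exuberance" needed) plus a weak
# pull toward fundamentals yields a boom followed by a bust.
# Recurrence: p[t+1] = p[t] + a*(p[t] - p[t-1]) - b*(p[t] - fundamental)

def simulate(p0, p1, fundamental, a=0.9, b=0.15, steps=40):
    prices = [p0, p1]
    for _ in range(steps):
        p_next = (prices[-1]
                  + a * (prices[-1] - prices[-2])       # trend following
                  - b * (prices[-1] - fundamental))     # lagged correction
        prices.append(p_next)
    return prices

prices = simulate(p0=100.0, p1=105.0, fundamental=100.0)
boom = max(prices)
bust = min(prices[prices.index(boom):])
print(boom > 108, bust < 100)  # overshoot, then fall below fundamental
```

With these (arbitrary) parameters a small initial shock is amplified well past the fundamental and then collapses below it: a self-fulfilling boom-bust cycle with no defective reasoning anywhere in the model.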
I prove both the mathematical conjecture P ≠ NP and the Continuum Hypothesis are eternally unprovable using the same fundamental idea. Starting with the Saunders Mac Lane idea that a proof is eternal or it is not a proof, I use the indeterminacy of human biological capabilities in the eternal future to show that since both conjectures are independent of the axioms and have definitions connected with human biological capabilities, it would be impossible to prove them eternally without the creation and widespread acceptance of new axioms. I also show that the same fundamental concepts cannot be used to demonstrate the eternal unprovability of many other mathematical theorems and open conjectures. Finally, I investigate the idea's implications for the foundations of mathematics, including its relation to Gödel's Incompleteness Theorem and Tarski's Undefinability Theorem.
Bruxism is a medical sleep syndrome: it is the clinical term for grinding the teeth and clenching the jaw. People may clench their teeth and jaw, rather than grinding their teeth, without it producing any signals. The symptoms of bruxism include pain and stiffness in the jaw joint, brittle teeth, headache, earache, and difficulty in opening the mouth. The causes of bruxism include sleep disorders, stress, and anxiety. REM (rapid eye movement) is a stage of sleep. EEG signals are used to measure neural activity, and the alpha, beta, gamma, theta, and delta waves are used in the prognosis of bruxism syndrome. The analysis is implemented in MATLAB in six steps for the prognosis of bruxism. Md Belal Bin Heyat, Faijan Akhtar and Shadab Azad, "Comparative Analysis of Original Wave & Filtered Wave of EEG Signal Used in the Prognostic of Bruxism Medical Sleep Syndrome", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1, Issue-1, December 2016.
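The paper's six-step MATLAB pipeline is not reproduced in the abstract. As a generic illustration of the underlying idea of comparing an "original wave" with a "filtered wave" (a simple moving-average smoother on a synthetic noisy signal, not the authors' actual method or data):

```python
import math
import random

# Generic illustration only (not the paper's MATLAB pipeline): compare
# an "original wave" with a smoothed "filtered wave" using a moving
# average, the basic idea behind separating a signal from noise.

random.seed(0)
t = [i / 100 for i in range(400)]
original = [math.sin(2 * math.pi * 2 * x) + random.gauss(0, 0.5) for x in t]

def moving_average(signal, window=9):
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

filtered = moving_average(original)

def power(signal):
    """Mean squared amplitude of a signal."""
    return sum(x * x for x in signal) / len(signal)

print(power(filtered) < power(original))  # smoothing reduces noise power
```

A real EEG pipeline would instead use band-pass filters tuned to the alpha, beta, gamma, theta, and delta frequency ranges, but the original-versus-filtered comparison has the same shape.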
Recently, there has been growing concern that increased partisanship in news sources, as well as new ways in which people acquire information, has led to a proliferation of epistemic bubbles and echo chambers: in the former, one tends to acquire information from a limited range of sources, ones that generally support the kinds of beliefs that one already has, while the latter function in the same way but possess the additional characteristic that certain beliefs are actively reinforced. Here I argue, first, that we should conceive of epistemic bubbles and echo chambers as types of epistemically pernicious groups, and second, that while analyses of such groups have typically focused on relationships between individual members, at least part of what makes such groups epistemically pernicious pertains to the way that members rely on the groups themselves as sources of information. I argue that member reliance on groups results in groups being attributed a degree of credibility that outruns their warrant, a process I call groupstrapping. I argue that by recognizing groupstrapping as an illicit method of forming and updating beliefs we can make progress on some of the open questions concerning epistemically pernicious groups.
New media (highly interactive digital technology for creating, sharing, and consuming information) affords users a great deal of control over their informational diets. As a result, many users of new media unwittingly encapsulate themselves in epistemic bubbles (epistemic structures, such as highly personalized news feeds, that leave relevant sources of information out (Nguyen forthcoming)). Epistemically paternalistic alterations to new media technologies could be made to pop at least some epistemic bubbles. We examine one such alteration that Facebook has made in an effort to fight fake news and conclude that it is morally permissible. We further argue that many epistemically paternalistic policies can (and should) be a perennial part of the internet information environment.
People increasingly form beliefs based on information gained from automatically filtered Internet sources such as search engines. However, the workings of such sources are often opaque, preventing subjects from knowing whether the information provided is biased or incomplete. Users’ reliance on Internet technologies whose modes of operation are concealed from them raises serious concerns about the justificatory status of the beliefs they end up forming. Yet it is unclear how to address these concerns within standard theories of knowledge and justification. To shed light on the problem, we introduce a novel conceptual framework that clarifies the relations between justified belief, epistemic responsibility, action, and the technological resources available to a subject. We argue that justified belief is subject to certain epistemic responsibilities that accompany the subject’s particular decision-taking circumstances, and that one typical responsibility is to ascertain, so far as one can, whether the information upon which the judgment will rest is biased or incomplete. What this responsibility comprises is partly determined by the inquiry-enabling technologies available to the subject. We argue that a subject’s beliefs that are formed based on Internet-filtered information are less justified than they would be if she either knew how filtering worked or relied on additional sources, and that the subject may have the epistemic responsibility to take measures to enhance the justificatory status of such beliefs.
Imagine you are casually browsing an online bookstore, looking for an interesting novel. Suppose the store predicts you will want to buy a particular novel: the one most chosen by people of your same age, gender, location, and occupational status. The store recommends the book, it appeals to you, and so you choose it. Central to this scenario is an automated prediction of what you desire. This article raises moral concerns about such predictions. More generally, this article examines the ethics of artificial social cognition—the ethical dimensions of attribution of mental states to humans by artificial systems. The focus is presumptuous aim attributions, which are defined here as aim attributions based crucially on the premise that the person in question will have aims like superficially similar people. Several everyday examples demonstrate that this sort of presumptuousness is already a familiar moral concern. The scope of this moral concern is extended by new technologies. In particular, recommender systems based on collaborative filtering are now commonly used to automatically recommend products and information to humans. Examination of these systems demonstrates that they naturally attribute aims presumptuously. This article presents two reservations about the widespread adoption of such systems. First, the severity of our antecedent moral concern about presumptuousness increases when aim attribution processes are automated and accelerated. Second, a foreseeable consequence of reliance on these systems is an unwarranted inducement of interpersonal conformity.
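The bookstore scenario rests on collaborative filtering. A minimal user-based sketch (all data, names, and ratings invented here) shows how such a system attributes an aim to you from superficially similar users:

```python
import math

# Minimal user-based collaborative filtering sketch (all data invented):
# recommend to a target user the unseen item best liked by their most
# similar neighbor -- precisely the "people like you chose this" aim
# attribution the article scrutinizes.

ratings = {  # user -> {item: rating}
    "you":   {"novel_a": 5, "novel_b": 4},
    "user1": {"novel_a": 5, "novel_b": 4, "novel_c": 5},
    "user2": {"novel_a": 1, "novel_b": 2, "novel_d": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(target):
    others = [u for u in ratings if u != target]
    nearest = max(others, key=lambda u: cosine(ratings[target], ratings[u]))
    unseen = {i: r for i, r in ratings[nearest].items()
              if i not in ratings[target]}
    return max(unseen, key=unseen.get)

print(recommend("you"))  # novel_c, taken from the most similar user
```

Note that the system never asks what "you" actually want: the aim is inferred wholesale from user1's ratings, which is exactly the presumptuousness at issue.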
In their responses to my article “Epistemically Pernicious Groups and the Groupstrapping Problem” (Boyd, 2018), Bert Baumgaertner (“Groupstrapping, Bootstrapping, and Oops-strapping: A Reply to Boyd”) and C. Thi Nguyen (“Group-strapping, Bubble, or Echo Chamber?”) have raised interesting questions and opened lines of inquiry regarding my discussion of what I hope is a way to help make sense of how members of groups can continue to hold beliefs that are greatly outweighed by countervailing evidence (e.g., antivaxxers, climate-change deniers). Here I respond to these arguments and suggestions by providing three new reasons to believe that groupstrapping, as I describe it, occurs in epistemically pernicious groups.
In 2001, soon after the Asian Crises of 1997-1998, the Dotcom Bubble, and 9/11, the Enron crisis triggered a fraud scandal on Wall Street that impacted the market to the core. Although scandals such as Lehman Brothers and WorldCom in 2007-2008 and the Great Recession have since surpassed it, Enron still remains one of the most important cases of fraudulent accounting. In the 2000s, even though the financial industry had become highly regulated, deregulation of the energy industry allowed companies to place bets on future prices. At the peak of the dotcom bubble Enron was named a star innovator, but when the bubble burst, Enron's plan to build high-speed internet did not flourish and investors started to realize losses. Furthermore, the financial losses of the operations were hidden using the mark-to-market accounting technique instead of book value, and special purpose entities were used to hide debt. The root cause was identified as a company with a toxic corporate culture focused on officer compensation rather than social responsibility, and hence faulty leadership. Is it possible, then, that ethical accounting practices, social responsibility, and ethics all become inferior goods as income rises in an ‘irrationally exuberant’ era?
Actual situations where folk philosophy might have predicted precognition effects were studied and dealt with experimentally and theoretically. Extremely strong experimental results were obtained, but the findings supported not precognition but the Many Bubble Interpretation, which currently uses dynamical systems theory as applied to the physics of the brain. Further experiments and theoretical work were discussed.
This paper is an attempt to localize Herman and Chomsky’s analysis of the commercial media and apply it to the Philippine media climate. Through the propaganda model, they introduced the five interrelated media filters that made possible the “manufacture of consent.” By consent, Herman and Chomsky meant that the mass communication media can be a powerful tool for manufacturing ideology and for influencing a wider public to believe in capitalistic propaganda. Thus, they call their theory the “propaganda model,” referring to the capitalist media structure and its underlying political function. Herman and Chomsky’s analysis centered on the US media; however, they also believed the model holds in other parts of the world, as media conglomeration is found all around the globe. In the Philippines, media conglomeration is not an alien concept, especially in the presence of a giant media outlet such as ABS-CBN. In this essay, the authors claim that the propaganda model can be observed even in the country’s less obvious corporate media, which are disguised as independent media entities but, like a chameleon, camouflage themselves and leave observers without any clue. Hence the reason to analyze and scrutinize a highly reputable news organization in the country, namely the Philippine Center for Investigative Journalism (PCIJ), in relation to its portrayal of the Duterte presidency.
P.F. Strawson argued that ‘mature sensible experience (in general) presents itself as … an immediate consciousness of the existence of things outside us’ (1979: 97). He began his defence of this very natural idea by asking how someone might typically give a description of their current visual experience, and offered this example of such a description: ‘I see the red light of the setting sun filtering through the black and thickly clustered branches of the elms; I see the dappled deer grazing in groups on the vivid green grass…’ (1979: 97). In other words, in describing experience, we tend to describe the objects of experience – the things which we experience – and the ways they are when we are experiencing them. Some go further. According to Heidegger …
This chapter proposes the concept of the mindsponge and its underlying themes, which explain why and how executives, managers, and corporations can replace waning values in their mindsets with values absorbed during exposure to multicultural and global settings. It first provides a brief literature review on global mindset and cultural values, which suggests not only that a mindset can be improved, but that its learning mechanism can also be developed. The chapter then offers a conceptual framework, called the ‘mindsponge’, which builds upon earlier works linking mindset to themes of multi-filtering. The process is proposed to help identify emerging values in the transition economy of Vietnam and also to reconfirm existing core values. The concept of the mindsponge provides executives, managers, and organizations not only with a practical framework for improving their global mindset by identifying and strengthening core values, but also with a means of capturing emerging opportunities.
It is often assumed that if the fetus is a person, then abortion should be illegal. Thomson1 laid the groundwork to challenge this assumption, and Boonin2 has recently argued that it is false: he argues that abortion should be legal even if the fetus is a person. In this article, I explain both Thomson’s and Boonin’s reasons for thinking that abortion should be legal even if the fetus is a person. After this, I show that Thomson’s and Boonin’s arguments for legalised abortion fail; they have not given us good reason for thinking abortion should be legal.1 Finally, I argue that, if we play Boonin’s game, abortion should be illegal. When discussing the ethics and politics of abortion, whether the fetus is a person is usually central.3 4 However, Thomson1 long ago challenged this view. She argued that abortion is permissible even if the fetus is a person. To do so, she asks us to consider a case in which you are non-consensually taken to a hospital and hooked up to a famous violinist. The violinist, if he is to survive, needs to filter his blood through your body. After nine months, he will be fine and you can unplug yourself and go on your way. If you unplug yourself prior to this, however, he will die. Is it permissible to unplug yourself? Thomson thinks the answer is ‘yes’. And this, she thinks, shows that abortion is permissible even if the fetus is a person. While the majority of her article focuses on the ethics of abortion, it is clear that she also aims to show that abortion should be legal even if the fetus is a person; she would no doubt reject the view that the state is right to coerce you into ….
This paper examines the role of prestige bias in shaping academic philosophy, with a focus on its demographics. I argue that prestige bias exacerbates the structural underrepresentation of minorities in philosophy: it works as a filter against (among others) philosophers of color, women philosophers, and philosophers of low socio-economic status. As a consequence of prestige bias, our judgments of philosophical quality become distorted. I outline ways in which prestige bias in philosophy can be mitigated.
Inflationary cosmology has been widely accepted due to its successful predictions: for a “generic” initial state, inflation produces a homogeneous, flat bubble with an appropriate spectrum of density perturbations. However, the discovery that inflation is “generically eternal,” leading to a vast multiverse of inflationary bubbles with different low-energy physics, threatens to undermine this account. There is a “predictability crisis” in eternal inflation, because extracting predictions apparently requires a well-defined measure over the multiverse. This has led to discussions of anthropic predictions based on a measure over the multiverse, together with an assumption that we are typical observers. I give a pessimistic assessment of attempts to make predictions in this sense, emphasizing in particular problems that arise even if a unique measure can be found.
Can we consciously see more items at once than can be held in visual working memory? This question has eluded resolution because the ultimate evidence is subjects’ reports, in which phenomenal consciousness is filtered through working memory. However, a new technique makes use of the fact that unattended ‘ensemble properties’ can be detected ‘for free’ without decreasing working memory capacity.
This paper argues for the general proper functionalist view that epistemic warrant consists in the normal functioning of the belief-forming process when the process has forming true beliefs reliably as an etiological function. Such a process is reliable in normal conditions when functioning normally. This paper applies this view to so-called testimony-based beliefs. It argues that when a hearer forms a comprehension-based belief that P (a belief based on taking another to have asserted that P) through the exercise of a reliable competence to comprehend and filter assertive speech acts, then the hearer's belief is prima facie warranted. The paper discusses the psychology of comprehension, the function of assertion, and the evolution of filtering mechanisms, especially coherence checking.
Li (禮) and yi (義) are two central moral concepts in the Analects. Li has a broad semantic range, referring to formal ceremonial rituals on the one hand, and basic rules of personal decorum on the other. What is similar across the range of referents is that the li comprise strictures of correct behavior. The li are a distinguishing characteristic of Confucian approaches to ethics and socio-political thought, a set of rules and protocols that were thought to constitute the wise practices of ancient moral exemplars filtered down through dynasties of the past. However, even while the li were extensive and meant to be followed diligently, they were also understood as incapable of exhausting the whole range of activity that constitutes human life. There were bound to be situations in life where there would be no obvious recourse to the li for guidance. As part of their reflections on the good life, the Confucians maintained another moral concept that seemed to cover morally upright exemplary behavior in these types of situations. This concept is that of yi or rightness. In this chapter, I begin with a brief historical sketch to provide some context, and will then turn to li and yi in turn. In the end, I will suggest how li and yi were both meant to facilitate the supreme value of social harmony that pervades much of the Analects and serves as its ultimate orientation.
We argue that definite noun phrases give rise to uniqueness inferences characterized by a pattern we call definiteness projection. Definiteness projection says that the uniqueness inference of a definite projects out unless there is an indefinite antecedent in a position that filters presuppositions. We argue that definiteness projection poses a serious puzzle for e-type theories of (in)definites; on such theories, indefinites should filter existence presuppositions but not uniqueness presuppositions. We argue that definiteness projection also poses challenges for dynamic approaches, which have trouble generating uniqueness inferences and predicting some filtering behavior, though unlike the challenge for e-type theories, these challenges have mostly been noted in the literature, albeit in a piecemeal way. Our central aim, however, is not to argue for or against a particular view, but rather to formulate and motivate a generalization about definiteness which any adequate theory must account for.
Online service providers (OSPs), such as AOL, Facebook, Google, Microsoft, and Twitter, significantly shape the informational environment and influence users’ experiences and interactions within it. There is general agreement on the centrality of OSPs in information societies, but little consensus about what principles should shape their moral responsibilities and practices. In this article, we analyse the main contributions to the debate on the moral responsibilities of OSPs. By endorsing the method of the levels of abstraction (LoAs), we first analyse the moral responsibilities of OSPs in the web. These concern the management of online information, which includes information filtering, Internet censorship, the circulation of harmful content, and the implementation and fostering of human rights. We then consider the moral responsibilities ascribed to OSPs on the web and focus on the existing legal regulation of access to users’ data. The overall analysis provides an overview of the current state of the debate and highlights two main results. First, topics related to OSPs’ public role, especially their gatekeeping function, their corporate social responsibilities, and their role in implementing and fostering human rights, have acquired increasing relevance in the specialised literature. Second, there is a lack of an ethical framework that can define OSPs’ responsibilities and provide the fundamental sharable principles necessary to guide OSPs’ conduct within the multicultural and international context in which they operate. This article contributes to the development of such an ethical framework by endorsing a LoA that enables the definition of the responsibilities of OSPs with respect to the well-being of the infosphere and of the entities inhabiting it.
A set of visual search experiments tested the proposal that focused attention is needed to detect change. Displays were arrays of rectangles, with the target being the item that continually changed its orientation or contrast polarity. Five aspects of performance were examined: linearity of response, processing time, capacity, selectivity, and memory trace. Detection of change was found to be a self-terminating process requiring a time that increased linearly with the number of items in the display. Capacity for orientation was found to be about 5 items, a value comparable to estimates of attentional capacity. Observers were able to filter out both static and dynamic variations in irrelevant properties. Analysis also indicated a memory for previously-attended locations. These results support the hypothesis that the process needed to detect change is much the same as the attentional process needed to detect complex static patterns. Interestingly, the features of orientation and polarity were found to be handled in somewhat different ways. Taken together, these results not only provide evidence that focused attention is needed to see change, but also show that change detection itself can provide new insights into the nature of attentional processing.
Model-data symbiosis is the view that there is an interdependent and mutually beneficial relationship between data and models, whereby models are not only data-laden, but data are also model-laden or model-filtered. In this paper I elaborate and defend the second, more controversial, component of the symbiosis view. In particular, I construct a preliminary taxonomy of the different ways in which theoretical and simulation models are used in the production of data sets. These include data conversion, data correction, data interpolation, data scaling, data fusion, data assimilation, and synthetic data. Each is defined and briefly illustrated with an example from the geosciences. I argue that model-filtered data are typically more accurate and reliable than the so-called raw data, and hence beneficially serve the epistemic aims of science. By illuminating the methods by which raw data are turned into scientifically useful data sets, this taxonomy provides a foundation for developing a more adequate philosophy of data.
I investigate whether degreed beliefs are able to play the predictive, explanatory, and modeling roles that they are frequently taken to play. The investigation focuses on evidence—both from sources familiar in epistemology as well as recent work in behavioral economics and cognitive psychology—of variability in agents' apparent degrees of belief. Although such variability has been noticed before, there has been little philosophical discussion of its breadth or of the psychological mechanisms underlying it. Once these are appreciated, the inadequacy of degrees of belief becomes clear. I offer a theoretical alternative to degrees of belief, what I call the filter theory.
In this paper, we use antecedent-final conditionals to formulate two problems for parsing-based theories of presupposition projection and triviality of the kind given in Schlenker 2009. We show that, when it comes to antecedent-final conditionals, parsing-based theories predict filtering of presuppositions where there is in fact projection, and triviality judgments for sentences which are in fact felicitous. More concretely, these theories predict that presuppositions triggered in the antecedent of antecedent-final conditionals will be filtered (i.e. will not project) if the negation of the consequent entails the presupposition. But this is wrong: John isn’t in Paris, if he regrets being in France intuitively presupposes that John is in France, contrary to this prediction. Likewise, parsing-based approaches to triviality predict that material entailed by the negation of the consequent will be redundant in the antecedent of the conditional; but John isn’t in Paris, if he’s in France and Mary is with him is intuitively felicitous, contrary to these predictions. Importantly, given that the trigger appears in sentence-final position, both incremental (left-to-right) and symmetric versions of such theories make the same predictions. These data constitute a challenge to the idea that presupposition projection and triviality should be computed on the basis of parsing. This issue is important because it relates to the more general question as to whether presupposition and triviality calculation should be thought of as a pragmatic post-compositional phenomenon or as part of compositional semantics (as in the more traditional dynamic approaches).
We discuss a solution which allows us to maintain the parsing-based pragmatic approach; it is based on an analysis of conditionals which incorporates a presupposition that their antecedent is compatible with the context, together with a modification to Schlenker’s (2009) algorithm for calculating local contexts so that it takes into account presupposed material. As we will discuss, this solution works within a framework broadly similar to that of Schlenker’s (2009), but it doesn’t extend in an obvious way to other parsing-based accounts (e.g. parsing-based trivalent approaches). We conclude that a parsing-based theory can be maintained, but only if we adopt a substantial change of perspective on the framework.