This Open Access book explains the philosophy, design principles, and community organization of Decidim and provides essential insights into how the platform works. Decidim is the world-leading digital infrastructure for participatory democracy, built entirely and collaboratively as free software and used by more than 500 institutions with over three million users worldwide.

The platform allows any organization (government, association, university, NGO, neighbourhood, or cooperative) to support multitudinous processes of participatory democracy. In a context dominated by corporate-owned digital platforms, in an era of increasing social structuring via Artificial Intelligence, Decidim stands as a publicly or community-owned platform for collective human intelligence. Yet the project is much more than its technological features. Decidim is in itself a crossroads of the various dimensions of the networked society, a detailed practical map of its complexities and conflicts. The authors distinguish three general dimensions of the project: (1) the political - shedding light on the democratic model that Decidim promotes and its impact on public policies and organizations; (2) the technopolitical - explaining how this technology is democratically designed and managed to produce and protect certain political effects; and (3) the technical - presenting the conditions of production, operation, and success of the project. This book systematically covers those three levels in an academically sound, technologically consistent, and politically innovative manner. Serving as a useful resource and handbook for the use of Decidim, it will appeal not only to students and scholars interested in participatory and digital democracy but also to professionals, policy-makers, and a wider audience interested in learning more about the Decidim platform.
The prestige of an academic institution may be determined as a function of its affiliations with other academic institutions. Using digital tools to data-scrape, data-mine, and perform network analysis on university websites, an approximation of the number of academic affiliations may be measured. In particular, by observing the alma mater institutions of the faculty employed at each institution, these numbers show the relative employment of alumni and provide a proxy metric for the relative prestige of their degree-granting institutions. These affiliations can be charted and graphed to determine the distribution of affiliations throughout an academic ecosystem, from which we might draw conclusions about that system's hierarchies and inequalities. Here we use anglophone PhD-granting philosophy departments as a case study for this methodology and offer tentative conclusions.
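The measurement described in this abstract — tallying where each department's faculty earned their PhDs and summarizing how unevenly placements are distributed — can be sketched in a few lines of standard-library Python. This is a minimal illustration, not the authors' actual pipeline; the function names, the toy data, and the choice of the Gini coefficient as the inequality summary are assumptions for the example.

```python
from collections import Counter

def placement_counts(faculty_records):
    """Count how many faculty positions each PhD-granting institution placed.

    faculty_records: iterable of (employing_institution, phd_institution)
    pairs, as might be scraped from department faculty pages.
    """
    return Counter(phd for _, phd in faculty_records)

def gini(counts):
    """Gini coefficient of placement counts: 0 = perfectly equal,
    approaching 1 = placements concentrated in a few institutions."""
    values = sorted(counts)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted values' rank-weighted sum.
    cum = sum((i + 1) * v for i, v in enumerate(values))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical toy data: (employer, alma mater) pairs.
records = [("U1", "Oxford"), ("U2", "Oxford"), ("U3", "Oxford"), ("U4", "Leeds")]
counts = placement_counts(records)
print(counts["Oxford"])                       # 3
print(round(gini(list(counts.values())), 3))  # 0.25
```

A real study would of course scrape thousands of records and might prefer full network measures (e.g., centrality on the hiring graph) over a single scalar, but the distributional logic is the same.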
During the Covid-19 pandemic, a considerable number of people seem to have been lured into believing in conspiracy theories. These people deliberately disregard expert advice by virologists and physicians concerning social behaviour aimed at reducing the number of new infections. Disregarding traditional experts and their advice is just one example of what, in the philosophy of science, is referred to as a crisis of expertise – the phenomenon whereby people seem to have lost their trust in traditional expert advice and are looking for alternatives. In the following paper, the trend of using Internet technology as an epistemic alternative will be analysed in detail by investigating the question of whether the Internet really allows people to become epistemically more autonomous. The focus will be on the epistemic and moral vulnerability of people resorting to new media tools instead of relying on traditional expert opinion. It will be shown that some important presuppositions about the Internet and, in particular, social media tools as alternative ways to collect information and find emotional support in a group of like-minded people cannot be maintained.
Despite growing understanding of the addictive qualities of the internet, and rising concerns about the effects of excessive internet use on personal wellbeing and mental health, the corresponding ethical debate is still in its infancy, and many of the relevant philosophical and conceptual frameworks are underdeveloped. Our goal in this chapter is to explore some of this evolving terrain. While there are unique ethical considerations that pertain to the formalisation of a disorder related to excessive internet use, our ethical concerns (and indeed our mental health concerns) about whether certain technologies undermine wellbeing can and should be far broader than the debates about the appropriateness of particular diagnostic categories. In this chapter we introduce some of these wider debates with regard to persuasive digital technologies—particularly those which aim to maximise use, or even to encourage compulsive engagement—as well as the difficulty in articulating the harms involved in excessive internet use, especially where such use has not led to functional impairment. Following these conversations we briefly consider some more practical ethical implications, including regulation of certain design features, concerns about growing socioeconomic inequality in online services, and whether there should be a "right to disconnect."
This paper argues that Internet access should be recognised as a human right because it has become practically indispensable for having adequate opportunities to realise our socio-economic human rights. This argument is significant for a philosophically informed public understanding of the Internet and because it provides the basis for creating new duties. For instance, accepting a human right to Internet access minimally requires guaranteeing access for everyone and protecting Internet access and use from certain objectionable interferences (e.g. surveillance, censorship, online abuse). Realising this right thus requires creating an Internet that is crucially different from the one we currently have. The argument thus has wide-ranging implications.
The aim of this paper is to consider whether critical rationalism has any ideas which could usefully be applied to the internet. Today we tend to take the internet for granted, and it is easy to forget that it was only about two decades ago that it began to be used to any significant extent. Accordingly, in section 1 of the paper, there is a brief consideration of the history of the internet. At first sight this makes it look implausible that any of Popper's ideas could be applicable to the internet, since Popper died before the internet came into general use. However, section 2 argues that Popper's theory of World 3 does apply very well to the internet. This application is significant because, as shown in section 3, it leads to the problem of misinformation, which is one of the most significant problems generated by the internet. In section 4 there is an attempt to solve this problem using ideas taken from Popper's epistemology. It is argued that there should be changes in education designed to prepare students for the internet age. Teaching in the internet age should focus on presenting to students not just the accepted theories but also the evidence on which they are based. An illustration of how this might be done is given by considering an example from science teaching, namely the teaching of Newtonian mechanics in the last years of school or the first years of university.
The Internet of Things (IoT) is a rapidly growing technology that connects and integrates billions of smart devices, generating vast volumes of data and impacting various aspects of daily life and industrial systems. However, the inherent characteristics of IoT devices, including limited battery life, universal connectivity, resource-constrained design, and mobility, make them highly vulnerable to cybersecurity attacks, which are increasing at an alarming rate. As a result, IoT security and privacy have gained significant research attention, with a particular focus on developing anomaly detection systems. In recent years, machine learning (ML) has made remarkable progress, evolving from a lab novelty to a powerful tool in critical applications, and has been proposed as a promising solution for addressing IoT security and privacy challenges. In this article, we conduct a study of the existing security and privacy challenges in the IoT environment. Subsequently, we present the latest ML-based models and solutions to address these challenges, summarizing them in a table that highlights the key parameters of each proposed model. Additionally, we thoroughly study the available datasets related to IoT technology. Through this article, readers will gain a detailed understanding of IoT architecture, security attacks, and countermeasures using ML techniques and the available datasets. We also discuss future research directions for ML-based IoT security and privacy. Our aim is to provide valuable insights into the current state of research in this field and to contribute to the advancement of IoT security and privacy.
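The anomaly detection systems this survey focuses on share a simple core idea: learn what "normal" device behaviour looks like, then flag deviations. The following standard-library sketch illustrates that idea with a z-score threshold on traffic volume; it is a deliberately minimal stand-in for the ML models the article surveys, and the function name, threshold, and toy traffic data are all assumptions for the example.

```python
import statistics

def zscore_anomalies(series, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the series.

    A minimal illustration of anomaly detection on IoT telemetry; real
    ML-based detectors learn far richer models of normal behaviour.
    """
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, x in enumerate(series) if abs(x - mean) / stdev > threshold]

# Hypothetical per-minute packet counts from an IoT sensor; the spike at
# index 5 might indicate a flooding attack.
traffic = [20, 22, 19, 21, 20, 500, 18, 22]
print(zscore_anomalies(traffic, threshold=2.0))  # [5]
```

In practice, the surveyed approaches replace this fixed statistical rule with models trained on labelled IoT datasets, which is precisely why the availability and quality of such datasets receive their own treatment in the article.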
Leading prescriptions for addressing the spread of fake news, misinformation, and other forms of epistemically toxic content online target either the platform or platform users as a single site for intervention. Neither approach attends to the intense feedback between people, posts, and platforms. Leading prescriptions boil down to the suggestion that we make social media more like traditional media, whether by making platforms take active roles as gatekeepers, or by exhorting individuals to behave more like media professionals. Both approaches are impracticable and wrong.
The medical use of computing and information and communication technologies (ICTs) has a history of several decades, but the emergence of the internet, and especially the web and social media, created a new situation. As a result, the term eHealth is now widely used, and the use of internet (and mobile) technologies in healthcare, among patients and professionals alike, has become common practice. There are more and more signs of the institutionalization of this new sub-disciplinary field of medicine, such as social organizations, healthcare institutes, scientific journals, regular conferences, etc. In this paper, by collecting the most relevant developments, we try to characterize this state of affairs in the field. Moreover, as is well known, the use of the internet has an enormous impact on society, on social systems and subsystems, and even on the everyday life of people. This extended practice also influences medicine and healthcare as social subsystems and fundamentally transforms some of their characteristics. In this paper, we try to show several important dimensions of these changes.
Examples of extended cognition typically involve the use of technologically low-grade bio-external resources (e.g., the use of pen and paper to solve long multiplication problems). The present paper describes a putative case of extended cognizing based around a technologically advanced mixed reality device, namely, the Microsoft HoloLens. The case is evaluated from the standpoint of a mechanistic perspective. In particular, it is suggested that a combination of organismic (e.g., the human individual) and extra-organismic (e.g., the HoloLens) resources form part of a common mechanism that realizes a bona fide cognitive routine. In addition to demonstrating how the theoretical resources of neo-mechanical philosophy might be used to evaluate extended cognitive systems, the present paper illustrates one of the ways in which mixed reality devices, virtual objects (i.e., holograms), and online (Internet-accessible) computational routines might be incorporated into human cognitive processes. This, it is suggested, speaks to the recent interest in mixed/virtual reality technologies across a number of disciplines. It also introduces us to issues that cross-cut disparate fields of philosophical research, such as the philosophy of science and the philosophy of technology.
We present a novel model of individual people, online posts, and media platforms to explain the online spread of epistemically toxic content such as fake news and suggest possible responses. We argue that a combination of technical features, such as the algorithmically curated feed structure, and social features, such as the absence of stable social-epistemic norms of posting and sharing in social media, is largely responsible for the unchecked spread of epistemically toxic content online. Sharing constitutes a distinctive communicative act, governed by a dedicated norm and motivated to a large extent by social identity maintenance. But confusion about this norm and its lack of inherent epistemic checks lead readers to misunderstand posts, attribute excess or insufficient credibility to posts, and allow posters to evade epistemic accountability—all contributing to the spread of epistemically toxic content online. This spread can be effectively addressed if people and platforms add significantly more context to shared posts and platforms nudge people to develop and follow recognized epistemic norms of posting and sharing.
This article was created to address the current state of affairs, which has resulted in an insufficient progress and innovation system. The purpose of this overview article is to educate society and increase its knowledge of how to use modern and innovative technologies based on need, cultural aspects, social context, and state context. To that end, I used secondary sources to help readers understand how state actors and policies might best respond to society's aspirations to use and communicate through technology, information, and innovation in a fair and equitable way. The outcome might be either positive or negative, but modernization and innovation can primarily be viewed constructively. This means that before bringing [technology] forward and fully researching it, we must analyse what we need, how we treat it, and why we use it.
This paper responds to the question of whether the Internet has made lectures obsolete and Matthew Pickles’ investigation of why lectures persist. It is written as a pastiche of R.K. Narayan, about whom a somewhat parallel question is probably asked. Pickles refers to a logic lecturer so dry people went swimming, and a pastiche approach is an alternative.
It’s interesting and a bit surprising how little attention philosophy has given to the status of emoji, those funny little symbols that punctuate text messages, Twitter, and other digital spaces. They have become ubiquitous, but maybe because they’re seen as frivolous or a “lower” form of communication, philosophy hasn’t paid them much mind. But they are an interesting aesthetic phenomenon. They are part language, part representational image. They are phenomenologically interesting in their effect on how we experience the written word. They punctuate, accentuate, emphasize, and add flavor to our communication in ways that are difficult to achieve otherwise. It would not be ridiculous to say that they represent a genuine linguistic development—a change in conventional orthography, and of an almost unbelievably sudden and dramatic, even revolutionary, kind.
Introduction. The development of online marketing in social networks creates unique opportunities for personal selling. These opportunities are especially manifest in online education, where consumers buy the brand of an expert with experience in a particular field. That is why a competitive space is being formed in the Instagram social network, where a personal brand acts as a product or service.

Materials and methods. The Instagram platform was chosen for studying the effectiveness of promoting a personal brand in social networks because it offers great visual opportunities for self-presentation. As part of the collection of empirical material, two methods were used: a survey (N = 200) and content analysis of three blogger accounts with high rates of activity and popularity.

Results and discussion. The content analysis showed that the algorithmic feed of the social network allows bloggers to control their content. To help them, Instagram provides statistical data on user reach, thereby capturing trends in the movement of the blogger’s audience. The main task of a blogger is to combine real and “virtual” images so as not to lose consumer confidence. A survey of social network users confirmed the importance of a personal brand for them. The survey also made it possible to identify the most popular requests that audiences direct at bloggers and their accounts: valuable reviews and recommendations, case studies and authorial solutions, storytelling, the blogger’s reflections and motivating messages, live broadcasts, and stories.

Conclusions. The results obtained underline followers’ high level of interest in bloggers’ personal brands on Instagram. The study shows that engagement (giving feedback and constantly interacting with the account) is greatest where the formation of a personal brand is built, to a greater extent, on the emotional level of perception of the individual.
Can we identify potential long-term consequences of digitalisation on public health? Environmental studies, metabolic research, and state-of-the-art research in neurobiology point towards the reduced amount of natural day and sunlight exposure of the developing child, a consequence of increasingly long hours spent indoors online, as the single unifying source of a whole set of health risks identified worldwide, as this review of the currently available literature makes clear. Overexposure to digital environments, from abuse to addiction, now concerns even the youngest (ages 0 to 2) and triggers, as argued on the basis of clear examples herein, a chain of interdependent negative and potentially long-term metabolic changes. This leads to a deregulation of the serotonin and dopamine neurotransmitter pathways in the developing brain, currently associated with online activity abuse and/or internet addiction, and akin to that found in severe substance abuse syndromes. A general functional working model is proposed in the light of the evidence brought to the forefront in this review.
Until the late 19th century scientists almost always assumed that the world could be described as a rule-based and hence deterministic system, or as a set of such systems. The assumption is maintained in many 20th-century theories, although it has also been doubted because of the breakthrough of statistical theories in thermodynamics (Boltzmann and Gibbs) and other fields, unsolved questions in quantum mechanics, as well as several theories forwarded within the social sciences. Until recently it was furthermore assumed that a rule-based and deterministic system was also predictable if only the rules were known, but this assumption has now been undermined by modern chaos theory, which describes rule-based and deterministic but unpredictable systems, while catastrophe theory delivers a set of types describing various kinds of instability and conditions for the stability of a given system. Hence the main trait in the theoretical development of 20th-century science can be described as a basic modification and limitation of some of the fundamental and strong assumptions forwarded in the previous epochs of modern science. Ironically, the very same process has been one in which the human capacity to intervene in nature has expanded dramatically, mainly with the help of the very same theories, not least because they allow nature to be described and made manipulable on a lower level and a more fine-grained scale. While the overall theoretical consistency between the various theories has gone, the reach of human intervention in nature has increased along quite new dimensions, whether in the area of physics (e.g. energy technologies, chemical technologies, nanotechnologies, etc.), biology (genetic manipulation), or in the area of psychology, sociology, and culture (artificial simulations of mental processes, new means of communication implying changes in the social infrastructure and cultural behaviour, etc.).
While some of these changes and new conditions can be reflected from within the conceptual framework of rule-based systems, albeit more complex than formerly recognized, others seem to give rise to the question of whether there are »systems« and relations between different systems in the world which are not rule-based. For instance, it seems obvious that the notion of instability represents a major conceptual break with former theories of rule-based systems, as the stability of the latter is an axiomatically given property implied in the very notion of rule-based systems, while instability can only be the result of external influence, which should be explained as the result of another rule-based system. While there are no difficulties implied concerning the stability of rule-based systems, the notion of unstable states of a system raises the question of how there can be a system at all if there are no invariant stabilising principles. This is the first question which I will address, and I shall do so by taking two examples of such systems as my point of departure. The first example will be the computer and the second will be ordinary language. In both cases I will argue that the stability of these systems (which are both defined by the existence/presence of human intentions) is provided with the help of differently organised redundancy functions, which allow the maintenance of systems in unstable macro-states; the suspension of previous rules; underdetermination and overdetermination; and the generation/emergence/creation of new rules more or less independent of previous rules, by the help of optional recursions to the permanently accessible underlying levels, such as the level of binary representation in computers. Since the notion of redundancy is both controversial as such and often avoided, the concept is discussed (as defined in Claude Shannon's mathematical theory of information and in the semiotic framework of A. J. Greimas), leading to a more general definition in which the redundancy functions serve to overcome noisy conditions, but at the cost of rule-based stability, determination, and predictability. A second question is how the notion of rule-generating systems relates to the notion of anticipatory systems. It will be argued that rule-generating systems share some features with anticipatory systems and that the former, from a certain viewpoint, can be seen as a subclass of the latter, although anticipative features are not necessarily part of the definition of rule-generating systems. On the other hand, it will be discussed whether anticipatory systems which are not rule-generating systems can exist, and it will be argued that the capacity to anticipate is strongly limited if it is not part of a rule-generating system. Therefore, it is concluded that the most powerful anticipatory systems need to be rule-generating systems.
Human existence is being transformed. Its structure, many thousands of years old, seems to be changing: built on the natural and the social, there is now a third form of existence, web-life. Man is now a citizen of three worlds, and his nature is being formed by the relations of natural, social, and web-life. We regard as our main goal the study of web-life, which has developed as the result of Internet use.
The paper argues for the necessity of building up a philosophy of the Internet and proposes a version of it, an «Aristotelian» philosophy of the Internet. First, an overview of recent trends in Internet research is presented. This train of thought leads to a proposal of understanding the nature of the Internet in the spirit of the Aristotelian philosophy, i.e., to conceive the Internet as the Internet, as a totality of all its aspects, as a whole entity. For this purpose, the Internet is explained in four (easily distinguishable, but obviously connected) contexts: we regard it as a system of technology, as an element of communication, as a cultural medium, and as an independent organism. Based on these investigations we conclude that the Internet is the medium of a new mode of human existence created by late modern man; a mode that is built on earlier (i.e., natural and social) spheres of existence and yet is markedly different from them. We call this newly formed existence web-life. Finally, using two enlightening cultural-historical analogies (the reformation of knowledge and the formation of web-life), several fundamental characteristics of web-life are presented.
The paper argues for the necessity of building up a philosophy of the internet and proposes a version of it, an ‘Aristotelian’ philosophy of the internet. First, a short overview of some recent trends in internet research is presented. This train of thought leads to a proposal of understanding the nature of the internet in the spirit of the Aristotelian philosophy, i.e., to conceive “the internet as the internet”, as a totality of all its aspects, as a whole entity. For this purpose, the internet is explained in four – easily distinguishable, but obviously connected – contexts: we regard it as a system of technology, as an element of communication, as a cultural medium, and as an independent organism. Based on these investigations we conclude that the internet is the medium of a new mode of human existence created by late modern man; a mode that is built on earlier (i.e., natural and social) spheres of existence and yet is markedly different from them. We call this newly formed existence web-life.
This paper is the second half of an invited paper given by the author to the international conference, promoted by the UNESCO Philosophy Forum, to celebrate the fiftieth anniversary of the founding of the organisation (Paris, 14–17 March 1995). The first half, which deals with a slightly different theme, is published as an Article earlier in this issue.
The Internet is like a new country, with a growing population of millions of well educated citizens. If it wants to keep track of its own cultural achievements in real time, it will have to provide itself with an infostructure like a virtual National Library system. This paper proposes that institutions all over the world should take full advantage of the new technologies available, and promote and coordinate such a global service. This is essential in order to make possible a really efficient management of human knowledge on a global scale.
This article is a modified version of a paper I gave to the conference Philosophy & Informatics - First Italian Conference on the use of ICT in philosophical disciplines, promoted by the Italian Philosophical Association (University of Rome "La Sapienza", 23-24 November, 1995).
Social machines are a prominent focus of attention for those who work in the field of Web and Internet science. Although a number of online systems have been described as social machines, there is, as yet, little consensus as to the precise meaning of the term “social machine.” This presents a problem for the scientific study of social machines, especially when it comes to the provision of a theoretical framework that directs, informs, and explicates the scientific and engineering activities of the social machine community. The present paper outlines an approach to understanding social machines that draws on recent work in the philosophy of science, especially work in so-called mechanical philosophy. This is what might be called a mechanistic view of social machines. According to this view, social machines are systems whose phenomena are explained via an appeal to socio-technical mechanisms. We show how this account is able to accommodate a number of existing attempts to define the social machine concept, thereby yielding an important opportunity for theoretical integration.
Arguments for extended cognition and the extended mind are typically directed at human-centred forms of cognitive extension—forms of cognitive extension in which the cognitive/mental states/processes of a given human individual are subject to a form of extended or wide realization. The same is true of debates and discussions pertaining to the possibility of Web-extended minds and Internet-based forms of cognitive extension. In this case, the focus of attention concerns the extent to which the informational and technological elements of the online environment form part of the machinery of the (individual) human mind. In this paper, we direct attention to a somewhat different form of cognitive extension. In particular, we suggest that the Web allows human individuals to be incorporated into the computational/cognitive routines of online systems. These forms of computational/cognitive extension highlight the potential of the Web and Internet to support bidirectional forms of computational/cognitive incorporation. The analysis of such bidirectional forms of incorporation broadens the scope of philosophical debates in this area, with potentially important implications for our understanding of the foundational notions of extended cognition and the extended mind.
In this paper, we introduce the Japanese philosopher Tetsurō Watsuji’s phenomenology of aidagara (“betweenness”) and use his analysis in the contemporary context of online space. We argue that Watsuji develops a prescient analysis anticipating modern technologically-mediated forms of expression and engagement. More precisely, we show that instead of adopting a traditional phenomenological focus on face-to-face interaction, Watsuji argues that communication technologies — which now include Internet-enabled technologies and spaces — are expressive vehicles enabling new forms of emotional expression, shared experiences, and modes of betweenness that would be otherwise inaccessible. Using Watsuji’s phenomenological analysis, we argue that the Internet is not simply a sophisticated form of communication technology that expresses our subjective spatiality (although it is), but that it actually gives rise to new forms of subjective spatiality itself. We conclude with an exploration of how certain aspects of our online interconnections are hidden from lay users in ways that have significant political and ethical implications.
2020 is the year of the first pandemic lived through the Internet. More than half of the world population is now online and, because of self-isolation, our moral and social lives unfold almost exclusively online. Two pressing questions arise in this context: how much can we rely on the Internet, as a set of technologies, and how much should we trust online platforms and applications? In order to answer these two questions, I develop an argument based on two fundamental assumptions: the development stage of today’s Internet is a contingent and not a necessary one, and technologies are never neutral. My argument is that today’s pandemic produced an acceleration of the change which started almost 30 years ago, one of online migration, datafication, and algorithmization of day-to-day life, a change in which digital technologies take on the classic role of institutions, both formal and informal. My thesis is a descriptive one, although it has some normative implications: this takeover is not just a translation, but a modification or mutation which demands a creative collective answer, although users do not have direct power over it. When they play the role of institutions, these technologies are surrogates or simulacra. I offer the paradigmatic case of online education, an experiment we were forced into because of the pandemic, in order to show that the institutional matrix of education cannot be reproduced online, due to the limitations of the medium and of the applications, especially those concerning physical presence. In the end, I critically address the question of the role played by Artificial Intelligence in the pandemic, and then show that open-source ethics was more useful in approaching, scientifically and technologically, the problems generated by the epidemic.
The alt-right claims it responsibly advocates for its positions while the Ku Klux Klan was "ad hoc." This allows its adherents to accept the philosophy of white nationalism while rejecting comparisons with prior white nationalist organizations. I confront this by comparing the methodologies of alt-right trolls and the KKK. After studying each movement's genesis in pranks done for amusement, I demonstrate that each uses threats to police behavior, encourages camaraderie around ethnic heritage, and manipulates politics to exclude voices from public deliberation. Differences between alt-right trolls and the KKK originate in the technologies they use, not in a concern for responsible advocacy.
The Internet has been identified in human enhancement scholarship as a powerful cognitive enhancement technology. It offers instant access to almost any type of information, along with the ability to share that information with others. The aim of this paper is to critically assess the enhancement potential of the Internet. We argue that unconditional access to information does not lead to cognitive enhancement. The Internet is not a simple, uniform technology, either in its composition or in its use. We look into why the Internet as an informational resource currently fails to enhance cognition. We analyze some of the phenomena that emerge from vast, continual fluxes of information (information overload, misinformation, and persuasive design) and show how they can negatively impact users' cognition. Methods for mitigating these negative impacts are then advanced: individual empowerment, better collaborative systems for sorting and categorizing information, and the use of artificial intelligence assistants that could guide users through the informational space of today's Internet.
A particular aspect of the proposed Internet reform is metanomic rules for referencing and addressing virtual objects, with the aim of bringing them closer to natural languages and of creating a logical system for them. However, reforming the Internet and transforming it into a means of communication and of service to society also requires other measures. These measures include transferring certain names into inalienable ownership, so that their owners can unconditionally and without hindrance exercise their right of publication, i.e. of public statement and expression of their thoughts, which is also important for documentography. Another measure should be the abolition of the entire existing system of Internet governance, i.e. the bureaucracy, after which the technical services would be delegated the duty of ensuring its functioning, taking the stated criticism into account.
The Internet, a revolutionary invention, is continually transforming into new kinds of hardware and software, making it unavoidable for anyone. The type of communication that we see today is either human-to-human or human-to-device, but the Internet of Things (IoT) promises a great future for the Internet in which the type of communication is machine-to-machine (M2M). The Internet of Things (IoT) is defined as a paradigm in which objects equipped with sensors, actuators, and processors communicate with each other to serve a meaningful purpose. In this paper we discuss IoT and its architecture; we then explain different applications of IoT for users, along with its advantages and disadvantages.
With the advent of the Internet of Things (IoT) and data convergence using rich cloud services, data computing has been pushed to new horizons. However, much of the data is now generated at the edge of the network, leading to demanding response-time requirements. A new computing paradigm, edge computing, which processes data at the edge of the network, is the need of the hour. In this paper, we discuss the IoT architecture, the predominant application protocols, the definition of edge computing, and its research opportunities.
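The edge-computing pattern this abstract describes — process raw sensor data locally and forward only a compact summary upstream to reduce response time and bandwidth — can be sketched as follows. This is a minimal illustrative sketch; the sensor values, the threshold, and the summary fields are assumptions for illustration, not taken from the paper:

```python
def edge_process(readings, threshold=30.0):
    """Filter and aggregate raw sensor readings locally at the edge node.

    Only a small summary (counts, mean, anomalous readings) would be
    forwarded to the cloud, rather than the full raw stream.
    """
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),                  # how many samples were seen
        "mean": sum(readings) / len(readings),   # local aggregate
        "alerts": alerts,                        # only anomalies go upstream
    }

# e.g. temperature samples collected at a single sensor
raw = [21.5, 22.0, 35.2, 21.8]
print(edge_process(raw))
```

The design point is simply that the expensive round trip to the cloud happens once per summary, not once per reading.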
The enterprise ontology establishes a clear distinction between the data level, the information level, and the essential level of blockchain transactions and smart contracts. The OntoClean methodology analyzes ontologies on the basis of formal, domain-independent properties (meta-properties), constituting the first attempt to formalize the concepts of ontological analysis for computing systems. The notions are drawn from philosophical ontology. DOI: 10.13140/RG.2.2.12557.49120.
Philosophical work exploring the relation between cognition and the Internet is now an active area of research. Some adopt an externalist framework, arguing that the Internet should be seen as environmental scaffolding that drives and shapes cognition. However, despite growing interest in this topic, little attention has been paid to how the Internet influences our affective life — our moods, emotions, and our ability to regulate these and other feeling states. We argue that the Internet scaffolds not only cognition but also affect. Using various case studies, we consider some ways that we are increasingly dependent on our Internet-enabled "techno-social niches" to regulate the contours of our own affective life and participate in the affective lives of others. We argue further that, unlike many of the other environmental resources we use to regulate affect, the Internet has distinct properties that introduce new dimensions of complexity to these regulative processes. First, it is radically social in a way many of these other resources are not. Second, it is a radically distributed and decentralized resource; no one individual or agent is responsible for the Internet's content or its affective impact on users. Accordingly, while the Internet can profoundly augment and enrich our affective life and deepen our connection with others, there is also a distinctive kind of affective precarity built into our online endeavors as well.
The article examines the language errors occurring in one of the Ukrainian online newspapers over a single (randomly selected) day of the current (2017) year. It points to the high frequency of errors, to the fact that the correctness of texts in Ukrainian online media depends to a large extent on the linguistic quality of their immediate source, and to the fact that the most typical errors of modern Ukrainian frequently appear in online media. It is shown that many of these errors are then spread further by various web portals (which often simply copy the texts) without any language editing. The article stresses the value of linguistic monitoring of Ukrainian online media, not only with respect to language choice but also with respect to linguistic correctness.
Varying degrees of symmetry can exist in a social network's connections. Some early online social networks (OSNs) were predicated on symmetrical connections, such as Facebook 'friendships', where both actors in a 'friendship' have an equal and reciprocal connection. Newer platforms — Twitter, Instagram, and Facebook's 'Pages' included — are counterexamples, where 'following' another actor (friend, celebrity, business) does not guarantee a reciprocal exchange from the other.

This paper argues that the basic asymmetric connections in an OSN lead to emergent asymmetrical behaviour in the OSN's overall influence and connectivity, among other effects. The paper then draws on empirical examples from popular sites (and prior network research) to illustrate how asymmetric connections can render individuals 'voiceless'.

The crux of this paper is an argument from the existentialist viewpoint on how the above asymmetric network properties lead to Sartrean bad faith (Sartre, 1943). Instead of genuine interpersonal connection, one finds varying degrees of pressure to assume the Sartrean 'in-itself' (the en soi) mode-of-being, regardless of the magnitude of 'followers' one has.

Finally, this paper poses an open question: what other philosophical issues does this inherent asymmetry in modern social networking give rise to?
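The symmetric/asymmetric distinction the abstract draws can be made concrete by modelling 'following' as a directed graph and checking which edges are reciprocated. A minimal sketch, in which the actors and edges are illustrative assumptions rather than data from the paper:

```python
# 'Following' as a directed graph: each actor maps to the set of actors
# they follow. alice <-> bob is a symmetric ('friendship'-style) tie;
# alice -> celebrity is an asymmetric, Twitter-style tie.
follows = {
    "alice": {"bob", "celebrity"},
    "bob": {"alice"},
    "celebrity": set(),   # follows no one back
}

def is_reciprocal(graph, a, b):
    """True when both actors follow each other (a symmetric connection)."""
    return b in graph.get(a, set()) and a in graph.get(b, set())

def reciprocity(graph):
    """Fraction of directed edges that are reciprocated."""
    edges = [(a, b) for a, targets in graph.items() for b in targets]
    if not edges:
        return 0.0
    mutual = sum(1 for a, b in edges if a in graph.get(b, set()))
    return mutual / len(edges)

print(is_reciprocal(follows, "alice", "bob"))        # True
print(is_reciprocal(follows, "alice", "celebrity"))  # False
print(round(reciprocity(follows), 2))                # 0.67
```

A network-wide reciprocity well below 1.0, as here, is the structural asymmetry from which the paper's emergent-behaviour argument proceeds.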
The article examines the phenomenon of virtual embodiment, which not only occupies an important place among the interests of the humanities but has also, thanks to the Internet, entered as an element of everyday life into the lives of a significant part of contemporary humanity. Conceptions of the combination of virtuality and embodiment, and the key approaches to analyzing this combination, are considered. The subject of analysis is anonymous forums as a vivid example of a configuration of the virtual body that is radically different from others because of its anonymous mode of representation. The information an individual makes publicly available is perceived as his or her embodiment in the literal sense of the word, and the digital format of that information shapes the characteristics of the being of the individual's virtual body.
The article is devoted to the phenomenon of Internet folklore, examined through the example of the visual image of the "bitard" character. Some arguments are given to justify the use of the concept of "Internet folklore"; the specifics of communication through a particular type of anonymous forum, the imageboard, are described, as is the process by which imageboard folklore forms. One element of this folklore, the "bitard" character, is analyzed as a reflection of the experience of forum users. Its defining features and main characteristics are described.
Social media platforms allow users to perform different speech acts: status updates can be assertives, a like is an expressive, a friendship request is a directive, and so on. But sharing (or "retweeting") seems to lack a fixed illocutionary status: this explains why present controversies concerning the sharing of misinformation have been debated in legal proceedings and discussed from the point of view of personal responsibility without reaching a general consensus. The premise of this paper is that the diffusion of false or unwarranted information can be better analyzed if we consider sharing a precisely definable speech act. I describe some dominant interpretations of the act of sharing that are not, however, sufficient to fully explain it. As an alternative, I show that there is a specific illocutionary structure to the act of sharing, which not only consists in asserting the "shareworthiness" or relevance of a content but is primarily comparable to an "attention-orienting" directive.
In Information and Computer Ethics (ICE), and indeed in normative and evaluative research on Information Technology (IT) in general, researchers have paid little attention to the prudential values of IT. Hence, analyses of the prudential values of IT are mostly found in popular discourse. Yet such analyses are important for answering normative questions about people's well-being. In this chapter, the author urges researchers in ICE to take the analysis of the prudential values of IT seriously. A serious study of such analyses, he argues, will enrich ICE research. But what are these analyses? The author first distinguishes the analysis of the prudential values of IT, i.e. prudential analysis, from other types of normative and evaluative analysis of IT. He then explains why prudential analyses are not taken seriously by researchers in ICE and argues why they deserve more attention. After that, he outlines a framework for analysing and evaluating prudential analyses and applies the framework to an actual prudential analysis. Finally, he briefly concludes the chapter by highlighting the limits of the proposed framework and identifying directions for future research.
Journalism and media studies lack robust theoretical concepts for studying journalistic knowledge generation. More specifically, conceptual challenges attend the emergence of big data and algorithmic sources of journalistic knowledge. A family of frameworks suited to this challenge is provided by "social epistemology": a young philosophical field which regards society's participation in knowledge generation as inevitable. Social epistemology offers the best of both worlds for journalists and media scholars: a thorough familiarity with the biases and failures of obtaining knowledge, and a strong orientation toward best practices in the realm of knowledge-acquisition and truth-seeking. This paper articulates the lessons of social epistemology for two central nodes of knowledge-acquisition in contemporary journalism: human-mediated knowledge and technology-mediated knowledge.
The concept of 'authenticity' is highly valued on social media sites (SMSes), despite its ambiguous nature and definition. One interpretation of 'authenticity' by media scholars is a person's congruence with online portrayals of themselves (e.g. posting spontaneous photographs from their lives, or using real biodata online). For marketers and 'influencers', these patterns of behaviour can achieve certain gains: sales for a business, or the success of a campaign. For existentialist philosophers, using 'authenticity' as a means to an end is against its very definition. In this paper, I investigate what SMS users are looking for in their supposed 'authentic' portrayal online. My experimental approach draws upon empirical data from the Instagram social media site. Using machine learning techniques, descriptions and features of posts, including subjects, captions, and contexts, will be categorised and aggregated. I will then interpret these findings, drawing upon work by Taylor, Golomb, and Guignon, whose accounts of authenticity are based on mid-20th-century existentialists. I argue that the existentialist ideals of authenticity are not necessarily present in contemporary SMS use. I will also argue that the popular interpretation of authenticity on SMSes can be self-defeating when it seeks to turn the 'for-itself' into an 'in-itself'.
Motivated by the significant amount of successful collaborative problem solving activity on the Web, we ask: Can the accumulated information propagation behavior on the Web be conceived as a giant machine, and reasoned about accordingly? In this paper we elaborate a thesis about the computational capability embodied in information sharing activities that happen on the Web, which we term socio-technical computation, reflecting not only explicitly conditional activities but also the organic potential residing in information on the Web.
The debate on whether and how the Internet can protect and foster human rights has become a defining issue of our time. This debate often focuses on Internet governance from a regulatory perspective, underestimating the influence and power of the governance of the Internet's architecture. The technical decisions made by the Internet Standards Developing Organisations (SDOs) that build and maintain the technical infrastructure of the Internet influence how information flows. They rearrange the shape of the technically mediated public sphere, including which rights it protects and which practices it enables. In this article, we contribute to the debate on SDOs' ethical responsibility to bring their work in line with human rights. We defend three theses. First, SDOs' work is inherently political. Second, the Internet Engineering Task Force (IETF), one of the most influential SDOs, has a moral obligation to ensure its work is coherent with, and fosters, human rights. Third, the IETF should enable the actualisation of human rights through the protocols and standards it designs by implementing a responsibility-by-design approach to engineering. We conclude by presenting some initial recommendations on how to ensure that work carried out by the IETF may enable human rights.
The internet has considerably changed epistemic practices in science as well as in everyday life. Apparently, this technology allows more and more people to get access to a huge amount of information. Some people even claim that the internet leads to a democratization of knowledge. In the following text, we analyze this claim. In particular, we focus on a potential change in epistemic structure. Does the internet change our common epistemic practice of relying on expert opinions? Does it alter or even undermine the division of epistemic labor? The epistemological framework of our investigation is a naturalist-pragmatist approach to knowledge. We take it that the internet generates a new environment to which people seeking information must adapt. How can they, and how should they, expand their repertoire of social markers to continue the venture of filtering, and so make use of the possibilities the internet apparently provides? To find answers to these questions we take a closer look at two case studies. The first example is the internet platform WikiLeaks, which allows so-called whistle-blowers to distribute their information anonymously. The second case study concerns the search engine Google and the problem of personalized searches. Both instances confront a knowledge-seeking individual with particular difficulties rooted in the apparent anonymity of the information distributor. Are there ways for the individual to cope with this problem and nonetheless make use of her social markers in this context?
In the last forty years the number of specialized publications on deliberative democracy has increased steadily. Yet one of the greatest challenges we still face is to deepen our knowledge of our actual, singular deliberative cultures. To achieve this, we must use theoretical and methodological approaches that enable us to capture the complexity inherent in the specific forms of deliberation present in areas as different as politics, economics, law, science, urbanism, and the Net.
This review makes a case for scholars putting their work online and for removing paywalls of any kind. Accordingly, this review is in sync with the stated aims of philpapers.org.