Digital ethics, also known as computer ethics or information ethics, is now a lively field that draws a lot of attention, but how did it come about, and what were the developments that led to its existence? What are the traditions, the concerns, and the technological and social developments that pushed digital ethics forward? How did ethical issues change with the digitalisation of human life? How did the traditional discipline of philosophy respond? The article provides an overview, proposing historical epochs: ‘pre-modernity’ prior to digital computation over data, via the ‘modernity’ of digital data processing, to our present ‘post-modernity’, when not only the data but our lives themselves are largely digital. In each section, the situation in technology and society is sketched, and then the developments in digital ethics are explained. Finally, a brief outlook is provided.
Ethical codes, ethics committees, and respect for autonomy have been key to the development of medical ethics; these are elements that digital ethics would do well to emulate.
The increasingly prominent role of digital technologies during the coronavirus pandemic has been accompanied by concerning trends in privacy and digital ethics. But more robust protection of our rights in the digital realm is possible in the future. After surveying some of the challenges we face, I argue for the importance of diplomacy. Democratic countries must try to come together and reach agreements on minimum standards and rules regarding cybersecurity, privacy, and the governance of AI.
The EDPS Ethics Advisory Group (EAG) has carried out its work against the backdrop of two significant social-political moments: a growing interest in ethical issues, both in the public and in the private spheres, and the imminent entry into force of the General Data Protection Regulation (GDPR) in May 2018. For some, this may nourish a perception that the work of the EAG represents a challenge to data protection professionals, particularly to lawyers in the field, as well as to companies struggling to adapt their processes and routines to the requirements of the GDPR. What is the purpose of a report on digital ethics, if the GDPR already provides all regulatory requirements to protect European citizens with regard to the processing of their personal data? Does the existence of this EAG mean that a new normative ethics of data protection will be expected to fill regulatory gaps in data protection law with more flexible, and thus less easily enforceable, ethical rules? Does the work of the EAG signal a weakening of the foundation of legal doctrine, such as the rule of law, the theory of justice, or the fundamental values supporting human rights, and a strengthening of a more cultural approach to data protection? Not at all. The reflections of the EAG contained in this report are not intended as the continuation of policy by other means. It neither supersedes nor supplements the law or the work of legal practitioners. Its aims and means are different. On the one hand, the report seeks to map and analyse current and future paradigm shifts which are characterised by a general shift from an analogue experience of human life to a digital one. On the other hand, and in light of this shift, it seeks to re-evaluate our understanding of the fundamental values most crucial to the well-being of people: those taken for granted in a data-driven society and those most at risk. The objective of this report is thus not to generate definitive answers, nor to articulate new norms for present and future digital societies, but to identify and describe the most crucial questions for the urgent conversation to come. This requires a conversation between legislators and data protection experts, but also society at large, because the issues identified in this report concern us all, not only as citizens but also as individuals. They concern us in our daily lives, whether at home or at work, and there is no place we could travel to where they would cease to concern us as members of the human species.
Modern digital technologies—from web-based services to Artificial Intelligence (AI) solutions—increasingly affect the daily lives of billions of people. Such innovation brings huge opportunities, but also concerns about the design, development, and deployment of digital technologies. This article identifies and discusses five clusters of risk in the international debate about digital ethics: ethics shopping; ethics bluewashing; ethics lobbying; ethics dumping; and ethics shirking.
What is the relation between the ethics, the law, and the governance of the digital? In this article I articulate and defend what I consider the most reasonable answer.
The web is increasingly inhabited by the remains of its departed users, a phenomenon that has given rise to a burgeoning digital afterlife industry. This industry requires a framework for dealing with its ethical implications. We argue that the regulatory conventions guiding archaeological exhibitions could provide the basis for such a framework.
Common mental health disorders are rising globally, creating a strain on public healthcare systems. This has led to a renewed interest in the role that digital technologies may have for improving mental health outcomes. One result of this interest is the development and use of artificial intelligence for assessing, diagnosing, and treating mental health issues, which we refer to as ‘digital psychiatry’. This article focuses on the increasing use of digital psychiatry outside of clinical settings, in the following sectors: education, employment, financial services, social media, and the digital well-being industry. We analyse the ethical risks of deploying digital psychiatry in these sectors, emphasising key problems and opportunities for public health, and offer recommendations for protecting and promoting public health and well-being in information societies.
Digital tracing technologies are heralded as an effective way of containing SARS-CoV-2 faster than it is spreading, thereby allowing the possibility of easing draconian measures of population-wide quarantine. But existing technological proposals risk addressing the wrong problem. The proper objective is not solely to maximise the ratio of people freed from quarantine but also to ensure that the composition of the freed group is fair. We identify several factors that pose a risk for fair group composition, along with an analysis of general lessons for a philosophy of technology. Policymakers, epidemiologists, and developers can use these risk factors to benchmark proposed technologies, curb the pandemic, and keep public trust.
This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Public health scholars and public health officials increasingly worry about health-related misinformation online, and they are searching for ways to mitigate it. Some have suggested that the tools of digital influence are themselves a possible answer: we can use targeted, automated digital messaging to counter health-related misinformation and promote accurate information. In this commentary, I raise a number of ethical questions prompted by such proposals—and familiar from the ethics of influence and the ethics of AI—highlighting hidden costs of targeted digital messaging that ought to be weighed against the health benefits they promise.
Digital COVID certificates are a novel public health policy to tackle the COVID-19 pandemic. These immunity certificates aim to incentivize vaccination and to deny international travel or access to essential spaces to those who are unable to prove that they are not infectious. In this article, we start by describing immunity certificates and highlighting their differences from vaccination certificates. Then, we focus on the ethical, legal, and social issues involved in their use, namely autonomy and consent, data protection, equity, and international mobility from a global fairness perspective. The main conclusion of our analysis is that digital COVID certificates are only acceptable if they meet certain conditions: that they should not process personal data beyond what is strictly necessary for the aimed goals, that equal access to them should be guaranteed, and that they should not restrict people’s autonomy to access places where contagion is unlikely. We conclude that, if such conditions are guaranteed, digital COVID certificates could contribute to mitigating some of the most severe socioeconomic consequences of the pandemic.
During the past decade, a fairly extensive literature on the digital divide has emerged. Many reports and studies have provided statistical data pertaining to sociological aspects of ‘the divide,’ while some studies have examined policy issues involving universal service and universal access. Other studies have suggested ways in which the digital divide could be better understood if it were ‘reconceptualized’ in terms of an alternative metaphor, e.g. a ‘divide’ having to do with literacy, power, content, or the environment. However, with the exception of Johnson and Koehler, authors have tended not to question, at least not directly, whether the digital divide is, at bottom, an ethical issue. Many authors seem to assume that because disparities involving access to computing technology exist, issues underlying the digital divide are necessarily moral in nature. Many further assume that because this particular ‘divide’ has to do with something that is digital or technological in nature, it is best understood as an issue for computer ethics. The present study, which examines both assumptions, considers four questions: (1) What exactly is the digital divide? (2) Is this ‘divide’ ultimately an ethical issue? (3) Assuming that the answer to (2) is ‘yes,’ is the digital divide necessarily an issue for computer ethics? (4) If the answer to (3) is ‘yes,’ what can/should computer professionals do to bridge the digital divide?
As a full expression of techne, the information society has already posed fundamental ethical problems, whose complexity and global dimensions are rapidly evolving. What is the best strategy to construct an information society that is ethically sound? This is the question I discuss in this paper. The task is to formulate an information ethics that can treat the world of data, information, knowledge, and communication as a new environment, the infosphere. This information ethics must be able to address and solve the ethical challenges arising in the new environment on the basis of the fundamental principles of respect for information, its conservation, and its valorisation. It must be an ecological ethics for the information environment.
Digital me ontology and ethics. 21 December 2020. Ljupco Kocarev and Jasna Koteska. This paper addresses the ontology and ethics of an AI agent called digital me. We define digital me as an autonomous, decision-making, and learning agent that represents an individual and has a practically immortal life of its own. It is assumed that digital me is equipped with the big-five personality model, ensuring that it provides a model of some aspects of a strong AI: consciousness, free will, and intentionality. As computer-based personality judgments are more accurate than those made by humans, digital me can judge the personality of the individual it represents, other individuals’ personalities, and other digital me-s. We describe seven ontological qualities of digital me: a) the double-layer status of Digital Being versus digital me, b) digital me versus real me, c) mind-digital me and body-digital me, d) digital me versus doppelganger (shadow digital me), e) a non-human time concept, f) social quality, g) practical immortality. We argue that, with the advancement of AI sciences and technologies, there exist two digital me thresholds. The first threshold defines a digital me having some (rudimentary) form of consciousness, free will, and intentionality. The second threshold assumes that digital me is equipped with moral learning capabilities, implying that, in principle, digital me could develop its own ethics, which may differ significantly from humans’ understanding of ethics. Finally, we discuss the implications of digital me for metaethics, normative and applied ethics, and the implementation of the Golden Rule in digital me-s, and we suggest two sets of normative principles for digital me: consequentialist and duty-based digital me principles. The authors are ordered alphabetically and contributed equally to the paper.
Background: The concept of digital twins has great potential for transforming the existing health care system by making it more personalized. As a convergence of health care, artificial intelligence, and information and communication technologies, personalized health care services that are developed under the concept of digital twins raise a myriad of ethical issues. Although some of the ethical issues are known to researchers working on digital health and personalized medicine, there is currently no comprehensive review that maps the major ethical risks of digital twins for personalized health care services. Objective: This study aims to fill the research gap by identifying the major ethical risks of digital twins for personalized health care services. We first propose a working definition for digital twins for personalized health care services to facilitate future discussions on the ethical issues related to these emerging digital health services. We then develop a process-oriented ethical map to identify the major ethical risks in each of the different data processing phases. Methods: We resorted to the literature on eHealth, personalized medicine, precision medicine, and information engineering to identify potential issues and developed a process-oriented ethical map to structure the inquiry in a more systematic way. The ethical map allows us to see how each of the major ethical concerns emerges during the process of transforming raw data into valuable information. Developers of a digital twin for a personalized health care service may use this map to identify ethical risks during the development stage in a more systematic way and can proactively address them. Results: This paper provides a working definition of digital twins for personalized health care services by identifying 3 features that distinguish the new application from other eHealth services. On the basis of the working definition, this paper further lays out 10 major operational problems and the corresponding ethical risks. Conclusions: It is challenging to address proactively all the major ethical risks that a digital twin for a personalized health care service might encounter without a conceptual map at hand. The process-oriented ethical map we propose here can assist the developers of digital twins for personalized health care services in analyzing ethical risks in a more systematic manner.
Well before the COVID-19 pandemic, proponents of digital psychiatry were touting the promise of various digital tools and techniques to revolutionize mental healthcare. As social distancing and its knock-on effects have strained existing mental health infrastructures, calls have grown louder for implementing various digital mental health solutions at scale. Decisions made today will shape mental healthcare for the foreseeable future. We argue that bioethicists are uniquely positioned to cut through the hype surrounding digital mental health, which can obscure crucial ethical and epistemic gaps that ought to be considered by policymakers before committing to a digital psychiatric future. Here, we describe four such gaps: the evidence gap, the inequality gap, the prediction-intervention gap, and the safety gap.
As COVID-19 spread, clinicians warned of mental illness epidemics within the coronavirus pandemic. Funding for digital mental health is surging, and researchers are calling for widespread adoption to address the mental health sequelae of COVID-19. We consider whether these technologies improve mental health outcomes and whether they exacerbate existing health inequalities laid bare by the pandemic. We argue that the evidence for efficacy is weak and the likelihood of increasing inequalities is high. First, we review recent trends in digital mental health. Next, we turn to the clinical literature to show that many technologies proposed as a response to COVID-19 are unlikely to improve outcomes. Then, we argue that even evidence-based technologies run the risk of increasing health disparities. We conclude by suggesting that policymakers should not allocate limited resources to the development of many digital mental health tools and should focus instead on evidence-based solutions to address mental health inequalities.
The development of personal technologies has recently shifted from devices that seek to capture user attention to those that aim to improve user well-being. Digital wellness technologies use the same attractive qualities of other persuasive apps to motivate users towards behaviors that are personally and socially valuable, such as exercise, wealth-management, and meaningful communication. While these aims are certainly an improvement over the market-driven motivations of earlier technologies, they retain their predecessors’ focus on influencing user behavior as a primary metric of success. Digital wellness technologies are still persuasive technologies, and they do not evade concerns over whether their influence on users is ethically justified. In this paper, we describe several ethical frameworks with which to assess the justification of digital wellness technologies’ influence on users. We propose that while some technologies help users to complete tasks and satisfy immediate preferences, other technologies encourage users to reflect on the values underlying their habits and teach them to evaluate their lives’ competing demands. While the former approach to digital wellness technology is not unethical, we propose that the latter approach is more likely to lead to skillful user engagement with technology.
Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not yet been sufficiently addressed. This paper aims to close the gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user’s reasons. I support this claim by defending a novel account of manipulation and by showing that some intelligent software agents are manipulative in this sense. Apart from revealing an a priori reason for thinking that some intelligent software agents are manipulative, the paper offers a framework for further empirical investigation of manipulation online.
Marginalized communities are confronted with issues resulting from their marginalization, such as exclusion, invisibility, misrepresentation, and hate speech, not only offline but, due to digital change, increasingly online. Our research project DigitalDialog21 aims at evaluating the effects of digital change on society and how digital change, with the risks and possibilities that come with it, is perceived by the population. Digital change is understood in this project as a factor of social change. By investigating digital change and its effects on society, we are able to draw more general inferences on how societies change socially and what needs to be done in education to establish digital trust. In 2017, the Digital Evolution Index observed an increasing trust deficit and skepticism towards digitalization in both highly industrialized and less industrialized countries. We draw inferences from these observations and hypothesize that this trust deficit and skepticism towards digitalization are especially prevalent in marginalized communities, that is, in communities that are structurally and systemically disadvantaged and that experience societal marginalization. This could be because, as a member of a marginalized community, one might quickly find the issues resulting from marginalization to be present in digital spaces just as they are present in non-digital spaces. This paper critically examines how attitudes in marginalized communities towards digital media are influenced by digital change. Do the risks of digitalization outweigh the advantages for marginalized communities? Are digital media perceived to provide advantages such as increased visibility and representation, or are they perceived to fuel discrimination against marginalized people? Our research project brings together secondary analyses of existing studies with the results of our own qualitative interview study on marginalized people's perceptions of digital change. The paper presents some example interviews and thereby analyzes if and how attitudes in the margins towards digital media change, what this change consists of, and how we can understand the changing attitudes within an ethical and educational framework. Our findings point to the necessity of increasing ethically based digital literacy (PEAT) in the German education system, starting in early education. Marginalized groups in particular must be empowered to become digitally competent through problem-oriented education. Independent self-efficacy can increase digital trust in fragmented societies. The paper concludes by introducing media ethics tools that are being developed as part of the project.
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.
Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development, and deployment of algorithms.
In this paper, it is argued that bridging the digital divide may cause a new ethical and social dilemma. Using Hardin's Tragedy of the Commons, we show that an improper opening and enlargement of the digital environment (Infosphere) is likely to produce a Tragedy of the Digital Commons (TDC). In the course of the analysis, we explain why Adar and Huberman's previous use of Hardin's Tragedy to interpret certain recent phenomena in the Infosphere (especially peer-to-peer communication) may not be entirely satisfactory. We then seek to provide an improved version of the TDC that avoids the possible shortcomings of their model. Next, we analyse some problems encountered by the application of classical ethics in the resolution of the TDC. In the conclusion, we outline the kind of work that will be required to develop an ethical approach that may bridge the digital divide but avoid the TDC.
Goffman’s (1959) dramaturgical identity theory requires modification when theorising about presentations of self on social media. This chapter contributes to these efforts, refining a conception of digital identities by differentiating them from ‘corporatised identities’. Armed with this new distinction, I ultimately argue that social media platforms’ production of corporatised identities undermines their users’ autonomy and digital well-being. This follows from the disentanglement of several commonly conflated concepts. Firstly, I distinguish two kinds of presentation of self that I collectively refer to as ‘expressions of digital identity’. These digital performances (boyd 2007) and digital artefacts (Hogan 2010) are distinct, but often confused. Secondly, I contend this confusion results in the subsequent conflation of corporatised identities – poor approximations of actual digital identities, inferred and extrapolated by algorithms from individuals’ expressions of digital identity – with digital identities proper. Finally, and to demonstrate the normative implications of these clarifications, I utilise MacKenzie’s (2014, 2019) interpretation of relational autonomy to propose that designing social media sites around the production of corporatised identities, at the expense of encouraging genuine performances of digital identities, has undermined multiple dimensions of this vital liberal value. In particular, the pluralistic range of authentic preferences that should structure flourishing human lives are being flattened and replaced by commercial, consumerist preferences. For these reasons, amongst others, I contend that digital identities should once again come to drive individuals’ actions on social media sites. Only upon doing so can individuals’ autonomy, and control over their digital identities, be rendered compatible with social media.
This article analyses the ethical aspects of multistakeholder recommendation systems (RSs). Following the most common approach in the literature, we assume a consequentialist framework to introduce the main concepts of multistakeholder recommendation. We then consider three research questions: who are the stakeholders in an RS? How are their interests taken into account when formulating a recommendation? And what is the scientific paradigm underlying RSs? Our main finding is that multistakeholder RSs (MRSs) are designed and theorised, methodologically, according to neoclassical welfare economics. We consider and reply to some methodological objections to MRSs on this basis, concluding that the multistakeholder approach offers the resources to understand the normative social dimension of RSs.
The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. Following a brief overview of the gig economy, its scope and scale, we map the key ethical problems that it gives rise to, as they are discussed in the relevant literature. We map them onto three categories: the new organisation of work (what is done), the new nature of work (how it is done), and the new status of workers (who does it). We then evaluate a recent initiative from the EU that seeks to address the challenges of the gig economy. The 2019 report of the European High-Level Expert Group on the Impact of the Digital Transformation on EU Labour Markets is a positive step in the right direction. However, we argue that ethical concerns relating to algorithmic systems as mechanisms of control, and the discrimination, exclusion and disconnectedness faced by gig workers require further deliberation and policy response. A brief conclusion completes the analysis. The appendix presents the methodology underpinning our literature review.
The relevance of the study stems from the need to transform educational methods and technologies so that they can satisfy the cognitive, social, and emotional needs of people in the digital world. The modern education system is focused on implementing educational strategies that meet high ethical and technical standards. The purpose of the article is to study humanization as a direction for the development of education in the digital age. The methodological basis of this study is an understanding of the principle of humanization as combining the general cultural, social, moral, and professional development of an individual. It is shown that digital devices are an integral part of the identity of a modern person, which creates a need to develop media literacy among students through media education. As part of the humanization of education, educational institutions are implementing project-based activities. This instructional format transforms the roles of participants in educational processes as well as the methods and technologies used to implement it. The results indicate that the goals of modern education can be achieved provided that the principles of humanization and individualization are implemented in the educational process.
Recent years have seen growing public concern about the effects of persuasive digital technologies on public mental health and well-being. As the draws on our attention reach such staggering scales and as our ability to focus our attention on our own considered ends erodes ever further, the need to understand and articulate what is at stake has become pressing. In this ethical viewpoint, we explore the concept of attentional harms and emphasize their potential seriousness. We further argue that the acknowledgment of these harms has relevance for evolving debates on digital inequalities. An underdiscussed aspect of web-based inequality concerns the persuasions, and even the manipulations, that help to generate sustained attentional loss. These inequalities are poised to grow, and as they do, so will concerns about justice with regard to the psychological and self-regulatory burdens of web-based participation for different internet users. In line with calls for multidimensional approaches to digital inequalities, it is important to recognize these potential harms as well as to empower internet users against them even while expanding high-quality access.
In this article we raise a problem, and we offer two practical contributions to its solution. The problem is that academic communities interested in digital publishing do not have adequate tools to help them in choosing a publishing model that suits their needs. We believe that excessive focus on Open Access (OA) has obscured some important issues; moreover, exclusive emphasis on increasing openness has contributed to an agenda and to policies that show clear practical shortcomings. We believe that academic communities have different needs and priorities; therefore there cannot be a ranking of publishing models that fits all and is based on only one criterion or value. We thus believe that two things are needed. First, communities need help in working out what they want from their digital publications. Their needs and desiderata should be made explicit and their relative importance estimated. This exercise leads to the formulation and ordering of their objectives. Second, available publishing models should be assessed on the basis of these objectives, so as to choose one that satisfies them well. Accordingly, we have developed a framework that assists communities in going through these two steps. The framework can be used informally, as a guide to the collection and systematic organization of the information needed to make an informed choice of publishing model. In order to do so, it maps the values that should be weighed and the technical features that embed them. Building on our framework, we also offer a method to produce ordinal and cardinal scores of publishing models. When these techniques are applied, the framework becomes a formal decision-making tool. Finally, the framework stresses that, while the OA movement tackles important issues in digital publishing, it cannot incorporate the whole range of values and interests that are at the core of academic publishing. Therefore the framework suggests a broader agenda that is relevant to making better policy decisions around academic publishing and OA.
At the beginning of the COVID-19 pandemic, high hopes were placed on digital contact tracing. Digital contact tracing apps can now be downloaded in many countries, but as further waves of COVID-19 tear through much of the northern hemisphere, these apps are playing a less important role in interrupting chains of infection than anticipated. We argue that one of the reasons for this is that most countries have opted for decentralised apps, which cannot provide a means of rapidly informing users of likely infections while avoiding too many false positive reports. Centralised apps, in contrast, have the potential to do this. But policy making was influenced by public debates about the right app configuration, which have tended to focus heavily on privacy, and are driven by the assumption that decentralised apps are “privacy preserving by design”. We show that both types of apps are in fact vulnerable to privacy breaches, and, drawing on principles from safety engineering and risk analysis, compare the risks of centralised and decentralised systems along two dimensions, namely the probability of possible breaches and their severity. We conclude that a centralised app may in fact minimise overall ethical risk, and contend that we must reassess our approach to digital contact tracing, and should, more generally, be cautious about a myopic focus on privacy when conducting ethical assessments of data technologies.
The harms associated with wireless mobile devices (e.g. smartphones) are well documented. They have been linked to anxiety, depression, diminished attention span, sleep disturbance, and decreased relationship satisfaction. Perhaps what is most worrying from a moral perspective, however, is the effect these devices can have on our autonomy. In this article, we argue that there is an obligation to foster and safeguard autonomy in ourselves, and we suggest that wireless mobile devices pose a serious threat to our capacity to fulfill this obligation. We defend the existence of an imperfect duty to be a ‘digital minimalist’. That is, we have a moral obligation to be intentional about how and to what extent we use these devices. The empirical findings already justify prudential reasons in favor of digital minimalism, but the moral duty is distinct from and independent of prudential considerations.
Online technologies enable vast amounts of data to outlive their producers online, thereby giving rise to a new, digital form of afterlife presence. Although researchers have begun investigating the nature of such presence, academic literature has until now failed to acknowledge the role of commercial interests in shaping it. The goal of this paper is to analyse what those interests are and what ethical consequences they may have. This goal is pursued in three steps. First, we introduce the concept of the Digital Afterlife Industry (DAI), and define it as an object of study. Second, we identify the politico-economic interests of the DAI. For this purpose, we develop an analytical approach based on an informational interpretation of Marxian economics. Third, we explain the practical manifestations of these interests using four real-life cases. The findings expose the incentives of the DAI to alter what is referred to as the “informational bodies” of the dead, which in turn is to be seen as a violation of the principle of human dignity. To prevent such consequences, we argue that the ethical conventions that guide trade with remains of organic bodies may serve as a good model for future regulation of the DAI.
The first chapter of The Sympathy of Things, published in Research & Design: Textile Tectonics (2011). It develops the notion of a “gothic ontology”, which inverts Deleuze’s baroque ontology of the fold. Where in the universe of the fold continuity precedes singularity, in the gothic singularity precedes continuity. The reversal is based on the Ruskinian notion of the rib, which is the source of “changefulness”, expressed through “millions of variations” of figures. Figures move and change only to interact with or connect to other figures, i.e. to build configurations. Ruskin’s notion of savageness, which is always categorized as an ethics-over-aesthetics, now becomes a notion of craft that is inherent to matter, not to humans.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
This article presents the first systematic analysis of the ethical challenges posed by recommender systems through a literature review. The article identifies six areas of concern and maps them onto a proposed taxonomy of different kinds of ethical impact. The analysis uncovers a gap in the literature: currently, user-centred approaches do not consider the interests of a variety of other stakeholders—as opposed to just the receivers of a recommendation—in assessing the ethical impacts of a recommender system.
In this chapter we argue that emotions are mediated in an incomplete way in online social media because of the heavy reliance on textual messages, which fosters a rationalistic bias and an inclination towards less nuanced emotional expressions. This incompleteness can happen either by obscuring emotions, showing less than the original intensity, misinterpreting emotions, or eliciting emotions without feedback and context. Online interactions and deliberations tend to contribute to rather than overcome stalemates and informational bubbles, partially due to the prevalence of anti-social emotions. It is tempting to see emotions as the cause of the problem of online verbal aggression and bullying. However, we argue that social media are actually designed in a predominantly rationalistic way, because of the reliance on text-based communication, thereby filtering out social emotions and leaving space for easily expressed antisocial emotions. Based on research on emotions that sees them as key ingredients of moral interaction and deliberation, as well as on research on text-based versus non-verbal communication, we propose a richer understanding of emotions, one that requires different designs of online deliberation platforms. We propose that such designs should move away from text-centred designs and should find ways to incorporate the complete expression of the full range of human emotions, so that these can play a constructive role in online deliberations.
In this review, we present some ethical imperatives observed during this pandemic from a data ethics perspective. Our exposition connects recurrent ethical problems in the discipline, such as privacy, surveillance, transparency, accountability, and trust, to broader societal concerns about equality, discrimination, and justice. We acknowledge the significant role of data ethics in developing technological, inclusive, and pluralist societies.
In this chapter, we argue that the web is a poietically-enabling environment, which both enhances and requires the development of a “constructionist ethics”. We begin by explaining the appropriate concept of “constructionist ethics” and analysing virtue ethics as the primary example. We then show why CyberEthics (or Computer Ethics, as it is also called) cannot be based on virtue ethics, yet needs to retain a constructionist approach. After providing evidence for significant poietic uses of the web, we argue that ethical constructionism is not only facilitated by the web, but is also what the web requires as an ethics of the digital environment. In conclusion, we relate the present discussion to standard positions in CyberEthics and to a broader project for Information Ethics.
It is well known that on the Internet, computer algorithms track our website browsing, clicks, and search history to infer our preferences, interests, and goals. The nature of this algorithmic tracking remains unclear, however. Does it involve what many cognitive scientists and philosophers call ‘mindreading’, i.e., an epistemic capacity to attribute mental states to people to predict, explain, or influence their actions? Here I argue that it does. This is because humans are in a particular way embedded in the process of algorithmic tracking. Specifically, if we endorse common conditions for extended cognition, then human mindreading (by website operators and users) is often literally extended into, that is, partly realized by, not merely causally coupled to, computer systems performing algorithmic tracking. The view that human mindreading extends outside the body into computers in this way has significant ethical advantages. It points to new conceptual ways to reclaim our autonomy and privacy in the face of increasing risks of computational control and online manipulation. These benefits speak in favor of endorsing the notion of extended mindreading.
The article entitled “Post-COVID-19: Education and Thai Society in Digital Era” has two objectives: 1) to study digital technology, and 2) to study life in Thailand in the digital era after the COVID-19 pandemic. The study found that new digitized services are delivered on digital platforms, such as ordering food, hailing a taxi, and trading online, and are accessed via smartphone. Information is handled digitally, and public relations, digital marketing, and life in cyberspace play an increasingly prominent role. People communicate through Line, Facebook, and other social media, and have tools to search for knowledge, learn, and improve themselves. People in this new era have a new way of life, with much of their social life taking place in cyberspace. Digital use should therefore be guided by appropriate ethics, attitudes, and values. To live well in the post-COVID-19 digital era, it is important to have digital intelligence (Digital Quotient).
Online therapy sessions and other forms of digital mental health services (DMH) have seen a sharp spike in new users since the start of the COVID-19 pandemic. Having little access to their social networks and support systems, people have had to turn to digital tools and spaces to cope with their experiences of anxiety and loss. With no clear end to the pandemic in sight, many of us are likely to remain reliant upon DMH for the foreseeable future. As such, it is important to articulate some of the specific ways in which the pandemic is affecting our self- and world-relation, so that we can identify how DMH services are best able to accommodate some of the newly emerging needs of their users. In this paper I will identify a specific type of loss brought about by the COVID-19 pandemic and present it as an important concept for DMH. I refer to this loss as loss of perceptual world-familiarity. Loss of perceptual world-familiarity entails a breakdown in the ongoing, effortless responsiveness to our perceptual environment that characterizes much of our everyday lives. To cash this out, I will turn to insights from the phenomenological tradition. Initially, my project is descriptive: I aim to bring out how loss of perceptual world-familiarity is a distinctive form of loss that is deeply pervasive yet easily overlooked, hence the relevance of explicating it for DMH purposes. But I will also venture into the space of the normative, offering some reasons for seeing perceptual world-familiarity as a component of well-being. I conclude the paper with a discussion of how loss of perceptual world-familiarity affects the therapeutic setting now that most, if not all, therapeutic interactions have transitioned to online spaces, and I explore the potential to augment these spaces with social interaction technologies. Throughout, my discussion aims to do justice to the reality that perceptual world-familiarity is not an evenly distributed phenomenon, that factors like disability, gender, and race affect its robustness, and that this ought to be reckoned with when seeking to incorporate the phenomenon into, or mitigate it through, DMH services.
It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that poses significant governance challenges. In this article, we argue that a fruitful way to overcome these challenges is by adopting a pro-ethical approach to design that analyses the system as a whole, keeps society-in-the-loop throughout the process, and distributes responsibility evenly across all nodes in the system.
This theme issue has the founding ambition of landscaping Data Ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing, and use), algorithms (including AI, artificial agents, machine learning, and robots), and corresponding practices (including responsible innovation, programming, hacking, and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data Ethics builds on the foundation provided by Computer and Information Ethics but, at the same time, it refines the approach endorsed so far in this research field by shifting the Level of Abstraction of ethical enquiries from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations — the interactions among hardware, software, and data — rather than on the variety of digital technologies that enable them. And it emphasises the complexity of the ethical challenges posed by Data Science. Because of such complexity, Data Ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of Data Science and its applications within a consistent, holistic, and inclusive framework. Only as a macroethics will Data Ethics provide the solutions that can maximise the value of Data Science for our societies, for all of us, and for our environments.