My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot adequately capture the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how the intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way we wish to organise our economies and society, rather than from individual actions. It has been apparent for a long time now that it is not sufficient to construct an ethics of science and technology on the basis of the image of a scientist who intentionally wants to create a Frankenstein. Thus, at a minimum we would require an ethical framework that addresses both the aspect of unintentional side-consequences (rather than intentional actions) and the aspect of collective decisions (rather than individual decisions) with regard to complex societal systems, such as the operation of our economy. We do not have such a theory available. More disturbing than the in-principle shortcomings of ethical theory are the shortcomings of conventional ethical practice with respect to technological developments. Below I will suggest how four different developments can illustrate these shortcomings, which centre on the fact that individuals in our society simply cannot be held fully accountable for their individual role within the context of scientific and technological developments. I will call these the shortcomings of a theory (and practice) of individual role responsibility. This may help us to reflect on robotics too, insofar as robots may be perceived as replacements for “roles”.
From there, I will argue why we have to shift our attention to an ethics of knowledge assessment in the framework of deliberative procedures.
A closer look at the theories and questions in philosophy of technology and ethics of technology shows the absence and marginality of non-Western philosophical traditions in the discussions. Although some philosophers have increasingly sought to introduce non-Western philosophical traditions into the debates, there are few systematic attempts to construct and articulate general accounts of ethics and technology based on other philosophical traditions. This situation is understandable, for the questions of modern sciences and technologies appear to have originated in the West; at the same time, the situation is undesirable. The overall aim of this paper, therefore, is to introduce an alternative account of ethics of technology based on the Confucian tradition. In doing so, it is hoped that the current paper can open up a relatively uncharted field in philosophy of technology and ethics of technology.
Martin Peterson’s The Ethics of Technology: A Geometric Analysis of Five Moral Principles offers a welcome contribution to the ethics of technology, understood by Peterson as a branch of applied ethics that attempts ‘to identify the morally right courses of action when we develop, use, or modify technological artifacts’ (3). He argues that problems within this field are best treated by the use of five domain-specific principles: the Cost-Benefit Principle, the Precautionary Principle, the Sustainability Principle, the Autonomy Principle, and the Fairness Principle. These principles are, in turn, to be understood and applied with reference to the geometric method. This method is perhaps the most interesting and novel part of Peterson’s book, and I’ll devote the bulk of my review to it.
There are a number of recent attempts to introduce Confucian values to the ethical analysis of technology. These works, however, have not attended sufficiently to one central aspect of Confucianism, namely Ritual (‘Li’). Li is central to Confucian ethics, and it has been suggested that the emphasis on Li in Confucian ethics is what distinguishes it from other ethical traditions. Any discussion of Confucian ethics for technology, therefore, remains incomplete without accounting for Li. This chapter aims to elaborate on the concept of Confucian Li and discuss its relevance to ethics of technology. Particularly, by referring to Li’s communicative, formative, and aesthetic functions, I formulate an approach to ethics of technology with an emphasis on community, performance, and the aesthetic, and demonstrate how this approach proceeds with the ethical analysis of technology. In doing so, I attempt to answer the question of why Confucianism matters in ethics of technology.
This research project aims to accomplish two primary objectives: (1) to argue that a posthuman ethics in the design of technologies is sound and thus warranted, and (2) to show how existing SBD approaches can begin to envision principled and methodological ways of incorporating nonhuman values into design. In order to do this, this MRP will provide a rudimentary outline of what constitutes SBD approaches. A particular design approach, Value Sensitive Design (VSD), is taken up as an illustrative example given that it, among the other SBD frameworks, most clearly illustrates a principled approach to the integration of values in design. This explication will be followed by the strongest arguments for a posthumanist ethic, primarily drawing from the works of the Italian philosophers Leonardo Caffo and Roberto Marchesini, as well as Francesca Ferrando. In doing so, I will show how the human imperative to account for nonhuman values is a duty and as such must be continually ready-to-hand when making value-critical decisions.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human–robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
The papers in this collection were originally presented at conferences on ethics in science and technology that UNESCO’s Regional Unit for Social and Human Sciences (RUSHSAP) has been convening since 2005. Since intercultural communication and information-sharing are essential components of these deliberations, the books also provide theme-related discourse from the conferences.
Environmental ethics has mostly been practiced separately from philosophy of technology, with few exceptions. However, forward thinking suggests that environmental ethics must become more interdisciplinary when we consider that almost everything affects the environment. Most notably, technology has had a huge impact on the natural realm. In the following discussion, the notion of synthesising philosophy of technology and environmental ethics is explored with a focus on research, development, and policy.
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, artificial intelligence and machine learning (AI/ML) makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
Conventional ethics of how humans should eat often ignore that human life is itself a form of organic activity. Using Henri Bergson’s notions of intellect and intuition, this chapter brings a wider perspective of the human organism to the ethical question of how humans appropriate life for nutriment. The intellect’s tendency to instrumentalize living things as though they were inert seems to subtend the moral failures evident in practices such as industrial animal agriculture. Using the case study of Temple Grandin’s sympathetic cattle technologies, this chapter moves beyond animal welfare concerns to ground food ethics on the phenomenal character of food that is obscured by human activities of fabrication.
Philosophers have talked to each other about moral issues concerning technology, but few of them have talked about issues of technology and the good life, and even fewer have talked about technology and the good life with the public in the form of recommendation. In effect, recommendations for various technologies are often left to technologists and gurus. Given the potential benefits of informing the public about the impacts of technologies on the good life, however, this is a curious state of affairs. In the present paper, I will examine why philosophers are seemingly reluctant to offer recommendations to the public. While there are many reasons for philosophers to refrain from offering recommendations, I shall focus on a specific normative reason. More specifically, it appears that, according to a particular definition, offering recommendations can be viewed as paternalistic, and it is therefore prima facie wrong to do so. I will provide an argument to show that the worry about paternalism is unfounded, because a form of paternalism engendered by technology is inevitable. Given the inevitability of paternalism, I note that philosophers should accept the duty to offer recommendations to the public. I will then briefly turn to design ethics, which has reconceptualised the role of philosophers and, in my mind, fits well with the inevitability of paternalism. Finally, I shall argue that design ethics has to be supplemented by the practice of recommendation if it is to sustain its objective.
The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section “Plasticity in the Human Mind,” we cover some of the main results suggesting that one’s environment can influence one’s psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section “Illusions of Embodiment and Their Lasting Effect,” we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section “The Research Ethics of VR,” with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section “Risks for Individuals and Society,” we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.
Digital tracing technologies are heralded as an effective way of containing SARS-CoV-2 faster than it is spreading, thereby allowing the possibility of easing draconian measures of population-wide quarantine. But existing technological proposals risk addressing the wrong problem. The proper objective is not solely to maximise the ratio of people freed from quarantine but also to ensure that the composition of the freed group is fair. We identify several factors that pose a risk for fair group composition, along with an analysis of general lessons for a philosophy of technology. Policymakers, epidemiologists, and developers can use these risk factors to benchmark proposed technologies, curb the pandemic, and keep public trust.
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacenters (e.g., Amazon). It considers the cloud services providers leasing ‘space in the cloud’ from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private ‘clouders’ using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, and hospitals storing client data in the cloud) will have to follow rather more stringent regulations.
Robust technological enhancement of core cognitive capacities is now a realistic possibility. From the perspective of neutralism, the view that justifications for public policy should be neutral between reasonable conceptions of the good, only members of a subset of the ethical concerns serve as legitimate justifications for public policy regarding robust technological enhancement. This paper provides a framework for the legitimate use of ethical concerns in justifying public policy decisions regarding these enhancement technologies by evaluating the ethical concerns that arise in the context of testing such technologies on nonhuman animals. Traditional issues in bioethics, as well as novel concerns such as the possibility of moral status enhancement, are evaluated from the perspective of neutralism.
In Information and Computer Ethics (ICE), and, in fact, in normative and evaluative research of Information Technology (IT) in general, researchers have paid little attention to the prudential values of IT. Hence, analyses of the prudential values of IT are mostly found in popular discourse. Yet, analyses of the prudential values of IT are important for answering normative questions about people’s well-being. In this chapter, the author urges researchers in ICE to take the analysis of the prudential values of IT seriously. A serious study of such analyses, he argues, will enrich the research of ICE. But what are these analyses? The author will distinguish the analysis of the prudential values of IT, i.e. the prudential analysis, from other types of normative and evaluative analysis of IT. Then, the author will explain why prudential analyses are not taken seriously by researchers in ICE, and argue why they deserve more attention. After that, he will outline a framework to analyse and evaluate prudential analyses, and he will apply the framework to an actual prudential analysis. Finally, he will briefly conclude this chapter by highlighting the limits of the proposed framework and identifying directions for future research.
The prospect of consumable meat produced in a laboratory setting without the need to raise and slaughter animals is both realistic and exciting. Not only could such in vitro meat become popular due to potential cost savings, but it also avoids many of the ethical and environmental problems with traditional meat production. However, as with any new technology, in vitro meat is likely to face some detractors. We examine in detail three potential objections: 1) in vitro meat is disrespectful, either to nature or to animals; 2) it will reduce the number of happy animals in the world; and 3) it will open the door to cannibalism. While each objection has some attraction, we ultimately find that all can be overcome. The upshot is that in vitro meat production is generally permissible and, especially for ethical vegetarians, worth promoting.
In an era of global interdependence, the concept of autonomy may no longer name our core moral need. Shifting friendships and enmities across political boundaries bear significant consequences for the individual. Perhaps social alliances and hostilities have always had an impact on the flourishing of individuals and communities. But globalization (especially as viewed through the technology of the information age) magnifies the impact of external forces on sovereign bodies. These forces remind individuals of the need to establish the right kind of connections, and diminish (but do not exclude) the relative importance of autonomy for moral and political discourse.
The history of sonar technology provides a fascinating case study for philosophers of science. During the first and second World Wars, sonar technology was primarily associated with activity on the part of the sonar technicians and researchers. Usually this activity is concerned with the creation of sound waves under water, as in the classic “ping and echo”. The last fifteen years have seen a shift toward passive, ambient noise “acoustic daylight imaging” sonar. Along with this shift, a new relationship has begun between sonar technicians and environmental ethics. I have found a significant shift in the values, and the environmental ethics, of the underwater community by looking closely at the term “noise” as it has been conceptualized and reconceptualized in the history of sonar technology. To illustrate my view, I will include three specific sets of information: 1) a discussion of the 2003 debate regarding underwater active low-frequency sonar and its impact on marine life; 2) a review of the history of sonar technology in diagrams, abstracts, and artifacts; 3) the latest news from February 2004 on how the military and the acoustic daylight imaging passive sonar community have responded to the current debates.
The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of Internet-enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified by the relevant literature and identifies key themes in the ongoing debate on ethical problems concerning H-IoT.
The burgeoning literature on the ethical issues raised by climate engineering has explored various normative questions associated with the research and deployment of climate engineering, and has examined a number of responses to them. While researchers have noted that the ethical issues raised by climate engineering are global in nature, much of the discussion proceeds predominantly within ethical frameworks in the Anglo-American and European traditions, which presume particular normative standpoints and understandings of the human–nature relationship. The current discussion of the ethical issues, therefore, is far from being a genuine global dialogue. The aim of this article is to address the lack of intercultural exchange by exploring the ethics of climate engineering from the perspective of Confucian environmental ethics. Drawing from the existing discussion on Confucian environmental ethics and Confucian ethics of technology, I discuss what Confucian ethics can contribute to the ethical debate on climate engineering.
This paper explores the usefulness of the 'ethical matrix', proposed by Ben Mepham, as a tool in technology assessment, specifically in food ethics. We consider what the matrix is, how it might be useful as a tool in ethical decision-making, and what drawbacks might be associated with it. We suggest that it is helpful for fact-finding in ethical debates relating to food ethics; but that it is much less helpful in terms of weighing the different ethical problems that it uncovers. Despite this drawback, we maintain that, with some modifications, the ethical matrix can be a useful tool in debates in food ethics. We argue that useful modifications might be to include future generations amongst the stakeholders in the matrix, and to substitute the principle of solidarity for the principle of justice.
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
Developments in biological technology in the last few decades highlight the surprising and ever-expanding practical benefits of stem cells. With this progress, the possibility of combining human and nonhuman organisms is a reality, with ethical boundaries that are not readily obvious. These inter-species hybrids are of a larger class of biological entities called “chimeras.” As the concept of a human-nonhuman creature is conjured in our minds, either incredulous wonder or grotesque horror is likely to follow. This paper seeks to mitigate those worries and demotivate reasonable concerns raised against chimera research, all the while pressing current ethical positions toward their plausible conclusions. In service of this overall aim, first, I intend to show that chimeras are far less foreign and fantastic in light of recent research in the lab; second, I intend to show that anti-realist (so-called “constructivist”) commitments regarding species ontology render the species distinction (i.e., the divide between human and nonhuman) superfluous as a basis for ethical practice; and third, I discuss some prevailing dignity accounts regarding the practical ethics of the creation, research, and treatment of chimeras. Consequently, I intend to show that the adoption of this particular set of views (constructivist ontology, capacity-based ethics) in conjunction with recent research ought to justify a parallel with what we accord to human persons, and that trajectory allows for cases of moral permissibility.
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and the ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues would be to be proactive and address such issues at early stages of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The paper goes on to outline some of the preliminary findings of a European research project that has applied this method.
Safe-by-Design (SBD) frameworks for the development of emerging technologies have become an ever more popular means by which scholars argue that transformative emerging technologies can safely incorporate human values. One such popular SBD methodology is called Value Sensitive Design (VSD). A central tenet of this design methodology is to investigate stakeholder values and design those values into technologies during early-stage research and development (R&D). To accomplish this, the VSD framework mandates that designers consult the philosophical and ethical literature to best determine how to weigh moral trade-offs. However, the VSD framework also concedes the universalism of moral values, particularly the values of freedom, autonomy, equality, trust, privacy, and justice. This paper argues that the VSD methodology, particularly applied to nano-bio-info-cogno (NBIC) technologies, has an insufficient grounding for the determination of moral values. As such, the value investigations of VSD are deconstructed to illustrate both their strengths and weaknesses. This paper also provides possible modalities for the strengthening of the VSD methodology, particularly through the application of moral imagination, and shows how moral imagination exceeds the boundaries of moral intuitions in the development of novel technologies.
Hans Jonas's responsibility ethics is an important achievement of modern technology criticism and of innovation in ethical theory. The maturation of Jonas's ethical thought passed through three main stages: the critique of modern technology, reflection on traditional ethics, and the construction of the "future-oriented" ethics of responsibility. Jonas's criticism of modern technology not only has a strong epochal character but also carries on the spirit of social criticism since Marx. His insight into traditional ethical theory and into the ethical characteristics of the technological age constituted the background of his responsibility ethics. Jonas's responsibility ethics marks not only the return of the spirit of "responsibility" in ethics but also the dimension of the "future", which is the distinctive characteristic of his theory. Through criticism, reflection, and construction, Jonas formed this asymmetrical ethics of "future-oriented" responsibility and faced the challenge of modern technology.
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception; superficial state deception; and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type of deception – superficial state deception – is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal because doing so captures the unique ethical harm to which it gives rise, and justifies special ethical protections against its use.
Collected and edited by Noah Levin

Table of Contents:

UNIT ONE: INTRODUCTION TO CONTEMPORARY ETHICS: TECHNOLOGY, AFFIRMATIVE ACTION, AND IMMIGRATION
1 The “Trolley Problem” and Self-Driving Cars: Your Car’s Moral Settings (Noah Levin)
2 What is Ethics and What Makes Something a Problem for Morality? (David Svolba)
3 Letter from the Birmingham City Jail (Martin Luther King, Jr)
4 A Defense of Affirmative Action (Noah Levin)
5 The Moral Issues of Immigration (B.M. Wooldridge)
6 The Ethics of our Digital Selves (Noah Levin)

UNIT TWO: TORTURE, DEATH, AND THE “GREATER GOOD”
7 The Ethics of Torture (Martine Berenpas)
8 What Moral Obligations do we have (or not have) to Impoverished Peoples? (B.M. Wooldridge)
9 Euthanasia, or Mercy Killing (Nathan Nobis)
10 An Argument Against Capital Punishment (Noah Levin)
11 Common Arguments about Abortion (Nathan Nobis & Kristina Grob)
12 Better (Philosophical) Arguments about Abortion (Nathan Nobis & Kristina Grob)

UNIT THREE: PERSONS, AUTONOMY, THE ENVIRONMENT, AND RIGHTS
13 Animal Rights (Eduardo Salazar)
14 John Rawls and the “Veil of Ignorance” (Ben Davies)
15 Environmental Ethics: Climate Change (Jonathan Spelman)
16 Rape, Date Rape, and the “Affirmative Consent” Law in California (Noah Levin)
17 The Ethics of Pornography: Deliberating on a Modern Harm (Eduardo Salazar)
18 The Social Contract (Thomas Hobbes)

UNIT FOUR: HAPPINESS
19 Is Pleasure all that Matters? Thoughts on the “Experience Machine” (Prabhpal Singh)
20 Utilitarianism (J.S. Mill)
21 Utilitarianism: Pros and Cons (B.M. Wooldridge)
22 Existentialism, Genetic Engineering, and the Meaning of Life: The Fifths (Noah Levin)
23 The Solitude of the Self (Elizabeth Cady Stanton)
24 Game Theory, the Nash Equilibrium, and the Prisoner’s Dilemma (Douglas E. Hill)

UNIT FIVE: RELIGION, LAW, AND ABSOLUTE MORALITY
25 The Myth of Gyges and The Crito (Plato)
26 God, Morality, and Religion (Kristin Seemuth Whaley)
27 The Categorical Imperative (Immanuel Kant)
28 The Virtues (Aristotle)
29 Beyond Good and Evil (Friedrich Nietzsche)
30 Other Moral Theories: Subjectivism, Relativism, Emotivism, Intuitionism, etc. (Jan F. Jacko)
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
In this paper I critique the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual, fully automated, and partially automated. In each of these I examine concerns posed by processing capacity, prejudice towards and profiling of surveilled subjects, and false positives and false negatives. While it might seem as if fully automated surveillance is an improvement over the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to the extremes I argue in favour of partial automation in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance between the surveilled subject and the CCTV operator introduced by automation can have both positive and negative effects. I conclude that in at least the majority of cases more automation is preferable to less within a partially automated system where this does not impinge on efficacy.
Mosquito-borne diseases represent a significant global disease burden, and recent outbreaks of such diseases have led to calls to reduce mosquito populations. Furthermore, advances in ‘gene-drive’ technology have raised the prospect of eradicating certain species of mosquito via genetic modification. This technology has attracted a great deal of media attention, and the idea of using gene-drive technology to eradicate mosquitoes has been met with criticism in the public domain. In this paper, I shall dispel two moral objections that have been raised in the public domain against the use of gene-drive technologies to eradicate mosquitoes. The first objection invokes the concept of the ‘sanctity of life’ in order to claim that we should not drive an animal to extinction. In response, I follow Peter Singer in raising doubts about general appeals to the sanctity of life, and argue that neither individual mosquitoes nor mosquito species considered holistically are appropriately described as bearing a significant degree of moral status. The second objection claims that seeking to eradicate mosquitoes amounts to displaying unacceptable degrees of hubris. Although I argue that this objection also fails, I conclude by claiming that it raises the important point that we need to acquire more empirical data about, inter alia, the likely effects of mosquito eradication on the ecosystem, and the likelihood of gene-drive technology successfully eradicating the intended mosquito species, in order to adequately inform our moral analysis of gene-drive technologies in this context.
The paper Ethical Issues in Arms Technology is written to highlight and explain some ethical issues in arms production. These issues include the act of innovation; issues with weapons of mass destruction; the issue of privacy; humanizing arms technology; and artificial intelligence – military killer robots. The paper advocates a critical evaluation of the structural and potential nature of arms before they are mass-produced. We need to ask and address all possible moral questions at the research level rather than wait for the technologies to be developed and sent to the marketplace. The research employed the methods of rational speculation, critical analysis, evaluation, and prescription to call for an ethical assessment of all arms before they are made available to the markets. This paper advocates a halt to the production of any arms technology that will destroy the relationship between man and his environment. Scientists and arms researchers should place ethics at the heart of their research and production. Before production, every arms technology template must be judged ethically worthy before consideration is made to mass-produce it. With this new development, we shall eradicate the production of arms that add no human value from our world. Humanity does not need everything. We should accept and produce only what we need.
This paper investigates the history of systems of thought different from those of the West. A closer look at Japan’s long philosophical tradition draws attention to the presence of uniquely designed acculturation and training techniques designed as kata or shikata, shedding light on kata as a generic technique of self-perfection and self-transformation. By seeing kata as foundational to the Japanese mind and comparing it to Michel Foucault’s research on technologies of the self, the groundwork is laid for a comparative analysis in terms of the principle of ἐπιμελείσθαι σαυτού, an ethical and aesthetic paradigm dating back to European antiquity. Not only does this bring to light their similarities as techniques of individuation, it also reinforces the importance of Watsuji’s relational understanding of human being.
The ethical neutrality of technology has been widely questioned, for example, in the case of the creation and continued existence of weapons. At stake is whether technology changes the ethical character of our experience: compare the experience of seeing a beating to videotaping it. Interpreting and elaborating on the work of George Grant and Marshall McLuhan, this paper consists of three arguments: 1) the existence of technologies determines the structures of civilization that are imposed on the world, 2) technologies shape what we do and determine how we do it, and 3) technology, unlike any other kind of thing, seems not to make moral demands of us: it is morally neutral. This means that technologies offer us the freedom of imposing on something that does not impose back. The introduction of this experience of freedom changes the way we experience the world in general by introducing a new way of relating to the good, namely by introducing the act of subjective valuation. Each of these points implies that technology structurally changes or interferes with our ethical relationship with things, with the result that through subjective valuation the experience of the obligation to act can be suspended.
Compartmentalizing our distinct personal identities is increasingly difficult in big data reality. Pictures of the person we were on past vacations resurface in employers’ Google searches; LinkedIn, which exhibits our income level, is increasingly used as a dating website. Whether on vacation, at work, or seeking romance, our digital selves stream together. One result is that a perennial ethical question about personal identity has spilled out of philosophy departments and into the real world. Ought we possess one, unified identity that coherently integrates the various aspects of our lives, or incarnate deeply distinct selves suited to different occasions and contexts? At bottom, are we one, or many? The question is not only palpable today, but also urgent because if a decision is not made by us, the forces of big data and surveillance capitalism will make it for us by compelling unity. Speaking in favor of the big data tendency, Facebook’s Mark Zuckerberg promotes the ethics of an integrated identity, a single version of selfhood maintained across diverse contexts and human relationships. This essay goes in the other direction by sketching two ethical frameworks arranged to defend our compartmentalized identities, which amounts to promoting the dis-integration of our selves. One framework connects with natural law, the other with language, and both aim to create a sense of selfhood that breaks away from its own past, and from the unifying powers of big data technology.
We argue that the fragility of contemporary marriages—and the corresponding high rates of divorce—can be explained (in large part) by a three-part mismatch: between our relationship values, our evolved psychobiological natures, and our modern social, physical, and technological environment. “Love drugs” could help address this mismatch by boosting our psychobiologies while keeping our values and our environment intact. While individual couples should be free to use pharmacological interventions to sustain and improve their romantic connection, we suggest that they may have an obligation to do so as well, in certain cases. Specifically, we argue that couples with offspring may have a special responsibility to enhance their relationships for the sake of their children. We outline an evolutionarily informed research program for identifying promising biomedical enhancements of love and commitment.
The technology to create and automate large numbers of fake social media users, or “social bots”, is becoming increasingly more accessible to private individuals. This paper explores one potential use of the technology, namely the creation of “political bots”: social bots aimed at influencing the political opinions of others. Despite initial worries about licensing the use of such bots by private individuals, this paper provides an, albeit limited, argument in favour of this. The argument begins by providing a prima facie case in favour of these political bots and proceeds by attempting to answer a series of potential objections. These objections are based on (1) the dangerous effectiveness of the technology; the (2) corruptive, (3) deceitful and (4) manipulating nature of political bots; (5) the worry that the technology will lead to chaos and be detrimental to trust online; and (6) practical issues involved in ensuring acceptable use of the technology. In all cases I will argue that the objections are overestimated, and that a closer look at the use of political bots helps us realise that using them is simply a new way of speaking up in modern society.
Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves in machine ethics (2.8) and artificial moral agency (2.9). Finally we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, we outline existing positions and arguments, then we analyse how this plays out with current technologies and finally what policy consequences may be drawn.
Despite frequent calls by players, managers and fans, FIFA's resistance to the implementation of goal-line technology (GLT) has been well documented in national print and online media as well as FIFA's own website. In 2010, FIFA president Sepp Blatter outlined eight reasons why GLT should not be used in football. The reasons given by FIFA can be broadly separated into three categories: those dealing with the nature and value of the game of football, those related to issues of justice, and those concerned with the practical implementation of GLT. This paper intends to evaluate these eight reasons in order to assess whether there are, indeed, any good arguments against GLT in football.
There is a large gap between attitude and action when it comes to consumer purchases of ethical food. Amongst the various aspects of this gap, this paper focuses on the difficulty in knowing enough about the various dimensions of food production, distribution and consumption to make an ethical food purchasing decision. There is no single universal definition of ethical food. We suggest that it is possible to support consumers in operationalizing their own ethics of food with the use of appropriate information and communication technology. We consider eggs as an example because locally produced options are available to many people on every continent. We consider the dimensions upon which food ethics may be constructed, then discuss the information required to assess it and the tools that can support it. We then present an overview of opportunities for design of a new software tool. Finally, we offer some points for discussion and future work.
This study develops a Science–Technology–Society (STS)-based science ethics education program for high school students majoring in or planning to major in science and engineering. Our education program includes the fields of philosophy, history, sociology and ethics of science and technology, and other STS-related theories. We expected our STS-based science ethics education program to promote students’ epistemological beliefs and moral judgment development. These psychological constructs are needed to properly solve complicated moral and social dilemmas in the fields of science and engineering. We applied this program to a group of Korean high school science students gifted in science and engineering. To measure the effects of this program, we used an essay-based qualitative measurement. The results indicate that there was significant development in both epistemological beliefs and moral judgment. In closing, we briefly discuss the need to develop epistemological beliefs and moral judgment using an STS-based science ethics education program.
When an employee’s off-duty conduct generates mass social media outrage, managers commonly respond by firing the employee. This, I argue, can be a mistake. The thesis I defend is the following: the fact that a firing would occur in a mass social media outrage context brought about by the employee’s off-duty conduct generates a strong ethical reason weighing against the act. In particular, it contributes to the firing constituting an inappropriate act of blame. Scholars who caution against firing an employee for off-duty conduct have thus far focused primarily on due process related issues or legal concerns pertaining to free speech, lifestyle discrimination, and employment at-will. However, these concerns amount to only a partial, and contingent, diagnosis of what is at issue. I argue that even when due process considerations are met, firings in these contexts can be unjustified. Moreover, even if a business is not concerned with the unethical conduct per se, but is rather strictly concerned with PR, the argument I advance nevertheless provides one important ethical reason that counts against firings in mass social media outrage contexts. Given that managers are often under significant pressure to respond swiftly in cases where an employee is at the center of mass social media outrage, it is especially important that scholars begin to clarify the normative issues. This article builds on the burgeoning philosophical literature on the ethics of blame and provides a novel account of a distinctive ethical concern that arises with firings in mass social media outrage contexts.
According to Facebook’s Mark Zuckerberg, big data reality means, “The days of having a different image for your co-workers and for others are coming to an end, which is good because having multiple identities represents a lack of integrity.” Two sets of questions follow. One centers on technology and asks how big data mechanisms collapse our various selves (work-self, family-self, romantic-self) into one personality. The second question set shifts from technology to ethics by asking whether we want the kind of integrity that Zuckerberg lauds, and that big data technology enables. The negative response is explored by sketching three ethical conceptions of selfhood that recommend personal identity be understood as dis-integrating. The success of the strategies partially depends upon an undermining use of big data platforms.
A prominent view in contemporary philosophy of technology suggests that more technology implies more possibilities and, therefore, more responsibilities. Consequently, the question ‘What technology?’ is discussed primarily on the backdrop of assessing, assigning, and avoiding technology-borne culpability. The view is reminiscent of the Olympian gods’ vengeful and harsh reaction to Prometheus’ play with fire. However, the Olympian view leaves unexplained how technologies increase possibilities. Also, if Olympians are right, endorsing their view will at some point demand putting a halt to technological development, which is absurd. Hence, we defend an alternative perspective on the relationship between responsibility and technology: Our Promethean view recognises technology as the result of collective, forward-looking responsibility and not only as a cause thereof. Several examples illustrate that technologies are not always the right means to tackle human vulnerabilities. Together, these arguments prompt a change in focus from the question ‘What technology?’ to ‘Why technology?’.
Virtues, broadly understood as stable and robust dispositions for certain responses across morally relevant situations, have been a growing topic of interest in psychology. A central topic of discussion has been whether studies showing that situations can strongly influence our responses provide evidence against the existence of virtues (as a kind of stable and robust disposition). In this review, we examine reasons for thinking that the prevailing methods for examining situational influences are limited in their ability to test dispositional stability and robustness, and thus whether virtues exist. We make the case that these limitations can be addressed by aggregating repeated, cross-situational assessments of environmental, psychological and physiological variables within everyday life—a form of assessment often called ecological momentary assessment (EMA, or experience sampling). We then examine how advances in smartphone application (app) technology, and their mass adoption, make these mobile devices an unprecedented vehicle for EMA and, thus, the psychological study of virtue. We additionally examine how smartphones might be used for virtue development by promoting changes in thought and behavior within daily life; a technique often called ecological momentary intervention (EMI). While EMA/I have become widely employed since the 1980s for the purposes of understanding and promoting change amongst clinical populations, few EMA/I studies have been devoted to understanding or promoting virtues within non-clinical populations. Further, most EMA/I studies have relied on journaling, PDAs, phone calls and/or text messaging systems. We explore how smartphone app technology provides a means of making EMA a more robust psychological method, EMI a more robust way of promoting positive change, and, as a result, opens up new possibilities for studying and promoting virtues.
We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory systems respond? This chapter defends three claims by way of response. First, it argues that autonomy is indeed under threat in some new and interesting ways. Second, it evaluates and disputes the claim that we shouldn’t overestimate these new threats because the technology is just an old wolf in a new sheep’s clothing. Third, and finally, it looks at responses to these threats at both the individual and societal level and argues that although we shouldn’t encourage an attitude of ‘helplessness’ among the users of algorithmic tools, there is an important role for legal and regulatory responses to these threats that go beyond what is currently on offer.
In 1998, the Council for Science and Technology established the Bioethics Committee and asked its members to examine the ethical and legal aspects of human cloning. The Committee concluded in 1999 that human cloning should be prohibited, and, based on the report, the government presented a bill for the regulation of human cloning in 2000. After a debate in the Diet, the original bill was slightly modified and issued on December 6, 2000. In this paper, I take a closer look at this process and discuss some of the ethical problems that were debated. Also, I make a brief analysis of the concept “the sprout of human life.” Not only people who object to human cloning, but also many of those who seek to promote research on human cloning admit that a human embryo is the sprout of human life and, hence, that it should be highly respected. I also discuss the function of the language of utilitarianism, the language of skepticism, and the religious language that appeared in the discussion of human cloning in Japan.
Dialogue between feminist and mainstream philosophy of science has been limited in recent years, although feminist and mainstream traditions each have engaged in rich debates about key concepts and their efficacy. Noteworthy criticisms of concepts like objectivity, consensus, justification, and discovery can be found in the work of philosophers of science including Philip Kitcher, Helen Longino, Peter Galison, Alison Wylie, Lorraine Daston, and Sandra Harding. As a graduate student in philosophy of science who worked in both literatures, I was often left with the feeling that I had joined a broken family with two warring factions. This is apparent in the number of anthologies that have emerged on both sides in the aftermath of the “Science Wars” (Gross, Paul R., Norman Levitt, and Martin W. Lewis, eds. 1996; Koertge, Noretta, ed. 1998; Sokal, Alan and Jean Bricmont. 1998; etc.) Depending on one’s perspective on the Science Wars, the breadth of illustrative cases and examples found in Science and Other Cultures can either give more ammunition for the battle, or grounding for a much needed treaty of accord. The most important feature of this book is that it does not merely claim that science is only political, and it does not merely dismiss science as a social phenomenon to be deconstructed using the standard postmodern conceptual tools. Instead, the collection illustrates ways in which postcolonial analysis and multicultural examples can enrich our understanding of “good” science and ethics. Here, the concept of “strong objectivity” from Harding’s earlier books is fleshed out through a variety of cases. The anthology is the culmination of a series of research activities funded by a National Science Foundation grant to the American Philosophical Association. The grant, under the auspices of the NSF Ethics and Values Program, sponsored fourteen summer research projects and thirty-six presentations at four regional APA meetings.