There are a number of recent attempts to introduce Confucian values into the ethical analysis of technology. These works, however, have not attended sufficiently to one central aspect of Confucianism, namely Ritual (‘Li’). Li is central to Confucian ethics, and it has been suggested that the emphasis on Li is what distinguishes Confucian ethics from other ethical traditions. Any discussion of Confucian ethics for technology, therefore, remains incomplete without accounting for Li. This chapter aims to elaborate on the concept of Confucian Li and discuss its relevance to the ethics of technology. In particular, by referring to Li’s communicative, formative, and aesthetic functions, I formulate an approach to the ethics of technology with an emphasis on community, performance, and the aesthetic, and demonstrate how this approach proceeds with the ethical analysis of technology. In doing so, I attempt to answer the question of why Confucianism matters in the ethics of technology.
Martin Peterson’s The Ethics of Technology: A Geometric Analysis of Five Moral Principles offers a welcome contribution to the ethics of technology, understood by Peterson as a branch of applied ethics that attempts ‘to identify the morally right courses of action when we develop, use, or modify technological artifacts’ (3). He argues that problems within this field are best treated by the use of five domain-specific principles: the Cost-Benefit Principle, the Precautionary Principle, the Sustainability Principle, the Autonomy Principle, and the Fairness Principle. These principles are, in turn, to be understood and applied with reference to the geometric method. This method is perhaps the most interesting and novel part of Peterson’s book, and I’ll devote the bulk of my review to it.
My analysis takes as its point of departure the controversial assumption that contemporary ethical theories cannot adequately capture the ethical and social challenges of scientific and technological development. This assumption is rooted in the argument that classical ethical theory invariably addresses the issue of ethical responsibility in terms of whether and how intentional actions of individuals can be justified. Scientific and technological developments, however, have produced unintentional consequences and side-consequences. These consequences very often result from collective decisions concerning the way we wish to organise our economies and society, rather than from individual actions. It has been apparent for a long time now that it is not sufficient to construct an ethics of science and technology on the basis of the image of a scientist who intentionally wants to create a Frankenstein. Thus, as a minimum we would require an ethical framework that addresses both the aspect of unintentional side consequences (rather than intentional actions) and the aspect of collective decisions (rather than individual decisions) with regard to complex societal systems, such as the operation of our economy. We do not have such a theory available. More disturbing than the in-principle shortcomings of ethical theory are the shortcomings of conventional ethical practice with respect to technological developments. Below I will suggest how four different developments can illustrate these shortcomings, which centre around the fact that individuals in our society simply cannot be held fully accountable for their individual role within the context of scientific and technological developments. I will call these the shortcomings of a theory (and practice) of individual role responsibility. This may help us to reflect on robotics too, insofar as robots may be perceived as replacements for “roles”. From there, I will argue why we have to shift our attention to an ethics of knowledge assessment in the framework of deliberative procedures.
Disruptive technologies can be conceptualized in different ways. Depending on how they are conceptualized, different ethical issues come into play. This article contributes to a general framework to navigate the ethics of disruptive technologies. It proposes three basic distinctions to be included in such a framework. First, emerging technologies may instigate localized “first-order” disruptions, or systemic “second-order” disruptions. The ethical significance of these disruptions differs: first-order disruptions tend to be of modest ethical significance, whereas second-order disruptions are highly significant. Secondly, technologies may be classified as disruptive based on their technological features or based on their societal impact. Depending on which of these classifications one adopts and takes as the starting point of ethical inquiry, different ethical questions are foregrounded. Thirdly, the ethics of disruptive technology raises concerns at four different levels of technology assessment: the technology level, the artifact level, the application level, and the society level. The respective suitability of approaches in technology ethics to address concerns about disruptive technologies co-varies with the respective level of analysis. The article clarifies these distinctions, thereby laying some of the groundwork for an ethical framework tailored for assessing disruptive technologies.
A closer look at the theories and questions in philosophy of technology and ethics of technology shows the absence and marginality of non-Western philosophical traditions in the discussions. Although, increasingly, some philosophers have sought to introduce non-Western philosophical traditions into the debates, there are few systematic attempts to construct and articulate general accounts of ethics and technology based on other philosophical traditions. This situation is understandable, for the questions of modern science and technology appear to have originated in the West; at the same time, the situation is undesirable. The overall aim of this paper, therefore, is to introduce an alternative account of ethics of technology based on the Confucian tradition. In doing so, it is hoped that the current paper can open up a relatively uncharted field in philosophy of technology and ethics of technology.
Robust technological enhancement of core cognitive capacities is now a realistic possibility. From the perspective of neutralism, the view that justifications for public policy should be neutral between reasonable conceptions of the good, only a subset of the relevant ethical concerns can serve as legitimate justifications for public policy regarding robust technological enhancement. This paper provides a framework for the legitimate use of ethical concerns in justifying public policy decisions regarding these enhancement technologies by evaluating the ethical concerns that arise in the context of testing such technologies on nonhuman animals. Traditional issues in bioethics, as well as novel concerns such as the possibility of moral status enhancement, are evaluated from the perspective of neutralism.
Contra mercantile propaganda, technology is "humanized" to the extent that it satisfies, or at least permits satisfaction of, basic human needs or enhancements. To assess a technology's contribution to humanization requires (1) rejection of the primacy of the machine (cyborg model) and commitment to the primacy of the human being (prosthesis model) in man/machine relations, and (2) insistence on the responsibility of managers for the consequences of their technology-related decisions. Such decisions are appropriate in this respect to the extent that they help meet basic human needs rather than artificially engendered needs. Meta-evaluation requires active citizen participation in government regulation of technology.
In this brief text, I will sketch developments in the philosophy of technology in the Netherlands and in Europe since Paul Durbin published his extensive study on the state of the field in 2006.
Mosquito-borne diseases represent a significant global disease burden, and recent outbreaks of such diseases have led to calls to reduce mosquito populations. Furthermore, advances in ‘gene-drive’ technology have raised the prospect of eradicating certain species of mosquito via genetic modification. This technology has attracted a great deal of media attention, and the idea of using gene-drive technology to eradicate mosquitoes has been met with criticism in the public domain. In this paper, I shall dispel two moral objections that have been raised in the public domain against the use of gene-drive technologies to eradicate mosquitoes. The first objection invokes the concept of the ‘sanctity of life’ in order to claim that we should not drive an animal to extinction. In response, I follow Peter Singer in raising doubts about general appeals to the sanctity of life, and argue that neither individual mosquitoes nor mosquito species considered holistically are appropriately described as bearing a significant degree of moral status. The second objection claims that seeking to eradicate mosquitoes amounts to displaying unacceptable degrees of hubris. Although I argue that this objection also fails, I conclude by claiming that it raises the important point that we need to acquire more empirical data about, inter alia, the likely effects of mosquito eradication on the ecosystem, and the likelihood of gene-drive technology successfully eradicating the intended mosquito species, in order to adequately inform our moral analysis of gene-drive technologies in this context.
This special issue of Ethics and Information Technology focuses on the ethics of new and emerging information technology (IT). The papers have been selected from submissions to the sixth international conference on Computer Ethics: Philosophical Enquiry (CEPE2005), which took place at the University of Twente, the Netherlands, July 17–19, 2005.
This collection of papers was originally presented during conferences on ethics in science and technology that UNESCO’s Regional Unit for Social and Human Sciences (RUSHSAP) has been convening since 2005. Since intercultural communication and information-sharing are essential components of these deliberations, the books also provide theme-related discourse from the conferences.
This article presents the first thematic review of the literature on the ethical issues concerning digital well-being. The term ‘digital well-being’ is used to refer to the impact of digital technologies on what it means to live a life that is good for a human being. The review explores the existing literature on the ethics of digital well-being, with the goal of mapping the current debate and identifying open questions for future research. The review identifies major issues related to several key social domains: healthcare, education, governance and social development, and media and entertainment. It also highlights three broader themes: positive computing, personalised human–computer interaction, and autonomy and self-determination. The review argues that these three themes will be central to ongoing discussions and research by showing how they can be used to identify open questions related to the ethics of digital well-being.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, which includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and, finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy—including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy and philosophy of technology. In this paper, I develop and defend a research program for an experimental philosophy of technology.
Cloud computing is rapidly gaining traction in business. It offers businesses online services on demand (such as Gmail, iCloud and Salesforce) and allows them to cut costs on hardware and IT support. This is the first paper in business ethics dealing with this new technology. It analyzes the informational duties of hosting companies that own and operate cloud computing datacenters (e.g., Amazon). It considers the cloud services providers leasing ‘space in the cloud’ from hosting companies (e.g., Dropbox, Salesforce). And it examines the business and private ‘clouders’ using these services. The first part of the paper argues that hosting companies, services providers and clouders have mutual informational (epistemic) obligations to provide and seek information about relevant issues such as consumer privacy, reliability of services, data mining and data ownership. The concept of interlucency is developed as an epistemic virtue governing ethically effective communication. The second part considers potential forms of government restrictions on or proscriptions against the development and use of cloud computing technology. Referring to the concept of technology neutrality, it argues that interference with hosting companies and cloud services providers is hardly ever necessary or justified. It is argued, too, however, that businesses using cloud services (e.g., banks, law firms, and hospitals storing client data in the cloud) will have to follow rather more stringent regulations.
The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of internet-enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified in the relevant literature and identifies key themes in the ongoing debate on ethical problems concerning H-IoT.
For nearly two decades, ethicists have expressed concerns that the further development and use of memory modification technologies (MMTs)—techniques that allow memories to be intentionally and selectively altered—may threaten the very foundations of who we are, our personal identity, and thus pose a threat to our well-being, or even undermine our “humaneness.” This paper examines the potential ramifications of memory-modifying interventions such as changing the valence of targeted memories and selectively deactivating a particular memory, as these interventions appear to be at once both the most clinically promising and the most menacing to identity. However, unlike previous works discussing the potential consequences of MMTs, this article analyzes them in the context of the narrative relational approach to personal identity and potential issues related to autonomy. I argue that such a perspective brings to light the ethical aspects and moral issues arising from the use of MMTs that have remained hidden from previously adopted approaches. In particular, this perspective demonstrates how important the social context in which an individual lives is for the ethical evaluation of a given memory-modifying intervention. I conclude by suggesting that undertaking memory modifications without taking into account the social dimension of a person’s life creates the risk that she will not be able to meet one of the basic human needs—the autonomous construction and maintenance of personal identity. Based on this conclusion, I offer some reflections on the permissibility and advisability of MMTs and what these considerations suggest for the future.
Digital tracing technologies are heralded as an effective way of containing SARS-CoV-2 faster than it is spreading, thereby allowing the possibility of easing draconian measures of population-wide quarantine. But existing technological proposals risk addressing the wrong problem. The proper objective is not solely to maximise the proportion of people freed from quarantine but also to ensure that the composition of the freed group is fair. We identify several factors that pose a risk to fair group composition, along with an analysis of general lessons for a philosophy of technology. Policymakers, epidemiologists, and developers can use these risk factors to benchmark proposed technologies, curb the pandemic, and keep public trust.
The prospect of consumable meat produced in a laboratory setting without the need to raise and slaughter animals is both realistic and exciting. Not only could such in vitro meat become popular due to potential cost savings, but it also avoids many of the ethical and environmental problems with traditional meat production. However, as with any new technology, in vitro meat is likely to face some detractors. We examine in detail three potential objections: (1) in vitro meat is disrespectful, either to nature or to animals; (2) it will reduce the number of happy animals in the world; and (3) it will open the door to cannibalism. While each objection has some attraction, we ultimately find that all can be overcome. The upshot is that in vitro meat production is generally permissible and, especially for ethical vegetarians, worth promoting.
Conventional ethics of how humans should eat often ignore that human life is itself a form of organic activity. Using Henri Bergson’s notions of intellect and intuition, this chapter brings a wider perspective of the human organism to the ethical question of how humans appropriate life for nutriment. The intellect’s tendency to instrumentalize living things as though they were inert seems to subtend the moral failures evident in practices such as industrial animal agriculture. Using the case study of Temple Grandin’s sympathetic cattle technologies, this chapter moves beyond animal welfare concerns to ground food ethics on the phenomenal character of food that is obscured by human activities of fabrication.
Usually technological innovation and artistic work are seen as very distinct practices, and innovation of technologies is understood in terms of design and human intention. Moreover, thinking about technological innovation is usually categorized as “technical” and disconnected from thinking about culture and the social. Drawing on work by Dewey, Heidegger, Latour, and Wittgenstein, and responding to academic discourses about craft and design, ethics and responsible innovation, transdisciplinarity, and participation, this essay questions these assumptions and examines what kind of knowledge and practices are involved in art and technological innovation. It argues that technological innovation is indeed “technical”, but, if conceptualized as techne, can be understood as art and performance. It is argued that in practice, innovative techne is not only connected to episteme as theoretical knowledge but also has the mode of poiesis: it is not just the outcome of human design and intention but rather involves a performative process in which there is a “dialogue” between form and matter and between creator and environment, in which humans and non-humans participate. Moreover, this art is embedded in broader cultural patterns and grammars—ultimately a ‘form of life’—that shape and make possible the innovation. In that sense, there is no gap between science and society—a gap that is often assumed in STS and in, for instance, discourse on responsible innovation. It is concluded that technology and art were only relatively recently, and unfortunately, divorced conceptually, but that in practices and performances they were always linked. If we understand technological innovation as a poetic, participative, and performative process, then bringing together technological innovation and artistic practices should not be seen as a marginal or luxury project but instead as one that is central, necessary, and vital for cultural-technological change. This conceptualization not only supports a different approach to innovation but also has social-transformative potential and implications for the ethics of technology and responsible innovation.
Now Online: The Ethics of Rationalism & Empiricism. Author: Irfan Ajvazi. Table of Contents: Chapter I: The Ethics of Rationalism; Chapter II: Karl Popper and Rationalism; Chapter III: Knowledge, Rationalism, Empiricism and the Kantian Synthesis; Chapter IV: Kant’s Knowledge Empiricism and Rationalism; Chapter V: The Radical Rationalism of Rene Descartes; Chapter VI: Was Plato a rationalist or an empiricist?; Chapter VII: What is rationalism for Descartes?; Chapter VIII: What is Empiricism?; Chapter IX: Is the rational-empirical form of epistemology superior to the religious form of epistemology?; Chapter X: How was Bertrand Russell a rationalist? Rationalism is also contrasted with the idea that faith and revelation too are valid sources of knowledge and verification. If you use the methods of the above three doctrines – namely rationalism, empiricism and faith (revelation) – to assess the validity of the same doctrines for all practical purposes, we can see that all of them have their place in life as lived by us every day. The problem arises when the adherents of each of these doctrines claim that only that particular doctrine is valid, to the exclusion of all others. In the march of human progress in all spheres of human endeavor, such as science, technology, art and the efforts for peace-building and social cohesion, among other things, we badly need reason, experience and faith. In the laboratories of science, experiments are conducted, the processes as well as their results are observed, and inferences are made. In such cases, both observation using the senses and logical reasoning are crucial. In order to achieve social or national integration among disparate groups in a society or country, for instance, we need to have faith not only in the goodness of our fellow beings, but also in the religious values of truth, justice and sympathy. The rationalists adopt a one-sided view of the world: they ignore a good share of the profound complexities of the wealth of human life. Their approach is effectively reductive, as they cast doubt on knowledge that is not derived by logical thinking.
The American justice system, from police departments to the courts, is increasingly turning to information technology for help identifying potential offenders, determining where, geographically, to allocate enforcement resources, assessing flight risk and the potential for recidivism amongst arrestees, and making other judgments about when, where, and how to manage crime. In particular, there is a focus on machine learning and other data analytics tools, which promise to accurately predict where crime will occur and who will perpetrate it. Activists and academics have begun to raise critical questions about the use of these tools in policing contexts. In this chapter, I review the emerging critical literature on predictive policing and contribute to it by raising ethical questions about the use of predictive analytics tools to identify potential offenders. Drawing from work on the ethics of profiling, I argue that the much-lauded move from reactive to preemptive policing can mean wrongfully generalizing about individuals, making harmful assumptions about them, instrumentalizing them, and failing to respect them as full ethical persons. I suggest that these problems stem both from the nature of predictive policing tools and from the sociotechnical contexts in which they are implemented...
The goal of this article is to present a first list of ethical concerns that may arise from research and personal use of virtual reality (VR) and related technology, and to offer concrete recommendations for minimizing those risks. Many of the recommendations call for focused research initiatives. In the first part of the article, we discuss the relevant evidence from psychology that motivates our concerns. In Section “Plasticity in the Human Mind,” we cover some of the main results suggesting that one’s environment can influence one’s psychological states, as well as recent work on inducing illusions of embodiment. Then, in Section “Illusions of Embodiment and Their Lasting Effect,” we go on to discuss recent evidence indicating that immersion in VR can have psychological effects that last after leaving the virtual environment. In the second part of the article, we turn to the risks and recommendations. We begin, in Section “The Research Ethics of VR,” with the research ethics of VR, covering six main topics: the limits of experimental environments, informed consent, clinical risks, dual-use, online research, and a general point about the limitations of a code of conduct for research. Then, in Section “Risks for Individuals and Society,” we turn to the risks of VR for the general public, covering four main topics: long-term immersion, neglect of the social and physical environment, risky content, and privacy. We offer concrete recommendations for each of these 10 topics, summarized in Table 1.
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
To teach the ethics of science to science majors, I follow several teachers in the literature who recommend “persona” writing, or the student construction of dialogues between ethical thinkers of interest. To engage science majors in particular, and especially those new to academic philosophy, I recommend constructing persona dialogues from Henri Poincaré’s essay, “Ethics and Science”, and the non-theological third chapter of Pope Francis’s encyclical on the environment, Laudato si. This pairing of interlocutors offers two advantages. The first is that science students are likely to recognize both names, since Poincaré appears in undergraduate mathematics and physics textbooks, and because Francis is an environmentalist celebrity. Hence students show more interest in these figures than in other philosophers. The second advantage is that the third chapter of Laudato si reads like an implicit rebuttal of Poincaré’s essay in many respects, and so contriving a dialogue between those authors both facilitates classroom discussion and deserves attention from professional ethicists in its own right. In this paper, I present my own contrived dialogue between Francis and Poincaré, not for assigning to students as a reading, but as a template for an effective assignment product, and as a crib sheet for educators to preview the richly antiparallel themes between the two works.
In the near future we may be able to manipulate human embryos through genetic intervention. Jürgen Habermas has argued against the development of technologies which could make such intervention possible. His argument has received widespread criticism among bioethicists. These critics argue that Habermas's argument relies on implausible assumptions about human nature. Moreover, they challenge Habermas's claim that genetic intervention adds something new to intergenerational relationships, pointing out that parents already have strong control over their children through education. In this paper a new approach to Habermas's theory is suggested which makes clear that he has a strong point against genetic intervention. A more charitable reading of Habermas with respect to his assumptions concerning human nature is presented. Moreover, Habermas's assumption concerning the power of genetic control is evaluated. By means of a close comparison of genetic and educational control, it is shown that Habermas's argument relies on much weaker assumptions than generally understood.
Compartmentalizing our distinct personal identities is increasingly difficult in big data reality. Pictures of the person we were on past vacations resurface in employers’ Google searches; LinkedIn, which exhibits our income level, is increasingly used as a dating website. Whether on vacation, at work, or seeking romance, our digital selves stream together. One result is that a perennial ethical question about personal identity has spilled out of philosophy departments and into the real world. Ought we possess one, unified identity that coherently integrates the various aspects of our lives, or incarnate deeply distinct selves suited to different occasions and contexts? At bottom, are we one, or many? The question is not only palpable today, but also urgent, because if a decision is not made by us, the forces of big data and surveillance capitalism will make it for us by compelling unity. Speaking in favor of the big data tendency, Facebook’s Mark Zuckerberg promotes the ethics of an integrated identity, a single version of selfhood maintained across diverse contexts and human relationships. This essay goes in the other direction by sketching two ethical frameworks arranged to defend our compartmentalized identities, which amounts to promoting the dis-integration of our selves. One framework connects with natural law, the other with language, and both aim to create a sense of selfhood that breaks away from its own past, and from the unifying powers of big data technology.
There are prima facie ethical reasons and prudential reasons for people to avoid or withdraw from social media platforms. But in response to pushes for people to quit social media, a number of authors have argued that there is something ethically questionable about quitting social media: that it involves — typically, if not necessarily — an objectionable expression of privilege on the part of the quitter. In this paper I contextualise privilege-based objections to quitting social media and explain the underlying principles and assumptions that feed into these objections. I show how they misrepresent the kind of act people are performing in quitting, in part by downplaying its role in promoting reforms in communication systems and technologies. And I suggest that this misrepresentation is related to a more widespread, and ultimately insidious, tendency to think of recently-established technological states of affairs as permanent fixtures of our society.
We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy, and how should our legal and regulatory systems respond? This chapter defends three claims by way of response. First, it argues that autonomy is indeed under threat in some new and interesting ways. Second, it evaluates and disputes the claim that we shouldn’t overestimate these new threats because the technology is just an old wolf in a new sheep’s clothing. Third, and finally, it looks at responses to these threats at both the individual and societal level and argues that although we shouldn’t encourage an attitude of ‘helplessness’ among the users of algorithmic tools, there is an important role for legal and regulatory responses to these threats that go beyond what is currently on offer.
The purpose of the present work is to attempt to give a glance at the problem of existential and anthropological risk caused by the contemporary man-made civilization, from the perspective of a comparison and confrontation of aesthetics, the substrate of which is the emotional and metaphorical interpretation of individual subjective values, and politics, fed by the objectively rational interests of social groups. In both cases there is a semantic gap between the represented social reality and its representation in the perception of works of art as well as in political doctrines. The methodology of the research is evolutionary anthropological comparativistics. The originality of the conducted analysis amounts to the following: as an antithesis to biological and social reductionism in the interpretation of the phenomenon of bio-power, a co-evolutionary semantic model is proposed, in accordance with which the described semantic gap is of a substantial nature, related to the complex modular organization of a consistent and adaptive human strategy consisting of three associated but independently functioning modules (genetic, cultural and techno-rational). The evolutionary trajectory of all components of anthropogenesis, including civilizational, cultural and social-political evolution, is identified by the proportion between two macro variables, evolutionary effectiveness and evolutionary stability, i.e. the preservation, in the context of consequential transformations, of some invariants of the species-specific organization of Homo sapiens. It should be noted that inasmuch as, with respect to humans, some modules of the evolutionary strategy assume attributes of self-reflection, it would be more correct to speak of evolutionary correctness, i.e. correspondence to some perfection. As a result, the future of human nature depends not only on the rationalist principles of the ethics of the species Homo, but also on the holistic and emotionally aesthetic image of the «Self». In conclusion, it should be noted that there is a causal link between the development of High Hume (NBIC) technologies and the totality of the trend in the anthropological phenomenon of bio-power that permeates all available human existence in modern civilization. As a result, there is a transformation of contemporary social risk into evolutionary civilizational risk.
This research project aims to accomplish two primary objectives: (1) to propose an argument that a posthuman ethics in the design of technologies is sound and thus warranted, and (2) to show how existent SBD approaches can begin to envision principled and methodological ways of incorporating nonhuman values into design. In order to do this, this MRP will provide a rudimentary outline of what constitutes SBD approaches. A particular design approach, Value Sensitive Design (VSD), is taken up as an illustrative example given that it, among the other SBD frameworks, most clearly illustrates a principled approach to the integration of values in design. This explication will be followed by the strongest arguments for a posthumanist ethic, primarily drawing from the works of the Italian philosophers Leonardo Caffo, Roberto Marchesini, and Francesca Ferrando. In doing so I will show how the human imperative to account for nonhuman values is a duty and as such must be continually ready-to-hand when making value-critical decisions.
Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, we outline existing positions and arguments, then we analyse how this plays out with current technologies and finally what policy consequences may be drawn.
The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts of, and means of amelioration for, AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism and hospitality have a range of context-unique stakeholders that need to be accounted for if the salient design of AI systems is to be achieved. This paper adopts a stakeholder approach to develop a conceptual framework that centralizes human values in designing and deploying AI and robotics systems in tourism and hospitality. The conceptual framework includes several layers – the ‘human-human-AI’ interaction level, direct and indirect stakeholders, and the macroenvironment. The ethical issues at each layer are outlined, as well as some possible solutions to them. Additionally, the paper develops a research agenda on the topic.
In 1998, the Council for Science and Technology established the Bioethics Committee and asked its members to examine the ethical and legal aspects of human cloning. The Committee concluded in 1999 that human cloning should be prohibited, and, based on the report, the government presented a bill for the regulation of human cloning in 2000. After a debate in the Diet, the original bill was slightly modified and issued on December 6, 2000. In this paper, I take a closer look at this process and discuss some of the ethical problems that were debated. I also make a brief analysis of the concept of “the sprout of human life.” Not only people who object to human cloning, but also many of those who seek to promote research on human cloning, admit that a human embryo is the sprout of human life and, hence, that it should be highly respected. I also discuss the function of the language of utilitarianism, the language of skepticism, and the religious language that appeared in the discussion of human cloning in Japan.
According to Facebook’s Mark Zuckerberg, big data reality means, “The days of having a different image for your co-workers and for others are coming to an end, which is good because having multiple identities represents a lack of integrity.” Two sets of questions follow. One centers on technology and asks how big data mechanisms collapse our various selves (work-self, family-self, romantic-self) into one personality. The second question set shifts from technology to ethics by asking whether we want the kind of integrity that Zuckerberg lauds, and that big data technology enables. The negative response is explored by sketching three ethical conceptions of selfhood that recommend personal identity be understood as dis-integrating. The success of the strategies partially depends upon an undermining use of big data platforms.
Ethical issues of information and communication technologies (ICTs) are important because they can have significant effects on human liberty, happiness, and people's ability to lead a good life. They are also of functional interest because they can determine whether technologies are used and whether their positive potential can unfold. For these reasons, policy makers are interested in finding out what these issues are and how they can be addressed. The best way of creating ICT policy that is sensitive to ethical issues would be to be proactive and address such issues at early stages of the technology life cycle. The present paper uses this position as a starting point and discusses how knowledge of the ethical aspects of emerging ICTs can be gained. It develops a methodology that goes beyond established futures methodologies to cater for the difficult nature of ethical issues. The paper goes on to outline some of the preliminary findings of a European research project that has applied this method.
Originally applied to domestic and lab animals, assisted reproduction technologies (ARTs) have also found application in conservation breeding programs, where they can make the genetic management of populations more efficient and increase the number of individuals per generation. However, their application in wildlife conservation opens up new ethical scenarios that have not yet been fully explored. This study presents a frame for the ethical analysis of the application of ART procedures in conservation based on the Ethical Matrix (EM), and discusses a specific case study—ovum pick-up (OPU) procedures performed in the current conservation efforts for the northern white rhinoceros (Ceratotherium simum cottoni)—providing a template for the assessment of ART procedures in projects involving other endangered species.
The relation between Martin Heidegger and radical environmentalism has been a subject of discussion for several years now. On the one hand, Heidegger is portrayed as a forerunner of the deep ecology movement, providing an alternative for the technological age we live in. On the other, commentators contend that the basic thrust of Heidegger’s thought cannot be found in such an ecological ethos. In this article, this debate is revisited in order to answer the question of whether it is possible to conceive of human dwelling on earth in a way which is consistent with the technological world we live in and which heralds another beginning at the same time. Our point of departure in this article is not the work of Heidegger but the affordance theory of James Gibson, which will prove to be highly compatible with the radical environmentalist concept of nature as well as with Heidegger’s concept of the challenging of nature.
Contrary to the tendency towards harmony, consensus and alignment among stakeholders in most of the literature on participation and partnership in corporate social responsibility (CSR) and responsible innovation (RI) practices, in this chapter we ask which concept of participation and partnership is able to account for stakeholder engagement while acknowledging and appreciating stakeholders' fundamentally different judgements, value frames and viewpoints. To this end, we reflect on a non-reductive and ethical approach to stakeholder engagement, collaboration and partnership, inspired by the philosophy of Emmanuel Levinas. We contrast a cognitive approach with an ethical approach to stakeholder engagement, collaboration and partnership, and explore four characteristics of this ethical approach. Based on the ethical approach to stakeholder engagement, collaboration and partnership, we also provide a three-stage framework for partnership formation in CSR and RI practices.
Praised as a panacea for resolving all societal issues, and self-evidently presupposed as technological innovation, the concept of innovation has become the emblem of our age. This is especially reflected in the context of the European Union, where it is considered to play a central role in both strengthening the economy and confronting the current environmental crisis. The pressing question is how technological innovation can be steered in the right direction. To this end, recent frameworks of Responsible Innovation (RI) focus on how to enable outcomes of innovation processes to become societally desirable and ethically acceptable. However, questions with regard to the technological nature of these innovation processes are rarely raised. For this reason, this paper raises the following research question: To what extent is RI possible in the current age, where the concept of innovation is predominantly presupposed as technological innovation? On the one hand, we depart from a post-phenomenological perspective to evaluate the possibility of RI in relation to the particular technological innovations discussed in the RI literature. On the other hand, we emphasize the central role innovation plays in the current age, and suggest that the presupposed concept of innovation projects a techno-economic paradigm. In doing so, we ultimately argue that in the attempt to steer innovation, frameworks of RI are in fact steered by the techno-economic paradigm inherent in the presupposed concept of innovation. Finally, we account for what implications this has for the societal purpose of RI.
The technology to create and automate large numbers of fake social media users, or “social bots”, is becoming increasingly accessible to private individuals. This paper explores one potential use of the technology, namely the creation of “political bots”: social bots aimed at influencing the political opinions of others. Despite initial worries about licensing the use of such bots by private individuals, this paper provides an, albeit limited, argument in favour of this. The argument begins by providing a prima facie case in favour of these political bots and proceeds by attempting to answer a series of potential objections. These objections are based on (1) the dangerous effectiveness of the technology; the (2) corruptive, (3) deceitful and (4) manipulating nature of political bots; (5) the worry that the technology will lead to chaos and be detrimental to trust online; and (6) practical issues involved in ensuring acceptable use of the technology. In all cases I will argue that the objections are overestimated, and that a closer look at the use of political bots helps us realise that using them is simply a new way of speaking up in modern society.
In this paper I critique the ethical implications of automating CCTV surveillance. I consider three modes of CCTV with respect to automation: manual, fully automated, and partially automated. In each of these I examine concerns posed by processing capacity, prejudice towards and profiling of surveilled subjects, and false positives and false negatives. While it might seem as if fully automated surveillance is an improvement over the manual alternative in these areas, I demonstrate that this is not necessarily the case. In preference to the extremes I argue in favour of partial automation, in which the system integrates a human CCTV operator with some level of automation. To assess the degree to which such a system should be automated I draw on the further issues of privacy and distance. Here I argue that the privacy of the surveilled subject can benefit from automation, while the distance between the surveilled subject and the CCTV operator introduced by automation can have both positive and negative effects. I conclude that in at least the majority of cases more automation is preferable to less within a partially automated system, where this does not impinge on efficacy.
Autonomy is one of the basic pillars of a political system like democracy, which is associated with citizens' decision-making capacity as its main moral core. Discoveries in the neurosciences and their application to the fields of marketing and political communication now arouse suspicions about the possible capacity to activate voters' "voting button". The objective of this article is to examine the main works on political neuromarketing and neuropolitics. The purpose of this proposal is to present the specific ethical debates that emerge around political neuromarketing.
Smart Farming Technologies raise ethical issues associated with the increased corporatization and industrialization of the agricultural sector. We explore the concept of biomimicry to conceptualize smart farming technologies as ecological innovations which are embedded in, and in accordance with, the natural environment. Such a biomimetic approach to smart farming technologies takes advantage of their potential to mitigate climate change, while at the same time avoiding the ethical issues related to the industrialization of the agricultural sector. We explore six principles of a natural concept of biomimicry and apply these principles in the context of smart farming technologies.