This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown both to be able to distill these common values and to provide a framework for stakeholder coordination.
The text "Artificial Intelligence and Analytic Pragmatism" is a translation of Chapter 3 of Robert B. Brandom's Between Saying and Doing: Towards an Analytic Pragmatism (Oxford University Press), pp. 69-92.
In this paper, I provide an overview of today’s philosophical approaches to the problem of “intelligence” in the field of artificial intelligence by examining several important papers in phenomenology and the philosophy of biology, such as those on Heideggerian AI, Jonas's metabolism model, and slime-mold-type intelligence.
In two experiments (total N=693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less willing to consider robots as artists than humans, which is partially explained by the fact that they are less disposed to attribute artistic intentions to robots.
There is considerable enthusiasm about the prospect that artificial intelligence (AI) will help to improve the safety and efficacy of health services and the efficiency of health systems. To realize this potential, however, AI systems will have to overcome structural problems in the culture and practice of medicine and the organization of health systems that impact the data from which AI models are built, the environments into which they will be deployed, and the practices and incentives that structure their development. This perspective elaborates on some of these structural challenges and provides recommendations to address potential shortcomings.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development, and deployment of AI for cybersecurity.
Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) defend the need for a novel conceptual model for understanding the systemic legal disruption caused by new technologies such as AI; (ii) situate this model in relation to preceding debates about the interaction of regulation with new technologies (particularly the ‘cyberlaw’ and ‘robolaw’ debates); and (iii) set out a detailed model for understanding the legal disruption precipitated by AI, examining both the pathways stemming from new affordances that can give rise to a regulatory ‘disruptive moment’, and the Legal Development, Displacement, or Destruction that can ensue. The article proposes that this model of legal disruption is broadly generalisable to understanding the legal effects and challenges of other emerging technologies.
Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind.

Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good AI society’; the role and responsibility of the government, the private sector, and the research community in pursuing such a development; and where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports adequately address various ethical, social, and economic topics, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, in the conclusion we suggest a two-pronged approach.
The future rests under the sign of technology. Given the prevalence of technological neutrality and inevitabilism, most conceptualizations of the future tend to ignore moral problems. In this paper we argue that every choice about future technologies is a moral choice and even the most technology-dominated scenarios of the future are, in fact, moral provocations we have to imagine solutions to. We begin by explaining the intricate connection between morality and the future. After a short excursion into the history of Artificial Intelligence, we analyse two possible scenarios, which show that building the future with technology is, first and foremost, a moral endeavor.
This paper investigates the claim that artificial intelligence systems cannot be held morally responsible because they lack the ability for agential self-awareness, i.e., they cannot be aware that they are the agents of an action. The main suggestion is that if agential self-awareness and related first-person representations presuppose an awareness of a self, then the possibility of responsible artificial intelligence systems cannot be evaluated independently of research conducted on the nature of the self. Focusing on a specific account of the self from the phenomenological tradition, this paper suggests that a minimal necessary condition that artificial intelligence systems must satisfy in order to have a capability for self-awareness is having a minimal self, defined as ‘a sense of ownership’. As this sense of ownership is usually associated with having a living body, one suggestion is that artificial intelligence systems must have similar living bodies so they can have a sense of self. Discussing cases of robotic animals as examples of the possibility of artificial intelligence systems having a sense of self, the paper concludes that artificial intelligence systems having a ‘sense of ownership’, or a sense of self, may be a necessary condition for having responsibility.
Today, humanity is trying to turn the artificial intelligence it produces into natural intelligence. Although this effort is technologically exciting, it often raises ethical concerns, and the intellectual ability of artificial intelligence will always bring new questions. Although there have been significant developments regarding the consciousness of artificial intelligence, the issue of consciousness must be fully explained in order to complete this development. When consciousness is fully understood by human beings, the subject of “free will” can be explained. Therefore, human consciousness should be re-examined, and the perceptions of which we are not aware should be examined from a philosophical point of view. The relevance of these unnoticed perceptions to the unconscious, and ultimately their impact on consciousness, goes back to the sources of philosophy; in Hegel, in particular, we may find unusual insights about these perceptions. Consciousness cannot be separated from the unconscious, and should be rethought in the context of memory models and the unconscious in this sense. Seeing how human cognition acts in Hegel's account, especially his treatment of perception, raises the question of the unconscious again. If we expect something different from an artificial intelligence, we need to rethink the artificial cognitive model. This paper argues that without an unconscious component, artificial intelligence cannot approach human cognition.

Keywords: Artificial Intelligence, Creativity, Memory, Perception.
What is the essential ingredient of creativity that only humans, and not machines, possess? Can artificial intelligence help refine the notion of creativity by reference to that essential ingredient? How, if at all, do we need to redefine our conceptual and legal frameworks for rewarding creativity because of this new qualifying, and actually creatively significant, factor?

Those are the questions tackled in this essay. The author’s conclusion is that consciousness, experiential states (such as a raw feel of what it is like to be creating) and propositional attitudes (such as an intention to instigate change by creating) appear pivotal to qualifying an exploratory effort as creativity. Artificial intelligence systems would supposedly be capable of creativity if they could exhibit such states, which philosophers and computer scientists posit as conceptually admissible and practically possible.

The existing legal framework rewards creative endeavours by reference to the novelty or originality of the end result. But this bar is not insurmountable for artificial intelligence: technically speaking, artificial intelligence systems can create works that are novel and/or original. Are we then prepared to grant those systems the legal status of “creators” in their own right? Whom should the associated benefits and rewards be assigned to? How does the position change (or not) based on the qualifying factors set out above? Should the general public benefit from the inventions and creative works of artificial intelligence systems if troves of personal data are the key component that fueled and informed creative choices, and if so, how?
Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely, because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, process, and implications). We analyse this architecture through four accountability goals (compliance, report, oversight, and enforcement). We argue that these goals are often complementary and that policy-makers emphasise or prioritise some over others depending on the proactive or reactive use of accountability and the missions of AI governance.
The publication of the book Beta Writer, Lithium-Ion Batteries: A Machine-Generated Summary of Current Research (New York, NY: Springer, 2019), produced with Artificial Intelligence software, prompts analysis and reflections in several areas. First of all, on what Artificial Intelligence systems are able to do in the production of informative texts. This raises the question of whether and how an Artificial Intelligence software system can be treated as the author of a text it has produced. Evaluating whether this is correct and possible leads us to re-examine the current conception, on which it is taken for granted that the author is a person. This, in turn, when faced with texts produced by Artificial Intelligence systems, necessarily raises the question of whether they, like the author-person, are endowed with agency. The article concludes that Artificial Intelligence systems are characterized by a distributed agency, shared with those who designed them and make them work, and that, in the wake of the reflections of 50 years ago by Barthes and Foucault, it is necessary to define and recognize a new type of author.
For those who find Dreyfus’s critique of AI compelling, the prospects for producing true artificial human intelligence are bleak. An important question thus becomes: what are the prospects for producing artificial non-human intelligence? Applying Dreyfus’s work to this question is difficult, however, because his work is so thoroughly human-centered. Granting Dreyfus that the body is fundamental to intelligence, how are we to conceive of non-human bodies? In this paper, I argue that bringing Dreyfus’s work into conversation with the work of Mark Bickhard offers a way of answering this question, and I try to suggest what doing so means for AI research.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, in combating global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by data- and computation-intensive AI systems during training. We assess the carbon footprint of AI research and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant, and we highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based, and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one-in-two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine-in-ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government, but the use of artificial intelligence brings with it important questions about governance, accountability, and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This report sets out some possible approaches, and describes some of the ways government is already engaging with these issues.
Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area, spanning socio-legal studies to formal science, there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems and a possible solution space.
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
Artificial Intelligence is part of the Industrial Revolution 4.0 and already exists today. This shows that the future has come, and everyone must prepare for the implementation of Artificial Intelligence to face the transformation of the digital era, especially in the world of education. The community service workshop was attended by 66 participants, namely students, teachers, and structural officials of SMK Negeri 2 Singkawang. The workshop was held using demonstrations, lectures, discussions, and question-and-answer sessions. This workshop provides information to teachers and students about the importance of Artificial Intelligence (AI) in the digital transformation process. Teachers and students at SMKN 2 Singkawang were introduced to artificial intelligence algorithms and methods in a simple way, by representing problems as simple solutions, with several examples of implementing artificial intelligence using Microsoft Excel and VBA macros.
Defining Artificial Intelligence and Artificial General Intelligence remains controversial and disputed. These problems stem from the longer-standing controversy over the definition of consciousness, which, if solved, could possibly offer a solution to defining AI and AGI. Central to these problems is the paradox that appraising AI and consciousness requires epistemological objectivity of domains that are ontologically subjective. I propose that applying the philosophy of art, which also aims to define art through a lens of epistemological objectivity where the domains are ontologically subjective, can further elucidate this unsolved question. In this sense, art, AI, and ultimately consciousness are multifaceted domains where conventional complexity theory and current philosophical approaches may be augmented by aesthetic principles ranging from classical Aristotelian essentialism to Wittgensteinian anti-essentialism. This approach of AI as art may offer novel solutions to characterising and elucidating the ciphers of consciousness and AI.
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in this paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role, and it is also a more promising avenue towards moral enhancement, because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people who may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), a sound and complete mathematical model for a superintelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective, and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al., 2011). This practical breakthrough has resulted in some impressive applications, finally muting the earlier critique that UAI is only a theory. For the first time, without being provided any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being given the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of humanity causing extreme suffering to an AI is important enough to warrant serious consideration. This paper starts from the observation that both concerns rely on problematic philosophical assumptions. Rather than tackling these assumptions directly, it proceeds to present an argument that if one takes these assumptions seriously, then one has a moral obligation to advocate for a ban on the development of a conscious AI.
AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in the digital sphere.
The use of machine translation as artificial intelligence (AI) keeps increasing, and the world’s most popular translation tool is Google Translate (GT). This tool is not merely used for the benefits of learning and obtaining information from foreign languages through translation, but has also been used as a medium of interaction and communication in hospitals, airports, and shopping centres. This paper aims to explore machine translation accuracy in translating French-Indonesian culinary texts (recipes). The samples of culinary text were taken from the internet. The research results show that the semiotic model of machine translation in GT is translation from the signifier (forms) of the source language to the signifier (forms) of the target language by emphasizing the equivalence of the concept (signified) of the source language and the target language. GT helps translate the existing French-Indonesian culinary text concepts through words, phrases, and sentences. A problem encountered in machine translation for culinary texts is cultural equivalence: GT cannot accurately identify the cultural context of the source language and the target language, so the results take the form of literal translation. However, the accuracy of GT can be improved by refining the translation of cultural equivalents through words, phrases, and sentences from one language to another.
The tendency to idealise artificial intelligence as independent from human manipulators, combined with the growing ontological entanglement of humans and digital machines, has created an “anthrobotic” horizon, in which data analytics, statistics, and probabilities throw our agential power into question. How can we avoid the consequences of a reified definition of intelligence as universal operation becoming imposed upon our destinies? It is here argued that the fantasised autonomy of automated intelligence presents a contradistinctive opportunity for philosophical consciousness to understand itself anew as holistic and co-creative, beyond the recent “analytic” moment of the history of philosophy. Here we introduce the concept of “crealectic intelligence”, a meta-analytic and meta-dialectic aspect of consciousness. Intelligent behaviour may consist in distinguishing discrete familiar parts or reproducible functions in the midst of noise via an analytic process of segmentation; intelligence may also manifest itself in the constitution of larger wholes and dynamic unities through a dialectic process of association or assemblage. But, by contrast, crealectic intelligence co-creates realities in the image of an ideal or truth, taking into account the desiring agent imbued with a sense of possibility, in a relationship not only with the Real but also with the creative sublime or “Creal”.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience, and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
On the one hand, people have witnessed many amazing technological inventions and innovations in the multifaceted performances of artificial intelligence systems ever since the earliest stages of their development. Activities previously accomplished with great manual and muscular effort are now done effortlessly, at the tip of one’s finger. I would venture to say that artificial intelligence is among the highest scientific and technological achievements of humanity in post-modern civilization. Yet, on the other hand, should we worry that humanity will soon be threatened by the dark side of AI systems, when the truth of the matter is that, long before the advent of AI, humanity was already threatened by the evil forces of totalitarian powers well entrenched in governments and big capitalist empires in control of nations’ economies? Future AI systems employed and mobilized in the service of these political and economic powers will certainly heighten the degree of their oppressive domination and intensify the common people’s oppression.
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
This book is a collection of all the papers published in the special issue “Artificial Intelligence, Robots, and Philosophy,” Journal of Philosophy of Life, Vol. 13, No. 1, 2023, pp. 1-146. The authors discuss a variety of topics such as science fiction and space ethics, the philosophy of artificial intelligence, the ethics of autonomous agents, and virtuous robots. Through their discussions, readers are able to think deeply about the essence of modern technology and the future of humanity. All papers were invited and completed in spring 2020, though because of the Covid-19 pandemic and other problems, publication was delayed until this year. I apologize to the authors and potential readers for the delay. I hope that readers will enjoy these arguments on digital technology and its relationship with philosophy. Contents: Introduction: Descartes and Artificial Intelligence (Masahiro Morioka); Isaac Asimov and the Current State of Space Science Fiction: In the Light of Space Ethics (Shin-ichiro Inaba); Artificial Intelligence and Contemporary Philosophy: Heidegger, Jonas, and Slime Mold (Masahiro Morioka); Implications of Automating Science: The Possibility of Artificial Creativity and the Future of Science (Makoto Kureha); Why Autonomous Agents Should Not Be Built for War (István Zoltán Zárdai); Wheat and Pepper: Interactions Between Technology and Humans (Minao Kukita); Clockwork Courage: A Defense of Virtuous Robots (Shimpei Okamoto); Reconstructing Agency from Choice (Yuko Murakami); Gushing Prose: Will Machines Ever Be Able to Translate as Badly as Humans? (Rossa Ó Muireartaigh).
With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are finding it difficult to answer questions like: “Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will come under question because, in the near future, scientists (arguably the most rational people) will be the biggest critics of human rights. To make artificial intelligence sustainable, however, it is very important to reconcile it with human rights. Above all, there is a need to find a consensus between human rights and robot rights within the framework of our established legal systems.
The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and that are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of “ethical debt” to describe the necessity of undertaking expensive rework in the future in order to address ethical problems created by a technical system.
Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, we outline existing positions and arguments, then we analyse how this plays out with current technologies and finally what policy consequences may be drawn.
Breast cancer is the leading cause of death among women with cancer. Computer-aided diagnosis is an efficient method for assisting medical experts in early diagnosis, improving the chance of recovery. Employing artificial intelligence (AI) in the medical area is crucial due to the sensitivity of this field. This means that the low accuracy of the classification methods used for cancer detection is a critical issue. This problem is accentuated when it comes to blurry mammogram images. In this paper, convolutional neural networks (CNNs) are employed to present the traditional convolutional neural network (TCNN) and supported convolutional neural network (SCNN) approaches. The TCNN and SCNN approaches contribute by overcoming the shift and scaling problems found in blurry mammogram images. In addition, the flipped rotation-based approach (FRbA) is proposed to enhance the accuracy of the prediction process (classification of the type of cancerous mass) by taking into account the different directions of the cancerous mass to extract effective features and form a map of the tumour. The proposed approaches are implemented on the MIAS medical dataset using 200 mammogram breast images. Compared to similar approaches based on KNN and RF, the proposed approaches show better performance in terms of accuracy, sensitivity, specificity, precision, recall, running time, and image-quality metrics.
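The core idea behind a flipped rotation-based augmentation can be illustrated with a minimal sketch (NumPy only). This is not the paper's FRbA implementation; the function name and the eight-variant scheme are an illustrative assumption about how flips and rotations expose a mass's different orientations to a classifier:

```python
import numpy as np

def flipped_rotations(image: np.ndarray) -> list[np.ndarray]:
    # Illustrative sketch, not the paper's FRbA code: return the 8
    # dihedral variants of a 2-D image patch -- 4 right-angle rotations
    # of the original plus 4 rotations of its horizontal flip.
    variants = []
    for base in (image, np.fliplr(image)):
        for k in range(4):
            variants.append(np.rot90(base, k))
    return variants

# A classifier trained on all 8 variants sees the mass from every
# axis-aligned orientation, which is the kind of direction-invariance
# the abstract attributes to the flipped rotation-based approach.
patch = np.arange(9, dtype=float).reshape(3, 3)
augmented = flipped_rotations(patch)
print(len(augmented))  # 8 variants per input patch
```

In practice such variants would be fed to the CNN at training time, so the learned features do not depend on the orientation in which a cancerous mass happens to appear.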
The study aims to identify the impact of using artificial intelligence techniques in improving the outputs of higher education in the Business Administration Colleges of the universities under study, which formed the research community. The sample consisted of 130 academic respondents at these universities. The research concluded that there is a statistically significant effect of using artificial intelligence techniques (expert systems, neural networks) on improving the outputs of higher education in the Business Administration Colleges under study. It was found that artificial intelligence technologies contribute to producing graduates who are able to carry out processes of modernization and professional development in various fields of work. These technologies also contribute to improving and developing graduates’ skills for the labor market and providing them with new skills and characteristics to perform their duties. The research recommended that the universities under study pay greater attention to expert-systems technology because of its scientific importance in improving the outputs of higher education by reformulating them in the form of computer-based programs.
Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy, and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1, Defining the Technology of Interest; Chapter 2, The changing nature and value of work; Chapter 3, AI and the employment relationship; Chapter 4, Consumers, professions and society. The report includes recommendations to the New Zealand Government.
For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, artificial intelligence and machine learning (AI/ML) makes it possible for those options and their framings—the choice architectures—to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us—effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the AI impact on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
In this paper, we argue that because of the advent of Artificial Intelligence, the secret ballot is now much less effective at protecting voters from voting-related instances of social ostracism and social punishment. If one has access to vast amounts of data about specific electors, then it is possible, at least with respect to a significant subset of electors, to infer with high levels of accuracy how they voted in a past election. Since the accuracy levels of Artificial Intelligence are so high, the practical consequences of someone inferring one’s vote are identical to the practical consequences of having one’s vote revealed directly under an open voting regime. Therefore, if one thinks that the secret ballot is at least partly justified because it protects electors against voting-related social ostracism and social punishment, one should be morally troubled by how Artificial Intelligence today can be used to infer individual electors’ past voting behaviour.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.