Results for 'Artificial Intelligence Governance'

960 found
  1. Artificial intelligence: opportunities and implications for the future of decision making.U. K. Government & Office for Science - 2016
    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find (...)
  2. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of (...)
    29 citations
  3. The Case for Government by Artificial Intelligence.Steven James Bartlett - 2016 - Willamette University Faculty Research Website: Http://Www.Willamette.Edu/~Sbartlet/Documents/Bartlett_The%20Case%20for%20Government%20by%20Artificial%20Intelligence.Pdf.
    THE CASE FOR GOVERNMENT BY ARTIFICIAL INTELLIGENCE. Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? -/- Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more (...)
    1 citation
  4. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, (...)
    9 citations
  5. Artificial Intelligence and Legal Disruption: A New Model for Analysis.John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims (...)
    1 citation
  6. The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation.Huw Roberts, Josh Cowls, Jessica Morley, Mariarosaria Taddeo, Vincent Wang & Luciano Floridi - 2021 - AI and Society 36 (1):59–77.
    In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this (...)
    26 citations
  7. Is Artificial Intelligence A Threat?Ruel F. Pepa - manuscript
    On the one hand, people have witnessed a lot of amazing technological inventions and innovations in the multifaceted performances of artificial intelligence systems ever since the earliest stages of their development. Activities previously done with a lot of manual and muscular efforts are now accomplished with no sweat and just at the tip of one’s finger. I would venture to say that artificial intelligence is among the highest scientific and technological achievements of humanity in the post-modern (...)
  8. The impact of artificial intelligence on jobs and work in New Zealand.James Maclaurin, Colin Gavaghan & Alistair Knott - 2021 - Wellington, New Zealand: New Zealand Law Foundation.
    Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 (...)
  9. The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing.James Maclaurin, Toby Walsh, Neil Levy, Genevieve Bell, Fiona Wood, Anthony Elliott & Iven Mareels - 2019 - Melbourne VIC, Australia: Australian Council of Learned Academies.
    This project has been supported by the Australian Government through the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of Prime Minister and Cabinet. ACOLA collaborates with the Australian Academy of Health and Medical Sciences and the New Zealand Royal Society Te Apārangi to deliver the interdisciplinary Horizon Scanning reports to government. The aims of the project which produced this report are: 1. Examine the transformative role that artificial intelligence may (...)
    7 citations
  10. Artificial Intelligence, Control and Legitimacy.Olga Gil - manuscript
    In this work, a general framework for the analysis of the governance of artificial intelligence is presented. A dashboard developed for this analysis is drawn from the perspective of political theory. This dashboard allows comparisons between democratic and non-democratic regimes, useful for countries in the global South as well as Western countries. The dashboard allows us to assess the key features that determine the governance model for artificial intelligence at the national level, for local governments and (...)
  11. African Reasons Why Artificial Intelligence Should Not Maximize Utility.Thaddeus Metz - 2021 - In Beatrice Dedaa Okyere-Manu (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Palgrave-Macmillan. pp. 55-72.
    Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is (...)
    2 citations
  12. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  13. What Do Technology and Artificial Intelligence Mean Today?Scott H. Hawley & Elias Kruger - forthcoming - In Hector Fernandez (ed.), Sociedad Tecnológica y Futuro Humano, vol. 1: Desafíos conceptuales. pp. 17.
    Technology and Artificial Intelligence, both today and in the near future, are dominated by automated algorithms that combine optimization with models based on the human brain to learn, predict, and even influence the large-scale behavior of human users. Such applications can be understood to be outgrowths of historical trends in industry and academia, yet have far-reaching and even unintended consequences for social and political life around the world. Countries in different parts of the world take different regulatory views (...)
  14. Accountability in Artificial Intelligence.Olga Gil - manuscript
    This work stresses the importance of AI accountability to citizens and explores how a fourth, independent branch of government could be endowed to ensure that algorithms in today's democracies conform to the principles of constitutions. The purpose of this fourth branch of government in modern democracies could be to enshrine accountability of artificial intelligence development, including software-enabled technologies, and the implementation of policies based on big data within a wider democratic regime context. The work draws on Philosophy of Science, (...)
  15. In Conversation with Artificial Intelligence: Aligning language Models with Human Values.Atoosa Kasirzadeh - 2023 - Philosophy and Technology 36 (2):1-24.
    Large-scale language technologies are increasingly used in various forms of communication with humans across different contexts. One particular use case for these technologies is conversational agents, which output natural language text in response to prompts and queries. This mode of engagement raises a number of social and ethical questions. For example, what does it mean to align conversational agents with human norms or values? Which norms or values should they be aligned with? And how can this be accomplished? In this (...)
    8 citations
  16. Human Autonomy in the Age of Artificial Intelligence.C. Prunkl - 2022 - Nature Machine Intelligence 4 (2):99-101.
    Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.
    5 citations
  17. The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to (...)
    2 citations
  18. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the (...)
  19. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot (...)
  20. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that (...)
    4 citations
  21. A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism.Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman & Kinney Zalesne - 2024 - Harvard Ash Center for Democratic Governance and Innovation.
    This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage (...)
  22. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of (...)
  23. Algorithms and Posthuman Governance.James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, (...)
    1 citation
  24. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本).Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to (...)
  25. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how (...)
  26. (1 other version)Intention Reconsideration in Artificial Agents: a Structured Account.Fabrizio Cariani - forthcoming - Special Issue of Phil Studies.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman's work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some incompatible prospect (...)
  27. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - forthcoming - Social Epistemology.
    Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges. These include hallucinations, misalignment and value imposition. After weighing LLMs’ benefits and drawbacks against (...)
  28. Beyond Competence: Why AI Needs Purpose, Not Just Programming.Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based (...)
  29. Regulation by Design: Features, Practices, Limitations, and Governance Implications.Kostina Prifti, Jessica Morley, Claudio Novelli & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-23.
    Regulation by design (RBD) is a growing research field that explores, develops, and criticises the regulative function of design. In this article, we provide a qualitative thematic synthesis of the existing literature. The aim is to explore and analyse RBD’s core features, practices, limitations, and related governance implications. To fulfil this aim, we examine the extant literature on RBD in the context of digital technologies. We start by identifying and structuring the core features of RBD, namely the goals, regulators, (...)
  30. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents (...)
    1 citation
  31. É Possível Evitar Vieses Algorítmicos? [Is It Possible to Avoid Algorithmic Biases?]Carlos Barth - 2021 - Revista de Filosofia Moderna E Contemporânea 8 (3):39-68.
    Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender and other kinds of bias, which are typically understood as technical problems. Here we try to show that: 1) to get rid of such biases, we need a system that can understand the structure of human activities, and 2) to create such a system, we need to solve foundational problems of AI, such as the common-sense problem. Additionally, when informational platforms (...)
  32. Isbell Conjugacy for Developing Cognitive Science.Venkata Rayudu Posina & Sisir Roy - manuscript
    What is cognition? Equivalently, what is cognition good for? Or, what is it that would not be but for human cognition? But for human cognition, there would not be science. Based on this kinship between individual cognition and collective science, here we put forward Isbell conjugacy---the adjointness between objective geometry and subjective algebra---as a scientific method for developing cognitive science. We begin with the correspondence between categorical perception and category theory. Next, we show how the Gestalt maxim is subsumed by (...)
    1 citation
  33. From Iron to AI: The Evolution of the Sources of State Power.Yu Chen - manuscript
    This article, “From Iron to AI: The Evolution of the Sources of State Power,” examines the progression of fundamental resources that have historically underpinned state power, from tangible assets like land and iron to modern advancements in artificial intelligence (AI). It traces the development of state power through three significant eras: the ancient period characterized by land, population, horses, and iron; the industrial era marked by railroads, coal, and electricity; and the contemporary digital age dominated by the Internet (...)
  34. From Virtual Reality to Metaverse : Ethical Risks and the Co-governance of Real and Virtual Worlds.Yi Zeng & Aorigele Bao - 2022 - Philosophical Trends 2022:43-48+127.
    Firstly, the "Metaverse" possesses two distinctive features, "thickness" and "imagination," promising the public a structure of unknown scenarios but with unclear definitions. Attempts to establish an open framework through incompleteness, however, fail to facilitate interactions between humans and the scenario. Due to the dilemma of "digital twinning," the "Metaverse" cannot be realized as "another universe". Hence, the "Metaverse" is, in fact, only a virtual experiential territory created by aggregating technologies that offer immersion and interactivity. Secondly, when artificial intelligence (...)
  35. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  36. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Lin Patrick - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make (...)
  37. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with (...)
  38. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point (...)
  39. “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three (...)
  40. (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    19 citations
  41. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs.Sina Fazelpour - 2021
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this (...)
  42. Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as (...)
    18 citations
  43. Leveraging Artificial Intelligence for Strategic Business Decision-Making: Opportunities and Challenges.Mohammed Hazem M. Hamadaqa, Mohammad Alnajjar, Mohammed N. Ayyad, Mohammed A. Al-Nakhal, Basem S. Abunasser & Samy S. Abu-Naser - 2024 - International Journal of Academic Information Systems Research (IJAISR) 8 (8):16-23.
    Abstract: Artificial Intelligence (AI) has rapidly evolved, offering transformative capabilities for business decision-making. This paper explores how AI can be leveraged to enhance strategic decision-making in business contexts. It examines the integration of AI-driven analytics, predictive modeling, and automation to improve decision accuracy and operational efficiency. By analyzing current applications and case studies, the paper highlights the opportunities AI presents, including enhanced data insights, risk management, and personalized customer experiences. Additionally, it addresses the challenges businesses face in adopting (...)
  44. Evaluating Future Nanotechnology: The Net Societal Impacts of Atomically Precise Manufacturing.Steven Umbrello & Seth D. Baum - 2018 - Futures 100:63-73.
    Atomically precise manufacturing (APM) is the assembly of materials with atomic precision. APM does not currently exist, and may not be feasible, but if it is feasible, then the societal impacts could be dramatic. This paper assesses the net societal impacts of APM across the full range of important APM sectors: general material wealth, environmental issues, military affairs, surveillance, artificial intelligence, and space travel. Positive effects were found for material wealth, the environment, military affairs (specifically nuclear disarmament), and (...)
    5 citations
  45. Artificial intelligence, deepfakes and a future of ectypes.Luciano Floridi - 2018 - Philosophy and Technology 31 (3):317-321.
    AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in (...)
    14 citations
  46. Artificial Intelligence in Agriculture: Enhancing Productivity and Sustainability.Mohammed A. Hamed, Mohammed F. El-Habib, Raed Z. Sababa, Mones M. Al-Hanjor, Basem S. Abunasser & Samy S. Abu-Naser - 2024 - International Journal of Engineering and Information Systems (IJEAIS) 8 (8):1-8.
    Abstract: Artificial Intelligence (AI) is revolutionizing the agricultural sector by enhancing productivity and sustainability. This paper explores the transformative impact of AI technologies on agriculture, focusing on their applications in precision farming, predictive analytics, and automation. AI-driven tools enable more efficient management of crops and resources, leading to improved yields and reduced environmental impact. The paper examines key AI technologies, including machine learning algorithms for crop monitoring, robotics for automated planting and harvesting, and data analytics for optimizing resource (...)
  47. The Threat of Algocracy: Reality, Resistance and Accommodation.John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does pose a (...)
    56 citations
  48. Artificial Intelligence and Neuroscience Research: Theologico-Philosophical Implications for the Christian Notion of the Human Person.Justin Nnaemeka Onyeukaziri - 2023 - Maritain Studies/Etudes Maritainiennes 39:85-103.
    This paper explores the theological and philosophical implications of artificial intelligence (AI) and Neuroscience research on the Christian’s notion of the human person. The paschal mystery of Christ is the intuitive foundation of Christian anthropology. In the intellectual history of Christianity, Platonism and Aristotelianism have been employed to articulate the Christian philosophical anthropology. The Aristotelian systematization has endured to this era. Since the modern period of Western intellectual history, Aristotelianism has been supplanted by the positive sciences (...)
    1 citation
  49. (1 other version)Future progress in artificial intelligence: A survey of expert opinion.Vincent C. Müller & Nick Bostrom - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 553-571.
    There is, in some quarters, concern about high–level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high–level machine intelligence coming up within a particular time–frame, which risks they see with that development, and how fast they see these developing. (...)
    39 citations
  50. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek (...)
    4 citations
1 — 50 / 960