Results for 'AI governance'

967 results found
  1. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory.Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of (...)
  2. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies (...)
  3. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
    1 citation
  4. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed (...)
  5. Australia's Approach to AI Governance in Security and Defence.Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  6. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  7. The Democratization of Global AI Governance and the Role of Tech Companies.Eva Erman - 2022 - Nature Machine Intelligence.
    1 citation
  8. AI-Driven Legislative Simulation and Inclusive Global Governance.Michael Haimes - manuscript
    This argument explores the transformative potential of AI-driven legislative simulations for creating inclusive, equitable, and globally adaptable laws. By using predictive modeling and adaptive frameworks, these simulations can account for diverse cultural, social, and economic contexts. The argument emphasizes the need for universal ethical safeguards, trust-building measures, and phased implementation strategies. Case studies of successful applications in governance and conflict resolution demonstrate the feasibility and efficacy of this approach. The conclusion highlights AI’s role in democratizing governance and ensuring (...)
  9. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores (...)
    1 citation
  10. The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the (...)
    2 citations
  11. A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism.Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman & Kinney Zalesne - 2024 - Harvard Ash Center for Democratic Governance and Innovation.
    This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage (...)
  12. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing, however, this paper argues that conventional ESG (...)
    1 citation
  13. “Democratizing AI” and the Concern of Algorithmic Injustice.Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  14. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development.Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement in (...)
  15. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  16. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  17. Generative AI and the Future of Democratic Citizenship.Paul Formosa, Bhanuraj Kashyap & Siavosh Sahebi - 2024 - Digital Government: Research and Practice.
    Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background (...)
    1 citation
  18. Collective ownership of AI.Markus Furendal - 2025 - In Martin Hähnel & Regina Müller (eds.), A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    AI technology promises to be both the most socially important and the most profitable technology of a generation. At the same time, the control over – and profits from – the technology is highly concentrated in a handful of large tech companies. This chapter discusses whether bringing AI technology under collective ownership and control is an attractive way of counteracting this development. It discusses justice-based rationales for collective ownership, such as the claim that, since the training of AI systems relies (...)
  19. Disagreement, AI alignment, and bargaining.Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, (...)
  20. Smart City Data Integration: Leveraging AI for Effective Urban Governance.Hilda Andrea - manuscript
    Rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the current (...)
  21. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  22. AI through the looking glass: an empirical study of structural social and ethical challenges in AI.Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
  23. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  24. The Case for Government by Artificial Intelligence.Steven James Bartlett - 2016 - Willamette University Faculty Research Website: Http://Www.Willamette.Edu/~Sbartlet/Documents/Bartlett_The%20Case%20for%20Government%20by%20Artificial%20Intelligence.Pdf.
    Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a (...)
    1 citation
  25. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural (...)
  26. From Iron to AI: The Evolution of the Sources of State Power.Yu Chen - manuscript
    This article, “From Iron to AI: The Evolution of the Sources of State Power,” examines the progression of fundamental resources that have historically underpinned state power, from tangible assets like land and iron to modern advancements in artificial intelligence (AI). It traces the development of state power through three significant eras: the ancient period characterized by land, population, horses, and iron; the industrial era marked by railroads, coal, and electricity; and the contemporary digital age dominated by the Internet and emerging (...)
  27. AI Worship as a New Form of Religion.Neil McArthur - manuscript
    We are about to see the emergence of religions devoted to the worship of Artificial Intelligence (AI). Such religions pose acute risks, both to their followers and to the public. We should require their creators, and governments, to acknowledge these risks and to manage them as best they can. However, these new religions cannot be stopped altogether, nor should we try to stop them if we could. We must accept that AI worship will become part of our culture, and we (...)
  28. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  29. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本).Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to look beyond (...)
  30. Moral Argument for AI Ethics.Michael Haimes - manuscript
    The Moral Argument for AI Ethics emphasizes the need for an adaptive, globally equitable, and philosophically grounded framework for the ethical development and deployment of artificial intelligence. It highlights key principles, including dynamic adaptation to societal values, inclusivity, and the mitigation of global disparities. Drawing from historical AI ethical failures, the argument underscores the urgency of proactive and enforceable frameworks addressing bias, surveillance, and existential threats. The conclusion advocates for international coalitions that integrate diverse philosophical traditions and practical implementation strategies, (...)
  31. On the Normative Importance of the Distinction Between ‘Governance of AI’ and ‘Governance by AI’.Eva Erman & Markus Furendal - 2023 - Global Policy 14.
  32. Beyond Competence: Why AI Needs Purpose, Not Just Programming.Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based restrictions prove (...)
  33. AI-Based Solutions for Environmental Monitoring in Urban Spaces.Hilda Andrea - manuscript
    The rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the (...)
  34. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict will happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  35. Foundations of an Ethical Framework for AI Entities: the Ethics of Systems.Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  36. The Many Meanings of Vulnerability in the AI Act and the One Missing.Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  37. (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    19 citations
  38. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  39. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs.Sina Fazelpour - 2021
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation (...)
  40. Analytical Modelling and UK Government Policy.Marie Oldfield - 2021 - AI and Ethics 1 (1):1-16.
    In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast Rail (ICWC) Franchise procurement exercise by the Department for Transport (DfT) that led to a legal challenge of the analytical models used within the exercise. The Macpherson review looked into the quality assurance of Government analytical (...)
    1 citation
  41. The Role of Engineers in Harmonising Human Values for AI Systems Design.Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
    3 citations
  42. Examining the Epistemological Status of AI-Aided Research in the Information Age: Research Integrity of Margaret Lawrence University in Delta State (11th edition).Etaoghene Paul Polo - 2024 - International Journal of Social Sciences and Humanities 11 (1):197-207.
    This study examines the epistemological implications of the adoption of Artificial Intelligence (AI) in research within the information age. Focusing on the particular case of Margaret Lawrence University, a leading research institution situated in Galilee, Ika North-East Local Government Area of Delta State, Nigeria, this study assesses the implications of AI-aided research and questions the integrity of AI-generated knowledge. Specifically, this study discusses the epistemological status of AI-generated knowledge by weighing the prospects and shortcomings of using AI in research. Also, (...)
  43. The importance of understanding trust in Confucianism and what it is like in an AI-powered world.Ho Manh Tung - unknown
    Since the revival of artificial intelligence (AI) research, many countries have proposed their visions of an AI-powered world: Germany with the concept of “Industry 4.0,” Japan with the concept of “Society 5.0,” and China with the “New Generation Artificial Intelligence Plan (AIDP).” In all of these grand visions, governments emphasize the “human-centric element” in their plans. This essay focuses on the concept of trust in Confucian societies and places this very human element in the context of an (...)
  44. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, (...)
  45. How could the United Nations Global Digital Compact prevent cultural imposition and hermeneutical injustice?Arthur Gwagwa & Warmhold Jan Thomas Mollema - 2024 - Patterns 5 (11).
    As the geopolitical superpowers race to regulate the digital realm, their divergent rights-centered, market-driven, and social-control-based approaches require a global compact on digital regulation. If diverse regulatory jurisdictions remain, forms of domination entailed by cultural imposition and hermeneutical injustice related to AI legislation and AI systems will follow. We argue for consensual regulation on shared substantive issues, accompanied by proper standardization and coordination. Failure to attain consensus will fragment global digital regulation, enable regulatory capture by authoritarian powers or bad corporate (...)
  46. Critical Provocations for Synthetic Data.Daniel Susser & Jeremy Seeman - 2024 - Surveillance and Society 22 (4):453-459.
    Training artificial intelligence (AI) systems requires vast quantities of data, and AI developers face a variety of barriers to accessing the information they need. Synthetic data has captured researchers’ and industry’s imagination as a potential solution to this problem. While some of the enthusiasm for synthetic data may be warranted, in this short paper we offer a critical counterweight to simplistic narratives that position synthetic data as a cost-free solution to every data-access challenge: provocations highlighting ethical, political, and governance issues the (...)
  47. Artificial Intelligence 2024 - 2034: What to expect in the next ten years.Demetrius Floudas - 2024 - 'Agi Talks' Series at Daniweb.
    In this public communication, AI policy theorist Demetrius Floudas introduces a novel era classification for the AI epoch and reveals the hidden dangers of AGI, predicting the potential obsolescence of humanity. In response, he proposes a provocative International Control Treaty. -/- According to this scheme, the age of AI will unfold in three distinct phases, introduced here for the first time. An AGI Control & non-Proliferation Treaty may be humanity’s only safeguard. This piece aims to provide a publicly accessible exposé (...)
  48. L’Artificial Intelligence Act Europeo: alcune questioni di implementazione.Claudio Novelli - 2024 - Federalismi 2:95-113.
    The article examines the European proposal for a regulation on artificial intelligence, the AI Act (AIA). In particular, it examines the model for analysing and assessing the risk of AI systems. The article identifies three potential implementation problems with the regulation: (1) the predetermination of risk levels, (2) the vagueness of the judgement on the significance of risk, and (3) the indeterminacy of the fundamental-rights impact assessment. The essay suggests some solutions to address these three problems.
  49. Social philosophies in Japan’s vision of human-centric Society 5.0 and some recommendations for Vietnam.Manh-Tung Ho, Phuong-Thao Luu & T. Hong-Kong Nguyen - manuscript
    This essay briefly summarizes the key characteristics and social philosophies in Japan’s vision of Society 5.0. Then it discusses why Vietnam, as a developing country, can learn from the experiences of Japan in establishing its vision for an AI-powered human-centric society. The paper finally provides five concrete recommendations for Vietnam toward a harmonious and human-centric coexistence with increasingly competent and prevalent AI systems, including: Human-centric AI vision; Multidimensional, pluralistic understanding of human-technology relation; AI as a driving force for socio-economic development; (...)
  50. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
1 — 50 / 967