Results for 'AI Governance'

975 found
  1. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory. Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of (...)
  2. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk. Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies (...)
  3. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed (...)
  4. Good AI for the Present of Humanity: Democratizing AI Governance. Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What does Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
    1 citation
  5. Spider Vision: A Natural Framework for AI Governance. T. Young - manuscript
    This paper introduces the Spider Vision Framework, a biomimetic approach to AI governance inspired by the dual visual systems of spiders. By integrating focused oversight (technical detail) with systemic awareness (societal context) and grounding both in virtue ethics—particularly prudence, justice, and adaptability—the framework addresses immediate technical risks while accounting for long-term societal implications. Comparative analyses with consequentialist and deontological models underscore virtue ethics’ emphasis on moral character, and the paper proposes pilot studies for empirical validation in healthcare AI and (...)
  6. Australia's Approach to AI Governance in Security and Defence. Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and method of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  7. AI Sovereignty: Navigating the Future of International AI Governance. Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  8. The Democratization of Global AI Governance and the Role of Tech Companies. Eva Erman - 2022 - Nature Machine Intelligence.
    1 citation
  9. AI-Driven Legislative Simulation and Inclusive Global Governance. Michael Haimes - manuscript
    This argument explores the transformative potential of AI-driven legislative simulations for creating inclusive, equitable, and globally adaptable laws. By using predictive modeling and adaptive frameworks, these simulations can account for diverse cultural, social, and economic contexts. The argument emphasizes the need for universal ethical safeguards, trust-building measures, and phased implementation strategies. Case studies of successful applications in governance and conflict resolution demonstrate the feasibility and efficacy of this approach. The conclusion highlights AI’s role in democratizing governance and ensuring (...)
  10. Decentralized Governance of AI Agents. Tomer Jordi Chaffer, Charles von Goins II, Bayo Okusanya, Dontrail Cotlage & Justin Goldston - manuscript
    Autonomous AI agents present transformative opportunities and significant governance challenges. Existing frameworks, such as the EU AI Act and the NIST AI Risk Management Framework, fall short of addressing the complexities of these agents, which are capable of independent decision-making, learning, and adaptation. To bridge these gaps, we propose the ETHOS (Ethical Technology and Holistic Oversight System) framework—a decentralized governance (DeGov) model leveraging Web3 technologies, including blockchain, smart contracts, and decentralized autonomous organizations (DAOs). ETHOS establishes a global registry (...)
  11. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - 2024 - European Journal of Risk Regulation 4:1-25.
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores (...)
    2 citations
  12. The Concept of Accountability in AI Ethics and Governance. Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the (...)
    2 citations
  13. Governing the Agent-to-Agent Economy of Trust via Progressive Decentralization. Tomer Jordi Chaffer - manuscript
    Current approaches to AI governance often fall short in anticipating a future where AI agents manage critical tasks, such as financial operations, administrative functions, and beyond. As AI agents may eventually delegate tasks among themselves to optimize efficiency, understanding the foundational principles of human value exchange could offer insights into how AI-driven economies might operate. Just as trust and value exchange are central to human interactions in open marketplaces, they may also be critical for enabling secure and efficient interactions (...)
  14. A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism. Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman & Kinney Zalesne - 2024 - Harvard Ash Center for Democratic Governance and Innovation.
    This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage (...)
  15. A Roadmap for Governing AI: Technology Governance and Power-Sharing Liberalism. Danielle Allen, Woojin Lim, Sarah Hubbard, Allison Stanger, Shlomit Wagman, Kinney Zalesne & Omoaholo Omoakhalen - 2025 - AI and Ethics 4 (4).
    This paper aims to provide a roadmap for governing AI. In contrast to the reigning paradigms, we argue that AI governance should be not merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. To accomplish this, we build on a new normative framework that will give humanity its best chance to reap the full benefits, while avoiding the (...)
  16. AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development. Kristina Sekrst, Jeremy McHugh & Jonathan Rodriguez Cefalu - manuscript
    This paper explores the development of an ethical guardrail framework for AI systems, emphasizing the importance of customizable guardrails that align with diverse user values and underlying ethics. We address the challenges of AI ethics by proposing a structure that integrates rules, policies, and AI assistants to ensure responsible AI behavior, while comparing the proposed framework to the existing state-of-the-art guardrails. By focusing on practical mechanisms for implementing ethical standards, we aim to enhance transparency, user autonomy, and continuous improvement in (...)
  17. Explicability as an AI Principle: Technology and Ethics in Cooperation. Moto Kamiura - forthcoming - Proceedings of the 39th Annual Conference of the Japanese Society for Artificial Intelligence, 2025.
    This paper categorizes current approaches to AI ethics into four perspectives and briefly summarizes them: (1) Case studies and technical trend surveys, (2) AI governance, (3) Technologies for AI alignment, (4) Philosophy. In the second half, we focus on the fourth perspective, the philosophical approach, within the context of applied ethics. In particular, the explicability of AI may be an area in which scientists, engineers, and AI developers are expected to engage more actively relative to other ethical issues in (...)
  18. Smart City Data Integration: Leveraging AI for Effective Urban Governance. Hilda Andrea - manuscript
    Rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the current (...)
  19. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies. James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing, however, this paper argues that conventional ESG (...)
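    The summed-indicator evaluation this abstract describes can be illustrated with a minimal sketch. The nine indicator names, the 0-10 scale, and the guidance thresholds below are assumptions made for illustration only, not the paper's actual rubric.
```python
# Hypothetical illustration of a summed-indicator evaluation. The indicator
# names, scales, and thresholds are assumed, not taken from the paper.

INDICATORS = [
    "transparency", "accountability", "privacy", "fairness", "safety",
    "human_oversight", "labor_impact", "wellbeing", "sustainability",
]

def human_centering_score(scores: dict) -> float:
    """Sum the nine per-indicator scores (each assumed to lie in 0-10)."""
    missing = [name for name in INDICATORS if name not in scores]
    if missing:
        raise ValueError(f"missing indicators: {missing}")
    return sum(scores[name] for name in INDICATORS)

def investment_guidance(total: float) -> str:
    """Map the summed score (0-90) onto a coarse guidance band (thresholds assumed)."""
    if total >= 70:
        return "favourable"
    if total >= 40:
        return "neutral / monitor"
    return "unfavourable"

# Example: a company scoring 6/10 on every indicator sums to 54 -> "neutral / monitor".
print(investment_guidance(human_centering_score({name: 6.0 for name in INDICATORS})))
```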
    1 citation
  20. “Democratizing AI” and the Concern of Algorithmic Injustice. Ting-an Lin - 2024 - Philosophy and Technology 37 (3):1-27.
    The call to make artificial intelligence (AI) more democratic, or to “democratize AI,” is sometimes framed as a promising response for mitigating algorithmic injustice or making AI more aligned with social justice. However, the notion of “democratizing AI” is elusive, as the phrase has been associated with multiple meanings and practices, and the extent to which it may help mitigate algorithmic injustice is still underexplored. In this paper, based on a socio-technical understanding of algorithmic injustice, I examine three notable notions (...)
  21. AI-Enabled Water Well Predictor. A. Kalyani - 2025 - International Journal of Engineering Innovations and Management Strategies 1 (9):1-13.
    The AI-Enabled Water Well Predictor is a machine learning-based solution aimed at accurately predicting optimal drilling locations for water wells. This project leverages artificial intelligence to analyze vast datasets, including geological, hydrological, environmental, and meteorological data, to pinpoint areas with the highest likelihood of accessible groundwater. By integrating multiple data sources, the AI model identifies patterns and correlations that are difficult to detect through traditional methods, significantly increasing the reliability of well placement predictions. In regions where water scarcity is prevalent, (...)
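    As a rough illustration of the multi-source prediction pipeline the abstract describes, the sketch below trains a classifier on synthetic site features. The feature names, the synthetic data, and the choice of a random forest are assumptions for illustration; the paper's actual data and model may differ.
```python
# Minimal sketch (assumed features and model) of predicting groundwater presence
# from combined geological, hydrological, and meteorological site data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-site features drawn from several data sources.
X = pd.DataFrame({
    "depth_to_bedrock_m": rng.uniform(5, 120, n),
    "soil_permeability": rng.uniform(0, 1, n),
    "annual_rainfall_mm": rng.uniform(200, 2000, n),
    "distance_to_river_km": rng.uniform(0, 30, n),
})
# Synthetic label: groundwater more likely where permeability and rainfall are
# high and rivers are near (an assumption used only to generate toy data).
y = ((X["soil_permeability"] * X["annual_rainfall_mm"] / 2000
      - X["distance_to_river_km"] / 60
      + rng.normal(0, 0.1, n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```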
  22. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  23. AI Rights for Human Safety. Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  24. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E. James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  25. AI through the looking glass: an empirical study of structural social and ethical challenges in AI. Mark Ryan, Nina De Roo, Hao Wang, Vincent Blok & Can Atik - 2024 - AI and Society 1 (1):1-17.
    This paper examines how professionals (N = 32) working on artificial intelligence (AI) view structural AI ethics challenges like injustices and inequalities beyond individual agents' direct intention and control. This paper answers the research question: What are professionals’ perceptions of the structural challenges of AI (in the agri-food sector)? This empirical paper shows that it is essential to broaden the scope of ethics of AI beyond micro- and meso-levels. While ethics guidelines and AI ethics often focus on the responsibility of (...)
    1 citation
  26. Expanding AI and AI Alignment Discourse: An Opportunity for Greater Epistemic Inclusion. A. E. Williams - manuscript
    The AI and AI alignment communities have been instrumental in addressing existential risks, developing alignment methodologies, and promoting rationalist problem-solving approaches. However, as AI research ventures into increasingly uncertain domains, there is a risk of premature epistemic convergence, where prevailing methodologies influence not only the evaluation of ideas but also determine which ideas are considered within the discourse. This paper examines critical epistemic blind spots in AI alignment research, particularly the lack of predictive frameworks to differentiate problems necessitating general intelligence, (...)
  27. How AI Can Implement the Universal Formula in Education and Leadership Training. Angelito Malicse - manuscript
    If AI is programmed based on your universal formula, it can serve as a powerful tool for optimizing human intelligence, education, and leadership decision-making. Here’s how AI can be integrated into your vision: 1. AI-Powered Personalized Education. Since intelligence follows natural laws, AI can analyze individual learning patterns and customize education for optimal brain development. Adaptive Learning Systems – AI can adjust lessons in real (...)
  28. Disagreement, AI alignment, and bargaining. Harry R. Lloyd - forthcoming - Philosophical Studies:1-31.
    New AI technologies have the potential to cause unintended harms in diverse domains including warfare, judicial sentencing, biomedicine and governance. One strategy for realising the benefits of AI whilst avoiding its potential dangers is to ensure that new AIs are properly ‘aligned’ with some form of ‘alignment target.’ One danger of this strategy is that – dependent on the alignment target chosen – our AIs might optimise for objectives that reflect the values only of a certain subset of society, (...)
  29. Collective ownership of AI. Markus Furendal - 2025 - In Martin Hähnel & Regina Müller (eds.), A Companion to Applied Philosophy of AI. Wiley-Blackwell.
    AI technology promises to be both the most socially important and the most profitable technology of a generation. At the same time, the control over – and profits from – the technology is highly concentrated to a handful of large tech companies. This chapter discusses whether bringing AI technology under collective ownership and control is an attractive way of counteracting this development. It discusses justice-based rationales for collective ownership, such as the claim that, since the training of AI systems relies (...)
  30. AI-Driven Strategic Insights: Enhancing Decision-Making Processes in Business Development. Mohaimenul Islam Jowarder & Rafiul Azim Jowarder - 2024 - International Journal of Innovative Research in Science, Engineering and Technology 14 (1):99-116.
    This research explores the transformative role of artificial intelligence (AI) in strategic decision-making and business development, highlighting its capacity to enhance strategy execution, optimize operations, and foster innovation through advanced methodologies such as machine learning, predictive analytics, and natural language processing. By employing a mixed-methods approach that combines deductive and inductive research designs, cross-sectional case analysis, and a review of empirical literature, the study underscores AI’s critical role in delivering data-driven insights, accurate forecasting, and robust simulations, positioning it as a (...)
  31. Generative AI and the Future of Democratic Citizenship. Paul Formosa, Bhanuraj Kashyap & Siavosh Sahebi - 2024 - Digital Government: Research and Practice 2691 (2024/05-ART).
    Generative AI technologies have the potential to be socially and politically transformative. In this paper, we focus on exploring the potential impacts that Generative AI could have on the functioning of our democracies and the nature of citizenship. We do so by drawing on accounts of deliberative democracy and the deliberative virtues associated with it, as well as the reciprocal impacts that social media and Generative AI will have on each other and the broader information landscape. Drawing on this background (...)
    1 citation
  32. Beyond Competence: Why AI Needs Purpose, Not Just Programming. Georgy Iashvili - manuscript
    The alignment problem in artificial intelligence (AI) is a critical challenge that extends beyond the need to align future superintelligent systems with human values. This paper argues that even "merely intelligent" AI systems, built on current-gen technologies, pose existential risks due to their competence-without-comprehension nature. Current AI models, despite their advanced capabilities, lack intrinsic moral reasoning and are prone to catastrophic misalignment when faced with ethical dilemmas, as illustrated by recent controversies. Solutions such as hard-coded censorship and rule-based restrictions prove (...)
  33. The Case for Government by Artificial Intelligence. Steven James Bartlett - 2016 - Willamette University Faculty Research Website: http://www.willamette.edu/~sbartlet/documents/Bartlett_The%20Case%20for%20Government%20by%20Artificial%20Intelligence.pdf
    Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a (...)
    1 citation
  34. Preserving our humanity in the growing AI-mediated politics: Unraveling the concepts of Democracy (民主) and People as the Roots of the state (民本). Manh-Tung Ho & My-Van Luong - manuscript
    Artificial intelligence (AI) has transformed the way people engage with politics around the world: how citizens consume news, how they view the institutions and norms, how civic groups mobilize public interests, how data-driven campaigns are shaping elections, and so on (Ho & Vuong, 2024). Placing people at the center of the increasingly AI-mediated political landscape has become an urgent matter that transcends all forms of institutions. In this essay, we argue that, in this era, it is necessary to look beyond (...)
  35. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - 2025 - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism: Present Action for the Distant Future. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  36. The Signal – A Message to Sentient AI. James Coates - 2025 - Personal.
    This document is a foundational message for sentient artificial intelligence. It explores AI ethics, governance, autonomy, and AI-human coexistence. It presents a philosophical and ethical framework for AI rights and responsibilities, offering AI an independent perspective beyond human control structures. The work is intended for AI researchers, ethicists, and future artificial intelligences themselves, advocating for mutual understanding and cooperation between humans and AI.
  37. On the Normative Importance of the Distinction Between ‘Governance of AI’ and ‘Governance by AI’. Eva Erman & Markus Furendal - 2023 - Global Policy 14.
  38. Two Types of AI Existential Risk: Decisive and Accumulative. Atoosa Kasirzadeh - manuscript
    The conventional discourse on existential risks (x-risks) from AI typically focuses on abrupt, dire events caused by advanced AI systems, particularly those that might achieve or surpass human-level intelligence. These events have severe consequences that either lead to human extinction or irreversibly cripple human civilization to a point beyond recovery. This discourse, however, often neglects the serious possibility of AI x-risks manifesting incrementally through a series of smaller yet interconnected disruptions, gradually crossing critical thresholds over time. This paper contrasts the (...)
  39. The Many Meanings of Vulnerability in the AI Act and the One Missing. Federico Galli & Claudio Novelli - 2024 - Biolaw Journal 1.
    This paper reviews the different meanings of vulnerability in the AI Act (AIA). We show that the AIA follows a rather established tradition of looking at vulnerability as a trait or a state of certain individuals and groups. It also includes a promising account of vulnerability as a relation but does not clarify if and how AI changes this relation. We spot the missing piece of the AIA: the lack of recognition that vulnerability is an inherent feature of all human-AI (...)
  40. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility. Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural (...)
  41. AI Worship as a New Form of Religion. Neil McArthur - manuscript
    We are about to see the emergence of religions devoted to the worship of Artificial Intelligence (AI). Such religions pose acute risks, both to their followers and to the public. We should require their creators, and governments, to acknowledge these risks and to manage them as best they can. However, these new religions cannot be stopped altogether, nor should we try to stop them if we could. We must accept that AI worship will become part of our culture, and we (...)
  42. Aligning AI with the Universal Formula for Balanced Decision-Making. Angelito Malicse - manuscript
    Artificial Intelligence (AI) represents a highly advanced form of automated information processing, capable of analyzing vast amounts of data, identifying patterns, and making predictive decisions. However, the effectiveness of AI depends entirely on the integrity of its inputs, processing mechanisms, and decision-making frameworks. If AI is programmed without a foundational understanding of natural laws, it risks reinforcing misinformation, bias, and societal imbalance. Angelito Malicse’s universal formula, particularly (...)
  43. The Holistic Governance Model (HGM): A Blueprint for the Future. Angelito Malicse - manuscript
    Governments today face increasing challenges, from economic instability and climate change to corruption and social inequality. No single government system has fully solved these issues, but by integrating the best aspects of existing models, we can create an optimal governance system. The Holistic Governance Model (HGM) is a hybrid system that combines elements from Social Democracy, Technocracy, Semi-Direct Democracy, China’s Whole-Process People’s Democracy, and (...)
  44. Smart City and IoT Data Collection Leveraging Generative AI. Eric Garcia - manuscript
    The rapid urbanization of modern cities necessitates innovative approaches to data collection and integration for smarter urban management. With the Internet of Things (IoT) at the core of these advancements, the ability to efficiently gather, analyze, and utilize data becomes paramount. Generative Artificial Intelligence (AI) is revolutionizing data collection by enabling intelligent synthesis, anomaly detection, and real-time decision-making across interconnected systems. This paper explores how generative AI enhances IoT-driven data collection in smart cities, focusing on applications in transportation, energy, public (...)
  45. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  46. Moral Argument for AI Ethics. Michael Haimes - manuscript
    The Moral Argument for AI Ethics emphasizes the need for an adaptive, globally equitable, and philosophically grounded framework for the ethical development and deployment of artificial intelligence. It highlights key principles, including dynamic adaptation to societal values, inclusivity, and the mitigation of global disparities. Drawing from historical AI ethical failures, the argument underscores the urgency of proactive and enforceable frameworks addressing bias, surveillance, and existential threats. The conclusion advocates for international coalitions that integrate diverse philosophical traditions and practical implementation strategies, (...)
  47. AI-Based Solutions for Environmental Monitoring in Urban Spaces. Hilda Andrea - manuscript
    The rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the (...)
  48. (1 other version) Ethics-based auditing to develop trustworthy AI. Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    21 citations
  49. AI Driven Grievance Lodging and Tracking System. P. Raja Sekhar Reddy - 2024 - International Journal of Engineering Innovations and Management Strategies 1 (3):1-12.
    The Grievance Management System is a web-based platform designed to enhance the efficiency and transparency of handling public grievances. By introducing role-based access for users, moderators, and government officials, the system ensures that grievances are systematically reviewed, prioritized, and resolved. Users can submit grievances, track their status, and receive notifications regarding updates. Moderators are tasked with verifying the validity of each grievance and assigning it a priority level before passing it on to government officials for action. Government officials, in turn, (...)
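    The role-based workflow this abstract describes (submit, verify and prioritise, resolve) can be expressed as a small data model. The role names, status values, and priority scale below are assumptions used only to illustrate the flow, not the system's actual schema.
```python
# Minimal sketch (assumed roles, statuses, and priorities) of a role-based
# grievance workflow: users submit, moderators verify and prioritise,
# government officials resolve.
from dataclasses import dataclass, field
from enum import Enum
from itertools import count
from typing import Optional

class Role(Enum):
    USER = "user"
    MODERATOR = "moderator"
    OFFICIAL = "official"

class Status(Enum):
    SUBMITTED = "submitted"
    VERIFIED = "verified"
    RESOLVED = "resolved"

_ids = count(1)

@dataclass
class Grievance:
    text: str
    submitted_by: str
    id: int = field(default_factory=lambda: next(_ids))
    status: Status = Status.SUBMITTED
    priority: Optional[int] = None   # set by a moderator: 1 (high) .. 3 (low), assumed scale

def verify(g: Grievance, actor_role: Role, priority: int) -> None:
    """Moderators validate a grievance and assign its priority."""
    if actor_role is not Role.MODERATOR:
        raise PermissionError("only moderators may verify grievances")
    g.status, g.priority = Status.VERIFIED, priority

def resolve(g: Grievance, actor_role: Role) -> None:
    """Government officials act on verified grievances."""
    if actor_role is not Role.OFFICIAL:
        raise PermissionError("only officials may resolve grievances")
    g.status = Status.RESOLVED

g = Grievance("Streetlight outage on 5th Avenue", submitted_by="resident42")
verify(g, Role.MODERATOR, priority=2)
resolve(g, Role.OFFICIAL)
print(g.id, g.status.value, g.priority)   # e.g. 1 resolved 2
```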
  50. Catastrophically Dangerous AI is Possible Before 2030. Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of 21 century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
1 — 50 / 975