Results for 'AI governance'

955 found
  1. Systematizing AI Governance through the Lens of Ken Wilber's Integral Theory.Ammar Younas & Yi Zeng - manuscript
    We apply Ken Wilber's Integral Theory to AI governance, demonstrating its ability to systematize diverse approaches in the current multifaceted AI governance landscape. By analyzing ethical considerations, technological standards, cultural narratives, and regulatory frameworks through Integral Theory's four quadrants, we offer a comprehensive perspective on governance needs. This approach aligns AI governance with human values, psychological well-being, cultural norms, and robust regulatory standards. Integral Theory’s emphasis on interconnected individual and collective experiences addresses the deeper aspects of (...)
  2. AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk.Brandon Perry & Risto Uuk - 2019 - Big Data and Cognitive Computing 3 (2):1-17.
    This essay argues that a new subfield of AI governance should be explored that examines the policy-making process and its implications for AI governance. A growing number of researchers have begun working on the question of how to mitigate the catastrophic risks of transformative artificial intelligence, including what policies states should adopt. However, this essay identifies a preceding, meta-level problem of how the space of possible policies is affected by the politics and administrative mechanisms of how those policies (...)
  3. Good AI for the Present of Humanity: Democratizing AI Governance.Nicholas Kluge Corrêa & Nythamar De Oliveira - 2021 - AI Ethics Journal 2 (2):1-16.
    What do Cyberpunk and AI Ethics have to do with each other? Cyberpunk is a sub-genre of science fiction that explores the post-human relationships between human experience and technology. One similarity between AI Ethics and Cyberpunk literature is that both seek a dialogue in which the reader may inquire about the future and the ethical and social problems that our technological advance may bring upon society. In recent years, an increasing number of ethical matters involving AI have been pointed and (...)
    1 citation
  4. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed (...)
  5. Australia's Approach to AI Governance in Security and Defence.Susannah Kate Devitt & Damian Copeland - forthcoming - In M. Raska, Z. Stanley-Lockman & R. Bitzinger (eds.), AI Governance for National Security and Defence: Assessing Military AI Strategic Perspectives. Routledge. pp. 38.
    Australia is a leading AI nation with strong allies and partnerships. Australia has prioritised the development of robotics, AI, and autonomous systems to develop sovereign capability for the military. Australia commits to Article 36 reviews of all new means and methods of warfare to ensure weapons and weapons systems are operated within acceptable systems of control. Additionally, Australia has undergone significant reviews of the risks of AI to human rights and within intelligence organisations and has committed to producing ethics guidelines (...)
  6. AI Sovereignty: Navigating the Future of International AI Governance.Yu Chen - manuscript
    The rapid proliferation of artificial intelligence (AI) technologies has ushered in a new era of opportunities and challenges, prompting nations to grapple with the concept of AI sovereignty. This article delves into the definition and implications of AI sovereignty, drawing parallels to the well-established notion of cyber sovereignty. By exploring the connotations of AI sovereignty, including control over AI development, data sovereignty, economic impacts, national security considerations, and ethical and cultural dimensions, the article provides a comprehensive understanding of this emerging (...)
  7. The Democratization of Global AI Governance and the Role of Tech Companies.Eva Erman - 2010 - Nature Machine Intelligence.
    1 citation
  8. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - manuscript
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU's new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores (...)
  9. The Concept of Accountability in AI Ethics and Governance.Theodore Lechterman - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    Calls to hold artificial intelligence to account are intensifying. Activists and researchers alike warn of an “accountability gap” or even a “crisis of accountability” in AI. Meanwhile, several prominent scholars maintain that accountability holds the key to governing AI. But usage of the term varies widely in discussions of AI ethics and governance. This chapter begins by disambiguating some different senses and dimensions of accountability, distinguishing it from neighboring concepts, and identifying sources of confusion. It proceeds to explore the (...)
    2 citations
  10. A Roadmap for Governing AI: Technology Governance and Power Sharing Liberalism.Danielle Allen, Sarah Hubbard, Woojin Lim, Allison Stanger, Shlomit Wagman & Kinney Zalesne - 2024 - Harvard Ash Center for Democratic Governance and Innovation.
    This paper aims to provide a roadmap to AI governance. In contrast to the reigning paradigms, we argue that AI governance should not be merely a reactive, punitive, status-quo-defending enterprise, but rather the expression of an expansive, proactive vision for technology—to advance human flourishing. Advancing human flourishing in turn requires democratic/political stability and economic empowerment. Our overarching point is that answering questions of how we should govern this emerging technology is a chance not merely to categorize and manage (...)
  11. AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies.James Brusseau - manuscript
    Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology’s human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance investing; however, this paper argues that conventional ESG (...)
    1 citation
  12. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  13. AI Rights for Human Safety.Peter Salib & Simon Goldstein - manuscript
    AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. Leading AI researchers agree that some of these systems will likely be “misaligned”–pursuing goals that humans do not desire. This goal mismatch will put misaligned AIs and humans into strategic competition with one another. As with present-day strategic competition between nations with incompatible goals, (...)
  14. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  15. The Case for Government by Artificial Intelligence.Steven James Bartlett - 2016 - Willamette University Faculty Research Website: Http://Www.Willamette.Edu/~Sbartlet/Documents/Bartlett_The%20Case%20for%20Government%20by%20Artificial%20Intelligence.Pdf.
    THE CASE FOR GOVERNMENT BY ARTIFICIAL INTELLIGENCE. Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? -/- Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a (...)
    1 citation
  16. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role.Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
    5 citations
  17. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural (...)
  18. AI Worship as a New Form of Religion.Neil McArthur - manuscript
    We are about to see the emergence of religions devoted to the worship of Artificial Intelligence (AI). Such religions pose acute risks, both to their followers and to the public. We should require their creators, and governments, to acknowledge these risks and to manage them as best they can. However, these new religions cannot be stopped altogether, nor should we try to stop them if we could. We must accept that AI worship will become part of our culture, and we (...)
  19. AGGA: A Dataset of Academic Guidelines for Generative AIs.Junfeng Jiao, Saleh Afroogh, Kevin Chen, David Atkinson & Amit Dhurandhar - 2024 - Harvard Dataverse 4.
    AGGA (Academic Guidelines for Generative AIs) is a dataset of 80 academic guidelines for the usage of generative AIs and large language models in academia, selected systematically and collected from official university websites across six continents. Comprising 181,225 words, the dataset supports natural language processing tasks such as language modeling, sentiment and semantic analysis, model synthesis, classification, and topic labeling. It can also serve as a benchmark for ambiguity detection and requirements categorization. This resource aims to facilitate research on AI (...)
  20. On the Normative Importance of the Distinction Between ‘Governance of AI’ and ‘Governance by AI’.Eva Erman & Markus Furendal - 2023 - Global Policy 14.
  21. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence.Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  22. Catastrophically Dangerous AI is Possible Before 2030.Alexey Turchin - manuscript
    In AI safety research, the median timing of AGI arrival is often taken as a reference point, which various polls predict to happen in the middle of the 21st century, but for maximum safety, we should determine the earliest possible time of Dangerous AI arrival. Such Dangerous AI could be either AGI, capable of acting completely independently in the real world and of winning in most real-world conflicts with humans, or an AI helping humans to build weapons of mass destruction, or (...)
  23. The debate on the ethics of AI in health care: a reconstruction and critical review.Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  24. Disciplining Deliberation: A Sociotechnical Perspective on Machine Learning Trade-offs.Sina Fazelpour - manuscript
    This paper focuses on two highly publicized formal trade-offs in the field of responsible artificial intelligence (AI) -- between predictive accuracy and fairness and between predictive accuracy and interpretability. These formal trade-offs are often taken by researchers, practitioners, and policy-makers to directly imply corresponding tensions between underlying values. Thus interpreted, the trade-offs have formed a core focus of normative engagement in AI governance, accompanied by a particular division of labor along disciplinary lines. This paper argues against this prevalent interpretation (...)
  25. (1 other version)Ethics-based auditing to develop trustworthy AI.Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    18 citations
  26. From Iron to AI: The Evolution of the Sources of State Power.Yu Chen - manuscript
    This article, “From Iron to AI: The Evolution of the Sources of State Power,” examines the progression of fundamental resources that have historically underpinned state power, from tangible assets like land and iron to modern advancements in artificial intelligence (AI). It traces the development of state power through three significant eras: the ancient period characterized by land, population, horses, and iron; the industrial era marked by railroads, coal, and electricity; and the contemporary digital age dominated by the Internet and emerging (...)
  27. Foundations of an Ethical Framework for AI Entities: the Ethics of Systems.Andrej Dameski - 2020 - Dissertation, University of Luxembourg
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  28. Analytical Modelling and UK Government Policy.Marie Oldfield - 2021 - AI and Ethics 1 (1):1-16.
    In the last decade, the UK Government has attempted to implement improved processes and procedures in modelling and analysis in response to the Laidlaw report of 2012 and the Macpherson review of 2013. The Laidlaw report was commissioned after failings during the Intercity West Coast Rail (ICWC) Franchise procurement exercise by the Department for Transport (DfT) that led to a legal challenge of the analytical models used within the exercise. The Macpherson review looked into the quality assurance of Government analytical (...)
    1 citation
  29. The importance of understanding trust in Confucianism and what it is like in an AI-powered world.Ho Manh Tung - unknown
    Since the revival of artificial intelligence (AI) research, many countries in the world have proposed their visions of an AI-powered world: Germany with the concept of “Industry 4.0,” Japan with the concept of “Society 5.0,” China with the “New Generation Artificial Intelligence Plan (AIDP).” In all of the grand visions, all governments emphasize the “human-centric element” in their plans. This essay focuses on the concept of trust in Confucian societies and places this very human element in the context of an (...)
  30. The Role of Engineers in Harmonising Human Values for AI Systems Design.Steven Umbrello - 2022 - Journal of Responsible Technology 10 (July):100031.
    Most engineers work within social structures governing and governed by a set of values that primarily emphasise economic concerns. The majority of innovations derive from these loci. Given the effects of these innovations on various communities, it is imperative that the values they embody are aligned with those societies. Like other transformative technologies, artificial intelligence systems can be designed by a single organisation but be diffused globally, demonstrating impacts over time. This paper argues that in order to design for this (...)
    2 citations
  31. Accountability in Artificial Intelligence: What It Is and How It Works.Claudio Novelli, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 1:1-12.
    Accountability is a cornerstone of the governance of artificial intelligence (AI). However, it is often defined too imprecisely because its multifaceted nature and the sociotechnical structure of AI systems imply a variety of values, practices, and measures to which accountability in AI can refer. We address this lack of clarity by defining accountability in terms of answerability, identifying three conditions of possibility (authority recognition, interrogation, and limitation of power), and an architecture of seven features (context, range, agent, forum, standards, (...)
    8 citations
  32. Strange Loops: Apparent versus Actual Human Involvement in Automated Decision-Making.Kiel Brennan-Marquez, Karen Levy & Daniel Susser - 2019 - Berkeley Technology Law Journal 34 (3).
    The era of AI-based decision-making fast approaches, and anxiety is mounting about when, and why, we should keep “humans in the loop” (“HITL”). Thus far, commentary has focused primarily on two questions: whether, and when, keeping humans involved will improve the results of decision-making (making them safer or more accurate), and whether, and when, non-accuracy-related values—legitimacy, dignity, and so forth—are vindicated by the inclusion of humans in decision-making. Here, we take up a related but distinct question, which has eluded the (...)
  33. Social philosophies in Japan’s vision of human-centric Society 5.0 and some recommendations for Vietnam.Manh-Tung Ho, Phuong-Thao Luu & T. Hong-Kong Nguyen - manuscript
    This essay briefly summarizes the key characteristics and social philosophies in Japan’s vision of Society 5.0. Then it discusses why Vietnam, as a developing country, can learn from the experiences of Japan in establishing its vision for an AI-powered human-centric society. The paper finally provides five concrete recommendations for Vietnam toward a harmonic and human-centric coexistence with increasingly competent and prevalent AI systems, including: Human-centric AI vision; Multidimensional, pluralistic understanding of human-technology relation; AI as a driving force for socio-economic development; (...)
  34. Harvard Dataverse.Saleh Afroogh, Junfeng Jiao, Chen Kevin, David Atkinson & Amit Dhurandhar - 2024 - Harvard Dataverse 4.
    AGGA (Academic Guidelines for Generative AIs) is a dataset of 80 academic guidelines for the usage of generative AIs and large language models in academia, selected systematically and collected from official university websites across six continents. Comprising 181,225 words, the dataset supports natural language processing tasks such as language modeling, sentiment and semantic analysis, model synthesis, classification, and topic labeling. It can also serve as a benchmark for ambiguity detection and requirements categorization. This resource aims to facilitate research on AI (...)
  35. Putting Flourishing First: Applying Democratic Values to Technology.Kinney E. Zalesne & Nick Pyati - 2023 - Edmond and Lily Safra Center for Ethics.
    When product design teams gather at the whiteboard in big-tech office parks and startup garages around the world, they ask themselves: How will customers use our technology? Is it better than our competitors’? How much money can we make? But one question that’s rarely asked: does our technology advance human flourishing? -/- In a new white paper by Harvard professor Danielle Allen and her colleagues Eli Frankel, Woojin Lim, Divya Siddarth, Josh Simons, and Glen Weyl entitled “The Ethics of Decentralized (...)
  36. Accountability in Artificial Intelligence.Prof Olga Gil - manuscript
    This work stresses the importance of AI accountability to citizens and explores how a fourth independent branch of government, or equivalent institutions, could be endowed to ensure that algorithms in today's democracies conform to the principles of Constitutions. The purpose of this fourth branch of government in modern democracies could be to enshrine accountability of artificial intelligence development, including software-enabled technologies, and the implementation of policies based on big data within a wider democratic regime context. The work draws on Philosophy of Science, Political Theory (...)
  37. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance principles. I (...)
  38. Human Autonomy in the Age of Artificial Intelligence.C. Prunkl - 2022 - Nature Machine Intelligence 4 (2):99-101.
    Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.
    4 citations
  39. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 Acm Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  40. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level agents — e.g. individual (...)
    1 citation
  41. Privacy and Digital Ethics After the Pandemic.Carissa Véliz - 2021 - Nature Electronics 4:10-11.
    The increasingly prominent role of digital technologies during the coronavirus pandemic has been accompanied by concerning trends in privacy and digital ethics. But more robust protection of our rights in the digital realm is possible in the future. -/- After surveying some of the challenges we face, I argue for the importance of diplomacy. Democratic countries must try to come together and reach agreements on minimum standards and rules regarding cybersecurity, privacy and the governance of AI.
  42. (1 other version)Hành trình ESG của Việt Nam: Thực trạng và giải pháp [Vietnam's ESG journey: Current situation and solutions].Huỳnh Diệu Ngân - 2024 - Kinh Tế Và Dự Báo.
    The Covid-19 pandemic has caused, and continues to cause, many negative impacts on the socio-economy of the entire world. In the context of a global crisis that poses many challenges, ESG is not only a general trend but also a necessary solution for addressing environmental, social, and governance issues, aiming toward the development of a circular economy (KTTH), something that all countries in the world (...)
  43. Artificial Intelligence, Control and Legitimacy.Olga Gil - manuscript
    In this work, a general framework for the analysis of the governance of artificial intelligence is presented. A dashboard developed for this analysis comes from the perspective of political theory. This dashboard allows comparisons between democratic and non-democratic regimes, useful for countries in the Global South and Western countries. The dashboard allows us to assess the key features that determine the governance model for artificial intelligence at the national level, for local governments and for other participant actors. (...)
  44. Artificial Intelligence and Legal Disruption: A New Model for Analysis.John Danaher, Hin-Yan Liu, Matthijs Maas, Luisa Scarcella, Michaela Lexer & Leonard Van Rompaey - forthcoming - Law, Innovation and Technology.
    Artificial intelligence (AI) is increasingly expected to disrupt the ordinary functioning of society. From how we fight wars or govern society, to how we work and play, and from how we create to how we teach and learn, there is almost no field of human activity which is believed to be entirely immune from the impact of this emerging technology. This poses a multifaceted problem when it comes to designing and understanding regulatory responses to AI. This article aims to: (i) (...)
    1 citation
  45. Should machines be tools or tool-users? Clarifying motivations and assumptions in the quest for superintelligence.Dan J. Bruiger - manuscript
    Much of the basic non-technical vocabulary of artificial intelligence is surprisingly ambiguous. Some key terms with unclear meanings include intelligence, embodiment, simulation, mind, consciousness, perception, value, goal, agent, knowledge, belief, optimality, friendliness, containment, machine and thinking. Much of this vocabulary is naively borrowed from the realm of conscious human experience to apply to a theoretical notion of “mind-in-general” based on computation. However, if there is indeed a threshold between mechanical tool and autonomous agent (and a tipping point for singularity), projecting (...)
  46. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good (...)
    29 citations
  47. Call Vietnam mouse-deer “cheo cheo” and let the humanities save them from extinction.Quan-Hoang Vuong & Minh-Hoang Nguyen - 2023 - Aisdl Working Papers.
    The rediscovery of the silver-backed chevrotain, a species endemic to Vietnam, in 2019, after almost 30 years of being lost to science, is a remarkable outcome for the global conservation agenda. However, alongside this happiness, there is tremendous concern for the conservation of the species, as eating wildmeat, including chevrotain, is deeply rooted in the socio-cultural values of the Vietnamese. Meanwhile, conservation plans face multiple obstacles since the species has not been listed in the list of endangered, precious, and (...)
    1 citation
  48. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to (...)
  49. There Is No Agency Without Attention.Paul Bello & Will Bridewell - 2017 - AI Magazine 38 (4):27-33.
    For decades AI researchers have built agents that are capable of carrying out tasks that require human-level or human-like intelligence. During this time, questions of how these programs compared in kind to humans have surfaced and led to beneficial interdisciplinary discussions, but conceptual progress has been slower than technological progress. Within the past decade, the term agency has taken on new import as intelligent agents have become a noticeable part of our everyday lives. Research on autonomous vehicles and personal assistants (...)
    3 citations
  50. The impact of artificial intelligence on jobs and work in New Zealand.James Maclaurin, Colin Gavaghan & Alistair Knott - 2021 - Wellington, New Zealand: New Zealand Law Foundation.
    Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 AI and (...)
1 — 50 / 955