Results for 'algorithmic governance'

997 found
  1. Algorithms and Posthuman Governance.James Hughes - 2017 - Journal of Posthuman Studies.
    Since the Enlightenment, there have been advocates for the rationalizing efficiency of enlightened sovereigns, bureaucrats, and technocrats. Today these enthusiasms are joined by calls for replacing or augmenting government with algorithms and artificial intelligence, a process already substantially under way. Bureaucracies are in effect algorithms created by technocrats that systematize governance, and their automation simply removes bureaucrats and paper. The growth of algorithmic governance can already be seen in the automation of social services, regulatory oversight, policing, the (...)
    1 citation
  2. On the Wisdom of Algorithmic Markets: Governance by Algorithmic Price.Pip Thornton & John Danaher - manuscript
    Leading digital platform providers such as Google and Uber construct marketplaces in which algorithms set prices. The efficiency-maximising free market credentials of this approach are touted by the companies involved and by legislators, policy makers and marketers. They have also taken root in the public imagination. In this article we challenge this understanding of algorithmically constructed marketplaces. We do so by returning to Hayek’s (1945) classic defence of the price mechanism, and by arguing that algorithmically-mediated price mechanisms do not, and (...)
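    The entry above concerns prices set by platform algorithms rather than discovered through Hayekian market exchange. As a purely illustrative sketch, the toy Python function below shows what a demand-responsive pricing rule of this general kind can look like; the function, parameters, and cap are hypothetical and are not drawn from the paper or from any platform's actual pricing code.

```python
# Hypothetical toy illustration of an algorithmically set price: a platform
# scales a base fare by the ratio of current requests to available drivers,
# capped to keep the multiplier within plausible bounds. Not any real
# platform's pricing logic, just a schematic example.

def surge_price(base_fare: float, requests: int, drivers: int,
                max_multiplier: float = 3.0) -> float:
    if drivers <= 0:                # no supply: apply the maximum surge
        multiplier = max_multiplier
    else:
        multiplier = min(max(requests / drivers, 1.0), max_multiplier)
    return round(base_fare * multiplier, 2)

print(surge_price(10.0, requests=120, drivers=80))  # moderate surge: 15.0
print(surge_price(10.0, requests=40, drivers=80))   # no surge: 10.0
```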
  3. Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
    Inspired by recent breakthroughs in predictive modeling, practitioners in both industry and government have turned to machine learning with hopes of operationalizing predictions to drive automated decisions. Unfortunately, many social desiderata concerning consequential decisions, such as justice or fairness, have no natural formulation within a purely predictive framework. In efforts to mitigate these problems, researchers have proposed a variety of metrics for quantifying deviations from various statistical parities that we might expect to observe in a fair world and offered a (...)
    10 citations
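    The abstract above refers to metrics that quantify deviations from statistical parities. As a hedged illustration of one such metric, the sketch below computes a demographic-parity gap, the difference in positive-prediction rates between two groups; the data and variable names are invented and not taken from the paper.

```python
# Minimal sketch of one common parity metric from the fairness literature:
# the demographic-parity gap, i.e. the difference in positive-prediction
# rates between two groups. Data and names are illustrative only.
from typing import Sequence

def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str],
                           group_a: str, group_b: str) -> float:
    def positive_rate(g: str) -> float:
        idx = [i for i, grp in enumerate(groups) if grp == g]
        return sum(preds[i] for i in idx) / len(idx)
    return positive_rate(group_a) - positive_rate(group_b)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups, "a", "b"))  # 0.75 - 0.25 = 0.5
```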
  4. The ethics of algorithms: key problems and solutions.Andreas Tsamados, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society.
    Research on the ethics of algorithms has grown substantially over the past decade. Alongside the exponential development and application of machine learning algorithms, new ethical problems and solutions relating to their ubiquitous use in society have been proposed. This article builds on a review of the ethics of algorithms published in 2016. The goals are to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative (...)
    42 citations
  5. Algorithmic Randomness and Probabilistic Laws.Jeffrey A. Barrett & Eddy Keming Chen - manuscript
    We consider two ways one might use algorithmic randomness to characterize a probabilistic law. The first is a generative chance* law. Such laws involve a nonstandard notion of chance. The second is a probabilistic* constraining law. Such laws impose relative frequency and randomness constraints that every physically possible world must satisfy. While each notion has virtues, we argue that the latter has advantages over the former. It supports a unified governing account of non-Humean laws and provides independently motivated solutions (...)
  6. Big Tech, Algorithmic Power, and Democratic Control.Ugur Aytac - forthcoming - Journal of Politics.
    This paper argues that instituting Citizen Boards of Governance (CBGs) is the optimal strategy to democratically contain Big Tech’s algorithmic powers in the digital public sphere. CBGs are bodies of randomly selected citizens that are authorized to govern the algorithmic infrastructure of Big Tech platforms. The main advantage of CBGs is to tackle the concentrated powers of private tech corporations without giving too much power to governments. I show why this is a better approach than ordinary state (...)
  7. A Framework for Assurance Audits of Algorithmic Systems.Benjamin Lange, Khoa Lam, Borhane Hamelin, Jovana Davidovic, Shea Brown & Ali Hasan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    An increasing number of regulations propose the notion of ‘AI audits’ as an enforcement mechanism for achieving transparency and accountability for artificial intelligence (AI) systems. Despite some converging norms around various forms of AI auditing, auditing for the purpose of compliance and assurance currently has little to no agreed-upon practices, procedures, taxonomies, and standards. We propose the ‘criterion audit’ as an operationalizable compliance and assurance external audit framework. We model elements of this approach after financial auditing practices, and argue (...)
  8. <null>me<null>: Algorithmic Governmentality and the Notion of Subjectivity in Project Itoh's Harmony.Fatemeh Savaedi & Maryam Alavi Nia - 2021 - Journal of Science Fiction and Philosophy 4:1-19.
    Algorithmic governmentality is a new form of political governance interconnected with technology and computation. By coining the term “algorithmic governmentality,” Antoinette Rouvroy argues that this mode of governance reduces everything to data, and people are no longer individuals but dividuals (able to be divided) or readable data profiles. Implementing the concept of algorithmic governmentality, the current study analyses Project Itoh’s award-winning novel Harmony in terms of such relevant concepts as “subjectivity,” “infra-individuality” and “control,” as suggested (...)
  9. Genealogy of Algorithms: Datafication as Transvaluation.Virgil W. Brower - 2020 - le Foucaldien 6 (1):1-43.
    This article investigates religious ideals persistent in the datafication of information society. Its nodal point is Thomas Bayes, after whom Laplace names the primal probability algorithm. It reconsiders their mathematical innovations with Laplace's providential deism and Bayes' singular theological treatise. Conceptions of divine justice one finds among probability theorists play no small part in the algorithmic data-mining and microtargeting of Cambridge Analytica. Theological traces within mathematical computation are emphasized as the vantage over large numbers shifts to weights beyond enumeration (...)
    2 citations
  10. How to design a governable digital health ecosystem.Jessica Morley & Luciano Floridi - manuscript
    It has been suggested that to overcome the challenges facing the UK’s National Health Service (NHS) of an ageing population and reduced available funding, the NHS should be transformed into a more informationally mature and heterogeneous organisation, reliant on data-based and algorithmically-driven interactions between human, artificial, and hybrid (semi-artificial) agents. This transformation process would offer significant benefit to patients, clinicians, and the overall system, but it would also rely on a fundamental transformation of the healthcare system in a way that (...)
    2 citations
  11. Metanormative Principles and Norm Governed Social Interaction.Berislav Žarnić & Gabriela Bašić - 2014 - Revus 22:105-120.
    Critical examination of Alchourrón and Bulygin’s set-theoretic definition of normative system shows that deductive closure is not an inevitable property. Following von Wright’s conjecture that axioms of standard deontic logic describe perfection-properties of a norm-set, a translation algorithm from the modal to the set-theoretic language is introduced. The translations reveal that the plausibility of metanormative principles rests on different grounds. Using a methodological approach that distinguishes the actor roles in a norm governed interaction, it has been shown that metanormative principles (...)
    3 citations
  12. The Relations Between Pedagogical and Scientific Explanations of Algorithms: Case Studies from the French Administration.Maël Pégny - manuscript
    The opacity of some recent Machine Learning (ML) techniques has raised fundamental questions about their explainability and created a whole domain dedicated to Explainable Artificial Intelligence (XAI). However, most of the literature has been dedicated to explainability as a scientific problem dealt with using typical methods of computer science, from statistics to UX. In this paper, we focus on explainability as a pedagogical problem emerging from the interaction between lay users and complex technological systems. We defend an empirical methodology based on (...)
  13. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed (...)
  14. The obsolescence of politics: Rereading Günther Anders’s critique of cybernetic governance and integral power in the digital age.Anna-Verena Nosthoff & Felix Maschewski - 2019 - Thesis Eleven 153 (1):75-93.
    Following media-theoretical studies that have characterized digitization as a process of all-encompassing cybernetization, this paper will examine the timely and critical potential of Günther Anders’s oeuvre vis-à-vis the ever-increasing power of cybernetic devices and networks. Anders has witnessed and negotiated the process of cybernetization from its very beginning, having criticized its tendency to automate and expand, as well as its circular logic and ‘integral power’, including disruptive consequences for the constitution of the political and the social. In this vein, Anders’s (...)
    3 citations
  15. How to Save Face & the Fourth Amendment: Developing an Algorithmic Auditing and Accountability Industry for Facial Recognition Technology in Law Enforcement.Lin Patrick - 2023 - Albany Law Journal of Science and Technology 33 (2):189-235.
    For more than two decades, police in the United States have used facial recognition to surveil civilians. Local police departments deploy facial recognition technology to identify protestors’ faces while federal law enforcement agencies quietly amass driver’s license and social media photos to build databases containing billions of faces. Yet, despite the widespread use of facial recognition in law enforcement, there are neither federal laws governing the deployment of this technology nor regulations setting standards with respect to its development. To make (...)
  16. Res Publica ex Machina: On Neocybernetic Governance and the End of Politics.Anna-Verena Nosthoff & Felix Maschewski - 2020 - In Let's Get Physical, INC Reader. Amsterdam: pp. 196-211.
    The article critically investigates various approaches to “smart” governance, from algorithmic regulation (O’Reilly), fluid technocracy (P. Khanna), “smart states” (Noveck), nudge theory (Thaler/ Sunstein) and social physics (Alex Pentland). It specifically evaluates the cybernetic origins of these approaches and interprets them as pragmatic actualisations of earlier cybernetic models of the state (Lang, Deutsch) against the current background of surveillance capitalism. The authors argue that cybernetic thinking rests on a reductive model of participation and a limited concept of “the (...)
  17. Freedom in an Age of Algocracy.John Danaher - 2020 - In Shannon Vallor (ed.), The Oxford Handbook of Philosophy of Technology. New York, NY: Oxford University Press, USA.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening (...)
    1 citation
  18. Artificial intelligence: opportunities and implications for the future of decision making.U. K. Government & Office for Science - 2016
    Artificial intelligence has arrived. In the online world it is already a part of everyday life, sitting invisibly behind a wide range of search engines and online commerce sites. It offers huge potential to enable more efficient and effective business and government but the use of artificial intelligence brings with it important questions about governance, accountability and ethics. Realising the full potential of artificial intelligence and avoiding possible adverse consequences requires societies to find satisfactory answers to these questions. This (...)
  19. Computational Transformation of the Public Sphere: Theories and Cases.S. M. Amadae (ed.) - 2020 - Helsinki: Faculty of Social Sciences, University of Helsinki.
    This book is an edited collection of original research papers on the digital revolution of the public sphere and governance. It covers cyber governance and the securitization of cyber security in Finland. It investigates the cases of Brexit, the 2016 US presidential election of Donald Trump, and the 2019 presidential election of Volodymyr Zelensky. It examines the environmental concerns of climate change and greenwashing, and the impact of digital communication giving rise to the #MeToo and Incel (...)
  20. The Threat of Algocracy: Reality, Resistance and Accommodation.John Danaher - 2016 - Philosophy and Technology 29 (3):245-268.
    One of the most noticeable trends in recent years has been the increasing reliance of public decision-making processes on algorithms, i.e. computer-programmed step-by-step instructions for taking a given set of inputs and producing an output. The question raised by this article is whether the rise of such algorithmic governance creates problems for the moral or political legitimacy of our public decision-making processes. Ignoring common concerns with data protection and privacy, it is argued that algorithmic governance does (...)
    49 citations
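    The abstract above characterises an algorithm as computer-programmed, step-by-step instructions that take a set of inputs and produce an output. The toy Python function below illustrates that characterisation with a made-up benefit-eligibility rule of the sort a public agency might automate; the criteria and thresholds are hypothetical and are not taken from the article.

```python
# Toy illustration of an algorithm in the article's sense: a fixed,
# step-by-step rule mapping a set of inputs to an output. The eligibility
# criteria and thresholds below are entirely hypothetical.

def benefit_eligibility(income: float, dependants: int, assets: float) -> str:
    threshold = 15000 + 5000 * dependants   # step 1: compute income ceiling
    if income > threshold:                  # step 2: income test
        return "ineligible"
    if assets > 50000:                      # step 3: asset test
        return "ineligible"
    return "eligible"                       # step 4: default outcome

print(benefit_eligibility(income=18000, dependants=2, assets=12000))  # eligible
```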
  21. Digital Domination: Social Media and Contestatory Democracy.Ugur Aytac - 2022 - Political Studies.
    This paper argues that social media companies’ power to regulate communication in the public sphere illustrates a novel type of domination. The idea is that, since social media companies can partially dictate the terms of citizens’ political participation in the public sphere, they can arbitrarily interfere with the choices individuals make qua citizens. I contend that social media companies dominate citizens in two different ways. First, I focus on the cases in which social media companies exercise direct control over political (...)
    5 citations
  22. Digitocracy: Ruling and Being Ruled.Alfonso Ballesteros - 2020 - Philosophies 5 (2):9.
    Digitalisation is attracting much scholarly attention at present. However, scholars often take its benefits for granted, overlooking the essential question: “Does digital technology make us better?” This paper aims to help fill this gap by examining digitalisation as a form of government (digitocracy) and the way it shapes a new kind of man: _animal digitalis_. I argue that the digitalised man is animal-like rather than machine-like. This man does not use efficient and cold machine-like language, but is rather emotionalised through (...)
  23. Democratic Obligations and Technological Threats to Legitimacy: PredPol, Cambridge Analytica, and Internet Research Agency.Alan Rubel, Clinton Castro & Adam Pham - 2021 - In Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge: Cambridge University Press. pp. 163-183.
    So far in this book, we have examined algorithmic decision systems from three autonomy-based perspectives: in terms of what we owe autonomous agents (chapters 3 and 4), in terms of the conditions required for people to act autonomously (chapters 5 and 6), and in terms of the responsibilities of agents (chapter 7). In this chapter we turn to the ways in which autonomy underwrites democratic governance. Political authority, which is to say the ability of a government (...)
    3 citations
  24. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
    3 citations
  25. É Possível Evitar Vieses Algorítmicos?Carlos Barth - 2021 - Revista de Filosofia Moderna E Contemporânea 8 (3):39-68.
    Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender and other kinds of bias, which are typically understood as technical problems. Here we try to show that: 1) to get rid of such biases, we need a system that can understand the structure of human activities; and 2) to create such a system, we need to solve foundational problems of AI, such as the common-sense problem. Additionally, when informational platforms use these (...)
  26. "We have to Coordinate the Flow" oder: Die Sozialphysik des Anstoßes. Zum Steuerungs- und Regelungsdenken neokybernetischer Politiken.Anna-Verena Nosthoff & Felix Maschewski - 2019 - In Jahrbuch Technikphilosophie 2019.
    The essay discusses the steering and regulation thinking of contemporary neo-cybernetic governance approaches (Pentland/Khanna/Noveck/Thaler & Sunstein), with particular attention to early models of political cybernetics. The former are characterized as a further development of cybernetic theories of the state, and their implicit cybernetic background assumptions are singled out for critique: the paradigm of a controllable freedom, the fixation on systemic ultrastability, and processes of dynamic, self-regulating adaptation tied to the anthropological premise of Homo imitans ground, so the thesis runs, a comprehensive “algorithmic governmentality” and with it the potential for an integral form of domination. (...)
    1 citation
  27. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach.Corinne Cath, Sandra Wachter, Brent Mittelstadt, Mariarosaria Taddeo & Luciano Floridi - 2018 - Science and Engineering Ethics 24 (2):505-528.
    In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good (...)
    27 citations
  28. People, posts, and platforms: reducing the spread of online toxicity by contextualizing content and setting norms.Isaac Record & Boaz Miller - 2022 - Asian Journal of Philosophy 1 (2):1-19.
    We present a novel model of individual people, online posts, and media platforms to explain the online spread of epistemically toxic content such as fake news and suggest possible responses. We argue that a combination of technical features, such as the algorithmically curated feed structure, and social features, such as the absence of stable social-epistemic norms of posting and sharing in social media, is largely responsible for the unchecked spread of epistemically toxic content online. Sharing constitutes a distinctive communicative act, (...)
    3 citations
  29. Incommensurability and Theory Change.Howard Sankey - 2011 - In Steven D. Hales (ed.), A Companion to Relativism. Oxford: Wiley-Blackwell. pp. 456-474.
    The paper explores the relativistic implications of the thesis of incommensurability. A semantic form of incommensurability due to semantic variation between theories is distinguished from a methodological form due to variation in methodological standards between theories. Two responses to the thesis of semantic incommensurability are dealt with: the first challenges the idea of untranslatability to which semantic incommensurability gives rise; the second holds that relations of referential continuity eliminate semantic incommensurability. It is then argued that methodological incommensurability poses little risk (...)
    9 citations
  30. Sense and the computation of reference.Reinhard Muskens - 2004 - Linguistics and Philosophy 28 (4):473 - 504.
    The paper shows how ideas that explain the sense of an expression as a method or algorithm for finding its reference, foreshadowed in Frege’s dictum that sense is the way in which a referent is given, can be formalized on the basis of the ideas in Thomason (1980). To this end, the function that sends propositions to truth values or sets of possible worlds in Thomason (1980) must be replaced by a relation and the meaning postulates governing the behaviour of (...)
    25 citations
  31. Mistakes in medical ontologies: Where do they come from and how can they be detected?Werner Ceusters, Barry Smith, Anand Kumar & Christoffel Dhaen - 2004 - Studies in Health Technology and Informatics 102:145-164.
    We present the details of a methodology for quality assurance in large medical terminologies and describe three algorithms that can help terminology developers and users to identify potential mistakes. The methodology is based in part on linguistic criteria and in part on logical and ontological principles governing sound classifications. We conclude by outlining the results of applying the methodology in the form of a taxonomy of the different types of errors and potential errors detected in SNOMED-CT.
    9 citations
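    The methodology above rests partly on logical and ontological principles governing sound classifications. As a hedged illustration (not one of the authors' three algorithms), the sketch below implements one such basic check: an is-a hierarchy should be acyclic, so any term that turns out to be its own ancestor signals a modelling mistake.

```python
# Illustrative sketch (not the authors' algorithms): one basic logical check
# on a terminology is that its is-a hierarchy must be acyclic. A term that
# is (transitively) its own ancestor indicates a modelling error.

def find_cycle_terms(is_a: dict[str, list[str]]) -> set[str]:
    cyclic = set()
    for start in is_a:
        stack, seen = list(is_a.get(start, [])), set()
        while stack:
            parent = stack.pop()
            if parent == start:          # walked back to the starting term
                cyclic.add(start)
                break
            if parent not in seen:
                seen.add(parent)
                stack.extend(is_a.get(parent, []))
    return cyclic

hierarchy = {"viral pneumonia": ["pneumonia"],
             "pneumonia": ["lung disease"],
             "lung disease": ["viral pneumonia"]}   # erroneous cycle
print(find_cycle_terms(hierarchy))
```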
  32. Platform cooperativism and freedom as non-domination in the gig economy.Tim Christiaens - 2024 - European Journal of Political Theory.
    While the challenges workers face in the gig economy are now well-known, reflections on emancipatory solutions in political philosophy are still underdeveloped. Some have pleaded for enhancing workers' bargaining power through unionisation; others for enhancing exit options in the labour market. Both strategies, however, come with unintended side-effects and do not exhaust the full potential for worker self-government present in the digital gig economy. Using the republican theory of freedom as non-domination, I argue that G.D.H. Cole's 20th-century defence of (...)
  33. Isbell Conjugacy for Developing Cognitive Science.Venkata Rayudu Posina & Sisir Roy - manuscript
    What is cognition? Equivalently, what is cognition good for? Or, what is it that would not be but for human cognition? But for human cognition, there would not be science. Based on this kinship between individual cognition and collective science, here we put forward Isbell conjugacy---the adjointness between objective geometry and subjective algebra---as a scientific method for developing cognitive science. We begin with the correspondence between categorical perception and category theory. Next, we show how the Gestalt maxim is subsumed by (...)
    1 citation
  34. Key ethical challenges in the European Medical Information Framework.Luciano Floridi, Christoph Luetge, Ugo Pagallo, Burkhard Schafer, Peggy Valcke, Effy Vayena, Janet Addison, Nigel Hughes, Nathan Lea, Caroline Sage, Bart Vannieuwenhuyse & Dipak Kalra - 2019 - Minds and Machines 29 (3):355-371.
    The European Medical Information Framework project, funded through the IMI programme, has designed and implemented a federated platform to connect health data from a variety of sources across Europe, to facilitate large scale clinical and life sciences research. It enables approved users to securely analyse multiple, diverse data via a single portal, thereby mediating research opportunities across a large quantity of research data. EMIF developed a code of practice to ensure the privacy protection of data subjects, protect the interests of (...)
    3 citations
  35. Impact of enterprise digitalization on green innovation performance under the perspective of production and operation.Hailin Li, Hongqin Tang, Wenhao Zhou & Xiaoji Wan - 2022 - Frontiers in Public Health 10:971971.
    Introduction: How enterprises should practice digitalization transformation to effectively improve green innovation performance is related to the sustainable development of enterprises and the economy, which is an important issue that needs to be clarified. Methods: This research uses the perspective of production and operation to deconstruct the digitalization of industrial listed enterprises from 2016 to 2020 into six features. A variety of machine learning methods are used, including DBSCAN, CART and other algorithms, to specifically explore the complex impact of (...)
    2 citations
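    The abstract above names DBSCAN and CART among the machine learning methods used. The sketch below shows a minimal scikit-learn run of both on synthetic data; the features, labels, and parameters are placeholders and do not reproduce the study's actual variables or settings.

```python
# Minimal sketch of the two methods the abstract names, DBSCAN clustering and
# a CART decision tree, run on synthetic placeholder data.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # 200 firms, 6 digitalization features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for "high green innovation"

clusters = DBSCAN(eps=1.5, min_samples=5).fit_predict(X)              # group similar firms
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)  # CART-style tree

print("cluster labels found:", sorted(set(clusters)))
print("tree training accuracy:", tree.score(X, y))
```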
  36. Accountability in Artificial Intelligence.Prof Olga Gil - manuscript
    This work stresses the importance of AI accountability to citizens and explores how a fourth, independent branch of government, or equivalent institutions, could be empowered to ensure that algorithms in today's democracies conform to constitutional principles. The purpose of this fourth branch of government in modern democracies could be to enshrine accountability of artificial intelligence development, including software-enabled technologies, and the implementation of policies based on big data within a wider democratic regime context. The work draws on Philosophy of Science, Political Theory (...)
  37. Sociotechnical Infrastructures of Dominion in Stefan L. Sorgner’s We Have Always Been Cyborgs.Steven Umbrello - 2023 - Etica & Politica / Ethics & Politics 25 (1):336-351.
    In We Have Always Been Cyborgs (2021), Stefan L. Sorgner argues that, given the growing economic burden of desirable welfare programs, in order for Western democratic societies to continue to flourish it will be necessary that they establish some form of algocracy (i.e., governance by algorithm). This is argued to be necessary not only to maintain the sustainability and efficiency of these programs, but also because further integration of humans into technical systems provides the (...)
  38. Algorithmic neutrality.Milo Phillips-Brown - manuscript
    Algorithms wield increasing control over our lives—over which jobs we get, whether we're granted loans, what information we're exposed to online, and so on. Algorithms can, and often do, wield their power in a biased way, and much work has been devoted to algorithmic bias. In contrast, algorithmic neutrality has gone largely neglected. I investigate three questions about algorithmic neutrality: What is it? Is it possible? And when we have it in mind, what can we learn about (...)
    1 citation
  39. „Passivität im Kostüm der Aktivität“ – Über Günther Anders’ Kritik kybernetischer Politik im Zeitalter der „totalen Maschine“.Anna-Verena Nosthoff - 2018 - Behemoth. A Journal on Civilisation 11 (1):8–25.
    Various media-theoretical studies have recently characterized the fourth industrial revolution as a process of all-encompassing technicization and cybernetization. Against this background, this paper seeks to show the timely and critical potential of Günther Anders’s magnum opus Die Antiquiertheit des Menschen vis-à-vis the ever-increasing power of cybernetic devices and networks. Anders has both witnessed, and negotiated, the process of cybernetization from its very beginning, having criticised not only its tendency of automatization and expansion, but also the circular logic and the “integral (...)
  40. Automatic Face Mask Detection Using Python.M. Madan Mohan - 2021 - Journal of Science Technology and Research (JSTAR) 2 (1):91-100.
    The coronavirus COVID-19 pandemic is causing a global health crisis, so an effective protection method is wearing a face mask in public areas, according to the World Health Organization (WHO). The COVID-19 pandemic forced governments across the world to impose lockdowns to prevent virus transmission. Reports indicate that wearing face masks while at work clearly reduces the risk of transmission. The paper presents an efficient and economical approach that uses AI to create a safe environment in a manufacturing setup. A hybrid model using (...)
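    The paper above describes a hybrid AI model for detecting face masks. As a rough, hedged sketch of the kind of binary mask / no-mask image classifier such a system might train, the snippet below builds a small convolutional network in TensorFlow/Keras; the architecture, image size, and directory layout (data/train with one subfolder per class) are assumptions for illustration, not the paper's actual hybrid model.

```python
# Sketch of a small mask / no-mask image classifier. Paths, image size, and
# architecture are assumptions, not the paper's model.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)  # assumed layout: one subfolder per class

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # mask vs. no mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```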
  41. What Do Technology and Artificial Intelligence Mean Today?Scott H. Hawley & Elias Kruger - forthcoming - In Hector Fernandez (ed.), Sociedad Tecnológica y Futuro Humano, vol. 1: Desafíos conceptuales. pp. 17.
    Technology and Artificial Intelligence, both today and in the near future, are dominated by automated algorithms that combine optimization with models based on the human brain to learn, predict, and even influence the large-scale behavior of human users. Such applications can be understood to be outgrowths of historical trends in industry and academia, yet have far-reaching and even unintended consequences for social and political life around the world. Countries in different parts of the world take different regulatory views for the (...)
  42. Disambiguating Algorithmic Bias: From Neutrality to Justice.Elizabeth Edenberg & Alexandra Wood - 2023 - In Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield & Alex John (eds.), AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. Association for Computing Machinery. pp. 691-704.
    As algorithms have become ubiquitous in consequential domains, societal concerns about the potential for discriminatory outcomes have prompted urgent calls to address algorithmic bias. In response, a rich literature across computer science, law, and ethics is rapidly proliferating to advance approaches to designing fair algorithms. Yet computer scientists, legal scholars, and ethicists are often not speaking the same language when using the term ‘bias.’ Debates concerning whether society can or should tackle the problem of algorithmic bias are hampered (...)
  43. Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept.Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
    22 citations
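    The study above sets up its advisory system with fuzzy cognitive maps. As a schematic, hedged illustration of the general mechanism (not the authors' trained map), the sketch below iterates the basic update rule in which concept activations are propagated through a weighted influence matrix and squashed into [0, 1]; the concepts and weights are invented.

```python
# Schematic fuzzy-cognitive-map update: concept activations are repeatedly
# pushed through a weighted influence matrix and squashed to [0, 1].
# Concepts and weights below are invented for illustration only.
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def run_fcm(weights: np.ndarray, activations: np.ndarray, steps: int = 20) -> np.ndarray:
    # Each concept's next activation is the squashed, weighted sum of the
    # activations of the concepts that influence it.
    for _ in range(steps):
        activations = sigmoid(weights.T @ activations)
    return activations

# Invented concepts: [patient autonomy, expected benefit, recommend treatment]
W = np.array([[0.0, 0.0, 0.7],    # autonomy pushes the recommendation up
              [0.0, 0.0, 0.9],    # expected benefit pushes it up more strongly
              [0.0, 0.0, 0.0]])
state = np.array([0.8, 0.3, 0.0])  # initial reading of the case
print(run_fcm(W, state))           # activation of each concept after updating
```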
  44. Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
    Algorithms can now identify patterns and correlations in (big) datasets and predict outcomes based on those identified patterns and correlations. With the use of machine learning techniques and big data, decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet, algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to see algorithms as biased. While researchers (...)
    26 citations
  45. Algorithmic Profiling as a Source of Hermeneutical Injustice.Silvia Milano & Carina Prunkl - forthcoming - Philosophical Studies:1-19.
    It is well-established that algorithms can be instruments of injustice. It is less frequently discussed, however, how current modes of AI deployment often make the very discovery of injustice difficult, if not impossible. In this article, we focus on the effects of algorithmic profiling on epistemic agency. We show how algorithmic profiling can give rise to epistemic injustice through the depletion of epistemic resources that are needed to interpret and evaluate certain experiences. By doing so, we not only (...)
    1 citation
  46. Governing Without A Fundamental Direction of Time: Minimal Primitivism about Laws of Nature.Eddy Keming Chen & Sheldon Goldstein - forthcoming - In Yemima Ben-Menahem (ed.), Rethinking Laws of Nature. Springer. pp. 21-64.
    The Great Divide in metaphysical debates about laws of nature is between Humeans, who think that laws merely describe the distribution of matter, and non-Humeans, who think that laws govern it. The metaphysics can place demands on the proper formulations of physical theories. It is sometimes assumed that the governing view requires a fundamental / intrinsic direction of time: to govern, laws must be dynamical, producing later states of the world from earlier ones, in accord with the fundamental direction of (...)
    7 citations
  47. Algorithms, Agency, and Respect for Persons.Alan Rubel, Clinton Castro & Adam Pham - 2020 - Social Theory and Practice 46 (3):547-572.
    Algorithmic systems and predictive analytics play an increasingly important role in various aspects of modern life. Scholarship on the moral ramifications of such systems is in its early stages, and much of it focuses on bias and harm. This paper argues that in understanding the moral salience of algorithmic systems it is essential to understand the relation between algorithms, autonomy, and agency. We draw on several recent cases in criminal sentencing and K–12 teacher evaluation to outline four key (...)
    4 citations
  48. Algorithms and the Individual in Criminal Law.Renée Jorgensen - 2022 - Canadian Journal of Philosophy 52 (1):1-17.
    Law-enforcement agencies are increasingly able to leverage crime statistics to make risk predictions for particular individuals, employing a form of inference that some condemn as violating the right to be “treated as an individual.” I suggest that the right encodes agents’ entitlement to a fair distribution of the burdens and benefits of the rule of law. Rather than precluding statistical prediction, it requires that citizens be able to anticipate which variables will be used as predictors and act intentionally to avoid (...)
    4 citations
  49. Algorithmic paranoia: the temporal governmentality of predictive policing.Bonnie Sheehey - 2019 - Ethics and Information Technology 21 (1):49-58.
    In light of the recent emergence of predictive techniques in law enforcement to forecast crimes before they occur, this paper examines the temporal operation of power exercised by predictive policing algorithms. I argue that predictive policing exercises power through a paranoid style that constitutes a form of temporal governmentality. Temporality is especially pertinent to understanding what is ethically at stake in predictive policing as it is continuous with a historical racialized practice of organizing, managing, controlling, and stealing time. After first (...)
    9 citations
  50. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    2 citations
1 — 50 / 997