Machine Advisors: Integrating Large Language Models into Democratic Assemblies

Social Epistemology (forthcoming)

Abstract

Could the employment of large language models (LLMs) in place of human advisors improve the problem-solving ability of democratic assemblies? LLMs represent the most significant recent incarnation of artificial intelligence and could change the future of democratic governance. This paper assesses their potential to serve as expert advisors to democratic representatives. While LLMs promise enhanced expertise availability and accessibility, they also present specific challenges, including hallucinations, misalignment, and value imposition. After weighing LLMs’ benefits and drawbacks against those of human advisors, I argue that time-tested democratic procedures, such as deliberation and aggregation by voting, provide safeguards that are effective against human and machine advisor shortcomings alike. Additional protective measures may include custom training for advisor LLMs or boosting representatives’ competencies in query formulation. Implementation of adversarial proceedings in which LLM advisors would debate each other and provide dissenting opinions is likely to yield further epistemic benefits. Overall, promising interventions that would mitigate the LLM risks appear feasible. Machine advisors could thus empower human decision-makers to make more autonomous, higher-quality decisions. On this basis, I defend the hypothesis that LLMs’ careful integration into policymaking could augment democracy’s ability to address today’s complex social problems.

Author's Profile

Petr Špecián
Charles University, Prague
