Machine Advisors: Integrating Large Language Models into Democratic Assemblies

Abstract

Large language models (LLMs) are currently the most relevant incarnation of artificial intelligence for the future of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, and value imposition. Weighing LLMs’ benefits and drawbacks against those of their human counterparts, I argue for their careful integration to augment democracy’s ability to address complex policy issues. The paper posits that time-tested democratic procedures, such as deliberation and aggregation by voting, provide safeguards effective against the imperfections of both human and machine advisors. Additional protective measures include custom LLM training for the advisory role, boosting representatives’ competencies in query formulation, and the implementation of adversarial proceedings in which LLM advisors could debate each other and provide dissenting opinions. These measures could further mitigate the risks that LLMs present in advisory roles and empower human decision-makers toward increased autonomy and higher-quality collective choices. My conceptual exploration offers a roadmap for the co-evolution of AI and democratic institutions, setting the stage for an empirical research agenda to fine-tune the implementation specifics.

Author's Profile

Petr Špecián
Charles University, Prague

Analytics

Added to PP
2024-01-23
