Causal Inference for Mean Field Multi-Agent Reinforcement Learning

International Journal of Multidisciplinary Research in Science, Engineering, Technology and Management 12 (12):10956-10959 (2024)

Abstract

Multi-agent reinforcement learning (MARL) has gained significant attention due to its applications in complex, interactive environments. Traditional MARL approaches often struggle with scalability and non-stationarity as the number of agents increases. Mean Field Reinforcement Learning (MFRL) provides a scalable alternative by approximating pairwise interactions with aggregated statistics over the population. However, existing MFRL models fail to capture the causal relationships underlying agent interactions, which can lead to suboptimal decision-making. In this work, we introduce Causal Mean Field Multi-Agent Reinforcement Learning (Causal-MFRL), which integrates causal inference techniques into the mean field framework. By leveraging causal graphs and counterfactual reasoning, Causal-MFRL improves policy learning and enhances the interpretability of agent behaviors. We evaluate our approach on standard MARL benchmarks, demonstrating superior efficiency, robustness, and generalization.
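The combination of ideas sketched in the abstract — mean field Q-learning, where each agent conditions on an aggregate statistic of its neighbors' actions, restricted by a causal graph to only those neighbors that causally influence the agent's reward — can be illustrated with a minimal tabular sketch. Everything below is an assumption for illustration: the discretization of the mean action by its argmax bin, the known binary `causal_mask` (in the paper's setting such structure would presumably be learned via causal discovery), and all variable names; none of it is the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_actions, n_states = 4, 3, 5

# Tabular Q-values per agent, indexed as Q[agent, state, action, mean_action_bin].
# The mean action of the causal neighborhood is discretized to its argmax bin,
# a simplifying assumption so the table stays finite.
Q = rng.normal(size=(n_agents, n_states, n_actions, n_actions))

# Hypothetical causal adjacency: causal_mask[i, j] = 1 if agent j's action is
# treated as a causal parent of agent i's reward. Here it is simply "everyone
# except myself"; a learned causal graph would be sparser.
causal_mask = np.ones((n_agents, n_agents)) - np.eye(n_agents)

def causal_mean_action(actions, i):
    """Empirical mean action distribution over agent i's causal parents only."""
    onehot = np.eye(n_actions)[actions]        # shape (n_agents, n_actions)
    w = causal_mask[i]                         # 0/1 weights over other agents
    return w @ onehot / max(w.sum(), 1.0)      # normalized mixture over actions

def q_update(i, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """One mean-field Q-learning step for agent i, conditioning on the
    causally-masked mean action rather than the full joint action."""
    mu_bin = int(causal_mean_action(actions, i).argmax())
    target = r + gamma * Q[i, s_next, :, mu_bin].max()
    Q[i, s, a, mu_bin] += alpha * (target - Q[i, s, a, mu_bin])
    return Q[i, s, a, mu_bin]

# Usage: one update for agent 0 after observing a joint action profile.
joint_actions = rng.integers(0, n_actions, size=n_agents)
q_update(0, s=2, a=1, r=1.0, s_next=3, actions=joint_actions)
```

The key design point the sketch tries to convey is the interface change: standard MFRL averages over all neighbors, whereas the causal variant averages only over the masked parents, so counterfactuals ("what if parent j had acted differently?") can be posed against a smaller, structured conditioning set.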
