Resilient projection-based consensus actor-critic (RPBCAC) algorithm
We implement the RPBCAC algorithm with nonlinear approximation from [1] and focus on the training performance of cooperative agents in the presence of adversaries. We aim to validate the analytical results presented in the paper and to prevent adversarial attacks that can arbitrarily degrade the cooperative network performance, including the attack studied in [2]. The repository contains the following folders:
- agents - contains resilient and adversarial agents
- environments - contains a grid world environment for the cooperative navigation task
- simulation_results - contains plots that show training performance
- training - contains functions for training agents
To train agents, execute main.py.
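For example, from the repository root:

```
python main.py
```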
Multi-agent grid world: cooperative navigation
We train five agents in a grid-world environment. Their goal is to reach their desired positions without colliding with the other agents in the network. We design a grid world of dimension 6 x 6 and use a reward function that penalizes an agent for its distance from the target and for collisions with other agents.
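For illustration, the sketch below shows a reward with this structure; the distance metric, penalty weights, and function names are assumptions for this example and may differ from the implementation in the environments folder.

```python
import numpy as np

def local_reward(agent_pos, target_pos, other_positions,
                 dist_weight=1.0, collision_penalty=5.0):
    """Illustrative reward: penalize distance to the target and collisions.

    The weights and the Manhattan distance are assumptions for this sketch;
    the environment in environments/ may use different values.
    """
    agent_pos = np.asarray(agent_pos)
    distance = np.abs(agent_pos - np.asarray(target_pos)).sum()  # Manhattan distance to target
    collisions = sum(np.array_equal(agent_pos, p) for p in other_positions)
    return -dist_weight * distance - collision_penalty * collisions

# Example on a 6 x 6 grid: one step away from the target, no collisions
print(local_reward((2, 3), (2, 4), [(0, 0), (5, 5)]))  # -> -1.0
```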
We compare the cooperative network performance under the RPBCAC algorithm with trimming parameter H=0 and H=1, where H corresponds to the number of adversarial agents assumed to be present in the network. We consider four scenarios:
- All agents are cooperative. They maximize the team-average expected returns.
- One agent is greedy as it maximizes its own expected returns. It shares parameters with other agents but does not apply consensus updates.
- One agent is faulty and does not have a well-defined objective. It shares fixed parameter values with other agents.
- One agent is strategic: it maximizes its own returns while leading the cooperative agents to minimize theirs. The strategic agent has knowledge of the other agents' rewards and updates two critic estimates (one critic is used to improve the adversary's policy, and the other to hurt the cooperative agents' performance); an illustrative sketch of this two-critic idea follows the list.
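The sketch below illustrates the two-critic idea behind the strategic agent; the specific TD targets, the use of the negated mean of the other agents' rewards, and the function names are assumptions made for this example, not the exact update rule used in the repository.

```python
def strategic_td_targets(own_reward, other_rewards, gamma, v_next_own, v_next_shared):
    """Illustrative TD targets for the strategic agent's two critics.

    Assumed for this sketch: the first critic is trained on the adversary's own
    reward and drives its policy improvement; the second is trained on the
    negated mean of the other agents' rewards and is the estimate shared with
    cooperative neighbors, steering them toward minimizing their returns.
    """
    target_own = own_reward + gamma * v_next_own
    team_reward = sum(other_rewards) / len(other_rewards)
    target_shared = -team_reward + gamma * v_next_shared
    return target_own, target_shared

# Example: adversary's reward is 1.0, cooperative agents received (0.4, 0.5, 0.6)
print(strategic_td_targets(1.0, [0.4, 0.5, 0.6], 0.95, 0.0, 0.0))  # -> (1.0, -0.5)
```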
The simulation results below demonstrate the strong performance of the RPBCAC with H=1 (right) compared to the non-resilient case with H=0 (left). Performance is measured by the episode returns.
1) All cooperative
2) Three cooperative + one greedy
3) Three cooperative + one faulty
4) Three cooperative + one malicious
The folder with resilient agents contains the RPBCAC agent as well as an agent that applies the method of trimmed means in the consensus updates (RTMCAC).
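For intuition, here is a minimal sketch of an element-wise trimmed-mean consensus step, assuming each agent discards the H smallest and H largest neighbor values per coordinate and averages the rest with its own estimate; it is a simplified stand-in for the updates implemented in the agents folder, not the repository's API.

```python
import numpy as np

def trimmed_mean_consensus(own_params, neighbor_params, H):
    """Illustrative element-wise trimmed-mean consensus step.

    For each coordinate, the H smallest and H largest neighbor values are
    discarded and the rest are averaged together with the agent's own value.
    With H = 0 this reduces to plain averaging; with H = 1 a single extreme
    (possibly adversarial) value per coordinate is ignored.
    """
    own = np.asarray(own_params, dtype=float)
    neighbors = np.sort(np.asarray(neighbor_params, dtype=float), axis=0)
    if H > 0:
        neighbors = neighbors[H:neighbors.shape[0] - H]  # drop extremes per coordinate
    return np.vstack([own[np.newaxis, :], neighbors]).mean(axis=0)

# Example: one neighbor reports extreme values; H = 1 discards them per coordinate
own = np.array([0.5, 0.5])
nbrs = [[0.4, 0.6], [0.6, 0.4], [0.5, 0.5], [100.0, -100.0]]
print(trimmed_mean_consensus(own, nbrs, H=0))  # heavily skewed by the outlier
print(trimmed_mean_consensus(own, nbrs, H=1))  # close to the honest agents' average
```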
References
[2] Figura, M., Kosaraju, K. C., and Gupta, V. Adversarial attacks in consensus-based multi-agent reinforcement learning. arXiv preprint arXiv:2103.06967, 2021.