
A Study of Partial Observability in Multi-Agent Reinforcement Learning



Paper   •   Contact Us

[Animations: Simple-Spread rollouts under four partial-observation settings]

Simple-Spread task: agents with different partial-observation settings achieve comparable, near-optimal performance. From left to right, each agent observes its 2, 4, 6, and 8 nearest agents.

Installation

$ conda env create -f environment.yml
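
Then activate the new environment before running any scripts. The environment name is defined in environment.yml; the name below is only a placeholder:

$ conda activate <env-name-from-environment.yml>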

Train the agents

$ cd scripts
$ ./run_mpe_batch.sh

Results

The pretrained Simple-Spread models are provided in results/MPE/simple_spread/ramppo/models. To render rollouts with them:

$ cd scripts
$ ./render_mpe.sh

Citation

If you use this code for academic work, please cite the following publication:

@misc{wenshuai2023less,
    title={Less Is More: Robust Robot Learning via Partially Observable Multi-Agent Reinforcement Learning}, 
    author={Wenshuai Zhao and Eetu Rantala and Joni Pajarinen and Jorge Peña Queralta},
    year={2023},
    eprint={},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}
