Causal-MBRL

A toolkit for Causal Model-based Reinforcement Learning.

cmrl (short for Causal-MBRL) is a toolbox that facilitates the development of causal model-based reinforcement learning algorithms. It uses Stable-Baselines3 as its model-free engine and allows flexible use of causal models.

cmrl is inspired by MBRL-Lib. Unlike MBRL-Lib, cmrl focuses on the causal characteristics of the model: it supports learning different types of causal models and can run any model-free algorithm on top of them. By default it uses Emei, a re-encapsulation of OpenAI Gym, as the reinforcement learning environment.

Main Features

Thanks to the decoupling between the environment model and the model-free algorithm, cmrl supports all on-policy and off-policy reinforcement learning algorithms in Stable-Baselines3 and SB3-Contrib. It also remains compatible with a number of Stable-Baselines3 utilities (e.g. Logger, ReplayBuffer, Callback).
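For example, any Stable-Baselines3 algorithm can be trained in the usual way. The sketch below uses Pendulum-v1 as a stand-in environment; in cmrl, a VecFakeEnv built from a learned dynamics model would take its place. This is an illustrative sketch, not cmrl's actual example code:

from stable_baselines3 import SAC
from stable_baselines3.common.env_util import make_vec_env

# Stand-in environment: a cmrl VecFakeEnv would be used here instead,
# since it follows the same SB3 VecEnv interface.
env = make_vec_env("Pendulum-v1", n_envs=1)

agent = SAC("MlpPolicy", env, verbose=1)
agent.learn(total_timesteps=10_000)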

Although it supports many model-free algorithms, the focus of cmrl is learning causal models. cmrl uses VecFakeEnv to build a fake environment and conducts online reinforcement learning on it. Each VecFakeEnv corresponds to a dynamics model, which is composed of three parts: Transition, Reward-Mech (short for reward mechanism) and Termination-Mech. See the class diagram:

classDiagram

    BaseDynamics o-- BaseTransition
    BaseDynamics o-- BaseRewardMech
    BaseDynamics o-- BaseTerminationMech

    class BaseDynamics {
        + transition: BaseTransition
        + reward_mech: BaseRewardMech
        + termination_mech: BaseTerminationMech
        + transition_graph: BaseGraph
        + reward_mech_graph: BaseGraph
        + termination_mech_graph: BaseGraph
        + learn()
        + save()
        + load()
    }

    class BaseTransition {
        + obs_size: int
        + action_size: int
        + forward()
    }

    class BaseRewardMech {
        + obs_size: int
        + action_size: int
        + forward()
    }

    class BaseTerminationMech {
        + obs_size: int
        + action_size: int
        + forward()
    }
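The snippet below is a minimal, self-contained sketch of this composition. The names are illustrative only and do not match cmrl's actual BaseDynamics API, which additionally carries the causal graphs and learn/save/load methods shown above:

from dataclasses import dataclass
from typing import Callable

import numpy as np

# Each mechanism maps (obs, action) to one quantity of the imagined step.
Mechanism = Callable[[np.ndarray, np.ndarray], np.ndarray]

@dataclass
class ToyDynamics:
    transition: Mechanism        # (obs, action) -> next_obs
    reward_mech: Mechanism       # (obs, action) -> reward
    termination_mech: Mechanism  # (obs, action) -> terminal flag

    def step(self, obs: np.ndarray, action: np.ndarray):
        # One imagined environment step, as a fake environment would perform it.
        next_obs = self.transition(obs, action)
        reward = self.reward_mech(obs, action)
        terminal = self.termination_mech(obs, action)
        return next_obs, reward, terminal

A VecFakeEnv serves such imagined steps to the model-free learner in place of the real environment.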

cmrl encapsulates the neural networks commonly used in causal model-based RL, including PlainEnsembleMLP, ExternalMaskEnsembleMLP and so on. Every mechanism in a dynamics model should be a subclass of both one of these MLPs and its corresponding mechanism base class. For example, see the class diagram of PlainTransition:

classDiagram

    EnsembleMLP <|--  PlainEnsembleMLP
    BaseTransition  <|-- PlainTransition
    PlainEnsembleMLP <|-- PlainTransition

    class PlainTransition {
        + obs_size: int
        + action_size: int
        + forward()
    }

    class BaseTransition {
        + obs_size: int
        + action_size: int
        + forward()
    }

    class PlainEnsembleMLP {
        + ensemble_num: int
        + elite_num: int
        + save()
        + load()
    }

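The sketch below illustrates this multiple-inheritance pattern. It is simplified (a plain MLP stands in for cmrl's ensemble networks) and the names are illustrative, not the library's actual classes:

import torch
import torch.nn as nn

class BaseTransition:
    """Interface that every transition mechanism implements."""

    def __init__(self, obs_size: int, action_size: int, **kwargs):
        super().__init__(**kwargs)
        self.obs_size = obs_size
        self.action_size = action_size

class PlainMLP(nn.Module):
    """A plain fully connected network (stand-in for cmrl's ensemble MLPs)."""

    def __init__(self, in_size: int, out_size: int, hidden_size: int = 200, **kwargs):
        super().__init__(**kwargs)
        self.net = nn.Sequential(
            nn.Linear(in_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, out_size),
        )

class ToyPlainTransition(BaseTransition, PlainMLP):
    """Concrete transition mechanism: predicts next_obs from (obs, action)."""

    def __init__(self, obs_size: int, action_size: int):
        super().__init__(
            obs_size=obs_size,
            action_size=action_size,
            in_size=obs_size + action_size,
            out_size=obs_size,
        )

    def forward(self, obs: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, action], dim=-1))

# usage: model = ToyPlainTransition(obs_size=4, action_size=1)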

Installation

Install by cloning from GitHub

# clone the repository
git clone https://github.com/FrankTianTT/causal-mbrl.git
cd causal-mbrl
# create conda env
conda create -n cmrl python=3.8
conda activate cmrl
# install cmrl and its dependent packages
pip install -e .
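
As an optional sanity check, verify that the package can be imported from the new environment:

python -c "import cmrl"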

If CUDA is not yet set up on your device, it is convenient to install CUDA and PyTorch directly from conda (refer to the PyTorch documentation):

# for example, in the case of cuda=11.3
conda install pytorch cudatoolkit=11.3 -c pytorch
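
You can then confirm that PyTorch detects the GPU:

python -c "import torch; print(torch.cuda.is_available())"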

Install using pip

Coming soon.

Usage

python -m cmrl.examples.main

Contributing

See CONTRIBUTING for details.