diff --git a/README.md b/README.md
index fa47057e12fe31914b8006a2acf6869f57e4e13c..13aabfd2ca85e1ac4553eac9b6b4a3d3afbdaf8b 100644
--- a/README.md
+++ b/README.md
@@ -12,6 +12,11 @@
 set_action_size_reduced()
 ```
 action space. The reduced action space just removes DO_NOTHING.
 
+---
+The policy is based on the FastTreeObs from the official starter kit of the
+NeurIPS 2020 Flatland Challenge, but the FastTreeObs in this repo is an extended version:
+[fast_tree_obs.py](./utils/fast_tree_obs.py)
+
 ---
 Have a look into the [run.py](./run.py) file. There you can select using PPO or DDDQN as RL agents.
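
The "reduced action space just removes DO_NOTHING" remark from the README can be sketched as follows. This is a minimal illustration, not this repo's implementation: the enum below is a stand-in mirroring Flatland's `RailEnvActions` values, and the `map_action` helper is hypothetical.

```python
from enum import IntEnum

# Stand-in mirroring flatland.envs.rail_env.RailEnvActions (assumption,
# not imported from the repo).
class RailEnvActions(IntEnum):
    DO_NOTHING = 0
    MOVE_LEFT = 1
    MOVE_FORWARD = 2
    MOVE_RIGHT = 3
    STOP_MOVING = 4

# Reduced action space: the agent chooses among 4 actions; DO_NOTHING is dropped.
REDUCED_ACTIONS = [a for a in RailEnvActions if a != RailEnvActions.DO_NOTHING]

def map_action(reduced_index: int) -> RailEnvActions:
    """Map an index from the reduced action space back to a full RailEnv action
    (hypothetical helper for illustration)."""
    return REDUCED_ACTIONS[reduced_index]
```

With this mapping, a policy head of size 4 still produces valid environment actions, e.g. `map_action(0)` yields `RailEnvActions.MOVE_LEFT`.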