This repository contains reinforcement learning baselines for the [NeurIPS 2020 Flatland Challenge](https://www.aicrowd.com/challenges/neurips-2020-flatland-challenge/).
Experiments
---
## Provided baselines
```{note}
Looking for something simpler to start with? We also provide a simpler Dueling Double DQN method implemented using PyTorch, without relying on RLlib: **https://gitlab.aicrowd.com/flatland/flatland-examples**
```
### RL Methods
- Ape-X
- PPO
- CCPPO
- Pure Imitation Learning: MARWIL
- Mixed IL/RL: DQfD
### Custom observations
- Density observations
- Local conflict observations
### Tricks
- Action skipping
- Action masking
## Organisation
Experiments consist of one or more RLlib YAML config files alongside a Markdown file containing results, plots, and a detailed description of the methodology.
...
...
All files are stored in an experiment folder under `experiments/<env-name>/<experiment-name>`.
This observation allows combining multiple observations by specifying them in the run config.
```
### 💡 The idea
Provide a simple way to combine multiple observations.
### 🗂️ Files and usage
The observation is defined in `neurips2020-flatland-baselines/envs/flatland/observations/combined_obs.py`.
To combine multiple observations, do not put the observation settings directly under `observation_config`; instead, use the names of the observations you want to combine as keys, each mapping to that observation's config (see the sketch below).
An example config is located in `neurips2020-flatland-baselines/baselines/combined_tree_local_conflict_obs/sparse_small_apex_maxdepth2_spmaxdepth30.yaml` and can be run with the repository's training script.
This observation does not itself generate any information for the agent; it simply concatenates the outputs of the specified observations.
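As a rough sketch, such a config might look as follows. The sub-observation names (`tree`, `localConflict`) and the parameter values are assumptions inferred from the example config's filename, so refer to the linked YAML for the exact keys:

```yaml
# Hypothetical excerpt of an experiment config combining two observations (not copied from the repository).
env_config:
  observation: combined          # assumed registration name of the combined observation
  observation_config:
    tree:                        # config of the first sub-observation
      max_depth: 2
      shortest_path_max_depth: 30
    localConflict:               # config of the second sub-observation
      max_depth: 2
      shortest_path_max_depth: 30
```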
### 📈 Results
Since this observation is meant as a helper to easily explore combinations of observations, there is no meaningful baseline. However, we did a run combining tree and local conflict observations as a sanity check (see the link below).
### 🔗 Links
* [W&B report for test run](https://app.wandb.ai/masterscrat/flatland/reports/Tree-and-Conflict-Obs-|-sparse-small_v0--VmlldzoxNTc4MzU)
This is a global observation that provides an agent with information about its own and the other agents' predicted paths. The paths are predicted from each agent's shortest path to its respective target. The information is encoded into a density map.
```
### 💡 The idea
The density observation is based on the idea of representing every agent's path to its target in a discrete map of the environment, assigning each location (cell) a value that encodes whether and when the cell will be occupied. For simplicity, we assume that each agent follows the shortest path to its target and do not consider alternative paths. The individual values along the agents' shortest paths are combined into a "density" for each cell. For example, if all agents were to occupy the same cell at the same time step, the density for that cell would be very high; if they used the same cell at different time steps, the density would be lower. The density map therefore potentially allows the agents to learn from the (projected) cell occupancy distribution.
### 🗂️ Files and usage
The observation is defined in `neurips2020-flatland-baselines/envs/flatland/observations/global_density_obs.py`, and the model used in the example in `neurips2020-flatland-baselines/models/global_dens_obs_model.py`.
The observation can be configured with the following parameters:
* `width` and `height`: must correspond to the shape of the environment
* `max_t`: maximum number of time steps each agent's path is predicted for
* `encoding`: defines how the time information is factored into the density value (2d options: `exp_decay`, `lin_decay`, `binary`; 3d option: `series`; see the next section for details)
An example config is located in `neurips2020-flatland-baselines/baselines/global_density_obs/sparse_small_apex_expdecay_maxt1000.yaml` and can be run with the repository's training script; a sketch of the observation section of such a config is shown below.
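The following excerpt is illustrative only: the surrounding keys, the assumed observation name `density`, and the concrete values are assumptions based on the parameter list above and the example config's filename.

```yaml
# Hypothetical excerpt of a density-observation experiment config (not copied from the repository).
env_config:
  observation: density          # assumed registration name of the global density observation
  observation_config:
    width: 25                   # must match the environment's width
    height: 25                  # must match the environment's height
    max_t: 1000                 # how many steps ahead each agent's path is predicted
    encoding: exp_decay         # one of: exp_decay, lin_decay, binary, series
```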
The observation for each agent consists of two arrays representing the cells of the environment. The first array contains the density values for the agent itself, and the second one the mean of the other agents' values for each cell. The arrays are either two or three dimensional depending on the encoding.
The idea behind the `encoding` parameter is to provide a way to compress the space and time information into a 2d representation. However, it is possible to get a 3d observation, with a separate 2d density map for each time step, by using the option `series` (for time series) as the encoding. In this case, a binary representation of the individual agent occupancies is used.
The other options use a function of the time step *t* and the maximum time step *max_t* to determine the density value:
* exp_decay: e^(-t / max_t^(1/2))
* lin_decay: (max_t - t) / max_t
* binary: 1 (the cell is simply marked as occupied, regardless of *t*)
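Putting the pieces together, a compact way to express the 2d case (the notation here is ours, not the repository's): let *t_i(c)* be the predicted time step at which agent *i* occupies cell *c* on its shortest path, and let *f* be one of the encodings above. Then

```math
d_i(c) = \begin{cases}
  f\bigl(t_i(c)\bigr) & \text{if agent } i\text{'s predicted path crosses cell } c \text{ at } t_i(c) \le \mathrm{max\_t} \\
  0 & \text{otherwise}
\end{cases}
```

and agent *i*'s observation is the pair *(d_i, mean over the other agents' d_j)*, matching the two-array structure described above.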
We created a custom model (`GlobalDensObsModel`) for this observation that uses a convolutional neural network to process it. For the experiments, we used the network architecture from the IMPALA paper (see the links section).
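In an RLlib YAML config, a custom model is typically selected via the `custom_model` key under `model`. A minimal sketch, assuming the model is registered under its class name (the actual registration name may differ; check the example config above):

```yaml
model:
  custom_model: GlobalDensObsModel   # assumed registration name of the custom CNN model
```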
### 📈 Results
We trained the agents with the different encoding options and different values for `max_t` using Ape-X (see the links section). However, we didn't search systematically or exhaustively for the best settings.
The best runs achieved around 45% mean completion on the sparse, small Flatland environment with `max_t = 1000` and `encoding = exp_decay`. The mean completion rate is considerably lower than with the tree observation, but it shows that learning from global observations is possible and can inform approaches that combine local, tree, and global observations.
More information on the runs can be found in the Weights & Biases report linked below.
### 🔗 Links
* [IMPALA Paper – IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures (Espeholt et al.)](https://arxiv.org/abs/1802.01561)
* [Ape-X Paper – Distributed Prioritized Experience Replay (Horgan et al.)](https://arxiv.org/abs/1803.00933)
* [W&B report for training runs](https://app.wandb.ai/masterscrat/flatland/reports/Density-Obs-|-sparse-small_v0--VmlldzoxMTYxMDE)