Quick introduction on how to train a simple DQN agent using Flatland and PyTorch. At the end of this tutorial you should be able to train a single agent to navigate in Flatland.
We use the `training_navigation.py` file ([here](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py)) to train a simple agent with the tree observation to solve the navigation task.
## Actions in Flatland
Flatland is a railway simulation, so the actions of an agent are restricted by the railway network. This means that in many cases not all actions are valid.
The possible actions of an agent are:
- 0 *Do Nothing*: If the agent is moving, it continues moving; if it is stopped, it stays stopped.
- 1 *Deviate Left*: This action is only valid at cells where the agent can change direction towards the left. If the action is chosen, the left transition is taken and the agent's orientation is rotated to the left. If the agent is stopped, this action will cause it to start moving in any cell where forward or left is allowed.
- 2 *Go Forward*: This action will start the agent when stopped. At switches this will choose the forward direction.
- 3 *Deviate Right*: Exactly the same as *Deviate Left*, but for right turns.
- 4 *Stop*: This action causes the agent to stop; this is necessary to avoid conflicts in multi-agent setups (not needed for navigation).
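To make this concrete, here is a minimal sketch of issuing an action (assuming an already constructed `RailEnv` instance named `env` with a single agent):

```
# Actions are passed to the environment as a dict: agent handle -> action code
action_dict = {0: 2}  # agent 0: Go Forward

# One simulation step; `done` holds a flag per agent plus the key '__all__'
next_obs, all_rewards, done, info = env.step(action_dict)
```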
## Tree Observation
Flatland offers three basic observations from the beginning. We encourage you to develop your own observations that are better suited for this specific task.
For the navigation training we start with the Tree Observation as agents will learn the task very quickly using this observation.
The tree observation exploits the fact that a railway network is a graph and thus the observation is only built along allowed transitions in the graph.
Here is a small example of a railway network with an agent in the top left corner. The tree observation is built by following the allowed transitions for that agent.

As we move along the allowed transitions we build up a tree where a new node is created at every cell where the agent has different possibilities (Switch) or the target is reached.
It is important to note that the tree observation is always built according to the orientation of the agent at a given node. This means that each node always has 4 branches coming from it in the directions *Left, Forward, Right and Backward*. These are illustrated with different colors in the figure below. The tree is built from the example rail above. Nodes where there are no possibilities are filled with `-inf` and are not all shown here for simplicity. The tree, however, always has the same number of nodes for a given tree depth: at depth 2, for example, 1 + 4 + 16 = 21 nodes.
Each node is filled with information gathered along the path to the node. Currently each node contains the following features:
- 1: if the agent's own target lies on the explored branch, the current distance from the agent in number of cells is stored.
- 2: if another agent's target is detected, the distance in number of cells from the current agent position is stored.
- 3: if another agent is detected, the distance in number of cells from the current agent position is stored.
- 4: possible conflict detected: if another agent is predicted to pass through this cell at the same time as our agent, we store the distance in number of cells from the current agent position (`tot_dist`); 0 means no other agent is expected to reserve the same cell at a similar time. The prediction is supplied by the predictor attached to the observation builder (see the sketch after this list).
- 5: if a switch that is not usable by the agent is detected (i.e. a cell that is a switch only for trains traveling in other directions, for example where another track merges into the agent's track), we store the distance.
- 6: This feature stores the distance in number of cells from the current node to the next branching point, i.e. the length of this branch.
- 7: minimum distance from the node to the agent's target, given the direction of the agent, if this path is chosen (this value is read from the environment's precomputed distance map).
- 8: agent in the same direction
  - n = number of agents present in the same direction (possible future use: number of other agents in the same direction in this branch)
  - 0 = no agent present in the same direction
- 9: agent in the opposite direction
  - n = number of agents present in the opposite direction
  - 0 = no agent present in the opposite direction
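The possible-conflict feature (4) relies on position and time forecasts from a predictor attached to the observation builder. Here is a minimal sketch of constructing such a tree observation (the import paths follow the `flatland-rl` package; verify them against your installed version):

```
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv

# Depth-2 tree observation; the predictor forecasts where other agents will be,
# which feeds the possible-conflict feature (4)
tree_observation = TreeObsForRailEnv(max_depth=2,
                                     predictor=ShortestPathPredictorForRailEnv())
```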
For training purposes the tree is flattened into a single array.
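To illustrate what flattening means (a hypothetical sketch, not the actual baseline code): the feature vectors of all nodes are concatenated in a fixed depth-first order, and missing branches are padded with `-inf` so that every tree of a given depth yields a vector of the same length:

```
import numpy as np

FEATURES_PER_NODE = 9  # the nine features listed above

def flatten_tree(node, depth):
    """Depth-first flattening in the fixed branch order Left, Forward,
    Right, Backward. `node` is a hypothetical object with a `features`
    sequence and a `children` dict keyed by direction."""
    if node is None:
        # missing branch: pad a full subtree of this depth with -inf
        n_nodes = sum(4 ** i for i in range(depth + 1))
        return [-np.inf] * (n_nodes * FEATURES_PER_NODE)
    flat = list(node.features)
    if depth > 0:
        for direction in ("L", "F", "R", "B"):
            flat += flatten_tree(node.children.get(direction), depth - 1)
    return flat
```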
To set up an appropriate agent we need the state and action space sizes. From the discussion above about the tree observation we end up with:
```
# Given the depth of the tree observation and the number of features per node we get the following state_size
```
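A minimal sketch of this computation (the tree depth of 2 is an example value; both it and the nine features per node must match your observation builder):

```
tree_depth = 2          # example value; must match the observation builder
features_per_node = 9   # the nine node features described above

# A full 4-ary tree of depth d has sum(4^i, i = 0..d) nodes
nr_nodes = sum(4 ** i for i in range(tree_depth + 1))  # 1 + 4 + 16 = 21
state_size = features_per_node * nr_nodes              # 9 * 21 = 189

# Flatland's action space consists of the 5 discrete actions listed above
action_size = 5
```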
In the `training_navigation.py` file you will find further variables that we initiate in order to keep track of the training progress.
Below you see example code to train an agent. It is important to note that we reshape and normalize the tree observation provided by the environment to facilitate training.
To do so, we use the utility functions `split_tree(tree=np.array(obs[a]), num_features_per_node=features_per_node, current_depth=0)` and `norm_obs_clip()`. Feel free to modify the normalization as you see fit.
```
...
```
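As a rough sketch of this preprocessing (assuming, as suggested by the call above, that `split_tree` returns the tree data, the distance information, and the agent data as separate sub-arrays, and that `norm_obs_clip` rescales and clips an array; verify against the actual utility module):

```
import numpy as np

# Build an agent-specific, normalized observation for every agent
agent_obs = [None] * env.get_num_agents()
for a in range(env.get_num_agents()):
    data, distance, agent_data = split_tree(tree=np.array(obs[a]),
                                            num_features_per_node=features_per_node,
                                            current_depth=0)
    # normalize each part separately, then concatenate into one flat vector
    data = norm_obs_clip(data)
    distance = norm_obs_clip(distance)
    agent_data = np.clip(agent_data, -1, 1)
    agent_obs[a] = np.concatenate((data, distance, agent_data))
```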
We now use the normalized `agent_obs` for our training loop:
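For orientation, here is a condensed sketch of such a loop (assuming a DQN `agent` exposing `act(state, eps)` and `step(state, action, reward, next_state, done)`, and epsilon-greedy parameters `eps`, `eps_end`, `eps_decay`; the names follow the baselines repo but are illustrative, not verbatim):

```
for episode in range(1, n_episodes + 1):
    obs = env.reset()
    # ... build the normalized agent_obs from obs as shown above ...
    for step in range(max_steps):
        # choose an epsilon-greedy action for every agent
        action_dict = {a: agent.act(agent_obs[a], eps=eps)
                       for a in range(env.get_num_agents())}
        next_obs, all_rewards, done, _ = env.step(action_dict)
        # ... normalize next_obs into next_agent_obs as shown above ...
        for a in range(env.get_num_agents()):
            agent.step(agent_obs[a], action_dict[a], all_rewards[a],
                       next_agent_obs[a], done[a])
        agent_obs = next_agent_obs
        if done['__all__']:
            break
    eps = max(eps_end, eps_decay * eps)  # slowly reduce exploration
```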