# Environment Wrappers
> We provide various environment wrappers to work with both the rail env and the petting zoo interface.
## Background
These wrappers change certain environment behaviors, which can help improve reinforcement learning training.
## Supported Inbuilt Wrappers
We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require several env properties that are only created on environment reset, so the rail env must be reset before a wrapper is applied. To use a wrapper, simply pass it the reset rail env. Code samples are shown below for each wrapper.
### ShortestPathAction Wrapper
To use the ShortestPathActionWrapper, simply wrap the rail env as follows:
```python
from flatland.contrib.wrappers.flatland_wrappers import ShortestPathActionWrapper  # import path may vary across flatland-rl versions

rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)
```
The shortest path action wrapper maps the existing action space onto 3 actions: Shortest Path (`0`), Next Shortest Path (`1`) and Stop (`2`). Hence, the predicted action must always be one of these three actions (0, 1 or 2). To route all agents along their shortest paths, pass `0` as the action.
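For instance, routing every agent along its shortest path then reduces to passing `0` for each agent. A minimal sketch (it assumes the wrapper forwards `get_num_agents()` and `step()` to the underlying env):

```python
# Step the wrapped env, sending every agent down its shortest path.
done = {'__all__': False}
while not done['__all__']:
    action_dict = {handle: 0 for handle in range(rail_env.get_num_agents())}  # 0 = shortest path
    obs, rewards, done, info = rail_env.step(action_dict)
```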
### SkipNoChoice Wrapper
To use the SkipNoChoiceCellsWrapper, simply wrap the rail env as follows:
```python
from flatland.contrib.wrappers.flatland_wrappers import SkipNoChoiceCellsWrapper  # import path may vary across flatland-rl versions

rail_env.reset(random_seed=1)
rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
```
================
Local Evaluation
================
This document explains how to locally evaluate your submissions before making
an official submission to the competition.
Requirements
------------
* **flatland-rl** : We expect that you have `flatland-rl` installed by following the instructions in :doc:`installation`.
* **redis** : Additionally you will also need to have `redis installed <https://redis.io/topics/quickstart>`_ and **should have it running in the background.**
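You can quickly check that the redis server is up and reachable, for example with:

.. code-block:: console

    $ redis-cli ping
    PONG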
Test Data
---------
* **test env data** : You can `download and untar the test-env-data <https://www.aicrowd.com/challenges/flatland-challenge/dataset_files>`_, at a location of your choice, let's say `/path/to/test-env-data/`. After untarring the folder, the folder structure should look something like:
.. code-block:: console
    .
    └── test-env-data
        ├── Test_0
        │   ├── Level_0.pkl
        │   └── Level_1.pkl
        ├── Test_1
        │   ├── Level_0.pkl
        │   └── Level_1.pkl
        ├── ...
        ├── Test_8
        │   ├── Level_0.pkl
        │   └── Level_1.pkl
        └── Test_9
            ├── Level_0.pkl
            └── Level_1.pkl
Evaluation Service
------------------
* **start evaluation service** : Then you can start the evaluator by running:
.. code-block:: console
flatland-evaluator --tests /path/to/test-env-data/
RemoteClient
------------
* **run client** : Some `sample submission code can be found in the starter-kit <https://github.com/AIcrowd/flatland-challenge-starter-kit/>`_, but before you can run your code locally using `FlatlandRemoteClient`, you will have to set the `AICROWD_TESTS_FOLDER` environment variable to the location where you previously untarred the folder with the test-env-data:
.. code-block:: console
export AICROWD_TESTS_FOLDER="/path/to/test-env-data/"
# or on Windows :
#
# set AICROWD_TESTS_FOLDER "\path\to\test-env-data\"
# and then finally run your code
python run.py
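For reference, a minimal client loop might look as follows. This is a sketch modeled after the starter kit; `FlatlandRemoteClient` is provided by `flatland.evaluators.client`, and `my_controller` stands in for your own policy:

.. code-block:: python

    from flatland.evaluators.client import FlatlandRemoteClient
    from flatland.envs.observations import TreeObsForRailEnv

    remote_client = FlatlandRemoteClient()

    def my_controller(obs, env):
        # stand-in policy: replace with your own
        return {a: 2 for a in range(env.get_num_agents())}

    while True:
        observation, info = remote_client.env_create(
            obs_builder_object=TreeObsForRailEnv(max_depth=2))
        if not observation:
            break  # all test episodes have been evaluated
        while True:
            action = my_controller(observation, remote_client.env)
            observation, all_rewards, done, info = remote_client.env_step(action)
            if done['__all__']:
                break
    print(remote_client.submit())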
# Core Specifications
## Environment Class Overview
The Environment class contains all necessary functions for the interactions between the agents and the environment. The base Environment class is derived from rllib.env.MultiAgentEnv (https://github.com/ray-project/ray).
Agents are identified by agent ids (handles).
Examples:

    >>> obs, info = env.reset()
    >>> print(obs)
    {
        "train_0": [2.4, 1.6],
        ...
    }
## Intro
In human-readable language, the specifications provide:
- a code base overview (hand-drawn concept)
- the key concepts (generators, envs) and how they are linked
- links to the relevant code base
![Overview](img/UML_flatland.png)
[Diagram Source](https://confluence.sbb.ch/x/pQfsSw)
=============================
Observation and Action Spaces
=============================
This is an introduction to the three standard observations and the action space of **Flatland**.
Action Space
============
Flatland is a railway simulation. Thus the actions of an agent are strongly limited to the railway network. This means that in many cases not all actions are valid.
The possible actions of an agent are:

- ``0`` **Do Nothing**: If the agent is moving, it continues moving; if it is stopped, it stays stopped. Special case: if the agent is at a dead-end, this action will result in the train turning around.
- ``1`` **Deviate Left**: If the agent is at a switch with a transition to its left, the agent will turn left. Otherwise, the action has no effect.
- ``2`` **Go Forward**: This action will start the agent when stopped, and dictates going forward at switches.
- ``3`` **Deviate Right**: The same as deviate left, but for right turns.
- ``4`` **Stop**: This action causes the agent to stop.
Observation Spaces
==================
In the **Flatland** environment we have included three basic observations to get started. The figure below illustrates the observation range of the different basic observation: ``Global``, ``Local Grid`` and ``Local Tree``.
.. image:: https://i.imgur.com/oo8EIYv.png
Global Observation
------------------
Gives a global observation of the entire rail environment.
The observation is composed of the following elements:
We encourage you to enhance this observation with any layer you think might help.
It would also be possible to construct a global observation for a super agent that controls all agents at once.
Local Grid Observation
----------------------
Gives a local observation of the rail environment around the agent.
The observation is composed of the following elements:
Be aware that this observation **does not** contain any clues about target locations if they lie outside the observation range.
We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps.
Tree Observation
----------------
The tree observation is built by exploiting the graph structure of the railway network. The observation is generated by spanning a **4 branched tree** from the current position of the agent. Each branch follows the allowed transitions (backward branch only allowed at dead-ends) until a cell with multiple allowed transitions is reached. Here the information gathered along the branch is stored as a node in the tree.
The figure below illustrates how the tree observation is built:
The right side of the figure shows the resulting tree of the railway network on the left.
Node Information
----------------
Each node is filled with information gathered along the path to the node. Currently each node contains 9 features:
- 1: if own target lies on the explored branch the current distance from the agent in number of cells is stored.
# Railway Specifications
## Overview
Flatland is usually a two-dimensional environment intended for multi-agent problems, in particular it should serve as a benchmark for many multi-agent reinforcement learning approaches.
The environment can host a broad array of diverse problems, reaching from disease spreading to train traffic management.
This documentation illustrates the dynamics and possibilities of Flatland environment and introduces the details of the train traffic management implementation.
## Environment
Before describing Flatland itself, let us first define the terms used in this specification. Flatland is a grid-like n-dimensional space of arbitrary size. A cell is the elementary element of the grid, defined as a location where objects can be placed. An agent is an entity that can move within the grid and must solve tasks. An agent can move in any direction along well-defined transitions from cell to cell. The cell an agent occupies must have enough capacity to hold it; every agent reserves exactly one unit of capacity (or resource). The capacity of a cell is usually one, so usually only one agent can occupy a given cell at a time. An agent's movement possibilities can be restricted by limiting the allowed transitions.
Flatland is a discrete time simulation. A discrete time simulation performs all actions with constant time step. In Flatland the simulation step moves the time forward in equal duration of time. At each step the agents can choose an action. For the chosen action the attached transition will be executed. While executing a transition Flatland checks whether the requested transition is valid. If the transition is valid the transition will update the agents position. In case the transition call is not allowed the agent will not move.
In general, each cell has exactly one cell type attached to it. The cell type defines the allowed transitions for all agents.

Flatland supports many different types of agents, so the cell type can be further specialized per agent type. The allowed transitions for an agent at a given cell are then defined by both the cell type and the agent's type.

For each agent type, Flatland can define a different action space.
### Grid
A rectangular grid of integer shape (dim_x, dim_y) defines the spatial dimensions of the environment.
Within this documentation we use North, East, West, South as orientation indicators.
Cells are enumerated starting from NW, East-West axis is the second coordinate, North-South is the first coordinate as commonly used in matrix notation.
Two cells $`i`$ and $`j`$ ($`i \neq j`$) are considered neighbors when the Euclidean distance between them satisfies $`\|\vec{x_i}-\vec{x_j}\| \leq \sqrt{2}`$. This means that the grid does not wrap around as if on a torus. (Two cells are considered neighbors when they share one edge or one node.)
![cell_table](https://drive.google.com/uc?export=view&id=109cD1uihDvTWnQ7PPTxC9AiNphlsY92r)
For each cell the allowed transitions to all neighboring 4 cells are defined. This can be extended to include transition probabilities as well.
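In the flatland implementation these per-cell transitions are encoded as 16-bit patterns (4 incoming orientations × 4 outgoing directions). A sketch of querying them, assuming the `get_transitions` accessor on the rail grid:

```python
# For the cell at (row, col) and an agent facing `direction`
# (0 = North, 1 = East, 2 = South, 3 = West), get_transitions returns a
# 4-tuple of 0/1 flags for leaving towards North, East, South and West.
possible_transitions = env.rail.get_transitions(row, col, direction)
num_choices = sum(possible_transitions)  # more than 1 means a decision point
```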
### Tile Types
#### Railway Grid
Each cell within the simulation grid has a distinct tile type, which in turn limits the movement possibilities of an agent through the cell. For railway-specific problems, 8 basic tile types suffice to describe a rail network. As a general fact, in a railway network, whenever a navigation choice must be taken, at most two options are available.

The following image gives an overview of the eight basic tile types. These can be rotated in steps of 45° and mirrored along the North-South or East-West axis. Please refer to Appendix A for a complete list of tiles.
![cell_types](https://drive.google.com/uc?export=view&id=164iowmfRQ9O34hquxLhO2xxt49NE473P)
As a general consistency rule, each connection out of a tile must be matched by a connection of the neighboring tile.
![consistency_rule](https://drive.google.com/uc?export=view&id=1iaMIokHZ9BscMJ_Vi9t8QX_-8DzOjBKE)
In the left picture above, there is an inconsistency at the eastern end of cell (3,2), since cell (3,2) has no valid neighbor there. In the right picture, cell (3,2) contains a dead-end, which leaves no unconnected transitions.

Case 0 represents a wall; no agent can occupy the tile at any time.
Case 1 represents a passage through the tile. While on the tile the agent can make no navigation decision; it can only continue (pass on to the next connected tile), wait, or move backwards (to the tile visited before).
Case 2 represents a simple switch: when coming from the top position (south in the example), a navigation choice must be taken.
Case 3 can be seen as a superposition of Case 1. As with any other tile, at most one agent can occupy the cell at a given time.

Case 4 represents a single-slip switch. In the example, a navigation choice is possible when coming from West or South.

In Case 5, a navigation choice must be taken when coming from any direction.

Case 7 represents a dead-end; only stopping or backwards motion is possible when an agent occupies this cell.
#### Tile Types of Wall-Based Cell Games (Theseus and Minotaur's Puzzle, Labyrinth Game)
The Flatland approach can also be used to describe a variety of cell-based logic games. Without going into detail, it is worth noting that such games are usually visualized using a cell grid with walls describing forbidden transitions (a negative formulation).
![minotaurus](https://drive.google.com/uc?export=view&id=1WbU6YGopLKqAjVD6-r9UhCIzDfLisb5U)
Left: wall-based grid definition (negative definition); right: lane-based grid definition (positive definition).
# Train Traffic Management
## Problem Definition
Additionally, due to the dynamics of train traffic, each transition probability is symmetric in this environment. This means that neighboring cells will always have the same transition probability to each other.
Furthermore, each cell is exclusive and can only be occupied by one agent at any given time.
## Observations
In this early stage of the project it is very difficult to come up with the observation space necessary to solve all train-related problems. Given our early experiments, we therefore propose different observation methods and hope to investigate further options through the crowdsourcing challenge. Below we compare global observations with local observations and discuss the differences in performance and flexibility.
### Global Observation
Global observations, specifically on a grid like environment, benefit from the vast research results on learning from pixels and the advancements in convolutional neural network algorithms. The observation can simply be generated from the environment state and not much additional computation is necessary to generate the state.
However, we run into problems when scalability and flexibility become important requirements.
Given the complexity of real-world railway networks (especially in Switzerland), we do not believe that a global observation is suited for this problem.
### Local Observation
Given that scalability and speed are the main requirements for our use cases, local observations offer an interesting novel approach. Local observations require some additional computation to be extracted from the environment state, but could in theory be performed in parallel for each agent.
With early experiments (presented at GTC, details below) we could show that even local observations are sufficient for learning agents.
Below we highlight two different forms of local observations and elaborate on their benefits.
#### Local Field of View
This form of observation is very similar to the global view approach, in that it consists of a grid like input. In this setup each agent has its own observation that depends on its current location in the environment.
Given an agent's location, the observation is simply an $`n \times m`$ grid around the agent.
![local_grid](https://drive.google.com/uc?export=view&id=1kZzinMOs7hlPaSJJeIiaQ7lAz2erXuHx)
#### Tree Search
From our past experiences and the nature of railway networks (they are a graph) it seems most suitable to use a local tree search as an observation for the agents.
_Figure 3: A local tree search moves along the allowed transitions, originating from the agent's position._
We have gained some insights into using and aggregating the information along the tree search. This should be part of the early investigation while implementing Flatland. One possibility would also be to leave this up to the participants of the Flatland challenge.
### Communication
Given the complexity and the high interdependence of the multi-agent system, a form of communication might be necessary. This needs to be investigated under the following constraints:
* Communication must converge in a feasible time
* Communication…
Depending on the game configuration, every agent can be informed about the positions of the other agents present within its observation range. For a local observation space, the agent knows the distance to the next agent (by agent type) in each direction. If no agent is present, the distance can simply be -1 or null.
### Action Negotiation
In order to avoid illicit situations (for example, agents crashing into each other), the intended action of each agent in the observation range is made known. Based on the known movement intentions, the agents must generate new movement intentions. This is called a negotiation round. After a fixed number of negotiation rounds, the last intended action of each agent is executed. An illicit situation results in ending the game with a fixed low reward.
## Actions
### Navigation
An agent can be located at any cell except case 0 cells. It can move along the rails to another unoccupied cell, or it can simply wait where it is currently located.
Flatland is a discrete time simulation. A discrete time simulation performs all actions in a discrete time with constant time step. In Flatland the simulation step is fixed and the time moves forward in equal duration of time. At each step every agent can choose an action. For the chosen action the attached transition will be executed. While executing a transition Flatland checks whether the requested transition is valid. If the transition is valid the transition will update the agents position. In case the transition call is not allowed the agent will not move.
If the agent calls an action and the attached transition is not allowed at the current cell, the agent will not move.
An agent can move with a definable maximum speed. The default and absolute maximum speed is one spatial unit per time step. If an agent is defined to move slower, it can take a navigation action only every N steps, with N being an integer. For the transition to be made, the same action must be taken N times consecutively. An agent can also have a maximum speed of 0, in which case it can never take a navigation step. This would be the case where an agent represents a good to be transported, which can never move on its own.
An agent can be defined to be picked up/dropped off by another agent, or to pick up/drop off another agent. When agent A is picked up by agent B, A is said to be linked to B. The linked agent loses all its navigation possibilities; in exchange, it inherits the position of the linking agent for as long as it is linked. Linking and unlinking between two agents is only possible if the participating agents have the same space-time coordinates at the moment of the linking or unlinking action.
### Transportation
In railways, the transportation of goods or passengers is essential, so agents can transport goods or passengers depending on their type. A freight train transports goods; a passenger train transports passengers only. The transportation capacity of both kinds of trains is limited: passenger trains have a maximum number of seats, and freight trains have a maximum tonnage.
Passengers can board or switch trains only at stations. Passengers are agents with traveling needs: a typical passenger wants to move from a starting location to a destination, and might do so by train or on foot. Consequently, a future Flatland must also support passenger movement (walking) on the grid, not only by train. The goal of a passenger is to reach its destination in an optimal manner; the quality of the journey is measured by the reward function.
Goods are transported only over the railway network. Goods are agents with transportation needs. They can start their transportation chain at any station, and each good has a destination station attached, which marks the end of its transportation. Once a good reaches its destination, it disappears (i.e. it leaves Flatland). Goods cannot move independently on the grid; they can only move by train, and they can switch trains at any station. The goal of the system is to find the right trains for each good so that a feasible transportation chain results. The quality of the chain is measured by the reward function.
## Environment Rules
* Depending on the cell type, a cell must have a given number of neighbouring cells of a given type.
* Agents related to each other through transport (one carries another) must be at the same place at the same time.
## Environment Configuration
The environment should allow for a broad class of problem instances. Thus the configuration file for each problem instance should contain:
For the train traffic the configurations should be as follows:
Cell types: Case 0 - 7
Agent Types allowed: Active Agents with Speed 1 and no goals, Passive agents with goals
Observation Type: Local, Targets known
It should be checked prior to solving the problem that the goal location of each agent can be reached.
## Reward Function
### Railway-specific Use-Cases
A first idea for a cost function with generic applicability is as follows. For each agent and each goal, sum up:

* The time step when the goal has been reached, if no target time is given in the goal.
* The absolute value of the difference between the target time and the arrival time of the agent.
An additional refinement, proven meaningful for situations where no target time is given, is to weight the longest arrival time higher than the sum of all arrival times.
### Further Examples (Games)
## Initialization
Given that we want a generalizable agent to solve the problem, training must be performed on a diverse training set. We therefore need a level generator which can create novel tasks to be solved in a reliable and fast fashion.
### Level Generator
Each problem instance can have its own level generator.
The output of the level generator should be:
* Initial rewards, positions and observations
## Railway Use Cases
In this section we define a few simple tasks related to railway traffic that we believe would be well suited for a crowdsourcing challenge. The tasks are ordered according to their complexity. The Flatland repo must at least support all these types of use cases.
### Simple Navigation
In order to onboard the broad reinforcement learning community, this task is intended as an introduction to the Railway@Flatland environment.
#### Task
A single agent is placed at an arbitrary (permitted) cell and is given a target cell (reachable by the rules of Flatland). The task is to arrive at the target destination in as few time steps as possible.
#### Actions
In this task an agent can perform transitions (max. 3 possibilities) or stop. Therefore, the agent can choose an action in the range $`a \in [0,4]`$.
#### Reward
The reward is -1 for each time step and 10 if the agent stops at the destination. We might add -1 for invalid moves to speed up exploration and learning.
#### Observation
If we choose a local observation scheme, we need to provide the agent with some information about the distance to its target. This could be achieved by a distance map, by using waypoints, or by providing a broad sense of direction to the agent.
### Multi Agent Navigation and Dispatching
This task is intended as a natural extension of the navigation task.
#### Task
A number of agents ($`n`$ agents) are placed at arbitrary (permitted) cells and given individual target cells (reachable by the rules of Flatland). The task is to arrive at the target destinations in as few time steps as possible as a group. This means that the goal is to minimize the longest path of *ALL* agents.
#### Actions
In this task an agent can perform transitions (max. 3 possibilities) or stop. Therefore, the agent can choose an action in the range $`a \in [0,4]`$.
#### Reward
The reward is -1 for each time step and 10 if all the agents stop at the destination. We can further punish collisions between agents and illegal moves to speed up learning.
#### Observation
If we choose a local observation scheme, we need to provide the agent with some information about the distance to its target. This could be achieved either by a distance map or by using waypoints.
The agents must see each other in their tree searches.
#### Previous learnings
Training an agent by itself first to understand the main task turned out to be beneficial.
It might be necessary to add the "intended" paths of each agent to the observation.
A communication layer might be necessary to improve agent performance.
### Multi Agent Navigation and Dispatching with Schedule
### Transport Chains (Transportation of goods and passengers)
## Benefits of Transition Model
Using a grid world with 8 transition possibilities to the neighboring cells constitutes a very flexible environment, which can model many different types of problems.
Considering the recent advancements in machine learning, this approach also allows to make use of convolutions in order to process observation states of agents. For the specific case of railway simulation the grid world unfortunately also brings a few drawbacks.
Most notably, the railway network only offers action choices at elements with more than two transition possibilities. Thus, if using a less dense graph than a grid, the railway network could be represented by a simpler graph. However, we believe that moving from a grid-like example where many transitions are allowed towards the railway network with fewer transitions is the simplest approach for the broad reinforcement learning community.
## Rail Generators and Schedule Generators
The separation between rail generator and schedule generator reflects the organisational separation in the railway domain:
- Infrastructure Manager (IM): is responsible for the layout and maintenance of tracks
- Railway Undertaking (RU): operates trains on the infrastructure
Usually, there is a third organisation which ensures discrimination-free access to the infrastructure for concurrent requests in a **schedule planning phase**.
However, in the **Flat**land challenge, we focus on the re-scheduling problem during live operations.
Technically,
```python
import collections
from typing import Any, Callable, Mapping, Optional, Tuple

from flatland.core.transition_map import GridTransitionMap

RailGeneratorProduct = Tuple[GridTransitionMap, Optional[Any]]
RailGenerator = Callable[[int, int, int, int], RailGeneratorProduct]
AgentPosition = Tuple[int, int]
Schedule = collections.namedtuple('Schedule', 'agent_positions '
'agent_directions '
'agent_targets '
'agent_speeds '
'agent_malfunction_rates '
'max_episode_steps')
ScheduleGenerator = Callable[[GridTransitionMap, int, Optional[Any], Optional[int]], Schedule]
```
We can then produce `RailGenerator`s by currying:
```python
def sparse_rail_generator(num_cities=5, num_intersections=4, num_trainstations=2, min_node_dist=20, node_radius=2,
                          num_neighb=3, grid_mode=False, enhance_intersection=False, seed=1):

    def generator(width, height, num_agents, num_resets=0):
        # generate the grid and (optionally) some hints for the schedule_generator
        ...
        return grid_map, {'agents_hints': {
            'num_agents': num_agents,
            'agent_start_targets_nodes': agent_start_targets_nodes,
            'train_stations': train_stations
        }}

    return generator
```
And, similarly, `ScheduleGenerator`s:
```python
def sparse_schedule_generator(speed_ratio_map: Mapping[float, float] = None) -> ScheduleGenerator:

    def generator(rail: GridTransitionMap, num_agents: int, hints: Any = None):
        # place agents:
        # - initial position
        # - initial direction
        # - (initial) speed
        # - malfunction
        ...
        return agents_position, agents_direction, agents_target, speeds, agents_malfunction

    return generator
```
Notice that the `rail_generator` may pass `agents_hints` to the `schedule_generator` which the latter may interpret.
For instance, the way the `sparse_rail_generator` generates the grid already determines each agent's start and target.
Hence, `rail_generator` and `schedule_generator` have to match if `schedule_generator` presupposes some specific `agents_hints`.
The environment's `reset` takes care of applying the two generators:
```python
def __init__(self,
             ...
             rail_generator: RailGenerator = random_rail_generator(),
             schedule_generator: ScheduleGenerator = random_schedule_generator(),
             ...
             ):
    self.rail_generator: RailGenerator = rail_generator
    self.schedule_generator: ScheduleGenerator = schedule_generator

def reset(self, regenerate_rail=True, regenerate_schedule=True):
    rail, optionals = self.rail_generator(self.width, self.height, self.get_num_agents(), self.num_resets)
    ...
    if regenerate_schedule:
        agents_hints = None
        if optionals and 'agents_hints' in optionals:
            agents_hints = optionals['agents_hints']
        self.agents_static = EnvAgentStatic.from_lists(
            self.schedule_generator(self.rail, self.get_num_agents(), hints=agents_hints))
```
### RailEnv Speeds
One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network.
The different speed profiles can be generated using the `schedule_generator`, where you can choose as many different speeds as you like.
Keep in mind that the *fastest speed* is 1, and all slower speeds must lie between 0 and 1.
For the submission scoring you can assume that there will be no more than 5 speed profiles.
Currently (as of **Flat**land 2.0), an agent keeps its speed over the whole episode.
Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated.
We **do not allow actions to change within a cell**.
This means that an agent can only choose an action when entering a cell (i.e. when its positional fraction is 0).
There are real railway-specific considerations, such as reserved blocks, that are similar to this behavior.
More importantly, though, we disabled in-cell actions to simplify the use of machine learning algorithms with the environment: if we allowed stop actions in the middle of cells, the controller would need to make many more observations, not only at cell changes.
(This is not set in stone and could be updated if the need arises.)
The chosen action is then executed when a step to the next cell is valid. For example:

- An agent enters a switch and chooses to deviate left. The agent's fractional speed is 1/4, thus the agent will take 4 time steps to complete its journey through the cell. On the 4th time step, the agent will leave the cell deviating left, as chosen at the entry of the cell.
- All actions chosen by the agent during its travel within a cell are ignored.
- Agents can make observations at any time step. Make sure to discard observations without any information. See this [example](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py) for a simple implementation.
- The environment checks whether an agent is allowed to move to the next cell only at the time of the transition to the next cell.
In your controller, you can check whether an agent requires an action by checking `info`:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['action_required'][a]:
        action_dict.update({a: ...})
```
Notice that `info['action_required'][a]` has some subtleties:

* if the agent breaks down (see stochasticity below) on entering the cell (no distance elapsed in the cell), an action is required as long as the agent is broken down;
  when it gets back to work, the action chosen just before will be taken and executed at the end of the cell; you may check whether the agent
  gets healthy again in the next step by checking `info['malfunction'][a] == 1`.
* when the agent has spent enough time in the cell, the next cell may not be free and the agent has to wait.
Later versions of **Flat**land might have varying speeds during episodes. Therefore, we return the agents' speeds; in your controller, you can get an agent's speed from the `info` returned by `step`:
```python
obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
    speed = info['speed'][a]
```
Notice that we do not guarantee that the speed will be recomputed at each step, but if it is not costly, we will return it at each step.
### RailEnv Malfunctioning / Stochasticity
Stochastic events may happen during the episodes.
This is very common in railway networks, where the initial plan usually needs to be rescheduled during operations, as minor events such as delayed departures from train stations, malfunctions of trains or infrastructure, or simply the weather lead to delayed trains.
We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
```python
# Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,   # Rate of malfunction occurrence
    'min_duration': 3,        # Minimal duration of malfunction
    'max_duration': 10        # Max duration of malfunction
}
```
The parameters are as follows:
- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that each agent can break.
- `malfunction_rate` is the mean rate of the Poisson process, in number of environment steps.
- `min_duration` and `max_duration` set the range of malfunction durations; they are sampled uniformly.
You can introduce stochasticity by simply creating the env as follows:
```python
env = RailEnv(
...
stochastic_data=stochastic_data, # Malfunction data generator
...
)
env.reset()
```
In your controller, you can check whether an agent is malfunctioning:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})
```

The example below combines a custom observation builder, several speed profiles and stochastic malfunctions in a single environment:

```python
# Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())

# Different agent types (trains) with different speeds
speed_ratio_map = {1.: 0.25,       # Fast passenger train
                   1. / 2.: 0.25,  # Fast freight train
                   1. / 3.: 0.25,  # Slow commuter train
                   1. / 4.: 0.25}  # Slow freight train
env = RailEnv(width=50,
height=50,
rail_generator=sparse_rail_generator(num_cities=20, # Number of cities in map (where train stations are)
num_intersections=5, # Number of intersections (no start / target)
num_trainstations=15, # Number of possible start/targets on map
min_node_dist=3, # Minimal distance of nodes
node_radius=2, # Proximity of stations to city center
num_neighb=4, # Number of connections to other cities/intersections
seed=15, # Random seed
grid_mode=True,
enhance_intersection=True
),
              schedule_generator=sparse_schedule_generator(speed_ratio_map),
number_of_agents=10,
stochastic_data=stochastic_data, # Malfunction data generator
obs_builder_object=tree_observation)
env.reset()
```
### Observation Builders
Every `RailEnv` has an `obs_builder`. The `obs_builder` has full access to the `RailEnv`.
The `obs_builder` is called in the `step()` function to produce the observations.
```python
env = RailEnv(
...
obs_builder_object=TreeObsForRailEnv(
max_depth=2,
predictor=ShortestPathPredictorForRailEnv(max_depth=10)
),
...
)
env.reset()
```
The two principal observation builders provided are global and tree.
#### Global Observation Builder
`GlobalObsForRailEnv` gives a global observation of the entire rail environment.
* transition map array with dimensions (env.height, env.width, 16), assuming 16 bits encoding of transitions.
* Two 2D arrays (map_height, map_width, 2) containing respectively the position of the given agent target and the positions of the other agents targets.
* A 3D array (map_height, map_width, 4) with
  - first channel containing the agent's position and direction
  - second channel containing the other agents' positions and directions
  - third channel containing agent malfunctions
  - fourth channel containing agent fractional speeds
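A quick way to sanity-check these shapes (a sketch; the ordering of the returned tuple is an assumption here, so verify it against `GlobalObsForRailEnv.get` in your version):

```python
from flatland.envs.observations import GlobalObsForRailEnv

env = RailEnv(..., obs_builder_object=GlobalObsForRailEnv())
obs, info = env.reset()

# Observation of agent 0; tuple ordering assumed.
transition_map, agents_state, targets = obs[0]
assert transition_map.shape == (env.height, env.width, 16)
```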
#### Tree Observation Builder
`TreeObsForRailEnv` computes the current observation for each agent.

The observation vector is composed of 4 sequential parts, corresponding to data from up to 4 possible
movements in a `RailEnv` ("up to" because only a subset of the possible transitions is allowed in a given cell).
The possible movements are sorted relative to the current orientation of the agent, rather than NESW as for
the transitions. The order is:
```console
[data from 'left'] + [data from 'forward'] + [data from 'right'] + [data from 'back']
```
Each branch data is organized as:
```console
[root node information] +
[recursive branch data from 'left'] +
[... from 'forward'] +
[... from 'right'] +
[... from 'back']
```
Each node is composed of 11 features:
1. if the agent's own target lies on the explored branch, the current distance from the agent in number of cells is stored.
2. if another agent's target is detected, the distance in number of cells from the agent's current location is stored.
3. if another agent is detected, the distance in number of cells from the current agent position is stored.
4. possible conflict detected: if another agent is predicted to pass along this cell at the same time as the agent, we store the predicted distance (`tot_dist`) in number of cells from the current agent position.
```console
0 = no other agent reserves the same cell at a similar time
```
5. if a switch that cannot be used by the agent is detected, we store the distance.
6. the distance in number of cells to the next branching (the current node).
7. the minimum distance from the node to the agent's target, given the direction of the agent, if this path is chosen.
8. agents in the same direction
```console
n = number of agents present in the same direction
    (possible future use: number of other agents in the same direction in this branch)
0 = no agent present in the same direction
```
9. agents in the opposite direction
```console
n = number of agents present in the opposite direction to the observing agent (i.e. potential conflict)
    (possible future use: number of other agents in the other direction in this branch, i.e. number of conflicts)
0 = no agent present in the opposite direction
```
10. malfunctioning/blocking agents
```console
n = number of time steps the observed agent remains blocked
```
11. slowest observed speed of an agent in the same direction
```console
1 if no agent is observed
minimum fractional speed otherwise
```
Missing/padding nodes are filled in with -inf (truncated).
Missing values in present nodes are filled in with +inf (truncated).
In case of the root node, the values are [0, 0, 0, 0, distance from agent to target, own malfunction, own speed]
In case the target node is reached, the values are [0, 0, 0, 0, 0].
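Since the tree is 4-branched, a tree of depth `d` contains `sum(4**i for i in range(d + 1))` nodes, and a flattened observation has that many nodes times the number of per-node features. A quick sanity check, assuming the 11 features listed above:

```python
def tree_obs_len(max_depth: int, features_per_node: int = 11) -> int:
    """Length of a flattened 4-branched tree observation."""
    num_nodes = sum(4 ** level for level in range(max_depth + 1))
    return num_nodes * features_per_node

print(tree_obs_len(2))  # 21 nodes * 11 features = 231
```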
### Predictors
Predictors make predictions on future agents' moves based on the current state of the environment.
They are decoupled from observation builders in order to encapsulate the functionality and make it reusable.
For instance, `TreeObsForRailEnv` optionally uses the predicted trajectories while exploring
the branches of an agent's future moves to detect future conflicts.
The general call structure is as follows:
```python
RailEnv.step()
-> ObservationBuilder.get_many()
-> self.predictor.get()
self.get()
self.get()
...
```
### Maximum number of allowed time steps in an episode
Whenever the schedule within RailEnv is generated, the maximum number of allowed time steps in an episode is calculated
according to the following formula:
```python
RailEnv._max_episode_steps = timedelay_factor * alpha * (env.width + env.height + ratio_nr_agents_to_nr_cities)
```
where the following default values are used: `timedelay_factor=4`, `alpha=2` and `ratio_nr_agents_to_nr_cities=20`.
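For example, with the default values on a 50×50 grid this gives:

```python
timedelay_factor, alpha, ratio_nr_agents_to_nr_cities = 4, 2, 20
width = height = 50
max_episode_steps = timedelay_factor * alpha * (width + height + ratio_nr_agents_to_nr_cities)
print(max_episode_steps)  # 4 * 2 * (50 + 50 + 20) = 960
```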
If participants want to use their own formula, they have to overwrite the method `compute_max_episode_steps()` of the class `RailEnv`.
# Rendering Specifications
## Scope
This doc specifies the software to meet the requirements in the Visualization requirements doc.
## References
- [Visualization Requirements](visualization)
- [Core Spec](./core)
## Interfaces
### Interface with Environment Component
- Environment produces the Env Snapshot data structure (TBD)
- Renderer reads the Env Snapshot
- Or, render frames without blocking environment
- Render frames in separate process / thread
#### Environment Snapshot
### Data Structure
A definition of the data structure is to be provided in the Core requirements or Interfaces doc.
Top-level dictionary
- Tree-based observation
- TBD
### Existing Tools / Libraries
1. Pygame
1. Very easy to use. Like dead simple to add sprites etc. [Link](https://studywolf.wordpress.com/2015/03/06/arm-visualization-with-pygame/)
2. No inbuilt support for threads/processes. Does get faster if using pypy/psyco.
2. PyQt
1. Somewhat simple, a little more verbose to use the different modules.
2. Multi-threaded via QThread! Yay! (Doesn’t block main thread that does the real work), [Link](https://nikolak.com/pyqt-threading-tutorial/)
#### How to structure the code
1. Define draw functions/classes for each primitive
1. Primitives: Agents (Trains), Railroad, Grass, Houses etc.
2. Background. Initialize the background before starting the episode.
1. Static objects in the scenes, directly draw those primitives once and cache.
#### Proposed Interfaces
To-be-filled
### Technical Graphics Considerations
#### Overlay dynamic primitives over the background at each time step
No point trying to figure out changes. Need to explicitly draw every primitive anyways (that’s how these renders work).
# Visualization
![logo](https://drive.google.com/uc?export=view&id=1rstqMPJXFJd9iD46z1A5Rus-W0Ww6O8i)
## Introduction & Scope
Broad requirements for human-viewable display of a single Flatland Environment.
## Context
Shows this software component in relation to some of the other components. We name the component the "Renderer". Multiple agents interact with a single Environment. The Renderer interacts with the environment and displays on screen, and/or writes to movie or image files.
![drawing](https://docs.google.com/a/google.com/drawings/d/12345/export/png)
## Requirements

### Primary Requirements
7. Should not drive the "main loop" of Inference or training
### Secondary / Optional Requirements
15. Browser
### Performance Metrics
Here are some performance metrics which the Renderer should meet.
*(Performance metrics table: operations per second and prototype time in ms.)*
### Example Visualization
## Reference Documents
Link to this doc: https://docs.google.com/document/d/1Y4Mw0Q6r8PEOvuOZMbxQX-pV2QKDuwbZJBvn18mo9UU/edit#
### Core Specification
This specifies the system containing the environment and agents - this will be able to run independently of the renderer.
The data structure which the renderer needs to read initially resides here.
### Visualization Specification
This will specify the software which will meet the requirements documented here.
[https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#](https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#)
### Interface Specification

This will specify the interfaces through which the different components communicate.
Flatland Specs
==============
.. toctree::
   :maxdepth: 2

   specifications/specifications.md
   specifications/core.md
   specifications/railway.md
   specifications/rendering.md
   specifications/visualization.md
========================
Getting Started Tutorial
========================
Overview
--------
To use flatland in a project:

.. code-block:: python

    import flatland
Simple Example 1 : Basic Usage
------------------------------
The basic usage of RailEnv environments consists in creating a RailEnv object
endowed with a rail generator, which generates new rail networks on each reset,
and an observation generator object, which is supplied with environment-specific
information at each time step and provides a suitable observation vector to the
agents. After the RailEnv environment is created, one needs to call reset() on the
environment in order to fully initialize it.
The simplest rail generators are envs.rail_generators.rail_from_manual_specifications_generator
and envs.rail_generators.random_rail_generator.
For example,

.. code-block:: python

    env = RailEnv(...,
                  rail_generator=rail_from_manual_specifications_generator(specs),
                  number_of_agents=1,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2))
    env.reset()
Alternatively, a random environment can be generated (optionally specifying
weights for each cell type to increase or decrease their proportion in the
generated rail networks).

.. code-block:: python

    env = RailEnv(...,
                  rail_generator=random_rail_generator(...),
                  number_of_agents=3,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2))
    env.reset()
Environments can be rendered using the utils.rendertools utilities, for example:
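A minimal sketch, assuming the RenderTool API from flatland.utils.rendertools:

.. code-block:: python

    from flatland.utils.rendertools import RenderTool

    env_renderer = RenderTool(env)
    env_renderer.render_env(show=True)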
......@@ -108,19 +109,17 @@ following code. Also, tree observation data is displayed by RenderTool by defaul
.. code-block:: python

    for i in range(env.get_num_agents()):
        env.obs_builder.util_print_obs_subtree(tree=obs[i])
The complete code for this part of the Getting Started guide can be found in
* `examples/simple_example_1.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/simple_example_1.py>`_
* `examples/simple_example_2.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/simple_example_2.py>`_
* `examples/simple_example_3.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/simple_example_3.py>`_
Part 2 : Training a Simple Agent on Flatland
---------------------------------------------
This is a brief tutorial on how to train an agent on Flatland.
Here we use a simple random agent to illustrate the process of interacting with the environment.
The corresponding code can be found in examples/training_example.py and in the baselines repository.

Next we configure the difficulty of our task by modifying the complex_rail_generator parameters:

.. code-block:: python

    env = RailEnv(width=15,
                  height=15,
                  rail_generator=complex_rail_generator(nr_start_goal=10,
                                                        nr_extra=10,
                                                        min_dist=10,
                                                        max_dist=99999,
                                                        seed=1),
                  number_of_agents=5)
    env.reset()
The difficulty of a railway network depends on the dimensions (`width` x `height`) and the number of agents in the network.
By varying the number of start and goal connections (nr_start_goal) and the number of extra railway elements added (nr_extra),
you can control the density and complexity of the generated railway network.

Once we are set with the environment we can load our preferred agent, either from the baselines repository or from your own implementation.
.. code-block:: python

    agent = RandomAgent(state_size, action_size)
We start every trial by resetting the environment
.. code-block:: python

    obs, info = env.reset()
This provides the initial observation for all agents (obs = array of all observations).
In order for the environment to step forward in time we need a dictionary of actions for all active agents.
This dictionary is then passed to the environment, which checks the validity of all actions and moves the agents accordingly.
The environment returns an array of new observations, a reward dictionary for all agents, as well as flags indicating which agents are done.
This information can be used to update the policy of your agent; if done['__all__'] == True, the episode terminates.
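Putting the pieces together, a minimal interaction loop looks as follows (a sketch; it assumes the RandomAgent above exposes an ``act(observation)`` method):

.. code-block:: python

    for step in range(500):
        # Choose an action for each agent based on its current observation
        action_dict = {a: agent.act(obs[a]) for a in range(env.get_num_agents())}

        # The environment applies the actions and advances one time step
        obs, all_rewards, done, info = env.step(action_dict)

        if done['__all__']:
            break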
Part 3 : Customizing Observations and Level Generators
------------------------------------------------------
Example code for generating custom observations given a RailEnv and to generate
random rail maps are available in examples/custom_observation_example.py and
examples/custom_railmap_example.py .
Custom observations can be produced by deriving a new object from the
core.env_observation_builder.ObservationBuilder base class, for example as follows:
.. code-block:: python
    class CustomObs(ObservationBuilder):
        def __init__(self):
            self.observation_space = [5]

        def reset(self):
            return

        def get(self, handle):
            observation = handle * np.ones((5,))
            return observation
It is important that an observation_space is defined with a list of dimensions
of the returned observation tensors. get() returns the observation for the agent
with the given handle.
A RailEnv environment can then be created as usual:
.. code-block:: python
    env = RailEnv(width=7,
                  height=7,
                  rail_generator=random_rail_generator(),
                  number_of_agents=3,
                  obs_builder_object=CustomObs())
As for generating custom rail maps, the RailEnv class accepts a rail_generator
argument that must be a function with arguments `width`, `height`, `num_agents`,
and `num_resets=0`, and that has to return a GridTransitionMap object (the rail map),
and three lists of tuples containing the (row,column) coordinates of each of
num_agent agents, their initial orientation **(0=North, 1=East, 2=South, 3=West)**,
and the position of their targets.
For example, the following custom rail map generator returns an empty map of
size (height, width), with no agents (regardless of num_agents):
.. code-block:: python
    def custom_rail_generator():
        def generator(width, height, num_agents=0, num_resets=0):
            rail_trans = RailEnvTransitions()
            grid_map = GridTransitionMap(width=width, height=height, transitions=rail_trans)
            rail_array = grid_map.grid
            rail_array.fill(0)
            agents_positions = []
            agents_direction = []
            agents_target = []
            return grid_map, agents_positions, agents_direction, agents_target
        return generator
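Such a generator can then be plugged into RailEnv like any other rail generator (a usage sketch; since the generated map is empty and contains no agents, we request zero agents):

.. code-block:: python

    env = RailEnv(width=10,
                  height=10,
                  rail_generator=custom_rail_generator(),
                  number_of_agents=0)
    env.reset()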
It is worth noting that helpful utilities to manage RailEnv environments and their
related data structures are available in 'envs.env_utils'. In particular,
envs.env_utils.get_rnd_agents_pos_tgt_dir_on_rail is fairly handy to fill in
random (but consistent) agents along with their targets and initial directions,
given a rail map (GridTransitionMap object) and the desired number of agents:
.. code-block:: python
    agents_position, agents_direction, agents_target = get_rnd_agents_pos_tgt_dir_on_rail(
        rail_map,
        num_agents)
The full source code of this example can be found in `examples/training_example.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/training_example.py>`_.
Custom observations and custom predictors Tutorial
==================================================
Overview
--------
Custom observation builders derive from the `flatland.core.env_observation_builder.ObservationBuilder`_
base class and must implement two methods, :code:`reset(self)` and :code:`get(self, handle)`.
.. _`flatland.core.env_observation_builder.ObservationBuilder` : https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/core/env_observation_builder.py#L13
Below is a simple example that returns observation vectors of size 5 featuring only the ID (handle) of the agent whose
observation vector is being computed:
.. code-block:: python
    class SimpleObs(ObservationBuilder):
        """
        Simplest observation builder. The object returns observation vectors with 5 identical components,
        all equal to the ID of the respective agent.
        """

        def reset(self):
            return

        def get(self, handle):
            observation = handle * np.ones(5)
            return observation
We can pass an instance of our custom observation builder :code:`SimpleObs` to the :code:`RailEnv` creator as follows:
.. code-block:: python

    env = RailEnv(width=7,
                  height=7,
                  rail_generator=random_rail_generator(),
                  number_of_agents=3,
                  obs_builder_object=SimpleObs())
    env.reset()
Anytime :code:`env.reset()` or :code:`env.step()` is called, the observation builder will return the custom observation of all agents initialized in the env.
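For instance, the per-agent observations returned by :code:`reset` can be inspected directly (a small sketch):

.. code-block:: python

    obs, info = env.reset()
    for handle in range(env.get_num_agents()):
        print("Agent", handle, "observes", obs[handle])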
In the next example we highlight how to derive from existing observation builders and how to access internal variables of **Flatland**.
Note that this simple strategy fails when multiple agents are present, as each agent
follows its own shortest path without accounting for the other agents.
.. code-block:: python

    def __init__(self):
        super().__init__(max_depth=0)
        # We set max_depth=0 because we only need to look at the current
        # position of the agent to decide what direction is shortest.

    def reset(self):
        # Recompute the distance map, if the environment has changed.
        ...
.. code-block:: python

    env = RailEnv(width=7,
                  height=7,
                  rail_generator=complex_rail_generator(nr_start_goal=10, nr_extra=1,
                                                        min_dist=8, max_dist=99999, seed=1),
                  number_of_agents=2,
                  obs_builder_object=SingleAgentNavigationObs())
    env.reset()

    obs, all_rewards, done, _ = env.step({0: 0, 1: 1})
    for i in range(env.get_num_agents()):
        print(obs[i])
The code below creates an environment with a single agent, performs
navigation to target, and shows the path taken as an animation.
.. code-block:: python

    env = RailEnv(width=7,
                  height=7,
                  rail_generator=random_rail_generator(),
                  number_of_agents=1,
                  obs_builder_object=SingleAgentNavigationObs())
    env.reset()

    obs, all_rewards, done, _ = env.step({0: 0})
In contrast to the previous examples we also implement the :code:`def get_many(self, handles)` method.

.. code-block:: python

    def __init__(self, predictor):
        super().__init__(max_depth=0)
        self.predictor = predictor

    def reset(self):
        ...

We can then use this new observation builder and the renderer to visualize the observations of each agent.
.. code-block:: python

    # Initiate Environment
    CustomPredictor = ShortestPathPredictorForRailEnv(10)
    CustomObsBuilder = ObservePredictions(CustomPredictor)

    env = RailEnv(width=10,
                  height=10,
                  rail_generator=complex_rail_generator(nr_start_goal=5, nr_extra=1, min_dist=8, max_dist=99999, seed=1),
                  number_of_agents=3,
                  obs_builder_object=CustomObsBuilder)
    obs, info = env.reset()

    env_renderer = RenderTool(env, gl="PILSVG")
    # We render the initial step and show the observed cells as colored boxes
    env_renderer.render_env(show=True, frames=True, show_observations=True)
# Level Generation Tutorial
We are currently working on several new level generators. You can expect that the levels used in submission testing will come from a variety of level generators rather than just one, to make sure that controllers can handle any railway-specific challenge.
Let's have a look at the `sparse_rail_generator`.
## Sparse Rail Generator
![Example_Sparse](https://i.imgur.com/DP8sIyx.png)
The idea behind the sparse rail generator is to mimic classic railway structures where dense nodes (cities) are sparsely connected to each other and where you have to manage traffic flow between the nodes efficiently.
The cities in this level generator are much simplified in comparison to real city networks but it mimics parts of the problems faced in daily operations of any railway company.
There are a few parameters you can tune to build your own map and test different complexity levels of the levels.
**Warning:** some combinations of parameters do not go well together and will lead to infeasible level generation.
In the worst case, the level generator currently only issues a warning when it cannot build the environment according to the parameters provided.
This will lead to a crash of the whole env.
We are currently working on improvements here and are **happy for any suggestions from your side**.
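Until these improvements land, a pragmatic workaround is to retry generation with different seeds. The following is only a sketch; it assumes an infeasible parameter set surfaces as an exception during `reset()`:

```python
# Retry level generation with different seeds (assumption: failures raise during reset()).
env = None
for seed in range(1, 11):
    try:
        candidate = RailEnv(width=50,
                            height=50,
                            rail_generator=sparse_rail_generator(num_cities=10, seed=seed),
                            schedule_generator=sparse_schedule_generator(),
                            number_of_agents=10)
        candidate.reset()
        env = candidate
        break
    except Exception as err:
        print("Level generation failed with seed", seed, ":", err)
```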
To build an environment you instantiate a `RailEnv` as follows:
```python
# Initialize the generator
rail_generator = sparse_rail_generator(
    num_cities=10,  # Number of cities in map
    num_intersections=10,  # Number of intersections in map
    num_trainstations=50,  # Number of possible start/targets on map
    min_node_dist=6,  # Minimal distance between nodes
    node_radius=3,  # Proximity of stations to city center
    num_neighb=3,  # Number of connections to other cities
    seed=5,  # Random seed
    grid_mode=False  # False -> random distribution of nodes, True -> ordered grid
)

# Build the environment
env = RailEnv(
    width=50,
    height=50,
    rail_generator=rail_generator,
    schedule_generator=sparse_schedule_generator(),
    number_of_agents=10,
    obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv())
)

# Call reset on the environment
env.reset()
```
You can see that you now need both a `rail_generator` and a `schedule_generator` to generate a level. These need to work nicely together. The `rail_generator` will only generate the railway infrastructure and provide hints to the `schedule_generator` about where to place agents. The `schedule_generator` will then generate a schedule, meaning it places agents at different train stations and gives them tasks by providing individual targets.
You can tune the following parameters in the `sparse_rail_generator`:
- `num_cities` is the number of cities on a map. Cities are the only nodes that can host start and end points for agent tasks (train stations). Here you have to be careful that the number is not too high, as all the cities have to fit on the map. When `grid_mode=False` you have to be careful when choosing `min_node_dist`, because level generation will fail if not all cities (and intersections) can be placed with at least `min_node_dist` between them.
- `num_intersections` is the number of nodes that don't hold any train stations. They are also the first priority that a city connects to. We use these to allow for sparse connections between cities.
- `num_trainstations` defines the *total* number of train stations in the network. This also sets the maximum number of allowed agents in the environment. This is also a delicate parameter, as there is only a limited amount of space available around nodes; if the number is too high, level generation will fail. *Important*: only the number of agents provided to the environment will actually produce active train stations. The others will just be present as dead-ends (see figures below).
- `min_node_dist` is only used if `grid_mode=False` and represents the minimal distance between two nodes.
- `node_radius` defines the extent of a city. Each train station is placed at a distance to the closest city node that is smaller or equal to this number.
- `num_neighb` defines the number of neighbouring nodes that connect to each other. This changes the connectivity and thus the amount of alternative routes in the network.
- `grid_mode`: True -> nodes evenly distributed in the env, False -> random distribution of nodes
- `enhance_intersection`: True -> extra rail elements added at intersections
- `seed` is used to initialize the random generator
If you run into any bugs with sets of parameters please let us know.
Here is a network with `grid_mode=False` and the parameters from above.
![sparse_random](https://i.imgur.com/Xg7nifF.png)
and here with `grid_mode=True`
![sparse_ordered](https://i.imgur.com/jyA7Pt4.png)
## Example code
To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
# Stochasticity Tutorial
Another area where we improved **Flat**land 2.0 is the stochastic events added during the episodes.
This is very common for railway networks, where the initial plan usually needs to be rescheduled during operations, as minor events such as delayed departures from train stations, malfunctions of trains or infrastructure, or just the weather lead to delayed trains.

We implemented a poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
```python
# Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,  # Rate of malfunction occurence
    'min_duration': 3,  # Minimal duration of malfunction
    'max_duration': 10  # Max duration of malfunction
}
```
The parameters are as follows:
- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that each agent can break.
- `malfunction_rate` is the mean rate of the poisson process in number of environment steps.
- `min_duration` and `max_duration` set the range of malfunction durations. They are sampled uniformly.
You can introduce stochasticity by simply creating the env as follows:
```python
env = RailEnv(
    ...
    stochastic_data=stochastic_data,  # Malfunction data generator
    ...
)
```
In your controller, you can check whether an agent is malfunctioning:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})
# Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())
# Different agent types (trains) with different speeds.
speed_ration_map = {1.: 0.25,  # Fast passenger train
                    1. / 2.: 0.25,  # Fast freight train
                    1. / 3.: 0.25,  # Slow commuter train
                    1. / 4.: 0.25}  # Slow freight train
env = RailEnv(width=50,
              height=50,
              rail_generator=sparse_rail_generator(num_cities=20,  # Number of cities in map (where train stations are)
                                                   num_intersections=5,  # Number of intersections (no start / target)
                                                   num_trainstations=15,  # Number of possible start/targets on map
                                                   min_node_dist=3,  # Minimal distance of nodes
                                                   node_radius=2,  # Proximity of stations to city center
                                                   num_neighb=4,  # Number of connections to other cities/intersections
                                                   seed=15,  # Random seed
                                                   grid_mode=True,
                                                   enhance_intersection=True
                                                   ),
              schedule_generator=sparse_schedule_generator(speed_ration_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,  # Malfunction data generator
              obs_builder_object=tree_observation)
env.reset()
```
You will quickly realize that this will lead to unforeseen difficulties which means that **your controller** needs to observe the environment at all times to be able to react to the stochastic events.
## Example code
To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
Flatland Specs
==========================

What are **Flatland** specs about?
---------------------------------
In a human-readable language, they provide

* a code base overview (hand-drawn concept)
* key concepts (generators, envs) and how they are linked
* links to the relevant code base

## Overview

![UML_flatland.png](img/UML_flatland.png)
[Diagram Source](https://confluence.sbb.ch/x/pQfsSw)

# Different speed profiles Tutorial

One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network.
In **Flat**land 2.0 this feature will be enabled as well and will lead to much more complex configurations. Here we count on your support if you find bugs or improvements :).

The different speed profiles can be generated using the `schedule_generator`, where you can choose as many different speeds as you like.
Keep in mind that the *fastest speed* is 1 and all slower speeds must be between 1 and 0.
For the submission scoring you can assume that there will be no more than 5 speed profiles.

Later versions of **Flat**land might have varying speeds during episodes. Therefore, we return the agent speeds.
Notice that we do not guarantee that the speed will be computed at each step, but if not costly we will return it at each step.
In your controller, you can get the agents' speed from the `info` returned by `step`:
```python
obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
    speed = info['speed'][a]
```
## Actions and observation with different speed levels
Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated.
We **do not allow actions to change within the cell**.
This means that each agent can only choose an action to be taken when entering a cell.
This action is then executed when a step to the next cell is valid. For example

- The agent enters a switch and chooses to deviate left. The agent's fractional speed is 1/4, and thus the agent will take 4 time steps to complete its journey through the cell. On the 4th time step the agent will leave the cell, deviating left as chosen at the entry of the cell.
- All actions chosen by the agent during its travels within a cell are ignored.
- Agents can make observations at any time step. Make sure to discard observations without any information. See this [example](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py) for a simple implementation.
- The environment checks whether an agent is allowed to move to the next cell only at the time of the switch to the next cell.
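As a quick sanity check of the fractional-speed mechanics, the number of time steps an agent spends in a cell is the reciprocal of its fractional speed (a sketch):

```python
# An agent with fractional speed 1/4 needs 4 time steps to traverse a cell.
agent_speed = 1.0 / 4.0
steps_per_cell = int(round(1.0 / agent_speed))
print(steps_per_cell)  # -> 4
```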
In your controller, you can check whether an agent requires an action by checking `info`:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['action_required'][a] and info['malfunction'][a] == 0:
        action_dict.update({a: ...})
```
Notice that `info['action_required'][a]` does not mean that the action will have an effect:
if the next cell is blocked or the agent breaks down, the action cannot be performed and an action will be required again in the next step.
## Rail Generators and Schedule Generators
The separation between rail generator and schedule generator reflects the organisational separation in the railway domain
Usually, there is a third organisation, which ensures discrimination-free access to the infrastructure for concurrent requests for the infrastructure in a **schedule planning phase**.
However, in the **Flat**land challenge, we focus on the re-scheduling problem during live operations.
Technically,
```python
RailGeneratorProduct = Tuple[GridTransitionMap, Optional[Any]]
RailGenerator = Callable[[int, int, int, int], RailGeneratorProduct]

AgentPosition = Tuple[int, int]

Schedule = collections.namedtuple('Schedule',   'agent_positions '
                                                'agent_directions '
                                                'agent_targets '
                                                'agent_speeds '
                                                'agent_malfunction_rates '
                                                'max_episode_steps')
ScheduleGenerator = Callable[[GridTransitionMap, int, Optional[Any], Optional[int]], Schedule]
```
We can then produce `RailGenerator`s by currying:
```python
def sparse_rail_generator(num_cities=5, num_intersections=4, num_trainstations=2, min_node_dist=20, node_radius=2,
                          num_neighb=3, grid_mode=False, enhance_intersection=False, seed=1):

    def generator(width, height, num_agents, num_resets=0):
        # generate the grid and (optionally) some hints for the schedule_generator
        ...

        return grid_map, {'agents_hints': {
            'num_agents': num_agents,
            'agent_start_targets_nodes': agent_start_targets_nodes,
            # ...
        }}

    return generator
```
And, similarly, `ScheduleGenerator`s:
```python
def sparse_schedule_generator(speed_ratio_map: Mapping[float, float] = None) -> ScheduleGenerator:
    def generator(rail: GridTransitionMap, num_agents: int, hints: Any = None):
        # place agents:
        # - initial position
        # - initial direction
        # - (initial) speed
        # - malfunction
        ...
        return agents_position, agents_direction, agents_target, speeds, agents_malfunction

    return generator
```
For instance, the way the `sparse_rail_generator` generates the grid, it already determines the possible start and target nodes of the agents and passes them on as hints.
Hence, `rail_generator` and `schedule_generator` have to match if `schedule_generator` presupposes some specific `agents_hints`.
The environment's `reset` takes care of applying the two generators:
```python
def __init__(self,
             ...
             rail_generator: RailGenerator = random_rail_generator(),
             ...
             ):
    self.rail_generator: RailGenerator = rail_generator
    self.schedule_generator: ScheduleGenerator = schedule_generator

def reset(self, regenerate_rail=True, regenerate_schedule=True):
    rail, optionals = self.rail_generator(self.width, self.height, self.get_num_agents(), self.num_resets)
    ...
    if optionals and 'agents_hints' in optionals:
        agents_hints = optionals['agents_hints']
    self.agents_static = EnvAgentStatic.from_lists(
        *self.schedule_generator(self.rail, self.get_num_agents(), hints=agents_hints))
```
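With the generators wired in, a controller only ever triggers them through `reset`; for example (a sketch using the keyword arguments shown above):

```python
# Regenerate both the infrastructure and the schedule on reset
obs, info = env.reset(regenerate_rail=True, regenerate_schedule=True)
```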
## RailEnv Speeds
One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network.
The different speed profiles can be generated using the `schedule_generator`, where you can choose as many different speeds as you like.
Keep in mind that the *fastest speed* is 1 and all slower speeds must be between 1 and 0.
For the submission scoring you can assume that there will be no more than 5 speed profiles.
Currently (as of **Flat**land 2.0), an agent keeps its speed over the whole episode.
Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated.
We **do not allow actions to change within the cell**.
This means that each agent can only choose an action to be taken when entering a cell (i.e. when its positional fraction is 0).
There are some real railway-specific considerations, such as reserved blocks, that are similar to this behavior.
But more importantly, we disabled in-cell actions to simplify the use of machine learning algorithms with the environment.
If we allowed stop actions in the middle of cells, then the controller would need to make many more observations, and not only at cell changes.
(This is not set in stone and could be updated if the need arises.)
The chosen action is then executed when a step to the next cell is valid. For example

- The agent enters a switch and chooses to deviate left. The agent's fractional speed is 1/4, and thus the agent will take 4 time steps to complete its journey through the cell. On the 4th time step the agent will leave the cell, deviating left as chosen at the entry of the cell.
- All actions chosen by the agent during its travels within a cell are ignored.
- Agents can make observations at any time step. Make sure to discard observations without any information. See this [example](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py) for a simple implementation.
- The environment checks whether an agent is allowed to move to the next cell only at the time of the switch to the next cell.
In your controller, you can check whether an agent requires an action by checking `info`:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['action_required'][a]:
        action_dict.update({a: ...})
```
Notice that `info['action_required'][a]` does not guarantee that the action will have an effect:

* if the agent breaks down (see stochasticity below) on entering the cell (no distance elapsed in the cell), an action is required as long as the agent is broken down;
  when it gets back to work, the action chosen just before will be taken and executed at the end of the cell; you may check whether the agent
  gets healthy again in the next step by checking `info['malfunction'][a] == 1`.
* when the agent has spent enough time in the cell, the next cell may not be free and the agent has to wait.
Since later versions of **Flat**land might have varying speeds during episodes, we return the agents' speed - in your controller, you can get the agents' speed from the `info` returned by `step`:
```python
obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
    speed = info['speed'][a]
```
Notice that we do not guarantee that the speed will be computed at each step, but if not costly we will return it at each step.
## RailEnv Malfunctioning / Stochasticity
Stochastic events may happen during the episodes.
This is very common for railway networks where the initial plan usually needs to be rescheduled during operations as minor events such as delayed departure from trainstations, malfunctions on trains or infrastructure or just the weather lead to delayed trains.
We implemented a poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.

```python
# Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,  # Rate of malfunction occurence
    'min_duration': 3,  # Minimal duration of malfunction
    'max_duration': 10  # Max duration of malfunction
}
```
The parameters are as follows:
- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that each agent can break.
- `malfunction_rate` is the mean rate of the poisson process in number of environment steps.
- `min_duration` and `max_duration` set the range of malfunction durations. They are sampled uniformly.
You can introduce stochasticity by simply creating the env as follows:
```python
env = RailEnv(
    ...
    stochastic_data=stochastic_data,  # Malfunction data generator
    ...
)
```
In your controller, you can check whether an agent is malfunctioning:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})
# Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())
# Different agent types (trains) with different speeds.
speed_ration_map = {1.: 0.25,  # Fast passenger train
                    1. / 2.: 0.25,  # Fast freight train
                    1. / 3.: 0.25,  # Slow commuter train
                    1. / 4.: 0.25}  # Slow freight train
env = RailEnv(width=50,
              height=50,
              rail_generator=sparse_rail_generator(num_cities=20,  # Number of cities in map (where train stations are)
                                                   num_intersections=5,  # Number of intersections (no start / target)
                                                   num_trainstations=15,  # Number of possible start/targets on map
                                                   min_node_dist=3,  # Minimal distance of nodes
                                                   node_radius=2,  # Proximity of stations to city center
                                                   num_neighb=4,  # Number of connections to other cities/intersections
                                                   seed=15,  # Random seed
                                                   grid_mode=True,
                                                   enhance_intersection=True
                                                   ),
              schedule_generator=sparse_schedule_generator(speed_ration_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,  # Malfunction data generator
              obs_builder_object=tree_observation)
```
## Observation Builders
Every `RailEnv` has an `obs_builder`. The `obs_builder` has full access to the `RailEnv`.
The `obs_builder` is called in the `step()` function to produce the observations.
```python
env = RailEnv(
    ...
    obs_builder_object=TreeObsForRailEnv(
        max_depth=2,
        predictor=ShortestPathPredictorForRailEnv(max_depth=10)
    ),
    ...
)
```
The two principal observation builders provided are global and tree.
### Global Observation Builder
`GlobalObsForRailEnv` gives a global observation of the entire rail environment:

* a transition map array with dimensions (env.height, env.width, 16),
  assuming a 16-bit encoding of transitions.
* Two 2D arrays (map_height, map_width, 2) containing respectively the position of the given agent's
  target and the positions of the other agents' targets.
* A 3D array (map_height, map_width, 4) with
  - first channel containing the agent's position and direction
  - second channel containing the other agents' positions and directions
  - third channel containing agent malfunctions
  - fourth channel containing agent fractional speeds
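Per agent, the global observation is thus a tuple of the three components listed above; a controller can unpack it roughly like this (a sketch, assuming the tuple ordering follows the list):

```python
obs, info = env.reset()
transition_map, targets, agents_state = obs[0]  # observation of agent 0 (assumed ordering)
print(transition_map.shape)  # (env.height, env.width, 16)
```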
### Tree Observation Builder
`TreeObsForRailEnv` computes the current observation for each agent.
The observation vector is composed of 4 sequential parts, corresponding to data from up to 4 possible
movements in a `RailEnv` ("up to" because only a subset of the possible transitions is allowed in RailEnv).
The possible movements are sorted relative to the current orientation of the agent, rather than NESW as for
the transitions. The order is:
[data from 'left'] + [data from 'forward'] + [data from 'right'] + [data from 'back']
Each branch data is organized as:
[root node information] +
[recursive branch data from 'left'] +
[... from 'forward'] +
[... from 'right] +
[... from 'back']
Each node information is composed of 11 features:

1. if own target lies on the explored branch, the current distance from the agent in number of cells is stored.
2. if another agent's target is detected, the distance in number of cells from the agent's current location
   is stored.
3. if another agent is detected, the distance in number of cells from the current agent position is stored.
4. possible conflict detected
   tot_dist = another agent predicts to pass along this cell at the same time as the agent; we store the
   distance in number of cells from the current agent position
   0 = no other agent reserves the same cell at a similar time
5. if a switch that is not usable by the agent is detected, we store the distance.
6. This feature stores the distance in number of cells to the next branching (current node)
7. minimum distance from node to the agent's target given the direction of the agent if this path is chosen
8. agent in the same direction
n = number of agents present same direction
(possible future use: number of other agents in the same direction in this branch)
0 = no agent present same direction
9. agent in the opposite direction
n = number of agents present other direction than myself (so conflict)
(possible future use: number of other agents in other direction in this branch, ie. number of conflicts)
0 = no agent present other direction than myself
10. malfunctioning/blocking agents
    n = number of time steps the observed agent remains blocked
11. slowest observed speed of an agent in same direction
1 if no agent is observed
min_fractional speed otherwise
Missing/padding nodes are filled in with -inf (truncated).
Missing values in present node are filled in with +inf (truncated).
In case of the root node, the values are [0, 0, 0, 0, distance from agent to target, own malfunction, own speed]
In case the target node is reached, the values are [0, 0, 0, 0, 0].
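A handy sanity check is the size of a flattened tree observation: with a branching factor of 4 and 11 features per node (ignoring the special root-node encoding), a tree of depth `d` has `1 + 4 + ... + 4^d` nodes. A small sketch:

```python
features_per_node = 11
max_depth = 2
num_nodes = sum(4 ** level for level in range(max_depth + 1))  # 1 + 4 + 16 = 21
print(num_nodes, num_nodes * features_per_node)  # -> 21 231
```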
## Predictors
Predictors make predictions on future agents' moves based on the current state of the environment.
They are decoupled from observation builders in order to encapsulate the functionality and make it re-usable.

For instance, `TreeObsForRailEnv` optionally uses the predicted trajectories while exploring
the branches of an agent's future moves to detect future conflicts.
The general call structure is as follows:
```
RailEnv.step()
    -> ObservationBuilder.get_many()
        -> self.predictor.get()
        self.get()
        self.get()
        ...
```
## Example code

To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
# How to get started in Round 2
- [Environment Changes](#environment-changes)
- [Level generation](#level-generation)
- [Observations](#observations)
- [Predictions](#predictions)
## Environment Changes
There have been some major changes in how agents are being handled in the environment in this Flatland update.

### Agents

Agents are no longer permanent entities in the environment. Agents will now be removed from the environment as soon as they finish their task. To keep interactions with the environment as simple as possible, we do not modify the dimensions of the observation vectors nor the number of agents. Agents that have finished do not require any special treatment from the controller. Any action provided to these agents is simply ignored, just like before.
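In practice the controller can simply skip agents that are already done (a sketch):

```python
# After each step, build the next action dictionary only for agents still active.
obs, all_rewards, done, info = env.step(action_dict)

action_dict = {}
for a in range(env.get_num_agents()):
    if done[a]:
        continue  # finished agents are removed from the grid; their actions are ignored anyway
    action_dict[a] = ...  # your controller's action for agent a
```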
Start positions of agents are *not unique* anymore. This means that many agents can start from the same position on the railway grid. It is important to keep in mind that whichever agent moves first will block the rest of the agents from moving into the same cell. Thus, the controller can already decide the ordering of the agents from the first step.
## Level Generation
The levels are now generated using the `sparse_rail_generator` and the `sparse_schedule_generator`
### Rail Generation
The rail generation is done in a sequence of steps:
1. A number of city centers are placed on a grid of size `(height, width)`
2. Each city is connected to two neighbouring cities
3. Internal parallel tracks are generated in each city
### Schedule Generation
The `sparse_schedule_generator` produces tasks for the agents by selecting a starting city and a target city. The agent is then placed on an even track number in the starting city and oriented such that a path exists to the target city. The task for the agent is to reach the target position as fast as possible.

In the future we will update how these schedules are generated to allow for more complex tasks.
## Observations
Observations have been updated to reflect the novel features and behaviors of Flatland. Have a look at [observation](https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/envs/observations.py) or the documentation for more details on the observations.
## Predictions