Observation and Action Spaces
------------------------------
This is an introduction to the three standard observations and the action space of **Flatland**.
Action Space
^^^^^^^^^^^^
Flatland is a railway simulation, so the actions of an agent are strongly limited by the railway network. This means that in many cases not all actions are valid.
The possible actions of an agent are:
- ``0`` **Do Nothing**: If the agent is moving, it continues moving; if it is stopped, it stays stopped.
- ``1`` **Deviate Left**: If the agent is at a switch with a transition to its left, the agent will choose the left path. Otherwise the action has no effect. If the agent is stopped, this action will start agent movement again if allowed by the transitions.
- ``2`` **Go Forward**: This action will start the agent when stopped, move it forward, and choose the straight direction at switches.
- ``3`` **Deviate Right**: Exactly the same as deviate left but for right turns.
- ``4`` **Stop**: This action causes the agent to stop.
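For illustration, here is a minimal sketch of issuing these actions through ``env.step`` (assuming an environment ``env`` created as in the Getting Started tutorial):

.. code-block:: python

    # Send every agent straight ahead for one step (action code 2 = Go Forward)
    action_dict = {handle: 2 for handle in env.get_agent_handles()}
    obs, all_rewards, done, _ = env.step(action_dict)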
Observation Spaces
^^^^^^^^^^^^^^^^^^
In the **Flatland** environment we have included three basic observations to get you started. The figure below illustrates the observation range of each of them: ``Global``, ``Local Grid`` and ``Local Tree``.
.. image:: https://i.imgur.com/oo8EIYv.png
   :height: 100
   :width: 200
Global Observation
~~~~~~~~~~~~~~~~~~
Gives a global observation of the entire rail environment.
The observation is composed of the following elements:
- transition map array with dimensions (``env.height``, ``env.width``, ``16``), assuming **16 bits encoding of transitions**.
- A 3D array (``map_height``, ``map_width``, ``2``) whose two channels contain, respectively, the position of the given agent's target and the positions of the other agents' targets.
- A 3D array (``map_height``, ``map_width``, ``8``) with the **first 4 channels** containing the **one hot encoding** of the given agent's direction and the **last 4 channels** containing the one hot encodings of the other agents' directions at their position coordinates.
We encourage you to enhance this observation with any layer you think might help solve the problem.
It would also be possible to construct a global observation for a super agent that controls all agents at once.
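As a sketch, the global observation can be plugged into a ``RailEnv`` like any other observation builder (import paths as used elsewhere in these docs; the exact return value of ``reset()`` may vary between versions):

.. code-block:: python

    from flatland.envs.observations import GlobalObsForRailEnv
    from flatland.envs.rail_env import RailEnv
    from flatland.envs.rail_generators import random_rail_generator

    env = RailEnv(width=10,
                  height=10,
                  rail_generator=random_rail_generator(),
                  number_of_agents=2,
                  obs_builder_object=GlobalObsForRailEnv())
    obs = env.reset()
    # obs[handle] then holds the layers described above for each agent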
Local Grid Observation
~~~~~~~~~~~~~~~~~~~~~~
Gives a local observation of the rail environment around the agent.
The observation is composed of the following elements:
- transition map array of the local environment around the given agent, with dimensions (``2*view_radius + 1``, ``2*view_radius + 1``, ``16``), assuming **16 bits encoding of transitions**.
- A 3D array (``2*view_radius + 1``, ``2*view_radius + 1``, ``2``) containing, respectively, the agent's own target position and the positions of the other agents' targets, if these lie within the agent's vision range.
- A 3D array (``2*view_radius + 1``, ``2*view_radius + 1``, ``4``) containing the one hot encoding of directions of the other agents at their position coordinates, if they are in the agent's vision range.
- A 4-element array with the one hot encoding of the agent's direction.
Be aware that this observation **does not** contain any clues about the target's location if the target is out of range. Navigation on maps where the observation radius does not guarantee a visible target at all times will therefore become very difficult.
We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps.
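For intuition, here is a sketch of how such a local window can be cut out of a global map with zero padding at the borders (illustrative only; the built-in builder handles this internally):

.. code-block:: python

    import numpy as np

    def local_window(global_map, row, col, view_radius):
        # return the (2r+1, 2r+1, channels) window centered on (row, col)
        r = view_radius
        padded = np.pad(global_map, ((r, r), (r, r), (0, 0)), mode='constant')
        # after padding, the original (row, col) sits at (row + r, col + r)
        return padded[row:row + 2 * r + 1, col:col + 2 * r + 1, :]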
Tree Observation
~~~~~~~~~~~~~~~~
The tree observation is built by exploiting the graph structure of the railway network. The observation is generated by spanning a **4 branched tree** from the current position of the agent. Each branch follows the allowed transitions (backward branch only allowed at dead-ends) until a cell with multiple allowed transitions is reached. Here the information gathered along the branch is stored as a node in the tree.
The figure below illustrates how the tree observation is built:
1. From the agent's location, probe all 4 directions (``L:Blue``, ``F:Green``, ``R:Purple``, ``B:Red``), starting with left, and start a branch for each allowed transition:

   1. For each branch, walk along the allowed transitions until you reach a dead-end, a switch or the target destination.
   2. Create a node and fill in the node information as stated below.
   3. If the max depth of the tree is not reached and there are possible transitions, start new branches and repeat the steps above.

2. Fill up all non-existing branches with -infinity so that the tree size is invariant to the number of possible transitions at branching points.
Note that we always start with the left branch according to the agent orientation. Thus the tree observation is independent of the NESW orientation of cells, and only considers the transitions relative to the agent's orientation.
The colors in the figure below illustrate which branch a cell belongs to. If there are multiple colors in a cell, the cell is visited by different branches of the tree observation.
The right side of the figure shows the resulting tree of the railway network on the left. A cross means that no branch was built. A node without children is a terminal node (dead-end, max depth reached or no transition possible). A circle indicates a node filled with the corresponding information stated below in Node Information.
.. image:: https://i.imgur.com/sGBBhzJ.png
   :height: 100
   :width: 200
Node Information
~~~~~~~~~~~~~~~~
Each node is filled with information gathered along the path to the node. Currently each node contains 9 features:
- 1: if own target lies on the explored branch, the current distance from the agent in number of cells is stored.
- 2: if another agent's target is detected, the distance in number of cells from the current agent position is stored.
- 3: if another agent is detected, the distance in number of cells from the current agent position is stored.
- 4: possible conflict detected (this only works when a predictor is used and will not be important in this tutorial).
- 5: if an unusable switch (for the agent) is detected, we store the distance. An unusable switch is a switch where the agent does not have any choice of path, but other agents coming from different directions might.
- 6: this feature stores the distance (in number of cells) to the next node (e.g. switch, target or dead-end).
- 7: minimum remaining travel distance from this node to the agent's target, given the direction of the agent, if this path is chosen.
- 8: agent in the same direction found on path to node:

  - ``n`` = number of agents present in the same direction (possible future use: number of other agents in the same direction in this branch)
  - ``0`` = no agent present in the same direction

- 9: agent in the opposite direction found on path to node:

  - ``n`` = number of agents present in the opposite direction to the observing agent
  - ``0`` = no agent present in the opposite direction to the observing agent
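A minimal sketch of building the tree observation and printing the collected node features (``util_print_obs_subtree`` appears in the Getting Started guide; parameter values here are illustrative):

.. code-block:: python

    from flatland.envs.observations import TreeObsForRailEnv
    from flatland.envs.rail_env import RailEnv
    from flatland.envs.rail_generators import complex_rail_generator

    env = RailEnv(width=15,
                  height=15,
                  rail_generator=complex_rail_generator(nr_start_goal=10, nr_extra=10, min_dist=10),
                  number_of_agents=2,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2))
    obs = env.reset()
    for handle in env.get_agent_handles():
        # prints the 9 features of every node in this agent's tree
        env.obs_builder.util_print_obs_subtree(tree=obs[handle])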
## Rendering Specifications
### Scope
This doc specifies the software to meet the requirements in the Visualization requirements doc.
### References
- [Visualization Requirements](visualization)
- [Core Spec](./core)
### Interfaces
#### Interface with Environment Component
- Environment produces the Env Snapshot data structure (TBD)
- Renderer reads the Env Snapshot
- Connection between Env and Renderer, either:
  - Environment "invokes" the renderer in-process
  - Renderer "connects" to the environment
    - Eg Env acts as a server, Renderer as a client
- Either:
  - The Env sends a Snapshot to the renderer and waits for rendering
- Or (sketched below):
  - The Env puts snapshots into a rendering queue
  - The renderer blocks / waits on the queue, waiting for a new snapshot to arrive
  - If several snapshots are waiting, delete and skip them and just render the most recent
  - Delete the snapshot after rendering
- Optionally:
  - Render every frame / time step
  - Or, render frames without blocking the environment
    - Render frames in a separate process / thread
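A sketch of the queue variant in Python (`renderer.render` is an illustrative call, not a defined API):

```python
import queue
import threading

snapshot_queue = queue.Queue()

def render_loop(renderer):
    # Block on the queue; if several snapshots piled up, render only the newest
    while True:
        snapshot = snapshot_queue.get()  # wait for a new snapshot to arrive
        try:
            while True:                  # drain any stale snapshots
                snapshot = snapshot_queue.get_nowait()
        except queue.Empty:
            pass
        renderer.render(snapshot)        # illustrative renderer call
        del snapshot                     # snapshot is discarded after rendering

# render in a separate thread so the environment is never blocked:
# threading.Thread(target=render_loop, args=(renderer,), daemon=True).start()
```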
##### Environment Snapshot
**Data Structure**
A definition of the data structure is to be defined in the Core requirements or Interfaces doc.
**Example only**
Top-level dictionary

- World nd-array
  - Each element represents available transitions in a cell
- List of agents
  - Agent location, orientation, movement (forward / stop / turn?)
- Observation
  - Rectangular observation
    - Maybe just dimensions - width + height (ie no need for contents)
    - Can be highlighted in display as per minigrid
  - Tree-based observation
    - TBD
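A sketch of what such a snapshot might look like as a Python dictionary (all field names are placeholders pending the Core / Interfaces definition):

```python
env_snapshot = {
    "world": world_grid,                 # nd-array; each element encodes a cell's transitions
    "agents": [
        {"position": (3, 7),             # (row, col)
         "orientation": 1,               # 0=N, 1=E, 2=S, 3=W
         "movement": "forward"},         # forward / stop / turn?
    ],
    "observations": [
        {"type": "rect", "width": 5, "height": 5},  # dimensions only, no contents
    ],
}
```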
#### Existing Tools / Libraries
1. Pygame
   1. Very easy to use. Like dead simple to add sprites etc. [Link](https://studywolf.wordpress.com/2015/03/06/arm-visualization-with-pygame/)
   2. No inbuilt support for threads/processes. Does get faster if using pypy/psyco.
2. PyQt
   1. Somewhat simple, a little more verbose to use the different modules.
   2. Multi-threaded via QThread! Yay! (Doesn't block the main thread that does the real work.) [Link](https://nikolak.com/pyqt-threading-tutorial/)
##### How to structure the code
1. Define draw functions/classes for each primitive
   1. Primitives: Agents (Trains), Railroad, Grass, Houses etc.
2. Background. Initialize the background before starting the episode.
   1. Static objects in the scene: directly draw those primitives once and cache (see the sketch below).
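A sketch of this structure in Pygame (drawing details are placeholders):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 800))

# 2. Background: draw static primitives once and cache them on a surface
background = pygame.Surface(screen.get_size())
background.fill((50, 160, 60))  # grass-colored backdrop
# ... draw railroad, houses etc. onto `background` here ...

def draw_frame(agents):
    # Each time step: blit the cached background, then overlay dynamic primitives
    screen.blit(background, (0, 0))
    for agent in agents:
        pygame.draw.circle(screen, (200, 30, 30), agent["xy"], 10)  # placeholder train
    pygame.display.flip()
```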
##### Proposed Interfaces
To-be-filled
#### Technical Graphics Considerations
##### Overlay dynamic primitives over the background at each time step
No point trying to figure out changes. We need to explicitly draw every primitive anyway (that's how these renderers work).
## Visualization
![logo](https://drive.google.com/uc?export=view&id=1rstqMPJXFJd9iD46z1A5Rus-W0Ww6O8i)
### Introduction & Scope
Broad requirements for human-viewable display of a single Flatland Environment.
#### Context
Shows this software component in relation to some of the other components. We name the component the "Renderer". Multiple agents interact with a single Environment. A renderer interacts with the environment, displays on screen, and/or writes movie or image files.
<p id="gdcalert2" ><span style="color: red; font-weight: bold">>>>>> gd2md-html alert: inline drawings not supported directly from Docs. You may want to copy the inline drawing to a standalone drawing and export by reference. See <a href="https://github.com/evbacher/gd2md-html/wiki/Google-Drawings-by-reference">Google Drawings by reference</a> for details. The img URL below is a placeholder. </span><br>(<a href="#">Back to top</a>)(<a href="#gdcalert3">Next alert</a>)<br><span style="color: red; font-weight: bold">>>>>> </span></p>
![drawing](https://docs.google.com/a/google.com/drawings/d/12345/export/png)
### Requirements
#### Primary Requirements
1. Visualize or Render the state of the environment
   1. Read an Environment + Agent Snapshot provided by the Environment component
   2. Display onto a local screen in real-time (or near real-time)
   3. Include all the agents
   4. Illustrate the agent observations (typically subsets of the grid / world)
   5. 2d-rendering only
2. Output visualisation into movie / image files for use in later animation
3. Should not impose control-flow constraints on Environment
   1. Should not force env to respond to events
   2. Should not drive the "main loop" of Inference or training
#### Secondary / Optional Requirements
1. During training (possibly across multiple processes or machines / OS instances), display a single training environment
   1. without holding up the other environments in the training
   2. Some training environments may be remote to the display machine (eg using GCP / AWS)
   3. Attach to / detach from a running environment / training cluster without restarting training
2. Provide a switch to make use of graphics / artwork provided by a graphic artist
   1. Fast / compact mode for general use
   2. Beauty mode for publicity / demonstrations
3. Provide a switch between smooth / continuous animation of an agent (slower) vs jumping from cell to cell (faster)
   1. Smooth / continuous translation between cells
   2. Smooth / continuous rotation
4. Speed - ideally capable of 60fps (see performance metrics)
5. Window view - only render part of the environment, or a single agent and agents nearby
   1. May not be feasible to render very large environments
   2. Possibly more than one window, ie one for each selected agent
   3. Window(s) can be tied to agents, ie they move around with the agent, and optionally rotate with the agent
6. Interactive scaling
   1. eg wide view, narrow / enlarged view
   2. eg with mouse scrolling & zooming
7. Minimize necessary skill-set for participants
   1. Python API to gui toolkit, no need for C/C++
8. View on various media:
   1. Linux & Windows local display
   2. Browser
#### Performance Metrics
Here are some performance metrics which the Renderer should meet.
| Operation | Per second | Target time (ms) | Prototype time (ms) |
| --- | ---: | ---: | ---: |
| Write an agent update (ie env as client providing an agent update) | | 0.1 | |
| Draw an environment window 20x20 | 60 | 16 | |
| Draw an environment window 50 x 50 | 10 | | |
| Draw an agent update on an existing environment window, 5 agents visible | | 1 | |
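A minimal way to check an implementation against these targets (assuming the `RenderTool` API from the Flatland docs):

```python
import time

t0 = time.perf_counter()
env_renderer.render_env(show=True)  # draw one environment window
print(f"render took {(time.perf_counter() - t0) * 1000:.1f} ms")
```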
#### Example Visualization
### Reference Documents
Link to this doc: https://docs.google.com/document/d/1Y4Mw0Q6r8PEOvuOZMbxQX-pV2QKDuwbZJBvn18mo9UU/edit#
#### Core Specification
This specifies the system containing the environment and agents - this will be able to run independently of the renderer.
[https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit](https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit)
The data structure which the renderer needs to read initially resides here.
#### Visualization Specification
This will specify the software which will meet the requirements documented here.
[https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#](https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#)
#### Interface Specification
This will specify the interfaces through which the different components communicate
Getting Started Tutorial
========================

Overview
--------
Following are three short tutorials to help new users get acquainted with how
to create RailEnvs, how to train simple DQN agents on them, and how to customize
them.
To use flatland in a project:
.. code-block:: python

    import flatland
Simple Example 1 : Basic Usage
------------------------------
The basic usage of RailEnv environments consists in creating a RailEnv object
endowed with a rail generator, that generates new rail networks on each reset,
and an observation generator object, that is supplied with environment-specific
information at each time step and provides a suitable observation vector to the
agents. After the RailEnv environment is created, one needs to call reset() on the
environment in order to fully initialize it.
The simplest rail generators are envs.rail_generators.rail_from_manual_specifications_generator
and envs.rail_generators.random_rail_generator.
The first one accepts a list of lists in which each element is a 2-tuple whose
entries represent the 'cell_type' (see core.transitions.RailEnvTransitions) and
the desired clockwise rotation of the cell contents (0, 90, 180 or 270 degrees).
For example,
.. code-block:: python

    # 'specs' is the list of lists of (cell_type, rotation) tuples described above;
    # its full definition is elided in this diff excerpt
    env = RailEnv(width=len(specs[0]),
                  height=len(specs),
                  rail_generator=rail_from_manual_specifications_generator(specs),
                  number_of_agents=1,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2))
    env.reset()
Alternatively, a random environment can be generated (optionally specifying
weights for each cell type to increase or decrease their proportion in the
generated rail networks).
.. code-block:: python

    # Relative proportion of each cell type; the entries for
    # Cases 0-7 are elided in this diff excerpt
    transition_probability = [
        # ... Cases 0-7 ...
        0.2,  # Case 8 - turn left
        0.2,  # Case 9 - turn right
        1.0]  # Case 10 - mirrored switch

    # Example: generate a random rail
    env = RailEnv(width=10,
                  height=10,
                  rail_generator=random_rail_generator(
                      cell_type_relative_proportion=transition_probability
                  ),
                  number_of_agents=3,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2))
    env.reset()
Environments can be rendered using the utils.rendertools utilities, for example:
.. code-block:: python

    env_renderer = RenderTool(env)
    env_renderer.render_env(show=True)
Finally, the environment can be run by supplying the environment step function
with a dictionary of actions whose keys are agents' handles (returned by
env.get_agent_handles() ) and the corresponding values the selected actions.
For example, for a two-agent environment:
.. code-block:: python

    handles = env.get_agent_handles()
    action_dict = {handles[0]: 0, handles[1]: 0}
    obs, all_rewards, done, _ = env.step(action_dict)
where 'obs', 'all_rewards', and 'done' are also dictionaries indexed by the agents'
handles, whose values correspond to the relevant observations, rewards and terminal
status for each agent. Further, the 'done' dictionary returns an extra key
'__all__' that is set to True after all agents have reached their goals.
In the specific case a TreeObsForRailEnv observation builder is used, it is
possible to print a representation of the returned observations with the
following code. Also, tree observation data is displayed by RenderTool by default.
.. code-block:: python

    for i in range(env.get_num_agents()):
        env.obs_builder.util_print_obs_subtree(tree=obs[i])
The complete code for this part of the Getting Started guide can be found in
* `examples/simple_example_1.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/simple_example_1.py>`_
* `examples/simple_example_2.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/simple_example_2.py>`_
Part 2 : Training a Simple Agent on Flatland
--------------------------------------------
This is a brief tutorial on how to train an agent on Flatland.
Here we use a simple random agent to illustrate the process on how to interact with the environment.
The corresponding code can be found in examples/training_example.py and in the baselines repository
you find a tutorial to train a `DQN <https://arxiv.org/abs/1312.5602>`_ agent to solve the navigation task.
We start by importing the necessary Flatland libraries
.. code-block:: python

    from flatland.envs.rail_generators import complex_rail_generator
    from flatland.envs.schedule_generators import complex_schedule_generator
    from flatland.envs.rail_env import RailEnv
The complex_rail_generator is used in order to guarantee feasible railway network configurations for training.
Next we configure the difficulty of our task by modifying the complex_rail_generator parameters.
.. code-block:: python

    env = RailEnv(width=15,
                  height=15,
                  rail_generator=complex_rail_generator(
                      nr_start_goal=10,
                      nr_extra=10,
                      min_dist=10,
                      max_dist=99999,
                      seed=1),
                  number_of_agents=5)
    env.reset()

The difficulty of a railway network depends on the dimensions (`width` x `height`) and the number of agents in the network.
By varying the number of start and goal connections (nr_start_goal) and the number of extra railway elements added (nr_extra)
the number of alternative paths of each agent can be modified. The more possible paths an agent has to reach its target, the easier the task becomes.
Here we don't specify any observation builder but rather use the standard tree observation. If you would like to use a custom observation please follow
the instructions in the next tutorial.
Feel free to vary these parameters to see how your own agent holds up on different settings. The evaluation set of railway configurations will
cover the whole spectrum from easy to complex tasks.
Once we are set with the environment we can load our preferred agent from either RLlib or any other resource. Here we use a random agent to illustrate the code.
.. code-block:: python

    agent = RandomAgent(state_size, action_size)
We start every trial by resetting the environment
.. code-block:: python

    obs, info = env.reset()
This provides the initial observation for all agents (obs = dictionary of all agents' observations).
In order for the environment to step forward in time we need a dictionary of actions for all active agents.
This dictionary is then passed to the environment, which checks the validity of all actions and executes an environment step.
.. code-block:: python
    next_obs, all_rewards, done, _ = env.step(action_dict)
The environment returns the new observations, a reward dictionary for all agents as well as flags indicating which agents are done.
This information can be used to update the policy of your agent; once done['__all__'] == True, the episode terminates.
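Putting the pieces together, a sketch of the full interaction loop (assuming the RandomAgent from examples/training_example.py, which exposes an ``act`` method):

.. code-block:: python

    n_trials = 5  # illustrative
    for trial in range(n_trials):
        obs, info = env.reset()
        done = {'__all__': False}
        while not done['__all__']:
            action_dict = {a: agent.act(obs[a]) for a in range(env.get_num_agents())}
            next_obs, all_rewards, done, _ = env.step(action_dict)
            obs = next_obs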
Part 3 : Customizing Observations and Level Generators
------------------------------------------------------
Example code for generating custom observations given a RailEnv and for generating
random rail maps is available in examples/custom_observation_example.py and
examples/custom_railmap_example.py .
Custom observations can be produced by deriving a new object from the
core.env_observation_builder.ObservationBuilder base class, for example as follows:
.. code-block:: python

    class CustomObs(ObservationBuilder):
        def __init__(self):
            self.observation_space = [5]

        def reset(self):
            return

        def get(self, handle):
            observation = handle * np.ones((5,))
            return observation
It is important that an observation_space is defined with a list of dimensions
of the returned observation tensors. get() returns the observation for the agent
with the given 'handle'.
A RailEnv environment can then be created as usual:
.. code-block:: python

    env = RailEnv(width=7,
                  height=7,
                  rail_generator=random_rail_generator(),
                  number_of_agents=3,
                  obs_builder_object=CustomObs())
As for generating custom rail maps, the RailEnv class accepts a rail_generator
argument that must be a function with arguments 'width', 'height', 'num_agents',
and 'num_resets=0', and that has to return a GridTransitionMap object (the rail map),
and three lists of tuples containing the (row, column) coordinates of each of the
num_agents agents, their initial orientation (0=North, 1=East, 2=South, 3=West),
and the position of their targets.
For example, the following custom rail map generator returns an empty map of
size (height, width), with no agents (regardless of num_agents):
.. code-block:: python

    def custom_rail_generator():
        def generator(width, height, num_agents=0, num_resets=0):
            rail_trans = RailEnvTransitions()
            grid_map = GridTransitionMap(width=width, height=height, transitions=rail_trans)
            rail_array = grid_map.grid
            rail_array.fill(0)
            agents_positions = []
            agents_direction = []
            agents_target = []
            return grid_map, agents_positions, agents_direction, agents_target
        return generator
It is worth noting that helpful utilities to manage RailEnv environments and their
related data structures are available in 'envs.env_utils'. In particular,
envs.env_utils.get_rnd_agents_pos_tgt_dir_on_rail is fairly handy to fill in
random (but consistent) agents along with their targets and initial directions,
given a rail map (GridTransitionMap object) and the desired number of agents:
.. code-block:: python

    agents_position, agents_direction, agents_target = get_rnd_agents_pos_tgt_dir_on_rail(
        rail_map,
        num_agents)
The full source code of the training example in Part 2 can be found in `examples/training_example.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/training_example.py>`_.
# Level Generation Tutorial
We are currently working on several new level generators. You can expect that the levels in the submission testing will come not from just one but from several different generators, to make sure that the controllers can handle any railway-specific challenge.
Let's have a look at the `sparse_rail_generator`.
## Sparse Rail Generator
![Example_Sparse](https://i.imgur.com/DP8sIyx.png)
The idea behind the sparse rail generator is to mimic classic railway structures where dense nodes (cities) are sparsely connected to each other and where you have to manage traffic flow between the nodes efficiently.
The cities in this level generator are much simplified in comparison to real city networks but it mimics parts of the problems faced in daily operations of any railway company.
There are a few parameters you can tune to build your own map and test your agent at different levels of complexity.
**Warning**: some combinations of parameters do not go well together and will lead to infeasible level generation.
In the worst case, the level generator currently issues a warning when it cannot build the environment according to the parameters provided.
This will lead to a crash of the whole env.
We are currently working on improvements here and are **happy for any suggestions from your side**.
To build an environment you instantiate a `RailEnv` as follows:
```python
# Initialize the generator
rail_generator = sparse_rail_generator(
    num_cities=10,          # Number of cities in map
    num_intersections=10,   # Number of intersections in map
    num_trainstations=50,   # Number of possible start/targets on map
    min_node_dist=6,        # Minimal distance between nodes
    node_radius=3,          # Proximity of stations to city center
    num_neighb=3,           # Number of connections to other cities
    seed=5,                 # Random seed
    grid_mode=False         # Ordered distribution of nodes
)

# Build the environment
env = RailEnv(
    width=50,
    height=50,
    rail_generator=rail_generator,
    schedule_generator=sparse_schedule_generator(),
    number_of_agents=10,
    obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=shortest_path_predictor)
)

# Call reset on the environment
env.reset()
```
You can see that you now need both a `rail_generator` and a `schedule_generator` to generate a level. These need to work nicely together. The `rail_generator` will only generate the railway infrastructure and provide hints to the `schedule_generator` about where to place agents. The `schedule_generator` will then generate a schedule, meaning it places agents at different train stations and gives them tasks by providing individual targets.
You can tune the following parameters in the `sparse_rail_generator`:
- `num_cities` is the number of cities on a map. Cities are the only nodes that can host start and end points for agent tasks (train stations). Here you have to be careful that the number is not too high, as all the cities have to fit on the map. When `grid_mode=False` you also have to be careful when choosing `min_node_dist`, because level generation will fail if not all cities (and intersections) can be placed with at least `min_node_dist` between them.
- `num_intersections` is the number of nodes that don't hold any trainstations. They are also the first priority that a city connects to. We use these to allow for sparse connections between cities.
- `num_trainstations` defines the *total* number of trainstations in the network. This also sets the maximum number of allowed agents in the environment. This is a delicate parameter too, as there is only a limited amount of space available around nodes; if the number is too high, level generation will fail. *Important*: Only the number of agents provided to the environment will actually produce active train stations. The others will just be present as dead-ends (see figures below).
- `min_node_dist` is only used if `grid_mode=False` and represents the minimal distance between two nodes.
- `node_radius` defines the extent of a city. Each trainstation is placed at a distance to the closest city node that is less than or equal to this number.
- `num_neighb` defines the number of neighbouring nodes that connect to each other. This changes the connectivity and thus the number of alternative routes in the network.
- `grid_mode`: True -> nodes evenly distributed in the env; False -> random distribution of nodes
- `enhance_intersection`: True -> Extra rail elements added at intersections
- `seed` is used to initialize the random generator
If you run into any bugs with sets of parameters please let us know.
Here is a network with `grid_mode=False` and the parameters from above.
![sparse_random](https://i.imgur.com/Xg7nifF.png)
and here with `grid_mode=True`
![sparse_ordered](https://i.imgur.com/jyA7Pt4.png)
## Example code
To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
# Stochasticity Tutorial
Another area where we improved **Flat**land 2.0 is stochastic events added during the episodes.
This is very common for railway networks, where the initial plan usually needs to be rescheduled during operations because minor events (such as delayed departures from train stations, malfunctions of trains or infrastructure, or simply the weather) lead to delayed trains.
We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
```python
# Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,   # Rate of malfunction occurrence
    'min_duration': 3,        # Minimal duration of malfunction
    'max_duration': 10        # Max duration of malfunction
}
```
The parameters are as follows:
- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that each agent can break.
- `malfunction_rate` is the mean rate of the Poisson process, in number of environment steps.
- `min_duration` and `max_duration` set the range of malfunction durations; they are sampled uniformly.
You can introduce stochasticity by simply creating the env as follows:
```python
env = RailEnv(
    ...
    stochastic_data=stochastic_data,  # Malfunction data generator
    ...
)
```
In your controller, you can check whether an agent is malfunctioning:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})
```

Putting it all together, a full setup with malfunctions, a predictor-backed tree observation and mixed speed profiles looks as follows:

```python
# Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())
# Different agent types (trains) with different speeds.
speed_ratio_map = {1.: 0.25,  # Fast passenger train
                   1. / 2.: 0.25,  # Fast freight train
                   1. / 3.: 0.25,  # Slow commuter train
                   1. / 4.: 0.25}  # Slow freight train
env = RailEnv(width=50,
              height=50,
              rail_generator=sparse_rail_generator(num_cities=20,  # Number of cities in map (where train stations are)
                                                   num_intersections=5,  # Number of intersections (no start / target)
                                                   num_trainstations=15,  # Number of possible start/targets on map
                                                   min_node_dist=3,  # Minimal distance between nodes
                                                   node_radius=2,  # Proximity of stations to city center
                                                   num_neighb=4,  # Number of connections to other cities/intersections
                                                   seed=15,  # Random seed
                                                   grid_mode=True,
                                                   enhance_intersection=True
                                                   ),
              schedule_generator=sparse_schedule_generator(speed_ratio_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,  # Malfunction data generator
              obs_builder_object=tree_observation)
env.reset()
```
You will quickly realize that this leads to unforeseen difficulties, which means that **your controller** needs to observe the environment at all times to be able to react to the stochastic events.
## Example code
To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
# Different speed profiles Tutorial
One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network.
In **Flat**land 2.0 this feature will be enabled as well and will lead to much more complex configurations. Here we count on your support if you find bugs or improvements :).
The different speed profiles can be generated using the `schedule_generator`, where you can actually choose as many different speeds as you like.
Keep in mind that the *fastest speed* is 1, and all slower speeds must be between 1 and 0.
For the submission scoring you can assume that there will be no more than 5 speed profiles.
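For example, a sketch with two speed profiles (the fractions of agents must sum to 1):

```python
# Illustrative: half the trains at full speed, half at a third of full speed
speed_ratio_map = {1.0: 0.5,
                   1. / 3.: 0.5}
schedule_generator = sparse_schedule_generator(speed_ratio_map)
```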
Later versions of **Flat**land might have varying speeds during episodes. Therefore, we return the agent speeds.
Notice that we do not guarantee that the speed will be computed at each step, but if not costly we will return it at each step.
In your controller, you can get the agents' speed from the `info` returned by `step`:
```python
obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
    speed = info['speed'][a]
```
## Actions and observation with different speed levels
Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated.
We **do not allow actions to change within a cell**.
This means that each agent can only choose an action to be taken when entering a cell.
This action is then executed when a step to the next cell is valid. For example
- Agent enters a switch and chooses to deviate left. The agent's fractional speed is 1/4, and thus the agent will take 4 time steps to complete its journey through the cell. On the 4th time step the agent will leave the cell, deviating left as chosen on entering the cell.
- All actions chosen by the agent during its travels within a cell are ignored
- Agents can make observations at any time step. Make sure to discard observations without any information. See this [example](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py) for a simple implementation.
- The environment checks whether an agent is allowed to move to the next cell only at the time of the transition to the next cell
In your controller, you can check whether an agent requires an action by checking `info`:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['action_required'][a] and info['malfunction'][a] == 0:
        action_dict.update({a: ...})
```
Notice that `info['action_required'][a]` does not mean that the action will have an effect:
if the next cell is blocked or the agent breaks down, the action cannot be performed and an action will be required again in the next step.
## Rail Generators and Schedule Generators
The separation between rail generator and schedule generator reflects the organisational separation in the railway domain:
- Infrastructure Manager (IM): is responsible for the layout and maintenance of tracks
- Railway Undertaking (RU): operates trains on the infrastructure
Usually, there is a third organisation, which ensures discrimination-free access to the infrastructure for concurrent requests for the infrastructure in a **schedule planning phase**.
However, in the **Flat**land challenge, we focus on the re-scheduling problem during live operations.
Technically,
```python
RailGeneratorProduct = Tuple[GridTransitionMap, Optional[Any]]
RailGenerator = Callable[[int, int, int, int], RailGeneratorProduct]

AgentPosition = Tuple[int, int]
Schedule = collections.namedtuple('Schedule', 'agent_positions '
                                              'agent_directions '
                                              'agent_targets '
                                              'agent_speeds '
                                              'agent_malfunction_rates '
                                              'max_episode_steps')
ScheduleGenerator = Callable[[GridTransitionMap, int, Optional[Any], Optional[int]], Schedule]
```
We can then produce `RailGenerator`s by currying:
```python
def sparse_rail_generator(num_cities=5, num_intersections=4, num_trainstations=2, min_node_dist=20, node_radius=2,
                          num_neighb=3, grid_mode=False, enhance_intersection=False, seed=1):

    def generator(width, height, num_agents, num_resets=0):
        # generate the grid and (optionally) some hints for the schedule_generator
        ...
        return grid_map, {'agents_hints': {
            'num_agents': num_agents,
            'agent_start_targets_nodes': agent_start_targets_nodes,
            'train_stations': train_stations
        }}

    return generator
```
And, similarly, `ScheduleGenerator`s:
```python
def sparse_schedule_generator(speed_ratio_map: Mapping[float, float] = None) -> ScheduleGenerator:

    def generator(rail: GridTransitionMap, num_agents: int, hints: Any = None):
        # place agents:
        # - initial position
        # - initial direction
        # - (initial) speed
        # - malfunction
        ...
        return agents_position, agents_direction, agents_target, speeds, agents_malfunction

    return generator
```
Notice that the `rail_generator` may pass `agents_hints` to the `schedule_generator` which the latter may interpret.
For instance, the way the `sparse_rail_generator` generates the grid already determines each agent's start and target.
Hence, `rail_generator` and `schedule_generator` have to match if `schedule_generator` presupposes some specific `agents_hints`.
The environment's `reset` takes care of applying the two generators:
```python
def __init__(self,
             ...
             rail_generator: RailGenerator = random_rail_generator(),
             schedule_generator: ScheduleGenerator = random_schedule_generator(),
             ...
             ):
    self.rail_generator: RailGenerator = rail_generator
    self.schedule_generator: ScheduleGenerator = schedule_generator

def reset(self, regenerate_rail=True, regenerate_schedule=True):
    rail, optionals = self.rail_generator(self.width, self.height, self.get_num_agents(), self.num_resets)
    ...
    if regenerate_schedule:
        agents_hints = None
        if optionals and 'agents_hints' in optionals:
            agents_hints = optionals['agents_hints']
        self.agents_static = EnvAgentStatic.from_lists(
            self.schedule_generator(self.rail, self.get_num_agents(), hints=agents_hints))
```
## Example code
To see all the changes in action you can just run the `flatland_example_2_0.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/flatland_2_0_example.py).
# How to get started in Round 2
- [Environment Changes](#environment-changes)
- [Level generation](#level-generation)
- [Observations](#observations)
- [Predictions](#predictions)
## Environment Changes
There have been some major changes in how agents are being handled in the environment in this Flatland update.
### Agents
Agents are no longer permanent entities in the environment. Now agents will be removed from the environment as soon as they finish their task. To keep interactions with the environment as simple as possible, we do not modify the dimensions of the observation vectors nor the number of agents. Agents that have finished do not require any special treatment from the controller. Any action provided to these agents is simply ignored, just like before.
Start positions of agents are *not unique* anymore. This means that many agents can start from the same position on the railway grid. It is important to keep in mind that whatever agent moves first will block the rest of the agents from moving into the same cell. Thus, the controller can already decide the ordering of the agents from the first step.
## Level Generation
The levels are now generated using the `sparse_rail_generator` and the `sparse_schedule_generator`
### Rail Generation
The rail generation is done in a sequence of steps:
1. A number of city centers are placed in a grid of size `(height, width)`
2. Each city is connected to two neighbouring cities
3. Internal parallel tracks are generated in each city
### Schedule Generation
The `sparse_schedule_generator` produces tasks for the agents by selecting a starting city and a target city. The agent is then placed on an even track number in the starting city and oriented such that a path to the target city exists. The task for the agent is to reach the target position as fast as possible.
In the future we will update how these schedules are generated to allow for more complex tasks.
## Observations
Observations have been updated to reflect the novel features and behaviors of Flatland. Have a look at [observation](https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/envs/observations.py) or the documentation for more details on the observations.
## Predictions