flatland.core.grid package
==========================
Submodules
----------
flatland.core.grid.grid4 module
-------------------------------

.. automodule:: flatland.core.grid.grid4
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.grid.grid4\_astar module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_astar
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.grid.grid4\_utils module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_utils
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.grid.grid8 module
-------------------------------

.. automodule:: flatland.core.grid.grid8
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.grid.grid\_utils module
-------------------------------------

.. automodule:: flatland.core.grid.grid_utils
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.grid.rail\_env\_grid module
-----------------------------------------

.. automodule:: flatland.core.grid.rail_env_grid
    :members:
    :undoc-members:
    :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core.grid
    :members:
    :undoc-members:
    :show-inheritance:
flatland.core package
=====================
Submodules
----------
flatland.core.env module
------------------------

.. automodule:: flatland.core.env
    :members:
    :undoc-members:
    :show-inheritance:

flatland.core.transitions module
--------------------------------

.. automodule:: flatland.core.transitions
    :members:
    :undoc-members:
    :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core
    :members:
    :undoc-members:
    :show-inheritance:
flatland.envs package
=====================
Submodules
----------
flatland.envs.rail\_env module
------------------------------

.. automodule:: flatland.envs.rail_env
    :members:
    :undoc-members:
    :show-inheritance:

Module contents
---------------

.. automodule:: flatland.envs
    :members:
    :undoc-members:
    :show-inheritance:
flatland package
================
Subpackages
-----------
.. toctree::

   flatland.core
   flatland.envs
   flatland.utils

Submodules
----------

flatland.cli module
-------------------

.. automodule:: flatland.cli
    :members:
    :undoc-members:
    :show-inheritance:

Module contents
---------------

.. automodule:: flatland
    :members:
    :undoc-members:
    :show-inheritance:
flatland.utils package
======================
Module contents
---------------
.. automodule:: flatland.utils
    :members:
    :undoc-members:
    :show-inheritance:
Welcome to flatland's documentation!
======================================
.. include:: ../README.rst
.. toctree::
   :maxdepth: 2
   :caption: Contents:

   readme
   installation
   about_flatland
   gettingstarted
   modules
   FAQ
   contributing
   authors
   01_readme
   03_tutorials_toc
   04_specifications_toc
   05_apidoc
   06_contributing
   07_changes
   08_authors
   09_faq_toc
   10_interface
Indices and tables
==================
.. highlight:: shell
============
Installation
============
Software Runtime & Dependencies
-------------------------------
This is the recommended way of installing and running flatland's dependencies.
* Install `Anaconda <https://www.anaconda.com/distribution/>`_ by following its installation instructions
* Create a new conda environment
.. code-block:: console

    $ conda create python=3.6 --name flatland-rl
    $ conda activate flatland-rl

* Install the necessary dependencies

.. code-block:: console

    $ conda install -c conda-forge cairosvg pycairo
    $ conda install -c anaconda tk
Stable release
--------------
To install flatland, run this command in your terminal:
.. code-block:: console

    $ pip install flatland-rl
This is the preferred method to install flatland, as it will always install the most recent stable release.
If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.
.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
From sources
------------
The sources for flatland can be downloaded from the `Gitlab repo`_.
You can clone the public repository:
.. code-block:: console

    $ git clone git@gitlab.aicrowd.com:flatland/flatland.git

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install
.. _Gitlab repo: https://gitlab.aicrowd.com/flatland/flatland
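To quickly check that the installation worked, you can try importing the core environment class from a Python shell (a minimal sanity check; it assumes the environment in which you installed flatland is active):

.. code-block:: python

    import flatland
    from flatland.envs.rail_env import RailEnv  # core environment class documented in the API reference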
Jupyter Canvas Widget
---------------------
If you work with Jupyter notebooks you need to install the Jupyter Canvas Widget. For installation instructions, see
https://github.com/Who8MyLunch/Jupyter_Canvas_Widget#installation
# PettingZoo
> PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for flatland.
## Background
PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for multi-agent reinforcement learning. The advantages listed below make it suitable for use with flatland:
- Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from Super Suit.
- Clean API (https://www.pettingzoo.ml/api) with additional facilities/APIs for parallel execution, saving observations, recording via the gym monitor, and processing/normalising observations
- Scikit-learn inspired API,
e.g.
```python
act = model.predict(obs, deterministic=True)[0]
```
- Parallel learning with stable baselines 3 using just two lines of code (see the end-to-end sketch after this list)
```python
env = ss.pettingzoo_env_to_vec_env_v0(env)
env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class='stable_baselines3')
```
- Tested with various multi-agent environments whose agent counts are comparable to flatland, e.g. https://www.pettingzoo.ml/magent
- The clean interface means we can plug in an experiment-tracking tool such as wandb and retain full flexibility over which information we save
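As an end-to-end illustration of the points above, a minimal training sketch with SuperSuit and stable baselines 3 might look as follows. The construction of the flatland PettingZoo environment itself is not shown here (`make_flatland_pettingzoo_env()` is a hypothetical placeholder), and `PPO` with `"MlpPolicy"` is just one possible algorithm/policy choice:

```python
import supersuit as ss
from stable_baselines3 import PPO

# make_flatland_pettingzoo_env() is a hypothetical placeholder for constructing
# the flatland PettingZoo (parallel) environment
env = make_flatland_pettingzoo_env()

# the two SuperSuit lines quoted above: vectorize and batch the multi-agent env
env = ss.pettingzoo_env_to_vec_env_v0(env)
env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class='stable_baselines3')

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# scikit-learn style prediction, as in the snippet above
obs = env.reset()
act = model.predict(obs, deterministic=True)[0]
```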
PettingZoo
==========
..

   PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for flatland.
Background
----------
PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for multi-agent reinforcement learning. The advantages listed below make it suitable for use with flatland:

* Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from SuperSuit.
* Clean API (https://www.pettingzoo.ml/api) with additional facilities/APIs for parallel execution, saving observations, recording via the gym monitor, and processing/normalising observations
* Scikit-learn inspired API,
e.g.
.. code-block:: python

   act = model.predict(obs, deterministic=True)[0]
* Parallel learning with stable baselines 3 using just two lines of code
.. code-block:: python

   env = ss.pettingzoo_env_to_vec_env_v0(env)
   env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class='stable_baselines3')
* Tested with various multi-agent environments whose agent counts are comparable to flatland, e.g. https://www.pettingzoo.ml/magent
* The clean interface means we can plug in an experiment-tracking tool such as wandb and retain full flexibility over which information we save
# Environment Wrappers
> We provide various environment wrappers to work with both the rail env and the PettingZoo interface.
## Background
These wrappers change certain environment behaviours, which can help to achieve better reinforcement learning training.
## Supported Inbuilt Wrappers
We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require several env properties that are only created on environment reset. Hence, before using a wrapper, we must reset the rail env and then pass the reset env to the wrapper. Code samples are shown below for each wrapper.
### ShortestPathAction Wrapper
To use the ShortestPathAction wrapper, simply wrap the rail env as follows:
```python
rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)
```
The shortest path action wrapper maps the existing action space onto 3 actions - Shortest Path (`0`), Next Shortest Path (`1`) and Stop (`2`). Hence, we must ensure that the predicted action is always one of these three actions (0, 1 or 2). To route all agents along their shortest paths, pass `0` as the action.
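For completeness, a minimal sketch of stepping the wrapped environment (assuming `rail_env` is an already constructed `RailEnv`; the handles and return values follow the env interface described elsewhere in this documentation):

```python
rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)

# action 0 routes every agent along its shortest path
actions = {handle: 0 for handle in rail_env.get_agent_handles()}
obs, rewards, dones, info = rail_env.step(actions)
```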
### SkipNoChoice Wrapper
To use the SkipNoChoiceCellsWrapper, simply wrap the rail env as follows:
```python
rail_env.reset(random_seed=1)
rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
```
Environment Wrappers
====================
..

   We provide various environment wrappers to work with both the rail env and the PettingZoo interface.
Background
----------
These wrappers change certain environment behaviours, which can help to achieve better reinforcement learning training.
Supported Inbuilt Wrappers
--------------------------
We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require several env properties that are only created on environment reset. Hence, before using a wrapper, we must reset the rail env and then pass the reset env to the wrapper. Code samples are shown below for each wrapper.
ShortestPathAction Wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^
To use the ShortestPathAction wrapper, simply wrap the rail env as follows:
.. code-block:: python

   rail_env.reset(random_seed=1)
   rail_env = ShortestPathActionWrapper(rail_env)
The shortest path action wrapper maps the existing action space onto 3 actions - Shortest Path (\ ``0``\ ), Next Shortest Path (\ ``1``\ ) and Stop (\ ``2``\ ). Hence, we must ensure that the predicted action is always one of these three actions (0, 1 or 2). To route all agents along their shortest paths, pass ``0`` as the action.
SkipNoChoice Wrapper
^^^^^^^^^^^^^^^^^^^^
To use the SkipNoChoiceCellsWrapper, simply wrap the rail env as follows:
.. code-block:: python

   rail_env.reset(random_seed=1)
   rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=python -msphinx
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=flatland
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The Sphinx module was not found. Make sure you have Sphinx installed,
echo.then set the SPHINXBUILD environment variable to point to the full
echo.path of the 'sphinx-build' executable. Alternatively you may add the
echo.Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd
flatland
========
.. toctree::
   :maxdepth: 4

   flatland
## Core Specifications
### Environment Class Overview
The Environment class contains all necessary functions for the interactions between the agents and the environment. The base Environment class is derived from rllib.env.MultiAgentEnv (https://github.com/ray-project/ray).
The functions are specific to each realization of Flatland (e.g. Railway, Vaccination, ...).
In particular, we retain the rllib interface in the use of the step() function, which accepts a dictionary of actions indexed by the agents' handles (returned by get_agent_handles()) and returns dictionaries of observations, rewards, dones and infos.
```python
class Environment:
    """Base interface for multi-agent environments in Flatland.

    Agents are identified by agent ids (handles).

    Examples:
        >>> obs, info = env.reset()
        >>> print(obs)
        {
            "train_0": [2.4, 1.6],
            "train_1": [3.4, -3.2],
        }
        >>> obs, rewards, dones, infos = env.step(
        ...     action_dict={
        ...         "train_0": 1, "train_1": 0})
        >>> print(rewards)
        {
            "train_0": 3,
            "train_1": -1,
        }
        >>> print(dones)
        {
            "train_0": False,  # train_0 is still running
            "train_1": True,   # train_1 is done
            "__all__": False,  # the env is not done
        }
        >>> print(infos)
        {
            "train_0": {},  # info for train_0
            "train_1": {},  # info for train_1
        }
    """

    def __init__(self):
        pass

    def reset(self):
        """
        Resets the env and returns observations from agents in the environment.

        Returns:
            obs : dict
                New observations for each agent.
        """
        raise NotImplementedError()

    def step(self, action_dict):
        """
        Performs an environment step with simultaneous execution of actions for
        agents in action_dict.
        Returns observations from agents in the environment.
        The returns are dicts mapping from agent_id strings to values.

        Parameters
        ----------
        action_dict : dict
            Dictionary of actions to execute, indexed by agent id.

        Returns
        -------
        obs : dict
            New observations for each ready agent.
        rewards : dict
            Reward values for each ready agent.
        dones : dict
            Done values for each ready agent. The special key "__all__"
            (required) is used to indicate env termination.
        infos : dict
            Optional info values for each agent id.
        """
        raise NotImplementedError()

    def render(self):
        """
        Perform rendering of the environment.
        """
        raise NotImplementedError()

    def get_agent_handles(self):
        """
        Returns a list of agents' handles to be used as keys in the step()
        function.
        """
        raise NotImplementedError()
```
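A minimal interaction loop against this interface, following the Examples section of the docstring, could look as follows (a sketch only; the per-agent action `0` is a placeholder and `env` is any concrete implementation of the class above):

```python
obs, info = env.reset()
dones = {"__all__": False}
while not dones["__all__"]:
    # pick one action per agent handle (0 is just a placeholder action here)
    action_dict = {handle: 0 for handle in env.get_agent_handles()}
    obs, rewards, dones, infos = env.step(action_dict)
```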
## Intro
In a human-readable language, specifications provide
- code base overview (hand-drawn concept)
- key concepts (generators, envs) and how they are linked
- links to the relevant code base
![Overview](img/UML_flatland.png)
[Diagram Source](https://confluence.sbb.ch/x/pQfsSw)
=============================
Observation and Action Spaces
=============================
This is an introduction to the three standard observations and the action space of **Flatland**.
Action Space
^^^^^^^^^^^^
Flatland is a railway simulation. Thus the actions of an agent are strongly limited to the railway network. This means that in many cases not all actions are valid.
The possible actions of an agent are
- ``4`` **Stop**: This action causes the agent to stop.
Observation Spaces
^^^^^^^^^^^^^^^^^^
In the **Flatland** environment we have included three basic observations to get started. The figure below illustrates the observation range of the different basic observations: ``Global``, ``Local Grid`` and ``Local Tree``.
.. image:: https://i.imgur.com/oo8EIYv.png
Global Observation
~~~~~~~~~~~~~~~~~~
Gives a global observation of the entire rail environment.
The observation is composed of the following elements:
It would also be possible to construct a global observation for a super agent that controls all agents at once.
Local Grid Observation
~~~~~~~~~~~~~~~~~~~~~~
Gives a local observation of the rail environment around the agent.
The observation is composed of the following elements:
We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps.
Tree Observation
~~~~~~~~~~~~~~~~
The tree observation is built by exploiting the graph structure of the railway network. The observation is generated by spanning a **4 branched tree** from the current position of the agent. Each branch follows the allowed transitions (backward branch only allowed at dead-ends) until a cell with multiple allowed transitions is reached. Here the information gathered along the branch is stored as a node in the tree.
The figure below illustrates how the tree observation is built:
Node Information
~~~~~~~~~~~~~~~~
Each node is filled with information gathered along the path to the node. Currently each node contains 9 features:
- 1: if own target lies on the explored branch the current distance from the agent in number of cells is stored.
## Railway Specifications
### Overview
Flatland is usually a two-dimensional environment intended for multi-agent problems, in particular it should serve as a benchmark for many multi-agent reinforcement learning approaches.
The environment can host a broad array of diverse problems reaching from disease spreading to train traffic management.
This documentation illustrates the dynamics and possibilities of Flatland environment and introduces the details of the train traffic management implementation.
### Environment
Before describing Flatland itself, let us first define the terms used in this specification. Flatland is a grid-like n-dimensional space of arbitrary size. A cell is the elementary element of the grid; it is defined as a location where objects can be placed. The term agent denotes an entity that can move within the grid and must solve tasks. An agent can move in any arbitrary direction along well-defined transitions from cell to cell. The cell an agent occupies must have enough capacity to hold it; every agent reserves exactly one unit of capacity (one resource). The capacity of a cell is usually one, so usually only one agent can occupy a given cell at the same time. An agent's movement possibilities can be restricted by limiting the allowed transitions.
Flatland is a discrete time simulation, i.e. it performs all actions with a constant time step. In Flatland each simulation step moves the time forward by an equal duration. At each step every agent can choose an action; for the chosen action the attached transition is executed. While executing a transition, Flatland checks whether the requested transition is valid. If the transition is valid, it updates the agent's position; if the transition call is not allowed, the agent does not move.
In general each cell has only one cell type attached to it. With the help of the cell type, the allowed transitions can be defined for all agents.
Flatland supports many different types of agents, and the cell type can be further specialised per agent type. Consequently, the allowed transitions for an agent at a given cell are defined by the cell type together with the agent's type.
For each agent type Flatland can have a different action space.
#### Grid
A rectangular grid of integer shape (dim_x, dim_y) defines the spatial dimensions of the environment.
Within this documentation we use North, East, West, South as orientation indicators, where North is up, South is down, West is left and East is right.
![single_cell](https://drive.google.com/uc?export=view&id=1O6jl2Ha14TV3Wuob5EbaowdYZiFt3aDW)
Cells are enumerated starting from NW; the East-West axis is the second coordinate and the North-South axis is the first coordinate, as commonly used in matrix notation.
Two cells $`i`$ and $`j`$ ($`i \neq j`$) are considered neighbors when the Euclidean distance between them satisfies $`|\vec{x_i}-\vec{x_j}| \leq \sqrt{2}`$. This means that the grid does not wrap around as if on a torus. (Two cells are considered neighbors when they share one edge or one node.)
![cell_table](https://drive.google.com/uc?export=view&id=109cD1uihDvTWnQ7PPTxC9AiNphlsY92r)
For each cell the allowed transitions to all neighboring 4 cells are defined. This can be extended to include transition probabilities as well.
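To make the transition idea concrete, here is an illustrative sketch (not the exact Flatland implementation) of encoding one cell's allowed transitions as a 4x4 boolean table (incoming orientation x outgoing direction, NESW order), which packs into 16 bits:

```python
import numpy as np

NORTH, EAST, SOUTH, WEST = 0, 1, 2, 3

# straight north-south track: an agent facing North keeps going North,
# an agent facing South keeps going South
transitions = np.zeros((4, 4), dtype=bool)   # [incoming orientation, outgoing direction]
transitions[NORTH, NORTH] = True
transitions[SOUTH, SOUTH] = True

# packed row-major into a single 16-bit integer
packed = int("".join(str(int(b)) for b in transitions.flatten()), 2)
```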
#### Tile Types
###### Railway Grid
Each cell within the simulation grid has a distinct tile type which in turn limits the movement possibilities of an agent through the cell. For the railway-specific problem, 8 basic tile types can be defined which describe a rail network. As a general fact, in a railway network, whenever a navigation choice must be taken, at most two options are available.
The following image gives an overview of the eight basic types. These can be rotated in steps of 45° and mirrored along the North-South or East-West axis. Please refer to Appendix A for a complete list of tiles.
![cell_types](https://drive.google.com/uc?export=view&id=164iowmfRQ9O34hquxLhO2xxt49NE473P)
As a general consistency rule, it can be said that each connection out of a tile must be joined by a connection of a neighboring tile.
![consistency_rule](https://drive.google.com/uc?export=view&id=1iaMIokHZ9BscMJ_Vi9t8QX_-8DzOjBKE)
In the left picture of the image above there is an inconsistency at the eastern end of cell (3,2), since there is no valid neighbor for cell (3,2). In the right picture, cell (3,2) contains a dead-end, which leaves no unconnected transitions.
Case 0 represents a wall, thus no agent can occupy the tile at any time.
Case 1 represents a passage through the tile. While on the tile the agent can make no navigation decision; it can only decide to continue (i.e. pass on to the next connected tile), wait, or move backwards (moving to the tile visited before).
Case 2 represents a simple switch: when coming from the top position (south in the example), a navigation choice (West or North) must be taken. Generally the straight transition (S->N in the example) is less costly than the bent transition, therefore in Case 2 the two choices may be rewarded differently. Case 6 is identical to Case 2 from a topological point of view; however, there is no preferred choice when coming from South.
Case 3 can be seen as a superposition of Case 1. As with any other tile, at most one agent can occupy the cell at a given time.
Case 4 represents a single-slip switch. In the example a navigation choice is possible when coming from West or South.
In Case 5 a navigation choice must be taken when coming from any direction.
Case 7 represents a dead-end, thus only stopping or backwards motion is possible when an agent occupies this cell.
###### Tile Types of Wall-Based Cell Games (Theseus and Minotaur's puzzle, Labyrinth Game)
The Flatland approach can also be used to describe a variety of cell-based logic games. While not going into detail, it is worthwhile noting that such games are usually visualized using a cell grid with walls describing forbidden transitions (a negative formulation).
![minotaurus](https://drive.google.com/uc?export=view&id=1WbU6YGopLKqAjVD6-r9UhCIzDfLisb5U)
Left: Wall-based Grid definition (negative definition), Right: lane-based Grid definition (positive definition)
## Train Traffic Management
#### Problem Definition
Additionally, due to the dynamics of train traffic, each transition probability is symmetric in this environment. This means that neighboring cells will always have the same transition probability to each other.
Furthermore, each cell is exclusive and can only be occupied by one agent at any given time.
### Observations
In this early stage of the project it is very difficult to come up with the necessary observation space in order to solve all train related problems. Given our early experiments we therefore propose different observation methods and hope to investigate further options with the crowdsourcing challenge. Below we compare global observation with local observations and discuss the differences in performance and flexibility.
#### Global Observation
Global observations, specifically on a grid like environment, benefit from the vast research results on learning from pixels and the advancements in convolutional neural network algorithms. The observation can simply be generated from the environment state and not much additional computation is necessary to generate the state.
It is reasonable to assume that an observation of the full environment is beneficial for good global solutions. Early experiments also showed promising results on small toy examples.
However, we run into problems when scalability and flexibility become an important factor. Already on small toy examples we could show that flexibility quickly becomes an issue when the problem instances differ too much. When scaling the problem instances the decision performance of the algorithm diminishes and re-training becomes necessary.
Given the complexity of real-world railway networks (especially in Switzerland), we do not believe that a global observation is suited for this problem.
#### Local Observation
Given that scalability and speed are the main requirements for our use cases local observations offer an interesting novel approach. Local observations require some additional computations to be extracted from the environment state but could in theory be performed in parallel for each agent.
With early experiments (presentation GTC, details below) we could show that even with local observations multiple agents can find feasible, global solutions and most importantly scale seamlessly to larger problem instances.
Below we highlight two different forms of local observations and elaborate on their benefits.
##### Local Field of View
This form of observation is very similar to the global view approach, in that it consists of a grid like input. In this setup each agent has its own observation that depends on its current location in the environment.
Given an agent's location, the observation is simply an $`n \times m`$ grid around the agent. The observation grid does not need to be symmetric or square, nor does it need to be centered on the agent.
**Benefits** of this approach again come from the vast research findings using convolutional neural networks and the comparably small computational effort to generate each observation.
**Drawbacks** mostly come from the specific details of train traffic dynamics, most notably the limited degrees of freedom. Considering the actions and directions an agent can choose in any given cell, it becomes clear that a grid-like observation around an agent will not contain much useful information, as most of the observed cells are neither reachable nor play a significant role in the agent's decisions.
![local_grid](https://drive.google.com/uc?export=view&id=1kZzinMOs7hlPaSJJeIiaQ7lAz2erXuHx)
##### Tree Search
From our past experiences and the nature of railway networks (they are a graph) it seems most suitable to use a local tree search as an observation for the agents.
A tree search on a grid will of course be computationally very expensive compared to a simple rectangular observation. Luckily, the limited allowed transitions in the railway implementation vastly reduce the complexity of the tree search. The figure below illustrates the observed tiles when using a local tree search. The information contained in such an observation is much higher than in the proposed grid observation above.
**Benefit** of this approach is the incorporation of allowed transitions into the observation generation and thus an improvement of information density in the observation. From our experience this is currently the most suited observation space for the problem.
**Drawback** is mostly the computational cost of generating the observation tree for each agent. Depending on how we model the tree search we will be able to perform all searches in parallel. Because the agents are not able to see the global system, the environment needs to provide some information about the global environment locally to the agent, e.g. the position of its destination.
**Unclear** is whether or not we should rotate the tree search according to the agent such that decisions are always made according to the direction of travel of the agent.
![local_tree](https://drive.google.com/uc?export=view&id=1biob77VFskCsa3HwNsDH-gks9k965JEb)
_Figure 3: A local tree search moves along the allowed transitions, originating from the agent's position. This observation contains much more relevant information but has a higher computational cost. This figure illustrates an agent that can move east from its current position. The thick lines indicate the allowed transitions to a depth of eight._
We have gained some insights into using and aggregating the information along the tree search. This should be part of the early investigation while implementing Flatland. One possibility would also be to leave this up to the participants of the Flatland challenge.
#### Communication
Given the complexity and the high interdependence of the multi-agent system, a form of communication might be necessary. This needs to be investigated under the following constraints:
* Communication must converge in a feasible time
* Communication…
Depending on the game configuration, every agent can be informed about the positions of the other agents present in its observation range. For a local observation space the agent knows the distance to the next agent (defined per agent type) in each direction. If no agent is present, the distance can simply be -1 or null.
#### Action Negotiation
In order to avoid illicit situations (for example agents crashing into each other), the intended action of each agent in the observation range is made known. Depending on the known movement intentions, new movement intentions must be generated by the agents. This is called a negotiation round. After a fixed number of negotiation rounds the last intended action is executed for each agent. An illicit situation results in ending the game with a fixed low reward.
### Actions
#### Navigation
The agent can be located at any cell except on Case 0 cells. The agent can move along the rails to another unoccupied cell or it can just wait where it is currently located.
Flatland is a discrete time simulation: it performs all actions in discrete time with a constant time step. In Flatland the simulation step is fixed and time moves forward in equal durations. At each step every agent can choose an action; for the chosen action the attached transition is executed. While executing a transition, Flatland checks whether the requested transition is valid. If the transition is valid it updates the agent's position; if the transition call is not allowed, the agent does not move.
If the agent calls an action whose attached transition is not allowed at the current cell, the agent does not move. Waiting at the current cell is always a valid action; the waiting action is simply the action whose attached transition maps the current cell onto itself.
An agent can move with a definable maximum speed. The default and absolute maximum speed is one spatial unit per time step. If an agent is defined to move more slowly, it can take a navigation action only every N steps, with N being an integer, and for the transition to be made the same action must be taken N times consecutively (see the sketch below). An agent can also have a maximum speed of 0, in which case it can never take a navigation step. This would be the case where an agent represents a good to be transported, which can never move on its own.
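A small illustration of this rule (the names `env`, `agent_handle` and the action value are illustrative only, not the actual Flatland API):

```python
speed = 1 / 3                       # the agent needs 3 time steps to traverse one cell
steps_per_cell = round(1 / speed)   # N = 3

chosen_action = 2                   # e.g. "move forward" in a 0..4 action space
for _ in range(steps_per_cell):
    # the same navigation action is (re-)submitted until the cell transition completes;
    # env and agent_handle are assumed to exist for this sketch
    obs, rewards, dones, info = env.step({agent_handle: chosen_action})
```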
An agent can be defined to be picked up/dropped off by another agent, or to pick up/drop off another agent. When agent A is picked up by another agent B, we say that A is linked to B. The linked agent loses all its navigation possibilities; in exchange it inherits the position of the linking agent for the time it is linked. Linking and unlinking between two agents is only possible if the participating agents have the same space-time coordinates at the moment of the linking or unlinking action.
#### Transportation
In railways, the transportation of goods or passengers is essential. Consequently agents can transport goods or passengers, depending on the agent's type. If the agent is a freight train, it transports goods; if it is a passenger train, it transports passengers only. The transportation capacity of both kinds of trains is limited: passenger trains have a maximum number of seats and freight trains have a maximum tonnage.
Passengers can take or switch trains only at stations. Passengers are agents with traveling needs. A typical passenger wants to move from a starting location to a destination and might do so by using trains or by walking. Consequently a future Flatland must also support passenger movement (walking) on the grid and not only by train. The goal of a passenger is to reach its destination in an optimal manner; the quality of the journey is measured by the reward function.
Goods are transported over the railway network only. Goods are agents with transportation needs. They can start their transportation chain at any station, and each good has a station attached as its destination. The destination is the end of the transportation chain, i.e. the transportation goal; once a good reaches its destination it disappears, meaning it leaves Flatland. Goods cannot move independently on the grid; they can only move by using trains, and they can switch trains at any station. The goal of the system is to find the right trains for the goods so as to form a feasible transportation chain. The quality of the transportation chain is measured by the reward function.
### Environment Rules
* Depending on the cell type, a cell must have a given number of neighbouring cells of a given type.
* There must not exist a state in which the occupation capacity of a cell is violated.
* An agent can move by at most one cell per time step.
* Agents related to each other through transport (one carries another) must be at the same place at the same time.
### Environment Configuration
The environment should allow for a broad class of problem instances. Thus the configuration file for each problem instance should contain:
* Cell types allowed
* Agent types allowed
* Objects allowed
* Level generator to use
* Episodic or non-episodic task
* Duration
* Reward function
* Observation types allowed
* Actions allowed
* Dimensions of Environment?
For the train traffic the configurations should be as follows:
Cell types: Case 0 - 7
Agent Types allowed: Active Agents with Speed 1 and no goals, Passive agents with goals
Object allowed: None
Level Generator to use: ?
Reward function: as described below
Observation Type: Local, Targets known
It should be checked prior to solving the problem that the goal location of each agent can be reached.
### Reward Function
#### Railway-specific Use-Cases
A first idea for a cost function with generic applicability is as follows. For each agent and each goal, sum up
* the time step at which the goal was reached, when no target time is given in the goal;
* the absolute value of the difference between the target time and the arrival time of the agent.
An additional refinement, which has proven meaningful for situations where no target time is given, is to weight the longest arrival time higher than the sum of all arrival times. A sketch of this cost follows below.
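A compact sketch of this generic cost (function and variable names are illustrative, not part of the Flatland code base):

```python
def episode_cost(arrival_times, target_times):
    """Sum of per-agent costs as described above."""
    cost = 0
    for agent_id, arrival in arrival_times.items():
        target = target_times.get(agent_id)
        if target is None:
            cost += arrival                  # time step at which the goal was reached
        else:
            cost += abs(target - arrival)    # deviation from the target time
    return cost
```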
#### Further Examples (Games)
### Initialization
Given that we want a generalizable agent to solve the problem, training must be performed on a diverse training set. We therefore need a level generator which can create novel tasks to be solved in a reliable and fast fashion.
#### Level Generator
Each problem instance can have its own level generator.
The inputs to the level generator should be:
* Spatial and temporal dimensions of environment
* Reward type
* Overall task
* Collaboration or competition
* Number of agents
* Further level parameters
* Environment complexity
* Stochasticity and error
* Random or pre-designed environment
The output of the level generator should be:
* Feasible environment
* Observation setup for the required number of agents
* Initial rewards, positions and observations
### Railway Use Cases
In this section we define a few simple tasks related to railway traffic that we believe would be well suited for a crowdsourcing challenge. The tasks are ordered according to their complexity. The Flatland repo must at least support all these types of use cases.
#### Simple Navigation
In order to onboard the broad reinforcement learning community this task is intended as an introduction to the Railway@Flatland environment.
##### Task
A single agent is placed at an arbitrary (permitted) cell and is given a target cell (reachable by the rules of Flatland). The task is to arrive at the target destination in as few time steps as possible.
##### Actions
In this task an agent can perform transitions (max. 3 possibilities) or stop. Therefore, the agent can choose an action in the range $`a \in [0,4]`$.
##### Reward
The reward is -1 for each time step and 10 if the agent stops at the destination. We might add -1 for invalid moves to speed up exploration and learning.
##### Observation
If we choose a local observation scheme, we need to provide the agent with some information about the distance to its target. This could be achieved by a distance map, by using waypoints or by providing a broad sense of direction to the agent.
#### Multi Agent Navigation and Dispatching
This task is intended as a natural extension of the navigation task.
##### Task
A number of agents ($`n`$ agents) are placed at arbitrary (permitted) cells and given individual target cells (reachable by the rules of Flatland). The task is to arrive at the target destinations in as few time steps as possible as a group. This means that the goal is to minimize the longest path of *ALL* agents.
##### Actions
In this task an agent can perform transitions (max. 3 possibilities) or stop. Therefore, the agent can choose an action in the range $`a \in [0,4]`$.
##### Reward
The reward is -1 for each time step and 10 if all the agents stop at the destination. We can further punish collisions between agents and illegal moves to speed up learning.
##### Observation
If we choose a local observation scheme, we need to provide the agent with some information about the distance to its target. This could be achieved either by a distance map or by using waypoints.
The agents must see each other in their tree searches.
##### Previous learnings
Training an agent by itself first to understand the main task turned out to be beneficial.
It might be necessary to add the "intended" paths of each agent to the observation in order to get intelligent multi agent behavior.
A communication layer might be necessary to improve agent performance.
#### Multi Agent Navigation and Dispatching with Schedule
#### Transport Chains (Transportation of goods and passengers)
### Benefits of Transition Model
Using a grid world with 8 transition possibilities to the neighboring cells constitutes a very flexible environment, which can model many different types of problems.
Considering the recent advancements in machine learning, this approach also allows making use of convolutions in order to process the observation states of agents. For the specific case of railway simulation the grid world unfortunately also brings a few drawbacks.
Most notably, the railway network only offers action possibilities at elements where there are more than two transition possibilities. Thus, using a less dense graph than a grid, the railway network could be represented by a simpler graph. However, we believe that moving from a grid-like example where many transitions are allowed towards a railway network with fewer transitions is the simplest approach for the broad reinforcement learning community.
## Rail Generators and Schedule Generators
The separation between rail generator and schedule generator reflects the organisational separation in the railway domain:
- Infrastructure Manager (IM): is responsible for the layout and maintenance of tracks
- Railway Undertaking (RU): operates trains on the infrastructure
Usually, there is a third organisation, which ensures discrimination-free access to the infrastructure for concurrent requests for the infrastructure in a **schedule planning phase**.
However, in the **Flat**land challenge, we focus on the re-scheduling problem during live operations.
Technically,
```python
RailGeneratorProduct = Tuple[GridTransitionMap, Optional[Any]]
RailGenerator = Callable[[int, int, int, int], RailGeneratorProduct]
AgentPosition = Tuple[int, int]
Schedule = collections.namedtuple('Schedule', 'agent_positions '
                                              'agent_directions '
                                              'agent_targets '
                                              'agent_speeds '
                                              'agent_malfunction_rates '
                                              'max_episode_steps')
ScheduleGenerator = Callable[[GridTransitionMap, int, Optional[Any], Optional[int]], Schedule]
```
We can then produce `RailGenerator`s by currying:
```python
def sparse_rail_generator(num_cities=5, num_intersections=4, num_trainstations=2, min_node_dist=20, node_radius=2,
                          num_neighb=3, grid_mode=False, enhance_intersection=False, seed=1):

    def generator(width, height, num_agents, num_resets=0):
        # generate the grid and (optionally) some hints for the schedule_generator
        ...
        return grid_map, {'agents_hints': {
            'num_agents': num_agents,
            'agent_start_targets_nodes': agent_start_targets_nodes,
            'train_stations': train_stations
        }}

    return generator
```
And, similarly, `ScheduleGenerator`s:
```python
def sparse_schedule_generator(speed_ratio_map: Mapping[float, float] = None) -> ScheduleGenerator:
    def generator(rail: GridTransitionMap, num_agents: int, hints: Any = None):
        # place agents:
        # - initial position
        # - initial direction
        # - (initial) speed
        # - malfunction
        ...
        return agents_position, agents_direction, agents_target, speeds, agents_malfunction

    return generator
```
Notice that the `rail_generator` may pass `agents_hints` to the `schedule_generator` which the latter may interpret.
For instance, the way the `sparse_rail_generator` generates the grid already determines the agent's goal and target.
Hence, `rail_generator` and `schedule_generator` have to match if `schedule_generator` presupposes some specific `agents_hints`.
The environment's `reset` takes care of applying the two generators:
```python
def __init__(self,
             ...
             rail_generator: RailGenerator = random_rail_generator(),
             schedule_generator: ScheduleGenerator = random_schedule_generator(),
             ...
             ):
    self.rail_generator: RailGenerator = rail_generator
    self.schedule_generator: ScheduleGenerator = schedule_generator

def reset(self, regenerate_rail=True, regenerate_schedule=True):
    rail, optionals = self.rail_generator(self.width, self.height, self.get_num_agents(), self.num_resets)

    ...

    if replace_agents:
        agents_hints = None
        if optionals and 'agents_hints' in optionals:
            agents_hints = optionals['agents_hints']
        self.agents_static = EnvAgentStatic.from_lists(
            self.schedule_generator(self.rail, self.get_num_agents(), hints=agents_hints))
```
### RailEnv Speeds
One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network.
The different speed profiles can be generated using the `schedule_generator`, where you can actually choose as many different speeds as you like.
Keep in mind that the *fastest speed* is 1 and all slower speeds must be between 1 and 0.
For the submission scoring you can assume that there will be no more than 5 speed profiles.
Currently (as of **Flat**land 2.0), an agent keeps its speed over the whole episode.
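For instance, a speed profile map as used by the `sparse_schedule_generator` shown later in this document (keys are fractional speeds, values are the share of agents driving at that speed) could look like this:

```python
speed_ratio_map = {1.0: 0.25,        # fast passenger train
                   1.0 / 2.0: 0.25,  # fast freight train
                   1.0 / 3.0: 0.25,  # slow commuter train
                   1.0 / 4.0: 0.25}  # slow freight train
schedule_generator = sparse_schedule_generator(speed_ratio_map)
```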
Because the different speeds are implemented as fractions, the agents' ability to perform actions has been updated.
We **do not allow actions to change within a cell**.
This means that each agent can only choose an action to be taken when entering a cell (i.e. when its positional fraction is 0).
There are real railway-specific considerations, such as reserved blocks, that are similar to this behavior.
But more importantly, we disabled this to simplify the use of machine learning algorithms with the environment.
If we allowed stop actions in the middle of cells, the controller would need to make many more observations, and not only at cell changes.
(Not set in stone and could be updated if the need arises).
The chosen action is then executed when a step to the next cell is valid. For example
- An agent enters a switch and chooses to deviate left. The agent's fractional speed is 1/4, thus the agent will take 4 time steps to complete its journey through the cell. On the 4th time step the agent will leave the cell deviating left, as chosen at the entry of the cell.
- All actions chosen by the agent during its travels within a cell are ignored
- Agents can make observations at any time step. Make sure to discard observations without any information. See this [example](https://gitlab.aicrowd.com/flatland/baselines/blob/master/torch_training/training_navigation.py) for a simple implementation.
- The environment checks if agent is allowed to move to next cell only at the time of the switch to the next cell
In your controller, you can check whether an agent requires an action by checking `info`:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
if info['action_required'][a]:
action_dict.update({a: ...})
```
Notice the following about `info['action_required'][a]`:
* If the agent breaks down (see stochasticity below) on entering the cell (no distance elapsed in the cell), an action is required as long as the agent is broken down;
when it gets back to work, the action chosen just before will be taken and executed at the end of the cell; you may check whether the agent
gets healthy again in the next step by checking `info['malfunction'][a] == 1`.
* Even when the agent has spent enough time in the cell, the next cell may not be free and the agent has to wait.
Later versions of **Flat**land might have varying speeds during episodes.
Therefore, we return the agents' speeds - in your controller, you can get an agent's speed from the `info` returned by `step`:
```python
obs, rew, done, info = env.step(actions)
...
for a in range(env.get_num_agents()):
speed = info['speed'][a]
```
Notice that we do not guarantee that the speed will be computed at each step, but if not costly we will return it at each step.
### RailEnv Malfunctioning / Stochasticity
Stochastic events may happen during the episodes.
This is very common for railway networks, where the initial plan usually needs to be rescheduled during operations, as minor events such as delayed departures from train stations, malfunctions on trains or infrastructure, or simply the weather lead to delayed trains.
We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
```python
# Use the malfunction generator to break agents from time to time
stochastic_data = {
    'prop_malfunction': 0.5,  # Percentage of defective agents
    'malfunction_rate': 30,   # Rate of malfunction occurrence
    'min_duration': 3,        # Minimal duration of malfunction
    'max_duration': 10        # Max duration of malfunction
}
```
The parameters are as follows:
- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that every agent can break.
- `malfunction_rate` is the mean rate of the Poisson process, in number of environment steps.
- `min_duration` and `max_duration` set the range of malfunction durations; durations are sampled uniformly from this range.
You can introduce stochasticity by simply creating the env as follows:
```python
env = RailEnv(
...
stochastic_data=stochastic_data, # Malfunction data generator
...
)
env.reset()
```
In your controller, you can check whether an agent is malfunctioning:
```python
obs, rew, done, info = env.step(actions)
...
action_dict = dict()
for a in range(env.get_num_agents()):
    if info['malfunction'][a] == 0:
        action_dict.update({a: ...})

# Custom observation builder
tree_observation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())

# Different agent types (trains) with different speeds.
speed_ratio_map = {1.: 0.25,       # Fast passenger train
                   1. / 2.: 0.25,  # Fast freight train
                   1. / 3.: 0.25,  # Slow commuter train
                   1. / 4.: 0.25}  # Slow freight train

env = RailEnv(width=50,
              height=50,
              rail_generator=sparse_rail_generator(num_cities=20,        # Number of cities in map (where train stations are)
                                                   num_intersections=5,  # Number of intersections (no start / target)
                                                   num_trainstations=15, # Number of possible start/targets on map
                                                   min_node_dist=3,      # Minimal distance of nodes
                                                   node_radius=2,        # Proximity of stations to city center
                                                   num_neighb=4,         # Number of connections to other cities/intersections
                                                   seed=15,              # Random seed
                                                   grid_mode=True,
                                                   enhance_intersection=True
                                                   ),
              schedule_generator=sparse_schedule_generator(speed_ratio_map),
              number_of_agents=10,
              stochastic_data=stochastic_data,  # Malfunction data generator
              obs_builder_object=tree_observation)
env.reset()
```
### Observation Builders
Every `RailEnv` has an `obs_builder`. The `obs_builder` has full access to the `RailEnv`.
The `obs_builder` is called in the `step()` function to produce the observations.
```python
env = RailEnv(
...
obs_builder_object=TreeObsForRailEnv(
max_depth=2,
predictor=ShortestPathPredictorForRailEnv(max_depth=10)
),
...
)
env.reset()
```
The two principal observation builders provided are global and tree.
#### Global Observation Builder
`GlobalObsForRailEnv` gives a global observation of the entire rail environment. It is composed of the following elements (a usage sketch follows the list):
* transition map array with dimensions (env.height, env.width, 16), assuming a 16-bit encoding of transitions.
* Two 2D arrays (map_height, map_width, 2) containing respectively the position of the given agent's target and the positions of the other agents' targets.
* A 3D array (map_height, map_width, 4) with
  - first channel containing the agent's position and direction
  - second channel containing the other agents' positions and directions
  - third channel containing agent malfunctions
  - fourth channel containing agent fractional speeds
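As referenced above, a minimal usage sketch (the import path follows the flatland package layout documented in the API reference; the env construction mirrors the earlier examples, and `reset` may additionally return an info dict depending on the flatland version):

```python
from flatland.envs.observations import GlobalObsForRailEnv

env = RailEnv(width=30,
              height=30,
              rail_generator=sparse_rail_generator(),
              number_of_agents=2,
              obs_builder_object=GlobalObsForRailEnv())
obs = env.reset()
agent_obs = obs[0]  # per-agent tuple of the arrays listed above (here: agent handle 0)
```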
#### Tree Observation Builder
`TreeObsForRailEnv` computes the current observation for each agent.
The observation vector is composed of 4 sequential parts, corresponding to data from the up to 4 possible
movements in a `RailEnv` (up to because only a subset of possible transitions are allowed in RailEnv).
The possible movements are sorted relative to the current orientation of the agent, rather than NESW as for
the transitions. The order is:
```console
[data from 'left'] + [data from 'forward'] + [data from 'right'] + [data from 'back']
```
Each branch data is organized as:
```console
[root node information] +
[recursive branch data from 'left'] +
[... from 'forward'] +
[... from 'right'] +
[... from 'back']
```
Each node information is composed of 9 features:
1. if own target lies on the explored branch, the current distance from the agent in number of cells is stored.
2. if another agent's target is detected, the distance in number of cells from the agent's current location
is stored.
3. if another agent is detected, the distance in number of cells from the current agent position is stored.
4. possible conflict detected:
if another agent is predicted to pass along this cell at the same time as the agent (`tot_dist`), we store the
distance in number of cells from the current agent position.
```console
0 = no other agent reserves the same cell at a similar time
```
5. if a switch that cannot be used by the agent is detected, we store the distance.
6. This feature stores the distance in number of cells to the next branching (current node)
7. minimum distance from node to the agent's target given the direction of the agent if this path is chosen
8. agent in the same direction
```console
n = number of agents present same direction
(possible future use: number of other agents in the same direction in this branch)
0 = no agent present same direction
```
9. agent in the opposite direction
```console
n = number of agents present other direction than myself (so conflict)
(possible future use: number of other agents in other direction in this branch, ie. number of conflicts)
0 = no agent present other direction than myself
```
10. malfunctioning/blocking agents
```console
n = number of time steps the observed agent remains blocked
```
11. slowest observed speed of an agent in same direction
```console
1 if no agent is observed
min_fractional speed otherwise
```
Missing/padding nodes are filled in with -inf (truncated).
Missing values in present node are filled in with +inf (truncated).
In case of the root node, the values are [0, 0, 0, 0, distance from agent to target, own malfunction, own speed]
In case the target node is reached, the values are [0, 0, 0, 0, 0].
### Predictors
Predictors make predictions on future agents' moves based on the current state of the environment.
They are decoupled from observation builders in order to encapsulate the functionality and make it re-usable.
For instance, `TreeObsForRailEnv` optionally uses the predicted trajectories while exploring
the branches of an agent's future moves to detect future conflicts.
The general call structure is as follows:
```python
RailEnv.step()
-> ObservationBuilder.get_many()
-> self.predictor.get()
self.get()
self.get()
...
```
### Maximum number of allowed time steps in an episode
Whenever the schedule within RailEnv is generated, the maximum number of allowed time steps in an episode is calculated
according to the following formula:
```python
RailEnv._max_episode_steps = timedelay_factor * alpha * (env.width + env.height + ratio_nr_agents_to_nr_cities)
```
where the following default values are used: `timedelay_factor=4`, `alpha=2` and `ratio_nr_agents_to_nr_cities=20`.
If participants want to use their own formula, they have to overwrite the method `compute_max_episode_steps()` of the class `RailEnv`.
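As a worked example with the default values on a 30x30 grid:

```python
timedelay_factor, alpha, ratio_nr_agents_to_nr_cities = 4, 2, 20
width, height = 30, 30
max_episode_steps = timedelay_factor * alpha * (width + height + ratio_nr_agents_to_nr_cities)
# 4 * 2 * (30 + 30 + 20) = 640 steps
```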
## Rendering Specifications
### Scope
This doc specifies the software to meet the requirements in the Visualization requirements doc.
### References
- [Visualization Requirements](visualization)
- [Core Spec](./core)
### Interfaces
#### Interface with Environment Component
- Environment produces the Env Snapshot data structure (TBD)
- Renderer reads the Env Snapshot
- Connection between Env and Renderer, either:
- Environment “invokes” the renderer in-process
- Renderer “connects” to the environment
- Eg Env acts as a server, Renderer as a client
- Either
- The Env sends a Snapshot to the renderer and waits for rendering
- Or:
- The Env puts snapshots into a rendering queue
- The renderer blocks / waits on the queue, waiting for a new snapshot to arrive
- If several snapshots are waiting, delete and skip them and just render the most recent
- Delete the snapshot after rendering
- Optionally
- Render every frame / time step
- Or, render frames without blocking environment
- Render frames in separate process / thread
##### Environment Snapshot
#### Data Structure
A definition of the data structure is to be provided in the Core requirements or Interfaces doc.
##### Example only
Top-level dictionary
- World nd-array
- Each element represents available transitions in a cell
- List of agents
- Agent location, orientation, movement (forward / stop / turn?)
- Observation
- Rectangular observation
- Maybe just dimensions - width + height (ie no need for contents)
- Can be highlighted in display as per minigrid
- Tree-based observation
- TBD
#### Existing Tools / Libraries
1. Pygame
   1. Very easy to use. Like dead simple to add sprites etc. [Link](https://studywolf.wordpress.com/2015/03/06/arm-visualization-with-pygame/)
   2. No inbuilt support for threads/processes. Does get faster if using pypy/psyco.
2. PyQt
   1. Somewhat simple, a little more verbose to use the different modules.
   2. Multi-threaded via QThread! Yay! (Doesn't block the main thread that does the real work.) [Link](https://nikolak.com/pyqt-threading-tutorial/)
##### How to structure the code
1. Define draw functions/classes for each primitive
   1. Primitives: Agents (Trains), Railroad, Grass, Houses etc.
2. Background. Initialize the background before starting the episode.
   1. Static objects in the scenes, directly draw those primitives once and cache.
##### Proposed Interfaces
To-be-filled
#### Technical Graphics Considerations
##### Overlay dynamic primitives over the background at each time step.
No point trying to figure out changes; we need to explicitly draw every primitive anyway (that's how these renderers work).
## Visualization
![logo](https://drive.google.com/uc?export=view&id=1rstqMPJXFJd9iD46z1A5Rus-W0Ww6O8i)
### Introduction & Scope
Broad requirements for human-viewable display of a single Flatland Environment.
#### Context
Shows this software component in relation to some of the other components. We name the component the "Renderer". Multiple agents interact with a single Environment. A renderer interacts with the environment, and displays on screen, and/or into movie or image files.
<p id="gdcalert2" ><span style="color: red; font-weight: bold">>>>>> gd2md-html alert: inline drawings not supported directly from Docs. You may want to copy the inline drawing to a standalone drawing and export by reference. See <a href="https://github.com/evbacher/gd2md-html/wiki/Google-Drawings-by-reference">Google Drawings by reference</a> for details. The img URL below is a placeholder. </span><br>(<a href="#">Back to top</a>)(<a href="#gdcalert3">Next alert</a>)<br><span style="color: red; font-weight: bold">>>>>> </span></p>
![drawing](https://docs.google.com/a/google.com/drawings/d/12345/export/png)
### Requirements
#### Primary Requirements
1. Visualize or Render the state of the environment
    1. Read an Environment + Agent Snapshot provided by the Environment component
    2. Display onto a local screen in real-time (or near real-time)
    3. Include all the agents
    4. Illustrate the agent observations (typically subsets of the grid / world)
    5. 2d-rendering only
2. Output visualisation into movie / image files for use in later animation
3. Should not impose control-flow constraints on Environment
    1. Should not force env to respond to events
    2. Should not drive the "main loop" of Inference or training
#### Secondary / Optional Requirements
1. During training (possibly across multiple processes or machines / OS instances), display a single training environment,
    1. without holding up the other environments in the training.
    2. Some training environments may be remote to the display machine (eg using GCP / AWS)
    3. Attach to / detach from running environment / training cluster without restarting training.
2. Provide a switch to make use of graphics / artwork provided by graphic artist
    1. Fast / compact mode for general use
    2. Beauty mode for publicity / demonstrations
3. Provide a switch between smooth / continuous animation of an agent (slower) vs jumping from cell to cell (faster)
    1. Smooth / continuous translation between cells
    2. Smooth / continuous rotation
4. Speed - ideally capable of 60fps (see performance metrics)
5. Window view - only render part of the environment, or a single agent and agents nearby.
    1. May not be feasible to render very large environments
    2. Possibly more than one window, ie one for each selected agent
    3. Window(s) can be tied to agents, ie they move around with the agent, and optionally rotate with the agent.
6. Interactive scaling
    1. eg wide view, narrow / enlarged view
    2. eg with mouse scrolling & zooming
7. Minimize necessary skill-set for participants
    1. Python API to gui toolkit, no need for C/C++
8. View on various media:
    1. Linux & Windows local display
    2. Browser
#### Performance Metrics
Here are some performance metrics which the Renderer should meet.
| Operation | Per second | Target Time (ms) | Prototype time (ms) |
|---|---:|---:|---:|
| Write an agent update (ie env as client providing an agent update) | | 0.1 | |
| Draw an environment window 20x20 | 60 | 16 | |
| Draw an environment window 50 x 50 | 10 | | |
| Draw an agent update on an existing environment window. 5 agents visible. | | 1 | |
#### Example Visualization
### Reference Documents
Link to this doc: https://docs.google.com/document/d/1Y4Mw0Q6r8PEOvuOZMbxQX-pV2QKDuwbZJBvn18mo9UU/edit#
#### Core Specification
This specifies the system containing the environment and agents - this will be able to run independently of the renderer.
[https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit](https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit)
The data structure which the renderer needs to read initially resides here.
#### Visualization Specification
This will specify the software which will meet the requirements documented here.
[https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#](https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#)
#### Interface Specification
This will specify the interfaces through which the different components communicate.