flatland.core.grid package
==========================

Submodules
----------

flatland.core.grid.grid4 module
-------------------------------

.. automodule:: flatland.core.grid.grid4
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid4\_astar module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_astar
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid4\_utils module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_utils
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid8 module
-------------------------------

.. automodule:: flatland.core.grid.grid8
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid\_utils module
-------------------------------------

.. automodule:: flatland.core.grid.grid_utils
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.rail\_env\_grid module
-----------------------------------------

.. automodule:: flatland.core.grid.rail_env_grid
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core.grid
   :members:
   :undoc-members:
   :show-inheritance:
flatland.core package
=====================

Submodules
----------

flatland.core.env module
------------------------

.. automodule:: flatland.core.env
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.transitions module
--------------------------------

.. automodule:: flatland.core.transitions
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core
   :members:
   :undoc-members:
   :show-inheritance:
flatland.envs package
=====================

Submodules
----------

flatland.envs.rail\_env module
------------------------------

.. automodule:: flatland.envs.rail_env
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.envs
   :members:
   :undoc-members:
   :show-inheritance:
flatland.evaluators package
===========================

Submodules
----------

flatland.evaluators.aicrowd\_helpers module
-------------------------------------------

.. automodule:: flatland.evaluators.aicrowd_helpers
   :members:
   :undoc-members:
   :show-inheritance:

flatland.evaluators.client module
---------------------------------

.. automodule:: flatland.evaluators.client
   :members:
   :undoc-members:
   :show-inheritance:

flatland.evaluators.messages module
-----------------------------------

.. automodule:: flatland.evaluators.messages
   :members:
   :undoc-members:
   :show-inheritance:

flatland.evaluators.service module
----------------------------------

.. automodule:: flatland.evaluators.service
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.evaluators
   :members:
   :undoc-members:
   :show-inheritance:
flatland package
================

Subpackages
-----------

.. toctree::

   flatland.core
   flatland.envs
   flatland.evaluators
   flatland.utils

Submodules
----------

flatland.cli module
-------------------

.. automodule:: flatland.cli
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland
   :members:
   :undoc-members:
   :show-inheritance:

flatland.utils package
======================

Module contents
---------------

.. automodule:: flatland.utils
   :members:
   :undoc-members:
   :show-inheritance:
Welcome to flatland's documentation!
====================================

.. include:: ../README.rst

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   readme
   installation
   about_flatland
   gettingstarted
   intro_observationbuilder
   localevaluation
   modules
   FAQ
   contributing
   authors
   01_readme
   03_tutorials_toc
   04_specifications_toc
   05_apidoc
   06_contributing
   07_changes
   08_authors
   09_faq_toc
   10_interface

Indices and tables
==================
.. highlight:: shell

============
Installation
============

Software Runtime & Dependencies
-------------------------------

This is the recommended way of installing and running flatland's dependencies.

* Install `Anaconda <https://www.anaconda.com/distribution/>`_ by following the instructions `here <https://www.anaconda.com/distribution/>`_.
* Create a new conda environment:

  .. code-block:: console

      $ conda create python=3.6 --name flatland-rl
      $ conda activate flatland-rl

* Install the necessary dependencies:

  .. code-block:: console

      $ conda install -c conda-forge cairosvg pycairo
      $ conda install -c anaconda tk

Stable release
--------------

To install flatland, run this command in your terminal:

.. code-block:: console

    $ pip install flatland-rl

This is the preferred method to install flatland, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.

.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
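You can verify the installation with a quick smoke test. This is a hedged check: it assumes the package exposes ``__version__`` (as in recent releases) and only confirms that the package imports.

.. code-block:: python

    import flatland

    # If the import succeeds, the installation is usable.
    print(flatland.__version__)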
From sources
------------

The sources for flatland can be downloaded from the `Gitlab repo`_.

You can clone the public repository:

.. code-block:: console

    $ git clone git@gitlab.aicrowd.com:flatland/flatland.git

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install

.. _Gitlab repo: https://gitlab.aicrowd.com/flatland/flatland

Jupyter Canvas Widget
---------------------

If you work with Jupyter notebooks, you also need to install the Jupyter Canvas Widget. For installation instructions, see
https://github.com/Who8MyLunch/Jupyter_Canvas_Widget#installation
# PettingZoo

> PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for Flatland.

## Background

PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for Multi-Agent Reinforcement Learning. The advantages below make it suitable for use with Flatland:

- Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from SuperSuit.
- Clean API (https://www.pettingzoo.ml/api) with additional facilities for parallel execution, saving observations, recording via the gym monitor, and processing/normalising observations.
- Scikit-learn-inspired API, e.g.

```python
act = model.predict(obs, deterministic=True)[0]
```

- Parallel learning with stable baselines 3 using just two lines of code:

```python
env = ss.pettingzoo_env_to_vec_env_v0(env)
env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class="stable_baselines3")
```

- Tested with various multi-agent environments with agent counts comparable to Flatland, e.g. https://www.pettingzoo.ml/magent
- The clean interface means we can plug in an experiment-tracking tool like wandb and retain full flexibility over which information we save.
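The parallel-learning pattern above can be expanded into a minimal end-to-end training sketch. This is hedged: it assumes `env` is already a PettingZoo parallel environment wrapping a Flatland rail env, that `supersuit` and `stable-baselines3` are installed, and the hyperparameters are placeholders.

```python
import supersuit as ss
from stable_baselines3 import PPO

# `env` is assumed to be a PettingZoo parallel environment
# (e.g. built from a Flatland rail env via the PettingZoo interface).
env = ss.pettingzoo_env_to_vec_env_v0(env)
# Run 8 copies of the environment, distributed over 4 CPUs.
env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class="stable_baselines3")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)  # placeholder training budget

obs = env.reset()
act = model.predict(obs, deterministic=True)[0]  # scikit-learn style prediction
```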
PettingZoo
==========

..

   PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for Flatland.

Background
----------

PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for Multi-Agent Reinforcement Learning. The advantages below make it suitable for use with Flatland:

* Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from SuperSuit.
* Clean API (https://www.pettingzoo.ml/api) with additional facilities for parallel execution, saving observations, recording via the gym monitor, and processing/normalising observations.
* Scikit-learn-inspired API, e.g.

  .. code-block:: python

     act = model.predict(obs, deterministic=True)[0]

* Parallel learning with stable baselines 3 using just two lines of code:

  .. code-block:: python

     env = ss.pettingzoo_env_to_vec_env_v0(env)
     env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class="stable_baselines3")

* Tested with various multi-agent environments with agent counts comparable to Flatland, e.g. https://www.pettingzoo.ml/magent
* The clean interface means we can plug in an experiment-tracking tool like wandb and retain full flexibility over which information we save.
# Environment Wrappers

> We provide various environment wrappers to work with both the rail env and the PettingZoo interface.

## Background

These wrappers change certain environment behaviors, which can help improve reinforcement learning training.

## Supported Inbuilt Wrappers

We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require many env properties that are only created on environment reset, so the rail env must be reset before it is wrapped. To use the wrappers, simply pass the freshly reset rail env. Code samples for each wrapper are shown below.

### ShortestPathAction Wrapper

To use the ShortestPathAction Wrapper, simply wrap the rail env as follows:

```python
rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)
```

The shortest path action wrapper maps the existing action space onto 3 actions: Shortest Path (`0`), Next Shortest Path (`1`) and Stop (`2`). Hence we must ensure that the predicted action is always one of these actions (0, 1 or 2). To route all agents along their shortest paths, pass `0` as the action.
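As a quick illustration of routing every agent along its shortest path, the loop below sends action `0` to all agents at each step. This is a sketch assuming the wrapped env keeps the standard `get_agent_handles()`/`step()` dict interface of the rail env.

```python
# After wrapping, action 0 means "follow the shortest path".
dones = {"__all__": False}
while not dones["__all__"]:
    action_dict = {handle: 0 for handle in rail_env.get_agent_handles()}
    obs, rewards, dones, info = rail_env.step(action_dict)
```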
### SkipNoChoice Wrapper

To use the SkipNoChoiceWrapper, simply wrap the rail env as follows:

```python
rail_env.reset(random_seed=1)
rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
```
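The two wrappers can also be stacked. Below is a hedged sketch: the import path is an assumption (in recent flatland releases the wrappers live under `flatland.contrib.wrappers`) and may vary by version.

```python
# Import path is an assumption; adjust for your flatland version.
from flatland.contrib.wrappers.flatland_wrappers import (
    ShortestPathActionWrapper,
    SkipNoChoiceCellsWrapper,
)

rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)
rail_env = SkipNoChoiceCellsWrapper(
    rail_env, accumulate_skipped_rewards=False, discounting=0.0)
```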
Environment Wrappers
====================

..

   We provide various environment wrappers to work with both the rail env and the PettingZoo interface.

Background
----------

These wrappers change certain environment behaviors, which can help improve reinforcement learning training.

Supported Inbuilt Wrappers
--------------------------

We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require many env properties that are only created on environment reset, so the rail env must be reset before it is wrapped. To use the wrappers, simply pass the freshly reset rail env. Code samples for each wrapper are shown below.

ShortestPathAction Wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^

To use the ShortestPathAction Wrapper, simply wrap the rail env as follows:

.. code-block:: python

   rail_env.reset(random_seed=1)
   rail_env = ShortestPathActionWrapper(rail_env)

The shortest path action wrapper maps the existing action space onto 3 actions: Shortest Path (\ ``0``\ ), Next Shortest Path (\ ``1``\ ) and Stop (\ ``2``\ ). Hence we must ensure that the predicted action is always one of these actions (0, 1 or 2). To route all agents along their shortest paths, pass ``0`` as the action.

SkipNoChoice Wrapper
^^^^^^^^^^^^^^^^^^^^

To use the SkipNoChoiceWrapper, simply wrap the rail env as follows:

.. code-block:: python

   rail_env.reset(random_seed=1)
   rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
================
Local Evaluation
================
This document explains how to locally evaluate your submissions before making
an official submission to the competition.
Requirements
--------------
* **flatland-rl** : We expect that you have `flatland-rl` installed by following the instructions in :doc:`installation`.
* **redis** : Additionally, you will need to have `redis installed <https://redis.io/topics/quickstart>`_ and **should have it running in the background.**
Test Data
---------

* **test env data** : You can `download and untar the test-env-data <https://www.aicrowd.com/challenges/flatland-challenge/dataset_files>`_
  at a location of your choice, let's say `/path/to/test-env-data/`. After untarring, the folder structure should look something like:

  .. code-block:: console

      .
      └── test-env-data
          ├── Test_0
          │   ├── Level_0.pkl
          │   └── Level_1.pkl
          ├── Test_1
          │   ├── Level_0.pkl
          │   └── Level_1.pkl
          ├..................
          ├..................
          ├── Test_8
          │   ├── Level_0.pkl
          │   └── Level_1.pkl
          └── Test_9
              ├── Level_0.pkl
              └── Level_1.pkl
Evaluation Service
------------------

* **start evaluation service** : Then you can start the evaluator by running:

  .. code-block:: console

      flatland-evaluator --tests /path/to/test-env-data/
RemoteClient
------------

* **run client** : Some `sample submission code can be found in the starter-kit <https://github.com/AIcrowd/flatland-challenge-starter-kit/>`_,
  but before you can run your code locally using `FlatlandRemoteClient`, you will have to set the `AICROWD_TESTS_FOLDER` environment variable to the location where you
  previously untarred the folder with the test-env-data:

  .. code-block:: console

      export AICROWD_TESTS_FOLDER="/path/to/test-env-data/"

      # or on Windows :
      #
      # set AICROWD_TESTS_FOLDER "\path\to\test-env-data\"

      # and then finally run your code
      python run.py
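For orientation, a minimal ``run.py`` might look like the sketch below. It is hedged: it follows the ``FlatlandRemoteClient`` loop used in the starter kit, ``my_controller`` is a hypothetical stand-in for your own policy, and details may differ from the official starter code.

.. code-block:: python

    from flatland.envs.observations import TreeObsForRailEnv
    from flatland.evaluators.client import FlatlandRemoteClient

    remote_client = FlatlandRemoteClient()

    def my_controller(observation, env):
        # Hypothetical placeholder policy: move every agent forward.
        return {handle: 2 for handle in env.get_agent_handles()}

    while True:
        # Ask the evaluator for the next test environment.
        observation, info = remote_client.env_create(
            obs_builder_object=TreeObsForRailEnv(max_depth=2))
        if not observation:
            break  # all evaluation episodes have been served
        while True:
            action = my_controller(observation, remote_client.env)
            observation, all_rewards, done, info = remote_client.env_step(action)
            if done["__all__"]:
                break

    print(remote_client.submit())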
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=python -msphinx
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=flatland

if "%1" == "" goto help

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The Sphinx module was not found. Make sure you have Sphinx installed,
	echo.then set the SPHINXBUILD environment variable to point to the full
	echo.path of the 'sphinx-build' executable. Alternatively you may add the
	echo.Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.http://sphinx-doc.org/
	exit /b 1
)

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%

:end
popd
flatland
========

.. toctree::
   :maxdepth: 4

   flatland
## Core Specifications

### Environment Class Overview

The Environment class contains all necessary functions for the interactions between the agents and the environment. The base Environment class is derived from rllib.env.MultiAgentEnv (https://github.com/ray-project/ray).

The functions are specific to each realization of Flatland (e.g. Railway, Vaccination, ...).

In particular, we retain the rllib interface in the use of the step() function, which accepts a dictionary of actions indexed by the agents' handles (returned by get_agent_handles()) and returns dictionaries of observations, rewards, dones and infos.
```python
class Environment:
    """Base interface for multi-agent environments in Flatland.

    Agents are identified by agent ids (handles).

    Examples:

        >>> obs, info = env.reset()
        >>> print(obs)
        {
            "train_0": [2.4, 1.6],
            "train_1": [3.4, -3.2],
        }
        >>> obs, rewards, dones, infos = env.step(
        ...     action_dict={"train_0": 1, "train_1": 0})
        >>> print(rewards)
        {
            "train_0": 3,
            "train_1": -1,
        }
        >>> print(dones)
        {
            "train_0": False,  # train_0 is still running
            "train_1": True,   # train_1 is done
            "__all__": False,  # the env is not done
        }
        >>> print(infos)
        {
            "train_0": {},  # info for train_0
            "train_1": {},  # info for train_1
        }
    """

    def __init__(self):
        pass

    def reset(self):
        """
        Resets the env and returns observations from agents in the environment.

        Returns
        -------
        obs : dict
            New observations for each agent.
        """
        raise NotImplementedError()

    def step(self, action_dict):
        """
        Performs an environment step with simultaneous execution of actions for
        agents in action_dict and returns observations from agents in the
        environment. The returns are dicts mapping from agent_id strings to values.

        Parameters
        ----------
        action_dict : dict
            Dictionary of actions to execute, indexed by agent id.

        Returns
        -------
        obs : dict
            New observations for each ready agent.
        rewards : dict
            Reward values for each ready agent.
        dones : dict
            Done values for each ready agent. The special key "__all__"
            (required) is used to indicate env termination.
        infos : dict
            Optional info values for each agent id.
        """
        raise NotImplementedError()

    def render(self):
        """
        Perform rendering of the environment.
        """
        raise NotImplementedError()

    def get_agent_handles(self):
        """
        Returns a list of agents' handles to be used as keys in the step()
        function.
        """
        raise NotImplementedError()
```
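For orientation, here is a hedged usage sketch of the interface above. `MyRailEnv` is a hypothetical concrete subclass, and the all-zero action dict is just a placeholder policy.

```python
env = MyRailEnv()  # hypothetical subclass implementing Environment
obs = env.reset()

dones = {"__all__": False}
while not dones["__all__"]:
    # One simultaneous step: an action for every agent handle.
    action_dict = {handle: 0 for handle in env.get_agent_handles()}
    obs, rewards, dones, infos = env.step(action_dict)
```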
## Intro

In a human-readable language, the specifications provide

- an overview of the code base (hand-drawn concept)
- the key concepts (generators, envs) and how they are linked
- links to the relevant code base

![Overview](img/UML_flatland.png)

[Diagram Source](https://confluence.sbb.ch/x/pQfsSw)
=============================
Observation and Action Spaces
=============================

This is an introduction to the three standard observations and the action space of **Flatland**.

Action Space
^^^^^^^^^^^^
Flatland is a railway simulation, so the actions of an agent are strongly constrained by the railway network; in many cases not all actions are valid.
The possible actions of an agent are
- ``0`` **Do Nothing**: If the agent is moving, it continues moving; if it is stopped, it stays stopped.
- ``1`` **Deviate Left**: If the agent is at a switch with a transition to its left, it chooses the left path; otherwise the action has no effect.
- ``2`` **Go Forward**: This action starts a stopped agent and moves it forward along the available transitions.
- ``3`` **Deviate Right**: The same as deviate left, but for right turns.
- ``4`` **Stop**: This action causes the agent to stop.
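As a hedged illustration (assuming the standard ``RailEnvActions`` enum from ``flatland.envs.rail_env`` and an already created ``env``), actions are submitted as a dict keyed by agent handle:

.. code-block:: python

    from flatland.envs.rail_env import RailEnvActions

    # Move every agent forward for one step.
    action_dict = {handle: RailEnvActions.MOVE_FORWARD
                   for handle in env.get_agent_handles()}
    obs, rewards, dones, info = env.step(action_dict)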
Observation Spaces
^^^^^^^^^^^^^^^^^^

In the **Flatland** environment we have included three basic observations to get started. The figure below illustrates the observation ranges of the different basic observations: ``Global``, ``Local Grid`` and ``Local Tree``.
.. image:: https://i.imgur.com/oo8EIYv.png
Global Observation
~~~~~~~~~~~~~~~~~~
Gives a global observation of the entire rail environment.
The observation is composed of the following elements:
We encourage you to enhance this observation with any layer you think might help.
It would also be possible to construct a global observation for a super agent that controls all agents at once.
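A hedged construction sketch (assuming ``GlobalObsForRailEnv`` from ``flatland.envs.observations`` and a rail generator matching your flatland version):

.. code-block:: python

    from flatland.envs.observations import GlobalObsForRailEnv
    from flatland.envs.rail_env import RailEnv
    from flatland.envs.rail_generators import sparse_rail_generator

    # Build a small env whose observations are global snapshots of the grid.
    env = RailEnv(width=30, height=30,
                  rail_generator=sparse_rail_generator(),
                  number_of_agents=2,
                  obs_builder_object=GlobalObsForRailEnv())
    obs, info = env.reset()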
Local Grid Observation
~~~~~~~~~~~~~~~~~~~~~~
Gives a local observation of the rail environment around the agent.
The observation is composed of the following elements:
Be aware that this observation **does not** contain any clues about target location.
We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps.
Tree Observation
~~~~~~~~~~~~~~~~
The tree observation is built by exploiting the graph structure of the railway network. The observation is generated by spanning a **4 branched tree** from the current position of the agent. Each branch follows the allowed transitions (backward branch only allowed at dead-ends) until a cell with multiple allowed transitions is reached. Here the information gathered along the branch is stored as a node in the tree.
The figure below illustrates how the tree observation is built:
The right side of the figure shows the resulting tree of the railway network shown on the left.
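A hedged construction sketch (assuming ``TreeObsForRailEnv`` from ``flatland.envs.observations`` and the shortest-path predictor from ``flatland.envs.predictions``; parameter names may vary between versions):

.. code-block:: python

    from flatland.envs.observations import TreeObsForRailEnv
    from flatland.envs.predictions import ShortestPathPredictorForRailEnv

    # Span the 4-branched tree up to two branch points deep from each agent.
    obs_builder = TreeObsForRailEnv(
        max_depth=2,
        predictor=ShortestPathPredictorForRailEnv())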
Node Information
~~~~~~~~~~~~~~~~
Each node is filled with information gathered along the path to the node. Currently each node contains 9 features:
- 1: if own target lies on the explored branch, the current distance from the agent (in number of cells) is stored.