flatland.core.grid package
==========================

Submodules
----------

flatland.core.grid.grid4 module
-------------------------------

.. automodule:: flatland.core.grid.grid4
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid4\_astar module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_astar
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid4\_utils module
--------------------------------------

.. automodule:: flatland.core.grid.grid4_utils
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid8 module
-------------------------------

.. automodule:: flatland.core.grid.grid8
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.grid\_utils module
-------------------------------------

.. automodule:: flatland.core.grid.grid_utils
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.grid.rail\_env\_grid module
-----------------------------------------

.. automodule:: flatland.core.grid.rail_env_grid
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core.grid
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core package
=====================

Submodules
----------

flatland.core.env module
------------------------

.. automodule:: flatland.core.env
   :members:
   :undoc-members:
   :show-inheritance:

flatland.core.transitions module
--------------------------------

.. automodule:: flatland.core.transitions
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.core
   :members:
   :undoc-members:
   :show-inheritance:

flatland.envs package
=====================

Submodules
----------

flatland.envs.rail\_env module
------------------------------

.. automodule:: flatland.envs.rail_env
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland.envs
   :members:
   :undoc-members:
   :show-inheritance:

flatland package
================

Subpackages
-----------

.. toctree::

   flatland.core
   flatland.envs
   flatland.utils

Submodules
----------

flatland.cli module
-------------------

.. automodule:: flatland.cli
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: flatland
   :members:
   :undoc-members:
   :show-inheritance:

flatland.utils package
======================

Module contents
---------------

.. automodule:: flatland.utils
   :members:
   :undoc-members:
   :show-inheritance:
Welcome to flatland's documentation!
====================================

.. include:: ../README.rst

.. toctree::
   :maxdepth: 2
   :caption: Contents:

   readme
   installation
   about_flatland
   gettingstarted
   modules
   FAQ
   contributing
   authors
   01_readme
   03_tutorials_toc
   04_specifications_toc
   05_apidoc
   06_contributing
   07_changes
   08_authors
   09_faq_toc
   10_interface

Indices and tables
==================
.. highlight:: shell

============
Installation
============

Software Runtime & Dependencies
-------------------------------

This is the recommended way to install flatland and run its dependencies.

* Install `Anaconda <https://www.anaconda.com/distribution/>`_ by following the instructions `here <https://www.anaconda.com/distribution/>`_
* Create a new conda environment:

  .. code-block:: console

      $ conda create python=3.6 --name flatland-rl
      $ conda activate flatland-rl

* Install the necessary dependencies:

  .. code-block:: console

      $ conda install -c conda-forge cairosvg pycairo
      $ conda install -c anaconda tk

Stable release
--------------

To install flatland, run this command in your terminal:

.. code-block:: console

    $ pip install flatland-rl

This is the preferred method to install flatland, as it will always install the most recent stable release.

If you don't have `pip`_ installed, this `Python installation guide`_ can guide
you through the process.

.. _pip: https://pip.pypa.io
.. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/

From sources
------------

The sources for flatland can be downloaded from the `Gitlab repo`_.

You can clone the public repository:

.. code-block:: console

    $ git clone git@gitlab.aicrowd.com:flatland/flatland.git

Once you have a copy of the source, you can install it with:

.. code-block:: console

    $ python setup.py install

.. _Gitlab repo: https://gitlab.aicrowd.com/flatland/flatland

Jupyter Canvas Widget
---------------------

If you work with Jupyter notebooks, you also need to install the Jupyter Canvas Widget. For installation instructions, see
https://github.com/Who8MyLunch/Jupyter_Canvas_Widget#installation
# PettingZoo

> PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for flatland.

## Background

PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for multi-agent reinforcement learning. The following advantages make it suitable for use with flatland:

- Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from SuperSuit.
- Clean API (https://www.pettingzoo.ml/api) with additional facilities for parallel execution, saving observations, recording with the gym monitor, and processing/normalising observations.
- Scikit-learn inspired API, e.g.

  ```python
  act = model.predict(obs, deterministic=True)[0]
  ```

- Parallel learning with stable baselines 3 in just two lines of code:

  ```python
  env = ss.pettingzoo_env_to_vec_env_v0(env)
  env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class="stable_baselines3")
  ```

- Tested with various multi-agent environments whose agent counts are comparable to flatland, e.g. https://www.pettingzoo.ml/magent
- A clean interface means we can add experiment-tracking tools such as wandb and have full flexibility over which information to save.
PettingZoo
==========

..

   PettingZoo (https://www.pettingzoo.ml/) is a collection of multi-agent environments for reinforcement learning. We provide a PettingZoo interface for flatland.

Background
----------

PettingZoo is a popular multi-agent environment library (https://arxiv.org/abs/2009.14471) that aims to be the gym standard for multi-agent reinforcement learning. The following advantages make it suitable for use with flatland:

* Works with both rllib (https://docs.ray.io/en/latest/rllib.html) and stable baselines 3 (https://stable-baselines3.readthedocs.io/) using wrappers from SuperSuit.
* Clean API (https://www.pettingzoo.ml/api) with additional facilities for parallel execution, saving observations, recording with the gym monitor, and processing/normalising observations.
* Scikit-learn inspired API, e.g.

  .. code-block:: python

      act = model.predict(obs, deterministic=True)[0]

* Parallel learning with stable baselines 3 in just two lines of code:

  .. code-block:: python

      env = ss.pettingzoo_env_to_vec_env_v0(env)
      env = ss.concat_vec_envs_v0(env, 8, num_cpus=4, base_class='stable_baselines3')

* Tested with various multi-agent environments whose agent counts are comparable to flatland, e.g. https://www.pettingzoo.ml/magent
* A clean interface means we can add experiment-tracking tools such as wandb and have full flexibility over which information to save.
# Environment Wrappers

> We provide various environment wrappers to work with both the rail env and the PettingZoo interface.

## Background

These wrappers change certain environment behaviours, which can help to improve reinforcement learning training.

## Supported Inbuilt Wrappers

We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require many env properties that are only created on environment reset, so the rail env must be reset before a wrapper is applied. To use a wrapper, simply pass the reset rail env. Code samples are shown below for each wrapper.

### ShortestPathAction Wrapper

To use the ShortestPathAction wrapper, simply wrap the rail env as follows:

```python
rail_env.reset(random_seed=1)
rail_env = ShortestPathActionWrapper(rail_env)
```

The shortest path action wrapper maps the existing action space onto 3 actions: Shortest Path (`0`), Next Shortest Path (`1`) and Stop (`2`). Hence, the predicted action must always be one of these three actions (0, 1 or 2). To route all agents along their shortest paths, pass `0` as the action.
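The wrapper pattern above can be sketched in a few lines. This is an illustrative toy, not flatland's actual implementation: `DummyRailEnv`, `ShortestPathActionSketch` and the per-agent lookup tables are invented for the example; the only fact taken from flatland is that raw actions range over 0-4 with `4` meaning stop.

```python
class DummyRailEnv:
    """Stand-in env with flatland's 5-action space (0-4).

    Its step() echoes the raw actions back as the observation so the
    action mapping is visible."""
    def step(self, action_dict):
        return dict(action_dict), {}, {"__all__": False}, {}


class ShortestPathActionSketch:
    """Maps {0: shortest path, 1: next shortest path, 2: stop} to raw actions."""
    def __init__(self, env):
        self.env = env

    def _translate(self, handle, action):
        # The real wrapper consults a shortest-path predictor; here we fake
        # the per-agent best/second-best transitions with fixed lookups.
        fake_shortest = {0: 2, 1: 1}      # hypothetical best raw action per agent
        fake_alternative = {0: 3, 1: 2}   # hypothetical second-best raw action
        if action == 0:
            return fake_shortest[handle]
        if action == 1:
            return fake_alternative[handle]
        return 4  # 2 -> stop (raw action 4 in flatland)

    def step(self, action_dict):
        raw = {h: self._translate(h, a) for h, a in action_dict.items()}
        return self.env.step(raw)


env = ShortestPathActionSketch(DummyRailEnv())
# agent 0 follows its shortest path, agent 1 stops
obs, rewards, dones, infos = env.step({0: 0, 1: 2})
```

The design point is that the wrapper only translates actions on the way in; observations, rewards and dones pass through unchanged.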
### SkipNoChoice Wrapper

To use the SkipNoChoiceCellsWrapper, simply wrap the rail env as follows:

```python
rail_env.reset(random_seed=1)
rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
```
Environment Wrappers
====================

..

   We provide various environment wrappers to work with both the rail env and the PettingZoo interface.

Background
----------

These wrappers change certain environment behaviours, which can help to improve reinforcement learning training.

Supported Inbuilt Wrappers
--------------------------

We provide two sample wrappers: the ShortestPathAction wrapper and the SkipNoChoice wrapper. The wrappers require many env properties that are only created on environment reset, so the rail env must be reset before a wrapper is applied. To use a wrapper, simply pass the reset rail env. Code samples are shown below for each wrapper.

ShortestPathAction Wrapper
^^^^^^^^^^^^^^^^^^^^^^^^^^

To use the ShortestPathAction wrapper, simply wrap the rail env as follows:

.. code-block:: python

    rail_env.reset(random_seed=1)
    rail_env = ShortestPathActionWrapper(rail_env)

The shortest path action wrapper maps the existing action space onto 3 actions: Shortest Path (``0``), Next Shortest Path (``1``) and Stop (``2``). Hence, the predicted action must always be one of these three actions (0, 1 or 2). To route all agents along their shortest paths, pass ``0`` as the action.
SkipNoChoice Wrapper
^^^^^^^^^^^^^^^^^^^^

To use the SkipNoChoiceCellsWrapper, simply wrap the rail env as follows:

.. code-block:: python

    rail_env.reset(random_seed=1)
    rail_env = SkipNoChoiceCellsWrapper(rail_env, accumulate_skipped_rewards=False, discounting=0.0)
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=python -msphinx
)
set SOURCEDIR=.
set BUILDDIR=_build
set SPHINXPROJ=flatland
if "%1" == "" goto help
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The Sphinx module was not found. Make sure you have Sphinx installed,
echo.then set the SPHINXBUILD environment variable to point to the full
echo.path of the 'sphinx-build' executable. Alternatively you may add the
echo.Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
:end
popd
flatland
========

.. toctree::
   :maxdepth: 4

   flatland
## Core Specifications

### Environment Class Overview

The Environment class contains all necessary functions for the interactions between the agents and the environment. The base Environment class is derived from rllib.env.MultiAgentEnv (https://github.com/ray-project/ray).

The functions are specific to each realization of Flatland (e.g. Railway, Vaccination, ...).

In particular, we retain the rllib interface in the use of the step() function, which accepts a dictionary of actions indexed by the agents' handles (returned by get_agent_handles()) and returns dictionaries of observations, rewards, dones and infos.
```python
class Environment:
    """Base interface for multi-agent environments in Flatland.

    Agents are identified by agent ids (handles).

    Examples:

        >>> obs, info = env.reset()
        >>> print(obs)
        {
            "train_0": [2.4, 1.6],
            "train_1": [3.4, -3.2],
        }
        >>> obs, rewards, dones, infos = env.step(
        ...     action_dict={"train_0": 1, "train_1": 0})
        >>> print(rewards)
        {
            "train_0": 3,
            "train_1": -1,
        }
        >>> print(dones)
        {
            "train_0": False,  # train_0 is still running
            "train_1": True,   # train_1 is done
            "__all__": False,  # the env is not done
        }
        >>> print(infos)
        {
            "train_0": {},  # info for train_0
            "train_1": {},  # info for train_1
        }
    """

    def __init__(self):
        pass

    def reset(self):
        """
        Resets the env and returns observations from agents in the environment.

        Returns
        -------
        obs : dict
            New observations for each agent.
        """
        raise NotImplementedError()

    def step(self, action_dict):
        """
        Performs an environment step with simultaneous execution of actions for
        agents in action_dict.
        Returns observations from agents in the environment.
        The returns are dicts mapping from agent_id strings to values.

        Parameters
        ----------
        action_dict : dict
            Dictionary of actions to execute, indexed by agent id.

        Returns
        -------
        obs : dict
            New observations for each ready agent.
        rewards : dict
            Reward values for each ready agent.
        dones : dict
            Done values for each ready agent. The special key "__all__"
            (required) is used to indicate env termination.
        infos : dict
            Optional info values for each agent id.
        """
        raise NotImplementedError()

    def render(self):
        """
        Perform rendering of the environment.
        """
        raise NotImplementedError()

    def get_agent_handles(self):
        """
        Returns a list of agents' handles to be used as keys in the step()
        function.
        """
        raise NotImplementedError()
```
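To make the dict-in / dict-out convention concrete, here is a toy concrete environment following the interface above. The countdown dynamics, `CountdownEnv`, and the reward scheme are invented for illustration; the parts taken from the spec are the handle-keyed dictionaries and the required `"__all__"` key in `dones`.

```python
class CountdownEnv:
    """Toy environment: each agent counts down from a start value.

    An agent is done when its counter reaches 0. Follows the dict-based
    interface above: actions, obs, rewards, dones and infos are all
    dictionaries keyed by agent handles."""

    def __init__(self, starts):
        self.counters = dict(starts)

    def get_agent_handles(self):
        return list(self.counters)

    def reset(self):
        return {h: c for h, c in self.counters.items()}

    def step(self, action_dict):
        obs, rewards, dones = {}, {}, {}
        for handle, action in action_dict.items():
            if action == 1:               # 1 = decrement, 0 = wait (made up)
                self.counters[handle] -= 1
            obs[handle] = self.counters[handle]
            rewards[handle] = 1 if self.counters[handle] == 0 else -1
            dones[handle] = self.counters[handle] <= 0
        # The special "__all__" key signals termination of the whole env.
        dones["__all__"] = all(dones[h] for h in action_dict)
        infos = {h: {} for h in action_dict}
        return obs, rewards, dones, infos


env = CountdownEnv({"train_0": 1, "train_1": 2})
obs, rewards, dones, infos = env.step({"train_0": 1, "train_1": 1})
# train_0 reaches 0 and is done; train_1 still has one step to go,
# so dones["__all__"] stays False
```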
## Intro

In a human-readable language, the specifications provide

- a code base overview (hand-drawn concept)
- the key concepts (generators, envs) and how they are linked
- links to the relevant code base

![Overview](img/UML_flatland.png)

`Diagram Source <https://confluence.sbb.ch/x/pQfsSw>`_
=============================
Observation and Action Spaces
=============================

This is an introduction to the three standard observations and the action space of **Flatland**.

Action Space
^^^^^^^^^^^^

Flatland is a railway simulation. Thus the actions of an agent are strongly limited to the railway network. This means that in many cases not all actions are valid.
The possible actions of an agent are

- ``4`` **Stop**: This action causes the agent to stop.

Observation Spaces
^^^^^^^^^^^^^^^^^^

In the **Flatland** environment we have included three basic observations to get started. The figure below illustrates the observation range of the different basic observations: ``Global``, ``Local Grid`` and ``Local Tree``.

.. image:: https://i.imgur.com/oo8EIYv.png

Global Observation
~~~~~~~~~~~~~~~~~~

Gives a global observation of the entire rail environment.

The observation is composed of the following elements:

We encourage you to enhance this observation with any layer you think might help.
It would also be possible to construct a global observation for a super agent that controls all agents at once.

Local Grid Observation
~~~~~~~~~~~~~~~~~~~~~~

Gives a local observation of the rail environment around the agent.

The observation is composed of the following elements:

Be aware that this observation **does not** contain any clues about target locations.
We encourage you to come up with creative ways to overcome this problem. In the tree observation below we introduce the concept of distance maps.

Tree Observation
~~~~~~~~~~~~~~~~

The tree observation is built by exploiting the graph structure of the railway network. The observation is generated by spanning a **4-branched tree** from the current position of the agent. Each branch follows the allowed transitions (a backward branch is only allowed at dead-ends) until a cell with multiple allowed transitions is reached. There, the information gathered along the branch is stored as a node in the tree.

The figure below illustrates how the tree observation is built:

The right side of the figure shows the resulting tree of the railway network on the left.

Node Information
~~~~~~~~~~~~~~~~

Each node is filled with information gathered along the path to the node. Currently each node contains 9 features:

- 1: if own target lies on the explored branch the current distance from the agent in number of cells is stored.
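The branch-expansion idea behind the tree observation can be sketched on a toy rail graph. This is illustrative only: the `RAIL` adjacency dict, `explore_branch` and `build_tree` are invented for the example (real flatland cells carry transition bitmaps and the real nodes carry 9 features, not just a distance).

```python
# Toy rail graph: cell -> list of reachable next cells.
# A cell with more than one successor stands in for a switch.
RAIL = {
    "A": ["B"],
    "B": ["C"],
    "C": ["D", "E"],   # switch: a tree node is recorded here
    "D": [],           # dead-end
    "E": ["F"],
    "F": [],           # dead-end
}


def explore_branch(cell):
    """Follow the single allowed transition until a switch or dead-end.

    Returns the cell where exploration stopped and the distance travelled."""
    dist = 0
    while len(RAIL[cell]) == 1:
        cell = RAIL[cell][0]
        dist += 1
    return cell, dist


def build_tree(cell, depth):
    """Span a tree of the given depth, one node per switch/dead-end."""
    node, dist = explore_branch(cell)
    children = [] if depth == 0 else [
        build_tree(nxt, depth - 1) for nxt in RAIL[node]
    ]
    return {"cell": node, "distance": dist, "children": children}


# Starting from "A", the walk runs A -> B -> C (distance 2) before the
# switch at "C" forces a node; each outgoing branch is then explored.
tree = build_tree("A", depth=1)
```

In the real observation each branch direction is tried in a fixed order (left, forward, right, backward), which is what makes the tree "4-branched" regardless of the actual rail layout.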
## Rendering Specifications
### Scope
This doc specifies the software to meet the requirements in the Visualization requirements doc.
### References
- [Visualization Requirements](visualization)
- [Core Spec](./core)
### Interfaces

#### Interface with Environment Component

- Environment produces the Env Snapshot data structure (TBD)
- Renderer reads the Env Snapshot
- Connection between Env and Renderer, either:
  - Environment "invokes" the renderer in-process
  - Renderer "connects" to the environment
    - Eg Env acts as a server, Renderer as a client
- Either:
  - The Env sends a Snapshot to the renderer and waits for rendering
- Or:
  - The Env puts snapshots into a rendering queue
  - The renderer blocks / waits on the queue, waiting for a new snapshot to arrive
    - If several snapshots are waiting, delete and skip them and just render the most recent
  - Delete the snapshot after rendering
- Optionally:
  - Render every frame / time step
  - Or, render frames without blocking the environment
    - Render frames in a separate process / thread
##### Environment Snapshot

**Data Structure**

A definition of the data structure is to be given in the Core requirements or Interfaces doc.
##### Example only

Top-level dictionary

- World nd-array
  - Each element represents available transitions in a cell
- List of agents
  - Agent location, orientation, movement (forward / stop / turn?)
  - Observation
    - Rectangular observation
      - Maybe just dimensions - width + height (ie no need for contents)
      - Can be highlighted in display as per minigrid
    - Tree-based observation
      - TBD
#### Existing Tools / Libraries
1. Pygame
   1. Very easy to use; dead simple to add sprites etc. [Link](https://studywolf.wordpress.com/2015/03/06/arm-visualization-with-pygame/)
   2. No inbuilt support for threads/processes. Does get faster if using pypy/psyco.
2. PyQt
   1. Somewhat simple, a little more verbose to use the different modules.
   2. Multi-threaded via QThread! Yay! (Doesn't block the main thread that does the real work.) [Link](https://nikolak.com/pyqt-threading-tutorial/)
##### How to structure the code

1. Define draw functions/classes for each primitive
   1. Primitives: Agents (Trains), Railroad, Grass, Houses etc.
2. Background. Initialize the background before starting the episode.
   1. Static objects in the scene: directly draw those primitives once and cache.
##### Proposed Interfaces
To-be-filled
#### Technical Graphics Considerations
##### Overlay dynamic primitives over the background at each time step.
There is no point trying to compute which parts changed: every primitive needs to be explicitly drawn each frame anyway (that's how these renderers work).
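The cached-background idea can be sketched with a character grid standing in for a real pygame/PyQt surface. All names here are invented for illustration; the technique shown (draw static primitives once, copy the cache, overlay dynamic agents each frame) is the one described above.

```python
def draw_background(width, height, rails):
    """Draw the static primitives (rails) once; the result is the cache."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for (r, c) in rails:
        grid[r][c] = "#"  # static rail, never redrawn per frame
    return grid


def render_frame(background, agents):
    """Per frame: cheap copy of the cached background, then overlay
    only the dynamic primitives (agents)."""
    frame = [row[:] for row in background]
    for handle, (r, c) in agents.items():
        frame[r][c] = str(handle)
    return ["".join(row) for row in frame]


bg = draw_background(4, 2, rails=[(0, 0), (0, 1), (0, 2), (0, 3)])
frame = render_frame(bg, {0: (0, 2)})  # agent 0 sits on the rail at (0, 2)
```

With a real graphics library the copy step becomes a blit of the cached background surface, which is what makes per-frame redrawing cheap.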
## Visualization
![logo](https://drive.google.com/uc?export=view&id=1rstqMPJXFJd9iD46z1A5Rus-W0Ww6O8i)
### Introduction & Scope
Broad requirements for human-viewable display of a single Flatland Environment.
#### Context
Shows this software component in relation to some of the other components. We name the component the "Renderer". Multiple agents interact with a single Environment. A renderer interacts with the environment, and displays on screen, and/or into movie or image files.
### Requirements
#### Primary Requirements
1. Visualize or Render the state of the environment
   1. Read an Environment + Agent Snapshot provided by the Environment component
   2. Display onto a local screen in real-time (or near real-time)
   3. Include all the agents
   4. Illustrate the agent observations (typically subsets of the grid / world)
   5. 2d-rendering only
2. Output visualisation into movie / image files for use in later animation
3. Should not impose control-flow constraints on the Environment
   1. Should not force the env to respond to events
   2. Should not drive the "main loop" of inference or training
#### Secondary / Optional Requirements

1. During training (possibly across multiple processes or machines / OS instances), display a single training environment,
   1. without holding up the other environments in the training.
   2. Some training environments may be remote to the display machine (eg using GCP / AWS).
   3. Attach to / detach from a running environment / training cluster without restarting training.
2. Provide a switch to make use of graphics / artwork provided by a graphic artist
   1. Fast / compact mode for general use
   2. Beauty mode for publicity / demonstrations
3. Provide a switch between smooth / continuous animation of an agent (slower) vs jumping from cell to cell (faster)
   1. Smooth / continuous translation between cells
   2. Smooth / continuous rotation
4. Speed - ideally capable of 60fps (see performance metrics)
5. Window view - only render part of the environment, or a single agent and agents nearby.
   1. May not be feasible to render very large environments
   2. Possibly more than one window, ie one for each selected agent
   3. Window(s) can be tied to agents, ie they move around with the agent, and optionally rotate with the agent.
6. Interactive scaling
   1. eg wide view, narrow / enlarged view
   2. eg with mouse scrolling & zooming
7. Minimize the necessary skill-set for participants
   1. Python API to a GUI toolkit, no need for C/C++
8. View on various media:
   1. Linux & Windows local display
   2. Browser
#### Performance Metrics

Here are some performance metrics which the Renderer should meet.

| Operation | Per second | Target Time (ms) | Prototype time (ms) |
|---|---:|---:|---:|
| Write an agent update (ie env as client providing an agent update) | | 0.1 | |
| Draw an environment window 20x20 | 60 | 16 | |
| Draw an environment window 50x50 | 10 | | |
| Draw an agent update on an existing environment window, 5 agents visible | | 1 | |
#### Example Visualization
### Reference Documents
Link to this doc: https://docs.google.com/document/d/1Y4Mw0Q6r8PEOvuOZMbxQX-pV2QKDuwbZJBvn18mo9UU/edit#
#### Core Specification
This specifies the system containing the environment and agents - this will be able to run independently of the renderer.
[https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit](https://docs.google.com/document/d/1RN162b8wSfYTBblrdE6-Wi_zSgQTvVm6ZYghWWKn5t8/edit)
The data structure which the renderer needs to read initially resides here.
#### Visualization Specification
This will specify the software which will meet the requirements documented here.
[https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#](https://docs.google.com/document/d/1XYOe_aUIpl1h_RdHnreACvevwNHAZWT0XHDL0HsfzRY/edit#)
#### Interface Specification
This will specify the interfaces through which the different components communicate