diff --git a/docs/flatland.rst b/docs/flatland.rst
index 88e8ec93fd4f6c89c1f6e20c55defbddaa9b28fa..e09087a49b6df3572ac38b77e41ca739bcea8150 100644
--- a/docs/flatland.rst
+++ b/docs/flatland.rst
@@ -6,10 +6,10 @@ Subpackages
 
 .. toctree::
 
-    flatland.core
-    flatland.envs
-    flatland.evaluators
-    flatland.utils
+   flatland.core
+   flatland.envs
+   flatland.evaluators
+   flatland.utils
 
 Submodules
 ----------
@@ -18,15 +18,15 @@ flatland.cli module
 -------------------
 
 .. automodule:: flatland.cli
-    :members:
-    :undoc-members:
-    :show-inheritance:
+   :members:
+   :undoc-members:
+   :show-inheritance:
 
 
 Module contents
 ---------------
 
 .. automodule:: flatland
-    :members:
-    :undoc-members:
-    :show-inheritance:
+   :members:
+   :undoc-members:
+   :show-inheritance:
diff --git a/docs/flatland_2.0.md b/docs/flatland_2.0.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ed2f1bdcd2e460fb60b1982047d03dc223509e2
--- /dev/null
+++ b/docs/flatland_2.0.md
@@ -0,0 +1,150 @@
+# Flatland 2.0 Introduction (Beta)
+
+Curious to see what's coming in **Flat**land 2.0? Have a look at the current development, report bugs, and give us feedback on the environment.
+
+**WARNING**: Flatland 2.0 Beta is under active development and is neither stable nor final. We would, however, like you to play with the code and help us build the best possible environment for multi-agent control problems.
+
+## What's new
+
+In this version of **Flat**land we are moving closer to realistic and more complex railway problems. Earlier versions of **Flat**land, which introduced you to the concept of restricted transitions, were still too simplified to yield feasible solutions for daily operations. The following changes therefore bring the next version closer to real railway network challenges:
+
+- **New Level Generator** with fewer connections between different nodes in the network, and thus much higher agent densities on rails.
+- **Stochastic Events** that cause agents to stop and get stuck for varying numbers of time steps.
+- **Different Speed Classes** allow agents to move at different speeds and thus enhance complexity in the search for optimal solutions.
+
+Below we explain these changes in more detail and how you can play with their parametrization. We appreciate *your feedback* on the performance and the difficulty of these levels to help us shape the best possible **Flat**land 2.0 environment.
+
+## Get the new level generators
+Since this is currently still in the *beta* phase, you can only install this version of **Flat**land through the gitlab repository. Once you have downloaded the [Flatland Repository](https://gitlab.aicrowd.com/flatland/flatland), you have to switch to the [147_new_level_generator](https://gitlab.aicrowd.com/flatland/flatland/tree/147_new_level_generator) branch to access the latest changes in **Flat**land.
+
+Once you have switched to this branch install **Flat**land by running `python setup.py install`.
+
+## Generate levels
+
+We are currently working on several new level generators. You can expect that the levels used for submission testing will come from different level generators rather than just one, to make sure that controllers can handle any railway-specific challenge.
+
+For this early **beta** testing we suggest you have a look at the `sparse_rail_generator` and `realistic_rail_generator`.
+
+### Sparse Rail Generator
+![Example_Sparse](https://i.imgur.com/DP8sIyx.png)
+
+The idea behind the sparse rail generator is to mimic classic railway structures where dense nodes (cities) are sparsely connected to each other and where you have to manage traffic flow between the nodes efficiently. The cities in this level generator are much simplified in comparison to real city networks, but they mimic parts of the problems faced in daily operations of any railway company.
+
+There are a few parameters you can tune to build your own map and test levels of different complexity. **Warning**: some combinations of parameters do not go well together and will lead to infeasible level generation. Currently the level generator only issues a warning when it cannot build the environment according to the provided parameters; this will then lead to a crash of the whole env. We are working on improvements here and are **happy about any suggestions from your side**.
+
+To build an environment you instantiate a `RailEnv` as follows:
+
+```python
+# Initialize the generator
+rail_generator = sparse_rail_generator(num_cities=10,         # Number of cities in map
+                                       num_intersections=10,  # Number of intersections in map
+                                       num_trainstations=50,  # Number of possible start/targets on map
+                                       min_node_dist=6,       # Minimal distance between nodes
+                                       node_radius=3,         # Proximity of stations to city center
+                                       num_neighb=3,          # Number of connections to other cities
+                                       seed=5,                # Random seed
+                                       realistic_mode=True)   # Ordered distribution of nodes
+
+# Build the environment
+env = RailEnv(width=50,
+              height=50,
+              rail_generator=rail_generator,
+              number_of_agents=10,
+              obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=shortest_path_predictor))
+```
+
+You can tune the following parameters:
+
+- `num_cities` is the number of cities on a map. Cities are the only nodes that can host start and end points for agent tasks (train stations). Be careful that this number is not too high, as all the cities have to fit on the map. When `realistic_mode=False` you also have to be careful when choosing `min_node_dist`, because level generation will fail if not all cities (and intersections) can be placed with at least `min_node_dist` between them.
+- `num_intersections` is the number of nodes that don't hold any train stations. They are also the first priority that a city connects to. We use these to allow for sparse connections between cities.
+- `num_trainstations` defines the *total* number of train stations in the network. This also sets the maximum number of allowed agents in the environment. This is a delicate parameter, as there is only a limited amount of space available around nodes; if the number is too high, level generation will fail. *Important*: Only the number of agents provided to the environment will actually produce active train stations. The others will just be present as dead ends (see figures below).
+- `min_node_dist` is only used if `realistic_mode=False` and represents the minimal distance between two nodes.
+- `node_radius` defines the extent of a city. Each train station is placed at a distance to the closest city node that is smaller than or equal to this number.
+- `num_neighb` defines the number of neighbouring nodes that connect to each other. This changes the connectivity and thus the number of alternative routes in the network.
+- `seed` is used to initialize the random generator.
+- `realistic_mode` currently only changes how the nodes are distributed. If it is set to `True`, the nodes are evenly spread out and cities and intersections are placed alternately.
+
+If you run into any bugs with particular parameter sets, please let us know.
+
+Here is a network with `realistic_mode=False` and the parameters from above.
+
+![sparse_random](https://i.imgur.com/Xg7nifF.png)
+
+and here with `realistic_mode=True`
+
+![sparse_ordered](https://i.imgur.com/jyA7Pt4.png)
+
+## Add Stochasticity
+
+Another area where we improve **Flat**land 2.0 is by adding stochastic events during the episodes. This is very common in railway networks, where the initial plan usually needs to be rescheduled during operations because minor events, such as delayed departures from train stations, malfunctions of trains or infrastructure, or simply the weather, lead to delayed trains.
+
+We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
+
+```python
+# Use the malfunction generator to break agents from time to time
+stochastic_data = {'prop_malfunction': 0.5,  # Percentage of defective agents
+                   'malfunction_rate': 30,   # Rate of malfunction occurrence
+                   'min_duration': 3,        # Minimal duration of a malfunction
+                   'max_duration': 10        # Maximal duration of a malfunction
+                   }
+```
+
+The parameters are as follows:
+
+- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that each agent can break.
+- `malfunction_rate` is the mean rate of the Poisson process, expressed in environment steps.
+- `min_duration` and `max_duration` set the range of malfunction durations. Durations are sampled uniformly from this range.
+
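To build some intuition for these parameters, here is a minimal plain-Python sketch of such a Poisson malfunction process. The helper name and structure are illustrative only, not the actual flatland internals: malfunctions arrive with exponential inter-arrival times whose mean is `malfunction_rate` steps, which yields a Poisson process, and each malfunction lasts a uniformly sampled duration.

```python
import random

def sample_malfunctions(malfunction_rate, min_duration, max_duration, n_steps, seed=0):
    """Sample (step, duration) malfunction events for one agent over an episode.

    Illustrative sketch of the process described above, not the flatland code:
    inter-arrival times are exponential with mean `malfunction_rate` steps,
    durations are uniform integers in [min_duration, max_duration].
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    while True:
        t += rng.expovariate(1.0 / malfunction_rate)  # time until the next arrival
        if t >= n_steps:
            break
        events.append((int(t), rng.randint(min_duration, max_duration)))
    return events
```

With `malfunction_rate=30` you should see roughly `n_steps / 30` malfunctions per episode on average.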
+You can introduce stochasticity by simply creating the env as follows:
+
+```python
+# Use the malfunction generator to break agents from time to time
+stochastic_data = {'prop_malfunction': 0.5,  # Percentage of defective agents
+                   'malfunction_rate': 30,  # Rate of malfunction occurrence
+                   'min_duration': 3,  # Minimal duration of malfunction
+                   'max_duration': 10  # Max duration of malfunction
+                   }
+
+# Use your own observation builder
+TreeObservation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())
+
+env = RailEnv(width=10,
+              height=10,
+              rail_generator=sparse_rail_generator(num_cities=3,  # Number of cities in map (where train stations are)
+                                                   num_intersections=1,  # Number of intersections (no start / target)
+                                                   num_trainstations=8,  # Number of possible start/targets on map
+                                                   min_node_dist=3,  # Minimal distance of nodes
+                                                   node_radius=2,  # Proximity of stations to city center
+                                                   num_neighb=2,  # Number of connections to other cities/intersections
+                                                   seed=15,  # Random seed
+                                                   ),
+              number_of_agents=5,
+              stochastic_data=stochastic_data,  # Malfunction generator data
+              obs_builder_object=TreeObservation)
+```
+
+You will quickly realize that this will lead to unforeseen difficulties, which means that **your controller** needs to observe the environment at all times to be able to react to the stochastic events.
+
+## Add different speed profiles
+
+One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network. In **Flat**land 2.0 this feature will be enabled as well and will lead to much more complex configurations. This is still in early *beta* and even though stock observation builders and predictors do support these changes we have not yet fully tested them. Here we count on your support :).
+
+Currently you have to initialize the speed profiles manually after the environment has been reset (*Attention*: this is currently being worked on and will change soon). In order for agents to have different speed profiles you can include this after your `env.reset()` call:
+
+```python
+# Reset environment and get initial observations for all agents
+obs = env.reset()
+for idx in range(env.get_num_agents()):
+    tmp_agent = env.agents[idx]
+    speed = (idx % 4) + 1
+    tmp_agent.speed_data["speed"] = 1 / speed
+```
+
+You can choose as many different speeds as you like. Keep in mind that the *fastest speed* is 1 and all slower speeds must lie between 0 and 1. For the submission scoring you can assume that there will be no more than 5 speed profiles.
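The fractional-speed mechanics can be sketched in a few lines of plain Python (illustrative only, not the actual flatland implementation): an agent accumulates its speed fraction every step and advances one cell whenever the accumulated fraction reaches 1.

```python
def steps_to_cross(speed, n_cells):
    """Number of env steps an agent with the given speed (0 < speed <= 1)
    needs to traverse n_cells cells, under the fractional-speed rule
    sketched above (illustrative, not the actual flatland code)."""
    fraction = 0.0
    steps = 0
    cells_moved = 0
    while cells_moved < n_cells:
        steps += 1
        fraction += speed
        if fraction >= 1.0:  # move to the next cell, keep the remainder
            fraction -= 1.0
            cells_moved += 1
    return steps
```

A speed-1 agent crosses 4 cells in 4 steps, while a speed-1/2 agent needs 8; this mix of travel times is what makes multi-speed scheduling hard.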
+
+## Example code
+
+To see all the changes in action, run the `flatland_2_0_example.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/147_new_level_generator/examples/flatland_2_0_example.py).
diff --git a/docs/intro_observationbuilder.rst b/docs/intro_observationbuilder.rst
index 86631e3f455110996c84e0035871445b5280c83f..3cde8167838a6040db8513d5944421ce5e02460d 100644
--- a/docs/intro_observationbuilder.rst
+++ b/docs/intro_observationbuilder.rst
@@ -12,7 +12,7 @@ Whenever an environment needs to compute new observations for each agent, it que
 .. _Flatland-Challenge: https://www.aicrowd.com/challenges/flatland-challenge
 
 Example 1 : Simple (but useless) observation
---------------
+--------------------------------------------------------
 In this first example we implement all the functions necessary for the observation builder to be valid and work with **Flatland**.
 Custom observation builder objects need to derive from the `flatland.core.env_observation_builder.ObservationBuilder`_
 base class and must implement two methods, :code:`reset(self)` and :code:`get(self, handle)`.
@@ -54,7 +54,7 @@ In the next example we highlight how to derive from existing observation builder
 
 
 Example 2 : Single-agent navigation
---------------
+-------------------------------------
 
 Observation builder objects can of course derive from existing concrete subclasses of ObservationBuilder.
 For example, it may be useful to extend the TreeObsForRailEnv_ observation builder.
@@ -157,7 +157,7 @@ navigation to target, and shows the path taken as an animation.
 The code examples above appear in the example file `custom_observation_example.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/custom_observation_example.py>`_. You can run it using :code:`python examples/custom_observation_example.py` from the root folder of the flatland repo.  The two examples are run one after the other.
 
 Example 3 : Using custom predictors and rendering observation
---------------
+-------------------------------------------------------------
 
 Because the re-scheduling task of the Flatland-Challenge_ requires some short time planning we allow the possibility to use custom predictors that help predict upcoming conflicts and help agent solve them in a timely manner.
 In the **Flatland Environment** we included an initial predictor ShortestPathPredictorForRailEnv_ to give you an idea what you can do with these predictors.
@@ -291,3 +291,73 @@ We can then use this new observation builder and the renderer to visualize the o
         print("Rewards: ", all_rewards, "  [done=", done, "]")
         env_renderer.render_env(show=True, frames=True, show_observations=True, show_predictions=False)
         time.sleep(0.5)
+
+How to access environment and agent data for observation builders
+------------------------------------------------------------------
+
+When building your custom observation builder, you might want to aggregate and define your own features that are different from the raw env data. In this section we introduce how such information can be accessed and how you can build your own features out of them.
+
+Transition maps
+~~~~~~~~~~~~~~~~
+
+The transition maps build the base for all movement in the environment. They contain all the information about allowed transitions for the agent at any given position. Because railway movement is limited to the railway tracks, these are important features for any controller that wants to interact with the environment. All functionality and features of a transition map can be found here_.
+
+.. _here: https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/core/transition_map.py
+
+**Get Transitions for cell**
+
+To access the possible transitions at any given cell there are different possibilities:
+
+1. You provide a cell position and an orientation in that cell (usually the orientation of the agent) and call :code:`cell_transitions = env.rail.get_transitions(*position, direction)`. In return you get a 4-element vector with the allowed transitions, ordered as :code:`[North, East, South, West]` given the initial orientation. The position is a tuple of the form :code:`(x, y)` where :code:`x in [0, height]` and :code:`y in [0, width]`. This can be used for branching in a tree search and when looking for all possible allowed paths of an agent, as it provides a simple way to get the possible trajectories.
+
+2. When more detailed information about the cell in general is necessary, you can get the full transitions of a cell by calling :code:`transition_int = env.rail.get_full_transitions(*position)`. This returns an :code:`int16` representing the allowed transitions of the cell. To understand the returned value it is best to represent it as a binary number :code:`bin(transition_int)`, where the bits have the following meaning: :code:`NN NE NS NW EN EE ES EW SN SE SS SW WN WE WS WW`. For example, the binary code :code:`1000 0000 0010 0000` represents a straight track where an agent facing north can transition north, an agent facing south can transition south, and no other transitions are possible. To get a better feeling for what the binary representations of the track elements look like, have a look at this Link_.
+
+.. _Link: https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/core/grid/rail_env_grid.py#L29
+
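As a quick illustration of this bit layout, the 16-bit integer can be unpacked into (heading, direction) pairs with a few lines of plain Python (an illustrative helper, not part of the flatland API):

```python
def decode_full_transitions(transition_int):
    # Bit order as described above:
    # NN NE NS NW EN EE ES EW SN SE SS SW WN WE WS WW
    headings = ["N", "E", "S", "W"]
    bits = format(transition_int, "016b")
    return [(headings[i // 4], headings[i % 4])
            for i, bit in enumerate(bits) if bit == "1"]

# The straight track from the example above:
print(decode_full_transitions(0b1000000000100000))  # [('N', 'N'), ('S', 'S')]
```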
+
+These two objects can be used for example to detect switches that are usable by other agents but not the observing agent itself. This can be an important feature when actions have to be taken in order to avoid conflicts.
+
+.. code-block:: python
+
+    cell_transitions = self.env.rail.get_transitions(*position, direction)
+    transition_bit = bin(self.env.rail.get_full_transitions(*position))
+    
+    total_transitions = transition_bit.count("1")
+    num_transitions = np.count_nonzero(cell_transitions)
+
+    # Detect Switches that can only be used by other agents.
+    if total_transitions > 2 > num_transitions:
+        unusable_switch_detected = True
+
+
+Agent information
+~~~~~~~~~~~~~~~~~~
+
+The agents are represented as an agent class and are provided when the environment is instantiated. Because agents can have different properties it is helpful to know how to access this information.
+
+You can simply access the three main types of agent information in the following ways with :code:`agent = env.agents[handle]`:
+
+**Agent basic information**
+All the agents in the initiated environment can be found in the :code:`env.agents` list. Given the index of an agent, you have access to:
+
+- Agent position :code:`agent.position` which returns the current coordinates :code:`(x, y)` of the agent.
+- Agent target :code:`agent.target`  which returns the target coordinates :code:`(x, y)`.
+- Agent direction :code:`agent.direction` which is an int representing the current orientation :code:`{0: North, 1: East, 2: South, 3: West}`.
+- Agent moving :code:`agent.moving` where 0 means the agent is currently not moving and 1 means it is moving.
+
+**Agent speed information**
+
+Beyond the basic agent information we can also access more details about an agent's type by looking at its speed data:
+
+- Agent maximal speed :code:`agent.speed_data["speed"]` which defines the traveling speed when the agent is moving.
+- Agent position fraction :code:`agent.speed_data["position_fraction"]` which is a number between 0 and 1 and indicates when the move to the next cell will occur. Each agent's speed is 1 or a smaller fraction. At each :code:`env.step()` the agent moves forward at its fractional speed and only advances to the next cell once the accumulated fraction satisfies :code:`agent.speed_data["position_fraction"] >= 1`.
+
+**Agent malfunction information**
+
+Similar to the speed data you can also access individual data about the malfunctions of an agent. All data is available through :code:`agent.malfunction_data` with:
+
+- Indication of how long the agent will still be malfunctioning :code:`'malfunction'`, an integer counting down at each time step; 0 means the agent is fine and can move.
+- Poisson rate at which malfunctions happen for this agent :code:`'malfunction_rate'`.
+- Number of steps until the next malfunction will occur :code:`'next_malfunction'`.
+- Number of malfunctions this agent has had so far :code:`'nr_malfunctions'`.
+
diff --git a/flatland/utils/graphics_pil.py b/flatland/utils/graphics_pil.py
index 47ff8e6c06423b7ff515e73bd590ea32399de3d6..6a0a9282614c0319338454f5b8ae97531b12e432 100644
--- a/flatland/utils/graphics_pil.py
+++ b/flatland/utils/graphics_pil.py
@@ -4,7 +4,7 @@ import time
 import tkinter as tk
 
 import numpy as np
-from PIL import Image, ImageDraw, ImageTk  # , ImageFont
+from PIL import Image, ImageDraw, ImageTk, ImageFont
 from numpy import array
 from pkg_resources import resource_string as resource_bytes
 
@@ -90,6 +90,8 @@ class PILGL(GraphicsLayer):
         self.old_background_image = (None, None, None)
         self.create_layers()
 
+        self.font = ImageFont.load_default()
+
     def build_background_map(self, dTargets):
         x = self.old_background_image
         rebuild = False
@@ -167,8 +169,14 @@ class PILGL(GraphicsLayer):
         # quit but not destroy!
         self.__class__.window.quit()
 
-    def text(self, *args, **kwargs):
-        pass
+    def text(self, xPx, yPx, strText, layer=RAIL_LAYER):
+        xyPixLeftTop = (xPx, yPx)
+        self.draws[layer].text(xyPixLeftTop, strText, font=self.font, fill=(0, 0, 0, 255))
+        
+    def text_rowcol(self, rcTopLeft, strText, layer=AGENT_LAYER):
+        print("Text:", "rc:", rcTopLeft, "text:", strText, "layer:", layer)
+        xyPixLeftTop = tuple((array(rcTopLeft) * self.nPixCell)[[1, 0]])
+        self.text(*xyPixLeftTop, strText, layer)
 
     def prettify(self, *args, **kwargs):
         pass
@@ -492,13 +500,17 @@ class PILSVG(PILGL):
                                           False)[0]
         self.draw_image_row_col(colored_rail, (row, col), layer=PILGL.PREDICTION_PATH_LAYER)
 
-    def set_rail_at(self, row, col, binary_trans, target=None, is_selected=False, rail_grid=None):
+    def set_rail_at(self, row, col, binary_trans, target=None, is_selected=False, rail_grid=None, 
+            show_debug=True):
+        
         if binary_trans in self.pil_rail:
             pil_track = self.pil_rail[binary_trans]
             if target is not None:
                 target_img = self.station_colors[target % len(self.station_colors)]
                 target_img = Image.alpha_composite(pil_track, target_img)
                 self.draw_image_row_col(target_img, (row, col), layer=PILGL.TARGET_LAYER)
+                if show_debug:
+                    self.text_rowcol((row+0.8, col+0.0), strText=str(target), layer=PILGL.TARGET_LAYER)
 
             if binary_trans == 0:
                 if self.background_grid[col][row] <= 4:
@@ -579,7 +591,7 @@ class PILSVG(PILGL):
                 for color_idx, pil_zug_3 in enumerate(pils):
                     self.pil_zug[(in_direction_2, out_direction_2, color_idx)] = pils[color_idx]
 
-    def set_agent_at(self, agent_idx, row, col, in_direction, out_direction, is_selected):
+    def set_agent_at(self, agent_idx, row, col, in_direction, out_direction, is_selected, show_debug=False):
         delta_dir = (out_direction - in_direction) % 4
         color_idx = agent_idx % self.n_agent_colors
         # when flipping direction at a dead end, use the "out_direction" direction.
@@ -593,6 +605,10 @@ class PILSVG(PILGL):
             self.clear_layer(PILGL.SELECTED_AGENT_LAYER, 0)
             self.draw_image_row_col(bg_svg, (row, col), layer=PILGL.SELECTED_AGENT_LAYER)
 
+        if show_debug:
+            print("Call text:")
+            self.text_rowcol((row+0.2, col+0.2,), str(agent_idx))
+
     def set_cell_occupied(self, agent_idx, row, col):
         occupied_im = self.cell_occupied[agent_idx % len(self.cell_occupied)]
         self.draw_image_row_col(occupied_im, (row, col), 1)
diff --git a/flatland/utils/rendertools.py b/flatland/utils/rendertools.py
index 5118af75f7170b951558451c9b6f3b6b7828a759..fc2ee30658629b907275295dde2640c79bb6f863 100644
--- a/flatland/utils/rendertools.py
+++ b/flatland/utils/rendertools.py
@@ -39,8 +39,11 @@ class RenderTool(object):
     theta = np.linspace(0, np.pi / 2, 5)
     arc = array([np.cos(theta), np.sin(theta)]).T  # from [1,0] to [0,1]
 
-    def __init__(self, env, gl="PILSVG", jupyter=False, agent_render_variant=AgentRenderVariant.ONE_STEP_BEHIND,
-                 screen_width=800, screen_height=600):
+
+    def __init__(self, env, gl="PILSVG", jupyter=False,
+            agent_render_variant=AgentRenderVariant.ONE_STEP_BEHIND,
+            show_debug=True, screen_width=800, screen_height=600):
+
         self.env = env
         self.frame_nr = 0
         self.start_time = time.time()
@@ -57,6 +60,7 @@ class RenderTool(object):
             self.gl = PILSVG(env.width, env.height, jupyter, screen_width=screen_width, screen_height=screen_height)
 
         self.new_rail = True
+        self.show_debug = show_debug
         self.update_background()
 
     def reset(self):
@@ -283,7 +287,7 @@ class RenderTool(object):
         if len(observation_dict) < 1:
             warnings.warn(
                 "Predictor did not provide any predicted cells to render. \
-                Observaiton builder needs to populate: env.dev_obs_dict")
+                Observation builder needs to populate: env.dev_obs_dict")
         else:
             for agent in agent_handles:
                 color = self.gl.get_agent_color(agent)
@@ -526,7 +530,7 @@ class RenderTool(object):
                         is_selected = False
 
                     self.gl.set_rail_at(r, c, transitions, target=target, is_selected=is_selected,
-                                        rail_grid=env.rail.grid)
+                                        rail_grid=env.rail.grid, show_debug=self.show_debug)
 
             self.gl.build_background_map(targets)
 
@@ -551,7 +555,8 @@ class RenderTool(object):
                 # set_agent_at uses the agent index for the color
                 if self.agent_render_variant == AgentRenderVariant.ONE_STEP_BEHIND_AND_BOX:
                     self.gl.set_cell_occupied(agent_idx, *(agent.position))
-                self.gl.set_agent_at(agent_idx, *position, old_direction, direction, selected_agent == agent_idx)
+                self.gl.set_agent_at(agent_idx, *position, old_direction, direction,
+                    selected_agent == agent_idx, show_debug=self.show_debug)
             else:
                 position = agent.position
                 direction = agent.direction
@@ -563,7 +568,7 @@ class RenderTool(object):
 
                         # set_agent_at uses the agent index for the color
                         self.gl.set_agent_at(agent_idx, *position, agent.direction, direction,
-                                             selected_agent == agent_idx)
+                                             selected_agent == agent_idx, show_debug=self.show_debug)
 
                 # set_agent_at uses the agent index for the color
                 if self.agent_render_variant == AgentRenderVariant.AGENT_SHOWS_OPTIONS_AND_BOX:
diff --git a/tests/test_multi_speed.py b/tests/test_multi_speed.py
index 8b5468716fa12d83a0546a9b6ff34f2488beace5..47aadee73cfc0b45dc701fe914a586feb31b2597 100644
--- a/tests/test_multi_speed.py
+++ b/tests/test_multi_speed.py
@@ -66,7 +66,7 @@ def test_multi_speed_init():
     # Run episode
     for step in range(100):
 
-        # Chose an action for each agent in the environment
+        # Choose an action for each agent in the environment
         for a in range(env.get_num_agents()):
             action = agent.act(0)
             action_dict.update({a: action})
@@ -75,12 +75,11 @@ def test_multi_speed_init():
             assert old_pos[a] == env.agents[a].position
 
         # Environment step which returns the observations for all agents, their corresponding
-        # reward and whether their are done
+        # reward and whether they are done
         _, _, _, _ = env.step(action_dict)
 
         # Update old position whenever an agent was allowed to move
         for i_agent in range(env.get_num_agents()):
             if (step + 1) % (i_agent + 1) == 0:
-                print(step, i_agent, env.agents[a].position)
-
+                print(step, i_agent, env.agents[i_agent].position)
                 old_pos[i_agent] = env.agents[i_agent].position