diff --git a/docs/flatland_2.0.md b/docs/flatland_2.0.md
new file mode 100644
index 0000000000000000000000000000000000000000..0ed2f1bdcd2e460fb60b1982047d03dc223509e2
--- /dev/null
+++ b/docs/flatland_2.0.md
@@ -0,0 +1,150 @@
+# Flatland 2.0 Introduction (Beta)
+
+Curious to see what's coming in **Flat**land 2.0? Have a look at the current development, report bugs and give us feedback on the environment.
+
+**WARNING**: Flatland 2.0 Beta is under active development and is neither stable nor final. We would however like you to play with the code and help us get the best possible environment for multi-agent control problems.
+
+## What's new
+
+In this version of **Flat**land we are moving closer to realistic and more complex railway problems. Earlier versions of **Flat**land, which introduced you to the concept of restricted transitions, were still too simplified to give us feasible solutions for daily operations. Thus the following changes are coming in the next version to bring us closer to real railway network challenges:
+
+- **New Level Generator** with fewer connections between different nodes in the network and thus much higher agent densities on rails.
+- **Stochastic Events** that cause agents to stop and get stuck for varying numbers of time steps.
+- **Different Speed Classes** allow agents to move at different speeds and thus add complexity to the search for optimal solutions.
+
+Below we explain these changes in more detail and how you can play with their parametrization. We appreciate *your feedback* on the performance and the difficulty of these levels to help us shape the best possible **Flat**land 2.0 environment.
+
+## Get the new level generators
+Since this is currently still in the *beta* phase, you can only install this version of **Flat**land through the GitLab repository.
+Once you have downloaded the [Flatland Repository](https://gitlab.aicrowd.com/flatland/flatland) you have to switch to the [147_new_level_generator](https://gitlab.aicrowd.com/flatland/flatland/tree/147_new_level_generator) branch to be able to access the latest changes in **Flat**land.
+
+Once you have switched to this branch, install **Flat**land by running `python setup.py install`.
+
+## Generate levels
+
+We are currently working on several new level generators, and you can expect that the levels in the submission testing will not all come from just one but rather from different level generators, to make sure that the controllers can handle any railway-specific challenge.
+
+For this early **beta** testing we suggest you have a look at the `sparse_rail_generator` and `realistic_rail_generator`.
+
+### Sparse Rail Generator
+
+
+The idea behind the sparse rail generator is to mimic classic railway structures where dense nodes (cities) are sparsely connected to each other and where you have to manage the traffic flow between the nodes efficiently. The cities in this level generator are much simplified in comparison to real city networks, but they mimic parts of the problems faced in the daily operations of any railway company.
+
+There are a few parameters you can tune to build your own map and test levels of different complexity. **Warning**: some combinations of parameters do not go well together and will lead to infeasible level generation. In the worst case, the level generator currently only issues a warning when it cannot build the environment according to the parameters provided, which will then lead to a crash of the whole env. We are currently working on improvements here and are **happy for any suggestions from your side**.
+
+To build an environment you instantiate a `RailEnv` as follows:
+
+```
+# Initialize the generator
+RailGenerator = sparse_rail_generator(num_cities=10,         # Number of cities in map
+                                      num_intersections=10,  # Number of intersections in map
+                                      num_trainstations=50,  # Number of possible start/targets on map
+                                      min_node_dist=6,       # Minimal distance between nodes
+                                      node_radius=3,         # Proximity of stations to city center
+                                      num_neighb=3,          # Number of connections to other cities
+                                      seed=5,                # Random seed
+                                      realistic_mode=True    # Ordered distribution of nodes
+                                      )
+
+# Build the environment
+env = RailEnv(width=50,
+              height=50,
+              rail_generator=RailGenerator,
+              number_of_agents=10,
+              obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=shortest_path_predictor)
+              )
+```
+
+You can tune the following parameters:
+
+- `num_cities` is the number of cities on a map. Cities are the only nodes that can host start and end points for agent tasks (train stations). Here you have to be careful that the number is not too high, as all the cities have to fit on the map. When `realistic_mode=False` you also have to be careful when choosing `min_node_dist`, because level generation will fail if not all cities (and intersections) can be placed with at least `min_node_dist` between them.
+- `num_intersections` is the number of nodes that don't hold any train stations. They are also the first priority that a city connects to. We use these to allow for sparse connections between cities.
+- `num_trainstations` defines the *total* number of train stations in the network. This also sets the maximum number of allowed agents in the environment. This is likewise a delicate parameter, as there is only a limited amount of space available around nodes; if the number is too high, level generation will fail. *Important*: Only the number of agents provided to the environment will actually produce active train stations. The others will just be present as dead-ends (see figures below).
+- `min_node_dist` is only used if `realistic_mode=False` and represents the minimal distance between two nodes.
+- `node_radius` defines the extent of a city. Each train station is placed at a distance to the closest city node that is smaller than or equal to this number.
+- `num_neighb` defines the number of neighbouring nodes that connect to each other. This changes the connectivity and thus the number of alternative routes in the network.
+- `seed` is used to initialize the random number generator.
+- `realistic_mode` currently only changes how the nodes are distributed. If it is set to `True`, the nodes are evenly spread out and cities and intersections alternate with each other.
+
+If you run into any bugs with particular sets of parameters, please let us know.
+
+Here is a network with `realistic_mode=False` and the parameters from above.
+
+
+
+and here with `realistic_mode=True`
+
+
+
+## Add Stochasticity
+
+Another area where we improve **Flat**land 2.0 is by adding stochastic events during the episodes. This is very common for railway networks, where the initial plan usually needs to be rescheduled during operations because minor events such as delayed departures from train stations, malfunctions of trains or infrastructure, or simply the weather lead to delayed trains.
+
+We implemented a Poisson process to simulate delays by stopping agents at random times for random durations. The parameters necessary for the stochastic events can be provided when creating the environment.
+
+```
+# Use the malfunction generator to break agents from time to time
+stochastic_data = {'prop_malfunction': 0.5,  # Percentage of defective agents
+                   'malfunction_rate': 30,   # Rate of malfunction occurrence
+                   'min_duration': 3,        # Minimal duration of malfunction
+                   'max_duration': 10        # Max duration of malfunction
+                   }
+```
+
+The parameters are as follows:
+
+- `prop_malfunction` is the proportion of agents that can malfunction. `1.0` means that every agent can break.
+- `malfunction_rate` is the mean rate of the Poisson process, in number of environment steps.
+- `min_duration` and `max_duration` set the range of malfunction durations, which are sampled uniformly.
+
+You can introduce stochasticity by simply creating the env as follows:
+
+```
+# Use the malfunction generator to break agents from time to time
+stochastic_data = {'prop_malfunction': 0.5,  # Percentage of defective agents
+                   'malfunction_rate': 30,   # Rate of malfunction occurrence
+                   'min_duration': 3,        # Minimal duration of malfunction
+                   'max_duration': 10        # Max duration of malfunction
+                   }
+
+# Use your own observation builder
+TreeObservation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv())
+
+env = RailEnv(width=10,
+              height=10,
+              rail_generator=sparse_rail_generator(num_cities=3,        # Number of cities in map (where train stations are)
+                                                   num_intersections=1, # Number of intersections (no start / target)
+                                                   num_trainstations=8, # Number of possible start/targets on map
+                                                   min_node_dist=3,     # Minimal distance between nodes
+                                                   node_radius=2,       # Proximity of stations to city center
+                                                   num_neighb=2,        # Number of connections to other cities/intersections
+                                                   seed=15,             # Random seed
+                                                   ),
+              number_of_agents=5,
+              stochastic_data=stochastic_data,  # Malfunction generator data
+              obs_builder_object=TreeObservation)
+```
+
+You will quickly realize that this leads to unforeseen difficulties, which means that **your controller** needs to observe the environment at all times to be able to react to the stochastic events.
+
+## Add different speed profiles
+
+One of the main contributions to the complexity of railway network operations stems from the fact that all trains travel at different speeds while sharing a very limited railway network. In **Flat**land 2.0 this feature will be enabled as well and will lead to much more complex configurations.
+This is still in early *beta*, and even though the stock observation builders and predictors do support these changes, we have not yet fully tested them. Here we count on your support :).
+
+Currently you have to initialize the speed profiles manually after the environment has been reset (*Attention*: this is currently being worked on and will change soon). In order for agents to have different speed profiles, you can include this after your `env.reset()` call:
+
+```
+# Reset environment and get initial observations for all agents
+obs = env.reset()
+for idx in range(env.get_num_agents()):
+    tmp_agent = env.agents[idx]
+    speed = (idx % 4) + 1
+    tmp_agent.speed_data["speed"] = 1 / speed
+```
+
+You can choose as many different speeds as you like. Keep in mind that the *fastest speed* is 1 and all slower speeds must be between 0 and 1. For the submission scoring you can assume that there will be no more than 5 speed profiles.
+
+## Example code
+
+To see all the changes in action you can just run the `flatland_2_0_example.py` file in the examples folder. The file can be found [here](https://gitlab.aicrowd.com/flatland/flatland/blob/147_new_level_generator/examples/flatland_2_0_example.py).
diff --git a/docs/intro_observationbuilder.rst b/docs/intro_observationbuilder.rst
index 86631e3f455110996c84e0035871445b5280c83f..3cde8167838a6040db8513d5944421ce5e02460d 100644
--- a/docs/intro_observationbuilder.rst
+++ b/docs/intro_observationbuilder.rst
@@ -12,7 +12,7 @@ Whenever an environment needs to compute new observations for each agent, it que
 .. _Flatland-Challenge: https://www.aicrowd.com/challenges/flatland-challenge
 
 Example 1 : Simple (but useless) observation
---------------
+--------------------------------------------------------
 In this first example we implement all the functions necessary for the observation builder to be valid and work with **Flatland**.
 Custom observation builder objects need to derive from the `flatland.core.env_observation_builder.ObservationBuilder`_ base class and must implement two methods, :code:`reset(self)` and :code:`get(self, handle)`.
@@ -54,7 +54,7 @@ In the next example we highlight how to derive from existing observation builder
 
 Example 2 : Single-agent navigation
---------------
+-------------------------------------
 
 Observation builder objects can of course derive from existing concrete subclasses of ObservationBuilder.
 For example, it may be useful to extend the TreeObsForRailEnv_ observation builder.
@@ -157,7 +157,7 @@ navigation to target, and shows the path taken as an animation.
 
 The code examples above appear in the example file `custom_observation_example.py <https://gitlab.aicrowd.com/flatland/flatland/blob/master/examples/custom_observation_example.py>`_. You can run it using :code:`python examples/custom_observation_example.py` from the root folder of the flatland repo. The two examples are run one after the other.
 
 Example 3 : Using custom predictors and rendering observation
---------------
+-------------------------------------------------------------
 
 Because the re-scheduling task of the Flatland-Challenge_ requires some short-term planning, we allow the possibility to use custom predictors that help predict upcoming conflicts and help agents solve them in a timely manner.
 In the **Flatland Environment** we included an initial predictor, ShortestPathPredictorForRailEnv_, to give you an idea of what you can do with these predictors.
@@ -291,3 +291,73 @@ We can then use this new observation builder and the renderer to visualize the o
         print("Rewards: ", all_rewards, "  [done=", done, "]")
         env_renderer.render_env(show=True, frames=True, show_observations=True, show_predictions=False)
         time.sleep(0.5)
+
+How to access environment and agent data for observation builders
+------------------------------------------------------------------
+
+When building your custom observation builder, you might want to aggregate and define your own features that are different from the raw env data. In this section we introduce how such information can be accessed and how you can build your own features out of it.
+
+Transitions maps
+~~~~~~~~~~~~~~~~
+
+The transition maps build the base for all movement in the environment. They contain all the information about allowed transitions for the agent at any given position. Because railway movement is limited to the railway tracks, these are important features for any controller that wants to interact with the environment. All functionality and features of a transition map can be found here_.
+
+.. _here: https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/core/transition_map.py
+
+**Get Transitions for cell**
+
+To access the possible transitions at any given cell there are different possibilities:
+
+1. You provide a cell position and an orientation in that cell (usually the orientation of the agent) and call :code:`cell_transitions = env.rail.get_transitions(*position, direction)`. In return you get a 4-element vector of binary values indicating, for the given initial orientation, whether a transition to :code:`[North, East, South, West]` is allowed. The position is a tuple of the form :code:`(x, y)` where :code:`x in [0, height]` and :code:`y in [0, width]`. This can be used for branching in a tree search and when looking for all possible allowed paths of an agent, as it provides a simple way to get the possible trajectories.
+
+2. When more detailed information about the cell in general is necessary, you can also get the full transitions of a cell by calling :code:`transition_int = env.rail.get_full_transitions(*position)`. This will return an :code:`int16` representing the allowed transitions of the cell. To understand the returned value it is best to represent it as a binary number :code:`bin(transition_int)`, where the bits have the following meaning: :code:`NN NE NS NW EN EE ES EW SN SE SS SW WN WE WS WW`. For example, the binary code :code:`1000 0000 0010 0000` represents a straight where an agent facing north can transition north and an agent facing south can transition south, and no other transitions are possible. To get a better feeling for what the binary representations of the elements look like, go to this Link_.
+
+.. _Link: https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/core/grid/rail_env_grid.py#L29
+
+
+These two objects can be used, for example, to detect switches that are usable by other agents but not the observing agent itself. This can be an important feature when actions have to be taken in order to avoid conflicts.
+
+.. code-block:: python
+
+    cell_transitions = self.env.rail.get_transitions(*position, direction)
+    transition_bit = bin(self.env.rail.get_full_transitions(*position))
+
+    total_transitions = transition_bit.count("1")
+    num_transitions = np.count_nonzero(cell_transitions)
+
+    # Detect switches that can only be used by other agents.
+    if total_transitions > 2 > num_transitions:
+        unusable_switch_detected = True
+
+
+Agent information
+~~~~~~~~~~~~~~~~~~
+
+The agents are represented as an agent class and are provided when the environment is instantiated. Because agents can have different properties, it is helpful to know how to access this information.
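Before diving into the agent data, the 16-bit transition encoding described above can be made concrete with a small standalone sketch. This is plain Python with no Flatland dependency, and the helper name `decode_transitions` is ours, not part of the Flatland API:

```python
# The 16 transition bits are ordered NN NE NS NW EN EE ES EW SN SE SS SW WN WE WS WW,
# most significant bit first: bit (4*i + j) from the top encodes "an agent with
# orientation i may leave the cell in direction j".
DIRECTIONS = ["N", "E", "S", "W"]


def decode_transitions(transition_int):
    """Return the set of allowed (orientation, exit_direction) pairs."""
    allowed = set()
    for i in range(16):
        if (transition_int >> (15 - i)) & 1:
            allowed.add((DIRECTIONS[i // 4], DIRECTIONS[i % 4]))
    return allowed


# The vertical straight from the text: binary 1000 0000 0010 0000
print(sorted(decode_transitions(0b1000000000100000)))  # [('N', 'N'), ('S', 'S')]
```

This matches the example in the text: only an agent facing north may continue north, and only an agent facing south may continue south.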
+
+You can simply access the three main types of agent information in the following ways, with :code:`agent = env.agents[handle]`:
+
+**Agent basic information**
+All the agents in the initiated environment can be found in the :code:`env.agents` list. Given the index of an agent, you have access to:
+
+- Agent position :code:`agent.position`, which returns the current coordinates :code:`(x, y)` of the agent.
+- Agent target :code:`agent.target`, which returns the target coordinates :code:`(x, y)`.
+- Agent direction :code:`agent.direction`, which is an int representing the current orientation :code:`{0: North, 1: East, 2: South, 3: West}`.
+- Agent moving :code:`agent.moving`, where 0 means the agent is currently not moving and 1 indicates the agent is moving.
+
+**Agent speed information**
+
+Beyond the basic agent information we can also access more details about the agent's type by looking at its speed data:
+
+- Agent max speed :code:`agent.speed_data["speed"]`, which defines the traveling speed when the agent is moving.
+- Agent position fraction :code:`agent.speed_data["position_fraction"]`, which is a number between 0 and 1 and indicates when the move to the next cell will occur. Each agent's speed is 1 or a smaller fraction. At each :code:`env.step()` the agent moves forward at its fractional speed and only advances to the next cell once the cumulated fractions satisfy :code:`agent.speed_data["position_fraction"] >= 1.`
+
+**Agent malfunction information**
+
+Similar to the speed data, you can also access individual data about the malfunctions of an agent. All data is available through :code:`agent.malfunction_data` with:
+
+- Indication of how long the agent will still be malfunctioning: :code:`'malfunction'`, an integer counting down at each time step. 0 means the agent is ok and can move.
+- Poisson rate at which malfunctions happen for this agent: :code:`'malfunction_rate'`.
+- Number of steps until the next malfunction will occur: :code:`'next_malfunction'`.
+- Number of malfunctions this agent has had so far: :code:`'nr_malfunctions'`.
+
diff --git a/examples/debugging_example_DELETE.py b/examples/debugging_example_DELETE.py
new file mode 100644
index 0000000000000000000000000000000000000000..2c0f814576caef84471d20c91dd92d23d4db02ac
--- /dev/null
+++ b/examples/debugging_example_DELETE.py
@@ -0,0 +1,85 @@
+import random
+import time
+
+import numpy as np
+
+from flatland.envs.generators import complex_rail_generator
+from flatland.envs.observations import TreeObsForRailEnv
+from flatland.envs.rail_env import RailEnv
+from flatland.utils.rendertools import RenderTool
+
+random.seed(1)
+np.random.seed(1)
+
+class SingleAgentNavigationObs(TreeObsForRailEnv):
+    """
+    We derive our observation builder from TreeObsForRailEnv, to exploit the existing implementation to compute
+    the minimum distances from each grid node to each agent's target.
+
+    We then build a representation vector with 3 binary components, indicating which of the 3 available directions
+    for each agent (Left, Forward, Right) lead to the shortest path to its target.
+    E.g., if taking the Left branch (if available) is the shortest route to the agent's target, the observation vector
+    will be [1, 0, 0].
+    """
+    def __init__(self):
+        super().__init__(max_depth=0)
+        self.observation_space = [3]
+
+    def reset(self):
+        # Recompute the distance map, if the environment has changed.
+ super().reset() + + def get(self, handle): + agent = self.env.agents[handle] + + possible_transitions = self.env.rail.get_transitions(*agent.position, agent.direction) + num_transitions = np.count_nonzero(possible_transitions) + + # Start from the current orientation, and see which transitions are available; + # organize them as [left, forward, right], relative to the current orientation + # If only one transition is possible, the forward branch is aligned with it. + if num_transitions == 1: + observation = [0, 1, 0] + else: + min_distances = [] + for direction in [(agent.direction + i) % 4 for i in range(-1, 2)]: + if possible_transitions[direction]: + new_position = self._new_position(agent.position, direction) + min_distances.append(self.distance_map[handle, new_position[0], new_position[1], direction]) + else: + min_distances.append(np.inf) + + observation = [0, 0, 0] + observation[np.argmin(min_distances)] = 1 + + return observation + + +env = RailEnv(width=14, + height=14, + rail_generator=complex_rail_generator(nr_start_goal=10, nr_extra=1, min_dist=5, max_dist=99999, seed=0), + number_of_agents=2, + obs_builder_object=SingleAgentNavigationObs()) + +obs = env.reset() +env_renderer = RenderTool(env, gl="PILSVG") +env_renderer.render_env(show=True, frames=True, show_observations=False) +for step in range(100): + actions = {} + for i in range(len(obs)): + actions[i] = np.argmax(obs[i])+1 + + if step%5 == 0: + print("Agent halts") + actions[0] = 4 # Halt + + obs, all_rewards, done, _ = env.step(actions) + if env.agents[0].malfunction_data['malfunction'] > 0: + print("Agent 0 broken-ness: ", env.agents[0].malfunction_data['malfunction']) + + env_renderer.render_env(show=True, frames=True, show_observations=False) + time.sleep(0.5) + if done["__all__"]: + break +env_renderer.close_window() + diff --git a/examples/training_example.py b/examples/training_example.py index c038e7b477069957efdec622b2c56e9e84cb7ac0..d125be1587a56025ba1cd3f78b28ba3976f01fbf 100644 --- 
a/examples/training_example.py
+++ b/examples/training_example.py
@@ -57,7 +57,7 @@ class RandomAgent:
 
 # Initialize the agent with the parameters corresponding to the environment and observation_builder
-agent = RandomAgent(218, 4)
+agent = RandomAgent(218, 5)
 n_trials = 5
 
 # Empty dictionary for all agent actions
@@ -77,12 +77,11 @@ for trials in range(1, n_trials + 1):
     score = 0
     # Run episode
-    for step in range(100):
+    for step in range(500):
         # Choose an action for each agent in the environment
         for a in range(env.get_num_agents()):
             action = agent.act(obs[a])
             action_dict.update({a: action})
-
         # Environment step which returns the observations for all agents, their corresponding
         # reward and whether they are done
         next_obs, all_rewards, done, _ = env.step(action_dict)
@@ -92,7 +91,6 @@ for trials in range(1, n_trials + 1):
         for a in range(env.get_num_agents()):
             agent.step((obs[a], action_dict[a], all_rewards[a], next_obs[a], done[a]))
             score += all_rewards[a]
-
         obs = next_obs.copy()
         if done['__all__']:
             break
diff --git a/flatland/envs/agent_utils.py b/flatland/envs/agent_utils.py
index e353af29ddbee16c208e2059767c18fa7880cb64..4c4070088c59499f885c16db68976c163ec91001 100644
--- a/flatland/envs/agent_utils.py
+++ b/flatland/envs/agent_utils.py
@@ -15,6 +15,7 @@ class EnvAgentStatic(object):
     direction = attrib()
     target = attrib()
     moving = attrib(default=False)
+
     # speed_data: speed is added to position_fraction on each moving step, until position_fraction>=1.0,
     # after which 'transition_action_on_cellexit' is executed (equivalent to executing that action in the previous
     # cell if speed=1, as default)
@@ -22,6 +23,12 @@ class EnvAgentStatic(object):
     speed_data = attrib(
         default=Factory(lambda: dict({'position_fraction': 0.0, 'speed': 1.0, 'transition_action_on_cellexit': 0})))
 
+    # if broken>0, the agent's actions are ignored for 'broken' steps
+    # number of times the agent had to stop, since the last time it broke down
+    malfunction_data = attrib(
+        default=Factory(
+            lambda: dict({'malfunction': 0, 'malfunction_rate': 0, 'next_malfunction': 0, 'nr_malfunctions': 0})))
+
     @classmethod
     def from_lists(cls, positions, directions, targets, speeds=None):
         """ Create a list of EnvAgentStatics from lists of positions, directions and targets
@@ -31,7 +38,22 @@ class EnvAgentStatic(object):
             speed_datas.append({'position_fraction': 0.0,
                                 'speed': speeds[i] if speeds is not None else 1.0,
                                 'transition_action_on_cellexit': 0})
 
-        return list(starmap(EnvAgentStatic, zip(positions, directions, targets, [False] * len(positions), speed_datas)))
+
+        # TODO: on initialization, all agents are re-set as non-broken. Perhaps it may be desirable to set
+        # some as broken?
+        malfunction_datas = []
+        for i in range(len(positions)):
+            malfunction_datas.append({'malfunction': 0,
+                                      'malfunction_rate': 0,
+                                      'next_malfunction': 0,
+                                      'nr_malfunctions': 0})
+
+        return list(starmap(EnvAgentStatic, zip(positions,
+                                                directions,
+                                                targets,
+                                                [False] * len(positions),
+                                                speed_datas,
+                                                malfunction_datas)))
 
     def to_list(self):
@@ -45,7 +67,7 @@ class EnvAgentStatic(object):
         if type(lTarget) is np.ndarray:
             lTarget = lTarget.tolist()
 
-        return [lPos, int(self.direction), lTarget, int(self.moving), self.speed_data]
+        return [lPos, int(self.direction), lTarget, int(self.moving), self.speed_data, self.malfunction_data]
 
 @attrs
@@ -63,7 +85,7 @@ class EnvAgent(EnvAgentStatic):
     def to_list(self):
         return [
             self.position, self.direction, self.target, self.handle,
-            self.old_direction, self.old_position, self.moving, self.speed_data]
+            self.old_direction, self.old_position, self.moving, self.speed_data, self.malfunction_data]
 
     @classmethod
     def from_static(cls, oStatic):
diff --git a/flatland/envs/rail_env.py b/flatland/envs/rail_env.py
index abe623ae173a593e265cff7d4d88eb323e16b08e..2281282977d8c9d972f13526efa9d96abaf84a52 100644
--- a/flatland/envs/rail_env.py
+++ b/flatland/envs/rail_env.py
@@ -75,6 +75,17 @@ class RailEnv(Environment):
         - stop_penalty = 0  # penalty for stopping a moving agent
         - start_penalty = 0  # penalty for starting a stopped agent
 
+    Stochastic malfunctioning of trains:
+    Trains in RailEnv can malfunction if they are halted too often (either by their own choice or because an invalid
+    action or cell is selected).
+
+    Every time an agent stops, it has a certain probability of malfunctioning. Malfunctions of trains follow a
+    Poisson process with a certain rate. Not all trains will be affected by malfunctions during episodes, to keep
+    complexity manageable.
+
+    TODO: currently, the parameters that control the stochasticity of the environment are hard-coded in init().
+    For Round 2, they will be passed to the constructor as arguments, to allow for more flexibility.
+
     """
 
     def __init__(self,
@@ -83,7 +94,8 @@ class RailEnv(Environment):
                  rail_generator=random_rail_generator(),
                  number_of_agents=1,
                  obs_builder_object=TreeObsForRailEnv(max_depth=2),
-                 max_episode_steps=None
+                 max_episode_steps=None,
+                 stochastic_data=None
                  ):
         """
         Environment init.
@@ -146,6 +158,29 @@ class RailEnv(Environment):
         self.action_space = [1]
         self.observation_space = self.obs_builder.observation_space  # updated on resets?
 
+        # Stochastic train malfunctioning parameters
+        if stochastic_data is not None:
+            prop_malfunction = stochastic_data['prop_malfunction']
+            mean_malfunction_rate = stochastic_data['malfunction_rate']
+            malfunction_min_duration = stochastic_data['min_duration']
+            malfunction_max_duration = stochastic_data['max_duration']
+        else:
+            prop_malfunction = 0.
+            mean_malfunction_rate = 0.
+            malfunction_min_duration = 0.
+            malfunction_max_duration = 0.
+
+        # percentage of malfunctioning trains
+        self.proportion_malfunctioning_trains = prop_malfunction
+
+        # Mean malfunction in number of stops
+        self.mean_malfunction_rate = mean_malfunction_rate
+
+        # Uniform distribution parameters for malfunction duration
+        self.min_number_of_steps_broken = malfunction_min_duration
+        self.max_number_of_steps_broken = malfunction_max_duration
+
+        # Reset environment
         self.reset()
         self.num_resets = 0  # yes, set it to zero again!
@@ -195,7 +230,15 @@ class RailEnv(Environment):
 
         for i_agent in range(self.get_num_agents()):
             agent = self.agents[i_agent]
+
+            # A proportion of agents in the environment will receive a positive malfunction rate
+            if np.random.random() < self.proportion_malfunctioning_trains:
+                agent.malfunction_data['malfunction_rate'] = self.mean_malfunction_rate
+
+            agent.speed_data['position_fraction'] = 0.0
+            agent.malfunction_data['malfunction'] = 0
+
+            self._agent_malfunction(agent)
 
         self.num_resets += 1
         self._elapsed_steps = 0
@@ -210,6 +253,30 @@ class RailEnv(Environment):
         # Return the new observation vectors for each agent
         return self._get_observations()
 
+    def _agent_malfunction(self, agent):
+        # Decrease counter for next event
+        agent.malfunction_data['next_malfunction'] -= 1
+
+        # Only agents that have a positive rate for malfunctions and are not currently broken are considered
+        if agent.malfunction_data['malfunction_rate'] > 0 >= agent.malfunction_data['malfunction']:
+
+            # If the counter has reached zero --> the agent malfunctions;
+            # set the next malfunction time and the duration of the current malfunction
+            if agent.malfunction_data['next_malfunction'] <= 0:
+                # Increase number of malfunctions
+                agent.malfunction_data['nr_malfunctions'] += 1
+
+                # Next malfunction in number of stops
+                next_breakdown = int(
+                    np.random.exponential(scale=agent.malfunction_data['malfunction_rate']))
+                agent.malfunction_data['next_malfunction'] = next_breakdown
+
+                # Duration of current malfunction
+                num_broken_steps =
np.random.randint(self.min_number_of_steps_broken, + self.max_number_of_steps_broken + 1) + 1 + agent.malfunction_data['malfunction'] = num_broken_steps + def step(self, action_dict_): self._elapsed_steps += 1 @@ -238,12 +305,29 @@ class RailEnv(Environment): agent = self.agents[i_agent] agent.old_direction = agent.direction agent.old_position = agent.position + + # Check if agent breaks at this step + self._agent_malfunction(agent) + if self.dones[i_agent]: # this agent has already completed... continue - if i_agent not in action_dict: # no action has been supplied for this agent + # No action has been supplied for this agent + if i_agent not in action_dict: action_dict[i_agent] = RailEnvActions.DO_NOTHING + # The train is broken + if agent.malfunction_data['malfunction'] > 0: + agent.malfunction_data['malfunction'] -= 1 + + # Broken agents are stopped + self.rewards_dict[i_agent] += step_penalty * agent.speed_data['speed'] + self.agents[i_agent].moving = False + action_dict[i_agent] = RailEnvActions.DO_NOTHING + + # Nothing left to do with broken agent + continue + if action_dict[i_agent] < 0 or action_dict[i_agent] > len(RailEnvActions): print('ERROR: illegal action=', action_dict[i_agent], 'for agent with index=', i_agent, @@ -329,7 +413,7 @@ class RailEnv(Environment): agent.direction = new_direction agent.speed_data['position_fraction'] = 0.0 else: - # If the agent cannot move due to any reason, we set its state to not moving. 
+ # If the agent cannot move due to any reason, we set its state to not moving agent.moving = False if np.equal(agent.position, agent.target).all(): diff --git a/tests/test_flatland_malfunction.py b/tests/test_flatland_malfunction.py new file mode 100644 index 0000000000000000000000000000000000000000..67dcd25c0769e542fd9a03502c2a8c1b29333b2b --- /dev/null +++ b/tests/test_flatland_malfunction.py @@ -0,0 +1,110 @@ +import numpy as np + +from flatland.envs.generators import complex_rail_generator +from flatland.envs.observations import TreeObsForRailEnv +from flatland.envs.rail_env import RailEnv + + +class SingleAgentNavigationObs(TreeObsForRailEnv): + """ + We derive our bbservation builder from TreeObsForRailEnv, to exploit the existing implementation to compute + the minimum distances from each grid node to each agent's target. + + We then build a representation vector with 3 binary components, indicating which of the 3 available directions + for each agent (Left, Forward, Right) lead to the shortest path to its target. + E.g., if taking the Left branch (if available) is the shortest route to the agent's target, the observation vector + will be [1, 0, 0]. + """ + + def __init__(self): + super().__init__(max_depth=0) + self.observation_space = [3] + + def reset(self): + # Recompute the distance map, if the environment has changed. + super().reset() + + def get(self, handle): + agent = self.env.agents[handle] + + possible_transitions = self.env.rail.get_transitions(*agent.position, agent.direction) + num_transitions = np.count_nonzero(possible_transitions) + + # Start from the current orientation, and see which transitions are available; + # organize them as [left, forward, right], relative to the current orientation + # If only one transition is possible, the forward branch is aligned with it. 
+        if num_transitions == 1:
+            observation = [0, 1, 0]
+        else:
+            min_distances = []
+            for direction in [(agent.direction + i) % 4 for i in range(-1, 2)]:
+                if possible_transitions[direction]:
+                    new_position = self._new_position(agent.position, direction)
+                    min_distances.append(self.distance_map[handle, new_position[0], new_position[1], direction])
+                else:
+                    min_distances.append(np.inf)
+
+            observation = [0, 0, 0]
+            observation[np.argmin(min_distances)] = 1
+
+        return observation
+
+
+def test_malfunction_process():
+    # Set fixed malfunction duration for this test
+    stochastic_data = {'prop_malfunction': 1.,
+                       'malfunction_rate': 1000,
+                       'min_duration': 3,
+                       'max_duration': 3}
+    np.random.seed(5)
+
+    env = RailEnv(width=20,
+                  height=20,
+                  rail_generator=complex_rail_generator(nr_start_goal=10, nr_extra=1, min_dist=5, max_dist=99999,
+                                                        seed=0),
+                  number_of_agents=2,
+                  obs_builder_object=SingleAgentNavigationObs(),
+                  stochastic_data=stochastic_data)
+
+    obs = env.reset()
+
+    # Check that an initial duration for malfunction was assigned
+    assert env.agents[0].malfunction_data['next_malfunction'] > 0
+
+    agent_halts = 0
+    total_down_time = 0
+    agent_malfunctioning = False
+    agent_old_position = env.agents[0].position
+    for step in range(100):
+        actions = {}
+        for i in range(len(obs)):
+            actions[i] = np.argmax(obs[i]) + 1
+
+        if step % 5 == 0:
+            # Stop the agent and set it to be malfunctioning
+            env.agents[0].malfunction_data['malfunction'] = -1
+            env.agents[0].malfunction_data['next_malfunction'] = 0
+            agent_halts += 1
+
+        obs, all_rewards, done, _ = env.step(actions)
+
+        if env.agents[0].malfunction_data['malfunction'] > 0:
+            agent_malfunctioning = True
+        else:
+            agent_malfunctioning = False
+
+        if agent_malfunctioning:
+            # Check that the agent is not moving while malfunctioning
+            assert agent_old_position == env.agents[0].position
+
+        agent_old_position = env.agents[0].position
+        total_down_time += env.agents[0].malfunction_data['malfunction']
+
+    # Check that the appropriate number of malfunctions is achieved
+    assert env.agents[0].malfunction_data['nr_malfunctions'] == 21
+
+    # Check that 20 stops were performed
+    assert agent_halts == 20
+
+    # Check that some malfunction down time was accumulated
+    assert total_down_time > 0