
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Showing 19042 additions and 317 deletions
@@ -59,7 +59,8 @@ For training purposes the tree is flattened into a single array.
## Training
### Setting up the environment
-Let us now train a simle double dueling DQN agent to navigate to its target on flatland. We start by importing flatland
+Before you get started with the training make sure that you have [pytorch](https://pytorch.org/get-started/locally/) installed.
+Let us now train a simple double dueling DQN agent to navigate to its target on flatland. We start by importing flatland
```
from flatland.envs.generators import complex_rail_generator
@@ -105,12 +106,12 @@ We have now successfully set up the environment for training. To visualize it in
env_renderer = RenderTool(env, gl="PILSVG", )
```
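For reference, a minimal end-to-end setup matching the updated training scripts in this diff looks roughly like the following sketch (the concrete values, such as the 25x25 grid, ten agents and the malfunction rate, are just the examples used there):
```
# Sketch of the environment setup used by the updated scripts in this diff
from flatland.envs.malfunction_generators import malfunction_from_params, MalfunctionParameters
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator
from flatland.utils.rendertools import RenderTool

x_dim, y_dim, n_agents = 25, 25, 10

# Use the malfunction generator to break agents from time to time
stochastic_data = MalfunctionParameters(malfunction_rate=1. / 10000,  # Rate of malfunction occurrence
                                        min_duration=15,              # Minimal duration of malfunction
                                        max_duration=50)              # Max duration of malfunction

# Tree observation with a shortest-path predictor
TreeObservation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv(30))

# Different agent types (trains) with different speeds
speed_ration_map = {1.: 0.25,       # Fast passenger train
                    1. / 2.: 0.25,  # Fast freight train
                    1. / 3.: 0.25,  # Slow commuter train
                    1. / 4.: 0.25}  # Slow freight train

env = RailEnv(width=x_dim,
              height=y_dim,
              rail_generator=sparse_rail_generator(max_num_cities=3,  # Number of cities (train stations)
                                                   seed=1,
                                                   grid_mode=False,
                                                   max_rails_between_cities=2,
                                                   max_rails_in_city=3),
              schedule_generator=sparse_schedule_generator(speed_ration_map),
              number_of_agents=n_agents,
              malfunction_generator_and_process_data=malfunction_from_params(stochastic_data),
              obs_builder_object=TreeObservation)
env.reset(True, True)

# After training we want to render the results, so we also load a renderer
env_renderer = RenderTool(env, gl="PILSVG", )
```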
-###Setting up the agent
+### Setting up the agent
To set up an appropriate agent we need the state and action space sizes. From the discussion above about the tree observation we end up with:
[**Adrian**: I just wonder why this is not done in a separate method in the observation: get_state_size, then we don't have to write down much more and the user doesn't need to
understand anything about the observation. I suggest moving this into the observation; the base ObservationBuilder would declare it as an abstract method. ... ]
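As a purely hypothetical sketch of that suggestion (a `get_state_size` method does not exist in the current `ObservationBuilder` API; the name and placement are illustrative only), the tree observation builder could expose the size itself:
```
# Hypothetical method on the tree observation builder, not part of the current API:
def get_state_size(self):
    # Each node of the tree has 4 children, so level i contains 4**i nodes
    nr_nodes = sum(4 ** i for i in range(self.max_depth + 1))
    return self.observation_dim * nr_nodes
```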
```
# Given the depth of the tree observation and the number of features per node we get the following state_size
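# Sketch of that computation, taken from the training scripts later in this diff
# (tree_depth = 2 and the observation builder used there are assumed):
num_features_per_node = env.obs_builder.observation_dim
tree_depth = 2
nr_nodes = 0
for i in range(tree_depth + 1):
    nr_nodes += 4 ** i
state_size = num_features_per_node * nr_nodes

# The action space of flatland is 5 discrete actions
action_size = 5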
@@ -149,7 +150,7 @@ We now use the normalized `agent_obs` for our training loop:
for trials in range(1, n_trials + 1):
    # Reset environment
-    obs = env.reset(True, True)
+    obs, info = env.reset(True, True)
    if not Training:
        env_renderer.set_new_rail()
@@ -217,7 +218,7 @@ for trials in range(1, n_trials + 1):
    eps = max(eps_end, eps_decay * eps)  # decrease epsilon
```
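For context, the body of that loop in the updated `training_navigation.py` boils down to the following condensed sketch (buffer initialisation, rendering and logging are omitted; see the full script further down in this diff):
```
# Condensed sketch of one training episode, adapted from training_navigation.py
for step in range(max_steps):
    # Epsilon-greedy action selection, only for agents that actually need a new action
    for a in range(env.get_num_agents()):
        if info['action_required'][a]:
            update_values[a] = True
            action = agent.act(agent_obs[a], eps=eps)
            action_prob[action] += 1
        else:
            update_values[a] = False
            action = 0
        action_dict.update({a: action})

    # Environment step
    next_obs, all_rewards, done, info = env.step(action_dict)

    # Update replay buffer and train the agent
    for a in range(env.get_num_agents()):
        # Only update when we are done or when an action was taken, i.e. relevant information is present
        if update_values[a] or done[a]:
            agent.step(agent_obs_buffer[a], agent_action_buffer[a], all_rewards[a],
                       agent_obs[a], done[a])
            agent_obs_buffer[a] = agent_obs[a].copy()
            agent_action_buffer[a] = action_dict[a]
        if next_obs[a]:
            agent_obs[a] = normalize_observation(next_obs[a], tree_depth, observation_radius=10)
        score += all_rewards[a] / env.get_num_agents()

    if done['__all__']:
        break

# Decrease exploration after every episode
eps = max(eps_end, eps_decay * eps)
```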
-Running the `navigation_training.py` file trains a simple agent to navigate to any random target within the railway network. After running you should see a learning curve similar to this one:
+Running the `training_navigation.py` file trains a simple agent to navigate to any random target within the railway network. After running you should see a learning curve similar to this one:
![Learning_curve](https://i.imgur.com/yVGXpUy.png)
......
...@@ -174,7 +174,7 @@ We now use the normalized `agent_obs` for our training loop: ...@@ -174,7 +174,7 @@ We now use the normalized `agent_obs` for our training loop:
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
# Reset environment # Reset environment
obs = env.reset(True, True) obs, info = env.reset(True, True)
# Setup placeholder for finals observation of a single agent. This is necessary because agents terminate at # Setup placeholder for finals observation of a single agent. This is necessary because agents terminate at
# different times during an episode # different times during an episode
......
File deleted
File deleted
File deleted
...@@ -8,51 +8,41 @@ import torch ...@@ -8,51 +8,41 @@ import torch
import torch.nn.functional as F import torch.nn.functional as F
import torch.optim as optim import torch.optim as optim
from torch_training.model import QNetwork, QNetwork2 from torch_training.model import QNetwork
BUFFER_SIZE = int(1e5) # replay buffer size BUFFER_SIZE = int(1e5) # replay buffer size
BATCH_SIZE = 512 # minibatch size BATCH_SIZE = 512 # minibatch size
GAMMA = 0.99 # discount factor 0.99 GAMMA = 0.99 # discount factor 0.99
TAU = 1e-3 # for soft update of target parameters TAU = 1e-3 # for soft update of target parameters
LR = 0.5e-4 # learning rate 5 LR = 0.5e-4 # learning rate 0.5e-4 works
UPDATE_EVERY = 10 # how often to update the network UPDATE_EVERY = 10 # how often to update the network
double_dqn = True # If using double dqn algorithm
input_channels = 5 # Number of Input channels
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
device = torch.device("cpu")
print(device) print(device)
class Agent: class Agent:
"""Interacts with and learns from the environment.""" """Interacts with and learns from the environment."""
def __init__(self, state_size, action_size, net_type, seed, double_dqn=True, input_channels=5): def __init__(self, state_size, action_size, double_dqn=True):
"""Initialize an Agent object. """Initialize an Agent object.
Params Params
====== ======
state_size (int): dimension of each state state_size (int): dimension of each state
action_size (int): dimension of each action action_size (int): dimension of each action
seed (int): random seed
""" """
self.state_size = state_size self.state_size = state_size
self.action_size = action_size self.action_size = action_size
self.seed = random.seed(seed)
self.version = net_type
self.double_dqn = double_dqn self.double_dqn = double_dqn
# Q-Network # Q-Network
if self.version == "Conv": self.qnetwork_local = QNetwork(state_size, action_size).to(device)
self.qnetwork_local = QNetwork2(state_size, action_size, seed, input_channels).to(device) self.qnetwork_target = copy.deepcopy(self.qnetwork_local)
self.qnetwork_target = copy.deepcopy(self.qnetwork_local)
else:
self.qnetwork_local = QNetwork(state_size, action_size, seed).to(device)
self.qnetwork_target = copy.deepcopy(self.qnetwork_local)
self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR) self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR)
# Replay memory # Replay memory
self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed) self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE)
# Initialize time step (for updating every UPDATE_EVERY steps) # Initialize time step (for updating every UPDATE_EVERY steps)
self.t_step = 0 self.t_step = 0
...@@ -152,7 +142,7 @@ class Agent: ...@@ -152,7 +142,7 @@ class Agent:
class ReplayBuffer: class ReplayBuffer:
"""Fixed-size buffer to store experience tuples.""" """Fixed-size buffer to store experience tuples."""
def __init__(self, action_size, buffer_size, batch_size, seed): def __init__(self, action_size, buffer_size, batch_size):
"""Initialize a ReplayBuffer object. """Initialize a ReplayBuffer object.
Params Params
...@@ -160,13 +150,11 @@ class ReplayBuffer: ...@@ -160,13 +150,11 @@ class ReplayBuffer:
action_size (int): dimension of each action action_size (int): dimension of each action
buffer_size (int): maximum size of buffer buffer_size (int): maximum size of buffer
batch_size (int): size of each training batch batch_size (int): size of each training batch
seed (int): random seed
""" """
self.action_size = action_size self.action_size = action_size
self.memory = deque(maxlen=buffer_size) self.memory = deque(maxlen=buffer_size)
self.batch_size = batch_size self.batch_size = batch_size
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"]) self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
self.seed = random.seed(seed)
def add(self, state, action, reward, next_state, done): def add(self, state, action, reward, next_state, done):
"""Add a new experience to memory.""" """Add a new experience to memory."""
...@@ -188,7 +176,7 @@ class ReplayBuffer: ...@@ -188,7 +176,7 @@ class ReplayBuffer:
dones = torch.from_numpy(self.__v_stack_impr([e.done for e in experiences if e is not None]).astype(np.uint8)) \ dones = torch.from_numpy(self.__v_stack_impr([e.done for e in experiences if e is not None]).astype(np.uint8)) \
.float().to(device) .float().to(device)
return (states, actions, rewards, next_states, dones) return states, actions, rewards, next_states, dones
def __len__(self): def __len__(self):
"""Return the current size of internal memory.""" """Return the current size of internal memory."""
......
...@@ -3,7 +3,7 @@ import torch.nn.functional as F ...@@ -3,7 +3,7 @@ import torch.nn.functional as F
class QNetwork(nn.Module): class QNetwork(nn.Module):
def __init__(self, state_size, action_size, seed, hidsize1=128, hidsize2=128): def __init__(self, state_size, action_size, hidsize1=128, hidsize2=128):
super(QNetwork, self).__init__() super(QNetwork, self).__init__()
self.fc1_val = nn.Linear(state_size, hidsize1) self.fc1_val = nn.Linear(state_size, hidsize1)
...@@ -24,38 +24,3 @@ class QNetwork(nn.Module): ...@@ -24,38 +24,3 @@ class QNetwork(nn.Module):
adv = F.relu(self.fc2_adv(adv)) adv = F.relu(self.fc2_adv(adv))
adv = self.fc3_adv(adv) adv = self.fc3_adv(adv)
return val + adv - adv.mean() return val + adv - adv.mean()
class QNetwork2(nn.Module):
def __init__(self, state_size, action_size, seed, input_channels, hidsize1=128, hidsize2=64):
super(QNetwork2, self).__init__()
self.conv1 = nn.Conv2d(input_channels, 16, kernel_size=3, stride=1)
self.bn1 = nn.BatchNorm2d(16)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=3)
self.bn2 = nn.BatchNorm2d(32)
self.conv3 = nn.Conv2d(32, 64, kernel_size=5, stride=3)
self.bn3 = nn.BatchNorm2d(64)
self.fc1_val = nn.Linear(6400, hidsize1)
self.fc2_val = nn.Linear(hidsize1, hidsize2)
self.fc3_val = nn.Linear(hidsize2, 1)
self.fc1_adv = nn.Linear(6400, hidsize1)
self.fc2_adv = nn.Linear(hidsize1, hidsize2)
self.fc3_adv = nn.Linear(hidsize2, action_size)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
# value function approximation
val = F.relu(self.fc1_val(x.view(x.size(0), -1)))
val = F.relu(self.fc2_val(val))
val = self.fc3_val(val)
# advantage calculation
adv = F.relu(self.fc1_adv(x.view(x.size(0), -1)))
adv = F.relu(self.fc2_adv(adv))
adv = self.fc3_adv(adv)
return val + adv - adv.mean()
...@@ -3,59 +3,85 @@ from collections import deque ...@@ -3,59 +3,85 @@ from collections import deque
import numpy as np import numpy as np
import torch import torch
from flatland.envs.generators import complex_rail_generator from flatland.envs.malfunction_generators import malfunction_from_params, MalfunctionParameters
from flatland.envs.observations import TreeObsForRailEnv from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator
from flatland.utils.rendertools import RenderTool from flatland.utils.rendertools import RenderTool
from importlib_resources import path from importlib_resources import path
import torch_training.Nets import torch_training.Nets
from torch_training.dueling_double_dqn import Agent from torch_training.dueling_double_dqn import Agent
from utils.observation_utils import norm_obs_clip, split_tree from utils.observation_utils import normalize_observation
random.seed(3) random.seed(1)
np.random.seed(2) np.random.seed(1)
""" """
file_name = "./railway/flatland.pkl" file_name = "./railway/complex_scene.pkl"
env = RailEnv(width=10, env = RailEnv(width=10,
height=20, height=20,
rail_generator=rail_from_file(file_name), rail_generator=rail_from_file(file_name),
obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv())) obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv()))
x_dim = env.width x_dim = env.width
y_dim = env.height y_dim = env.height
""" """
x_dim = 20 #np.random.randint(8, 20) # Parameters for the Environment
y_dim = 20 #np.random.randint(8, 20) x_dim = 25
n_agents = 1 # np.random.randint(3, 8) y_dim = 25
n_goals = 10 + n_agents + np.random.randint(0, 3) n_agents = 10
min_dist = int(0.75 * min(x_dim, y_dim))
# We are training an Agent using the Tree Observation with depth 2
observation_builder = TreeObsForRailEnv(max_depth=2)
# Use a the malfunction generator to break agents from time to time
stochastic_data = MalfunctionParameters(malfunction_rate=1./10000, # Rate of malfunction occurence
min_duration=15, # Minimal duration of malfunction
max_duration=50 # Max duration of malfunction
)
# Custom observation builder
TreeObservation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv(30))
# Different agent types (trains) with different speeds.
speed_ration_map = {1.: 0.25, # Fast passenger train
1. / 2.: 0.25, # Fast freight train
1. / 3.: 0.25, # Slow commuter train
1. / 4.: 0.25} # Slow freight train
env = RailEnv(width=x_dim, env = RailEnv(width=x_dim,
height=y_dim, height=y_dim,
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist, rail_generator=sparse_rail_generator(max_num_cities=3,
max_dist=99999, # Number of cities in map (where train stations are)
seed=0), seed=1, # Random seed
obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv()), grid_mode=False,
number_of_agents=n_agents) max_rails_between_cities=2,
max_rails_in_city=2),
schedule_generator=sparse_schedule_generator(speed_ration_map),
number_of_agents=n_agents,
malfunction_generator_and_process_data=malfunction_from_params(stochastic_data),
obs_builder_object=TreeObservation)
env.reset(True, True) env.reset(True, True)
tree_depth = 3 observation_helper = TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv())
observation_helper = TreeObsForRailEnv(max_depth=tree_depth, predictor=ShortestPathPredictorForRailEnv())
env_renderer = RenderTool(env, gl="PILSVG", ) env_renderer = RenderTool(env, gl="PILSVG", )
handle = env.get_agent_handles()
num_features_per_node = env.obs_builder.observation_dim num_features_per_node = env.obs_builder.observation_dim
tree_depth = 2
nr_nodes = 0 nr_nodes = 0
for i in range(tree_depth + 1): for i in range(tree_depth + 1):
nr_nodes += np.power(4, i) nr_nodes += np.power(4, i)
state_size = num_features_per_node * nr_nodes state_size = num_features_per_node * nr_nodes
action_size = 5 action_size = 5
n_trials = 10 # We set the number of episodes we would like to train on
observation_radius = 10 if 'n_trials' not in locals():
max_steps = int(3 * (env.height + env.width)) n_trials = 60000
max_steps = int(4 * 2 * (20 + env.height + env.width))
eps = 1. eps = 1.
eps_end = 0.005 eps_end = 0.005
eps_decay = 0.9995 eps_decay = 0.9995
...@@ -63,14 +89,13 @@ action_dict = dict() ...@@ -63,14 +89,13 @@ action_dict = dict()
final_action_dict = dict() final_action_dict = dict()
scores_window = deque(maxlen=100) scores_window = deque(maxlen=100)
done_window = deque(maxlen=100) done_window = deque(maxlen=100)
time_obs = deque(maxlen=2)
scores = [] scores = []
dones_list = [] dones_list = []
action_prob = [0] * action_size action_prob = [0] * action_size
agent_obs = [None] * env.get_num_agents() agent_obs = [None] * env.get_num_agents()
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
agent = Agent(state_size, action_size, "FC", 0) agent = Agent(state_size, action_size)
with path(torch_training.Nets, "avoid_checkpoint36600.pth") as file_in: with path(torch_training.Nets, "navigator_checkpoint1200.pth") as file_in:
agent.qnetwork_local.load_state_dict(torch.load(file_in)) agent.qnetwork_local.load_state_dict(torch.load(file_in))
record_images = False record_images = False
...@@ -79,42 +104,36 @@ frame_step = 0 ...@@ -79,42 +104,36 @@ frame_step = 0
for trials in range(1, n_trials + 1): for trials in range(1, n_trials + 1):
# Reset environment # Reset environment
obs = env.reset(True, True) obs, info = env.reset(True, True)
env_renderer.reset()
env_renderer.set_new_rail() # Build agent specific observations
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(obs[a]), num_features_per_node=num_features_per_node, agent_obs[a] = agent_obs[a] = normalize_observation(obs[a], tree_depth, observation_radius=10)
current_depth=0) # Reset score and done
data = norm_obs_clip(data, fixed_radius=observation_radius) score = 0
distance = norm_obs_clip(distance) env_done = 0
agent_data = np.clip(agent_data, -1, 1)
agent_obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
# Run episode # Run episode
for step in range(max_steps): for step in range(max_steps):
env_renderer.render_env(show=True, show_observations=False, show_predictions=True)
if record_images:
env_renderer.gl.save_image("./Images/Avoiding/flatland_frame_{:04d}.bmp".format(frame_step))
frame_step += 1
# Action # Action
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
action = agent.act(agent_obs[a], eps=0) if info['action_required'][a]:
action_dict.update({a: action}) action = agent.act(agent_obs[a], eps=0.)
else:
action = 0
action_prob[action] += 1
action_dict.update({a: action})
# Environment step # Environment step
obs, all_rewards, done, _ = env.step(action_dict)
next_obs, all_rewards, done, _ = env.step(action_dict) env_renderer.render_env(show=True, show_predictions=True, show_observations=False)
# Build agent specific observations and normalize
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(next_obs[a]), if obs[a]:
num_features_per_node=num_features_per_node, agent_obs[a] = normalize_observation(obs[a], tree_depth, observation_radius=10)
current_depth=0)
data = norm_obs_clip(data, fixed_radius=observation_radius)
distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1)
agent_obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
if done['__all__']: if done['__all__']:
break break
# Import packages for plotting and system
import getopt import getopt
import random import random
import sys import sys
from collections import deque from collections import deque
# make sure the root path is in system path
from pathlib import Path
from flatland.envs.malfunction_generators import malfunction_from_params, MalfunctionParameters
base_dir = Path(__file__).resolve().parent.parent
sys.path.append(str(base_dir))
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import numpy as np import numpy as np
import torch import torch
# Import Flatland/ Observations and Predictors
from flatland.envs.generators import complex_rail_generator
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv
from importlib_resources import path
# Import Torch and utility functions to normalize observation
import torch_training.Nets
from torch_training.dueling_double_dqn import Agent from torch_training.dueling_double_dqn import Agent
from utils.observation_utils import norm_obs_clip, split_tree
from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator
from flatland.utils.rendertools import RenderTool
from utils.observation_utils import normalize_observation
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.agent_utils import RailAgentStatus
def main(argv): def main(argv):
try: try:
opts, args = getopt.getopt(argv, "n:", ["n_episodes="]) opts, args = getopt.getopt(argv, "n:", ["n_trials="])
except getopt.GetoptError: except getopt.GetoptError:
print('training_navigation.py -n <n_episodes>') print('training_navigation.py -n <n_trials>')
sys.exit(2) sys.exit(2)
for opt, arg in opts: for opt, arg in opts:
if opt in ('-n', '--n_episodes'): if opt in ('-n', '--n_trials'):
n_episodes = int(arg) n_trials = int(arg)
## Initialize the random
random.seed(1) random.seed(1)
np.random.seed(1) np.random.seed(1)
# Initialize a random map with a random number of agents # Parameters for the Environment
x_dim = np.random.randint(8, 20) x_dim = 35
y_dim = np.random.randint(8, 20) y_dim = 35
n_agents = np.random.randint(3, 8) n_agents = 10
n_goals = n_agents + np.random.randint(0, 3)
min_dist = int(0.75 * min(x_dim, y_dim))
tree_depth = 3 # Use a the malfunction generator to break agents from time to time
print("main2") stochastic_data = MalfunctionParameters(malfunction_rate=1./10000, # Rate of malfunction occurence
min_duration=15, # Minimal duration of malfunction
""" max_duration=50 # Max duration of malfunction
Get an observation builder and predictor: )
The predictor will always predict the shortest path from the current location of the agent.
This is used to warn for potential conflicts --> Should be enhanced to get better performance!
""" # Custom observation builder
predictor = ShortestPathPredictorForRailEnv() TreeObservation = TreeObsForRailEnv(max_depth=2, predictor=ShortestPathPredictorForRailEnv(30))
observation_helper = TreeObsForRailEnv(max_depth=tree_depth, predictor=predictor)
# Different agent types (trains) with different speeds.
speed_ration_map = {1.: 0.25, # Fast passenger train
1. / 2.: 0.25, # Fast freight train
1. / 3.: 0.25, # Slow commuter train
1. / 4.: 0.25} # Slow freight train
env = RailEnv(width=x_dim, env = RailEnv(width=x_dim,
height=y_dim, height=y_dim,
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist, rail_generator=sparse_rail_generator(max_num_cities=3,
max_dist=99999, # Number of cities in map (where train stations are)
seed=0), seed=1, # Random seed
obs_builder_object=observation_helper, grid_mode=False,
number_of_agents=n_agents) max_rails_between_cities=2,
env.reset(True, True) max_rails_in_city=3),
schedule_generator=sparse_schedule_generator(speed_ration_map),
handle = env.get_agent_handles() number_of_agents=n_agents,
malfunction_generator_and_process_data=malfunction_from_params(stochastic_data),
obs_builder_object=TreeObservation)
# Reset env
env.reset(True,True)
# After training we want to render the results so we also load a renderer
env_renderer = RenderTool(env, gl="PILSVG", )
# Given the depth of the tree observation and the number of features per node we get the following state_size
num_features_per_node = env.obs_builder.observation_dim num_features_per_node = env.obs_builder.observation_dim
tree_depth = 2
nr_nodes = 0 nr_nodes = 0
for i in range(tree_depth + 1): for i in range(tree_depth + 1):
nr_nodes += np.power(4, i) nr_nodes += np.power(4, i)
state_size = num_features_per_node * nr_nodes state_size = num_features_per_node * nr_nodes
# The action space of flatland is 5 discrete actions
action_size = 5 action_size = 5
# We set the number of episodes we would like to train on # We set the number of episodes we would like to train on
if 'n_episodes' not in locals(): if 'n_trials' not in locals():
n_episodes = 60000 n_trials = 15000
# And the max number of steps we want to take per episode
max_steps = int(4 * 2 * (20 + env.height + env.width))
# Set max number of steps per episode as well as other training relevant parameter # Define training parameters
max_steps = int(3 * (env.height + env.width))
eps = 1. eps = 1.
eps_end = 0.005 eps_end = 0.005
eps_decay = 0.9995 eps_decay = 0.998
# And some variables to keep track of the progress
action_dict = dict() action_dict = dict()
final_action_dict = dict() final_action_dict = dict()
scores_window = deque(maxlen=100) scores_window = deque(maxlen=100)
...@@ -86,106 +109,73 @@ def main(argv): ...@@ -86,106 +109,73 @@ def main(argv):
action_prob = [0] * action_size action_prob = [0] * action_size
agent_obs = [None] * env.get_num_agents() agent_obs = [None] * env.get_num_agents()
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
observation_radius = 10 agent_obs_buffer = [None] * env.get_num_agents()
agent_action_buffer = [2] * env.get_num_agents()
# Initialize the agent cummulated_reward = np.zeros(env.get_num_agents())
agent = Agent(state_size, action_size, "FC", 0) update_values = [False] * env.get_num_agents()
# Now we load a Double dueling DQN agent
# Here you can pre-load an agent agent = Agent(state_size, action_size)
if False:
with path(torch_training.Nets, "avoid_checkpoint30000.pth") as file_in:
agent.qnetwork_local.load_state_dict(torch.load(file_in))
# Do training over n_episodes
for episodes in range(1, n_episodes + 1):
"""
Training Curriculum: In order to get good generalization we change the number of agents
and the size of the levels every 50 episodes.
"""
if episodes % 50 == 0:
x_dim = np.random.randint(8, 20)
y_dim = np.random.randint(8, 20)
n_agents = np.random.randint(3, 8)
n_goals = n_agents + np.random.randint(0, 3)
min_dist = int(0.75 * min(x_dim, y_dim))
env = RailEnv(width=x_dim,
height=y_dim,
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist,
max_dist=99999,
seed=0),
obs_builder_object=observation_helper,
number_of_agents=n_agents)
# Adjust the parameters according to the new env.
max_steps = int(3 * (env.height + env.width))
agent_obs = [None] * env.get_num_agents()
agent_next_obs = [None] * env.get_num_agents()
# Reset environment for trials in range(1, n_trials + 1):
obs = env.reset(True, True)
# Setup placeholder for finals observation of a single agent. This is necessary because agents terminate at
# different times during an episode
final_obs = agent_obs.copy()
final_obs_next = agent_next_obs.copy()
# Reset environment
obs, info = env.reset(True, True)
env_renderer.reset()
# Build agent specific observations # Build agent specific observations
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(obs[a]), num_features_per_node=num_features_per_node, if obs[a]:
current_depth=0) agent_obs[a] = normalize_observation(obs[a], tree_depth, observation_radius=10)
data = norm_obs_clip(data, fixed_radius=observation_radius) agent_obs_buffer[a] = agent_obs[a].copy()
distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1)
agent_obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
# Reset score and done
score = 0 score = 0
env_done = 0 env_done = 0
# Run episode # Run episode
for step in range(max_steps): while True:
# Action # Action
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
action = agent.act(agent_obs[a], eps=eps) if info['action_required'][a]:
action_prob[action] += 1 # If an action is require, we want to store the obs a that step as well as the action
update_values[a] = True
action = agent.act(agent_obs[a], eps=eps)
action_prob[action] += 1
else:
update_values[a] = False
action = 0
action_dict.update({a: action}) action_dict.update({a: action})
# Environment step # Environment step
next_obs, all_rewards, done, _ = env.step(action_dict) next_obs, all_rewards, done, info = env.step(action_dict)
# Build agent specific observations and normalize
for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(next_obs[a]),
num_features_per_node=num_features_per_node, current_depth=0)
data = norm_obs_clip(data, fixed_radius=observation_radius)
distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1)
agent_next_obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
# Update replay buffer and train agent # Update replay buffer and train agent
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
if done[a]: # Only update the values when we are done or when an action was taken and thus relevant information is present
final_obs[a] = agent_obs[a].copy() if update_values[a] or done[a]:
final_obs_next[a] = agent_next_obs[a].copy() agent.step(agent_obs_buffer[a], agent_action_buffer[a], all_rewards[a],
final_action_dict.update({a: action_dict[a]}) agent_obs[a], done[a])
if not done[a]: cummulated_reward[a] = 0.
agent.step(agent_obs[a], action_dict[a], all_rewards[a], agent_next_obs[a], done[a])
agent_obs_buffer[a] = agent_obs[a].copy()
agent_action_buffer[a] = action_dict[a]
if next_obs[a]:
agent_obs[a] = normalize_observation(next_obs[a], tree_depth, observation_radius=10)
score += all_rewards[a] / env.get_num_agents() score += all_rewards[a] / env.get_num_agents()
# Copy observation # Copy observation
agent_obs = agent_next_obs.copy()
if done['__all__']: if done['__all__']:
env_done = 1 env_done = 1
for a in range(env.get_num_agents()):
agent.step(final_obs[a], final_action_dict[a], all_rewards[a], final_obs_next[a], done[a])
break break
# Epsilon decay # Epsilon decay
eps = max(eps_end, eps_decay * eps) # decrease epsilon eps = max(eps_end, eps_decay * eps) # decrease epsilon
# Collection information about training # Collection information about training
done_window.append(env_done) tasks_finished = 0
for current_agent in env.agents:
if current_agent.status == RailAgentStatus.DONE_REMOVED:
tasks_finished += 1
done_window.append(tasks_finished / max(1, env.get_num_agents()))
scores_window.append(score / max_steps) # save most recent score scores_window.append(score / max_steps) # save most recent score
scores.append(np.mean(scores_window)) scores.append(np.mean(scores_window))
dones_list.append((np.mean(done_window))) dones_list.append((np.mean(done_window)))
...@@ -193,23 +183,24 @@ def main(argv): ...@@ -193,23 +183,24 @@ def main(argv):
print( print(
'\rTraining {} Agents on ({},{}).\t Episode {}\t Average Score: {:.3f}\tDones: {:.2f}%\tEpsilon: {:.2f} \t Action Probabilities: \t {}'.format( '\rTraining {} Agents on ({},{}).\t Episode {}\t Average Score: {:.3f}\tDones: {:.2f}%\tEpsilon: {:.2f} \t Action Probabilities: \t {}'.format(
env.get_num_agents(), x_dim, y_dim, env.get_num_agents(), x_dim, y_dim,
episodes, trials,
np.mean(scores_window), np.mean(scores_window),
100 * np.mean(done_window), 100 * np.mean(done_window),
eps, action_prob / np.sum(action_prob)), end=" ") eps, action_prob / np.sum(action_prob)), end=" ")
if episodes % 100 == 0: if trials % 100 == 0:
print( print(
'\rTraining {} Agents.\t Episode {}\t Average Score: {:.3f}\tDones: {:.2f}%\tEpsilon: {:.2f} \t Action Probabilities: \t {}'.format( '\rTraining {} Agents on ({},{}).\t Episode {}\t Average Score: {:.3f}\tDones: {:.2f}%\tEpsilon: {:.2f} \t Action Probabilities: \t {}'.format(
env.get_num_agents(), env.get_num_agents(), x_dim, y_dim,
episodes, trials,
np.mean(scores_window), np.mean(scores_window),
100 * np.mean(done_window), 100 * np.mean(done_window),
eps, eps, action_prob / np.sum(action_prob)))
action_prob / np.sum(action_prob)))
torch.save(agent.qnetwork_local.state_dict(), torch.save(agent.qnetwork_local.state_dict(),
'./Nets/avoid_checkpoint' + str(episodes) + '.pth') './Nets/navigator_checkpoint' + str(trials) + '.pth')
action_prob = [1] * action_size action_prob = [1] * action_size
# Plot overall training progress at the end
plt.plot(scores) plt.plot(scores)
plt.show() plt.show()
......
...@@ -7,17 +7,18 @@ from collections import deque ...@@ -7,17 +7,18 @@ from collections import deque
import matplotlib.pyplot as plt import matplotlib.pyplot as plt
import numpy as np import numpy as np
import torch import torch
# Import Flatland/ Observations and Predictors
from flatland.envs.generators import complex_rail_generator
from flatland.envs.observations import TreeObsForRailEnv from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import complex_rail_generator
# Import Flatland/ Observations and Predictors
from flatland.envs.schedule_generators import complex_schedule_generator
from importlib_resources import path from importlib_resources import path
# Import Torch and utility functions to normalize observation # Import Torch and utility functions to normalize observation
import torch_training.Nets import torch_training.Nets
from torch_training.dueling_double_dqn import Agent from torch_training.dueling_double_dqn import Agent
from utils.observation_utils import norm_obs_clip, split_tree from utils.observation_utils import norm_obs_clip, split_tree_into_feature_groups
def main(argv): def main(argv):
...@@ -40,25 +41,25 @@ def main(argv): ...@@ -40,25 +41,25 @@ def main(argv):
n_agents = np.random.randint(3, 8) n_agents = np.random.randint(3, 8)
n_goals = n_agents + np.random.randint(0, 3) n_goals = n_agents + np.random.randint(0, 3)
min_dist = int(0.75 * min(x_dim, y_dim)) min_dist = int(0.75 * min(x_dim, y_dim))
tree_depth = 3 tree_depth = 2
print("main2") print("main2")
demo = False
# Get an observation builder and predictor # Get an observation builder and predictor
predictor = ShortestPathPredictorForRailEnv() observation_helper = TreeObsForRailEnv(max_depth=tree_depth, predictor=ShortestPathPredictorForRailEnv())
observation_helper = TreeObsForRailEnv(max_depth=tree_depth, predictor=predictor())
env = RailEnv(width=x_dim, env = RailEnv(width=x_dim,
height=y_dim, height=y_dim,
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist, rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist,
max_dist=99999, max_dist=99999,
seed=0), seed=0),
schedule_generator=complex_schedule_generator(),
obs_builder_object=observation_helper, obs_builder_object=observation_helper,
number_of_agents=n_agents) number_of_agents=n_agents)
env.reset(True, True) env.reset(True, True)
handle = env.get_agent_handles() handle = env.get_agent_handles()
features_per_node = env.obs_builder.observation_dim features_per_node = env.obs_builder.observation_dim
tree_depth = 2
nr_nodes = 0 nr_nodes = 0
for i in range(tree_depth + 1): for i in range(tree_depth + 1):
nr_nodes += np.power(4, i) nr_nodes += np.power(4, i)
...@@ -85,11 +86,11 @@ def main(argv): ...@@ -85,11 +86,11 @@ def main(argv):
agent_obs = [None] * env.get_num_agents() agent_obs = [None] * env.get_num_agents()
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
# Initialize the agent # Initialize the agent
agent = Agent(state_size, action_size, "FC", 0) agent = Agent(state_size, action_size)
# Here you can pre-load an agent # Here you can pre-load an agent
if False: if False:
with path(torch_training.Nets, "avoid_checkpoint30000.pth") as file_in: with path(torch_training.Nets, "avoid_checkpoint500.pth") as file_in:
agent.qnetwork_local.load_state_dict(torch.load(file_in)) agent.qnetwork_local.load_state_dict(torch.load(file_in))
# Do training over n_episodes # Do training over n_episodes
...@@ -109,6 +110,7 @@ def main(argv): ...@@ -109,6 +110,7 @@ def main(argv):
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist, rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist,
max_dist=99999, max_dist=99999,
seed=0), seed=0),
schedule_generator=complex_schedule_generator(),
obs_builder_object=TreeObsForRailEnv(max_depth=3, obs_builder_object=TreeObsForRailEnv(max_depth=3,
predictor=ShortestPathPredictorForRailEnv()), predictor=ShortestPathPredictorForRailEnv()),
number_of_agents=n_agents) number_of_agents=n_agents)
...@@ -119,7 +121,7 @@ def main(argv): ...@@ -119,7 +121,7 @@ def main(argv):
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
# Reset environment # Reset environment
obs = env.reset(True, True) obs, info = env.reset(True, True)
# Setup placeholder for finals observation of a single agent. This is necessary because agents terminate at # Setup placeholder for finals observation of a single agent. This is necessary because agents terminate at
# different times during an episode # different times during an episode
...@@ -128,8 +130,7 @@ def main(argv): ...@@ -128,8 +130,7 @@ def main(argv):
# Build agent specific observations # Build agent specific observations
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(obs[a]), data, distance, agent_data = split_tree_into_feature_groups(obs[a], tree_depth)
current_depth=0)
data = norm_obs_clip(data) data = norm_obs_clip(data)
distance = norm_obs_clip(distance) distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1) agent_data = np.clip(agent_data, -1, 1)
...@@ -160,8 +161,7 @@ def main(argv): ...@@ -160,8 +161,7 @@ def main(argv):
next_obs, all_rewards, done, _ = env.step(action_dict) next_obs, all_rewards, done, _ = env.step(action_dict)
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(next_obs[a]), data, distance, agent_data = split_tree_into_feature_groups(next_obs[a], tree_depth)
current_depth=0)
data = norm_obs_clip(data) data = norm_obs_clip(data)
distance = norm_obs_clip(distance) distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1) agent_data = np.clip(agent_data, -1, 1)
......
File added
File added
File added
File added
...@@ -3,16 +3,17 @@ from collections import deque ...@@ -3,16 +3,17 @@ from collections import deque
import numpy as np import numpy as np
import torch import torch
from flatland.envs.generators import complex_rail_generator from flatland.envs.malfunction_generators import malfunction_from_params, MalfunctionParameters
from flatland.envs.observations import TreeObsForRailEnv from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import sparse_rail_generator
from flatland.envs.schedule_generators import sparse_schedule_generator
from flatland.utils.rendertools import RenderTool from flatland.utils.rendertools import RenderTool
from importlib_resources import path from importlib_resources import path
import torch_training.Nets import torch_training.Nets
from torch_training.dueling_double_dqn import Agent from torch_training.dueling_double_dqn import Agent
from utils.observation_utils import norm_obs_clip, split_tree from utils.observation_utils import normalize_observation
random.seed(1) random.seed(1)
np.random.seed(1) np.random.seed(1)
...@@ -26,27 +27,53 @@ x_dim = env.width ...@@ -26,27 +27,53 @@ x_dim = env.width
y_dim = env.height y_dim = env.height
""" """
x_dim = np.random.randint(8, 20) # Parameters for the Environment
y_dim = np.random.randint(8, 20) x_dim = 25
n_agents = np.random.randint(3, 8) y_dim = 25
n_goals = n_agents + np.random.randint(0, 3) n_agents = 1
min_dist = int(0.75 * min(x_dim, y_dim)) n_goals = 5
min_dist = 5
# We are training an Agent using the Tree Observation with depth 2
observation_builder = TreeObsForRailEnv(max_depth=2)
# Use a the malfunction generator to break agents from time to time
stochastic_data = MalfunctionParameters(malfunction_rate=1./10000, # Rate of malfunction occurence
min_duration=15, # Minimal duration of malfunction
max_duration=50 # Max duration of malfunction
)
# Custom observation builder
TreeObservation = TreeObsForRailEnv(max_depth=2)
# Different agent types (trains) with different speeds.
speed_ration_map = {1.: 1., # Fast passenger train
1. / 2.: 0.0, # Fast freight train
1. / 3.: 0.0, # Slow commuter train
1. / 4.: 0.0} # Slow freight train
env = RailEnv(width=x_dim, env = RailEnv(width=x_dim,
height=y_dim, height=y_dim,
rail_generator=complex_rail_generator(nr_start_goal=n_goals, nr_extra=5, min_dist=min_dist, rail_generator=sparse_rail_generator(max_num_cities=3,
max_dist=99999, # Number of cities in map (where train stations are)
seed=0), seed=1, # Random seed
obs_builder_object=TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv()), grid_mode=False,
number_of_agents=n_agents) max_rails_between_cities=2,
env.reset(True, True) max_rails_in_city=4),
schedule_generator=sparse_schedule_generator(speed_ration_map),
observation_helper = TreeObsForRailEnv(max_depth=3, predictor=ShortestPathPredictorForRailEnv()) number_of_agents=n_agents,
malfunction_generator_and_process_data=malfunction_from_params(stochastic_data),
obs_builder_object=TreeObservation)
env.reset(True,True)
env_renderer = RenderTool(env, gl="PILSVG", ) env_renderer = RenderTool(env, gl="PILSVG", )
num_features_per_node = env.obs_builder.observation_dim num_features_per_node = env.obs_builder.observation_dim
handle = env.get_agent_handles()
features_per_node = 9 tree_depth = 2
state_size = features_per_node * 85 * 2 nr_nodes = 0
for i in range(tree_depth + 1):
nr_nodes += np.power(4, i)
state_size = num_features_per_node * nr_nodes
action_size = 5 action_size = 5
# We set the number of episodes we would like to train on # We set the number of episodes we would like to train on
...@@ -60,14 +87,13 @@ action_dict = dict() ...@@ -60,14 +87,13 @@ action_dict = dict()
final_action_dict = dict() final_action_dict = dict()
scores_window = deque(maxlen=100) scores_window = deque(maxlen=100)
done_window = deque(maxlen=100) done_window = deque(maxlen=100)
time_obs = deque(maxlen=2)
scores = [] scores = []
dones_list = [] dones_list = []
action_prob = [0] * action_size action_prob = [0] * action_size
agent_obs = [None] * env.get_num_agents() agent_obs = [None] * env.get_num_agents()
agent_next_obs = [None] * env.get_num_agents() agent_next_obs = [None] * env.get_num_agents()
agent = Agent(state_size, action_size, "FC", 0) agent = Agent(state_size, action_size)
with path(torch_training.Nets, "avoid_checkpoint49700.pth") as file_in: with path(torch_training.Nets, "navigator_checkpoint1000.pth") as file_in:
agent.qnetwork_local.load_state_dict(torch.load(file_in)) agent.qnetwork_local.load_state_dict(torch.load(file_in))
record_images = False record_images = False
...@@ -76,58 +102,38 @@ frame_step = 0 ...@@ -76,58 +102,38 @@ frame_step = 0
for trials in range(1, n_trials + 1): for trials in range(1, n_trials + 1):
# Reset environment # Reset environment
obs = env.reset(True, True) obs, info = env.reset(True, True)
env_renderer.reset()
env_renderer.set_new_rail() # Build agent specific observations
obs_original = obs.copy()
final_obs = obs.copy()
final_obs_next = obs.copy()
for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(obs[a]), num_features_per_node=num_features_per_node,
current_depth=0)
data = norm_obs_clip(data)
distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1)
obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
agent_data = env.agents[a]
speed = 1 # np.random.randint(1,5)
agent_data.speed_data['speed'] = 1. / speed
for i in range(2):
time_obs.append(obs)
# env.obs_builder.util_print_obs_subtree(tree=obs[0], num_elements_per_node=5)
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
agent_obs[a] = np.concatenate((time_obs[0][a], time_obs[1][a])) agent_obs[a] = agent_obs[a] = normalize_observation(obs[a], tree_depth, observation_radius=10)
# Reset score and done
score = 0
env_done = 0
# Run episode # Run episode
for step in range(max_steps): for step in range(max_steps):
env_renderer.render_env(show=True, show_observations=False, show_predictions=True)
if record_images:
env_renderer.gl.saveImage("./Images/flatland_frame_{:04d}.bmp".format(frame_step))
frame_step += 1
# Action # Action
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
# action = agent.act(np.array(obs[a]), eps=eps) if info['action_required'][a]:
action = agent.act(agent_obs[a], eps=0) action = agent.act(agent_obs[a], eps=0.)
else:
action = 0
action_prob[action] += 1
action_dict.update({a: action}) action_dict.update({a: action})
# Environment step # Environment step
obs, all_rewards, done, _ = env.step(action_dict)
next_obs, all_rewards, done, _ = env.step(action_dict) env_renderer.render_env(show=True, show_predictions=True, show_observations=False)
# print(all_rewards,action) # Build agent specific observations and normalize
obs_original = next_obs.copy()
for a in range(env.get_num_agents()):
data, distance, agent_data = split_tree(tree=np.array(next_obs[a]),
num_features_per_node=num_features_per_node,
current_depth=0)
data = norm_obs_clip(data)
distance = norm_obs_clip(distance)
agent_data = np.clip(agent_data, -1, 1)
next_obs[a] = np.concatenate((np.concatenate((data, distance)), agent_data))
time_obs.append(next_obs)
for a in range(env.get_num_agents()): for a in range(env.get_num_agents()):
agent_next_obs[a] = np.concatenate((time_obs[0][a], time_obs[1][a])) if obs[a]:
agent_obs = agent_next_obs.copy() agent_obs[a] = normalize_observation(obs[a], tree_depth, observation_radius=10)
if done['__all__']: if done['__all__']:
break break