"In the example below, agents 0 and 1 collide head on. The other agents are spaced out behind them and collide into them in subsequent steps.\n",
"In the example below, agents 0 and 1 collide head on. The other agents are spaced out behind them and collide into them in subsequent steps.\n",
"\n",
"\n",
"The deadlock status is now recorded in element 5 of each agent in the recorded episode. (row, col, dir, status, malfunction, deadlock)"
"The deadlock status is now recorded in element 5 of each agent in the recorded episode. (row, col, dir, status, malfunction, deadlock)\n",
"\n",
"__Bugs / Limitations__\n",
"\n",
"The code does not currently count agents which are deadlocked, if they are also malfunctioning, or choose to stop moving voluntarily.\n",
"\n",
"The code does not detect agents which are about to become deadlocked, because they are heading towards each other on a track with no junctions or relevant targets."
]
]
},
},
{
{
...
...
%% Cell type:markdown id: tags:
%% Cell type:markdown id: tags:
# Unordered Close Following Agents
Many of you will be aware that Flatland agents cannot follow each other close behind unless they are in agent index order, ie Agent 1 can follow Agent 0 closely, but Agent 0 cannot follow Agent 1 unless it leaves a gap of one cell.
We have now provided an update which removes this restriction. It's currently in the branch `323_unordered_cell_locking`. It means that agents (moving at the same speed) can now always follow each other without leaving a gap.
**Why is this a big deal? Or even a deal?**
Many of the OR solutions took advantage of this restriction by sending agents in the "correct" index order, so that they could make better use of the available space, but we believe it's harder for RL solutions to do the same.
Think of a chain of agents, in random order, moving in the same direction. For any adjacent pair of agents, there's a 0.5 chance that it is in index order, ie index(A) < index(B) where A is in front of B. So roughly half the adjacent pairs will need to leave a gap and half won't, and the chain of agents will typically be one-third empty space. By removing the restriction, we can keep the agents close together and so move up to 50% more agents through a junction or segment of rail in the same number of steps.
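As a rough sanity check on those numbers (illustrative arithmetic only, not tied to any particular Flatland episode):

```python
# Rough sanity check on the density argument above (illustrative only).
n_agents = 100
expected_gaps = 0.5 * n_agents            # ~half of the adjacent pairs need a one-cell gap
chain_length = n_agents + expected_gaps   # ~150 cells occupied by a 100-agent chain
print(expected_gaps / chain_length)       # ~0.33 -> about one-third of the chain is empty
print(chain_length / n_agents)            # ~1.5  -> tight packing fits up to 50% more agents
```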
**What difference does it make in practice?**
We have run a few tests and it does seem to slightly increase the training performance of existing RL models.
**Does the order not matter at all now?**
Well, yes, a bit. We are still using index order to resolve conflicts between two agents trying to move into the same spot, for example, head-on collisions, or agents "merging" at junctions.
**This sounds boring. Is there anything interesting about it at all?**
Thanks for reading this far... It was quite interesting to implement. Think of a chain of moving agents in reverse index order. The `env.step()` iterates them from the back of the chain (lowest index) to the front, so when it gets to the front agent, it's already processed all the others. Now suppose the front agent has decided to stop, or is blocked. The env needs to propagate that back through the chain of agents, and none of them can in fact move. You can see how this might get a bit more complicated with "trees" of merging agents etc. And how do we identify a chain at all?
We did it by storing an agent's position as a graph node, and a movement as a directed edge, using the NetworkX graph library. We create an empty graph for each step, and add the agents into the graph in order, using their (row, column) location for the node. Stationary agents get a self-loop. Agents in an adjacent chain naturally get "connected up". We then use some NetworkX algorithms:
- `weakly_connected_components` to find the chains.
- `selfloop_edges` to find the stopped agents.
- `dfs_postorder_nodes` to traverse a chain.
- `simple_cycles` to find agents colliding head-on.
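As a minimal sketch of that idea (the agent positions here are made up for illustration, and this is not the actual Flatland implementation), a step's movements can be put into a `DiGraph` and queried with those algorithms like this:

```python
# Minimal sketch (hypothetical agents): nodes are (row, col) cells, a directed
# edge u -> v means "an agent wants to move from u to v", and a self-loop
# means the agent is stationary.
import networkx as nx

G = nx.DiGraph()
moves = {
    0: ((0, 2), (0, 3)),   # front of a chain moving right along row 0
    1: ((0, 1), (0, 2)),
    2: ((0, 0), (0, 1)),   # back of the chain
    3: ((5, 5), (5, 5)),   # stationary agent -> self-loop
    4: ((3, 0), (3, 1)),   # head-on pair
    5: ((3, 1), (3, 0)),
}
for agent, (here, there) in moves.items():
    G.add_edge(here, there, agent=agent)

# Chains (and lone agents) appear as weakly connected components.
chains = list(nx.weakly_connected_components(G))

# Stopped agents are the self-loops.
stopped = list(nx.selfloop_edges(G))

# Head-on collisions are the 2-cycles (self-loops show up as 1-cycles).
head_on = [c for c in nx.simple_cycles(G) if len(c) == 2]

# If the front of the chain is blocked at (0, 3), a post-order walk of the
# reversed graph visits that cell and every cell behind it, i.e. every agent
# that must also stop this step.
blocked = list(nx.dfs_postorder_nodes(G.reverse(copy=True), source=(0, 3)))

print(chains, stopped, head_on, blocked, sep="\n")
```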
We can also display a NetworkX graph very simply, but neatly, using GraphViz (see below).
**Does it run faster / slower?**
It seems to make almost no difference to the speed.
**How do you handle agents entering the env / spawning?**
For an agent in state READY_TO_DEPART we use a dummy cell of `(-1, agent_id)`. This means that if several agents try to enter the env in the same cell and in the same step, the agent with the lowest index will get to start first. It uses the same rule as above: the agent with the lowest index gets to enter an empty cell ahead of any others.
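A tiny sketch of that spawning convention, using the same node/edge convention as above (the agent indices here are made up for illustration):

```python
# Sketch of the READY_TO_DEPART convention: each not-yet-spawned agent gets a
# unique dummy node (-1, agent_id), so it never shares a node with another
# agent, while its desired entry cell is an ordinary (row, col) node.
import networkx as nx

G = nx.DiGraph()
entry_cell = (0, 0)
for agent_id in (2, 0, 7):               # all READY_TO_DEPART, all want (0, 0)
    G.add_edge((-1, agent_id), entry_cell, agent=agent_id)

# Several agents contend for the same empty entry cell; index order decides.
contenders = sorted(d["agent"] for _, _, d in G.in_edges(entry_cell, data=True))
print(contenders[0])                     # agent 0 spawns this step, the others wait
```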
We have added deadlock detection in the close-following code. This detects agents which are head-to-head, ie facing in opposing directions in adjacent cells, and thus will not be able to move for the rest of the episode. This deadlocked status is propagated to any other agents which are blocked by the opposing pair.
In the example below, agents 0 and 1 collide head on. The other agents are spaced out behind them and collide into them in subsequent steps.
The deadlock status is now recorded as element 5 of each agent's entry in the recorded episode: (row, col, dir, status, malfunction, deadlock).
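For example, assuming the recorded episode is a list of per-step snapshots, each holding one (row, col, dir, status, malfunction, deadlock) tuple per agent (that layout is an assumption for illustration), the deadlocked agents at a given step could be read back like this:

```python
def deadlocked_agents(step_snapshot):
    """Indices of agents whose deadlock flag (element 5) is set in one recorded step."""
    return [i for i, agent in enumerate(step_snapshot) if agent[5]]

# e.g. agents flagged as deadlocked by the end of the episode:
# print(deadlocked_agents(episode[-1]))
```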
__Bugs / Limitations__
The code does not currently count agents as deadlocked if they are also malfunctioning, or if they have chosen to stop moving voluntarily.
The code does not detect agents which are about to become deadlocked because they are heading towards each other on a stretch of track with no junctions or relevant targets between them.