Commit e2b4c243 authored by hagrid67

replace input command with hardwired command strings

parent 65397f68
%% Cell type:markdown id: tags:
### Simple Example 3 - Manual Control
By default this runs a few hardwired "move forward" actions for two agents, rendering the environment in a separate window.
If you uncomment the `input` line below, it opens a text box in the Jupyter notebook, allowing basic manual control.
For example, enter `"0 2 s<enter>"` to tell agent 0 to move forward, and then step the environment.
You should be able to see the red agent step forward, and get a reward from the env, looking like this:
`Rewards: {0: -1.0, 1: -1.0} [done= {0: False, 1: False, '__all__': False} ]`
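To make the command syntax concrete, here is a rough illustration (not an executable part of the notebook) of how such a string maps onto the action dictionary passed to `env.step()`:
``` python
# Illustration only: the parsing loop in the cells below builds this mapping.
cmd = "0 2 1 2 s"            # agent 0 -> action 2 (forward), agent 1 -> action 2, 's' = step
action_dict = {0: 2, 1: 2}   # what gets passed to env.step() when the 's' token is reached
```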
Note that this example is set up to use the straightforward "PIL" renderer - without the special SBB artwork!
The agent observations are displayed as squares of varying sizes, with a paler version of the agent colour. The targets are half-size squares in the full agent colour.
You can switch to the "PILSVG" renderer which is prettier but currently renders the agents one step behind, because it needs to know which way the agent is turning. This can be confusing if you are debugging step-by-step.
The image below is what the separate window should look like.
%% Cell type:markdown id: tags:
![simple_example_3.png](simple_example_3.png)
%% Cell type:code id: tags:
``` python
import random
import numpy as np
import time
from flatland.envs.generators import random_rail_generator
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.rail_env import RailEnv
from flatland.utils.rendertools import RenderTool
```
%% Cell type:code id: tags:
``` python
random.seed(1)
np.random.seed(1)
env = RailEnv(width=7,
              height=7,
              rail_generator=random_rail_generator(),
              number_of_agents=2,
              obs_builder_object=TreeObsForRailEnv(max_depth=2))

# Print the observation vector for each agent
obs, all_rewards, done, _ = env.step({0: 0})
for i in range(env.get_num_agents()):
    env.obs_builder.util_print_obs_subtree(tree=obs[i], num_features_per_node=7)

env_renderer = RenderTool(env, gl="PIL")
# env_renderer = RenderTool(env, gl="PILSVG")
env_renderer.renderEnv(show=True, frames=True)

print("Manual control: s=perform step, q=quit, [agent id] [1-2-3 action] "
      "(turnleft+move, move to front, turnright+move)")
```
%% Cell type:code id: tags:
``` python
for step in range(10):
    # This is an example command, setting agent 0's action to 2 (move forward)
    # and agent 1's action to 2, then stepping the environment.
    cmd = "0 2 1 2 s"

    # Uncomment this input statement if you want to try interactive manual commands
    # cmd = input(">> ")

    cmds = cmd.split(" ")
    action_dict = {}
    i = 0
    while i < len(cmds):
        if cmds[i] == 'q':
            import sys
            sys.exit()
        elif cmds[i] == 's':
            obs, all_rewards, done, _ = env.step(action_dict)
            action_dict = {}
            print("Rewards: ", all_rewards, "  [done=", done, "]")
        else:
            agent_id = int(cmds[i])
            action = int(cmds[i + 1])
            action_dict[agent_id] = action
            i = i + 1
        i += 1
    env_renderer.renderEnv(show=True, frames=True)
    time.sleep(0.3)
```
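%% Cell type:markdown id: tags:
As a variation, the command parsing above can be factored into a small helper. The sketch below is illustrative only (the `parse_commands` name is not part of the notebook): it accepts the same `agent-id action ... s / q` syntax and yields one action dictionary per `s` token, ending the iteration on `q` instead of calling `sys.exit()`.
%% Cell type:code id: tags:
``` python
def parse_commands(cmd):
    """Yield an action dict for every 's' (step) token in a command string.

    Hypothetical helper, not part of the original notebook.
    """
    action_dict = {}
    tokens = cmd.split(" ")
    i = 0
    while i < len(tokens):
        if tokens[i] == 'q':            # quit: stop yielding steps
            return
        elif tokens[i] == 's':          # step: emit the actions collected so far
            yield action_dict
            action_dict = {}
        else:                           # otherwise: a pair of agent id and action
            action_dict[int(tokens[i])] = int(tokens[i + 1])
            i += 1
        i += 1

# Example: the hardwired command above yields one step with both agents moving forward.
for actions in parse_commands("0 2 1 2 s"):
    obs, all_rewards, done, _ = env.step(actions)
    print("Rewards: ", all_rewards, "  [done=", done, "]")
    env_renderer.renderEnv(show=True, frames=True)
```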