Commit e99d42bd in Flatland, authored 5 years ago by hagrid67

fix flake8

Parent: 386b6bc5
No related branches or tags found.
Merge request: !261 "170 save episode 157 render malfunction"
Pipeline #2730: passed (5 years ago)

1 changed file: flatland/envs/rail_env.py (+4 −2, 4 additions and 2 deletions)
@@ -218,7 +218,7 @@ class RailEnv(Environment):
         # global numpy array of agents position, -1 means that the cell is free, otherwise the agent handle is placed
         # inside the cell
         self.agent_positions: np.ndarray = np.zeros((height, width), dtype=int) - 1
         # save episode timesteps ie agent positions, orientations. (not yet actions / observations)
         self.record_steps = record_steps  # whether to save timesteps
         self.cur_episode = []  # save timesteps in here
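The hunk above initializes a `(height, width)` occupancy grid filled with -1, where -1 marks a free cell and any other value is an agent handle. A minimal sketch of that bookkeeping, assuming the diff's names; the `place_agent` and `cell_free` helpers are hypothetical and only illustrate how the sentinel is used:

```python
import numpy as np

# (height, width) grid initialised to -1: -1 means the cell is free,
# any other value is the handle of the agent occupying it.
height, width = 4, 6
agent_positions: np.ndarray = np.zeros((height, width), dtype=int) - 1


def place_agent(grid: np.ndarray, handle: int, row: int, col: int) -> None:
    """Record that agent `handle` occupies cell (row, col). Hypothetical helper."""
    grid[row, col] = handle


def cell_free(grid: np.ndarray, row: int, col: int) -> bool:
    """A cell is free while it still holds the sentinel -1. Hypothetical helper."""
    return bool(grid[row, col] == -1)


place_agent(agent_positions, 0, 2, 3)
```

Subtracting 1 from `np.zeros` is a compact way to fill the whole array with the -1 sentinel in one vectorized step.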
@@ -578,7 +578,8 @@ class RailEnv(Environment):
                         self.rewards_dict[i_agent] += self.stop_penalty

-                if not agent.moving and not (action == RailEnvActions.DO_NOTHING or action == RailEnvActions.STOP_MOVING):
+                if not agent.moving and not (
+                        action == RailEnvActions.DO_NOTHING or action == RailEnvActions.STOP_MOVING):
                     # Allow agent to start with any forward or direction action
                     agent.moving = True
                     self.rewards_dict[i_agent] += self.start_penalty
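The branch in this hunk starts a stopped agent on any action other than `DO_NOTHING` or `STOP_MOVING`, charging the start penalty once. A self-contained sketch of just that branch; the `RailEnvActions` values, the `Agent` stub, and the penalty magnitude are assumptions, only the condition mirrors the diff:

```python
from enum import IntEnum


class RailEnvActions(IntEnum):
    # Assumed values; only the two names tested below appear in the diff.
    DO_NOTHING = 0
    MOVE_LEFT = 1
    MOVE_FORWARD = 2
    MOVE_RIGHT = 3
    STOP_MOVING = 4


class Agent:
    """Stub agent carrying only the `moving` flag used by the branch."""

    def __init__(self) -> None:
        self.moving = False


start_penalty = -1.0  # assumed magnitude
rewards_dict = {0: 0.0}
agent = Agent()
i_agent = 0
action = RailEnvActions.MOVE_FORWARD

# A stopped agent starts moving on any forward/direction action and
# pays the start penalty once.
if not agent.moving and not (
        action == RailEnvActions.DO_NOTHING or action == RailEnvActions.STOP_MOVING):
    agent.moving = True
    rewards_dict[i_agent] += start_penalty
```

Had `action` been `STOP_MOVING` or `DO_NOTHING`, the condition would be false and the agent would stay stopped with no start penalty.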
@@ -939,6 +940,7 @@ class RailEnv(Environment):
             # msgpack.packb(msg_data, use_bin_type=True)
         with open(filename, "wb") as file_out:
             file_out.write(msgpack.packb(dict_data))
+
     def load(self, filename):
         """
         Load environment with distance map from a file
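The `save` code above serializes a dict with msgpack and writes it in binary mode; `load` reads it back. Since msgpack is a third-party package, this sketch swaps in stdlib `pickle` to show the same write-then-read round trip; the `save`/`load`/`dict_data` names mirror the diff, but the serializer is deliberately different:

```python
import os
import pickle
import tempfile


def save(filename: str, dict_data: dict) -> None:
    # Same shape as the diff's save(): serialize the dict, write bytes.
    with open(filename, "wb") as file_out:
        file_out.write(pickle.dumps(dict_data))


def load(filename: str) -> dict:
    # Inverse of save(): read bytes, deserialize back into a dict.
    with open(filename, "rb") as file_in:
        return pickle.loads(file_in.read())


fd, path = tempfile.mkstemp()
os.close(fd)
try:
    save(path, {"episode": [1, 2, 3]})
    restored = load(path)
finally:
    os.remove(path)
```

The real code's `msgpack.packb(dict_data)` / `msgpack.unpackb(...)` pair follows the same dumps/loads pattern, with msgpack producing a compact cross-language binary encoding instead of Python-specific pickle bytes.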