From c977124a6bc39e6ba8c74d3d7d4ba9caaddaaf01 Mon Sep 17 00:00:00 2001
From: flaurent <florian.laurent@gmail.com>
Date: Tue, 7 Jul 2020 11:16:08 +0200
Subject: [PATCH] Trying to make evaluations deterministic

---
 README.md | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index d8dc09b..8e72c90 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,12 @@
-## Examples of scripts to train agents in the Flatland environment.
+# ⚠️ Deprecated repository
 
-# Torch Training
+This repository is deprecated! Please go to:
+
+#### **https://gitlab.aicrowd.com/flatland/flatland-examples**
+
+
+## Torch Training
 
 The `torch_training` folder shows an example of how to train agents with a DQN implemented in pytorch.
 
 In the links below you will find introductions to training an agent on Flatland:
@@ -15,7 +20,7 @@
 With the above introductions you will solve tasks like these and even more...
 
-# Sequential Agent
+## Sequential Agent
 This is a very simple baseline to show you how the `complex_level_generator` generates feasible network configurations.
 If you run the `run_test.py` file you will see a simple agent that solves the level by sequentially running each agent along its shortest path.
 This is very inefficient, but it solves all the instances generated by `complex_level_generator`. However, when scored in the AIcrowd competition, this agent fails because of the time it needs to solve an episode.
 
-- 
GitLab
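For readers curious about the sequential baseline described in the patched README, here is a minimal, self-contained sketch of the idea: move one agent at a time along its shortest path while all the others wait. This is a toy illustration on a plain grid with BFS, not the actual `run_test.py` or the Flatland API; every name in it (`bfs_shortest_path`, `run_sequentially`, the set-of-cells grid representation) is hypothetical.

```python
# Toy sketch of the "sequential agent" idea: agents move strictly one
# after another, never simultaneously. NOT the Flatland API or the
# actual run_test.py implementation -- all names here are illustrative.
from collections import deque


def bfs_shortest_path(free_cells, start, goal):
    """Return the list of cells from start to goal through free_cells."""
    queue, prev = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to start, then reverse it.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None  # goal not reachable from start


def run_sequentially(free_cells, agents):
    """Move agents one after another along their shortest paths.

    agents is a list of (start, goal) pairs. Because only one agent
    moves at a time, conflicts are impossible by construction -- but
    the episode takes the SUM of all path lengths rather than their
    maximum, which is why such a baseline is so slow.
    """
    total_steps = 0
    for start, goal in agents:
        path = bfs_shortest_path(free_cells, start, goal)
        if path is None:
            raise ValueError(f"no path from {start} to {goal}")
        total_steps += len(path) - 1  # one time step per move
    return total_steps


# Two agents on an open 3x3 grid: each needs 4 moves, so 8 steps in
# total sequentially, versus 4 if they could move simultaneously.
cells = {(x, y) for x in range(3) for y in range(3)}
print(run_sequentially(cells, [((0, 0), (2, 2)), ((2, 0), (0, 2))]))
```

The design trade-off matches the README's caveat: scheduling agents strictly one at a time sidesteps conflict resolution entirely, but episode duration grows with the number of agents, so the baseline solves every `complex_level_generator` instance yet exceeds the time an episode may take when scored in the AIcrowd competition.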