From 6e4f05922c128b8d48978ac035fdf80d556f43d6 Mon Sep 17 00:00:00 2001
From: mlerik <baerenjesus@gmail.com>
Date: Thu, 18 Jul 2019 15:54:51 +0000
Subject: [PATCH] Update README.md

---
 README.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/README.md b/README.md
index b1622a0..13973a5 100644
--- a/README.md
+++ b/README.md
@@ -18,3 +18,10 @@ With the above introductions you will solve tasks like these and even more...

 # RLLib Training
 The `RLLib_training` folder shows an example of how to train agents with algorithms implemented in the RLLib library, available at: <https://github.com/ray-project/ray/tree/master/python/ray/rllib>
+# Sequential Agent
+This is a very simple baseline to show you how the `complex_level_generator` generates feasible network configurations.
+If you run the `run_test.py` file, you will see a simple agent that solves the level by sequentially running each agent along its shortest path.
+
+Here you see it in action:
+
+
\ No newline at end of file
--
GitLab
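For context, the baseline described in the added README section can be sketched roughly as follows. This is only a minimal illustration of the sequential shortest-path idea, not the code in `run_test.py`: the `shortest_path_action` helper is hypothetical, and the environment interface (an `env.step()` that takes a dict of per-agent actions and returns a `done` dict with an `'__all__'` key, plus `env.get_num_agents()` and action value `4` meaning "stop moving") is assumed to follow flatland's multi-agent convention.

```python
# Rough sketch of a sequential shortest-path baseline (NOT the actual
# run_test.py from the repository). One agent at a time is driven along
# its shortest path while every other agent is held in place.

def shortest_path_action(env, handle):
    """Hypothetical helper: return the next action that keeps agent
    `handle` on its precomputed shortest path."""
    raise NotImplementedError("depends on the observation/path data used")


def run_sequential_baseline(env, max_steps=500):
    """Route agents one after another instead of all at once."""
    env.reset()
    n_agents = env.get_num_agents()          # assumed flatland-style helper
    done = {"__all__": False}
    steps = 0

    for handle in range(n_agents):
        # Step until this agent reaches its target (or the episode times out).
        while not done.get(handle, False) and steps < max_steps:
            actions = {h: 4 for h in range(n_agents)}   # 4 ~ STOP_MOVING (assumed)
            actions[handle] = shortest_path_action(env, handle)
            _, rewards, done, _ = env.step(actions)
            steps += 1
        if done.get("__all__", False):
            break
    return done
```

The point of the design is that, on levels produced by a feasible generator, each agent can be routed greedily along its own shortest path as long as the other agents stand still, which makes it a convenient sanity check rather than a competitive policy.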