From f745c09deb734b59c400d1c32a85682152a2bf44 Mon Sep 17 00:00:00 2001
From: mlerik <baerenjesus@gmail.com>
Date: Thu, 18 Jul 2019 15:57:21 +0000
Subject: [PATCH] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 1a1cde8..cdd54b5 100644
--- a/README.md
+++ b/README.md
@@ -21,6 +21,7 @@ The `RLLib_training` folder shows an example of how to train agents with  algori
 # Sequential Agent
 This is a very simple baseline to show you how the `complex_level_generator` generates feasible network configurations.
 If you run the `run_test.py` file you will see a simple agent that solves the level by sequentially running each agent along its shortest path.
+This is very inefficient, but it solves all the instances generated by `complex_level_generator`. However, when scored in the AIcrowd competition, this agent fails because of the time it takes to solve an episode.
 
 Here you see it in action:
 
-- 
GitLab