diff --git a/README.md b/README.md
index 1a1cde83672b22fba7e9a1535ec834216762a5f5..cdd54b5229cd88c94618782bf242595b453413f8 100644
--- a/README.md
+++ b/README.md
@@ -21,6 +21,23 @@ The `RLLib_training` folder shows an example of how to train agents with  algori
 # Sequential Agent
 This is a very simple baseline to show you how the `complex_level_generator` generates feasible network configurations.
 If you run the `run_test.py` file, you will see a simple agent that solves the level by sequentially running each agent along its shortest path.
+This is very inefficient, but it solves all the instances generated by `complex_level_generator`. However, when scored in the AIcrowd competition, this agent fails because of the time it needs to solve an episode.
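+
+Below is a minimal sketch of what such a sequential policy can look like. It is an illustration rather than the code in `run_test.py`: `get_shortest_path` and `action_towards` are hypothetical helpers standing in for whatever path and action utilities the script actually uses, and `4` corresponds to Flatland's `STOP_MOVING` action.
+
+```python
+def run_sequential(env):
+    """Move one agent at a time along its shortest path; all others wait."""
+    env.reset()
+    n_agents = env.get_num_agents()
+    for handle in range(n_agents):
+        path = get_shortest_path(env, handle)  # hypothetical helper
+        for cell in path:
+            # Every agent stops (4 = STOP_MOVING) except the active one.
+            actions = {h: 4 for h in range(n_agents)}
+            actions[handle] = action_towards(env, handle, cell)  # hypothetical helper
+            env.step(actions)
+```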
 
 Here you see it in action: