diff --git a/README.md b/README.md
index b1622a0f7f06c86fe424b9380fa127b2b0315d3b..cdd54b5229cd88c94618782bf242595b453413f8 100644
--- a/README.md
+++ b/README.md
@@ -18,3 +18,11 @@ With the above introductions you will solve tasks like these and even more...
 # RLLib Training
 The `RLLib_training` folder shows an example of how to train agents with algorithms implemented in the RLLib library, available at: <https://github.com/ray-project/ray/tree/master/python/ray/rllib>
 
+# Sequential Agent
+This is a very simple baseline to show how the `complex_level_generator` generates feasible network configurations.
+If you run the `run_test.py` file, you will see a simple agent that solves the level by running each agent, one after another, along its shortest path.
+This is very inefficient, but it solves every instance generated by `complex_level_generator`. However, when scored for the AIcrowd competition, this agent fails because of the time it needs to solve an episode.
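
The sequential idea can be sketched without the Flatland environment itself. The sketch below is an illustrative assumption, not the actual `run_test.py` code: it uses BFS shortest paths on a toy occupancy grid, and the names `shortest_path` and `run_sequentially` are hypothetical.

```python
# Illustrative sketch of a sequential baseline: each agent completes its
# shortest path before the next one starts, so paths never conflict in time.
# The grid and helper names are assumptions, not Flatland's actual API.
from collections import deque


def shortest_path(grid, start, goal):
    """BFS shortest path on a 4-connected grid (0 = free cell, 1 = wall)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None


def run_sequentially(grid, agents):
    """Plan one agent at a time; earlier agents finish before later ones move."""
    return [shortest_path(grid, start, goal) for start, goal in agents]
```

Because agents move strictly one after another, the episode length grows as the sum of all individual path lengths, which is why this baseline exceeds the time budget under competition scoring.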
+
+Here you see it in action:
+
+![Sequential_Agent](https://i.imgur.com/DsbG6zK.gif)
\ No newline at end of file