diff --git a/README.md b/README.md
index d8dc09bde2ace60244a901506eb25d1790acb74a..8e72c905d68535ab0073e15ade1be95b128add71 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,12 @@
-## Examples of scripts to train agents in the Flatland environment.
 
+# ⚠️ Deprecated repository
 
-# Torch Training
+This repository is deprecated! Please go to:
+
+#### **https://gitlab.aicrowd.com/flatland/flatland-examples**
+
+
+## Torch Training
 The `torch_training` folder shows an example of how to train agents with a DQN implemented in PyTorch.
 The links below provide introductions to training an agent on Flatland:
 
@@ -15,7 +20,7 @@ With the above introductions you will solve tasks like these and even more...
 ![Conflict_Avoidance](https://i.imgur.com/AvBHKaD.gif)
 
 
-# Sequential Agent
+## Sequential Agent
 This is a very simple baseline showing how the `complex_level_generator` generates feasible network configurations.
 If you run the `run_test.py` file, you will see a simple agent that solves the level by sequentially running each agent along its shortest path.
 This is very inefficient, but it solves all the instances generated by `complex_level_generator`. However, when scored in the AIcrowd competition, this agent fails because of the time it needs to solve an episode.