diff --git a/README.md b/README.md
index f1a67716a6783f364ecee5e5c2747f7621669167..7425c9c83889a1d7cfb3eae8bf36b74f0df4ecdf 100644
--- a/README.md
+++ b/README.md
@@ -157,6 +157,17 @@ This also includes instructions on [specifying your software runtime](docs/submi
 You can find more details about the hardware and system configuration in [docs/hardware-and-system-config.md](docs/hardware-and-system-config.md).
 In summary, we provide you `2` x [[NVIDIA T4 GPUs](https://www.nvidia.com/en-us/data-center/tesla-t4/)] in Phase 1; and `4` x [[NVIDIA T4 GPUs](https://www.nvidia.com/en-us/data-center/tesla-t4/)] in Phase 2.
 
+Your solution is given a fixed amount of time for inference; once the limit is reached, it is terminated immediately and no results will be available. The time limits are:
+
+| Phase  | Track 1 | Track 2 | Track 3 | Track 4 | Track 5 |
+| ------ | ------- | ------- | ------- | ------- | ------- |
+| **Phase 1**| 140 minutes | 40 minutes | 60 minutes | 60 minutes | 300 minutes |
+
+For reference, the baseline solution using zero-shot [Vicuna-7B](https://huggingface.co/lmsys/vicuna-7b-v1.5) (available [**here**](https://gitlab.aicrowd.com/aicrowd/challenges/amazon-kdd-cup-2024/amazon-kdd-cup-2024-starter-kit/-/blob/master/models/dummy_model.py)) takes approximately the following amounts of time:
+
+| Phase  | Track 1 | Track 2 | Track 3 | Track 4 | 
+| ------ | ------- | ------- | ------- | ------- | 
+| **Phase 1**| ~50 minutes | ~3 minutes | ~25 minutes | ~35 minutes | 
+
 
 ## 🧩 How are my model responses parsed by the evaluators ?
 Please refer to [parsers.py](parsers.py) for more details on how we parse your model responses.