@@ -152,6 +152,8 @@ Before running on the challenge dataset, your model will be run on the dummy dat
Your model will be run on an AWS g5.2xlarge node. This node has **8 vCPUs, 32 GB RAM, and one Nvidia A10G GPU with 24 GB VRAM**.
Before your model starts processing conversations, it is given up to an additional *5 minutes* to load models or preprocess data if needed.
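One way to make sure your setup stays inside the extra time budget is to time the loading step locally before submitting. The sketch below is a minimal, hypothetical harness (the `dummy_load` function stands in for your real model-loading code; nothing here is part of the official evaluator):

```python
import time

LOAD_BUDGET_SECONDS = 5 * 60  # extra setup time allowed before processing starts


def timed_load(load_fn):
    """Run a model-loading callable and report whether it fits the budget."""
    start = time.monotonic()
    result = load_fn()
    elapsed = time.monotonic() - start
    return result, elapsed, elapsed <= LOAD_BUDGET_SECONDS


# Hypothetical stand-in for real model loading / preprocessing.
def dummy_load():
    return "model"


model, elapsed, within_budget = timed_load(dummy_load)
print(within_budget)
```

Running the same check on hardware comparable to the g5.2xlarge gives a more realistic estimate, since local disk and network speeds differ from the evaluation node.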
## Local Evaluation
Participants can run the evaluation protocol locally, with or without the resource constraints posed by the challenge, to benchmark their models privately. See `local_evaluation.py` for details. You may change that file freely; your changes to `local_evaluation.py` will **NOT** be used for the competition.
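Conceptually, a local evaluation run just feeds conversations to your model and collects responses. The sketch below illustrates that shape only; the actual protocol is defined in `local_evaluation.py`, and the `respond` function and dummy inputs here are purely hypothetical:

```python
# Hypothetical stand-in for a participant model's response function;
# the real interface is whatever local_evaluation.py expects.
def respond(conversation):
    return f"echo: {conversation}"


# Toy conversations standing in for the dummy dataset.
dummy_conversations = ["hello", "how are you?"]

# Core loop: one response per conversation, collected for scoring.
responses = [respond(c) for c in dummy_conversations]
print(responses)
```

Because edits to `local_evaluation.py` are not used in the competition, you can freely instrument a loop like this with timing or memory logging while iterating locally.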