**The following is a high-level description of how this round works:**

1. **Sign up** to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/airborne-object-tracking-challenge).
2. **Clone** this repo and start developing your solution.
3. **Train** your models and write your code in `run.py`.
4. [**Submit**](#how-to-submit-a-model) your trained models to [AIcrowd Gitlab](https://gitlab.aicrowd.com) for evaluation [(full instructions below)](#how-to-submit-a-model). The automated evaluation setup will evaluate your submission against the test dataset and report the resulting metrics on the competition leaderboard.
# How to start participating
## Setup
1. **Add your SSH key** to AIcrowd GitLab
You can add your SSH Keys to your GitLab account by going to your profile settings [here](https://gitlab.aicrowd.com/profile/keys). If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
2. **Clone** the repository to your machine (see the commands below).
3. **Install** the Python dependencies listed in `requirements.txt`.
4. Try out the baseline model available in `run.py`.
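The clone and install steps could look like this; the repository URL below is a placeholder, so substitute the SSH URL of your own repository on AIcrowd GitLab:

```bash
# Clone your repository (placeholder URL; use your own repository's SSH address)
git clone git@gitlab.aicrowd.com:<your-username>/airborne-detection-starter-kit.git
cd airborne-detection-starter-kit

# Install the Python dependencies
pip install -r requirements.txt
```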
## How do I specify my software runtime / dependencies?
We accept submissions with custom runtimes, so you don't need to worry about which libraries or frameworks to pick from.
The configuration files typically include `requirements.txt` (PyPI packages), `environment.yml` (conda environment), `apt.txt` (apt packages), or even your own `Dockerfile`.
You can find detailed information about this in the 👉 [RUNTIME.md](/docs/RUNTIME.md) file.
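For example, a minimal `requirements.txt` could look like the following; the packages listed here are purely illustrative, not required by the challenge:

```
numpy
opencv-python
torch
```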
## What should my code structure be like?
Please follow the example structure of the starter kit.
The different files and directories have the following meaning:
```
.
├── aicrowd.json # Submission meta information - like your username
├── apt.txt # Packages to be installed inside docker image
├── data # Your local dataset copy - you don't need to upload it (read DATASET.md)
├── requirements.txt # Python packages to be installed
├── run.py # IMPORTANT: Your testing/inference phase code, must be derived from AirbornePredictor (example in test.py)
```
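For reference, here is a minimal sketch of what a `run.py` deriving from `AirbornePredictor` might look like. The import path, the method names (`inference_setup`, `inference`), and the helpers used below are assumptions based on the baseline referenced above; consult `test.py` in the starter kit for the authoritative interface.

```python
# Minimal, illustrative run.py sketch. Names below are assumptions;
# check test.py in the starter kit for the real AirbornePredictor interface.
from evaluator.airborne_detection import AirbornePredictor  # assumed import path


class MyPredictor(AirbornePredictor):
    def inference_setup(self):
        # One-time setup: load model weights, move the model to the GPU, etc.
        self.model = None  # replace with your trained model

    def inference(self, flight_id):
        # Called once per flight; iterate over its frames and report detections.
        # get_all_frame_images is assumed; see the baseline for the real helper.
        for frame_image in self.get_all_frame_images(flight_id):
            # Run your model here and register each detection, e.g.:
            # self.register_object_and_location(class_name, track_id,
            #                                   bbox, confidence, frame_id)
            pass


if __name__ == "__main__":
    MyPredictor().run()
```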
Finally, **you must specify an AIcrowd submission JSON in `aicrowd.json` to be scored!**
The `aicrowd.json` of each submission should contain the following content:
```json
{
    "challenge_id": "evaluations-api-airborne",
    "grader_id": "evaluations-api-airborne",
    "authors": ["aicrowd-bot"],
    "tags": "change-me",
    "description": "Random prediction model for Airborne challenge",
    "gpu": false
}
```
This JSON is used to map your submission to the challenge - so please remember to use the correct `challenge_id` as specified above.
Please specify whether your code will use a GPU for the evaluation of your model via the `gpu` field shown above. If you set it to `true`, a GPU will be provided and used for the evaluation.
## How to submit a model
👉 [SUBMISSION.md](/docs/SUBMISSION.md)
**Best of Luck** :tada: :tada:
# Other Concepts
## Time constraints
You need to make sure that your model finishes evaluation within 1500 seconds; otherwise, your evaluation will be marked as failed.
## Local evaluation
You can also test the end-to-end evaluation on your own system, as sketched below.
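Assuming your local dataset copy is in `data/` (see `DATASET.md`) and your dependencies are installed, a local run might be as simple as the following; the exact entry point may differ, so check the starter kit's documentation:

```bash
# Run your predictor end to end on the local dataset copy in data/
python run.py
```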