# Airborne Object Tracking Challenge Starter Kit
This repository is the main submission template and starter kit for the Airborne Object Tracking challenge! Clone it to compete now!
This repository contains:
- Documentation on how to submit your agent to the leaderboard
- Best practices and information on how we evaluate your agent
- Starter code for you to get started!
## Table of Contents
- Competition Procedure
- How to access and use the dataset
- How to start participating
  - Setup
- How do I specify my software runtime / dependencies?
- What should my code structure be like?
- How to make a submission
- Other Concepts
  - Time constraints
  - Local evaluation
- Important links
## Competition Procedure
The main task of the competition is to detect a collision threat reliably. In this challenge, you will train your agents locally and then upload them to AIcrowd (via git) to be evaluated.
The following is a high-level description of how this round works:
- Sign up to join the competition [on the AIcrowd website](https://www.aicrowd.com/challenges/airborne-object-tracking-challenge).
- Clone this repo and start developing your submissions.
- Train your models to detect objects and write your inference code in `test.py`.
- Submit your trained models to AIcrowd GitLab for evaluation (full instructions below). The automated evaluation setup will evaluate the submissions against the test dataset to compute and report the metrics on the competition leaderboard.
## How to access and use the dataset
The starter kit contains dataset exploration notebooks and helper functions for accessing the dataset. You can check the instructions for the same in `DATASET.md`.
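For a quick feel of the helper API, here is a minimal sketch; the `Dataset` class name, constructor arguments, and accessors below are assumptions for illustration, so consult the playground notebook for the real interface:

```python
# Hypothetical sketch of browsing the dataset with the starter kit helpers.
# The import path, constructor arguments, and accessors are assumptions --
# see data/dataset-playground.ipynb for the actual API.
from core.dataset import Dataset

dataset = Dataset(local_path="data/part1")    # assumed constructor
for flight_id in dataset.get_flight_ids():    # assumed accessor
    flight = dataset.get_flight(flight_id)    # assumed accessor
    print(flight_id, "has", len(flight.frames), "frames")
```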
## How to start participating
### Setup
- Add your SSH key to AIcrowd GitLab. You can add your SSH keys to your GitLab account by going to your profile settings. If you do not have an SSH key, you will first need to generate one.
- Clone the repository:

  ```bash
  git clone git@gitlab.aicrowd.com:amazon-prime-air/airborne-detection-starter-kit.git
  ```

- Install the competition-specific dependencies:

  ```bash
  cd airborne-detection-starter-kit
  pip3 install -r requirements.txt
  ```

- Run the local exploration notebook in `data/dataset-playground.ipynb` using the `jupyter notebook` command.
- Try out the random-prediction codebase in `test.py`.
## How do I specify my software runtime / dependencies?
We accept submissions with custom runtimes, so you don't need to worry about which libraries or frameworks to pick. The configuration typically includes `requirements.txt` (PyPI packages), `environment.yml` (conda environment), `apt.txt` (apt packages), or even your own `Dockerfile`.
Detailed information about configuring the runtime is available in the starter kit's documentation.
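For instance, a minimal `requirements.txt` might look like the following; the packages listed here are purely illustrative, not required by the challenge:

```text
numpy
opencv-python
torch
```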
## What should my code structure be like?
Please follow the example structure shared in the starter kit. The different files and directories have the following meaning:
```
.
├── aicrowd.json       # Submission meta information, like your username
├── apt.txt            # Packages to be installed inside the docker image
├── data               # Your local dataset copy; you don't need to upload it (read DATASET.md)
├── requirements.txt   # Python packages to be installed
├── test.py            # IMPORTANT: Your testing/inference phase code, must be derived from AirbornePredictor (example in test.py)
└── utility            # Utility scripts to provide a smoother experience
    ├── docker_build.sh
    ├── docker_run.sh
    ├── environ.sh
    └── verify_or_download_data.sh
```
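Since `test.py` must derive from `AirbornePredictor`, a skeleton along the following lines is what the evaluator expects. Note that the import path, hook names, and the `register_object_and_location` signature shown here are assumptions for illustration; mirror the example shipped in `test.py` rather than this sketch.

```python
# Hypothetical skeleton of a predictor derived from AirbornePredictor.
# Import path, hook names, and helper signatures are assumptions --
# copy the structure of the example in test.py for the real interface.
from evaluator.airborne_detection import AirbornePredictor

class MyPredictor(AirbornePredictor):
    def inference_setup(self):
        # One-time setup: load model weights, move the model to the GPU, etc.
        self.model = load_model("models/weights.pt")  # hypothetical helper

    def inference(self, flight_id):
        # Called once per flight: iterate over its frames and report detections.
        for frame_image in self.get_all_frame_images(flight_id):  # assumed helper
            for box, score in self.model.detect(frame_image):     # hypothetical
                self.register_object_and_location(                # assumed helper
                    class_name="airborne",
                    bbox=box,
                    confidence=score,
                )
```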
Finally, you must specify an AIcrowd submission JSON in `aicrowd.json` to be scored! The `aicrowd.json` of each submission should contain the following content:
```json
{
  "challenge_id": "airborne-detection-challenge",
  "authors": ["your-aicrowd-username"],
  "description": "sample description about your awesome agent",
  "license": "MIT",
  "gpu": true
}
```
This JSON is used to map your submission to the challenge, so please remember to use the correct `challenge_id` as specified above. Please also specify whether your code needs a GPU for the evaluation of your model: if you set `"gpu": true`, a GPU will be provided and used for the evaluation.
## How to make a submission
Push your code to your repository on AIcrowd GitLab and create a tag to trigger an evaluation, as sketched below. Best of luck!
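As a rough sketch of the usual AIcrowd git-based flow (the remote name, the `<your-username>` placeholder, and the tag are illustrative; evaluations on AIcrowd are typically triggered by pushing a tag that starts with `submission-`):

```bash
# Push your code to your own repository on gitlab.aicrowd.com, then tag it.
git remote add aicrowd git@gitlab.aicrowd.com:<your-username>/airborne-detection-starter-kit.git
git push aicrowd master
git tag submission-v0.1
git push aicrowd submission-v0.1
```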
## Other Concepts
### Time constraints
You need to make sure that your model can predict airborne objects for each flight within 1000 seconds; otherwise the submission will be marked as failed. (The exact timeout may be revised based on the compute budget.)
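To stay inside that budget, it helps to time your own per-flight inference locally; a minimal sketch, where `run_inference_on_flight` and `flight_id` are hypothetical stand-ins for your own entry point:

```python
# Sketch: check that per-flight inference fits the stated 1000-second budget.
import time

PER_FLIGHT_BUDGET_SECONDS = 1000

start = time.monotonic()
run_inference_on_flight(flight_id)  # hypothetical: your per-flight entry point
elapsed = time.monotonic() - start
print(f"flight took {elapsed:.1f}s (budget {PER_FLIGHT_BUDGET_SECONDS}s)")
assert elapsed < PER_FLIGHT_BUDGET_SECONDS, "over the per-flight time budget"
```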
### Local evaluation
You can also run the end-to-end evaluation on your own system. The scripts are available in the `core/metrics` folder.

## 📎 Important links
- Challenge page: https://www.aicrowd.com/challenges/airborne-object-tracking-challenge
