Instructions to make submissions to the [SBB CFF Flatland Challenge](https://www.aicrowd.com/challenges/flatland-challenge).
Participants will have to submit their code, along with packaging specifications, and the evaluator will automatically build a Docker image and execute their agent against an arbitrary number of pre-generated Flatland environments.
### Setup
- **docker** : Install it by following the instructions [here](https://docs.docker.com/install/linux/docker-ce/ubuntu/).
- **aicrowd-repo2docker** : Install it via pip :
```sh
pip install aicrowd-repo2docker
```
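Optionally, you can sanity-check that both tools are available on your `PATH`. This is only a quick check, and it assumes the `aicrowd-repo2docker` fork keeps upstream repo2docker's `--version` flag :
```sh
# Print the installed versions of both tools (output will vary)
docker --version
aicrowd-repo2docker --version
```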
### Dependencies
- **Anaconda** : Install it by following the instructions [here](https://www.anaconda.com/download). At least version `4.5.11` is required to correctly populate `environment.yml`.
- **flatland-rl** : Install it by following the instructions [here](http://flatland-rl-docs.s3-website.eu-central-1.amazonaws.com/readme.html).
**IMPORTANT** : Please note that you will need `flatland-rl` version `>=0.3.9` to be able to submit; you can install or update it inside your anaconda env as shown below.
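For example, a fresh setup along the following lines should satisfy this requirement. This is only a sketch : the env name `flatland-rl` and the Python version are assumptions, adjust them to your setup.
```sh
# Create and activate a dedicated conda env (name and Python version are just examples)
conda create -n flatland-rl python=3.6 -y
conda activate flatland-rl
# Install or upgrade flatland-rl to a version that is allowed to submit
pip install -U "flatland-rl>=0.3.9"
```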
From within your clone of this starter kit :
```sh
cd flatland-challenge-starter-kit
```
In this exercise, we assume that you will download the test-envs provided at : [https://www.aicrowd.com/challenges/flatland-challenge/dataset_files](https://www.aicrowd.com/challenges/flatland-challenge/dataset_files), and will untar them inside `./scratch/test-envs`, so that you have a directory structure similar to :
```
.
└── scratch
    └── test-envs
        ├── Test_0
        │   ├── Level_0.pkl
        │   ├── ...
        ...
```
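For reference, a download-and-untar step along these lines produces the layout above. This is only a sketch : the archive name `test-envs.tgz` is an assumption, and you may need to adjust the target directory depending on how the archive you downloaded is laid out.
```sh
# Unpack the pre-generated test environments into ./scratch/test-envs
# (archive name is a placeholder; use the file you actually downloaded)
mkdir -p ./scratch/test-envs
tar xvzf test-envs.tgz -C ./scratch/test-envs
```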
The software runtime is specified by exporting your `conda` env to the root of your repository by doing :
```sh
# The included environment.yml is generated by the command below, and you do not need to run it again
# if you did not add any custom dependencies
conda env export --no-build > environment.yml
# Note the `--no-build` flag, which is important if you want your anaconda env to be replicable across all platforms
```
This `environment.yml` file will be used to recreate the `conda environment` inside the Docker container.
This repository includes an example `environment.yml`.
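If you want to verify locally that the exported file is self-contained, you can recreate the env from it. This is only a sketch; `flatland-rl-check` is just an example name :
```sh
# Rebuild a throwaway env from the exported file and activate it
conda env create -n flatland-rl-check -f environment.yml
conda activate flatland-rl-check
```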
You can specify your software environment by using all the [available configuration options of repo2docker](https://repo2docker.readthedocs.io/en/latest/config_files.html). (But please remember to use [aicrowd-repo2docker](https://pypi.org/project/aicrowd-repo2docker/) to have GPU support)
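To test the packaging locally, you can also build the image yourself with `aicrowd-repo2docker`. The sketch below assumes the fork exposes the same flags as upstream repo2docker; the image name is a placeholder.
```sh
# Build (but do not run) a Docker image from the current repository,
# using the aicrowd user that the evaluator expects
aicrowd-repo2docker --no-run \
  --user-id 1001 \
  --user-name aicrowd \
  --image-name my-flatland-agent \
  .
```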
# What should my code structure be like ?
Please follow the structure documented in the included [run.py](https://github.com/AIcrowd/flatland-challenge-starter-kit/blob/master/run.py) to adapt your own agent code.
...
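As a mental model of that structure, here is a heavily simplified sketch of the evaluation loop, assuming the `FlatlandRemoteClient` interface used by the included `run.py`. Exact signatures and return values can differ between `flatland-rl` versions, so treat the bundled `run.py` as the authoritative reference.
```python
import numpy as np

from flatland.envs.observations import TreeObsForRailEnv
from flatland.evaluators.client import FlatlandRemoteClient

# The remote client proxies a local-looking env API to the evaluation service
remote_client = FlatlandRemoteClient()

def my_controller(observation, number_of_agents):
    # Placeholder policy: replace this with your trained agent
    return {agent: np.random.randint(0, 5) for agent in range(number_of_agents)}

observation_builder = TreeObsForRailEnv(max_depth=2)

while True:
    # Ask the evaluator for the next pre-generated environment
    observation = remote_client.env_create(obs_builder_object=observation_builder)
    if not observation:
        # All environments have been served
        break
    while True:
        action = my_controller(observation, remote_client.env.get_num_agents())
        observation, all_rewards, done, info = remote_client.env_step(action)
        if done["__all__"]:
            break

# Signal that the evaluation of all environments is complete
print(remote_client.submit())
```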
This is used to map your submission to the said challenge, so please remember to set it to the correct value.
If you set `debug` to `true`, the evaluation will run on a separate set of 20 environments, and the logs from your submitted code (if it fails) will be made available to you to help you debug.
### Packaging of your software environment
You can specify your software environment by using all the [available configuration options of repo2docker](https://repo2docker.readthedocs.io/en/latest/config_files.html). (But please remember to use [aicrowd-repo2docker](https://pypi.org/project/aicrowd-repo2docker/) to have GPU support)
The recommended way is to use an Anaconda **environment.yml** file.
```sh
# The included environment.yml is generated by the command below, and you do not need to run it again
# if you did not add any custom dependencies
conda env export --no-build > environment.yml
# Note the `--no-build` flag, which is important if you want your anaconda env to be replicable across all platforms
```
### Code Entrypoint
The evaluator will use `/home/aicrowd/run.sh` as the entrypoint, so please remember to have a `run.sh` at the root of your repository which can set any necessary environment variables and then start executing your actual code. This repository includes a sample `run.sh` file.
If you are using a Dockerfile to specify your software environment, please remember to create an `aicrowd` user and place the entrypoint code at `run.sh`.
If you are unsure what this is all about, you can leave `run.sh` as is and instead focus on `run.py`, which is called from within `run.sh`.
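For reference, a minimal `run.sh` along these lines is usually enough. This is only a sketch : `MY_MODEL_PATH` is a purely hypothetical variable, remove it or replace it with whatever your agent actually needs.
```sh
#!/bin/bash
# Set any environment variables your agent needs, then hand off to run.py
export MY_MODEL_PATH="./checkpoints/latest"   # hypothetical; adjust or delete
python ./run.py
```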