Commit 665f39b5 authored by spmohanty

First commit

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# celery beat schedule file
celerybeat-schedule
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
scratch/test-envs/
# Frequently Asked Questions
A non-exhaustive list of Frequently Asked Questions for the [Flatland Challenge](https://www.aicrowd.com/challenges/flatland-challenge).
## How do I locally build a docker image out of my submission?
* Install Dependencies
  - **docker** : By following the instructions [here](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
  - **aicrowd-repo2docker**
    ```sh
    pip install aicrowd-repo2docker
    ```
* Build Image
  ```sh
  sudo aicrowd-repo2docker --no-run \
      --user-id 1001 \
      --user-name aicrowd \
      --image-name my-random-agent \
      --debug .
  ```
# Author
Sharada Mohanty <https://twitter.com/MeMohanty>
![AIcrowd-Logo](https://raw.githubusercontent.com/AIcrowd/AIcrowd/master/app/assets/images/misc/aicrowd-horizontal.png)
# Flatland Challenge Starter Kit
[![gitter-badge](https://badges.gitter.im/AIcrowd-HQ/flatland-rl.png)](https://gitter.im/AIcrowd-HQ/flatland-rl)
Instructions to make submissions to the [SBB CFF Flatland Challenge](https://www.aicrowd.com/challenges/flatland-challenge).
Participants will have to submit their code, with packaging specifications, and the evaluator will automatically build a docker image and execute their agent against an arbitrary number of pre-generated flatland environments.
### Setup
- **docker** : By following the instructions [here](https://docs.docker.com/install/linux/docker-ce/ubuntu/)
- **aicrowd-repo2docker**
  ```sh
  pip install aicrowd-repo2docker
  ```
- **Anaconda** : By following the instructions [here](https://www.anaconda.com/download). At least version `4.5.11` is required to correctly populate `environment.yml`.
- **flatland-rl** : By following the instructions [here](http://flatland-rl-docs.s3-website.eu-central-1.amazonaws.com/readme.html)
  **IMPORTANT** : Please note that you will need `flatland-rl` version `>=0.3.5` to be able to submit (a quick version check is shown after this list), which you can update in your anaconda env by :
  ```sh
  pip install -U flatland-rl
  ```
- **Your code specific dependencies**
  ```sh
  # If, say, you want to install PyTorch
  conda install pytorch torchvision -c pytorch
  ```
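To verify that your environment meets the `flatland-rl` version requirement mentioned above, you can check the installed version with:
```sh
pip show flatland-rl
```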
### Clone repository
```
git clone git@github.com:AIcrowd/flatland-challenge-starter-kit.git
cd flatland-challenge-starter-kit
```
### Test Submission Locally
```
cd flatland-challenge-starter-kit
export IMAGE_NAME="flatland-random-agent"
# Build docker image for your submission
./debug_build.sh
# In a separate tab : run redis server
./run_redis.sh
# In a separate tab : run local grader
./run_local_grader.sh
# In a separate tab :
# Finally, run your agent locally by :
./docker_run.sh
```
# How do I specify my software runtime?
The software runtime is specified by exporting your `conda` env to the root
of your repository by doing :
```
conda env export --no-build > environment.yml
```
This `environment.yml` file will be used to recreate the `conda environment` inside the Docker container.
This repository includes an example `environment.yml`.
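For reference, here is an abridged view of the included `environment.yml` (the full file is at the repository root; only a few representative pins are shown):
```yaml
name: flatland-rl-test
channels:
  - defaults
dependencies:
  - python=3.6.8
  - pip=19.1.1
  - pip:
    - flatland-rl==0.3.6
    - numpy==1.17.0
prefix: /home/mohanty/anaconda3/envs/flatland-rl-test
```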
# What should my code structure be like?
Please follow the structure documented in the included [run.py](https://github.com/AIcrowd/flatland-challenge-starter-kit/blob/master/run.py) to adapt
your already existing code to the required structure for this round.
## Important Concepts
### Repository Structure
- `aicrowd.json`
Each repository should have an `aicrowd.json` with the following content :
```json
{
    "challenge_id": "flatland-challenge-2019",
    "grader_id": "flatland-challenge-2019",
    "authors": ["your-aicrowd-username"],
    "description": "sample description about your awesome agent",
    "license": "MIT",
    "debug": false
}
```
This is used to map your submission to the challenge, so please remember to use the correct `challenge_id` and `grader_id` as specified above.
If you set `debug` to `true`, then the evaluation will run on a separate set of 20 environments, and the logs from your submitted code (if it fails) will be made available to you to help you debug.
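For example, to opt into the debug evaluation described above, you would flip only the `debug` flag (a sketch; keep the other fields exactly as specified):
```json
{
    "challenge_id": "flatland-challenge-2019",
    "grader_id": "flatland-challenge-2019",
    "authors": ["your-aicrowd-username"],
    "description": "sample description about your awesome agent",
    "license": "MIT",
    "debug": true
}
```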
### Packaging of your software environment
You can specify your software environment by using all the [available configuration options of repo2docker](https://repo2docker.readthedocs.io/en/latest/config_files.html). (But please remember to use [aicrowd-repo2docker](https://pypi.org/project/aicrowd-repo2docker/) to have GPU support)
The recommended way is to use Anaconda configuration files using **environment.yml** files.
```sh
# The included environment.yml was generated by the command below; you do not need
# to run it again if you did not add any custom dependencies
conda env export --no-build > environment.yml
# Note the `--no-build` flag, which is important if you want your anaconda env to be replicable across platforms
```
### Debugging the packaged software environment
If you have issues with your submission because of your software environment and dependencies, you can debug them by first building the docker image and then getting a shell inside the image by :
```
docker run --net=host -it $IMAGE_NAME /bin/bash
```
and then exploring to find the cause of the issue.
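Once inside the shell, a few quick checks usually narrow the problem down (a sketch; the env name `flatland-rl-test` comes from the `name:` field of this repository's `environment.yml` and may differ for yours):
```sh
conda env list                    # the env recreated from environment.yml should be listed
source activate flatland-rl-test  # activate it (name taken from environment.yml)
python -c "import flatland"       # verify that flatland-rl is importable
pip list | grep flatland          # check which flatland-rl version was installed
```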
### Code Entrypoint
The evaluator will use `/home/aicrowd/run.sh` as the entrypoint, so please remember to have a `run.sh` at the root of your repository, which can instantiate any necessary environment variables and start executing your actual code. This repository includes a sample `run.sh` file.
If you are using a Dockerfile to specify your software environment, please remember to create an `aicrowd` user and place the entrypoint code in `run.sh`.
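For reference, the `run.sh` included in this repository is as minimal as it gets; it only launches the agent:
```sh
#!/bin/bash
python ./run.py
```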
## Submission
To make a submission, you will have to create a private repository on [https://gitlab.aicrowd.com/](https://gitlab.aicrowd.com/).
You will have to add your SSH Keys to your GitLab account by following the instructions [here](https://docs.gitlab.com/ee/gitlab-basics/create-your-ssh-keys.html).
If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
Then you can create a submission by making a _tag push_ to your repository on [https://gitlab.aicrowd.com/](https://gitlab.aicrowd.com/).
**Any tag push (where the tag name begins with "submission-") to your private repository is considered a submission**
Then you can add the correct git remote, and finally submit by doing :
```
cd flatland-challenge-starter-kit
# Add AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/flatland-challenge-starter-kit.git
git push aicrowd master
# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1
# Note : If the contents of your repository (i.e. the latest commit hash) do not change,
# then pushing a new tag will **not** trigger a new evaluation.
```
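Since identical commit hashes are not re-evaluated, one way to re-trigger an evaluation without changing your code is an empty commit followed by a new tag (a sketch, not an official workflow):
```sh
git commit --allow-empty -m "Trigger re-evaluation"
git push aicrowd master
git tag -am "submission-v0.2" submission-v0.2
git push aicrowd submission-v0.2
```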
You should now be able to see the details of your submission at :
[gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/flatland-challenge-starter-kit/issues](https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/flatland-challenge-starter-kit/issues)
**NOTE**: Remember to update your username in the link above :wink:
At the link above, you should start seeing something like this take shape (each of the steps can take a bit of time, so please be patient :wink:) :
![](https://i.imgur.com/Kc7M8zH.png)
and if everything works out correctly, then you should be able to see the final scores like this :
![](https://i.imgur.com/9RT2jFi.png)
**Best of Luck** :tada: :tada:
# Author
Sharada Mohanty <https://twitter.com/MeMohanty>
{
    "challenge_id": "aicrowd_flatland_challenge_2019",
    "grader_id": "aicrowd_flatland_challenge_2019",
    "authors": ["mohanty"],
    "description": "Random Agent for Flatland Challenge.",
    "debug": false
}
curl
git
vim
ssh
gcc
#!/bin/bash
if [ -e environ_secret.sh ]
then
    echo "Note: Gathering environment variables from environ_secret.sh"
    source environ_secret.sh
else
    echo "Note: Gathering environment variables from environ.sh"
    source environ.sh
fi
# Expected env variables are defined in environ.sh
# Note: docker requires an absolute host path for bind mounts, hence $(pwd)
sudo docker run \
    --net=host \
    -v "$(pwd)/scratch/test-envs":/flatland_envs:z \
    -it ${IMAGE_NAME}:${IMAGE_TAG} \
    /home/aicrowd/run.sh
name: flatland-rl-test
channels:
  - defaults
dependencies:
  - _libgcc_mutex=0.1
  - ca-certificates=2019.5.15
  - certifi=2019.6.16
  - libedit=3.1.20181209
  - libffi=3.2.1
  - libgcc-ng=9.1.0
  - libstdcxx-ng=9.1.0
  - ncurses=6.1
  - openssl=1.1.1c
  - pip=19.1.1
  - python=3.6.8
  - readline=7.0
  - setuptools=41.0.1
  - sqlite=3.29.0
  - tk=8.6.8
  - wheel=0.33.4
  - xz=5.2.4
  - zlib=1.2.11
  - pip:
    - atomicwrites==1.3.0
    - attrs==19.1.0
    - bleach==3.1.0
    - cairocffi==1.0.2
    - cairosvg==2.4.0
    - cffi==1.12.3
    - chardet==3.0.4
    - click==7.0
    - crowdai-api==0.1.21
    - cssselect2==0.2.1
    - cycler==0.10.0
    - defusedxml==0.6.0
    - docutils==0.15.1
    - filelock==3.0.12
    - flatland-rl==0.3.6
    - idna==2.8
    - importlib-metadata==0.19
    - importlib-resources==1.0.2
    - kiwisolver==1.1.0
    - lxml==4.4.0
    - matplotlib==3.1.1
    - more-itertools==7.2.0
    - msgpack==0.6.1
    - msgpack-numpy==0.4.4.3
    - numpy==1.17.0
    - packaging==19.0
    - pandas==0.25.0
    - pillow==6.1.0
    - pkginfo==1.5.0.1
    - pluggy==0.12.0
    - py==1.8.0
    - pyarrow==0.14.1
    - pycparser==2.19
    - pygments==2.4.2
    - pyparsing==2.4.1.1
    - pytest==5.0.1
    - pytest-runner==5.1
    - python-dateutil==2.8.0
    - python-gitlab==1.10.0
    - pytz==2019.1
    - readme-renderer==24.0
    - recordtype==1.3
    - redis==3.3.2
    - requests==2.22.0
    - requests-toolbelt==0.9.1
    - screeninfo==0.4
    - six==1.12.0
    - svgutils==0.3.1
    - timeout-decorator==0.4.1
    - tinycss2==1.0.2
    - toml==0.10.0
    - tox==3.13.2
    - tqdm==4.32.2
    - twine==1.13.0
    - urllib3==1.25.3
    - ushlex==0.99.1
    - virtualenv==16.7.2
    - wcwidth==0.1.7
    - webencodings==0.5.1
    - xarray==0.12.3
    - zipp==0.5.2
prefix: /home/mohanty/anaconda3/envs/flatland-rl-test
from flatland.evaluators.client import FlatlandRemoteClient
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
import numpy as np

#####################################################################
# Instantiate a Remote Client
#####################################################################
remote_client = FlatlandRemoteClient()

#####################################################################
# Define your custom controller
#
# which can take an observation, and the number of agents and
# compute the necessary action for this step for all (or even some)
# of the agents
#####################################################################
def my_controller(obs, number_of_agents):
    _action = {}
    for _idx in range(number_of_agents):
        _action[_idx] = np.random.randint(0, 5)
    return _action

#####################################################################
# Instantiate your custom Observation Builder
#
# You can build your own Observation Builder by following
# the example here :
# https://gitlab.aicrowd.com/flatland/flatland/blob/master/flatland/envs/observations.py#L14
#####################################################################
my_observation_builder = TreeObsForRailEnv(
    max_depth=3,
    predictor=ShortestPathPredictorForRailEnv()
)

#####################################################################
# Main evaluation loop
#
# This iterates over an arbitrary number of env evaluations
#####################################################################
evaluation_number = 0
while True:
    evaluation_number += 1
    # Switch to a new evaluation environment
    #
    # a remote_client.env_create is similar to instantiating a
    # RailEnv and then doing an env.reset()
    # hence it returns the first observation from the
    # env.reset()
    #
    # You can also pass your custom observation_builder object
    # to allow you to have as much control as you wish
    # over the observation of your choice.
    observation = remote_client.env_create(
        obs_builder_object=my_observation_builder
    )
    if not observation:
        # If the remote_client returns False on an `env_create` call,
        # then it basically means that your agent has already been
        # evaluated on all the required evaluation environments,
        # and hence it's safe to break out of the main evaluation loop
        break

    print("Evaluation Number : {}".format(evaluation_number))

    #####################################################################
    # Access to a local copy of the environment
    #####################################################################
    # Note: You can access a local copy of the environment by using :
    #   remote_client.env
    #
    # But please ensure not to make any changes (or perform any action) on
    # the local copy of the env, as it will then diverge from
    # the state of the remote copy of the env, and the observations,
    # rewards, etc. will behave unexpectedly
    #
    # You can however probe the local_env instance to get any information
    # you need from the environment. It is a valid RailEnv instance.
    local_env = remote_client.env
    number_of_agents = len(local_env.agents)

    # Now we enter another loop where we compute the actions
    # for all the individual steps in this episode
    # until the episode is `done`
    #
    # An episode is considered done when either all the agents have
    # reached their target destinations,
    # or when the number of time steps has exceeded max_time_steps,
    # which is defined by :
    #
    #   max_time_steps = int(1.5 * (env.width + env.height))
    while True:
        #####################################################################
        # Evaluation of a single episode
        #####################################################################
        # Compute the action for this step by using the previously
        # defined controller
        action = my_controller(observation, number_of_agents)

        # Perform the chosen action on the environment.
        # The action gets applied to both the local and the remote copy
        # of the environment instance; the observation is what is
        # returned by the local copy of the env, while the rewards, done
        # and info are returned by the remote copy of the env
        observation, all_rewards, done, info = remote_client.env_step(action)
        if done['__all__']:
            print("Reward : ", sum(list(all_rewards.values())))
            # When done['__all__'] == True, the evaluation of this
            # particular Env instantiation is complete, and we can break
            # out of this loop and move on to the next Env evaluation
            break

print("Evaluation of all environments complete...")

########################################################################
# Submit your Results
#
# Please do not forget to include this call, as it triggers the
# final computation of the score statistics, video generation, etc.,
# and is necessary to have your submission marked as successfully evaluated
########################################################################
print(remote_client.submit())
#!/bin/bash
python ./run.py