This is an evolving dataset, where we will release more data as the dataset grows.
# Baseline
MMDetection is an open-source object detection toolbox based on PyTorch, with a large Model Zoo of customised models that can be plugged in and tested with just a single config file modification. You can read more about it at: [mmdetection github](https://github.com/open-mmlab/mmdetection/)
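To give a feel for that plug-and-play design, here is a minimal sketch of the high-level inference API (v1.x-style; the config, checkpoint, and image paths are placeholders, not files shipped with this repository):
```python
# A sketch of MMDetection's high-level API (v1.x-style).
# Swapping models only means pointing at a different config/checkpoint pair;
# the paths below are placeholders, not files shipped with this repo.
from mmdet.apis import init_detector, inference_detector

config = 'configs/htc/htc_r50_fpn_1x.py'         # assumed config path
checkpoint = 'work_dirs/htc_r50_fpn/latest.pth'  # assumed checkpoint path

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'data/test_images/example.jpg')  # placeholder image
```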
# Installation
[MMDetection Installation instructions](https://github.com/open-mmlab/mmdetection/blob/master/docs/INSTALL.md)
Follow the installation instructions given in the above link.
Ensure you have `docker` and `nvidia-docker` installed by following the instructions here:
* [Docker](https://docs.docker.com/install/)
* [nvidia-docker](https://github.com/NVIDIA/nvidia-docker)
**NOTE**: You do not need nvidia-docker if you do not plan to use a GPU when testing your submission locally.
```
git clone http://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline.git
cd food-pytorch-baseline
pip3 install -r requirements.txt
pip3 install git+https://github.com/AIcrowd/coco.git
```
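After installing, a quick import check (a minimal sketch, nothing repository-specific) confirms the key packages are in place:
```python
# Minimal sanity check that the key packages installed correctly.
import torch
import mmdet        # MMDetection
import pycocotools  # COCO tools (the AIcrowd fork installed above)

print('torch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmdet', mmdet.__version__)
```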
# Dataset
The dataset for the [AIcrowd Food Recognition Challenge](https://www.aicrowd.com/challenges/food-recognition-challenge) is available at [https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files](https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files)
This dataset contains:
* `train-v0.2.tar.gz`: the training set of **7949** RGB food images, along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
* `val-v0.2.tar.gz`: the suggested validation set of **418** RGB food images, along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
* `test_images-v0.2.tar.gz`: the debug test set for Round-1, which contains the same images as the validation set
To get started, we advise you to download all the files and untar them inside the `data/` folder of this repository, so that you have a directory structure like this:
```bash
|-- data/
|   |-- test_images/ (has all images for prediction; **NOTE**: they are the same as the validation set images)
|   |-- train/
|   |   |-- images/ (has all the images for training)
|   |   |-- annotation.json (annotations of the training data in MS COCO format)
|   |   |-- annotation-small.json (a smaller version of the training annotations)
|   |-- val/
|   |   |-- images/ (has all the images for validation)
|   |   |-- annotation.json (annotations of the validation data in MS COCO format)
|   |   |-- annotation-small.json (a smaller version of the validation annotations)
```
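If you prefer to script this step, here is a minimal sketch (assuming the three archives have already been downloaded into `data/`; whether each tarball unpacks directly into `train/`, `val/`, and `test_images/` is an assumption worth verifying on your copy):
```python
# Extract the downloaded archives into data/ to obtain the layout above.
# Assumes the three tarballs were downloaded into data/ first; verify the
# top-level folder names inside each archive on your own copy.
import tarfile
from pathlib import Path

data_dir = Path('data')
for name in ['train-v0.2.tar.gz', 'val-v0.2.tar.gz', 'test_images-v0.2.tar.gz']:
    with tarfile.open(data_dir / name) as tar:
        tar.extractall(path=data_dir)
    print('extracted', name)
```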
We also assume that you have already installed all the requirements for this notebook; if not, you can install them with the `pip3 install` commands from the Installation section above.
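Since the annotations are in MS-COCO format, you can inspect them with the COCO API installed above (a minimal sketch; the printed counts depend on the dataset version):
```python
# Inspect the training annotations with the COCO API.
from pycocotools.coco import COCO

coco = COCO('data/train/annotation.json')
print(len(coco.getImgIds()), 'images,', len(coco.getCatIds()), 'food categories')

# Look at the annotations attached to one image (IDs are dataset-specific).
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
print('image', img_id, 'has', len(anns), 'segmentation annotations')
```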
# Usage
# Training with MMDetection
Let us look at training MMDetection using Hybrid Task Cascade ([HTC research paper](https://arxiv.org/abs/1901.07518)).
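Below is a minimal sketch of what such a run looks like through the v1.x-style Python API. The config path is the stock HTC R-50 config from the MMDetection repository; adapting its dataset paths and class count to the food dataset is an assumed, separate config edit, and the exact `train_detector` signature varies between MMDetection versions (check `tools/train.py` in your checkout):
```python
# Train HTC through the MMDetection Python API (v1.x-style) -- a sketch;
# the usual command-line route is `python tools/train.py <config>`.
from mmcv import Config
from mmdet.models import build_detector
from mmdet.datasets import build_dataset
from mmdet.apis import train_detector

cfg = Config.fromfile('configs/htc/htc_r50_fpn_1x.py')  # stock HTC config; adapt dataset paths/classes
cfg.work_dir = './work_dirs/htc_r50_fpn'                # logs and checkpoints land here
cfg.gpus = 1

model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
dataset = build_dataset(cfg.data.train)
train_detector(model, dataset, cfg, distributed=False, validate=True)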
Once training starts, MMDetection writes a timestamped json log (e.g. `work_dirs/htc_r50_fpn/20191206_105437.log.json`) that the baseline feeds to `plot_curve` to visualise training.
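A self-contained sketch of that plotting step (assuming only the json-lines log format MMDetection emits; substitute your own timestamped log file):
```python
# Plot the training loss from MMDetection's json-lines log -- a sketch.
# Each line of the log is a json dict; train entries carry a 'loss' key.
import json
import matplotlib.pyplot as plt

log_file = 'work_dirs/htc_r50_fpn/20191206_105437.log.json'  # substitute your run's log

steps, losses = [], []
with open(log_file) as f:
    for line in f:
        entry = json.loads(line)
        if entry.get('mode') == 'train' and 'loss' in entry:
            steps.append(len(steps))  # running step index (the 'iter' field resets each epoch)
            losses.append(entry['loss'])

plt.plot(steps, losses)
plt.xlabel('training step')
plt.ylabel('loss')
plt.title('HTC R-50 FPN training loss')
plt.savefig('loss_curve.png')
```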
## Other Associated Notebooks
* [Dataset Utils](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb)
## Credits
* Parts of the documentation for this baseline were taken from: https://github.com/AIcrowd/food-recognition-challenge-starter-kit
* The baseline is built using MMDetection: https://github.com/open-mmlab/mmdetection/
# Author
**[Nikhil Rayaprolu](mailto:nikhil@aicrowd.com)**