diff --git a/README.md b/README.md
index 6b97890ba323502c4b94069b3f1812d7f9fda19b..339e674b91c59ab1cd669b7d41896116aa0fcaca 100644
--- a/README.md
+++ b/README.md
@@ -1,127 +1,233 @@
+# food-recognition-challenge-mmdetection-baseline
+![AIcrowd-Logo](https://raw.githubusercontent.com/AIcrowd/AIcrowd/master/app/assets/images/misc/aicrowd-horizontal.png)
 
-# MMDetection
+# Problem Statement
 
-**News**: We released the technical report on [ArXiv](https://arxiv.org/abs/1906.07155).
+The goal of this challenge is to train models which can look at images of food items and detect the individual food items present in them.
+We provide a novel dataset of food images collected through the MyFoodRepo project, where numerous volunteer Swiss users provide images of their daily food intake. The images have been hand-labelled by a group of experts to map the individual food items to an ontology of Swiss Food items.
 
-Documentation: https://mmdetection.readthedocs.io/
+This is an evolving dataset; we will release more data as it grows in size.
 
-## Introduction
+![image1](https://i.imgur.com/zS2Nbf0.png)
 
-The master branch works with **PyTorch 1.1** or higher.
+# Baseline
+MMDetection is an open source object detection toolbox based on PyTorch, with a large Model Zoo of customised models that can be plugged in and tested with just a single config file modification. You can read more about it at: [mmdetection github](https://github.com/open-mmlab/mmdetection/)
 
-mmdetection is an open source object detection toolbox based on PyTorch. It is
-a part of the open-mmlab project developed by [Multimedia Laboratory, CUHK](http://mmlab.ie.cuhk.edu.hk/).
+Follow the installation instructions as given in the above link.
+# Training with MMDetection:
+Let us look at training with MMDetection using Hybrid Task Cascade ([HTC research paper](https://arxiv.org/abs/1901.07518)).
 
-![demo image](demo/coco_test_12510.jpg)
+An AP_50 of 0.526 and an AR_50 of 0.729 can be achieved with Hybrid Task Cascade on a ResNet-50 backbone.
 
-### Major features
+MMDetection provides config files for HTC, available at [HTC config](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc).
 
-- **Modular Design**
+Also make sure you have downloaded the training data to a subfolder of your project.
 
-  We decompose the detection framework into different components and one can easily construct a customized object detection framework by combining different modules.
+Modify your config file and point your dataset variables to your data folder.
 
-- **Support of multiple frameworks out of box**
+As given in [MMDetection Getting Started](https://github.com/open-mmlab/mmdetection/blob/master/docs/GETTING_STARTED.md),
+you can use:
 
-  The toolbox directly supports popular and contemporary detection frameworks, *e.g.* Faster RCNN, Mask RCNN, RetinaNet, etc.
+    python tools/train.py ${CONFIG_FILE}
+to train the model on a single GPU or
 
-- **High efficiency**
+    ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
+to train the model on multiple GPUs.
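+
+For example, assuming you start from the HTC R-50 config shipped with MMDetection (the exact config file name may differ across versions), a single-GPU run could look like:
+
+    python tools/train.py configs/htc/htc_r50_fpn_1x.py
+
+By default, checkpoints and the json training log are written to the `work_dirs/` folder named in the config's `work_dir` setting.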
 
-  All basic bbox and mask operations run on GPUs now. The training speed is faster than or comparable to other codebases, including [Detectron](https://github.com/facebookresearch/Detectron), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
+Make sure you have edited the config file to point to your dataset and have updated the number of classes if you are using the dataloaders that ship with MMDetection. A sketch of the relevant overrides is shown below.
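+
+Below is a minimal sketch of the dataset-related overrides, assuming an MMDetection v1.x style config (for example a copy of the HTC R-50 config) and the `data/` directory layout described in the Dataset section further down; key names and required fields vary between MMDetection versions, so adapt it to the config you actually start from:
+
+```
+# Sketch only: dataset settings for an mmdetection v1.x style config file.
+dataset_type = 'CocoDataset'
+data_root = 'data/'  # folder into which the challenge archives were untarred
+
+data = dict(
+    imgs_per_gpu=2,
+    workers_per_gpu=2,
+    train=dict(
+        type=dataset_type,
+        ann_file=data_root + 'train/annotation.json',
+        img_prefix=data_root + 'train/images/'),
+    val=dict(
+        type=dataset_type,
+        ann_file=data_root + 'val/annotation.json',
+        img_prefix=data_root + 'val/images/'),
+    test=dict(
+        type=dataset_type,
+        ann_file=data_root + 'val/annotation.json',
+        img_prefix=data_root + 'val/images/'))
+
+# In mmdetection v1.x the background counts as a class, so with N food
+# categories every bbox/mask head in the model should use num_classes = N + 1.
+# The full HTC config also expects semantic (stuff) annotations; if you do not
+# have them, start from the htc_without_semantic variant of the config instead.
+```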
 
-- **State of the art**
+## Testing with MMDetection:
+To test your trained model with MMDetection, you can use the following commands:
+```
+# single-gpu testing
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
 
-  The toolbox stems from the codebase developed by the *MMDet* team, who won [COCO Detection Challenge](http://cocodataset.org/#detection-leaderboard) in 2018, and we keep pushing it forward.
+# multi-gpu testing
+./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
+```
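+
+For example, assuming the config used for training above and its work directory (file names are illustrative):
+
+```
+python tools/test.py configs/htc/htc_r50_fpn_1x.py work_dirs/htc_r50_fpn/latest.pth --out results.pkl --eval segm
+```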
 
-Apart from MMDetection, we also released a library [mmcv](https://github.com/open-mmlab/mmcv) for computer vision research, which is heavily depended on by this toolbox.
+## Log Analysis
 
-## License
+The training logs can be analyzed using the `plot_curve` utility provided with MMDetection (`tools/analyze_logs.py`):
 
-This project is released under the [Apache 2.0 license](LICENSE).
+```
+# Run this from the mmdetection root (e.g. in a Jupyter notebook).
+# plot_curve/load_json_logs live in tools/analyze_logs.py; adjust the import
+# and arguments if your mmdetection version organises them differently.
+import os
+from argparse import Namespace
+
+import matplotlib
+%matplotlib inline
+from tools.analyze_logs import load_json_logs, plot_curve
+
+matplotlib.rcParams['figure.figsize'] = [20, 10]
+
+# json log written during training (the timestamp will differ for your run)
+json_log = os.path.join(os.getcwd(), 'work_dirs/htc_r50_fpn/20191206_105437.log.json')
+
+# plot_curve expects an argparse-style namespace; keys and legend are lists
+args = Namespace(
+    keys=['segm_mAP_50'],
+    legend=['segm_mAP_50'],
+    backend=None,
+    style='dark',
+    json_logs=[json_log],
+    title='segm_mAP_50',
+    out=None)
+
+plot_curve(load_json_logs([json_log]), args)
+```
 
-## Changelog
 
-v1.0.0 was released in 30/1/2020, with more than 20 fixes and improvements.
-Please refer to [CHANGELOG.md](docs/CHANGELOG.md) for details and release history.
+# Installation
 
-## Benchmark and model zoo
+Ensure you have `docker` and `nvidia-docker` installed by following the instructions here:
 
-Supported methods and backbones are shown in the below table.
-Results and models are available in the [Model zoo](docs/MODEL_ZOO.md).
+* [Docker](https://docs.docker.com/install/)
+* [nvidia-docker](https://github.com/NVIDIA/nvidia-docker)
+
+**NOTE**: You do not need `nvidia-docker` if you do not want to use a GPU when testing your submission locally.
 
-|                    | ResNet   | ResNeXt  | SENet    | VGG      | HRNet |
-|--------------------|:--------:|:--------:|:--------:|:--------:|:-----:|
-| RPN                | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Fast R-CNN         | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Faster R-CNN       | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Mask R-CNN         | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Cascade R-CNN      | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Cascade Mask R-CNN | ✓        | ✓        | ☐        | ✗        | ✓     |
-| SSD                | ✗        | ✗        | ✗        | ✓        | ✗     |
-| RetinaNet          | ✓        | ✓        | ☐        | ✗        | ✓     |
-| GHM                | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Mask Scoring R-CNN | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Double-Head R-CNN  | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Grid R-CNN (Plus)  | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Hybrid Task Cascade| ✓        | ✓        | ☐        | ✗        | ✓     |
-| Libra R-CNN        | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Guided Anchoring   | ✓        | ✓        | ☐        | ✗        | ✓     |
-| FCOS               | ✓        | ✓        | ☐        | ✗        | ✓     |
-| RepPoints          | ✓        | ✓        | ☐        | ✗        | ✓     |
-| Foveabox           | ✓        | ✓        | ☐        | ✗        | ✓     |
-| FreeAnchor         | ✓        | ✓        | ☐        | ✗        | ✓     |
-| NAS-FPN            | ✓        | ✓        | ☐        | ✗        | ✓     |
-| ATSS               | ✓        | ✓        | ☐        | ✗        | ✓     |
+```
+git clone http://gitlab.aicrowd.com/nikhil_rayaprolu/food-round2.git
+cd food-round2
+pip3 install -r requirements.txt
+pip3 install git+https://github.com/AIcrowd/coco.git
+```
 
-Other features
-- [x] DCNv2
-- [x] Group Normalization
-- [x] Weight Standardization
-- [x] OHEM
-- [x] Soft-NMS
-- [x] Generalized Attention
-- [x] GCNet
-- [x] Mixed Precision (FP16) Training
-- [x] [InstaBoost](configs/instaboost/README.md)
+# Dataset
 
+The dataset for the [AIcrowd Food Recognition Challenge](https://www.aicrowd.com/challenges/food-recognition-challenge) is available at [https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files](https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files)
 
-## Installation
+This dataset contains:
+* `train-v0.2.tar.gz`: This is the Training Set of **7949** food images (RGB), along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
 
-Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.
+* `val-v0.2.tar.gz`: This is the suggested Validation Set of **418** food images (RGB), along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
 
+* `test_images-v0.2.tar.gz`: This is the debug Test Set for Round-1, which contains the same images as the validation set.
 
-## Get Started
+To get started, we would advise you to download all the files and untar them inside the `data/` folder of this repository, so that you have a directory structure like this:
 
-Please see [GETTING_STARTED.md](docs/GETTING_STARTED.md) for the basic usage of MMDetection.
+```bash
+|-- data/
+|   |-- test_images/ (has all images for prediction) (**NOTE**: they are the same as the validation set images)
+|   |-- train/
+|   |   |-- images/ (has all the images for training)
+|   |   |__ annotation.json : Annotation of the data in MS COCO format
+|   |   |__ annotation-small.json : Smaller version of the above annotation file, useful for quick experiments
+|   |-- val/
+|   |   |-- images/ (has all the images for validation)
+|   |   |__ annotation.json : Annotation of the data in MS COCO format
+|   |   |__ annotation-small.json : Smaller version of the above annotation file, useful for quick experiments
+```
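+
+Since the `num_classes` setting in your MMDetection config depends on how many food categories the annotations contain, it can be handy to check this directly with `pycocotools` (installed via the commands above); a minimal sketch, assuming the layout shown:
+
+```
+from pycocotools.coco import COCO
+
+# Load the training annotations and count the food categories
+coco = COCO('data/train/annotation.json')
+num_categories = len(coco.getCatIds())
+print('food categories:', num_categories)
+# For mmdetection v1.x style configs, use num_classes = num_categories + 1 (background).
+```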
 
-## Contributing
+We are also assuming that you have already installed all the requirements; if not, you can still install them with the commands from the Installation section above.
 
-We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.
+# Usage
 
-## Acknowledgement
+**TODO** : @Nikhil : Please add some Usage instructions for people to use this codebase locally. 
 
-MMDetection is an open source project that is contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedbacks.
-We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors.
+## Other Associated Notebooks
 
+* [Dataset Utils](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb)
+  * [Import Dependencies](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Import-dependencies)
+  * [Configuration Variables](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Configuration-Variables)
+  * [Parsing Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Parsing-the-annotations)
+  * [Collecting and Visualizing Images](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Collecting-and-Visualizing-Images)
+  * [Understanding Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Understanding-Annotations)
+  * [Visualizing Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Visualizing-Annotations)
+  * [Advanced](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Advanced)
+    * [Convert poly segmentation to rle](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#1.-Convert-poly-segmentation-to-rle)
+    * [Convert segmentation to pixel level masks](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#2.-Convert-segmentation-to-pixel-level-masks)
+* [Random Submission](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/run.py)
+* [Locally test the evaluation function](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Local%20Evaluation.ipynb)   
 
-## Citation
+## Other Baselines
+### Round 1
+* [Colab Notebook for Data Analysis and Tutorial](https://colab.research.google.com/drive/1A5p9GX5X3n6OMtLjfhnH6Oeq13tWNtFO#scrollTo=ok54AWT_VoWV)
+A notebook with data analysis on the Food Recognition Dataset followed by a short tutorial on training with Keras and PyTorch. This lets you jump right into the challenge.
+#### Pretrained Baselines
+* [mmdetection (pytorch)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline)  
+* [matterport-maskrcnn (keras - tensorflow)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-recognition)
 
-If you use this toolbox or benchmark in your research, please cite this project.
+### Round 2
+* [Colab Notebook for Data Analysis and Tutorial](https://colab.research.google.com/drive/1vXdv9quZ7CXO5lLCjhyz3jtejRzDq221)
+A notebook with data analysis on the Food Recognition Dataset followed by a short tutorial on training with Keras and PyTorch. This lets you jump right into the challenge.
+#### Pretrained Baselines
+* [mmdetection (pytorch)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-round2)  
+
+# Submission Instructions
+
+To submit to the challenge, you'll need to set up an appropriate repository structure, create a private git repository at https://gitlab.aicrowd.com with the contents of your submission, and push a git tag corresponding to the version of your repository you'd like to submit.
+
+## Repository Structure
+We have created this sample submission repository, which you can use as a reference.
+
+#### aicrowd.json
+Each repository should have an `aicrowd.json` file with the following fields:
 
 ```
-@article{mmdetection,
-  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
-  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
-             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
-             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
-             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
-             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
-             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
-  journal= {arXiv preprint arXiv:1906.07155},
-  year={2019}
+{
+    "challenge_id" : "aicrowd-food-recognition-challenge",
+    "grader_id": "aicrowd-food-recognition-challenge",
+    "authors" : ["aicrowd-user"],
+    "description" : "Food Recognition Challenge Submission",
+    "license" : "MIT",
+    "gpu": true
 }
 ```
+This file is used to identify your submission as a part of the Food Recognition Challenge. You must use the `challenge_id` and `grader_id` specified above in your submission. The `gpu` key in `aicrowd.json` lets you specify whether your submission requires a GPU; if it is set to `true`, an NVIDIA K80 will be made available to your submission during evaluation.
+
+#### Submission environment configuration
+You can specify the software runtime of your code by modifying the included [Dockerfile](Dockerfile). 
+
+#### Code Entrypoint
+The evaluator will use `/home/aicrowd/run.sh` as the entrypoint. Please remember to have a `run.sh` at the root of your repository which sets up any necessary environment variables and executes your code. This repository includes a sample `run.sh` file; a minimal sketch of the idea is shown below.
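+
+For illustration only, a minimal `run.sh` could look like the sketch below; `predict.py` is a hypothetical placeholder for your inference script, so refer to the sample `run.sh` shipped with this repository for the actual entrypoint:
+
+```
+#!/bin/bash
+# Hypothetical sketch: set up any environment variables your code needs,
+# then run the script that produces the predictions.
+export PYTHONUNBUFFERED=1
+python predict.py
+```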
+
+### Local Debug
+
+```
+export TEST_IMAGES_PATH="../data/test_images"  # or path to your local folder containing images
+export IMAGE_NAME="aicrowd-food-recognition-challenge-submission"
+
+./build.sh
+./debug.sh
+
+######################################
+## NOTE : 
+## 
+## * If you do not wish to use a GPU when testing locally, please feel free to replace nvidia-docker with docker
+##
+## * If you want to test on images located at an alternate location, set the `TEST_IMAGES_PATH` environment variable accordingly before running `build.sh` and `debug.sh`.
+```
+
+### Submitting 
+To make a submission, you will have to create a private repository on [https://gitlab.aicrowd.com](https://gitlab.aicrowd.com).
+
+You will have to add your SSH Keys to your GitLab account by following the instructions [here](https://docs.gitlab.com/ee/gitlab-basics/create-your-ssh-keys.html).
+If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
+
+Then you can create a submission by making a *tag push* to your repository, adding the correct git remote and pushing to the remote:
+
+```
+git clone https://gitlab.aicrowd.com/nikhil_rayaprolu/food-round2
+cd food-round2
+
+# Add AICrowd git remote endpoint
+git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline.git
+git push aicrowd master
+
+# Create a tag for your submission and push
+git tag -am "submission-v0.1" submission-v0.1
+git push aicrowd master
+git push aicrowd submission-v0.1
+
+# Note : If the contents of your repository (latest commit hash) does not change, 
+# then pushing a new tag will not trigger a new evaluation.
+```
+You should now be able to see the details of your submission at:
+[gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline/issues](https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline/issues)
+
+
+**Best of Luck**
+
+## Miscellaneous Resources
+* [Convert Annotations from MS COCO format to PascalVOC format](https://github.com/CasiaFan/Dataset_to_VOC_converter/blob/master/anno_coco2voc.py)
 
+## Credits
+* Parts of the documentation for this baseline were taken from: https://github.com/AIcrowd/food-recognition-challenge-starter-kit
+* **TODO** @Nikhil : Please add relevant credits for the mmdetection codebases that you used as a reference.
 
-## Contact
+# Author   
+**[Nikhil Rayaprolu](mailto:nikhil@aicrowd.com)**
 
-This repo is currently maintained by Kai Chen ([@hellock](http://github.com/hellock)), Yuhang Cao ([@yhcao6](https://github.com/yhcao6)), Wenwei Zhang ([@ZwwWayne](https://github.com/ZwwWayne)), Jiangmiao Pang ([@OceanPang](https://github.com/OceanPang)) and Jiaqi Wang ([@myownskyW7](https://github.com/myownskyW7)).