From e0c65faf74e1e3ae746caa192941d37b57137d0a Mon Sep 17 00:00:00 2001
From: eric_a_scuccimarra <skooch@gmail.com>
Date: Fri, 5 Mar 2021 15:32:21 +0000
Subject: [PATCH] Update README.md

---
 README.md | 261 +-----------------------------------------------------
 1 file changed, 3 insertions(+), 258 deletions(-)

diff --git a/README.md b/README.md
index d99fca6..8ac1e12 100644
--- a/README.md
+++ b/README.md
@@ -13,264 +13,9 @@ This is an evolving dataset, where we will release more data as the dataset grow
 # Baseline
 MMDetection is an open source object detection toolbox based on PyTorch, with a large Model Zoo of customised models that can be plugged in and tested with a single config file modification. You can read more about it at: [mmdetection github](https://github.com/open-mmlab/mmdetection/)
 
+# Code
 
-
-# Installation
-[MMDetection Installation instructions](https://github.com/open-mmlab/mmdetection/blob/master/docs/INSTALL.md)
-Follow the installation instructions as given in the above link.
-
-Ensure you have `docker` and `nvidia-docker` installed by following the instructions here : 
-
-* [Docker](https://docs.docker.com/install/)
-* [nvidia-docker](https://github.com/NVIDIA/nvidia-docker)
-**NOTE** : You do not need nvidia-docker if you do not want to use a GPU when testing your submission locally
-
-```
-git clone http://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline.git
-cd food-pytorch-baseline
-pip3 install -r requirements.txt
-pip3 install git+https://github.com/AIcrowd/coco.git
-```
-
-# Dataset
-
-The dataset for the [AIcrowd Food Recognition Challenge](https://www.aicrowd.com/challenges/food-recognition-challenge) is available at [https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files](https://www.aicrowd.com/challenges/food-recognition-challenge/dataset_files)
-
-This dataset contains:
-* `train-v0.2.tar.gz` : The Training Set of **7949** food images (as RGB), along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
-
-* `val-v0.2.tar.gz` : The suggested Validation Set of **418** food images (as RGB), along with their corresponding annotations in [MS-COCO format](http://cocodataset.org/#home)
-
-* `test_images-v0.2.tar.gz` : This is the debug Test Set for Round-1, where you are provided the same images as the validation set.
-
-To get started, we would advise you to download all the files, and untar them inside the `data/` folder of this repository, so that you have a directory structure like this : 
-
-```bash
-|-- data/
-|   |-- test_images/ (has all images for prediction) (**NOTE** : they are the same as the validation set images)
-|   |-- train/
-|   |   |-- images (has all the images for training)
-|   |   |__ annotation.json : Annotation of the data in MS COCO format
-|   |   |__ annotation-small.json : Smaller version of the previous dataset
-|   |-- val/
-|   |   |-- images (has all the images for validation)
-|   |   |__ annotation.json : Annotation of the data in MS COCO format
-|   |   |__ annotation-small.json : Smaller version of the previous dataset
-```
-
-We also assume that you have already installed all the requirements for this notebook, as described in the Installation section above.
-
-# Usage
-
-# Training with MMDetection:
-Let us look at training MMDetection using Hybrid Task Cascade ([HTC research paper](https://arxiv.org/abs/1901.07518)).
-
-An AP_50 of 0.526 and an AR_50 of 0.729 can be achieved with Hybrid Task Cascade with a ResNet-50 backbone.
-
-MMDetection provides us with a config file especially for HTC, available at [HTC config](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc)
-
-Also make sure you have downloaded the training data to a subfolder of your project.
-
-Modify your config file and point your dataset variables to your data folder.
-
-As given in [MMDetection Getting Started](https://github.com/open-mmlab/mmdetection/blob/master/docs/GETTING_STARTED.md), you can use:
-
-    python tools/train.py ${CONFIG_FILE}
-to train the model on a single GPU or
-
-    ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
-to train the model on multiple GPUs.
-
-Make sure you have edited the config file to point to the dataset, and that you have changed the number of classes if you are using the dataloader from MMDetection.
-
-## Testing with MMDetection:
-To test your results with MMDetection, 
-you can use the commands:
-```
-# single-gpu testing
-python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [--show]
-
-# multi-gpu testing
-./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}]
-```
-
-**Log Analysis**
-
-The training logs can be analyzed using the `plot_curve` utility provided with MMDetection:
-
-```
-import os
-from types import SimpleNamespace
-import matplotlib
-%matplotlib inline
-from tools.analyze_logs import plot_curve
-
-matplotlib.rcParams['figure.figsize'] = [20, 10]
-
-# Path to the JSON training log written by MMDetection
-log_file = os.path.join(os.getcwd(), 'work_dirs/htc_r50_fpn/20191206_105437.log.json')
-
-# plot_curve expects an object with attribute access, not a plain dict
-args = SimpleNamespace(
-    keys=['segm_mAP_50'],
-    legend=['segm_mAP_50'],
-    backend=None,
-    json_logs=[log_file],
-    title='loss',
-)
-plot_curve([log_file], args)
-```
-
-
-## Other Associated Notebooks
-
-* [Dataset Utils](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb)
-  * [Import Dependencies](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Import-dependencies)
-  * [Configuration Variables](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Configuration-Variables)
-  * [Parsing Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Parsing-the-annotations)
-  * [Collecting and Visualizing Images](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Collecting-and-Visualizing-Images)
-  * [Understanding Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Understanding-Annotations)
-  * [Visualizing Annotations](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Visualizing-Annotations)
-  * [Advanced](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#Advanced)
-    * [Convert poly segmentation to rle](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#1.-Convert-poly-segmentation-to-rle)
-    * [Convert segmentation to pixel level masks](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Dataset%20Utils.ipynb#2.-Convert-segmentation-to-pixel-level-masks)
-* [Random Submission](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/run.py)
-* [Locally test the evaluation function](https://github.com/AIcrowd/food-recognition-challenge-starter-kit/blob/master/Local%20Evaluation.ipynb)   
-
-## Other Baselines
-# Round 1
-* [Colab Notebook for Data Analysis and Tutorial](https://colab.research.google.com/drive/1A5p9GX5X3n6OMtLjfhnH6Oeq13tWNtFO#scrollTo=ok54AWT_VoWV)
-A notebook with data analysis on the Food Recognition Dataset, followed by a short tutorial on training with Keras and PyTorch, so you can jump straight into the challenge.
-### Pretrained Baselines
-* [mmdetection (pytorch)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline)  
-* [matterport-maskrcnn (keras - tensorflow)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-recognition)
-
-# Round 2
-* [Colab Notebook for Data Analysis and Tutorial](https://colab.research.google.com/drive/1vXdv9quZ7CXO5lLCjhyz3jtejRzDq221)
-A notebook with data analysis on the Food Recognition Dataset, followed by a short tutorial on training with Keras and PyTorch, so you can jump straight into the challenge.
-### Pretrained Baselines
-* [mmdetection (pytorch)](https://gitlab.aicrowd.com/nikhil_rayaprolu/food-round2)  
-
-# Submission Instructions
-
-To submit to the challenge you'll need to ensure you've set up an appropriate repository structure, create a private git repository at https://gitlab.aicrowd.com with the contents of your submission, and push a git tag corresponding to the version of your repository you'd like to submit.
-
-## Repository Structure
-We have created this sample submission repository which you can use as reference.
-
-#### aicrowd.json
-Each repository should have an `aicrowd.json` file with the following fields:
-
-```
-{
-    "challenge_id" : "aicrowd-food-recognition-challenge",
-    "grader_id": "aicrowd-food-recognition-challenge",
-    "authors" : ["aicrowd-user"],
-    "description" : "Food Recognition Challenge Submission",
-    "license" : "MIT",
-    "gpu": true
-}
-```
-This file is used to identify your submission as part of the Food Recognition Challenge. You must use the `challenge_id` and `grader_id` specified above in your submission. The `gpu` key in `aicrowd.json` lets you specify whether your submission requires a GPU; if it does, an NVIDIA K80 will be made available when evaluating your submission.
-
-#### Submission environment configuration
-You can specify the software runtime of your code by modifying the included [Dockerfile](Dockerfile). 
-
-#### Code Entrypoint
-The evaluator will use `/home/aicrowd/run.sh` as the entrypoint. Please remember to have a `run.sh` at the root which can instantiate any necessary environment variables and execute your code. This repository includes a sample `run.sh` file.
-
-### Local Debug
-
-```
-export TEST_IMAGES_PATH="../data/test_images"  # or path to your local folder containing images
-export IMAGE_NAME="aicrowd-food-recognition-challenge-submission"
-
-./build.sh
-./debug.sh
-
-######################################
-## NOTE : 
-## 
-## * If you do not wish to use a GPU when testing locally, please feel free to replace nvidia-docker with docker
-##
-## * If you want to test on images located at an alternate location, set the `TEST_IMAGES_PATH` environment variable accordingly before running `build.sh` and `debug.sh`.
-```
-
-### Submitting 
-To make a submission, you will have to create a private repository on [https://gitlab.aicrowd.com](https://gitlab.aicrowd.com).
-
-You will have to add your SSH Keys to your GitLab account by following the instructions [here](https://docs.gitlab.com/ee/gitlab-basics/create-your-ssh-keys.html).
-If you do not have SSH Keys, you will first need to [generate one](https://docs.gitlab.com/ee/ssh/README.html#generating-a-new-ssh-key-pair).
-
-Then you can create a submission by making a *tag push* to your repository, adding the correct git remote and pushing to the remote:
-
-```
-git clone https://gitlab.aicrowd.com/nikhil_rayaprolu/food-pytorch-baseline
-cd food-pytorch-baseline
-
-# Add AICrowd git remote endpoint
-git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline.git
-git push aicrowd master
-
-# Create a tag for your submission and push
-git tag -am "submission-v0.1" submission-v0.1
-git push aicrowd master
-git push aicrowd submission-v0.1
-
-# Note : If the contents of your repository (latest commit hash) does not change, 
-# then pushing a new tag will not trigger a new evaluation.
-```
-You now should be able to see the details of your submission at : 
-[gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline/issues](gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/food-challenge-pytorch-baseline/issues)
-
-
-## Using HTTPS instead of SSH (Personal Access Token):
-
-In order to use HTTPS to clone repositories and submit on GitLab:
-
-a) Create a personal access token
-
-1.  Log in to GitLab.
-    
-2.  In the upper-right corner, click your avatar and select Settings.
-    
-3.  On the User Settings menu, select Access Tokens.
-    
-4.  Choose a name and optional expiry date for the token.
-    
-5.  Choose the desired scopes.
-    
-6.  Click the Create personal access token button.
-    
-7.  Save the personal access token somewhere safe; let's call it XXX for now.
-    
-
-Once you leave or refresh the page, you won’t be able to access it again.
-
-b) To clone a repo, use the following command:
-
-```
-git clone https://oauth2:XXX@gitlab.aicrowd.com/(username)/(repo_name).git
-```
-
-c) Submit a solution:
-```
-# cd into your submission repo on gitlab
-cd (repo_name)
-
-# Add AICrowd git remote endpoint
-git remote add aicrowd https://oauth2:XXX@gitlab.aicrowd.com/(username)/(repo_name).git
-git push aicrowd master
-
-# Create a tag for your submission and push
-git tag -am "submission-v0.1" submission-v0.1
-git push aicrowd master
-git push aicrowd submission-v0.1
-
-# Note : If the contents of your repository (latest commit hash) does not change,
-# then pushing a new tag will not trigger a new evaluation.
-```
+This repo contains the source code used to train my best submission for Round 4. The submission was a two-model ensemble of DetectoRS and HTC x101. 
 
 
 **Best of Luck**
@@ -283,5 +28,5 @@ git push aicrowd submission-v0.1
 * and the baseline is built using MMDetection: https://github.com/open-mmlab/mmdetection/
 
 # Author   
-**[Nikhil Rayaprolu](nikhil@aicrowd.com)**
+**[Eric Antoine Scuccimarra](skooch@gmail.com)**
 
-- 
GitLab