*.pth filter=lfs diff=lfs merge=lfs -text
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at chenkaidev@gmail.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
# Contributing to mmdetection
All kinds of contributions are welcome, including but not limited to the following.
- Fixes (typo, bugs)
- New features and components
## Workflow
1. Fork and pull the latest mmdetection.
2. Check out a new branch (do not use the master branch for PRs).
3. Commit your changes.
4. Create a PR (see the example commands below).
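A minimal example of these steps on the command line (`YOUR_USERNAME` and the branch name `my-fix` are placeholders):
```
git clone https://github.com/YOUR_USERNAME/mmdetection.git
cd mmdetection
git remote add upstream https://github.com/open-mmlab/mmdetection.git
git pull upstream master
git checkout -b my-fix
# edit files, then
git commit -am "Describe your change"
git push origin my-fix
# finally, open a pull request from your branch on GitHub
```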
Note
- If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
- If you are the author of some papers and would like to include your method in mmdetection,
please contact Kai Chen (chenkaidev[at]gmail[dot]com). We would much appreciate your contribution.
## Code style
### Python
We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
We use the following tools for linting and formatting:
- [flake8](http://flake8.pycqa.org/en/latest/): linter
- [yapf](https://github.com/google/yapf): formatter
- [isort](https://github.com/timothycrosley/isort): sort imports
Style configurations of yapf and isort can be found in [.style.yapf](../.style.yapf) and [.isort.cfg](../.isort.cfg).
We use a [pre-commit hook](https://pre-commit.com/) that checks and formats code with `flake8`, `yapf`, and `isort`, fixes trailing whitespace and
end-of-file issues, and sorts `requirements.txt` automatically on every commit.
The config for the pre-commit hook is stored in [.pre-commit-config](../.pre-commit-config.yaml).
After you clone the repository, you will need to install and initialize the pre-commit hook.
```
pip install -U pre-commit
```
Then, from the repository folder, run
```
pre-commit install
```
After this, the code linters and formatter will be enforced on every commit.
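If you want to run all hooks against the files already in the repository (for example before your first PR), you can run:
```
pre-commit run --all-files
```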
>Before you create a PR, make sure that your code lints and is formatted by yapf.
### C++ and CUDA
We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
blank_issues_enabled: false
---
name: Error report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.
**Describe the bug**
A clear and concise description of what the bug is.
**Reproduction**
1. What command or script did you run?
```
A placeholder for the command.
```
2. Did you make any modifications to the code or config? Do you understand what you have modified?
3. What dataset did you use?
**Environment**
1. Please run `python tools/collect_env.py` to collect the necessary environment information and paste it here.
2. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error traceback here.
```
A placeholder for the traceback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Describe the feature**
**Motivation**
A clear and concise description of the motivation of the feature.
Ex1. It is inconvenient when [....].
Ex2. There is a recent paper [....], which is very helpful for [....].
**Related resources**
If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
**Additional context**
Add any other context or screenshots about the feature request here.
If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.
---
name: General questions
about: Ask general questions to get help
title: ''
labels: ''
assignees: ''
---
.mypy_cache/
# cython generated cpp
mmdet/ops/nms/*.cpp
mmdet/ops/nms/src/soft_nms_cpu.cpp
mmdet/version.py
data
.vscode
.idea
# custom
*.pkl
*.pkl.json
*.log.json
work_dirs/
[isort]
line_length = 79
multi_line_output = 0
known_standard_library = setuptools
known_first_party = mmdet
known_third_party = Cython,asynctest,cv2,matplotlib,mmcv,numpy,pycocotools,robustness_eval,roi_align,roi_pool,seaborn,six,terminaltables,torch,torchvision
no_lines_before = STDLIB,LOCALFOLDER
default_section = THIRDPARTY
repos:
- repo: https://github.com/asottile/seed-isort-config
rev: v1.9.3
hooks:
- id: seed-isort-config
- repo: https://github.com/pre-commit/mirrors-isort
rev: v4.3.21
hooks:
- id: isort
- repo: https://github.com/pre-commit/mirrors-yapf
rev: v0.29.0
hooks:
- id: yapf
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.4.0
hooks:
- id: flake8
- id: trailing-whitespace
- id: check-yaml
- id: end-of-file-fixer
- id: requirements-txt-fixer
[style]
BASED_ON_STYLE = pep8
BLANK_LINE_BEFORE_NESTED_CLASS_OR_DEF = true
SPLIT_BEFORE_EXPRESSION_AFTER_OPENING_PAREN = true
dist: trusty
dist: bionic # ubuntu 18.04
language: python
install:
- pip install flake8
python:
- "3.5"
- "3.6"
- "3.7"
env: CUDA=10.1.105-1 CUDA_SHORT=10.1 UBUNTU_VERSION=ubuntu1804 FORCE_CUDA=1
cache: pip
# Ref to CUDA installation in Travis: https://github.com/jeremad/cuda-travis
before_install:
- INSTALLER=cuda-repo-${UBUNTU_VERSION}_${CUDA}_amd64.deb
- wget http://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/${INSTALLER}
- sudo dpkg -i ${INSTALLER}
- wget https://developer.download.nvidia.com/compute/cuda/repos/${UBUNTU_VERSION}/x86_64/7fa2af80.pub
- sudo apt-key add 7fa2af80.pub
- sudo apt update -qq
- sudo apt install -y cuda-${CUDA_SHORT/./-} cuda-cufft-dev-${CUDA_SHORT/./-}
- sudo apt clean
- CUDA_HOME=/usr/local/cuda-${CUDA_SHORT}
- LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${CUDA_HOME}/include:${LD_LIBRARY_PATH}
- PATH=${CUDA_HOME}/bin:${PATH}
install:
- pip install Pillow==6.2.2 # remove this line when torchvision>=0.5
- pip install Cython torch==1.2 torchvision==0.4.0 # TODO: fix CI for pytorch>1.2
- pip install "git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI"
- pip install -r requirements.txt
before_script:
- flake8 .
- isort -rc --check-only --diff mmdet/ tools/ tests/
- yapf -r -d --style .style.yapf mmdet/ tools/ tests/ configs/
script:
- flake8
- python setup.py check -m -s
- python setup.py build_ext --inplace
- coverage run --source mmdet -m py.test -v --xdoctest-modules tests mmdet
after_success:
- coverage report
ARG PYTORCH="1.1.0"
ARG CUDA="10.0"
ARG CUDNN="7.5"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-get update && apt-get install -y libglib2.0-0 libsm6 libxrender-dev libxext6 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
build-essential \
bzip2 \
cmake \
curl \
git \
g++ \
libboost-all-dev \
pkg-config \
rsync \
software-properties-common \
sudo \
tar \
timidity \
unzip \
wget \
locales \
zlib1g-dev \
python3-dev \
python3 \
python3-pip \
python3-tk \
libjpeg-dev \
libpng-dev
# Python3
RUN pip3 install pip --upgrade
RUN pip3 install utm cython aicrowd_api timeout_decorator \
numpy \
aicrowd-repo2docker \
pillow
RUN pip3 install git+https://github.com/AIcrowd/coco.git#subdirectory=PythonAPI
RUN conda install cython -y && conda clean --all
RUN git clone --branch v1.0rc1 https://github.com/open-mmlab/mmdetection.git /mmdetection
WORKDIR /mmdetection
RUN pip install --no-cache-dir -e .
RUN python3.6 -m pip install aicrowd_api aicrowd-repo2docker
# Unicode support:
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Enables X11 sharing and creates user home directory
ENV USER_NAME aicrowd
ENV HOME_DIR /home/$USER_NAME
#
# Replace HOST_UID/HOST_GID with your user / group id (needed for X11)
ENV HOST_UID 1000
ENV HOST_GID 1000
RUN export uid=${HOST_UID} gid=${HOST_GID} && \
mkdir -p ${HOME_DIR} && \
echo "$USER_NAME:x:${uid}:${gid}:$USER_NAME,,,:$HOME_DIR:/bin/bash" >> /etc/passwd && \
echo "$USER_NAME:x:${uid}:" >> /etc/group && \
echo "$USER_NAME ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/$USER_NAME && \
chmod 0440 /etc/sudoers.d/$USER_NAME && \
chown ${uid}:${gid} -R ${HOME_DIR}
USER ${USER_NAME}
WORKDIR ${HOME_DIR}
COPY . .
RUN sudo chown ${HOST_UID}:${HOST_GID} -R *
RUN sudo chmod 775 -R *
Copyright 2018-2019 Open-MMLab. All rights reserved.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Copyright 2018-2019 Open-MMLab.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
# Benchmark and Model Zoo
## Environment
### Hardware
- 8 NVIDIA Tesla V100 GPUs
- Intel Xeon 4114 CPU @ 2.20GHz
### Software environment
- Python 3.6 / 3.7
- PyTorch 0.4.1
- CUDA 9.0.176
- CUDNN 7.0.4
- NCCL 2.1.15
## Common settings
- All baselines were trained using 8 GPUs with a batch size of 16 (2 images per GPU).
- All models were trained on `coco_2017_train` and tested on `coco_2017_val`.
- We use distributed training, and BN layer statistics are fixed.
- We adopt the same training schedules as Detectron. 1x indicates 12 epochs and 2x indicates 24 epochs, which correspond to slightly fewer iterations than Detectron; the difference can be ignored.
- All pytorch-style pretrained backbones on ImageNet are from PyTorch model zoo.
- We report the training GPU memory as the maximum value of `torch.cuda.max_memory_cached()`
over all 8 GPUs (see the sketch after this list). Note that this value is usually less than what `nvidia-smi` shows, but
closer to the actual requirements.
- We report the inference time as the overall time, including data loading,
network forwarding and post-processing.
- The training memory and time of the 2x schedule are simply copied from 1x;
they should be very close to the actual memory and time.
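A minimal sketch of how this number can be collected with PyTorch (the training loop itself is omitted; the division simply converts bytes to GB):
```
import torch

# ... run training iterations on this GPU ...

max_mem_gb = torch.cuda.max_memory_cached() / 1024**3  # peak memory held by the caching allocator, in GB
print('max cached GPU memory: {:.1f} GB'.format(max_mem_gb))
```
The value reported in the tables below is the maximum of this number over the 8 training GPUs.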
## Baselines
We released RPN, Faster R-CNN and Mask R-CNN models in the first version. More models with different backbones will be added to the model zoo.
### RPN
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | AR1000 | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:--------:|
| R-50-FPN | caffe | 1x | 4.5 | 0.379 | 14.4 | 58.2 | - |
| R-50-FPN | pytorch | 1x | 4.8 | 0.407 | 14.5 | 57.1 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/rpn_r50_fpn_1x_20181010-4a9c0712.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/rpn_r50_fpn_1x_20181010_results.pkl.json) |
| R-50-FPN | pytorch | 2x | 4.8 | 0.407 | 14.5 | 57.6 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/rpn_r50_fpn_2x_20181010-88a4a471.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/rpn_r50_fpn_2x_20181010_results.pkl.json) |
### Faster R-CNN
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:--------:|
| R-50-FPN | caffe | 1x | 4.9 | 0.525 | 10.0 | 36.7 | - |
| R-50-FPN | pytorch | 1x | 5.1 | 0.554 | 9.9 | 36.4 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/faster_rcnn_r50_fpn_1x_20181010-3d1b3351.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/faster_rcnn_r50_fpn_1x_20181010_results.pkl.json) |
| R-50-FPN | pytorch | 2x | 5.1 | 0.554 | 9.9 | 37.7 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/faster_rcnn_r50_fpn_2x_20181010-443129e1.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/faster_rcnn_r50_fpn_2x_20181010_results.pkl.json) |
### Mask R-CNN
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | mask AP | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:-------:|:--------:|
| R-50-FPN | caffe | 1x | 5.9 | 0.658 | 7.7 | 37.5 | 34.4 | - |
| R-50-FPN | pytorch | 1x | 5.8 | 0.690 | 7.7 | 37.3 | 34.2 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/mask_rcnn_r50_fpn_1x_20181010-069fa190.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/mask_rcnn_r50_fpn_1x_20181010_results.pkl.json) |
| R-50-FPN | pytorch | 2x | 5.8 | 0.690 | 7.7 | 38.6 | 35.1 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/mask_rcnn_r50_fpn_2x_20181010-41d35c05.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/mask_rcnn_r50_fpn_2x_20181010_results.pkl.json) |
### Fast R-CNN (with pre-computed proposals)
| Backbone | Style | Type | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | mask AP | Download |
|:--------:|:-------:|:------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:-------:|:--------:|
| R-50-FPN | caffe | Faster | 1x | 3.5 | 0.35 | 14.6 | 36.6 | - | - |
| R-50-FPN | pytorch | Faster | 1x | 4.0 | 0.38 | 14.5 | 35.8 | - | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/fast_rcnn_r50_fpn_1x_20181010-08160859.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/fast_rcnn_r50_fpn_1x_20181010_results.pkl.json) |
| R-50-FPN | pytorch | Faster | 2x | 4.0 | 0.38 | 14.5 | 37.1 | - | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/fast_rcnn_r50_fpn_2x_20181010-d263ada5.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/fast_rcnn_r50_fpn_2x_20181010_results.pkl.json) |
| R-50-FPN | caffe | Mask | 1x | 5.4 | 0.47 | 10.7 | 37.3 | 34.5 | - |
| R-50-FPN | pytorch | Mask | 1x | 5.3 | 0.50 | 10.6 | 36.8 | 34.1 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/fast_mask_rcnn_r50_fpn_1x_20181010-e030a38f.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/fast_mask_rcnn_r50_fpn_1x_20181010_results.pkl.json) |
| R-50-FPN | pytorch | Mask | 2x | 5.3 | 0.50 | 10.6 | 37.9 | 34.8 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/fast_mask_rcnn_r50_fpn_2x_20181010-5048cb03.pth) \| [result](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/results/fast_mask_rcnn_r50_fpn_2x_20181010_results.pkl.json) |
### RetinaNet
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:--------:|
| R-50-FPN | caffe | 1x | 6.7 | 0.468 | 9.4 | 35.8 | - |
| R-50-FPN | pytorch | 1x | 6.9 | 0.496 | 9.1 | 35.6 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/retinanet_r50_fpn_1x_20181125-3d3c2142.pth) |
| R-50-FPN | pytorch | 2x | 6.9 | 0.496 | 9.1 | 36.5 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/retinanet_r50_fpn_2x_20181125-e0dbec97.pth) |
### Cascade R-CNN
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:--------:|
| R-50-FPN | caffe | 1x | 5.0 | 0.592 | 8.1 | 40.3 | - |
| R-50-FPN | pytorch | 1x | 5.5 | 0.622 | 8.0 | 40.3 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_rcnn_r50_fpn_1x_20181123-b1987c4a.pth) |
| R-50-FPN | pytorch | 20e | 5.5 | 0.622 | 8.0 | 41.1 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_rcnn_r50_fpn_20e_20181123-db483a09.pth) |
### Cascade Mask R-CNN
| Backbone | Style | Lr schd | Mem (GB) | Train time (s/iter) | Inf time (fps) | box AP | mask AP | Download |
|:--------:|:-------:|:-------:|:--------:|:-------------------:|:--------------:|:------:|:-------:|:--------:|
| R-50-FPN | caffe | 1x | 7.5 | 0.880 | 5.8 | 41.0 | 35.6 | - |
| R-50-FPN | pytorch | 1x | 7.6 | 0.910 | 5.7 | 41.3 | 35.7 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_mask_rcnn_r50_fpn_1x_20181123-88b170c9.pth) |
| R-50-FPN | pytorch | 20e | 7.6 | 0.910 | 5.7 | 42.4 | 36.6 | [model](https://s3.ap-northeast-2.amazonaws.com/open-mmlab/mmdetection/models/cascade_mask_rcnn_r50_fpn_20e_20181123-6e0c9713.pth) |
## Comparison with Detectron
We compare mmdetection with [Detectron](https://github.com/facebookresearch/Detectron)
and [Detectron.pytorch](https://github.com/roytseng-tw/Detectron.pytorch),
a third-party port of Detectron to PyTorch. The backbone used is R-50-FPN.
In general, mmdetection has three advantages over Detectron.
- **Higher performance** (especially in terms of mask AP)
- **Faster training speed**
- **Memory efficient**
### Performance
Detectron and Detectron.pytorch use caffe-style ResNet as the backbone.
In order to utilize the PyTorch model zoo, we use pytorch-style ResNet in our experiments.
Meanwhile, we also train models with caffe-style ResNet in the 1x experiments for comparison.
We find that pytorch-style ResNet usually converges more slowly than caffe-style ResNet,
thus leading to slightly lower results in the 1x schedule, but the final results
of the 2x schedule are higher.
We report results using both caffe-style (weights converted from
[here](https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md#imagenet-pretrained-models))
and pytorch-style (weights from the official model zoo) ResNet backbone,
indicated as *pytorch-style results* / *caffe-style results*.
<table>
<tr>
<th>Type</th>
<th>Lr schd</th>
<th>Detectron</th>
<th>Detectron.pytorch</th>
<th>mmdetection</th>
</tr>
<tr>
<td rowspan="2">RPN</td>
<td>1x</td>
<td>57.2</td>
<td>-</td>
<td>57.1 / 58.2</td>
</tr>
<tr>
<td>2x</td>
<td>-</td>
<td>-</td>
<td>57.6 / -</td>
</tr>
<tr>
<td rowspan="2">Faster R-CNN</td>
<td>1x</td>
<td>36.7</td>
<td>37.1</td>
<td>36.4 / 36.7</td>
</tr>
<tr>
<td>2x</td>
<td>37.9</td>
<td>-</td>
<td>37.7 / -</td>
</tr>
<tr>
<td rowspan="2">Mask R-CNN</td>
<td>1x</td>
<td>37.7 &amp; 33.9</td>
<td>37.7 &amp; 33.7</td>
<td>37.3 &amp; 34.2 / 37.5 &amp; 34.4</td>
</tr>
<tr>
<td>2x</td>
<td>38.6 &amp; 34.5</td>
<td>-</td>
<td>38.6 &amp; 35.1 / -</td>
</tr>
<tr>
<td rowspan="2">Fast R-CNN</td>
<td>1x</td>
<td>36.4</td>
<td>-</td>
<td>35.8 / 36.6</td>
</tr>
<tr>
<td>2x</td>
<td>36.8</td>
<td>-</td>
<td>37.1 / -</td>
</tr>
<tr>
<td rowspan="2">Fast R-CNN (w/mask)</td>
<td>1x</td>
<td>37.3 &amp; 33.7</td>
<td>-</td>
<td>36.8 &amp; 34.1 / 37.3 &amp; 34.5</td>
</tr>
<tr>
<td>2x</td>
<td>37.7 &amp; 34.0</td>
<td>-</td>
<td>37.9 &amp; 34.8 / -</td>
</tr>
</table>
### Training Speed
The training speed is measured in s/iter; lower is better.
<table>
<tr>
<th>Type</th>
<th>Detectron (P100<sup>1</sup>)</th>
<th>Detectron.pytorch (XP<sup>2</sup>)</th>
<th>mmdetection<sup>3</sup> (V100<sup>4</sup> / XP)</th>
</tr>
<tr>
<td>RPN</td>
<td>0.416</td>
<td>-</td>
<td>0.407 / 0.413</td>
</tr>
<tr>
<td>Faster R-CNN</td>
<td>0.544</td>
<td>1.015</td>
<td>0.554 / 0.579</td>
</tr>
<tr>
<td>Mask R-CNN</td>
<td>0.889</td>
<td>1.435</td>
<td>0.690 / 0.732</td>
</tr>
<tr>
<td>Fast R-CNN</td>
<td>0.285</td>
<td>-</td>
<td>0.375 / 0.398</td>
</tr>
<tr>
<td>Fast R-CNN (w/mask)</td>
<td>0.377</td>
<td>-</td>
<td>0.504 / 0.574</td>
</tr>
</table>
\*1. Detectron reports the speed on Facebook's Big Basin servers (P100);
it is slower on our V100 servers, so we use the officially reported values.
\*2. Detectron.pytorch does not report the runtime, and we encountered some issues running
it on V100, so we report the speed on TITAN XP.
\*3. The speed of pytorch-style ResNet is approximately 5% slower than caffe-style,
and we report the pytorch-style results here.
\*4. We also run the models on a DGX-1 server (P100) and the speed is almost the same as our V100 servers.
### Inference Speed
The inference speed is measured in fps (img/s) on a single GPU; higher is better.
<table>
<tr>
<th>Type</th>
<th>Detectron (P100)</th>
<th>Detectron.pytorch (XP)</th>
<th>mmdetection (V100 / XP)</th>
</tr>
<tr>
<td>RPN</td>
<td>12.5</td>
<td>-</td>
<td>14.5 / 15.4</td>
</tr>
<tr>
<td>Faster R-CNN</td>
<td>10.3</td>
<td></td>
<td>9.9 / 9.8</td>
</tr>
<tr>
<td>Mask R-CNN</td>
<td>8.5</td>
<td></td>
<td>7.7 / 7.4</td>
</tr>
<tr>
<td>Fast R-CNN</td>
<td>12.5</td>
<td></td>
<td>14.5 / 14.1</td>
</tr>
<tr>
<td>Fast R-CNN (w/mask)</td>
<td>9.9</td>
<td></td>
<td>10.6 / 10.3</td>
</tr>
</table>
### Training memory
Our tests show that mmdetection is clearly more memory efficient than Detectron;
the main cause is the deep learning framework itself, not our own optimizations.
Note that Caffe2 and PyTorch use different APIs to obtain memory usage,
and their implementations are not exactly the same.
`nvidia-smi` shows a larger memory usage for both Detectron and mmdetection, e.g.,
when training Mask R-CNN with 2 images per GPU it reports 10.6G for Detectron and 9.3G for mmdetection, which is clearly more than what is actually required.
> With mmdetection, we can train R-50 FPN Mask R-CNN with **4** images per GPU (TITAN XP, 12G),
which is a promising result.
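As a rough illustration of why the framework-reported number is smaller than what `nvidia-smi` shows, PyTorch distinguishes memory actually occupied by tensors from memory held by its caching allocator (a sketch; the printed values depend entirely on the model and hardware):
```
import torch

allocated = torch.cuda.max_memory_allocated() / 1024**3  # peak memory occupied by tensors, in GB
cached = torch.cuda.max_memory_cached() / 1024**3         # peak memory held by the caching allocator, in GB
print('allocated: {:.1f} GB, cached: {:.1f} GB'.format(allocated, cached))
# nvidia-smi additionally counts the CUDA context and other overhead, so it reports an even larger number.
```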
{
"challenge_id" : "aicrowd-food-recognition-challenge",
"grader_id": "aicrowd-food-recognition-challenge",
"authors" : ["nikhil13prs"],
"description" : "Food Recognition Challenge Submission",
"license" : "MIT",
"gpu": true
}
#!/usr/bin/env python
import aicrowd_api
import os
########################################################################
# Instantiate Event Notifier
########################################################################
aicrowd_events = aicrowd_api.events.AIcrowdEvents()
def execution_start():
########################################################################
# Register Evaluation Start event
########################################################################
aicrowd_events.register_event(
event_type=aicrowd_events.AICROWD_EVENT_INFO,
message="execution_started",
payload={ #Arbitrary Payload
"event_type": "food_recognition_challenge:execution_started"
}
)
def execution_progress(progress_payload):
image_ids = progress_payload["image_ids"]
########################################################################
# Register Evaluation Progress event
########################################################################
aicrowd_events.register_event(
event_type=aicrowd_events.AICROWD_EVENT_INFO,
message="execution_progress",
payload={ #Arbitrary Payload
"event_type": "food_recognition_challenge:execution_progress",
"image_ids" : image_ids
}
)
def execution_success(payload):
predictions_output_path = payload["predictions_output_path"]
########################################################################
# Register Evaluation Complete event
########################################################################
expected_output_path = os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH", False)
if expected_output_path != predictions_output_path:
raise Exception("Please write the output to the path specified in the environment variable : AICROWD_PREDICTIONS_OUTPUT_PATH instead of {}".format(predictions_output_path))
aicrowd_events.register_event(
event_type=aicrowd_events.AICROWD_EVENT_SUCCESS,
message="execution_success",
payload={ #Arbitrary Payload
"event_type": "food_recognition_challenge:execution_success",
"predictions_output_path" : predictions_output_path
},
blocking=True
)
def execution_error(error):
########################################################################
# Register Evaluation Complete event
########################################################################
aicrowd_events.register_event(
event_type=aicrowd_events.AICROWD_EVENT_ERROR,
message="execution_error",
payload={ #Arbitrary Payload
"event_type": "food_recognition_challenge:execution_error",
"error" : error
},
blocking=True
)
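# ----------------------------------------------------------------------
# Minimal usage sketch of the helpers above (hypothetical values; the
# AICROWD_PREDICTIONS_OUTPUT_PATH environment variable must be set for
# execution_success to pass the output-path check):
#
#   execution_start()
#   execution_progress({"image_ids": [1, 2, 3]})
#   execution_success({
#       "predictions_output_path": os.getenv("AICROWD_PREDICTIONS_OUTPUT_PATH")
#   })
# ----------------------------------------------------------------------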
[
{
"id": 2578,
"name": "water",
"name_readable": "Water",
"supercategory": "food"
},
{
"id": 2939,
"name": "pizza-margherita-baked",
"name_readable": "Pizza, Margherita, baked",
"supercategory": "food"
},
{
"id": 1085,
"name": "broccoli",
"name_readable": "Broccoli",
"supercategory": "food"
},
{
"id": 1040,
"name": "salad-leaf-salad-green",
"name_readable": "Salad, leaf / salad, green",
"supercategory": "food"
},
{
"id": 1070,
"name": "zucchini",
"name_readable": "Zucchini",
"supercategory": "food"
},
{
"id": 2022,
"name": "egg",
"name_readable": "Egg",
"supercategory": "food"
},
{
"id": 2053,
"name": "butter",
"name_readable": "Butter",
"supercategory": "food"
},
{
"id": 1566,
"name": "bread-white",
"name_readable": "Bread, white",
"supercategory": "food"
},
{
"id": 1151,
"name": "apple",
"name_readable": "Apple",
"supercategory": "food"
},
{
"id": 2131,
"name": "dark-chocolate",
"name_readable": "Dark chocolate",
"supercategory": "food"
},
{
"id": 2521,
"name": "white-coffee-with-caffeine",
"name_readable": "White coffee, with caffeine",
"supercategory": "food"
},
{
"id": 1068,
"name": "sweet-pepper",
"name_readable": "Sweet pepper",
"supercategory": "food"
},
{
"id": 1026,
"name": "mixed-salad-chopped-without-sauce",
"name_readable": "Mixed salad (chopped without sauce)",
"supercategory": "food"
},
{
"id": 2738,
"name": "tomato-sauce",
"name_readable": "Tomato sauce",
"supercategory": "food"
},
{
"id": 1565,
"name": "bread-wholemeal",
"name_readable": "Bread, wholemeal",
"supercategory": "food"
},
{
"id": 2512,
"name": "coffee-with-caffeine",
"name_readable": "Coffee, with caffeine",
"supercategory": "food"
},
{
"id": 1061,
"name": "cucumber",
"name_readable": "Cucumber",
"supercategory": "food"
},
{
"id": 1311,
"name": "cheese",
"name_readable": "Cheese",
"supercategory": "food"
},
{
"id": 1505,
"name": "pasta-spaghetti",
"name_readable": "Pasta, spaghetti",
"supercategory": "food"
},
{
"id": 1468,
"name": "rice",
"name_readable": "Rice",
"supercategory": "food"
},
{
"id": 1967,
"name": "salmon",
"name_readable": "Salmon",
"supercategory": "food"
},
{
"id": 1078,
"name": "carrot",
"name_readable": "Carrot",
"supercategory": "food"
},
{
"id": 1116,
"name": "onion",
"name_readable": "Onion",
"supercategory": "food"
},
{
"id": 1022,
"name": "mixed-vegetables",
"name_readable": "Mixed vegetables",
"supercategory": "food"
},
{
"id": 2504,
"name": "espresso-with-caffeine",
"name_readable": "Espresso, with caffeine",
"supercategory": "food"
},
{
"id": 1154,
"name": "banana",
"name_readable": "Banana",
"supercategory": "food"
},
{
"id": 1163,
"name": "strawberries",
"name_readable": "Strawberries",
"supercategory": "food"
},
{
"id": 2750,
"name": "mayonnaise",
"name_readable": "Mayonnaise",
"supercategory": "food"
},
{
"id": 1210,
"name": "almonds",
"name_readable": "Almonds",
"supercategory": "food"
},
{
"id": 2620,
"name": "wine-white",
"name_readable": "Wine, white",
"supercategory": "food"
},
{
"id": 1310,
"name": "hard-cheese",
"name_readable": "Hard cheese",
"supercategory": "food"
},
{
"id": 1893,
"name": "ham-raw",
"name_readable": "Ham, raw",
"supercategory": "food"
},
{
"id": 1069,
"name": "tomato",
"name_readable": "Tomato",
"supercategory": "food"
},
{
"id": 1058,
"name": "french-beans",
"name_readable": "French beans",
"supercategory": "food"
},
{
"id": 1180,
"name": "mandarine",
"name_readable": "Mandarine",
"supercategory": "food"
},
{
"id": 2618,
"name": "wine-red",
"name_readable": "Wine, red",
"supercategory": "food"
},
{
"id": 1010,
"name": "potatoes-steamed",
"name_readable": "Potatoes steamed",
"supercategory": "food"
},
{
"id": 1588,
"name": "croissant",
"name_readable": "Croissant",
"supercategory": "food"
},
{
"id": 1879,
"name": "salami",
"name_readable": "Salami",
"supercategory": "food"
},
{
"id": 3080,
"name": "boisson-au-glucose-50g",
"name_readable": "Boisson au glucose 50g",
"supercategory": "food"
},
{
"id": 2388,
"name": "biscuits",
"name_readable": "Biscuits",
"supercategory": "food"
},
{
"id": 1108,
"name": "corn",
"name_readable": "Corn",
"supercategory": "food"
},
{
"id": 1032,
"name": "leaf-spinach",
"name_readable": "Leaf spinach",
"supercategory": "food"
},
{
"id": 2099,
"name": "jam",
"name_readable": "Jam",
"supercategory": "food"
},
{
"id": 2530,
"name": "tea-green",
"name_readable": "Tea, green",
"supercategory": "food"
},
{
"id": 1013,
"name": "chips-french-fries",
"name_readable": "Chips, french fries",
"supercategory": "food"
},
{
"id": 1323,
"name": "parmesan",
"name_readable": "Parmesan",
"supercategory": "food"
},
{
"id": 2634,
"name": "beer",
"name_readable": "Beer",
"supercategory": "food"
},
{
"id": 1056,
"name": "avocado",
"name_readable": "Avocado",
"supercategory": "food"
},
{
"id": 1520,
"name": "bread-french-white-flour",
"name_readable": "Bread, French (white flour)",
"supercategory": "food"
},
{
"id": 1788,
"name": "chicken",
"name_readable": "Chicken",
"supercategory": "food"
},
{
"id": 1352,
"name": "soft-cheese",
"name_readable": "Soft cheese",
"supercategory": "food"
},
{
"id": 2498,
"name": "tea",
"name_readable": "Tea",
"supercategory": "food"
},
{
"id": 2711,
"name": "sauce-savoury",
"name_readable": "Sauce (savoury)",
"supercategory": "food"
},
{
"id": 2103,
"name": "honey",
"name_readable": "Honey",
"supercategory": "food"
},
{
"id": 1554,
"name": "bread-whole-wheat",
"name_readable": "Bread, whole wheat",
"supercategory": "food"
},
{
"id": 1556,
"name": "bread-sourdough",
"name_readable": "Bread, sourdough",
"supercategory": "food"
},
{
"id": 1307,
"name": "gruyere",
"name_readable": "Gruyère",
"supercategory": "food"
},
{
"id": 1060,
"name": "pickle",
"name_readable": "Pickle",
"supercategory": "food"
},
{
"id": 1220,
"name": "mixed-nuts",
"name_readable": "Mixed nuts",
"supercategory": "food"
},
{
"id": 2580,
"name": "water-mineral",
"name_readable": "Water, mineral",
"supercategory": "food"
}
]
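A short sketch of how this category list might be consumed, for example to map category ids to human-readable names (the filename `classes.json` is only an assumption about where the list above is stored):
```
import json

with open('classes.json') as f:  # hypothetical path to the category list shown above
    categories = json.load(f)

id_to_name = {c['id']: c['name_readable'] for c in categories}
print(id_to_name[2578])  # -> Water
```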
#!/bin/bash
# IMAGE_NAME is expected to be set in the environment
docker build -t "$IMAGE_NAME" .