diff --git a/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md
similarity index 100%
rename from CODE_OF_CONDUCT.md
rename to .github/CODE_OF_CONDUCT.md
diff --git a/CONTRIBUTING.md b/.github/CONTRIBUTING.md
similarity index 95%
rename from CONTRIBUTING.md
rename to .github/CONTRIBUTING.md
index d459d15abd1096ca5c7ad759f62dafc39bac2e16..d183655345103dd9c0e7d7e56ebc0d9e06279eb1 100644
--- a/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -27,7 +27,7 @@ We use the following tools for linting and formatting:
 - [yapf](https://github.com/google/yapf): formatter
 - [isort](https://github.com/timothycrosley/isort): sort imports
 
-Style configurations of yapf and isort can be found in [.style.yapf](.style.yapf) and [.isort.cfg](.isort.cfg).
+Style configurations of yapf and isort can be found in [.style.yapf](../.style.yapf) and [.isort.cfg](../.isort.cfg).
 
 >Before you create a PR, make sure that your code lints and is formatted by yapf.
 
diff --git a/README.md b/README.md
index 9aaeb4af6a42113b7e6bf325f241fa52b86bce7e..d1731908e168493afc6523e8e1297ed00609e495 100644
--- a/README.md
+++ b/README.md
@@ -85,7 +85,7 @@ v0.5.1 (20/10/2018)
 ## Benchmark and model zoo
 
 Supported methods and backbones are shown in the table below.
-Results and models are available in the [Model zoo](MODEL_ZOO.md).
+Results and models are available in the [Model zoo](docs/MODEL_ZOO.md).
 
 |                    | ResNet   | ResNeXt  | SENet    | VGG      | HRNet |
 |--------------------|:--------:|:--------:|:--------:|:--------:|:-----:|
@@ -119,16 +119,16 @@ Other features
 
 ## Installation
 
-Please refer to [INSTALL.md](INSTALL.md) for installation and dataset preparation.
+Please refer to [INSTALL.md](docs/INSTALL.md) for installation and dataset preparation.
 
 
 ## Get Started
 
-Please see [GETTING_STARTED.md](GETTING_STARTED.md) for the basic usage of MMDetection.
+Please see [GETTING_STARTED.md](docs/GETTING_STARTED.md) for the basic usage of MMDetection.
 
 ## Contributing
 
-We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](CONTRIBUTING.md) for the contributing guideline.
+We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.
 
 ## Acknowledgement
 
diff --git a/GETTING_STARTED.md b/docs/GETTING_STARTED.md
similarity index 96%
rename from GETTING_STARTED.md
rename to docs/GETTING_STARTED.md
index 4dc4a37124fd28799547280ebbdf6c1577888526..b603e19e35ffe4d997de37c7bdca00568526e636 100644
--- a/GETTING_STARTED.md
+++ b/docs/GETTING_STARTED.md
@@ -103,7 +103,7 @@ for frame in video:
     show_result(frame, result, model.CLASSES, wait_time=1)
 ```
 
-A notebook demo can be found in [demo/inference_demo.ipynb](demo/inference_demo.ipynb).
+A notebook demo can be found in [demo/inference_demo.ipynb](../demo/inference_demo.ipynb).
 
 
 ## Train a model
@@ -133,7 +133,7 @@ If you want to specify the working directory in the command, you can add an argu
 
 Optional arguments are:
 
-- `--validate` (**strongly recommended**): Perform evaluation at every k (default value is 1, which can be modified like [this](configs/mask_rcnn_r50_fpn_1x.py#L174)) epochs during the training.
+- `--validate` (**strongly recommended**): Perform evaluation every k epochs during training (k defaults to 1 and can be modified like [this](../configs/mask_rcnn_r50_fpn_1x.py#L174)).
 - `--work_dir ${WORK_DIR}`: Override the working directory specified in the config file.
 - `--resume_from ${CHECKPOINT_FILE}`: Resume from a previous checkpoint file.
 
@@ -155,7 +155,7 @@ Here is an example of using 16 GPUs to train Mask R-CNN on the dev partition.
 ./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x.py /nfs/xxxx/mask_rcnn_r50_fpn_1x 16
 ```
 
-You can check [slurm_train.sh](tools/slurm_train.sh) for full arguments and environment variables.
+You can check [slurm_train.sh](../tools/slurm_train.sh) for full arguments and environment variables.
 
 If you just have multiple machines connected via Ethernet, you can refer to the
 PyTorch [launch utility](https://pytorch.org/docs/stable/distributed_deprecated.html#launch-utility).
@@ -168,7 +168,7 @@ Usually it is slow if you do not have high speed networking like infiniband.
 
 You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency.
 
-![loss curve image](demo/loss_curve.png)
+![loss curve image](../demo/loss_curve.png)
 
 ```shell
 python tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
@@ -324,12 +324,12 @@ There are two ways to work with custom datasets.
 
   You can write a new Dataset class inherited from `CustomDataset`, and overwrite two methods
   `load_annotations(self, ann_file)` and `get_ann_info(self, idx)`,
-  like [CocoDataset](mmdet/datasets/coco.py) and [VOCDataset](mmdet/datasets/voc.py).
+  like [CocoDataset](../mmdet/datasets/coco.py) and [VOCDataset](../mmdet/datasets/voc.py).
 
 - offline conversion
 
   You can convert the annotation format to the expected format above and save it to
-  a pickle or json file, like [pascal_voc.py](tools/convert_datasets/pascal_voc.py).
+  a pickle or json file, like [pascal_voc.py](../tools/convert_datasets/pascal_voc.py).
   Then you can simply use `CustomDataset`.
 
 ### Develop new components
diff --git a/INSTALL.md b/docs/INSTALL.md
similarity index 98%
rename from INSTALL.md
rename to docs/INSTALL.md
index bbc4400e9f27ed2801a0324497be3eaedcfca52b..1e5bec8d97bba001e20d01633ca12ec04a7b3e59 100644
--- a/INSTALL.md
+++ b/docs/INSTALL.md
@@ -55,7 +55,7 @@ It is recommended that you run step d each time you pull some updates from github
 
 ### Another option: Docker Image
 
-We provide a [Dockerfile](docker/Dockerfile) to build an image.
+We provide a [Dockerfile](../docker/Dockerfile) to build an image.
 
 ```shell
 # build an image with PyTorch 1.1, CUDA 10.0 and CUDNN 7.5
diff --git a/MODEL_ZOO.md b/docs/MODEL_ZOO.md
similarity index 97%
rename from MODEL_ZOO.md
rename to docs/MODEL_ZOO.md
index 118bcd4c89f7b233614341e9eb7ce1e47cc90418..c3294239860e9afe49b5ae4d6821e213c9b0601c 100644
--- a/MODEL_ZOO.md
+++ b/docs/MODEL_ZOO.md
@@ -197,7 +197,7 @@ More models with different backbones will be added to the model zoo.
 
 **Notes:**
 
-- Please refer to [Hybrid Task Cascade](configs/htc/README.md) for details and more a powerful model (50.7/43.9).
+- Please refer to [Hybrid Task Cascade](../configs/htc/README.md) for details and a more powerful model (50.7/43.9).
 
 ### SSD
 
@@ -214,54 +214,54 @@ More models with different backbones will be added to the model zoo.
 
 ### Group Normalization (GN)
 
-Please refer to [Group Normalization](configs/gn/README.md) for details.
+Please refer to [Group Normalization](../configs/gn/README.md) for details.
 
 ### Weight Standardization
 
-Please refer to [Weight Standardization](configs/gn+ws/README.md) for details.
+Please refer to [Weight Standardization](../configs/gn+ws/README.md) for details.
 
 ### Deformable Convolution v2
 
-Please refer to [Deformable Convolutional Networks](configs/dcn/README.md) for details.
+Please refer to [Deformable Convolutional Networks](../configs/dcn/README.md) for details.
 
 ### Libra R-CNN
 
-Please refer to [Libra R-CNN](configs/libra_rcnn/README.md) for details.
+Please refer to [Libra R-CNN](../configs/libra_rcnn/README.md) for details.
 
 ### Guided Anchoring
 
-Please refer to [Guided Anchoring](configs/guided_anchoring/README.md) for details.
+Please refer to [Guided Anchoring](../configs/guided_anchoring/README.md) for details.
 
 ### FCOS
 
-Please refer to [FCOS](configs/fcos/README.md) for details.
+Please refer to [FCOS](../configs/fcos/README.md) for details.
 
 ### Grid R-CNN (plus)
 
-Please refer to [Grid R-CNN](configs/grid_rcnn/README.md) for details.
+Please refer to [Grid R-CNN](../configs/grid_rcnn/README.md) for details.
 
 ### GHM
 
-Please refer to [GHM](configs/ghm/README.md) for details.
+Please refer to [GHM](../configs/ghm/README.md) for details.
 
 ### GCNet
 
-Please refer to [GCNet](configs/gcnet/README.md) for details.
+Please refer to [GCNet](../configs/gcnet/README.md) for details.
 
 ### HRNet
-Please refer to [HRNet](configs/hrnet/README.md) for details.
+Please refer to [HRNet](../configs/hrnet/README.md) for details.
 
 ### Mask Scoring R-CNN
 
-Please refer to [Mask Scoring R-CNN](configs/ms_rcnn/README.md) for details.
+Please refer to [Mask Scoring R-CNN](../configs/ms_rcnn/README.md) for details.
 
 ### Train from Scratch
 
-Please refer to [Rethinking ImageNet Pre-training](configs/scratch/README.md) for details.
+Please refer to [Rethinking ImageNet Pre-training](../configs/scratch/README.md) for details.
 
 ### Other datasets
 
-We also benchmark some methods on [PASCAL VOC](configs/pascal_voc/README.md), [Cityscapes](configs/cityscapes/README.md) and [WIDER FACE](configs/wider_face/README.md).
+We also benchmark some methods on [PASCAL VOC](../configs/pascal_voc/README.md), [Cityscapes](../configs/cityscapes/README.md) and [WIDER FACE](../configs/wider_face/README.md).
 
 
 ## Comparison with Detectron and maskrcnn-benchmark
diff --git a/ROBUSTNESS_BENCHMARKING.md b/docs/ROBUSTNESS_BENCHMARKING.md
similarity index 98%
rename from ROBUSTNESS_BENCHMARKING.md
rename to docs/ROBUSTNESS_BENCHMARKING.md
index 826c13eaa2bd45f9a52319c5134ec9acdc702f78..1c7fdf07d208c4784e51aee925d0e47f13587f88 100644
--- a/ROBUSTNESS_BENCHMARKING.md
+++ b/docs/ROBUSTNESS_BENCHMARKING.md
@@ -18,7 +18,7 @@ This page provides basic tutorials on how to use the benchmark.
 }
 ```
 
-![image corruption example](demo/corruptions_sev_3.png)
+![image corruption example](../demo/corruptions_sev_3.png)
 
 ## About the benchmark
 
diff --git a/TECHNICAL_DETAILS.md b/docs/TECHNICAL_DETAILS.md
similarity index 100%
rename from TECHNICAL_DETAILS.md
rename to docs/TECHNICAL_DETAILS.md
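
For reference, the rename hunks above amount to the following working-tree operations. This is a minimal sketch replayed in a scratch repository, with empty stand-in files for the real docs and a hypothetical user config; it is not part of the patch itself.

```shell
#!/bin/sh
# Replay the documentation renames from this patch in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "you@example.com"   # hypothetical identity for the demo commit
git config user.name "Example"
# Empty stand-ins for the files being moved; contents are irrelevant to the rename.
touch CODE_OF_CONDUCT.md CONTRIBUTING.md GETTING_STARTED.md INSTALL.md \
      MODEL_ZOO.md ROBUSTNESS_BENCHMARKING.md TECHNICAL_DETAILS.md
git add -A
git commit -qm "initial layout"
# The actual moves: community files to .github/, documentation to docs/.
mkdir -p .github docs
git mv CODE_OF_CONDUCT.md CONTRIBUTING.md .github/
git mv GETTING_STARTED.md INSTALL.md MODEL_ZOO.md \
       ROBUSTNESS_BENCHMARKING.md TECHNICAL_DETAILS.md docs/
git status --short
```

Because `git mv` only relocates the files, relative links inside them (e.g. `configs/...` becoming `../configs/...`) still have to be rewritten by hand, which is what the non-rename hunks in this patch do.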