From 6ee5e4d614abc81d64eecbf814e195c985b80956 Mon Sep 17 00:00:00 2001
From: Jon Crall <erotemic@gmail.com>
Date: Fri, 4 Oct 2019 01:51:00 -0400
Subject: [PATCH] spelling fixes (#1492)

---
 docs/GETTING_STARTED.md   | 2 +-
 docs/TECHNICAL_DETAILS.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/GETTING_STARTED.md b/docs/GETTING_STARTED.md
index 5977c71..7131e11 100644
--- a/docs/GETTING_STARTED.md
+++ b/docs/GETTING_STARTED.md
@@ -27,7 +27,7 @@ python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE}] [-
 Optional arguments:
 - `RESULT_FILE`: Filename of the output results in pickle format. If not specified, the results will not be saved to a file.
 - `EVAL_METRICS`: Items to be evaluated on the results. Allowed values are: `proposal_fast`, `proposal`, `bbox`, `segm`, `keypoints`.
-- `--show`: If specified, detection results will be ploted on the images and shown in a new window. It is only applicable to single GPU testing. Please make sure that GUI is available in your environment, otherwise you may encounter the error like `cannot connect to X server`.
+- `--show`: If specified, detection results will be plotted on the images and shown in a new window. It is only applicable to single GPU testing. Please make sure that GUI is available in your environment, otherwise you may encounter the error like `cannot connect to X server`.
 
 Examples:
 
diff --git a/docs/TECHNICAL_DETAILS.md b/docs/TECHNICAL_DETAILS.md
index e71355b..e58d11a 100644
--- a/docs/TECHNICAL_DETAILS.md
+++ b/docs/TECHNICAL_DETAILS.md
@@ -96,11 +96,11 @@ We adopt distributed training for both single machine and multiple machines.
 Supposing that the server has 8 GPUs, 8 processes will be started and each process runs on a single GPU.
 
 Each process keeps an isolated model, data loader, and optimizer.
-Model parameters are only synchronized once at the begining.
+Model parameters are only synchronized once at the beginning.
 After a forward and backward pass, gradients will be allreduced among all GPUs,
 and the optimizer will update model parameters.
 Since the gradients are allreduced, the model parameter stays the same for all processes after the iteration.
 
 ## Other information
 
-For more information, please refer to our [technical report](https://arxiv.org/abs/1906.07155).
\ No newline at end of file
+For more information, please refer to our [technical report](https://arxiv.org/abs/1906.07155).
-- 
GitLab
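
The hunk in `docs/TECHNICAL_DETAILS.md` edits the passage describing the distributed training flow: parameters are synchronized once at the beginning, and thereafter only gradients are allreduced. As a companion to that passage, here is a minimal sketch of the same flow, assuming PyTorch's `torch.distributed` and `DistributedDataParallel` APIs (which mmdetection builds on) and a single multi-GPU machine; the toy model, data, and hyperparameters are hypothetical placeholders, not mmdetection code.

```python
# Minimal sketch of allreduce-based distributed training on one machine,
# one process per GPU. Assumes launch via `python -m torch.distributed.launch
# --nproc_per_node=NUM_GPUS sketch.py`, which sets the environment variables
# that the default env:// rendezvous of init_process_group reads.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel


def main():
    dist.init_process_group(backend='nccl')  # env:// rendezvous by default
    rank = dist.get_rank()                   # == GPU index on a single machine
    torch.cuda.set_device(rank)

    # Each process keeps an isolated model, data loader, and optimizer.
    model = torch.nn.Linear(16, 2).cuda()    # hypothetical stand-in model
    # Wrapping in DDP broadcasts rank 0's parameters once, at the beginning;
    # after that only gradients are communicated.
    model = DistributedDataParallel(model, device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):                      # hypothetical training loop
        inputs = torch.randn(4, 16).cuda()   # stand-in for a per-process loader
        targets = torch.randint(0, 2, (4,)).cuda()
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        optimizer.zero_grad()
        # backward() triggers an allreduce of the gradients across all GPUs,
        # so every process ends up with the same averaged gradient ...
        loss.backward()
        # ... and the purely local optimizer steps keep the replicas identical.
        optimizer.step()


if __name__ == '__main__':
    main()
```

Launched with, e.g., `python -m torch.distributed.launch --nproc_per_node=8 sketch.py` on an 8-GPU server, this starts 8 processes, each driving one GPU, matching the behavior the patched passage describes.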