RuntimeError: falseINTERNAL ASSERT FAILED at "../aten/src/ATen/MapAllocator.cpp":263, please report a bug to PyTorch. unable to open shared memory object </torch_12030_1001> in read-write mode
Hello, I encountered this error when I ran `python -m torchbeast.monobeast --total_steps 1000000000 --num_actors 8 --num_learner 1 --batch_size 32 --unroll_length 64 --savedir ./results --checkpoint_interval 1800 --xpid nmmo`.
The full output is below:

```
Found log directory: ./results/nmmo
[INFO:12030 file_writer:107 2022-04-16 13:14:32,256] Found log directory: ./results/nmmo
Symlinked log directory: ./results/latest
[INFO:12030 file_writer:117 2022-04-16 13:14:32,257] Symlinked log directory: ./results/latest
Saving arguments to ./results/nmmo/meta.json
[INFO:12030 file_writer:129 2022-04-16 13:14:32,257] Saving arguments to ./results/nmmo/meta.json
Path to meta file already exists. Not overriding meta.
[WARNING:12030 file_writer:131 2022-04-16 13:14:32,257] Path to meta file already exists. Not overriding meta.
Saving messages to ./results/nmmo/out.log
[INFO:12030 file_writer:137 2022-04-16 13:14:32,257] Saving messages to ./results/nmmo/out.log
Path to message file already exists. New data will be appended.
[WARNING:12030 file_writer:139 2022-04-16 13:14:32,257] Path to message file already exists. New data will be appended.
Saving logs data to ./results/nmmo/logs.csv
[INFO:12030 file_writer:147 2022-04-16 13:14:32,257] Saving logs data to ./results/nmmo/logs.csv
Saving logs' fields to ./results/nmmo/fields.csv
[INFO:12030 file_writer:148 2022-04-16 13:14:32,258] Saving logs' fields to ./results/nmmo/fields.csv
Path to log file already exists. New data will be appended.
[WARNING:12030 file_writer:151 2022-04-16 13:14:32,258] Path to log file already exists. New data will be appended.
[INFO:12030 monobeast:475 2022-04-16 13:14:32,281] Using CUDA.
```
```
Traceback (most recent call last):
  File "/home/chenweilong/.conda/envs/ijcai2022-nmmo/lib/python3.9/runpy.py", line 197, in _run_module_as_main
  File "/home/chenweilong/.conda/envs/ijcai2022-nmmo/lib/python3.9/runpy.py", line 87, in _run_code
  File "/home/chenweilong/ijcai2022-nmmo-starter-kit/ijcai2022-nmmo-baselines/monobeast/training/torchbeast/monobeast.py", line 702, in <module>
  File "/home/chenweilong/ijcai2022-nmmo-starter-kit/ijcai2022-nmmo-baselines/monobeast/training/torchbeast/monobeast.py", line 694, in main
  File "/home/chenweilong/ijcai2022-nmmo-starter-kit/ijcai2022-nmmo-baselines/monobeast/training/torchbeast/monobeast.py", line 484, in train
  File "/home/chenweilong/ijcai2022-nmmo-starter-kit/ijcai2022-nmmo-baselines/monobeast/training/torchbeast/monobeast.py", line 413, in create_buffers
  File "/home/chenweilong/.conda/envs/ijcai2022-nmmo/lib/python3.9/site-packages/torch/tensor.py", line 426, in share_memory_
  File "/home/chenweilong/.conda/envs/ijcai2022-nmmo/lib/python3.9/site-packages/torch/storage.py", line 145, in share_memory_
RuntimeError: falseINTERNAL ASSERT FAILED at "../aten/src/ATen/MapAllocator.cpp":263, please report a bug to PyTorch. unable to open shared memory object </torch_12030_1001> in read-write mode
```
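For context, the failure happens inside monobeast's `create_buffers`, which allocates the rollout tensors and calls `Tensor.share_memory_()` on each one so the actor processes can write into them. That call backs each tensor's storage with a POSIX shared-memory object under `/dev/shm` (named like `/torch_<pid>_<id>`, matching the `/torch_12030_1001` in the error), so the assert usually means the object could not be opened — commonly because `/dev/shm` is too small or an open-file/shm limit was hit. A minimal sketch of the failing call (the buffer shape here is illustrative, not monobeast's actual buffer spec):

```python
import torch

# Allocate a rollout-style buffer; the shape is made up for illustration.
buf = torch.zeros(64, 32, 128)

# Same call as torch/tensor.py:426 in the traceback: moves the tensor's
# storage into a POSIX shared-memory object so child processes can see it.
buf.share_memory_()

# After the call, the storage is shared across processes.
assert buf.is_shared()
```

If this minimal call fails on its own, the problem is the machine's shared-memory configuration (e.g. `/dev/shm` size, or the container's `--shm-size`) rather than the monobeast code.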