Hi,
I'm running the example franka_reacher.py on Ubuntu 18 with
- NVIDIA GeForce MX130, 2 GB of memory
- CUDA toolkit 11.2
What can I do to fix this out-of-memory error that shows up intermittently? The full traceback from the worker process is below:
Process Process-1:
Traceback (most recent call last):
File "/home/rgap/miniconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/rgap/miniconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/rgap/LIBRARIES/storm/storm_kit/mpc/utils/mpc_process_wrapper.py", line 212, in optimize_process
controller = torch.load(control_string)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 857, in persistent_load
load_tensor(data_type, size, key, _maybe_decode_ascii(location))
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 846, in load_tensor
loaded_storages[key] = restore_location(storage, location)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/serialization.py", line 157, in _cuda_deserialize
return obj.cuda(device)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/_utils.py", line 79, in _cuda
return new_type(self.size()).copy_(self, non_blocking)
File "/home/rgap/.virtualenvs/exp/lib/python3.8/site-packages/torch/cuda/__init__.py", line 528, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
['95.704', '1.288', '0.742'] 0.138 0.020
python3.8: /opt/conda/conda-bld/magma-cuda102_1583546904148/work/interface_cuda/interface.cpp:897: void magma_queue_create_from_cuda_internal(magma_device_t, cudaStream_t, cublasHandle_t, cusparseHandle_t, magma_queue**, const char*, const char*, int): Assertion `queue->dAarray__ != __null' failed.
/home/rgap/miniconda3/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
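For context, the failure happens when optimize_process in mpc_process_wrapper.py deserializes the controller straight onto the GPU inside the worker process. Below is a minimal sketch of that step and of a workaround I'm considering (loading to host memory first); the file path is a placeholder, and the control_string name is taken from the traceback above, so please correct me if this isn't how storm intends it to be used:

    import torch

    # Placeholder for the serialized controller the parent process hands to the
    # worker; in storm this arrives as `control_string` (name from the traceback).
    control_string = "/tmp/controller.pt"

    # This mirrors the failing call: the pickle is restored directly onto the GPU,
    # which triggers "CUDA error: out of memory" on my 2 GB card.
    controller = torch.load(control_string)

    # Possible workaround (unverified for storm): deserialize onto the CPU first,
    # then move only what is needed to the GPU explicitly.
    controller_cpu = torch.load(control_string, map_location="cpu")

Would loading with map_location="cpu" like this be safe here, or is there a recommended way to reduce GPU memory use for this example?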