2021-09-01 12:38:19 | INFO | yolox.core.trainer:238 - epoch: 195/1000, iter: 30/39, mem: 8765Mb, iter_time: 0.850s, data_time: 0.002s, total_loss: 3.8, iou_loss: 1.9, l1_loss: 0.0, conf_loss: 1.3, cls_loss: 0.6, lr: 1.143e-03, size: 544, ETA: 6:15:26
2021-09-01 12:38:26 | INFO | yolox.core.trainer:308 - Save weights to ./YOLOX_outputs\yolox_base
50%|##### | 5/10 [00:06<00:06, 1.31s/it]
2021-09-01 12:38:34 | INFO | yolox.core.trainer:184 - Training of experiment is done and the best AP is 26.95
2021-09-01 12:38:34 | ERROR | yolox.core.launch:68 - An error has been caught in function 'launch', process 'MainProcess' (5316), thread 'MainThread' (21196):
Traceback (most recent call last):
File "F:/projection/YOLOX-train-your-data\train.py", line 114, in
launch(
└ <function launch at 0x0000014EDE632D30>
File "F:\projection\YOLOX-train-your-data\yolox\core\launch.py", line 68, in launch
main_func(*args)
│ └ (╒══════════════════╤════════════════════════════════════════════════════════════════════════════════════════════════════════...
└ <function main at 0x0000014EDEE7D9D0>
File "F:/projection/YOLOX-train-your-data\train.py", line 102, in main
trainer.train()
│ └ <function Trainer.train at 0x0000014EDCE4E820>
└ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
File "F:\projection\YOLOX-train-your-data\yolox\core\trainer.py", line 70, in train
self.train_in_epoch()
│ └ <function Trainer.train_in_epoch at 0x0000014EDCE615E0>
└ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
File "F:\projection\YOLOX-train-your-data\yolox\core\trainer.py", line 80, in train_in_epoch
self.after_epoch()
│ └ <function Trainer.after_epoch at 0x0000014EDCE6A1F0>
└ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
File "F:\projection\YOLOX-train-your-data\yolox\core\trainer.py", line 211, in after_epoch
self.evaluate_and_save_model()
│ └ <function Trainer.evaluate_and_save_model at 0x0000014EDCE6A4C0>
└ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
File "F:\projection\YOLOX-train-your-data\yolox\core\trainer.py", line 294, in evaluate_and_save_model
ap50_95, ap50, summary = self.exp.eval(evalmodel, self.evaluator, self.is_distributed)
│ │ │ │ │ │ │ └ False
│ │ │ │ │ │ └ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
│ │ │ │ │ └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000014EFEB8FE20>
│ │ │ │ └ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
│ │ │ └ YOLOX(
│ │ │ (backbone): YOLOPAFPN(
│ │ │ (backbone): CSPDarknet(
│ │ │ (stem): Focus(
│ │ │ (conv): BaseConv(
│ │ │ (conv): ...
│ │ └ <function Exp.eval at 0x0000014EDEE81D30>
│ └ ╒══════════════════╤═════════════════════════════════════════════════════════════════════════════════════════════════════════...
└ <yolox.core.trainer.Trainer object at 0x0000014EDEE82E50>
File "F:\projection\YOLOX-train-your-data\yolox\exp\yolox_base.py", line 242, in eval
return evaluator.evaluate(model, is_distributed, half)
│ │ │ │ └ False
│ │ │ └ False
│ │ └ YOLOX(
│ │ (backbone): YOLOPAFPN(
│ │ (backbone): CSPDarknet(
│ │ (stem): Focus(
│ │ (conv): BaseConv(
│ │ (conv): ...
│ └ <function VOCEvaluator.evaluate at 0x0000014EDCE568B0>
└ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000014EFEB8FE20>
File "F:\projection\YOLOX-train-your-data\yolox\evaluators\voc_evaluator.py", line 82, in evaluate
for cur_iter, (imgs, _, info_imgs, ids) in enumerate(progress_bar(self.dataloader)):
│ │ │ │ │ │ │ └ <torch.utils.data.dataloader.DataLoader object at 0x0000014EFEB8FBE0>
│ │ │ │ │ │ └ <yolox.evaluators.voc_evaluator.VOCEvaluator object at 0x0000014EFEB8FE20>
│ │ │ │ │ └ <class 'tqdm.std.tqdm'>
│ │ │ │ └ tensor([32, 33, 34, 35, 36, 37, 38, 39])
│ │ │ └ [tensor([393, 207, 301, 252, 349, 369, 389, 293]), tensor([482, 195, 547, 320, 471, 627, 577, 421])]
│ │ └ tensor([[[0., 0., 0., 0., 0.]],
│ │
│ │ [[0., 0., 0., 0., 0.]],
│ │
│ │ [[0., 0., 0., 0., 0.]],
│ │
│ │ [[0., 0., 0., 0., ...
│ └ tensor([[[[-1.0048, -0.9877, -0.9705, ..., -0.1657, -0.1657, -0.1657],
│ [-1.0048, -0.9877, -0.9705, ..., -0.1657, ...
└ 4
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\tqdm\std.py", line 1185, in iter
for obj in iterable:
│ └ <torch.utils.data.dataloader.DataLoader object at 0x0000014EFEB8FBE0>
└ [tensor([[[[-1.0048, -0.9877, -0.9705, ..., -0.1657, -0.1657, -0.1657],
[-1.0048, -0.9877, -0.9705, ..., -0.1657,...
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 521, in next
data = self._next_data()
│ └ <function _MultiProcessingDataLoaderIter._next_data at 0x0000014EDE23E310>
└ <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x0000014EFB738D30>
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1183, in _next_data
return self._process_data(data)
│ │ └ <torch._utils.ExceptionWrapper object at 0x0000014EF348CC40>
│ └ <function _MultiProcessingDataLoaderIter._process_data at 0x0000014EDE23E430>
└ <torch.utils.data.dataloader._MultiProcessingDataLoaderIter object at 0x0000014EFB738D30>
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data\dataloader.py", line 1229, in _process_data
data.reraise()
│ └ <function ExceptionWrapper.reraise at 0x0000014ED88BA8B0>
└ <torch._utils.ExceptionWrapper object at 0x0000014EF348CC40>
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch_utils.py", line 425, in reraise
raise self.exc_type(msg)
│ │ └ 'Caught RuntimeError in DataLoader worker process 1.\nOriginal Traceback (most recent call last):\n File "E:\Anaconda3\env...
│ └ <class 'RuntimeError'>
└ <torch._utils.ExceptionWrapper object at 0x0000014EF348CC40>
RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\fetch.py", line 47, in fetch
return self.collate_fn(data)
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\collate.py", line 84, in
return [default_collate(samples) for samples in transposed]
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\utils\data_utils\collate.py", line 54, in default_collate
storage = elem.storage()._new_shared(numel)
File "E:\Anaconda3\envs\pytorch_gpu\lib\site-packages\torch\storage.py", line 155, in _new_shared
return cls._new_using_filename(size)
RuntimeError: Couldn't open shared file mapping: <0000016B6A0EC302>, error code: <1455>
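The final RuntimeError is Windows error code 1455 ("the paging file is too small for this operation to complete"): each DataLoader worker process backs its collated batch with a shared-memory file mapping, and during the end-of-epoch evaluation the workers run out of page-file space, which kills the run right after the weights are saved. Two common workarounds are to enlarge the Windows page file or to run the loaders with fewer (or zero) workers. The sketch below shows the second option, assuming a custom Exp subclass like the one train.py loads; the data_num_workers attribute matches yolox_base.py, but the file name and exact class layout here are illustrative only.

# my_exp.py -- hypothetical experiment file showing the worker-count workaround
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        # 0 workers makes the train/eval DataLoaders fetch batches in the
        # main process: slower, but no per-worker shared file mappings,
        # so the "Couldn't open shared file mapping ... error code: <1455>"
        # crash inside evaluate_and_save_model() should no longer occur.
        self.data_num_workers = 0

If throughput matters, a small non-zero value such as 1 or 2 may already be enough; the trade-off is simply how many worker processes have to share the Windows page file.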