torch-model-archiver --model-name fallDown --version 0.1 --serialized-file /data/share/imageAlgorithm/zhangcheng/code/yolov5/runs/train/exp5/weights/best.torchscript.pt --handler /data/share/imageAlgorithm/zhangcheng/code/yolov5/utils/torchserve_handler.py --extra-files /data/share/imageAlgorithm/zhangcheng/code/yolov5/runs/train/exp5/weights/index_to_name.json,/data/share/imageAlgorithm/zhangcheng/code/yolov5/utils/torchserve_handler.py --export-path /data/share/imageAlgorithm/zhangcheng/code/yolov5/runs/train/exp5/weights/
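The index_to_name.json passed via --extra-files must be strict JSON: double-quoted keys and values, no trailing comma after the last entry, no comments. A minimal sketch of a valid label map (the class names here are assumptions, not taken from the log):

```json
{
    "0": "fall",
    "1": "normal"
}
```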
docker run -itd --gpus '"device=5,6"' -p 8080:8080 -p 8081:8081 -p 8082:8082 -p 7070:7070 -p 7071:7071 --name fallDetect -v /data/share/imageAlgorithm/zhangcheng/code/yolov5/runs/train/exp5/weights/:/home/model-server/model-store/ pytorch/torchserve /bin/bash
torchserve --start --ncs --model-store /home/model-server/model-store --models model-store/fallDown.mar
model-server@aec403759fcc:~$ torchserve --start --ncs --model-store /home/model-server/model-store --models model-store/fallDown.mar
model-server@aec403759fcc:~$ 2021-03-16 02:37:49,517 [INFO ] main org.pytorch.serve.ModelServer -
Torchserve version: 0.3.0
TS Home: /home/venv/lib/python3.6/site-packages
Current directory: /home/model-server
Temp directory: /home/model-server/tmp
Number of GPUs: 2
Number of CPUs: 24
Max heap size: 30688 M
Python executable: /home/venv/bin/python3
Config file: config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: /home/model-server/model-store
Initial Models: model-store/fallDown.mar
Log dir: /home/model-server/logs
Metrics dir: /home/model-server/logs
Netty threads: 32
Netty client threads: 0
Default workers per model: 2
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
2021-03-16 02:37:49,545 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: model-store/fallDown.mar
2021-03-16 02:37:50,042 [INFO ] main org.pytorch.serve.archive.ModelArchive - eTag 040e4e47c1da44c290d18ac9fe5c0b62
2021-03-16 02:37:50,058 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 0.1 for model fallDown
2021-03-16 02:37:50,058 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 0.1 for model fallDown
2021-03-16 02:37:50,058 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model fallDown loaded.
2021-03-16 02:37:50,058 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: fallDown, count: 2
2021-03-16 02:37:50,075 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: EpollServerSocketChannel.
2021-03-16 02:37:50,166 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2021-03-16 02:37:50,166 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9001
2021-03-16 02:37:50,167 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: EpollServerSocketChannel.
2021-03-16 02:37:50,168 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]9127
2021-03-16 02:37:50,168 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-03-16 02:37:50,168 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.9
2021-03-16 02:37:50,169 [DEBUG] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - W-9001-fallDown_0.1 State change null -> WORKER_STARTED
2021-03-16 02:37:50,170 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2021-03-16 02:37:50,170 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: EpollServerSocketChannel.
2021-03-16 02:37:50,171 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2021-03-16 02:37:50,176 [INFO ] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9001
2021-03-16 02:37:50,191 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9001.
2021-03-16 02:37:50,195 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: /home/model-server/tmp/.ts.sock.9000
2021-03-16 02:37:50,196 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]9128
2021-03-16 02:37:50,197 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-03-16 02:37:50,198 [DEBUG] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - W-9000-fallDown_0.1 State change null -> WORKER_STARTED
2021-03-16 02:37:50,198 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.6.9
2021-03-16 02:37:50,199 [INFO ] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Connecting to: /home/model-server/tmp/.ts.sock.9000
2021-03-16 02:37:50,203 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: /home/model-server/tmp/.ts.sock.9000.
Model server started.
2021-03-16 02:37:50,432 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,440 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:428.0089874267578|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,441 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:71.74675369262695|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,441 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:14.4|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,442 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:117864.27734375|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,442 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:9002.609375|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,443 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:8.2|#Level:Host|#hostname:aec403759fcc,timestamp:1615862270
2021-03-16 02:37:50,996 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-03-16 02:37:50,996 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-03-16 02:37:50,997 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-03-16 02:37:50,997 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/model_service_worker.py", line 182, in <module>
2021-03-16 02:37:50,997 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-03-16 02:37:50,998 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/model_service_worker.py", line 154, in run_server
2021-03-16 02:37:50,998 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-03-16 02:37:50,998 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/model_service_worker.py", line 116, in handle_connection
2021-03-16 02:37:50,999 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-03-16 02:37:50,999 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/model_service_worker.py", line 89, in load_model
2021-03-16 02:37:50,999 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-03-16 02:37:51,000 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/model_loader.py", line 104, in load
2021-03-16 02:37:51,000 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-03-16 02:37:51,001 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/torch_handler/base_handler.py", line 79, in initialize
2021-03-16 02:37:51,001 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.mapping = load_label_mapping(mapping_file_path)
2021-03-16 02:37:51,002 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/home/venv/lib/python3.6/site-packages/ts/utils/util.py", line 40, in load_label_mapping
2021-03-16 02:37:51,002 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - mapping = json.load(f)
2021-03-16 02:37:51,003 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/usr/lib/python3.6/json/__init__.py", line 299, in load
2021-03-16 02:37:51,003 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
2021-03-16 02:37:51,003 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
2021-03-16 02:37:51,003 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - return _default_decoder.decode(s)
2021-03-16 02:37:51,004 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/usr/lib/python3.6/json/decoder.py", line 339, in decode
2021-03-16 02:37:51,004 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2021-03-16 02:37:51,005 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "/usr/lib/python3.6/json/decoder.py", line 355, in raw_decode
2021-03-16 02:37:51,006 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - obj, end = self.scan_once(s, idx)
2021-03-16 02:37:51,006 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 1 (char 16)
(W-9000 dies at the same moment with an identical traceback; its lines interleaved with the above have been deduplicated.)
2021-03-16 02:37:51,004 [INFO ] epollEventLoopGroup-5-2 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2021-03-16 02:37:51,004 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_STARTED
2021-03-16 02:37:51,005 [DEBUG] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-03-16 02:37:51,005 [DEBUG] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-03-16 02:37:51,006 [DEBUG] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-03-16 02:37:51,006 [DEBUG] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
(stack trace identical to the one above)
2021-03-16 02:37:51,009 [WARN ] W-9001-fallDown_0.1 org.pytorch.serve.wlm.BatchAggregator - Load model failed: fallDown, error: Worker died.
2021-03-16 02:37:51,007 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 1 (char 16)
2021-03-16 02:37:51,010 [DEBUG] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - W-9001-fallDown_0.1 State change WORKER_STARTED -> WORKER_STOPPED
2021-03-16 02:37:51,010 [WARN ] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-fallDown_0.1-stderr
2021-03-16 02:37:51,011 [WARN ] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-fallDown_0.1-stdout
2021-03-16 02:37:51,010 [WARN ] W-9000-fallDown_0.1 org.pytorch.serve.wlm.BatchAggregator - Load model failed: fallDown, error: Worker died.
2021-03-16 02:37:51,011 [DEBUG] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - W-9000-fallDown_0.1 State change WORKER_STARTED -> WORKER_STOPPED
2021-03-16 02:37:51,011 [WARN ] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-fallDown_0.1-stderr
2021-03-16 02:37:51,012 [WARN ] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-fallDown_0.1-stdout
2021-03-16 02:37:51,013 [INFO ] W-9000-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2021-03-16 02:37:51,013 [INFO ] W-9001-fallDown_0.1 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9001 in 1 seconds.
2021-03-16 02:37:51,022 [INFO ] W-9001-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-fallDown_0.1-stdout
2021-03-16 02:37:51,022 [INFO ] W-9001-fallDown_0.1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-fallDown_0.1-stderr
2021-03-16 02:37:51,027 [INFO ] W-9000-fallDown_0.1-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-fallDown_0.1-stderr
2021-03-16 02:37:51,027 [INFO ] W-9000-fallDown_0.1-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-fallDown_0.1-stdout
(One second later both workers are restarted, die again with the identical json.decoder.JSONDecodeError traceback, and are scheduled for another retry; the restart/crash cycle then repeats.)
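The repeated JSONDecodeError points at index_to_name.json, not at the model itself: base_handler.initialize() parses the label map with Python's strict json parser, which rejects trailing commas, single-quoted keys, and comments. A minimal reproduction of the failure and the fix (the label names here are assumptions):

```python
import json

# Hypothetical contents for index_to_name.json; the real class names are
# not shown in the log, only the JSON syntax matters here.
bad = '{\n    "0": "fall",\n}'                       # trailing comma -> invalid JSON
good = '{\n    "0": "fall",\n    "1": "normal"\n}'   # strict JSON, parses fine

try:
    json.loads(bad)
except json.JSONDecodeError as e:
    # Same failure the workers hit in load_label_mapping()
    print(e.msg)            # Expecting property name enclosed in double quotes

print(json.loads(good))     # {'0': 'fall', '1': 'normal'}
```

Running the file through `python -m json.tool index_to_name.json` before re-archiving with torch-model-archiver catches this class of error immediately; once the file is valid JSON, rebuild the .mar and restart torchserve.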