• JimXu1989

    @刘看山 May I ask which version of TensorRT you were using when it worked for you?

    Posted in Community Help (SOS!!)
  • JimXu1989

    @刘看山 Could you tell me which versions of PyTorch, CUDA, cuDNN, and TensorRT you are using? I'll try again.

    Posted in Community Help (SOS!!)
  • JimXu1989

    @刘看山 said in "Error converting the exported SOLOv2 ONNX model to TensorRT, please take a look, thanks!":

    Try the onnx-tensorrt tool.

    Hi, I tried the tool you suggested, and the error is about the same:
    onnx2trt solov2.onnx -o solov2.trt
    Input filename: solov2.onnx
    ONNX IR version: 0.0.7
    Opset version: 11
    Producer name: pytorch
    Producer version: 1.10
    Domain:
    Model version: 0
    Doc string:
    Parsing model
    [2022-05-07 07:05:49 WARNING] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
    While parsing node number 283 [Range -> "966"]:
    ERROR: builtin_op_importers.cpp:3350 In function importRange:
    [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
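
    One workaround people use for this parser limitation is to patch the exported ONNX so every Range node consumes INT32 inputs. Below is a minimal sketch with the onnx Python package; it assumes the offending Range inputs are INT64 scalars, and the output filename is made up:

    import onnx
    from onnx import helper, TensorProto

    model = onnx.load("solov2.onnx")
    nodes = model.graph.node
    # Walk backwards so inserting Cast nodes does not shift unvisited indices.
    for pos in range(len(nodes) - 1, -1, -1):
        node = nodes[pos]
        if node.op_type != "Range":
            continue
        for idx in range(len(node.input)):
            src = node.input[idx]
            cast_out = f"{src}_i32_{idx}"
            cast = helper.make_node("Cast", inputs=[src], outputs=[cast_out],
                                    to=TensorProto.INT32)
            nodes.insert(pos, cast)  # before the Range, keeping topological order
            node.input[idx] = cast_out
    onnx.save(model, "solov2_range_i32.onnx")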

    Posted in Community Help (SOS!!)
  • JimXu1989

    Using TensorRT 8.2, CUDA 11.1, PyTorch 1.10.1, OpenCV 4.5.5

    Posted in Community Help (SOS!!)
  • JimXu1989

    I used this command:
    trtexec --onnx=solov2.onnx --saveEngine=solov2.trt
    The error portion:
    [05/06/2022-15:38:59] [E] [TRT] ModelImporter.cpp:776: --- End node ---
    [05/06/2022-15:38:59] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange:
    [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
    [05/06/2022-15:38:59] [E] Failed to parse onnx file
    [05/06/2022-15:38:59] [I] Finish parsing network model
    [05/06/2022-15:38:59] [E] Parsing model failed
    [05/06/2022-15:38:59] [E] Failed to create engine from model.
    [05/06/2022-15:38:59] [E] Engine set up failed
    &&&& FAILED TensorRT.trtexec [TensorRT v8204] # trtexec --onnx=solov2.onnx --saveEngine=solov2.trt
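
    This is the same importRange assertion as with onnx2trt above. To check whether a patched model clears the parser without building a full engine, the TensorRT Python API can list per-node parse errors; a small sketch, assuming the patched file from the earlier snippet:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # The ONNX parser requires an explicit-batch network in TensorRT 8.x.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("solov2_range_i32.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))  # names the node that failed to import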

    Posted in Community Help (SOS!!)
  • JimXu1989

    Hello, I used this command:
    python3 demo/export.py --config-file configs/SOLOv2/wood_dataset/R101_3x.yaml --video-input 0.mp4 --opts MODEL.WEIGHTS tools/output/model_final.pth

    Re: SOLOv2转ONNX教程 (SOLOv2-to-ONNX tutorial)
    [05/05 17:31:02 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='configs/SOLOv2/wood_dataset/R101_3x.yaml', input=None, opts=['MODEL.WEIGHTS', 'tools/output/model_final.pth'], output=None, video_input='0.mp4', webcam=False)
    [05/05 17:31:05 fvcore.common.checkpoint]: [Checkpointer] Loading from tools/output/model_final.pth ...
    [05/05 17:31:06 fvcore.common.checkpoint]: [Checkpointer] Loading from tools/output/model_final.pth ...
    /home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py:152: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    images = [x["image"].to(self.device) for x in batched_inputs]
    Traceback (most recent call last):
    File "demo/export.py", line 153, in
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/init.py", line 275, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 689, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 458, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 422, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 373, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1160, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1039, in _slow_forward
    result = self.forward(*input, **kwargs)
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 108, in forward
    images = self.preprocess_image(batched_inputs)
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in preprocess_image
    images = [x["image"].to(self.device) for x in batched_inputs]
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in
    images = [x["image"].to(self.device) for x in batched_inputs]
    IndexError: too many indices for tensor of dimension 3
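
    From the traceback, export.py appears to trace the model with a bare image tensor, while the detectron2-style forward() expects batched_inputs to be a list of {"image": tensor} dicts, which is what preprocess_image trips over. A hedged sketch of one way to bridge that at export time; ExportWrapper and the dummy shape are my own inventions, not code from export.py:

    import torch

    class ExportWrapper(torch.nn.Module):
        # Feeds a plain tensor to a model that expects list-of-dict inputs.
        def __init__(self, model):
            super().__init__()
            self.model = model

        def forward(self, image):  # image: (3, H, W) float tensor
            return self.model([{"image": image}])

    # dummy = torch.randn(3, 704, 736)
    # torch.onnx.export(ExportWrapper(model), dummy, "solov2.onnx", opset_version=11)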

    Posted in Original Shares
  • JimXu1989

    I read the markdown and tried the following:
    python onnx/export_model_to_onnx.py --config-file configs/SOLOv2/wood_dataset/R101_3x.yaml --output R101_3x.onnx --opts MODEL.WEIGHTS tools/output/model_final.pth
    But it reported an error:
    [05/05 17:09:44 detectron2]: load Model:
    tools/output/model_final.pth
    /home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py:152: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    images = [x["image"].to(self.device) for x in batched_inputs]
    Traceback (most recent call last):
    File "onnx/export_model_to_onnx.py", line 226, in
    main()
    File "onnx/export_model_to_onnx.py", line 212, in main
    torch.onnx.export(
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/init.py", line 275, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 689, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 458, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 422, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
    File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 373, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1160, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
    File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
    File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1039, in _slow_forward
    result = self.forward(*input, **kwargs)
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 108, in forward
    images = self.preprocess_image(batched_inputs)
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in preprocess_image
    images = [x["image"].to(self.device) for x in batched_inputs]
    File "/home/xss/Software/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in
    images = [x["image"].to(self.device) for x in batched_inputs]
    IndexError: too many indices for tensor of dimension 3

    Posted in Community Help (SOS!!)
  • JimXu1989

    @刘看山
    Hello, I installed CUDA 11.3 and PyTorch 1.10; when training solov2_d2 I get this error:

    [04/29 13:43:52 d2.data.common]: Serialized dataset takes 0.71 MiB
    WARNING [04/29 13:43:52 d2.solver.build]: SOLVER.STEPS contains values larger than SOLVER.MAX_ITER. These values will be ignored.
    /usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
    setattr(self, word, getattr(machar, word).flat[0])
    /usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float32'> type is zero.
    return self._float_to_str(self.smallest_subnormal)
    /usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:499: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
    setattr(self, word, getattr(machar, word).flat[0])
    /usr/local/lib/python3.8/dist-packages/numpy/core/getlimits.py:89: UserWarning: The value of the smallest subnormal for <class 'numpy.float64'> type is zero.
    return self._float_to_str(self.smallest_subnormal)
    [04/29 13:43:54 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://ImageNetPretrained/MSRA/R-101.pkl ...
    [04/29 13:43:54 d2.checkpoint.c2_model_loading]: Renaming Caffe2 weights ......
    [04/29 13:43:54 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone.bottom_up:

    Names in Model Names in Checkpoint Shapes
    res2.0.conv1.* res2_0_branch2a_{bn_*,w} (64,) (64,) (64,) (64,) (64,64,1,1)
    res2.0.conv2.* res2_0_branch2b_{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
    res2.0.conv3.* res2_0_branch2c_{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
    res2.0.shortcut.* res2_0_branch1_{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
    res2.1.conv1.* res2_1_branch2a_{bn_*,w} (64,) (64,) (64,) (64,) (64,256,1,1)
    res2.1.conv2.* res2_1_branch2b_{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
    res2.1.conv3.* res2_1_branch2c_{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
    res2.2.conv1.* res2_2_branch2a_{bn_*,w} (64,) (64,) (64,) (64,) (64,256,1,1)
    res2.2.conv2.* res2_2_branch2b_{bn_*,w} (64,) (64,) (64,) (64,) (64,64,3,3)
    res2.2.conv3.* res2_2_branch2c_{bn_*,w} (256,) (256,) (256,) (256,) (256,64,1,1)
    res3.0.conv1.* res3_0_branch2a_{bn_*,w} (128,) (128,) (128,) (128,) (128,256,1,1)
    res3.0.conv2.* res3_0_branch2b_{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
    res3.0.conv3.* res3_0_branch2c_{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
    res3.0.shortcut.* res3_0_branch1_{bn_*,w} (512,) (512,) (512,) (512,) (512,256,1,1)
    res3.1.conv1.* res3_1_branch2a_{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
    res3.1.conv2.* res3_1_branch2b_{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
    res3.1.conv3.* res3_1_branch2c_{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
    res3.2.conv1.* res3_2_branch2a_{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
    res3.2.conv2.* res3_2_branch2b_{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
    res3.2.conv3.* res3_2_branch2c_{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
    res3.3.conv1.* res3_3_branch2a_{bn_*,w} (128,) (128,) (128,) (128,) (128,512,1,1)
    res3.3.conv2.* res3_3_branch2b_{bn_*,w} (128,) (128,) (128,) (128,) (128,128,3,3)
    res3.3.conv3.* res3_3_branch2c_{bn_*,w} (512,) (512,) (512,) (512,) (512,128,1,1)
    res4.0.conv1.* res4_0_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,512,1,1)
    res4.0.conv2.* res4_0_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.0.conv3.* res4_0_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.0.shortcut.* res4_0_branch1_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,512,1,1)
    res4.1.conv1.* res4_1_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.1.conv2.* res4_1_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.1.conv3.* res4_1_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.10.conv1.* res4_10_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.10.conv2.* res4_10_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.10.conv3.* res4_10_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.11.conv1.* res4_11_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.11.conv2.* res4_11_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.11.conv3.* res4_11_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.12.conv1.* res4_12_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.12.conv2.* res4_12_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.12.conv3.* res4_12_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.13.conv1.* res4_13_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.13.conv2.* res4_13_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.13.conv3.* res4_13_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.14.conv1.* res4_14_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.14.conv2.* res4_14_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.14.conv3.* res4_14_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.15.conv1.* res4_15_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.15.conv2.* res4_15_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.15.conv3.* res4_15_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.16.conv1.* res4_16_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.16.conv2.* res4_16_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.16.conv3.* res4_16_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.17.conv1.* res4_17_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.17.conv2.* res4_17_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.17.conv3.* res4_17_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.18.conv1.* res4_18_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.18.conv2.* res4_18_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.18.conv3.* res4_18_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.19.conv1.* res4_19_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.19.conv2.* res4_19_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.19.conv3.* res4_19_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.2.conv1.* res4_2_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.2.conv2.* res4_2_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.2.conv3.* res4_2_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.20.conv1.* res4_20_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.20.conv2.* res4_20_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.20.conv3.* res4_20_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.21.conv1.* res4_21_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.21.conv2.* res4_21_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.21.conv3.* res4_21_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.22.conv1.* res4_22_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.22.conv2.* res4_22_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.22.conv3.* res4_22_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.3.conv1.* res4_3_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.3.conv2.* res4_3_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.3.conv3.* res4_3_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.4.conv1.* res4_4_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.4.conv2.* res4_4_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.4.conv3.* res4_4_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.5.conv1.* res4_5_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.5.conv2.* res4_5_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.5.conv3.* res4_5_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.6.conv1.* res4_6_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.6.conv2.* res4_6_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.6.conv3.* res4_6_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.7.conv1.* res4_7_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.7.conv2.* res4_7_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.7.conv3.* res4_7_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.8.conv1.* res4_8_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.8.conv2.* res4_8_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.8.conv3.* res4_8_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res4.9.conv1.* res4_9_branch2a_{bn_*,w} (256,) (256,) (256,) (256,) (256,1024,1,1)
    res4.9.conv2.* res4_9_branch2b_{bn_*,w} (256,) (256,) (256,) (256,) (256,256,3,3)
    res4.9.conv3.* res4_9_branch2c_{bn_*,w} (1024,) (1024,) (1024,) (1024,) (1024,256,1,1)
    res5.0.conv1.* res5_0_branch2a_{bn_*,w} (512,) (512,) (512,) (512,) (512,1024,1,1)
    res5.0.conv2.* res5_0_branch2b_{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
    res5.0.conv3.* res5_0_branch2c_{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
    res5.0.shortcut.* res5_0_branch1_{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,1024,1,1)
    res5.1.conv1.* res5_1_branch2a_{bn_*,w} (512,) (512,) (512,) (512,) (512,2048,1,1)
    res5.1.conv2.* res5_1_branch2b_{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
    res5.1.conv3.* res5_1_branch2c_{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
    res5.2.conv1.* res5_2_branch2a_{bn_*,w} (512,) (512,) (512,) (512,) (512,2048,1,1)
    res5.2.conv2.* res5_2_branch2b_{bn_*,w} (512,) (512,) (512,) (512,) (512,512,3,3)
    res5.2.conv3.* res5_2_branch2c_{bn_*,w} (2048,) (2048,) (2048,) (2048,) (2048,512,1,1)
    stem.conv1.norm.* res_conv1_bn_* (64,) (64,) (64,) (64,)
    stem.conv1.weight conv1_w (64, 3, 7, 7)

    WARNING [04/29 13:43:54 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
    backbone.fpn_lateral2.{bias, weight}
    backbone.fpn_lateral3.{bias, weight}
    backbone.fpn_lateral4.{bias, weight}
    backbone.fpn_lateral5.{bias, weight}
    backbone.fpn_output2.{bias, weight}
    backbone.fpn_output3.{bias, weight}
    backbone.fpn_output4.{bias, weight}
    backbone.fpn_output5.{bias, weight}
    ins_head.cate_pred.{bias, weight}
    ins_head.cate_tower.0.weight
    ins_head.cate_tower.2.weight
    ins_head.kernel_pred.{bias, weight}
    ins_head.kernel_tower.0.weight
    ins_head.kernel_tower.2.weight
    mask_head.conv_pred.0.weight
    mask_head.conv_pred.1.{bias, weight}
    mask_head.convs_all_levels.0.conv0.0.weight
    mask_head.convs_all_levels.1.conv0.0.weight
    mask_head.convs_all_levels.2.conv0.0.weight
    mask_head.convs_all_levels.2.conv1.0.weight
    mask_head.convs_all_levels.3.conv0.0.weight
    mask_head.convs_all_levels.3.conv1.0.weight
    mask_head.convs_all_levels.3.conv2.0.weight
    WARNING [04/29 13:43:54 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
    fc1000.{bias, weight}
    [04/29 13:43:54 adet.trainer]: Starting training from iteration 0
    /usr/local/lib/python3.8/dist-packages/detectron2/structures/image_list.py:88: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    max_size = (max_size + (stride - 1)) // stride * stride
    /usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
    warnings.warn(
    /usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:3679: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
    warnings.warn(
    /usr/local/lib/python3.8/dist-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
    return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:279: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    (center_w / upsampled_size[1]) // (1. / num_grid))
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:281: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    (center_h / upsampled_size[0]) // (1. / num_grid))
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:285: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    0, int(((center_h - half_h) / upsampled_size[0]) // (1. / num_grid)))
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:287: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    num_grid - 1, int(((center_h + half_h) / upsampled_size[0]) // (1. / num_grid)))
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:289: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    0, int(((center_w - half_w) / upsampled_size[1]) // (1. / num_grid)))
    /home/xss/Projects/wood/solov2_d2/adet/modeling/solov2/solov2.py:291: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
    num_grid - 1, int(((center_w + half_w) / upsampled_size[1]) // (1. / num_grid)))
    Traceback (most recent call last):
    File "/home/xss/Projects/wood/solov2_d2/tools/train_wood.py", line 250, in
    launch(
    File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/launch.py", line 82, in launch
    main_func(*args)
    File "/home/xss/Projects/wood/solov2_d2/tools/train_wood.py", line 244, in main
    return trainer.train()
    File "/home/xss/Projects/wood/solov2_d2/tools/train_wood.py", line 124, in train
    self.train_loop(self.start_iter, self.max_iter)
    File "/home/xss/Projects/wood/solov2_d2/tools/train_wood.py", line 113, in train_loop
    self.run_step()
    File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
    File "/usr/local/lib/python3.8/dist-packages/detectron2/engine/train_loop.py", line 285, in run_step
    losses.backward()
    File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    File "/usr/local/lib/python3.8/dist-packages/torch/autograd/init.py", line 154, in backward
    Variable._execution_engine.run_backward(
    RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 128, 184, 232]], which is output 0 of ReluBackward0, is at version 3; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

    Process finished with exit code 1
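
    The message's own hint is the usual first step: enable anomaly detection to find the failing op, and if an in-place ReLU is implicated (which "output 0 of ReluBackward0 ... is at version 3" suggests), switch it out-of-place. A sketch, with the helper name being my own:

    import torch
    import torch.nn as nn

    # Locate the op that modifies the activation autograd saved for backward.
    torch.autograd.set_detect_anomaly(True)

    def deinplace_relus(module: nn.Module) -> None:
        # Recursively replace inplace=True ReLUs with out-of-place ones.
        for name, child in module.named_children():
            if isinstance(child, nn.ReLU) and child.inplace:
                setattr(module, name, nn.ReLU(inplace=False))
            else:
                deinplace_relus(child)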

    Posted in Community Help (SOS!!)
  • JimXu1989

    @刘看山 How can I completely uninstall the original SOLOv2?

    Posted in Original Shares
  • JimXu1989

    @刘看山
    The 神力 example uses ResNet-50; I changed it to ResNet-101 and it still trains fine, but exporting the model with export.py fails:
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:628: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    assert not torch.isnan(seg_preds).any(), 'seg_preds contains nan'
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:656: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh = torch.tensor(seg_preds.shape)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:658: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh_kernel = torch.tensor(kernel_preds.shape)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:666: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_preds > torch.tensor(self.mask_threshold).float()
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:678: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
    if len(sort_inds) > self.max_before_nms:
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/utils.py:161: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
    n_samples = len(cate_labels)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:689: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    keep = cate_scores >= torch.tensor(
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:745: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    size=(max(int(ori_h * 0.6), 736), max(int(ori_w * 0.6), 992)),
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:747: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_masks > torch.tensor(self.mask_threshold).float()
    Traceback (most recent call last):
    File "/home/jim/project/wood/solov2_d2/demo/export.py", line 153, in
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/init.py", line 275, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 689, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 463, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 200, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/init.py", line 313, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 990, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 944, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 116, in get_registered_op
    raise RuntimeError(msg)
    RuntimeError: Exporting the operator linspace to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

    Process finished with exit code 1
    Please take a look for me, many thanks.
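
    Since torch.linspace has no opset-11 symbolic in this PyTorch version, a common workaround is to rewrite the torch.linspace calls in solov2.py in terms of torch.arange, which does export. A sketch; the function name is mine, and it matches torch.linspace only for steps >= 2:

    import torch

    def linspace_export_friendly(start, end, steps, device=None):
        # Same values as torch.linspace(start, end, steps) for steps >= 2,
        # but built from torch.arange, which has an opset-11 ONNX symbolic.
        step = (end - start) / (steps - 1)
        return start + torch.arange(steps, dtype=torch.float32, device=device) * step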

    Posted in Community Help (SOS!!)
  • JimXu1989

    Running it with PyTorch gives a different error @刘看山

    /usr/bin/python3.8 /home/jim/solov2_d2/demo/export.py --config-file ../configs/SOLOv2/R101_3x.yaml --video-input ../0.mp4 --opts MODEL.WEIGHTS ../output/model_final.pth
    [06/24 16:49:06 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/SOLOv2/R101_3x.yaml', input=None, opts=['MODEL.WEIGHTS', '../output/model_final.pth'], output=None, video_input='../0.mp4', webcam=False)
    INFO 06.24 16:49:07 solov2.py:78: instance_shapes: [ShapeSpec(channels=256, height=None, width=None, stride=4), ShapeSpec(channels=256, height=None, width=None, stride=8), ShapeSpec(channels=256, height=None, width=None, stride=16), ShapeSpec(channels=256, height=None, width=None, stride=32), ShapeSpec(channels=256, height=None, width=None, stride=64)]
    [06/24 16:49:15 fvcore.common.checkpoint]: [Checkpointer] Loading from ../output/model_final.pth ...
    INFO 06.24 16:49:17 solov2.py:78: instance_shapes: [ShapeSpec(channels=256, height=None, width=None, stride=4), ShapeSpec(channels=256, height=None, width=None, stride=8), ShapeSpec(channels=256, height=None, width=None, stride=16), ShapeSpec(channels=256, height=None, width=None, stride=32), ShapeSpec(channels=256, height=None, width=None, stride=64)]
    [06/24 16:49:18 fvcore.common.checkpoint]: [Checkpointer] Loading from ../output/model_final.pth ...
    [WARN] exporting onnx...
    batched_inputs: torch.Size([1, 3, 704, 736])
    /home/jim/.local/lib/python3.8/site-packages/torch/nn/functional.py:3454: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
    warnings.warn(
    /home/jim/.local/lib/python3.8/site-packages/torch/nn/functional.py:3502: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
    warnings.warn(
    pred_masks /home/jim/.local/lib/python3.8/site-packages/torch/tensor.py:587: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    warnings.warn('Iterating over a tensor might cause the trace to be incorrect. '
    tensor([[[[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 3.9518e+00,
    3.7653e+00, 3.8083e+00],
    [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 4.5552e+00,
    4.3284e+00, 4.1046e+00],
    [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 4.7800e+00,
    4.6571e+00, 4.3812e+00],
    ...,
    [1.1723e-01, 1.5662e-01, 1.9572e-01, ..., 1.9446e-01,
    1.1960e-01, 5.0942e-02],
    [1.0176e-01, 1.4805e-01, 1.7128e-01, ..., 1.8052e-01,
    1.1186e-01, 5.3612e-02],
    [9.6826e-02, 9.2554e-02, 9.1506e-02, ..., 9.5873e-02,
    6.4562e-02, 5.5044e-02]],

         [[0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          ...,
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 3.7478e-02,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 4.8736e-02,
           7.6189e-03, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 7.7488e-02,
           5.2618e-02, 2.4377e-02]],
    
         [[0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          ...,
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00]],
    
         ...,
    
         [[0.0000e+00, 0.0000e+00, 2.5016e-02,  ..., 2.8171e+01,
           2.9437e+01, 2.7834e+01],
          [0.0000e+00, 4.3195e-02, 8.7966e-02,  ..., 2.8479e+01,
           3.1145e+01, 3.0990e+01],
          [0.0000e+00, 5.1823e-02, 8.0432e-02,  ..., 2.8781e+01,
           3.2091e+01, 3.2862e+01],
          ...,
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 1.4385e-01,
           1.4950e-01, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 5.5557e-02,
           8.0213e-02, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00]],
    
         [[0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 2.4763e-02, 6.4760e-02,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 4.8006e-02, 4.6316e-02,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          ...,
          [8.3648e-03, 5.4830e-02, 1.3018e-01,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 7.6539e-03, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00]],
    
         [[4.6343e-02, 1.1665e-01, 1.7521e-01,  ..., 3.3670e+01,
           3.3471e+01, 2.9798e+01],
          [7.0594e-02, 1.5725e-01, 2.2316e-01,  ..., 3.2300e+01,
           3.3655e+01, 3.1853e+01],
          [8.3667e-02, 1.7335e-01, 2.1356e-01,  ..., 3.2309e+01,
           3.4434e+01, 3.3425e+01],
          ...,
          [0.0000e+00, 1.1504e-02, 3.4751e-02,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00],
          [0.0000e+00, 0.0000e+00, 0.0000e+00,  ..., 0.0000e+00,
           0.0000e+00, 0.0000e+00]]]], device='cuda:0') torch.Size([1, 256, 176, 184])
    

    tensor(False, device='cuda:0')
    tensor(False, device='cuda:0')
    pred cate: torch.Size([3872, 80])
    pred kernel: torch.Size([3872, 256])
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:628: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    assert not torch.isnan(seg_preds).any(), 'seg_preds contains nan'
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:656: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh = torch.tensor(seg_preds.shape)
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:658: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh_kernel = torch.tensor(kernel_preds.shape)
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:666: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_preds > torch.tensor(self.mask_threshold).float()
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:689: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    keep = cate_scores >= torch.tensor(
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:745: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    size=(max(int(ori_h * 0.6), 736), max(int(ori_w * 0.6), 992)),
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:747: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_masks > torch.tensor(self.mask_threshold).float()
    Traceback (most recent call last):
    File "/home/jim/solov2_d2/demo/export.py", line 153, in
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/init.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 463, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 206, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/init.py", line 309, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 993, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 950, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 116, in get_registered_op
    raise RuntimeError(msg)
    RuntimeError: Exporting the operator linspace to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

    Process finished with exit code 1

    Posted in Original Shares
  • JimXu1989

    @JimXu1989 said in SOLOv2转ONNX教程 (SOLOv2-to-ONNX tutorial):

    Hi,
    Under solov2_d2 there is no export_onnx.py, only export.py, and running it reports the following error:
    (The quoted command and traceback are identical to the original post further down this feed, ending with: IndexError: too many indices for tensor of dimension 3)

    @刘看山

    Posted in Original Shares
  • JimXu1989

    My torch version is '1.8.1+cu111'

    Posted in Original Shares
  • JimXu1989

    Hi,
    Under solov2_d2 there is no export_onnx.py, only export.py, and running it reports the following error:

    jim@jim-AERO-15-X9:~/solov2_d2$ python3 demo/export.py --config-file configs/SOLOv2/R101_3x.yaml --video-input 0.mp4 --opts MODEL.WEIGHTS ./output/model_final.pth
    [06/24 16:24:40 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='configs/SOLOv2/R101_3x.yaml', input=None, opts=['MODEL.WEIGHTS', './output/model_final.pth'], output=None, video_input='0.mp4', webcam=False)
    [06/24 16:24:44 fvcore.common.checkpoint]: [Checkpointer] Loading from ./output/model_final.pth ...
    [06/24 16:24:45 fvcore.common.checkpoint]: [Checkpointer] Loading from ./output/model_final.pth ...
    /home/jim/.local/lib/python3.8/site-packages/torch/tensor.py:587: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    warnings.warn('Iterating over a tensor might cause the trace to be incorrect. '
    Traceback (most recent call last):
    File "demo/export.py", line 153, in
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/init.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 457, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 420, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 380, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1139, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 125, in forward
    graph, out = torch._C._create_graph_by_tracing(
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 116, in wrapper
    outs.append(self.inner(*trace_inputs))
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
    File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 108, in forward
    images = self.preprocess_image(batched_inputs)
    File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in preprocess_image
    images = [x["image"].to(self.device) for x in batched_inputs]
    File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in
    images = [x["image"].to(self.device) for x in batched_inputs]
    IndexError: too many indices for tensor of dimension 3

    Posted in Original Shares
