• JimXu1989

@刘看山 How do I completely uninstall the original SOLOv2?

Posted in 原创分享专区
  • JimXu1989

    @刘看山
The 神力 example uses ResNet-50; I changed it to 101 and training still works fine, but when converting the model with export.py I got an error:
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:628: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    assert not torch.isnan(seg_preds).any(), 'seg_preds contains nan'
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:656: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh = torch.tensor(seg_preds.shape)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:658: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh_kernel = torch.tensor(kernel_preds.shape)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:666: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_preds > torch.tensor(self.mask_threshold).float()
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:678: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
    if len(sort_inds) > self.max_before_nms:
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/utils.py:161: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
    n_samples = len(cate_labels)
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:689: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    keep = cate_scores >= torch.tensor(
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:745: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
size=(max(int(ori_h * 0.6), 736), max(int(ori_w * 0.6), 992)),
    /home/jim/project/wood/solov2_d2/adet/modeling/solov2/solov2.py:747: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_masks > torch.tensor(self.mask_threshold).float()
    Traceback (most recent call last):
File "/home/jim/project/wood/solov2_d2/demo/export.py", line 153, in <module>
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 275, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 689, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 463, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 200, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 313, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 990, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 944, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 116, in get_registered_op
    raise RuntimeError(msg)
    RuntimeError: Exporting the operator linspace to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

    Process finished with exit code 1
Please take a look for me, thanks a lot.
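A workaround others use for this export failure is to rebuild the evenly spaced values from torch.arange, which does have an ONNX symbolic at opset 11. This is a sketch, not code from this thread; the helper name `linspace_onnx` and the `steps >= 2` assumption are mine:

```python
import torch

def linspace_onnx(start: float, end: float, steps: int,
                  dtype=torch.float32, device=None):
    # torch.linspace has no ONNX symbolic at opset 11, but torch.arange
    # does, so build the same evenly spaced values from arange instead.
    # Assumes steps >= 2 (torch.linspace with steps == 1 returns [start]).
    step = (end - start) / (steps - 1)
    return start + torch.arange(steps, dtype=dtype, device=device) * step
```

Swapping this in for the torch.linspace calls inside the SOLOv2 head should let the trace get past this operator; dtype/device handling here is deliberately simplified.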

Posted in 社区求助区 (SOS!!)
  • JimXu1989

Running it with PyTorch throws a different error. @刘看山

    /usr/bin/python3.8 /home/jim/solov2_d2/demo/export.py --config-file ../configs/SOLOv2/R101_3x.yaml --video-input ../0.mp4 --opts MODEL.WEIGHTS ../output/model_final.pth
    [06/24 16:49:06 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='../configs/SOLOv2/R101_3x.yaml', input=None, opts=['MODEL.WEIGHTS', '../output/model_final.pth'], output=None, video_input='../0.mp4', webcam=False)
    INFO 06.24 16:49:07 solov2.py:78: instance_shapes: [ShapeSpec(channels=256, height=None, width=None, stride=4), ShapeSpec(channels=256, height=None, width=None, stride=8), ShapeSpec(channels=256, height=None, width=None, stride=16), ShapeSpec(channels=256, height=None, width=None, stride=32), ShapeSpec(channels=256, height=None, width=None, stride=64)]
    [06/24 16:49:15 fvcore.common.checkpoint]: [Checkpointer] Loading from ../output/model_final.pth ...
    INFO 06.24 16:49:17 solov2.py:78: instance_shapes: [ShapeSpec(channels=256, height=None, width=None, stride=4), ShapeSpec(channels=256, height=None, width=None, stride=8), ShapeSpec(channels=256, height=None, width=None, stride=16), ShapeSpec(channels=256, height=None, width=None, stride=32), ShapeSpec(channels=256, height=None, width=None, stride=64)]
    [06/24 16:49:18 fvcore.common.checkpoint]: [Checkpointer] Loading from ../output/model_final.pth ...
    [WARN] exporting onnx...
    batched_inputs: torch.Size([1, 3, 704, 736])
    /home/jim/.local/lib/python3.8/site-packages/torch/nn/functional.py:3454: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
    warnings.warn(
    /home/jim/.local/lib/python3.8/site-packages/torch/nn/functional.py:3502: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
    warnings.warn(
    pred_masks /home/jim/.local/lib/python3.8/site-packages/torch/tensor.py:587: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    warnings.warn('Iterating over a tensor might cause the trace to be incorrect. '
tensor([[[[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 3.9518e+00,
    3.7653e+00, 3.8083e+00],
    [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 4.5552e+00,
    4.3284e+00, 4.1046e+00],
    ...,
    [0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00,
    0.0000e+00, 0.0000e+00]]]], device='cuda:0') torch.Size([1, 256, 176, 184])
    

    tensor(False, device='cuda:0')
    tensor(False, device='cuda:0')
    pred cate: torch.Size([3872, 80])
    pred kernel: torch.Size([3872, 256])
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:628: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
    assert not torch.isnan(seg_preds).any(), 'seg_preds contains nan'
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:656: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh = torch.tensor(seg_preds.shape)
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:658: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    sh_kernel = torch.tensor(kernel_preds.shape)
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:666: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_preds > torch.tensor(self.mask_threshold).float()
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:689: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    keep = cate_scores >= torch.tensor(
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:745: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
size=(max(int(ori_h * 0.6), 736), max(int(ori_w * 0.6), 992)),
    /home/jim/solov2_d2/adet/modeling/solov2/solov2.py:747: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
    seg_masks = seg_masks > torch.tensor(self.mask_threshold).float()
    Traceback (most recent call last):
File "/home/jim/solov2_d2/demo/export.py", line 153, in <module>
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 463, in _model_to_graph
    graph = _optimize_graph(graph, operator_export_type,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 206, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 309, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 993, in _run_symbolic_function
    symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 950, in _find_symbolic_in_registry
    return sym_registry.get_registered_op(op_name, domain, opset_version)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/symbolic_registry.py", line 116, in get_registered_op
    raise RuntimeError(msg)
    RuntimeError: Exporting the operator linspace to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

    Process finished with exit code 1
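The TracerWarnings in this log point at patterns the tracer itself tells you how to fix: len(tensor) bakes the size in as a Python int, and wrapping a fixed scalar in torch.tensor(...) just registers an extra trace constant. A minimal sketch of the trace-friendlier idioms; the function and variable names below are illustrative, not taken from solov2.py:

```python
import torch

def threshold_masks(seg_preds: torch.Tensor, mask_threshold: float):
    # Use .shape[0] (or .size(0)) rather than len(tensor), as the
    # TracerWarning recommends, so the size stays part of the trace.
    n_samples = seg_preds.shape[0]
    # Compare against the Python float directly; wrapping it in
    # torch.tensor() only adds a traced constant, the result is the same.
    seg_masks = seg_preds > mask_threshold
    return seg_masks, n_samples
```

These rewrites silence the warnings but do not by themselves fix the export failure, which is the unsupported linspace operator at the end of the traceback.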

Posted in 原创分享专区
  • JimXu1989

@JimXu1989 said in SOLOv2转ONNX教程 (SOLOv2-to-ONNX tutorial):

    Hi
I couldn't find export_onnx.py under solov2_d2, only export.py, and running it raised this error:
    jim@jim-AERO-15-X9:~/solov2_d2$ python3 demo/export.py --config-file configs/SOLOv2/R101_3x.yaml --video-input 0.mp4 --opts MODEL.WEIGHTS ./output/model_final.pth
    [06/24 16:24:40 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='configs/SOLOv2/R101_3x.yaml', input=None, opts=['MODEL.WEIGHTS', './output/model_final.pth'], output=None, video_input='0.mp4', webcam=False)
    [06/24 16:24:44 fvcore.common.checkpoint]: [Checkpointer] Loading from ./output/model_final.pth ...
    [06/24 16:24:45 fvcore.common.checkpoint]: [Checkpointer] Loading from ./output/model_final.pth ...
    /home/jim/.local/lib/python3.8/site-packages/torch/tensor.py:587: RuntimeWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
    warnings.warn('Iterating over a tensor might cause the trace to be incorrect. '
    Traceback (most recent call last):
File "demo/export.py", line 153, in <module>
    torch.onnx.export(model, inp, 'solov2.onnx', output_names={
File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/__init__.py", line 271, in export
    return utils.export(model, args, f, export_params, verbose, training,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 694, in _export
    _model_to_graph(model, args, verbose, input_names,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 457, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args,
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 420, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/onnx/utils.py", line 380, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 1139, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 125, in forward
    graph, out = torch._C._create_graph_by_tracing(
    File "/home/jim/.local/lib/python3.8/site-packages/torch/jit/_trace.py", line 116, in wrapper
    outs.append(self.inner(*trace_inputs))
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 887, in _call_impl
    result = self._slow_forward(*input, **kwargs)
    File "/home/jim/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 860, in _slow_forward
    result = self.forward(*input, **kwargs)
    File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 108, in forward
    images = self.preprocess_image(batched_inputs)
    File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in preprocess_image
    images = [x["image"].to(self.device) for x in batched_inputs]
File "/home/jim/AdelaiDet/adet/modeling/solov2/solov2.py", line 152, in <listcomp>
    images = [x["image"].to(self.device) for x in batched_inputs]
    IndexError: too many indices for tensor of dimension 3

    @刘看山
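This IndexError comes from preprocess_image iterating over a plain tensor instead of the list of {"image": ...} dicts a detectron2-style model expects: each iteration over a 4-D tensor yields a 3-D sub-tensor, and indexing that with "image" fails. One way people trace such models is a small adapter that rebuilds the expected input format. ExportWrapper and the dummy model in the test are hypothetical, not part of solov2_d2:

```python
import torch
import torch.nn as nn

class ExportWrapper(nn.Module):
    """Hypothetical adapter: accepts a raw (C, H, W) image tensor and
    feeds it to a detectron2-style model as [{"image": tensor}], so that
    `[x["image"] for x in batched_inputs]` in preprocess_image works."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, image: torch.Tensor):
        # Rebuild the list-of-dicts input format the model's
        # preprocess_image iterates over.
        return self.model([{"image": image}])
```

With such a wrapper, torch.onnx.export traces ExportWrapper on a bare tensor input while the inner model still sees its usual batched_inputs structure.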

Posted in 原创分享专区
  • JimXu1989

My torch version is '1.8.1+cu111'.

Posted in 原创分享专区
  • JimXu1989

    Hi
I couldn't find export_onnx.py under solov2_d2, only export.py, and running it raised this error:

(same command output, warnings, and "IndexError: too many indices for tensor of dimension 3" traceback as quoted in the post above)

Posted in 原创分享专区
