Converting a SageMaker model (MXNet) to ONNX: infer_shape error


What works

I am working in a SageMaker Jupyter notebook (environment: anaconda3/envs/mxnet_p36/lib/python3.6).

I successfully ran this tutorial: https://github.com/onnx/tutorials/blob/master/tutorials/MXNetONNXExport.ipynb


What does not work

Then, in the same environment, I tried to apply the same procedure to the files produced by a SageMaker training job, so I used the S3 model artifact files as input and modified a few lines of the tutorial code to fit my needs. The model comes from the built-in Object Detection algorithm (SSD with a VGG-16 base network), trained with the hyperparameter image_shape: 300.
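For context, the model artifact was pulled from S3 and unpacked more or less like this (the bucket name and key below are placeholders; the real values come from the training job's output location):

import tarfile
import boto3

# Placeholder S3 location -- use the training job's actual model artifact URI
s3 = boto3.client('s3')
s3.download_file('my-sagemaker-bucket', 'output/model.tar.gz', 'model.tar.gz')

# The archive contains model_algo_1-symbol.json and model_algo_1-0000.params
with tarfile.open('model.tar.gz') as tar:
    tar.extractall('.')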

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

sym = './model_algo_1-symbol.json'
params = './model_algo_1-0000.params'
input_shape = (1, 3, 300, 300)

I also passed verbose=True as the last parameter of the export_model() method:

converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file, True)

When I run the code, I get this error (the full verbose output is at the end of the post):

MXNetError: Error in operator multibox_target: [14:36:32] src/operator/contrib/./multibox_target-inl.h:224: Check failed: lshape.ndim() == 3 (-1 vs. 3) : Label should be [batch, num_labels, label_width] tensor
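
A quick way to see what the failing check refers to is to list the inputs the loaded symbol actually expects (a small diagnostic sketch using the standard MXNet Symbol API, not part of the tutorial):

import mxnet as mx

net = mx.sym.load('./model_algo_1-symbol.json')
# If training-only operators such as MultiBoxTarget are still in the graph,
# a '*label*' entry should show up here next to 'data'.
print([name for name in net.list_arguments() if name == 'data' or 'label' in name])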

Question

So far I have not been able to find a solution. My guesses are:

  • Maybe input_shape = (1,3,300,300) is wrong, but I cannot figure out the correct one
  • Maybe the model contains some unexpected layers (see the inspection sketch after this list)
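
To follow up on the second guess, the internal nodes of the symbol can be listed like this; the commented-out deploy-symbol part is only a hypothetical idea, and 'detection_output' is a placeholder name, not one I have confirmed:

import mxnet as mx

net = mx.sym.load('./model_algo_1-symbol.json')
internals = net.get_internals()

# Look for detection / multibox nodes among the internal outputs
print([name for name in internals.list_outputs() if 'multibox' in name or 'det' in name])

# Hypothetical follow-up: if a pure detection output exists, slicing the graph at
# that node might give a deploy symbol without the training-only MultiBoxTarget head.
# deploy_net = internals['detection_output']   # placeholder, take the real name from the list above
# deploy_net.save('./deploy-symbol.json')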

Does anyone know how to solve this problem, or a workaround that lets me use the model on a local machine?
(I mean without having to deploy it to AWS.)


Verbose output:
  infer_shape error. Arguments:
  data: (1, 3, 300, 300)
  conv3_2_weight: (256, 256, 3, 3)
  fc7_bias: (1024,)
  multi_feat_3_conv_1x1_conv_weight: (128, 512, 1, 1)
  conv4_1_bias: (512,)
  conv5_3_bias: (512,)
  relu4_3_cls_pred_conv_bias: (16,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_weight: (24, 512, 3, 3)
  relu4_3_loc_pred_conv_bias: (16,)
  relu7_cls_pred_conv_weight: (24, 1024, 3, 3)
  conv3_3_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  conv4_3_weight: (512, 512, 3, 3)
  conv1_2_bias: (64,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_bias: (24,)
  multi_feat_4_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv4_1_weight: (512, 256, 3, 3)
  relu4_3_scale: (1, 512, 1, 1)
  multi_feat_4_conv_3x3_conv_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv2_2_weight: (128, 128, 3, 3)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_5_conv_3x3_conv_bias: (256,)
  conv5_1_bias: (512,)
  multi_feat_3_conv_3x3_conv_bias: (256,)
  conv2_1_bias: (128,)
  conv5_2_weight: (512, 512, 3, 3)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_2_conv_3x3_conv_weight: (512, 256, 3, 3)
  multi_feat_2_conv_1x1_conv_bias: (256,)
  multi_feat_2_conv_1x1_conv_weight: (256, 1024, 1, 1)
  conv4_3_bias: (512,)
  relu7_cls_pred_conv_bias: (24,)
  fc6_bias: (1024,)
  conv2_1_weight: (128, 64, 3, 3)
  multi_feat_2_conv_3x3_conv_bias: (512,)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_weight: (24, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_bias: (128,)
  relu7_loc_pred_conv_bias: (24,)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv3_3_weight: (256, 256, 3, 3)
  conv1_2_weight: (64, 64, 3, 3)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv1_1_bias: (64,)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv4_2_weight: (512, 512, 3, 3)
  conv5_3_weight: (512, 512, 3, 3)
  relu7_loc_pred_conv_weight: (24, 1024, 3, 3)
  multi_feat_3_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv3_1_weight: (256, 128, 3, 3)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  relu4_3_loc_pred_conv_weight: (16, 512, 3, 3)
  multi_feat_5_conv_3x3_conv_weight: (256, 128, 3, 3)
  fc7_weight: (1024, 1024, 1, 1)
  conv4_2_bias: (512,)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_bias: (24,)
  conv2_2_bias: (128,)
  conv5_1_weight: (512, 512, 3, 3)
  multi_feat_3_conv_1x1_conv_bias: (128,)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_bias: (16,)
  conv1_1_weight: (64, 3, 3, 3)
  multi_feat_4_conv_1x1_conv_bias: (128,)
  conv3_1_bias: (256,)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_bias: (16,)
  multi_feat_4_conv_1x1_conv_weight: (128, 256, 1, 1)
  fc6_weight: (1024, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_weight: (128, 256, 1, 1)
  conv3_2_bias: (256,)
  conv5_2_bias: (512,)
  relu4_3_cls_pred_conv_weight: (16, 512, 3, 3)
