[Caffe]: Check failed: ShapeEquals(proto) shape mismatch (reshape not set)

I'm getting this error; I tried searching online but couldn't find anything that clears it up.

I trained my net successfully with Caffe, reaching about 82% accuracy.

Now I'm trying to classify an image with the following command:

python python/classify.py --model_def examples/imagenet/imagenet_deploy.prototxt --pretrained_model caffe_mycaffe_train_iter_10000.caffemodel --images_dim 64,64 data/mycaffe/testingset/cat1/113.png foo --mean_file data/mycaffe/mycaffe_train_mean.binaryproto

Yes, my images are 64x64.

These are the last lines of the output I get:

I0610 15:33:44.868100 28657 net.cpp:194] conv3 does not need backward computation.
I0610 15:33:44.868110 28657 net.cpp:194] norm2 does not need backward computation.
I0610 15:33:44.868120 28657 net.cpp:194] pool2 does not need backward computation.
I0610 15:33:44.868130 28657 net.cpp:194] relu2 does not need backward computation.
I0610 15:33:44.868142 28657 net.cpp:194] conv2 does not need backward computation.
I0610 15:33:44.868152 28657 net.cpp:194] norm1 does not need backward computation.
I0610 15:33:44.868162 28657 net.cpp:194] pool1 does not need backward computation.
I0610 15:33:44.868173 28657 net.cpp:194] relu1 does not need backward computation.
I0610 15:33:44.868182 28657 net.cpp:194] conv1 does not need backward computation.
I0610 15:33:44.868192 28657 net.cpp:235] This network produces output fc8_pascal
I0610 15:33:44.868214 28657 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0610 15:33:44.868238 28657 net.cpp:247] Network initialization done.
I0610 15:33:44.868249 28657 net.cpp:248] Memory required for data: 3136120
F0610 15:33:45.025965 28657 blob.cpp:458] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
*** Check failure stack trace: ***
Aborted (core dumped)

I've tried not setting --mean_file, among other things, but I've run out of ideas.
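For reference, here is a rough pycaffe equivalent of that classify.py call; running the steps one at a time helps narrow down where it fails. This is only a sketch: the paths are taken from the command above, and the usual Transformer preprocessing defaults are assumed.

import caffe
from caffe.proto import caffe_pb2

# Paths copied from the command line above.
MODEL_DEF = 'examples/imagenet/imagenet_deploy.prototxt'
WEIGHTS = 'caffe_mycaffe_train_iter_10000.caffemodel'
MEAN_FILE = 'data/mycaffe/mycaffe_train_mean.binaryproto'
IMAGE = 'data/mycaffe/testingset/cat1/113.png'

# Load the mean image from the binaryproto file.
mean_blob = caffe_pb2.BlobProto()
with open(MEAN_FILE, 'rb') as f:
    mean_blob.ParseFromString(f.read())
mean = caffe.io.blobproto_to_array(mean_blob)[0]

# Constructing the net copies the trained weights into the deploy
# definition; this is the step where the shape check fires.
net = caffe.Net(MODEL_DEF, WEIGHTS, caffe.TEST)

# Usual preprocessing: HWC float in [0,1] -> CHW, mean subtraction,
# rescale to [0,255], RGB -> BGR.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', mean)
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

img = caffe.io.load_image(IMAGE)
net.blobs['data'].data[0] = transformer.preprocess('data', img)
out = net.forward()
print(out['fc8_pascal'][0].argmax())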

Here is my imagenet_deploy.prototxt. I've changed some of its parameters while debugging, but nothing has helped.

name: "MyCaffe"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 64
input_dim: 64
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 11
    stride: 4
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm1"
  type: "LRN"
  bottom: "pool1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "norm1"
  top: "conv2"
  convolution_param {
    num_output: 64 
    pad: 2
    kernel_size: 5
    group: 2
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "norm2"
  type: "LRN"
  bottom: "pool2"
  top: "norm2"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "norm2"
  top: "conv3"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu4"
  type: "ReLU"
  bottom: "conv4"
  top: "conv4"
}
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    group: 2
  }
}
layer {
  name: "relu5"
  type: "ReLU"
  bottom: "conv5"
  top: "conv5"
}
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu6"
  type: "ReLU"
  bottom: "fc6"
  top: "fc6"
}
layer {
  name: "drop6"
  type: "Dropout"
  bottom: "fc6"
  top: "fc6"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  inner_product_param {
    num_output: 4096
  }
}
layer {
  name: "relu7"
  type: "ReLU"
  bottom: "fc7"
  top: "fc7"
}
layer {
  name: "drop7"
  type: "Dropout"
  bottom: "fc7"
  top: "fc7"
  dropout_param {
    dropout_ratio: 0.5
  }
}
layer {
  name: "fc8_pascal"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_pascal"
  inner_product_param {
    num_output: 3
  }
}

Can anyone give me a clue? Thank you very much.


The C++ classification example (classification.bin) does the same; it gives:

F0610 18:06:14.975601  7906 blob.cpp:455] Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
*** Check failure stack trace: ***
    @     0x7f0e3c50761c  google::LogMessage::Fail()
    @     0x7f0e3c507568  google::LogMessage::SendToLog()
    @     0x7f0e3c506f6a  google::LogMessage::Flush()
    @     0x7f0e3c509f01  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f0e3c964a80  caffe::Blob<>::FromProto()
    @     0x7f0e3c89576e  caffe::Net<>::CopyTrainedLayersFrom()
    @     0x7f0e3c8a10d2  caffe::Net<>::CopyTrainedLayersFrom()
    @           0x406c32  Classifier::Classifier()
    @           0x403d2b  main
    @     0x7f0e3b124ec5  (unknown)
    @           0x4041ce  (unknown)
Aborted (core dumped)


3 Answers

In my case, the kernel size of the second convolution layer differed between my deploy prototxt and my train prototxt. Making the sizes match solved the problem.

I got the same error. In my case, the output parameter (num_output) of the last layer was wrong: after switching datasets I changed the number of classes in train.prototxt but failed to do the same in test.prototxt (or deploy.prototxt). Correcting that solved the problem for me.
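In cases like these, a quick way to find the offending layer is to dump the parameter-blob shapes stored in the .caffemodel and compare them with the deploy definition. A minimal sketch, assuming the pycaffe protobuf bindings are importable (filename taken from the question):

from caffe.proto import caffe_pb2

# Parse the trained weights file directly.
net_param = caffe_pb2.NetParameter()
with open('caffe_mycaffe_train_iter_10000.caffemodel', 'rb') as f:
    net_param.ParseFromString(f.read())

# Print every saved parameter blob's shape, layer by layer. Old-format
# models populate net_param.layers instead of net_param.layer, and old
# blobs carry num/channels/height/width instead of a shape message.
for layer in list(net_param.layer) + list(net_param.layers):
    for i, blob in enumerate(layer.blobs):
        if blob.HasField('shape'):
            dims = list(blob.shape.dim)
        else:
            dims = [blob.num, blob.channels, blob.height, blob.width]
        print(layer.name, 'blob', i, dims)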

First, let me confirm that the basic steps are correct.

input_dim: 10
input_dim: 3
input_dim: 64
input_dim: 64

Have you tried changing the first dimension to 1, since you are passing only a single image?
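Equivalently, pycaffe can reshape the input blob at run time instead of editing the prototxt; a sketch, with the file names taken from the question:

import caffe

# File names taken from the question.
net = caffe.Net('examples/imagenet/imagenet_deploy.prototxt',
                'caffe_mycaffe_train_iter_10000.caffemodel',
                caffe.TEST)

# Shrink the batch dimension to a single 64x64 RGB image and let the
# change propagate through all downstream blobs.
net.blobs['data'].reshape(1, 3, 64, 64)
net.reshape()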

The above error occurs when the dimensions of a top or bottom blob are wrong, and there is no place other than the input blob where that could happen here.

Edit 2:

The error message "ShapeEquals(proto) shape mismatch (reshape not set)" appears when the "reshape" parameter of the FromProto function call is set to false.

I did a quick search for FromProto calls in the repository, like here. No function other than CopyTrainedLayersFrom actually sets that parameter to false.

This is actually puzzling. The two approaches I'd suggest are:

  1. Check whether your Caffe source is up to date with the repository.
  2. Try running the test phase using the executable found in /build/tools/.
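Separately, to see the deploy side of the comparison that FromProto performs, the deploy definition can be instantiated without weights, so the copy (and hence the failing check) never runs. A sketch, assuming the path from the question:

import caffe

# Instantiate the deploy definition WITHOUT the .caffemodel, so no
# weights are copied and the FromProto shape check cannot fire.
net = caffe.Net('examples/imagenet/imagenet_deploy.prototxt', caffe.TEST)

# Print the parameter shapes the deploy net expects; compare these
# against the shapes stored in the .caffemodel.
for name, params in net.params.items():
    print(name, [p.data.shape for p in params])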
