[Caffe]: Check failed: ShapeEquals(proto) shape mismatch (reshape not set)
<p>I am getting this error; I searched the web but could not find anything conclusive about it.</p>
<p>I trained my net successfully with Caffe, reaching an accuracy of about 82%.</p>
<p>Now I am trying to run it on an image with this command:</p>
<p><code>python python/classify.py --model_def examples/imagenet/imagenet_deploy.prototxt --pretrained_model caffe_mycaffe_train_iter_10000.caffemodel --images_dim 64,64 data/mycaffe/testingset/cat1/113.png foo --mean_file data/mycaffe/mycaffe_train_mean.binaryproto</code></p>
<p>Yes, my images are 64x64.</p>
<p>These are the last lines I get:</p>
<blockquote> <p>
I0610 15:33:44.868100 28657 net.cpp:194] conv3 does not need backward computation.
I0610 15:33:44.868110 28657 net.cpp:194] norm2 does not need backward computation.
I0610 15:33:44.868120 28657 net.cpp:194] pool2 does not need backward computation.
I0610 15:33:44.868130 28657 net.cpp:194] relu2 does not need backward computation.
I0610 15:33:44.868142 28657 net.cpp:194] conv2 does not need backward computation.
I0610 15:33:44.868152 28657 net.cpp:194] norm1 does not need backward computation.
I0610 15:33:44.868162 28657 net.cpp:194] pool1 does not need backward computation.
I0610 15:33:44.868173 28657 net.cpp:194] relu1 does not need backward computation.
I0610 15:33:44.868182 28657 net.cpp:194] conv1 does not need backward computation.
I0610 15:33:44.868192 28657 net.cpp:235] This network produces output fc8_pascal
I0610 15:33:44.868214 28657 net.cpp:482] Collecting Learning Rate and Weight Decay.
I0610 15:33:44.868238 28657 net.cpp:247] Network initialization done.
I0610 15:33:44.868249 28657 net.cpp:248] Memory required for data: 3136120
F0610 15:33:45.025965 28657 blob.cpp:458] Check failed: ShapeEquals(proto) shape mismatch (reshape not set) <strong>* Check failure stack trace: *</strong> Aborted (core dumped)
</p> </blockquote>
<p>I have tried without <code>--mean_file</code> and other variations, but I am out of ideas.</p>
<p>This is my imagenet_deploy.prototxt. I have already changed some of its parameters while debugging, but nothing has helped.</p>
<pre><code>name: "MyCaffe"
input: "data"
input_dim: 10
input_dim: 3
input_dim: 64
input_dim: 64
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" convolution_param { num_output: 64 kernel_size: 11 stride: 4 } }
layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }
layer { name: "pool1" type: "Pooling" bottom: "conv1" top: "pool1" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "norm1" type: "LRN" bottom: "pool1" top: "norm1" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv2" type: "Convolution" bottom: "norm1" top: "conv2" convolution_param { num_output: 64 pad: 2 kernel_size: 5 group: 2 } }
layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" }
layer { name: "pool2" type: "Pooling" bottom: "conv2" top: "pool2" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "norm2" type: "LRN" bottom: "pool2" top: "norm2" lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 } }
layer { name: "conv3" type: "Convolution" bottom: "norm2" top: "conv3" convolution_param { num_output: 384 pad: 1 kernel_size: 3 } }
layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" }
layer { name: "conv4" type: "Convolution" bottom: "conv3" top: "conv4" convolution_param { num_output: 384 pad: 1 kernel_size: 3 group: 2 } }
layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" }
layer { name: "conv5" type: "Convolution" bottom: "conv4" top: "conv5" convolution_param { num_output: 64 pad: 1 kernel_size: 3 group: 2 } }
layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" }
layer { name: "pool5" type: "Pooling" bottom: "conv5" top: "pool5" pooling_param { pool: MAX kernel_size: 3 stride: 2 } }
layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" inner_product_param { num_output: 4096 } }
layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" }
layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc7" type: "InnerProduct" bottom: "fc6" top: "fc7" inner_product_param { num_output: 4096 } }
layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" }
layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } }
layer { name: "fc8_pascal" type: "InnerProduct" bottom: "fc7" top: "fc8_pascal" inner_product_param { num_output: 3 } }
</code></pre>
<p>Can anyone give me a clue? Thank you very much.</p>
<hr/>
<p>The C++ <strong>classification.bin</strong> tool does the same; it gives:</p>
<blockquote> <p>
F0610 18:06:14.975601 7906 blob.cpp:455] Check failed: ShapeEquals(proto) shape mismatch (reshape not set) <strong>* Check failure stack trace: *</strong>
@ 0x7f0e3c50761c google::LogMessage::Fail()
@ 0x7f0e3c507568 google::LogMessage::SendToLog()
@ 0x7f0e3c506f6a google::LogMessage::Flush()
@ 0x7f0e3c509f01 google::LogMessageFatal::~LogMessageFatal()
@ 0x7f0e3c964a80 caffe::Blob&lt;&gt;::FromProto()
@ 0x7f0e3c89576e caffe::Net&lt;&gt;::CopyTrainedLayersFrom()
@ 0x7f0e3c8a10d2 caffe::Net&lt;&gt;::CopyTrainedLayersFrom()
@ 0x406c32 Classifier::Classifier()
@ 0x403d2b main
@ 0x7f0e3b124ec5 (unknown)
@ 0x4041ce (unknown)
Aborted (core dumped)
</p> </blockquote>
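<p>A note on where the mismatch can come from: the stack trace shows the check firing in <code>Blob::FromProto()</code> while <code>CopyTrainedLayersFrom()</code> loads the .caffemodel, so every parameter blob in the deploy net must have exactly the shape that was saved at training time. Convolution weights do not depend on the input size, but the fc6 weights do, through the spatial size that survives the conv/pool stack. A minimal sketch (plain Python, no Caffe required; it only applies Caffe's output-size formulas, floor for convolution and ceil for pooling, to the dimensions in the deploy file above):</p>

```python
import math

def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output size: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride=1, pad=0):
    # Caffe pooling rounds up instead of down
    return math.ceil((size + 2 * pad - kernel) / stride) + 1

s = 64                          # input_dim from the deploy file
s = conv_out(s, 11, stride=4)   # conv1
s = pool_out(s, 3, stride=2)    # pool1
s = conv_out(s, 5, pad=2)       # conv2
s = pool_out(s, 3, stride=2)    # pool2
s = conv_out(s, 3, pad=1)       # conv3
s = conv_out(s, 3, pad=1)       # conv4
s = conv_out(s, 3, pad=1)       # conv5
s = pool_out(s, 3, stride=2)    # pool5

# fc6 weights have shape 4096 x (conv5 num_output * s * s); if this product
# differs between the training net and the deploy net, FromProto fails with
# exactly the "ShapeEquals(proto) shape mismatch" check.
fc6_inputs = 64 * s * s
print(s, fc6_inputs)            # -> 1 64
```

<p>With a 64x64 input, pool5 collapses to 1x1 and fc6 expects 4096x64 weights; any difference in input dims, kernel sizes, strides, or pads between the train and deploy definitions changes that product and trips the check on load.</p>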
1 Answer
Anonymous · answered 1 day ago · good at: python, mysql, java
<p>In my case, the kernel size of the second convolution layer in the solver file was different from the one in the training file. Changing the size in the solver file fixed the problem.</p>
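<p>One quick way to catch that kind of mismatch is to extract a parameter such as <code>kernel_size</code> per layer from both prototxt files and diff the results. A rough sketch (plain Python over text-format prototxt; the two inline strings here are placeholders standing in for the real train and deploy files, and the naive block split only handles the usual one-level <code>*_param</code> nesting):</p>

```python
import re

def layer_params(prototxt_text, field="kernel_size"):
    """Map layer name -> integer value of `field` in a text-format prototxt."""
    params = {}
    # Split the text at each 'layer {' and scan every block for a name and
    # the requested field; good enough for a quick side-by-side diff.
    for block in re.split(r'\blayer\s*\{', prototxt_text)[1:]:
        name = re.search(r'name:\s*"([^"]+)"', block)
        value = re.search(field + r':\s*(\d+)', block)
        if name and value:
            params[name.group(1)] = int(value.group(1))
    return params

# Placeholder snippets; read the actual train/deploy files here instead.
train = 'layer { name: "conv2" type: "Convolution" convolution_param { kernel_size: 5 } }'
deploy = 'layer { name: "conv2" type: "Convolution" convolution_param { kernel_size: 3 } }'

a, b = layer_params(train), layer_params(deploy)
for layer in sorted(set(a) | set(b)):
    if a.get(layer) != b.get(layer):
        print(layer, "kernel_size:", a.get(layer), "!=", b.get(layer))
```

<p>Running the same comparison for <code>stride</code>, <code>pad</code>, and <code>num_output</code> covers the parameters that determine the saved blob shapes.</p>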
Asked by an anonymous user 2 days ago