<p>I wrote a convolutional network in TensorFlow with ReLU as the activation function, but it is not learning (the loss stays constant on both the eval and train datasets).
With other activation functions everything works fine.</p>
<p>Here is the code that creates the network:</p>
<pre><code>def _create_nn(self):
    current = tf.layers.conv2d(self.input, 20, 3, activation=self.activation)
    current = tf.layers.max_pooling2d(current, 2, 2)
    current = tf.layers.conv2d(current, 24, 3, activation=self.activation)
    current = tf.layers.conv2d(current, 24, 3, activation=self.activation)
    current = tf.layers.max_pooling2d(current, 2, 2)
    self.descriptor = current = tf.layers.conv2d(current, 32, 5, activation=self.activation)
    if not self.drop_conv:
        current = tf.layers.conv2d(current, self.layer_7_filters_n, 3, activation=self.activation)
    if self.add_conv:
        current = tf.layers.conv2d(current, 48, 2, activation=self.activation)
        self.descriptor = current
    last_conv_output_shape = current.get_shape().as_list()
    self.descr_size = last_conv_output_shape[1] * last_conv_output_shape[2] * last_conv_output_shape[3]
    current = tf.layers.dense(tf.reshape(current, [-1, self.descr_size]), 100, activation=self.activation)
    current = tf.layers.dense(current, 50, activation=self.last_activation)
    return current
</code></pre>
<p>Here <code>self.activation</code> is set to <code>tf.nn.relu</code> and <code>self.last_activation</code> is set to <code>tf.nn.softmax</code>.</p>
<p>The loss function and optimizer are created here:</p>
^{pr2}$
<p>I tried changing the variable initialization by passing <code>tf.random_normal_initializer(0.1, 0.1)</code> as the initializer, but it did not lead to any change in the loss.</p>
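<p>For reference, a minimal sketch (an assumption, not the original code) of how such an initializer is passed to <code>tf.layers.conv2d</code> / <code>tf.layers.dense</code> via the <code>kernel_initializer</code> argument:</p>
<pre><code>import tensorflow as tf

# Assumed illustration (not the poster's exact code): passing the initializer
# mentioned above to the convolutional and dense layers via kernel_initializer.
init = tf.random_normal_initializer(0.1, 0.1)  # mean=0.1, stddev=0.1

inputs = tf.placeholder(tf.float32, [None, 64, 64, 1])  # hypothetical input shape
net = tf.layers.conv2d(inputs, 20, 3, activation=tf.nn.relu,
                       kernel_initializer=init)
net = tf.layers.dense(tf.layers.flatten(net), 100, activation=tf.nn.relu,
                      kernel_initializer=init)
</code></pre>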
<p>I would greatly appreciate any help getting this neural network to work with ReLU.</p>
<p>Edit: Leaky ReLU has the same problem.</p>
<p>Edit: a small example in which I managed to reproduce the same error:</p>
<pre><code>x = tf.constant([[3., 211., 123., 78.]])
v = tf.Variable([0.5, 0.5, 0.5, 0.5])
h_d = tf.layers.Dense(4, activation=tf.nn.leaky_relu)
h = h_d(x)
y_d = tf.layers.Dense(4, activation=tf.nn.softmax)
y = y_d(h)
d = tf.constant([[.5, .5, 0, 0]])
</code></pre>
<p>The gradients (according to <code>tf.gradients</code>) for h, y, the kernels and the biases are all either equal to or close to 0.</p>
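<p>A minimal sketch (assuming the snippet above plus a made-up loss term, which the post does not show) of how those gradients can be inspected with <code>tf.gradients</code>:</p>
<pre><code>import tensorflow as tf

# Sketch of how the gradients described above can be inspected with tf.gradients.
# The loss term is an assumption for illustration; the original post does not show one.
x = tf.constant([[3., 211., 123., 78.]])
h_d = tf.layers.Dense(4, activation=tf.nn.leaky_relu)
h = h_d(x)
y_d = tf.layers.Dense(4, activation=tf.nn.softmax)
y = y_d(h)
d = tf.constant([[.5, .5, 0, 0]])

loss = tf.reduce_mean(tf.square(y - d))  # hypothetical loss, for illustration only
grads = tf.gradients(loss, [h, y, h_d.kernel, h_d.bias])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))  # check whether the gradient values are at or near zero
</code></pre>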