Rebuilding a Keras model

Posted 2024-04-26 19:09:45


I'm new to TensorFlow, and I'm trying to rebuild, with TensorFlow's Python API, a simple network that I originally built in Keras (TF backend). It is a simple function approximator (z = sin(x + y)).

I have tried different architectures, optimizers, and learning rates, but the new network never trains properly. To me, however, the two networks look identical. Both receive exactly the same feature vectors and labels:

import numpy as np

# making training data
start = 0
end = 2 * np.pi
samp = 1000
num_samp = samp**2
step = end / samp

x_train = np.arange(start, end, step)
y_train = np.arange(start, end, step)

data = np.array(np.meshgrid(x_train, y_train)).T.reshape(-1, 2)
z_label = np.sin(data[:, 0] + data[:, 1])
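As a quick sanity check, the data-generation block above can be run on its own to confirm the shapes involved (the rank-1 shape of `z_label` turns out to matter later):

```python
import numpy as np

start, end, samp = 0, 2 * np.pi, 1000
step = end / samp

x_train = np.arange(start, end, step)
y_train = np.arange(start, end, step)

# meshgrid -> (2, 1000, 1000); .T -> (1000, 1000, 2); reshape -> (1000000, 2)
data = np.array(np.meshgrid(x_train, y_train)).T.reshape(-1, 2)
z_label = np.sin(data[:, 0] + data[:, 1])

print(data.shape)     # (1000000, 2)
print(z_label.shape)  # (1000000,) -- a rank-1 vector, not (1000000, 1)
```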

Here is the Keras model:

from time import time

from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint, TensorBoard

# start model
model = Sequential()

# stack layers (input_dim is only needed on the first layer)
model.add(Dense(units=128, activation='sigmoid', input_dim=2, name='dense_1'))
model.add(Dense(units=64, activation='sigmoid', name='dense_2'))
model.add(Dense(units=1, activation='linear', name='output'))

# compile model (note: accuracy is not a meaningful metric for regression)
model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=['accuracy'])

checkpointer = ModelCheckpoint(filepath='./weights/weights.h5',
                               verbose=1, save_best_only=True)

tensorboard = TensorBoard(log_dir="logs/{}".format(time()))

model.fit(data, z_label, epochs=20, batch_size=32,
          shuffle=True, validation_data=(data_val, z_label_val),
          callbacks=[checkpointer, tensorboard])
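`data_val` and `z_label_val` are never defined in the question. One plausible way to produce them, sketched here on a smaller grid with a hypothetical random 90/10 split (the `_train`/`_val` names are my own, not from the question):

```python
import numpy as np

# Rebuild a (smaller) version of the question's training grid
samp = 100
grid = np.linspace(0, 2 * np.pi, samp, endpoint=False)
data = np.array(np.meshgrid(grid, grid)).T.reshape(-1, 2)
z_label = np.sin(data[:, 0] + data[:, 1]).reshape(-1, 1)

# Hold out a random 10% of the samples for validation
rng = np.random.default_rng(42)
idx = rng.permutation(len(data))
split = int(0.9 * len(data))
data_train, data_val = data[idx[:split]], data[idx[split:]]
z_label_train, z_label_val = z_label[idx[:split]], z_label[idx[split:]]
```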

Here is the new network built with TensorFlow's Python API:

import tensorflow as tf

# hyperparameters
n_inputs = 2
n_hidden1 = 128
n_hidden2 = 64
n_outputs = 1
learning_rate = 0.01

# construction phase
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='input')
y = tf.placeholder(tf.float32, shape=(None), name="target")

hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", activation=tf.nn.sigmoid)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2", activation=tf.nn.sigmoid)
# activation expects a callable or None; None gives a linear output
logits = tf.layers.dense(hidden2, n_outputs, activation=None, name='output')

loss = tf.reduce_mean(tf.square(logits - y), name='loss')

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss, name='train')

init = tf.global_variables_initializer()
saver = tf.train.Saver()

# --- execution phase ---
n_epochs = 40
batch_size = 32
n_batches = int(num_samp/batch_size)

with tf.Session() as sess:

    init.run()

    for epoch in range(n_epochs):
        print("Epoch: ", epoch, " Running...")
        loss_arr = np.array([])

        for iteration in range(n_batches):
            start = iteration * batch_size
            end = start + batch_size

            sess.run(training_op, feed_dict={X: data[start:end], y: z_label[start:end]})
            loss_arr = np.append(loss_arr, loss.eval(feed_dict={X: data[start:end], y: z_label[start:end]}))

        mean_loss = np.mean(loss_arr)
        print("Epoch: ", epoch, " Calculated ==> Loss: ", mean_loss)

While the Keras model trains correctly, the loss decreases, and the test results are accurate, the new model converges very quickly and then stops learning, so its results are completely useless.

Am I building/training the model incorrectly, or is Keras doing something in the background that I'm not aware of?


1 Answer

Solved it. The problem was the shape of the label vector: it was a flat, rank-1 vector of shape (1000000,). While Keras apparently copes with output and label vectors of different shapes, in TensorFlow the placeholder was mis-shaped, so the loss function

loss = tf.reduce_mean(tf.square(logits - y),  name='loss')

no longer made sense, and training failed. Adding

z_label = z_label.reshape(-1,1)

reshapes the label vector to (1000000, 1) and solves the problem. Alternatively, the placeholder's shape can be specified more precisely:

y = tf.placeholder(tf.float32, shape=(None,1), name="target")
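The underlying mechanism is NumPy-style broadcasting, which can be demonstrated without TensorFlow at all. With a (batch, 1) output and a rank-1 (batch,) label vector, the subtraction inside the loss silently expands to a (batch, batch) matrix instead of an element-wise difference:

```python
import numpy as np

batch = 32
logits = np.random.rand(batch, 1)   # network output: shape (32, 1)
y_flat = np.random.rand(batch)      # rank-1 labels: shape (32,)

# (32, 1) - (32,) broadcasts to (32, 32): every output is compared
# against every label, so the mean over it is not the intended MSE.
print((logits - y_flat).shape)      # (32, 32)

# Reshaping the labels to a column gives the intended element-wise diff.
y_col = y_flat.reshape(-1, 1)
print((logits - y_col).shape)       # (32, 1)
```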
