<p>I have two networks: a <code>Model</code> that produces output and an <code>Adversary</code> that grades that output.</p>
<p>Both have been trained separately, but now I need to combine their outputs during a single training session.</p>
<p>I am trying to implement the solution proposed in this post: <a href="https://stackoverflow.com/questions/39175945/run-multiple-pre-trained-tensorflow-nets-at-the-same-time">Run multiple pre-trained Tensorflow nets at the same time</a></p>
<p><strong>My code</strong></p>
<pre><code>with tf.name_scope("model"):
    model = Model(args)

with tf.name_scope("adv"):
    adversary = Adversary(adv_args)

# ...

with tf.Session() as sess:
    tf.global_variables_initializer().run()

    # Get the variables specific to the `Model`
    # Also strip the superfluous ":0", which for some reason is not saved in the checkpoint
    model_varlist = {v.name.lstrip("model/")[:-2]: v
                     for v in tf.global_variables() if v.name[:5] == "model"}
    model_saver = tf.train.Saver(var_list=model_varlist)
    model_ckpt = tf.train.get_checkpoint_state(args.save_dir)
    model_saver.restore(sess, model_ckpt.model_checkpoint_path)

    # Get the variables specific to the `Adversary`
    adv_varlist = {v.name.lstrip("avd/")[:-2]: v
                   for v in tf.global_variables() if v.name[:3] == "adv"}
    adv_saver = tf.train.Saver(var_list=adv_varlist)
    adv_ckpt = tf.train.get_checkpoint_state(adv_args.save_dir)
    adv_saver.restore(sess, adv_ckpt.model_checkpoint_path)
</code></pre>
<p><strong>The problem</strong></p>
<p>The call to <code>model_saver.restore()</code> appears to do nothing. In another module I use a saver built with <code>tf.train.Saver(tf.global_variables())</code>, and it restores the checkpoint just fine.</p>
<p>The model has <code>model.tvars = tf.trainable_variables()</code>. To check what was going on, I used <code>sess.run()</code> to extract the <code>tvars</code> before and after the restore. Each time the variables keep their initial random values; the values from the checkpoint are never assigned.</p>
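<p>While narrowing this down, I also looked at the keys the dictionary comprehension actually produces. One thing worth noting: Python's <code>str.lstrip()</code> removes any leading characters that belong to the given character set, not the literal prefix string. A minimal standalone check of that behavior, using a hypothetical variable name (no TensorFlow needed):</p>
<pre><code># lstrip("model/") strips leading characters drawn from the set
# {m, o, d, e, l, /}, not the literal prefix "model/", so names
# whose next segment starts with one of those characters get mangled.
name = "model/dense/kernel:0"   # hypothetical variable name
key = name.lstrip("model/")[:-2]
print(key)  # prints "nse/kernel", not "dense/kernel"
</code></pre>
<p>If the keys in <code>var_list</code> come out mangled like this, they would not match the variable names stored in the checkpoint.</p>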
<p>Any ideas why <code>model_saver.restore()</code> appears to do nothing?</p>