<p>This took a long time to figure out, so I'm posting my (possibly imperfect) solution in case anyone else needs it.</p>
<p>To diagnose the problem, I manually looped over every variable and assigned them one by one. I then noticed that after a variable is assigned, its name changes. This is described here: <a href="https://stackoverflow.com/questions/34112202/tensorflow-checkpoint-save-and-read">TensorFlow checkpoint save and read</a></p>
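<p>For reference, one way to inspect which names a checkpoint actually stores is <code>tf.train.list_variables</code>. This is a minimal sketch, not my original code: it builds a throwaway checkpoint first so it is self-contained, and imports <code>tf.compat.v1</code> so it also runs under TF 2.x (with TF 1.x, plain <code>import tensorflow as tf</code> works):</p>
<pre><code>import tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

save_dir = tempfile.mkdtemp()  # stand-in for args.save_dir

# Build and save a toy model so there is a checkpoint to inspect.
with tf.Graph().as_default():
    v = tf.get_variable("model/weights", shape=[2],
                        initializer=tf.zeros_initializer())
    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, save_dir + "/model.ckpt")

# The variable names actually stored in the checkpoint:
stored = [name for name, shape in tf.train.list_variables(save_dir)]
print(stored)  # e.g. ['model/weights']
</code></pre>
<p>Comparing this list against <code>tf.global_variables()</code> in the current graph shows exactly which names no longer line up.</p>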
<p>Following the advice in that post, I ran each model in its own graph. That also meant running each graph in its own session, which meant handling session management differently.</p>
<p>First I created the two graphs</p>
<pre><code>model_graph = tf.Graph()
with model_graph.as_default():
    model = Model(args)

adv_graph = tf.Graph()
with adv_graph.as_default():
    adversary = Adversary(adv_args)
</code></pre>
<p>Then the two sessions</p>
<pre><code>adv_sess = tf.Session(graph=adv_graph)
sess = tf.Session(graph=model_graph)
</code></pre>
<p>Then I initialized the variables in each session and restored each graph separately</p>
<pre><code>with sess.as_default():
    with model_graph.as_default():
        tf.global_variables_initializer().run()
        model_saver = tf.train.Saver(tf.global_variables())
        model_ckpt = tf.train.get_checkpoint_state(args.save_dir)
        model_saver.restore(sess, model_ckpt.model_checkpoint_path)

with adv_sess.as_default():
    with adv_graph.as_default():
        tf.global_variables_initializer().run()
        adv_saver = tf.train.Saver(tf.global_variables())
        adv_ckpt = tf.train.get_checkpoint_state(adv_args.save_dir)
        adv_saver.restore(adv_sess, adv_ckpt.model_checkpoint_path)
</code></pre>
<p>From here on, whenever a given session is needed, I wrap any <code>tf</code> function in that session with <code>with sess.as_default():</code>. Finally, I close the sessions by hand</p>
<pre><code>sess.close()
adv_sess.close()
</code></pre>
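<p>To illustrate the wrapping, here is a minimal, self-contained sketch of the whole pattern with two toy graphs. The ops are hypothetical stand-ins for the real <code>Model</code> and <code>Adversary</code>, and it imports <code>tf.compat.v1</code> so it also runs under TF 2.x (with TF 1.x, plain <code>import tensorflow as tf</code> works):</p>
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# One graph per model, as above.
model_graph = tf.Graph()
with model_graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None], name="x")
    doubled = tf.multiply(x, 2.0, name="doubled")

adv_graph = tf.Graph()
with adv_graph.as_default():
    y = tf.placeholder(tf.float32, shape=[None], name="y")
    negated = tf.negative(y, name="negated")

# One session per graph.
sess = tf.Session(graph=model_graph)
adv_sess = tf.Session(graph=adv_graph)

# Each op must run inside the session that owns its graph,
# so wrap the call with that session's as_default().
with sess.as_default():
    out_model = doubled.eval(feed_dict={x: [1.0, 2.0]})   # [2.0, 4.0]
with adv_sess.as_default():
    out_adv = negated.eval(feed_dict={y: [1.0, 2.0]})     # [-1.0, -2.0]

sess.close()
adv_sess.close()
</code></pre>
<p>The key point is that <code>Tensor.eval()</code> (and <code>tf.global_variables_initializer().run()</code> earlier) picks up whichever session is currently the default, which is exactly what <code>as_default()</code> controls.</p>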