TensorFlow embeddings do not exist after the first RNN example


I added a print statement, and I noticed that for the first batch fed into the RNN the embeddings exist, but after the second batch they do not, and I get the following error:

ValueError: Variable RNNLM/RNNLM/Embedding/Adam_2/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
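
For background, this ValueError is what TensorFlow's variable-scope machinery raises whenever tf.get_variable() is asked for a name that does not yet exist while the enclosing scope is in reuse mode. A minimal sketch of the mechanism, assuming the TF 0.x/1.x graph-mode API (the scope and variable names here are made up for illustration):

import tensorflow as tf

with tf.variable_scope('demo') as scope:
  v = tf.get_variable('w', [1])        # fresh name, reuse off: variable is created
  scope.reuse_variables()
  v_again = tf.get_variable('w', [1])  # existing name, reuse on: variable is returned
  # Asking for a *new* name while reuse is on raises the error from the question:
  # ValueError: Variable demo/w_new does not exist, or was not created with
  # tf.get_variable(). Did you mean to set reuse=None in VarScope?
  v_new = tf.get_variable('w_new', [1])

So the question to ask is: what is trying to create a brand-new variable named RNNLM/RNNLM/Embedding/Adam_2 after the first batch?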

Here is the code that generates the embeddings:

def add_embedding(self):
    with tf.device('/gpu:0'):
      embedding = tf.get_variable("Embedding", [len(self.vocab), self.config.embed_size])
      e_x = tf.nn.embedding_lookup(embedding, self.input_placeholder)
      inputs = [tf.squeeze(s, [1]) for s in tf.split(1, self.config.num_steps, e_x)] 
      return inputs
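
(A side note on portability: tf.split(1, self.config.num_steps, e_x) uses the pre-1.0 argument order (split_dim, num_split, value). If you run this under TF 1.x, the equivalent call flips the arguments; a sketch assuming TF >= 1.0:

# TF 1.x signature is tf.split(value, num_or_size_splits, axis)
inputs = [tf.squeeze(s, [1])
          for s in tf.split(e_x, self.config.num_steps, axis=1)]

This does not affect the error above; it is just the same step written for the newer API.)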

Here is how the model is set up; this is where I suspect the problem lies:

[model setup code block missing from the original post]

The problem shows up when I compute the loss, which is defined as follows:

def add_training_op(self, loss):
    opt = tf.train.AdamOptimizer(self.config.lr)
    train_op = opt.minimize(loss)
    return train_op
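
For context: opt.minimize(loss) does more than wire up gradient ops. For every trainable variable it also creates Adam's slot variables (the first- and second-moment accumulators, named <variable>/Adam and <variable>/Adam_1) via tf.get_variable() in the scope of that variable, which is where the .../Embedding/Adam_... name in the error comes from. A minimal sketch to observe this, assuming TF 1.x (under 0.x, tf.all_variables() plays the role of tf.global_variables()):

import tensorflow as tf

x = tf.get_variable('x', [1])
loss = tf.reduce_sum(tf.square(x))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)

# Listing the graph's variables now shows the Adam slots, e.g.:
#   x:0, x/Adam:0, x/Adam_1:0, beta1_power:0, beta2_power:0
for v in tf.global_variables():
  print(v.name)

The key consequence: if minimize() runs a second time in a scope where reuse_variables() has been called, creating those slots fails with exactly the "does not exist" error above.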

Edit: here is some updated code to help everyone:

def __init__(self, config):
    self.config = config
    self.load_data(debug=False)
    self.add_placeholders()
    self.inputs = self.add_embedding()
    self.rnn_outputs = self.add_model(self.inputs)
    self.outputs = self.add_projection(self.rnn_outputs)
    self.predictions = [tf.nn.softmax(tf.cast(o, 'float64')) for o in self.outputs]
    output = tf.reshape(tf.concat(1, self.outputs), [-1, len(self.vocab)])
    self.calculate_loss = self.add_loss_op(output)
    self.train_step = self.add_training_op(self.calculate_loss)

The other methods here are add_projection and add_loss_op, so we can rule those out:

def add_loss_op(self, output):
    weights = tf.ones([self.config.batch_size * self.config.num_steps], tf.int32)
    seq_loss = tf.python.seq2seq.sequence_loss(
      [output], 
      tf.reshape(self.labels_placeholder, [-1]), 
      weights
      )
    tf.add_to_collection('total_loss', seq_loss)
    loss = tf.add_n(tf.get_collection('total_loss')) 
    return loss

def add_projection(self, rnn_outputs):
    with tf.variable_scope("Projection", initializer=tf.contrib.layers.xavier_initializer()) as scope:
      U = tf.get_variable("U", [self.config.hidden_size, len(self.vocab)])
      b_2 = tf.get_variable("b_2", [len(self.vocab)])

      outputs = [tf.matmul(x, U) + b_2 for x in rnn_outputs]
      return outputs


def train_RNNLM():
  config = Config()
  gen_config = deepcopy(config)
  gen_config.batch_size = gen_config.num_steps = 1

  with tf.variable_scope('RNNLM') as scope:
    model = RNNLM_Model(config)
    # This instructs gen_model to reuse the same variables as the model above
    scope.reuse_variables()
    gen_model = RNNLM_Model(gen_config)

  init = tf.initialize_all_variables()
  saver = tf.train.Saver()

  with tf.Session() as session:
    best_val_pp = float('inf')
    best_val_epoch = 0

    session.run(init)
    for epoch in xrange(config.max_epochs):
      print 'Epoch {}'.format(epoch)
      start = time.time()
      ###
      train_pp = model.run_epoch(
          session, model.encoded_train,
          train_op=model.train_step)
      valid_pp = model.run_epoch(session, model.encoded_valid)
      print 'Training perplexity: {}'.format(train_pp)
      print 'Validation perplexity: {}'.format(valid_pp)
      if valid_pp < best_val_pp:
        best_val_pp = valid_pp
        best_val_epoch = epoch
        saver.save(session, './ptb_rnnlm.weights')
      if epoch - best_val_epoch > config.early_stopping:
        break
      print 'Total time: {}'.format(time.time() - start)

2 Answers

The code seems to be trying to create a new Adam variable for each batch. Is it possible that add_training_op is being called twice? Also, the def add_training_op snippet is incomplete, since there is no return statement.

The problem was the following lines of code:

model = RNNLM_Model(config)
# This instructs gen_model to reuse the same variables as the model above
scope.reuse_variables()
gen_model = RNNLM_Model(gen_config)

Because it uses reuse_variables(), the second model was the problem. Removing that line made the problem go away.
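
To connect this with the first answer: __init__ calls add_training_op, and train_RNNLM constructs the model twice, the second time after scope.reuse_variables(). The second minimize() call then tries to create fresh Adam slot variables in a scope that only permits reuse, which produces the error. If you still want the weight sharing that reuse_variables() provides, one possible fix is to skip the training op for the generative model; a sketch in which the build_train_op flag is hypothetical, not part of the original code:

class RNNLM_Model(object):
  def __init__(self, config, build_train_op=True):  # hypothetical flag
    # ... placeholders, embedding, model, projection, loss as above ...
    if build_train_op:
      # Only the first model creates Adam's slot variables.
      self.train_step = self.add_training_op(self.calculate_loss)

with tf.variable_scope('RNNLM') as scope:
  model = RNNLM_Model(config)  # creates all variables, Adam slots included
  scope.reuse_variables()
  gen_model = RNNLM_Model(gen_config, build_train_op=False)  # only reuses

This keeps gen_model sharing the trained weights (the point of reuse_variables()) while never asking tf.get_variable() for a new name under reuse.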
