AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'

Posted 2024-04-30 02:45:24


I am getting this error message while using TensorFlow 1.11.0:

[['model', '300000']]
Jan 01 03:24 test.py[line:53] INFO Test model/model.ckpt-300000. 
Jan 01 03:24 test.py[line:57] INFO Test data/test.1.txt with beam_size = 1
Jan 01 03:24 data_util.py[line:17] INFO Try load dict from data/doc_dict.txt.
Jan 01 03:24 data_util.py[line:33] INFO Load dict data/doc_dict.txt with 30000 words.
Jan 01 03:24 data_util.py[line:17] INFO Try load dict from data/sum_dict.txt.
Jan 01 03:24 data_util.py[line:33] INFO Load dict data/sum_dict.txt with 30000 words.
Jan 01 03:24 data_util.py[line:172] INFO Load test document from data/test.1.txt.
Jan 01 03:24 data_util.py[line:178] INFO Load 1 testing documents.
Jan 01 03:24 data_util.py[line:183] INFO Doc dict covers 75.61% words.
2019-01-01 03:24:51.426388: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Jan 01 03:24 summarization.py[line:195] INFO Creating 1 layers of 400 units.
Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "/TensorFlow-Summarization/src/bigru_model.py", line 89, in __init__
    wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'
Jan 01 03:24 test.py[line:57] INFO Test data/test.1.txt with beam_size = 10
Jan 01 03:25 data_util.py[line:17] INFO Try load dict from data/doc_dict.txt.
Jan 01 03:25 data_util.py[line:33] INFO Load dict data/doc_dict.txt with 30000 words.
Jan 01 03:25 data_util.py[line:17] INFO Try load dict from data/sum_dict.txt.
Jan 01 03:25 data_util.py[line:33] INFO Load dict data/sum_dict.txt with 30000 words.
Jan 01 03:25 data_util.py[line:172] INFO Load test document from data/test.1.txt.
Jan 01 03:25 data_util.py[line:178] INFO Load 1 testing documents.
Jan 01 03:25 data_util.py[line:183] INFO Doc dict covers 75.61% words.
2019-01-01 03:25:02.643185: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Jan 01 03:25 summarization.py[line:195] INFO Creating 1 layers of 400 units.
Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "/TensorFlow-Summarization/src/bigru_model.py", line 89, in __init__
    wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'

Code

…

Tags: in, py, info, src, txt, data, model, tensorflow
2 Answers

This is a deprecation issue; use tf.contrib.seq2seq.AttentionWrapper instead. I guess you borrowed some code from https://github.com/thunlp/TensorFlow-Summarization/blob/master/src/bigru_model.py.

attention = tf.contrib.seq2seq.BahdanauAttention(num_units=size_layer, memory=encoder_out, memory_sequence_length=seq_len)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(cell=tf.nn.rnn_cell.MultiRNNCell([lstm_cell(reuse) for _ in range(num_layers)]), attention_mechanism=attention, attention_layer_size=size_layer)
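
Putting the two lines above together, here is a minimal sketch of how the replacement could look (assuming GRU cells as in bigru_model.py; size_layer, encoder_out, seq_len and encoder_state are illustrative names, not necessarily the repo's actual variables):

import tensorflow as tf

size_layer = 400                                                     # hidden size, matching the "400 units" in the log
encoder_out = tf.placeholder(tf.float32, [None, None, size_layer])   # encoder outputs [batch, time, units]
seq_len = tf.placeholder(tf.int32, [None])                           # source sequence lengths
encoder_state = tf.placeholder(tf.float32, [None, size_layer])       # final encoder state for a single GRU layer

attention = tf.contrib.seq2seq.BahdanauAttention(
    num_units=size_layer,
    memory=encoder_out,
    memory_sequence_length=seq_len)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell=tf.nn.rnn_cell.GRUCell(size_layer),
    attention_mechanism=attention,
    attention_layer_size=size_layer)

# The removed DynamicAttentionWrapperState corresponds to AttentionWrapperState;
# instead of constructing it by hand, start from zero_state() and clone in the
# encoder's final state.
batch_size = tf.shape(encoder_out)[0]
initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(
    cell_state=encoder_state)

The resulting initial_state can then be passed to tf.contrib.seq2seq.BasicDecoder (or a BeamSearchDecoder) in place of the old hand-built DynamicAttentionWrapperState.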

This may be because, according to the documentation, tf.contrib.seq2seq in the 1.11.0 API has no DynamicAttentionWrapperState.

They added the monotonic attention wrappers in release 1.3.0.
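
If in doubt about which symbols your installed build actually exposes, a quick check from an interactive session (assumes a TF 1.x install with contrib present) is:

import tensorflow as tf

print(tf.__version__)
# List the attention-related classes available in this build; on 1.11.0 this
# shows AttentionWrapper / AttentionWrapperState but no DynamicAttentionWrapper*.
print([name for name in dir(tf.contrib.seq2seq) if 'Attention' in name])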
