I have the first Harry Potter book as a txt file. From it I created two new txt files: in the first, every occurrence of Hermione is replaced with Hermione_1; in the second, every occurrence of Hermione is replaced with Hermione_2. I then concatenated the two texts into one long text and used that as the input to Word2Vec.

Here is my code:
from gensim.models import Word2Vec
from gensim.models import KeyedVectors

# Create the two modified copies of the book.
with open("HarryPotter1.txt", 'r') as original, \
        open("HarryPotter1_1.txt", 'w') as mod1, \
        open("HarryPotter1_2.txt", 'w') as mod2:
    data = original.read()
    data_1 = data.replace("Hermione", "Hermione_1")
    data_2 = data.replace("Hermione", "Hermione_2")
    mod1.write(data_1 + "\n")  # "\n", not r"\n": the raw string wrote a literal backslash-n
    mod2.write(data_2 + "\n")

# Concatenate the two modified texts into one long file.
with open("longText.txt", 'w') as longFile:
    with open("HarryPotter1_1.txt", 'r') as textfile:
        for line in textfile:
            longFile.write(line)
    with open("HarryPotter1_2.txt", 'r') as textfile:
        for line in textfile:
            longFile.write(line)

model = None
word_vectors = None
modelName = "ModelTest"
vectorName = "WordVectorsTest"

answer2 = input("Overwrite embedding? (yes or n) ")  # use raw_input on Python 2
if answer2 == 'yes':
    with open("longText.txt", 'r') as longFile:
        # One sentence (a list of tokens) per line. Appending to `sentences`
        # inside the word loop, as before, added the same ever-growing list
        # once per word instead of one list per line.
        sentences = [line.split(" ") for line in longFile]
    model = Word2Vec(sentences, workers=4, window=5, min_count=5)
    model.save(modelName)
    model.wv.save_word2vec_format(vectorName + ".bin", binary=True)
    model.wv.save_word2vec_format(vectorName + ".txt", binary=False)
    model.wv.save(vectorName)
    word_vectors = model.wv
else:
    model = Word2Vec.load(modelName)
    word_vectors = KeyedVectors.load_word2vec_format(vectorName + ".bin", binary=True)
print(model.wv.similarity("Hermione_1","Hermione_2"))
print(model.wv.distance("Hermione_1","Hermione_2"))
print(model.wv.most_similar("Hermione_1"))
print(model.wv.most_similar("Hermione_2"))
I expected Hermione_1 and Hermione_2 to end up almost identical, since they occur in exactly the same contexts, but model.wv.most_similar("Hermione_1") and model.wv.most_similar("Hermione_2") return completely different words, and the similarity between the two is low. Why is that?
Training a word2vec model is partly random, which is why you get different results from run to run. In addition, Hermione_2 only starts to appear in the second half of the text data. As I understand the training process, by the time you introduce the second word, the contexts of Hermione_1, and with them its vector, are already largely established, and the algorithm now tries to work out the difference between the two. Second, you used very short vectors, which probably understates the complexity of the concept space; because of that simplification, the two vectors you get have no overlap at all.
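A minimal sketch of how to reduce both effects (assuming gensim 4.x; the parameter values are illustrative, not taken from the question): shuffling the sentences spreads Hermione_1 and Hermione_2 across the whole corpus instead of confining each to one half, a larger vector size gives the concept space more room, and a fixed seed with a single worker makes runs comparable.

import random
from gensim.models import Word2Vec

# Assumes the longText.txt produced above. Note that gensim is only
# fully deterministic with workers=1 and a fixed PYTHONHASHSEED.
with open("longText.txt", 'r') as longFile:
    sentences = [line.split(" ") for line in longFile]

random.seed(0)
random.shuffle(sentences)  # interleave the Hermione_1 and Hermione_2 halves

model = Word2Vec(
    sentences,
    vector_size=100,  # roomier than a very small vector size
    window=5,
    min_count=5,
    workers=1,        # single worker for reproducibility
    seed=42,
    epochs=20,        # extra passes so both tokens get plenty of updates
)
print(model.wv.similarity("Hermione_1", "Hermione_2"))

Even with identical contexts, the two vectors will never coincide exactly: each token gets its own random initialisation, and negative sampling keeps some noise between them.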