Text prediction in Python

Posted 2024-05-29 02:36:50

[Image: a screenshot of the DataFrame, with columns textID, text, selected_text, and sentiment]

I have a DataFrame like the one shown in the image. I need to predict the "selected_text" column based on the "text" and "sentiment" columns. How can I train a model on those two columns so that it can then predict "selected_text"?


Tags: data, model, image, text, sentiment
2 Answers

You mention that you understand text classification, but in this case you want to predict the class from two inputs instead of one.

If you want to predict the text (the class) from two inputs, you can either train two models, one on each input, and average their predictions, or concatenate the two inputs into a single input before training and predict from that, as in the sketch below.
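A minimal sketch of the concatenation route (the column names text and sentiment come from the question; the DataFrame contents and the prepend-the-label idea are illustrative):

import pandas as pd

# Hypothetical training data with the columns from the question.
df = pd.DataFrame({
    "text": ["I love this so much", "this is terrible"],
    "sentiment": ["positive", "negative"],
})

# Concatenate the two inputs into one: prepend the sentiment label as
# an extra token so a single text model sees both signals at once.
df["model_input"] = df["sentiment"] + " " + df["text"]
print(df["model_input"].tolist())
# ['positive I love this so much', 'negative this is terrible']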

First of all, the textID column is irrelevant. You can, however, assign a numeric value to the sentiment column (1 for positive, 0 for negative). Then you can create a one-hot encoding for every word in the selected_text column with the following code:

one_hot = []
current_bit = 1
for word in <EVERY_WORD_MENTIONED>:
    # Left-pad with zeros so every encoding has the same fixed length
    # and exactly one bit set, e.g. "001", "010", "100".
    one_hot.append(bin(current_bit)[2:].zfill(<HOW_MANY_WORDS>))
    current_bit = current_bit << 1

# Convert each encoding string into a list of integer bits.
true_one_hot = [[int(bit) for bit in encoding] for encoding in one_hot]

For example, if you have the three words "hello", "hi" and "bye", they become: 001, 010 and 100.
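For the numeric sentiment values mentioned above, a one-line pandas mapping is enough (a sketch assuming your DataFrame is named df; the 1/0 values are the ones suggested earlier):

import pandas as pd

df = pd.DataFrame({"sentiment": ["positive", "negative", "positive"]})

# Map the sentiment labels to numbers: 1 for positive, 0 for negative.
df["sentiment_value"] = df["sentiment"].map({"positive": 1, "negative": 0})
print(df["sentiment_value"].tolist())  # [1, 0, 1]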

You can then do some preprocessing on the actual text part of the DataFrame and feed it into a neural network, such as the character-level LSTM example from the Keras website (heavily modified):

from keras.callbacks import LambdaCallback
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.optimizers import RMSprop
import numpy as np
import random
import sys
import io

path = "<YOUR TEXT FILE OF TEXT HERE>"
with io.open(path, encoding='utf-8') as f:
    text = f.read().lower()
print('corpus length:', len(text))

chars = sorted(list(set(text)))
print('total chars:', len(chars))
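# Forward and reverse lookups between characters and integer indices,
# used below to one-hot encode the input and decode sampled outputs.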
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))

# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i: i + maxlen])
    next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))

print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1


# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))

optimizer = RMSprop(learning_rate=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)


def sample(preds, temperature=1.0):
    # helper function to sample an index from a probability array
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
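    # Temperature rescales the log-probabilities before re-normalising:
    # below 1.0 sharpens the distribution (more conservative picks),
    # above 1.0 flattens it (more diverse, riskier picks).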
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)


def on_epoch_end(epoch, _):
    # Function invoked at end of each epoch. Prints generated text.
    print()
    print('  - Generating text after Epoch: %d' % epoch)

    start_index = random.randint(0, len(text) - maxlen - 1)
    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print('  - diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('  - Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x_pred = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x_pred[0, t, char_indices[char]] = 1.

            preds = model.predict(x_pred, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

print_callback = LambdaCallback(on_epoch_end=on_epoch_end)

model.fit(x, y,
          batch_size=128,
          epochs=60,
          callbacks=[print_callback])
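
To connect the script to the DataFrame from the question, you could dump the selected_text column into the corpus file that path points at. A minimal bridging sketch (the file names train.csv and corpus.txt are illustrative):

import io
import pandas as pd

# Hypothetical: write the selected_text column out as a plain-text
# corpus for the character-level model above to read.
df = pd.read_csv("train.csv")
with io.open("corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(df["selected_text"].astype(str)))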

Hope this is what you were looking for, best wishes :)
