I'm fairly new to machine learning and NLP in general, and I'm trying to wrap my head around how to do proper text pre-processing (cleaning the text).

I've built a custom text classification model with two labels: offensive and clean. Before feeding any input text to my model, I run the following method on it (both before training and at test time). The method removes stopwords and punctuation, and lemmatizes the text.
import spacy
from spacy.lang.en.stop_words import STOP_WORDS
import string

def normalize(text, lowercase, remove_stopwords, remove_punctuation):
    nlp = spacy.load("en_core_web_sm", disable=['parser', 'tagger', 'ner'])
    stops = spacy.lang.en.stop_words.STOP_WORDS
    if lowercase:
        text = text.lower()
    text = nlp(text)
    if remove_punctuation:
        # Drop punctuation tokens.
        text = [t for t in text if t.text not in string.punctuation]
    lemmatized = list()
    for word in text:
        lemma = word.lemma_.strip()
        if lemma:
            # Keep the lemma unless it is a stopword.
            if not remove_stopwords or (remove_stopwords and lemma not in stops):
                lemmatized.append(lemma)
    return " ".join(lemmatized)
Consider the following input string:

Input: You're such a sweet person. All the best!

If I clean that text using my method:

test_text = "You're such a sweet person. All the best!"
test_text = normalize(test_text, lowercase=True, remove_stopwords=True, remove_punctuation=True)

it returns: -PRON- sweet person
Now, I've tested my model with both versions, and these are the results:

You're such a sweet person. All the best
{'PROFANITY': 0.07376033067703247, 'CLEAN': 0.9841629266738892}

-PRON- sweet person
{'PROFANITY': 0.926033616065979, 'CLEAN': 0.010466966778039932}

As you can see, the results differ wildly. If I don't clean the text before feeding it to the model, it gets the profanity/clean scores right: the text is not profane. But if I do clean the text before feeding it to the model, the profanity/clean scores are wrong.
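For reference, the scores above were obtained roughly like this (a minimal sketch, assuming the model was saved to train/profanity/model/ as in the training code below):

import spacy

# Load the trained textcat model from disk.
nlp = spacy.load('train/profanity/model/')

for text in ["You're such a sweet person. All the best!",
             "-PRON- sweet person"]:
    doc = nlp(text)
    # The textcat pipe stores its label scores on doc.cats.
    print(text, doc.cats)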
Am I doing something wrong? I have a dataset of roughly 18k rows consisting of labeled sentences. All sentences look like the examples below and are cleaned with my normalize method before being fed to the model for training:
IS_OFFENSIVE,TEXT
--------------------
1,you are a bitch!
0,you are very sweet!
0,I love you
1,"I think that is correct, idiot!"
This is the code I use to train my model:
import csv
from collections import defaultdict

def convert():
    TRAINING_DATA = defaultdict(list)
    # Open CSV file.
    with open('train/profanity/data/profanity_cleaned_data_cleaned.csv', mode='r') as csv_file:
        csv_reader = csv.DictReader(csv_file)
        line_count = 1
        for row in csv_reader:
            if line_count > 0 and line_count < 500:
                if row['is_offensive'] == '0':
                    CLEAN = bool(1)
                    PROFANITY = bool(0)
                else:
                    CLEAN = bool(0)
                    PROFANITY = bool(1)
                TRAINING_DATA['csv'].append([str(row['text']), {
                    "CLEAN": CLEAN, "PROFANITY": PROFANITY}])
            line_count += 1
    return TRAINING_DATA['csv']
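For reference, given the sample rows above, convert() returns (text, label-dict) pairs shaped like this (showing raw text for readability; in practice the texts are the normalized versions):

[['you are a bitch!', {'CLEAN': False, 'PROFANITY': True}],
 ['you are very sweet!', {'CLEAN': True, 'PROFANITY': False}],
 ['I love you', {'CLEAN': True, 'PROFANITY': False}]]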
import random
import spacy
from tqdm import tqdm
import convert_csv_to_dataset  # module containing the convert() function above

def train():
    output_dir = 'train/profanity/model/'
    TRAINING_DATA = convert_csv_to_dataset.convert()
    nlp = spacy.blank("en")
    category = nlp.create_pipe("textcat")
    category.add_label("PROFANITY")
    category.add_label("CLEAN")
    nlp.add_pipe(category)
    # Start the training
    nlp.begin_training()
    # Loop for 10 iterations
    for itn in range(10):
        # Shuffle the training data
        random.shuffle(TRAINING_DATA)
        losses = {}
        # Batch the examples and iterate over them
        for batch in tqdm(spacy.util.minibatch(TRAINING_DATA, size=1)):
            texts = [nlp(text) for text, entities in batch]
            annotations = [{"cats": entities} for text, entities in batch]
            nlp.update(texts, annotations, losses=losses)
        # if itn % 20 == 0:
        #     print(losses)
    nlp.to_disk(output_dir)
    print("Saved model to", output_dir)
The file profanity_cleaned_data_cleaned.csv has already been preprocessed with the normalize method.
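That preprocessing was done roughly like this (a sketch; the raw input file name here is made up):

import csv

# Read the raw labelled data and write out a copy with normalized text.
# The source file name is hypothetical.
with open('train/profanity/data/profanity_data.csv') as src, \
     open('train/profanity/data/profanity_cleaned_data_cleaned.csv', 'w', newline='') as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=['is_offensive', 'text'])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            'is_offensive': row['is_offensive'],
            'text': normalize(row['text'], lowercase=True,
                              remove_stopwords=True, remove_punctuation=True),
        })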
Looking at your normalization code, it seems you may be throwing the model off by removing so much information and adding elements like -PRON- in. Going from

You're such a sweet person. All the best!

(10 tokens) to

-PRON- sweet person

(5 tokens, since -PRON- tokenizes as -, PRON, -, i.e. three tokens) means that in the "cleaned" version more than half of the tokens consist of this -PRON- text. In other words, the input is heavily weighted towards the -PRON- text, and sweet person barely "matters".

Your training code looks fine, as long as the cleaned csv is the original input cleaned with the same normalize function.
I would suggest the following modifications:

- Lemmatize without producing placeholder tokens like -PRON- (for example, keep the token's original text when its lemma is -PRON-).
- In normalize, add an else branch to the if lemma condition so that a word is still kept when it has no lemma; silently dropping those words could be why so much of your text disappears.
- Note that if line_count > 0 and line_count < 500: means you only ever train on the first ~500 rows of your 18k-row dataset.

A revised normalize implementing the first two points is sketched below.
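Here is a possible revision of normalize along those lines (a sketch; falling back to the token's surface form is one way to avoid the -PRON- artifact, not the only one):

import spacy
from spacy.lang.en.stop_words import STOP_WORDS
import string

def normalize(text, lowercase, remove_stopwords, remove_punctuation):
    nlp = spacy.load("en_core_web_sm", disable=['parser', 'tagger', 'ner'])
    stops = spacy.lang.en.stop_words.STOP_WORDS
    if lowercase:
        text = text.lower()
    doc = nlp(text)
    if remove_punctuation:
        doc = [t for t in doc if t.text not in string.punctuation]
    lemmatized = []
    for word in doc:
        lemma = word.lemma_.strip()
        if lemma and lemma != "-PRON-":
            token = lemma
        else:
            # Fall back to the surface form instead of dropping the word:
            # avoids the -PRON- placeholder and keeps words without a lemma.
            token = word.text.strip()
        if token and (not remove_stopwords or token not in stops):
            lemmatized.append(token)
    return " ".join(lemmatized)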