Text data preprocessing in Python

Posted 2024-03-28 08:38:20


I am extracting positive, negative, and neutral keywords in Python. I have 10,000 review comments in a remarks .txt file (UTF-8 encoded). I want to import the text file, read the individual comment lines, tokenize the words from the comments in column c2, and store the result in the adjacent column. I wrote a small Python program that defines and calls a get_keywords() function, but I am stuck on how to pass each row of the DataFrame as an argument and iterate so that the keywords are produced and stored in the adjacent column.

The code does not produce the expected "tokens" column for all of the processed words in the df DataFrame.

    import nltk
    import pandas as pd
    import re
    import string
    from nltk import sent_tokenize, word_tokenize
    from nltk.corpus import stopwords
    from nltk.stem.porter import PorterStemmer

    remarks = pd.read_csv('/Users/ZKDN0YU/Desktop/comments/New comments/ccomments.txt')
    df = pd.DataFrame(remarks, columns=['c2'])
    df.head(50)
    df.tail(50)

    filename = 'ccomments.txt'
    file = open(filename, 'rt', encoding="utf-8")
    text = file.read()
    file.close()

    def get_keywords(row):
        # split into tokens by white space
        tokens = text.split(str(row))
        # prepare regex for char filtering
        re_punc = re.compile('[%s]' % re.escape(string.punctuation))
        # remove punctuation from each word
        tokens = [re_punc.sub('', w) for w in tokens]
        # remove remaining tokens that are not alphabetic
        tokens = [word for word in tokens if word.isalpha()]
        # filter out stop words
        stop_words = set(stopwords.words('english'))
        tokens = [w for w in tokens if not w in stop_words]
        # stemming of words
        porter = PorterStemmer()
        stemmed = [porter.stem(word) for word in tokens]
        # filter out short tokens
        tokens = [word for word in tokens if len(word) > 1]
        return tokens

    df['tokens'] = df.c2.apply(lambda row: get_keywords(row['c2']), axis=1)
    for index, row in df.iterrows():
        print(index, row['c2'], "tokens : {}".format(row['tokens']))

Expected output: a modified comments file containing columns 1) index, 2) c2 (the comments), and 3) the tokenized words, for all rows of the DataFrame of 10,000 comments.


Tags: in, from, import, re, txt, df, for, get
1 Answer

Posted 2024-03-28 08:38:20

Assuming your text file ccomments.txt has no header (i.e., the data starts on the first line) and contains only one column of data (i.e., the file holds nothing but the comments), the code below returns a list of words for each comment. Note that the original code split the entire file's text on each row value, and it called Series.apply with an axis argument, which Series.apply does not accept (that keyword belongs to DataFrame.apply); the version below instead reads each comment into its own row and tokenizes the row string itself.
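
One setup note: the stop-word filtering relies on NLTK's stopwords corpus, which must be downloaded once per environment (skip this step if the corpus is already installed):

import nltk
# one-time download of the English stop-word list used below
nltk.download('stopwords')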

import nltk
import pandas as pd
import re
import string
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer


def get_keywords(row):
    # split into tokens by white space
    tokens = row.split()
    # prepare regex for char filtering
    re_punc = re.compile('[%s]' % re.escape(string.punctuation))
    # remove punctuation from each word
    tokens = [re_punc.sub('', w) for w in tokens]
    # remove remaining tokens that are not alphabetic
    tokens = [word for word in tokens if word.isalpha()]
    # filter out stop words
    stop_words = set(stopwords.words('english'))
    tokens = [w for w in tokens if w not in stop_words]
    # stem the remaining words (the original assigned this to an unused
    # 'stemmed' variable, so the stemming result was silently discarded)
    porter = PorterStemmer()
    tokens = [porter.stem(word) for word in tokens]
    # filter out short tokens
    tokens = [word for word in tokens if len(word) > 1]
    return tokens


df = pd.read_csv('ccomments.txt', header=None, names=['c2'])
df['tokens'] = df.c2.apply(get_keywords)
for index, row in df.iterrows():
    print(index, row['c2'], "tokens : {}".format(row['tokens']))
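
To also produce the modified file described in the question (index, c2, and tokenized words for every row), the resulting DataFrame can be written back out. A minimal sketch, assuming a CSV output is acceptable (the file name ccomments_tokenized.csv is illustrative, not from the original post):

# write the index, c2 and tokens columns to a new UTF-8 CSV file
df.to_csv('ccomments_tokenized.csv', index_label='index', encoding='utf-8')

Note that the tokens column holds Python lists, so the CSV cells will contain their string representations (e.g. ['great', 'product']); that is fine for inspection, but re-reading the file later requires parsing those strings back into lists.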
