I am really new to machine learning. I am examining code that separates spam and ham values in e-mails. I ran into a problem when setting the code up for another dataset: my dataset does not just have ham or spam values, it has two different classification values (age and gender). When I try to use the two classification values in the code block below, I get an error, too many values to unpack. How can I split all of my values?
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(messages_bow, import_data['age'], import_data['gender'], test_size = 0.20, random_state = 0)
Whole code:
import numpy as np
import pandas
import nltk
from nltk.corpus import stopwords
import string
# Import Data.
import_data = pandas.read_csv('/root/Desktop/%20/%100.csv' , encoding='cp1252')
# To See Columns Headers.
print(import_data.columns)
# To Remove Duplications.
import_data.drop_duplicates(inplace = True)
# To Find Data Size.
print(import_data.shape)
#Tokenization (a list of tokens), will be used as the analyzer
#1.Punctuations are [!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~]
#2.Stop words in natural language processing, are useless words (data).
def process_text(text):
    '''
    What will be covered:
    1. Remove punctuation
    2. Remove stopwords
    3. Return list of clean text words
    '''
    #1
    nopunc = [char for char in text if char not in string.punctuation]
    nopunc = ''.join(nopunc)
    #2
    clean_words = [word for word in nopunc.split() if word.lower() not in stopwords.words('english')]
    #3
    return clean_words
#Show the Tokenization (a list of tokens )
print(import_data['text'].head().apply(process_text))
# Convert the text into a matrix of token counts.
from sklearn.feature_extraction.text import CountVectorizer
messages_bow = CountVectorizer(analyzer=process_text).fit_transform(import_data['text'])
#Split data into 80% training & 20% testing data sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(messages_bow, import_data['gender'], import_data['frequency'], test_size = 0.20, random_state = 0)
#Get the shape of messages_bow
print(messages_bow.shape)
train_test_split splits each argument passed to it into a training set and a test set. Since you are splitting three different kinds of data, you need six variables to unpack the result:
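A minimal sketch of what that unpacking could look like, assuming the age and gender columns from the question (the names age_train, age_test, gender_train and gender_test are only illustrative):

from sklearn.model_selection import train_test_split

# train_test_split returns one train/test pair per array passed in:
# 3 arrays in -> 6 outputs to unpack.
X_train, X_test, age_train, age_test, gender_train, gender_test = train_test_split(
    messages_bow,
    import_data['age'],
    import_data['gender'],
    test_size=0.20,
    random_state=0)

If you would rather keep the original four-variable unpacking, you can instead pass both label columns as a single DataFrame, e.g. import_data[['age', 'gender']], so that train_test_split returns just one train/test pair for the labels.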