Applying scipy/sklearn sparse matrix decomposition to document classification

Posted 2024-04-24 20:47:24


I'm trying to classify a large corpus (4 million documents) and keep hitting memory errors with the standard scikit-learn approaches. After cleaning my data I have a very sparse matrix with roughly one million features (words). My first thought was sklearn.decomposition.TruncatedSVD, but a memory error prevents me from calling .fit() with a large enough k (the largest I can manage captures only 25% of the data's variance). I also tried to follow the scikit-learn classification approach here, but still ran out of memory at the KNN step. I would now like to do the matrix transformation out of core by hand, applying PCA/SVD to reduce the dimensionality, but I first need a way to compute the eigenvectors. I was hoping to use scipy.sparse.linalg.eigs; is there a way to compute the eigenvector matrix needed to complete the code below?

from sklearn.feature_extraction.text import TfidfVectorizer
import scipy.sparse as sp
import numpy as np
import cPickle as pkl
from sklearn.neighbors import KNeighborsClassifier

def pickleLoader(pklFile):
    try:
        while True:
            yield pkl.load(pklFile)
    except EOFError:
        pass

#sample docs
docs = ['orange green','purple green','green chair apple fruit','raspberry pie banana yellow','green raspberry hat ball','test row green apple']
classes = [1,0,1,0,0,1]
#first k eigenvectors to keep
k = 3

#returns sparse matrix
tfidf = TfidfVectorizer()
tfs = tfidf.fit_transform(docs)

#write sparse matrix to file
pkl.dump(tfs, open('pickleTest.p', 'wb'))



#NEEDED - THE LINE THAT CALCULATES the top k eigenvectors (defining `eigenvectors` used below)
del tfs

x = np.empty([len(docs),k])

#iterate over sparse matrix
with open('D:\\GitHub\\Avitro-Classification\\pickleTest.p', 'rb') as f:
    rowCounter = 0
    for dataRow in pickleLoader(f):
        for col in range(k):
            x[rowCounter, col] = np.sum(dataRow * eigenvectors[:, col])
        rowCounter += 1

clf = KNeighborsClassifier(n_neighbors=10)
clf.fit(x, classes)
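The missing eigenvector step can likely be filled with scipy.sparse.linalg.svds, which computes the top-k singular triplets of a sparse matrix without ever densifying it: the rows of Vt are the eigenvectors of the term-term matrix X.T @ X, so projecting with Vt.T reproduces what the manual loop above computes. A minimal sketch on the sample docs (this still holds the sparse matrix in memory, but avoids the dense scratch arrays that cause the randomized-SVD failure):

```python
from scipy.sparse.linalg import svds
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ['orange green', 'purple green', 'green chair apple fruit',
        'raspberry pie banana yellow', 'green raspberry hat ball',
        'test row green apple']
k = 3

tfs = TfidfVectorizer().fit_transform(docs)  # sparse (6, n_terms)

# Top-k singular triplets; svds operates directly on sparse input.
# Rows of Vt are eigenvectors of tfs.T @ tfs.
U, s, Vt = svds(tfs, k=k)

# Project every document onto the k components (same result as the
# per-row, per-column loop in the question).
x = tfs.dot(Vt.T)  # dense (6, k)
print(x.shape)
```

Note that svds requires k < min(X.shape) and returns the singular values in ascending order, so reorder if the convention matters downstream.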

Any help or guidance would be greatly appreciated! If there is a better way to do this I'd be happy to try another approach, but I want to try KNN on this large sparse dataset, ideally with some dimensionality reduction (it performed very well on the small test dataset I ran, and I don't want to lose that performance to a silly memory constraint!).
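For comparison, the pattern scikit-learn itself documents for corpora that don't fit in memory skips the dense decomposition entirely: hash features into a fixed-width sparse space with HashingVectorizer (stateless, no vocabulary held in RAM) and train a linear model batch by batch with partial_fit. KNeighborsClassifier has no partial_fit, so this swaps in a linear classifier; the mini-batches below are made-up stand-ins for a generator streaming pickled batches from disk:

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Fixed-width hashed feature space: the vectorizer keeps no vocabulary,
# so it is safe to reuse across arbitrarily many streamed batches.
vec = HashingVectorizer(n_features=2**18)
clf = SGDClassifier(random_state=42)

# Stand-in for a generator that streams document batches from disk.
batches = [
    (['orange green', 'green chair apple fruit'], [1, 1]),
    (['purple green', 'raspberry pie banana yellow'], [0, 0]),
    (['green raspberry hat ball', 'test row green apple'], [0, 1]),
]

all_classes = np.array([0, 1])  # must be supplied on the first call
for texts, labels in batches:
    X = vec.transform(texts)    # sparse batch, built on the fly
    clf.partial_fit(X, labels, classes=all_classes)

preds = clf.predict(vec.transform(['green apple', 'banana yellow']))
print(preds)
```

Memory here is bounded by the batch size, not the corpus size, which is why this route scales to millions of documents.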

EDIT: Here is the code I first tried to run, which sent me down the path of my own out-of-core sparse PCA implementation. Any help fixing this memory error would make all of this much simpler!

[code block not preserved in the page; the failing lines are visible in the traceback below]

With the output:

(3995803, 923633)
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-27-c0db86bd3830> in <module>()
     16 
     17 svd = TruncatedSVD(algorithm='randomized', n_components=50000, random_state=42)
---> 18 output = svd.fit_transform(X_words)

C:\Python27\lib\site-packages\sklearn\decomposition\truncated_svd.pyc in fit_transform(self, X, y)
    173             U, Sigma, VT = randomized_svd(X, self.n_components,
    174                                           n_iter=self.n_iter,
--> 175                                           random_state=random_state)
    176         else:
    177             raise ValueError("unknown algorithm %r" % self.algorithm)

C:\Python27\lib\site-packages\sklearn\utils\extmath.pyc in randomized_svd(M, n_components, n_oversamples, n_iter, transpose, flip_sign, random_state, n_iterations)
    297         M = M.T
    298 
--> 299     Q = randomized_range_finder(M, n_random, n_iter, random_state)
    300 
    301     # project M to the (k + p) dimensional space using the basis vectors

C:\Python27\lib\site-packages\sklearn\utils\extmath.pyc in randomized_range_finder(A, size, n_iter, random_state)
    212 
    213     # generating random gaussian vectors r with shape: (A.shape[1], size)
--> 214     R = random_state.normal(size=(A.shape[1], size))
    215 
    216     # sampling the range of A using by linear projection of r

C:\Python27\lib\site-packages\numpy\random\mtrand.pyd in mtrand.RandomState.normal (numpy\random\mtrand\mtrand.c:9968)()

C:\Python27\lib\site-packages\numpy\random\mtrand.pyd in mtrand.cont2_array_sc (numpy\random\mtrand\mtrand.c:2370)()

MemoryError: 
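The traceback shows the allocation that actually fails: the 'randomized' algorithm draws a dense Gaussian matrix of shape (n_features, n_components + n_oversamples), here roughly 923633 × 50010 doubles, on the order of 350 GB. Note also that any SVD with n_components=50000 must store a dense components_ array of shape (50000, 923633), which is similarly huge, so a much smaller k is needed regardless of algorithm. With a modest k, algorithm='arpack' routes through scipy.sparse.linalg.svds and never materialises the dense random matrix. A sketch on a toy sparse matrix (the shapes are stand-ins, not the original 4M-document matrix):

```python
import scipy.sparse as sp
from sklearn.decomposition import TruncatedSVD

# Toy stand-in for the tf-idf matrix: 1000 docs x 5000 terms, ~0.1% dense.
X_words = sp.random(1000, 5000, density=0.001, format='csr', random_state=42)

# 'arpack' calls scipy.sparse.linalg.svds on the sparse input and avoids
# the dense (n_features, n_components) Gaussian draw made by 'randomized'.
svd = TruncatedSVD(n_components=50, algorithm='arpack', random_state=42)
output = svd.fit_transform(X_words)

print(output.shape)                         # (1000, 50)
print(svd.explained_variance_ratio_.sum())  # fraction of variance retained
```

The explained_variance_ratio_ sum is a quick check on whether a given k keeps enough of the data's variance before committing to the full corpus.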

1 Answer

Answered 2024-04-24 20:47:24

Out-of-core SVD or PCA on sparse data is not implemented in scikit-learn 0.15.2. You may want to try gensim instead.

EDIT: I forgot to specify "on sparse data" in my first reply.
