Adding a very repetitive matrix to a sparse matrix in numpy/scipy?
I'm trying to implement a function in NumPy/SciPy that computes the Jensen-Shannon divergence between a single (training) vector and a large number of other (observation) vectors. The observation vectors are stored in a very large (500,000 x 65,536) SciPy sparse matrix (a dense matrix would not fit in memory).
Given the algorithm, for each observation vector Oi I need to compute the sum T + Oi, where T is the training vector. I couldn't find a way to do this using NumPy's broadcasting rules, because sparse matrices don't seem to support them (if T is left as a dense array, SciPy first tries to make the sparse matrix dense, which runs out of memory; if T is made into a sparse matrix, T + Oi fails because the shapes are inconsistent).
Currently I'm taking the grossly inefficient step of tiling the training vector into a 500,000 x 65,536 sparse matrix:
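A tiny sketch of the failure mode (toy shapes standing in for the real data): sparse addition in SciPy requires identical shapes, so a one-row sparse T cannot be broadcast against a many-row sparse matrix:

```python
import numpy as np
import scipy.sparse as sp

# Toy stand-ins for the observation matrix and the training vector.
O = sp.csr_matrix(np.array([[1., 0., 2.],
                            [0., 3., 0.]]))   # shape (2, 3)
T = sp.csr_matrix(np.array([[1., 1., 0.]]))   # shape (1, 3)

# Sparse + sparse does not broadcast: the shapes must match exactly.
try:
    O + T
except ValueError as e:
    print("sparse add failed:", e)
```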
training = sp.csr_matrix(training.astype(np.float32))
tindptr = np.arange(0, len(training.indices)*observations.shape[0]+1, len(training.indices), dtype=np.int32)
tindices = np.tile(training.indices, observations.shape[0])
tdata = np.tile(training.data, observations.shape[0])
mtraining = sp.csr_matrix((tdata, tindices, tindptr), shape=observations.shape)
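At toy scale the same tiling construction can be checked against an explicit stack of copies (the small shapes here are made up for illustration):

```python
import numpy as np
import scipy.sparse as sp

observations = sp.csr_matrix(np.eye(4, 6, dtype=np.float32))  # 4 toy rows
training = sp.csr_matrix(
    np.array([[0., 2., 0., 0., 5., 0.]], dtype=np.float32))

# Same tiling construction as above, at toy scale.
tindptr = np.arange(0, len(training.indices) * observations.shape[0] + 1,
                    len(training.indices), dtype=np.int32)
tindices = np.tile(training.indices, observations.shape[0])
tdata = np.tile(training.data, observations.shape[0])
mtraining = sp.csr_matrix((tdata, tindices, tindptr),
                          shape=observations.shape)

# Every row of mtraining is a copy of the training vector.
expected = sp.vstack([training] * observations.shape[0]).tocsr()
assert np.array_equal(mtraining.toarray(), expected.toarray())
```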
but this uses a huge amount of memory (around 6 GB), even though it only stores about 1,500 "real" elements. It is also quite slow to construct.
I tried to get clever by using stride_tricks so that the CSR matrix's indptr and data members don't take extra memory for the repeated data:
training = sp.csr_matrix(training)
mtraining = sp.csr_matrix(observations.shape, dtype=np.int32)
tdata = training.data
vdata = np.lib.stride_tricks.as_strided(tdata, (mtraining.shape[0], tdata.size), (0, tdata.itemsize))
indices = training.indices
vindices = np.lib.stride_tricks.as_strided(indices, (mtraining.shape[0], indices.size), (0, indices.itemsize))
mtraining.indptr = np.arange(0, len(indices)*mtraining.shape[0]+1, len(indices), dtype=np.int32)
mtraining.data = vdata
mtraining.indices = vindices
But that doesn't work, because the strided views mtraining.data and mtraining.indices have the wrong shape (and according to this answer there's no way to make them the right shape). Trying to make them look flat using the .flat iterator fails because it doesn't look enough like an array (it has no dtype member, for example), and using flatten() ends up making a copy.
Is there any way to do this?
1 Answer
Another option, which I hadn't thought of before, is to implement the sum yourself in sparse format, so that you can take full advantage of the periodic nature of your arrays. This turns out to be very easy to do, if you abuse this peculiar behavior of scipy's sparse matrices:
>>> import numpy as np
>>> import scipy.sparse as sps
>>> a = sps.csr_matrix([1,2,3,4])
>>> a.data
array([1, 2, 3, 4])
>>> a.indices
array([0, 1, 2, 3])
>>> a.indptr
array([0, 4])
>>> b = sps.csr_matrix((np.array([1, 2, 3, 4, 5]),
... np.array([0, 1, 2, 3, 0]),
... np.array([0, 5])), shape=(1, 4))
>>> b
<1x4 sparse matrix of type '<type 'numpy.int32'>'
with 5 stored elements in Compressed Sparse Row format>
>>> b.todense()
matrix([[6, 2, 3, 4]])
So you don't even need to find the coincidences between your training vector and each row of the observation matrix to sum them up: just cram all the data in with the right pointers, and whatever needs to get summed will get summed when the data is accessed.
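As an illustrative (deliberately unvectorized) sketch of that idea, with made-up toy inputs A and v: append the vector's data and column indices after each row's own entries, and let the CSR format sum the duplicate columns when the data is accessed:

```python
import numpy as np
import scipy.sparse as sps

A = sps.csr_matrix(np.array([[1., 0., 2.],
                             [0., 3., 0.],
                             [4., 0., 0.]]))
v = sps.csr_matrix(np.array([[10., 0., 20.]]))

rows = A.shape[0]
nnz_per_row = np.diff(A.indptr)

# For each row, concatenate the row's entries with the vector's entries;
# duplicate column indices are summed implicitly on access.
data = np.concatenate(
    [np.concatenate([A.data[A.indptr[i]:A.indptr[i + 1]], v.data])
     for i in range(rows)])
indices = np.concatenate(
    [np.concatenate([A.indices[A.indptr[i]:A.indptr[i + 1]], v.indices])
     for i in range(rows)])
indptr = np.concatenate([[0], np.cumsum(nnz_per_row + v.nnz)])

S = sps.csr_matrix((data, indices, indptr), shape=A.shape)
assert np.array_equal(S.toarray(), A.toarray() + v.toarray())
```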
EDIT
Given that the first piece of code was rather slow, you can trade memory for speed as follows:
def csr_add_sparse_vec(sps_mat, sps_vec):
    """Adds a sparse vector to every row of a sparse matrix"""
    # No checks done, but both arguments should be sparse matrices in CSR
    # format, both should have the same number of columns, and the vector
    # should be a vector and have only one row.
    rows, cols = sps_mat.shape
    nnz_vec = len(sps_vec.data)
    nnz_per_row = np.diff(sps_mat.indptr)
    longest_row = np.max(nnz_per_row)
    old_data = np.zeros((rows * longest_row,), dtype=sps_mat.data.dtype)
    old_cols = np.zeros((rows * longest_row,), dtype=sps_mat.indices.dtype)
    data_idx = np.arange(longest_row) < nnz_per_row[:, None]
    data_idx = data_idx.reshape(-1)
    old_data[data_idx] = sps_mat.data
    old_cols[data_idx] = sps_mat.indices
    old_data = old_data.reshape(rows, -1)
    old_cols = old_cols.reshape(rows, -1)
    new_data = np.zeros((rows, longest_row + nnz_vec,),
                        dtype=sps_mat.data.dtype)
    new_data[:, :longest_row] = old_data
    del old_data
    new_cols = np.zeros((rows, longest_row + nnz_vec,),
                        dtype=sps_mat.indices.dtype)
    new_cols[:, :longest_row] = old_cols
    del old_cols
    new_data[:, longest_row:] = sps_vec.data
    new_cols[:, longest_row:] = sps_vec.indices
    new_data = new_data.reshape(-1)
    new_cols = new_cols.reshape(-1)
    new_pointer = np.arange(0, (rows + 1) * (longest_row + nnz_vec),
                            longest_row + nnz_vec)
    ret = sps.csr_matrix((new_data, new_cols, new_pointer),
                         shape=sps_mat.shape)
    ret.eliminate_zeros()
    return ret
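The `data_idx` mask in the function above is doing a pad-to-rectangle scatter; in isolation (with made-up numbers) it looks like this:

```python
import numpy as np

nnz_per_row = np.array([2, 0, 3])    # nonzeros in each of 3 rows
longest_row = nnz_per_row.max()      # 3

# True where a row still has a real entry, False in the zero padding.
mask = np.arange(longest_row) < nnz_per_row[:, None]

# Scatter a flat CSR data array into a padded (rows x longest_row) buffer.
padded = np.zeros((3, longest_row))
padded[mask] = np.array([10., 20., 1., 2., 3.])
assert np.array_equal(padded, [[10., 20., 0.],
                               [0., 0., 0.],
                               [1., 2., 3.]])
```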
It is not as fast as before, but it can process 10,000 rows in about 1 second:
In [2]: a
Out[2]:
<10000x65536 sparse matrix of type '<type 'numpy.float64'>'
with 15000000 stored elements in Compressed Sparse Row format>
In [3]: b
Out[3]:
<1x65536 sparse matrix of type '<type 'numpy.float64'>'
with 1500 stored elements in Compressed Sparse Row format>
In [4]: csr_add_sparse_vec(a, b)
Out[4]:
<10000x65536 sparse matrix of type '<type 'numpy.float64'>'
with 30000000 stored elements in Compressed Sparse Row format>
In [5]: %timeit csr_add_sparse_vec(a, b)
1 loops, best of 3: 956 ms per loop
EDIT This code is very, very slow:
def csr_add_sparse_vec(sps_mat, sps_vec):
    """Adds a sparse vector to every row of a sparse matrix"""
    # No checks done, but both arguments should be sparse matrices in CSR
    # format, both should have the same number of columns, and the vector
    # should be a vector and have only one row.
    rows, cols = sps_mat.shape
    new_data = sps_mat.data
    new_pointer = sps_mat.indptr.copy()
    new_cols = sps_mat.indices
    aux_idx = np.arange(rows + 1)
    for value, col in zip(sps_vec.data, sps_vec.indices):  # itertools.izip in Python 2
        new_data = np.insert(new_data, new_pointer[1:], [value] * rows)
        new_cols = np.insert(new_cols, new_pointer[1:], [col] * rows)
        new_pointer += aux_idx
    return sps.csr_matrix((new_data, new_cols, new_pointer),
                          shape=sps_mat.shape)
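The slow loop leans on `np.insert` with an array of positions; a minimal standalone demo of that call (made-up numbers):

```python
import numpy as np

data = np.array([1, 2, 3, 4])
indptr = np.array([0, 2, 3, 4])   # 3 rows holding 2, 1, 1 entries

# Insert one copy of 9 at the end of every row, i.e. before each
# boundary listed in indptr[1:], in a single pass.
out = np.insert(data, indptr[1:], [9] * 3)
assert np.array_equal(out, [1, 2, 9, 3, 9, 4, 9])
```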