Expanding an array into columns of a dask DataFrame

I have Avro data with the keys "id", "label", and "features". id and label are strings, while features is a buffer of floats.

import dask.bag as db
import numpy as np
from functools import partial

avros = db.read_avro('data.avro')
df = avros.to_dataframe()
# decode each row's byte buffer into a numpy array of float64 values
convert = partial(np.frombuffer, dtype='float64')
X = df.assign(features=lambda x: x.features.apply(convert, meta='float64'))

This leaves me with the following MCVE:

  label id         features
0  good  a  [1.0, 0.0, 0.0]
1   bad  b  [1.0, 0.0, 0.0]
2  good  c  [0.0, 0.0, 0.0]
3   bad  d  [1.0, 0.0, 1.0]
4  good  e  [0.0, 0.0, 0.0]

The result I want is:

  label id   f1   f2   f3
0  good  a  1.0  0.0  0.0
1   bad  b  1.0  0.0  0.0
2  good  c  0.0  0.0  0.0
3   bad  d  1.0  0.0  1.0
4  good  e  0.0  0.0  0.0

I tried a pandas-style approach, i.e. df[['f1','f2','f3']] = df.features.apply(pd.Series), but it does not work in dask the way it does in pandas.
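
For reference, the expansion itself is straightforward in plain pandas; a minimal sketch on the MCVE data (the pdf name and hard-coded column names are only illustrative):

import pandas as pd

pdf = pd.DataFrame({'label': ['good', 'bad'], 'id': ['a', 'b'],
                    'features': [[1.0, 0.0, 0.0], [1.0, 0.0, 1.0]]})
# pop the list column and expand it into one float column per feature
feats = pd.DataFrame(pdf.pop('features').tolist(), index=pdf.index,
                     columns=['f1', 'f2', 'f3'])
pdf = pdf.join(feats)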

I can work around it with a loop like this:

# one pass over the whole dataframe per feature column
for i in range(len(features)):  # features: a single row's feature array
    df[f'f{i}'] = df.features.map(lambda x, i=i: x[i])  # i=i binds the index before lazy evaluation

But in the real use case I have thousands of features, which means iterating over the dataset thousands of times.

What is the best way to achieve the desired result?


1 Answer
In [68]: import string
    ...: import numpy as np
    ...: import pandas as pd

In [69]: M, N = 100, 100
    ...: labels = np.random.choice(['good', 'bad'], size=M)
    ...: ids = np.random.choice(list(string.ascii_lowercase), size=M)
    ...: features = np.empty((M,), dtype=object)
    ...: features[:] = list(map(list, np.random.randn(M, N)))
    ...: df = pd.DataFrame([labels, ids, features], index=['label', 'id', 'features']).T
    ...: df1 = df.copy()

In [70]: %%time
    ...: columns = [f"f{i:04d}" for i in range(N)]
    ...: features = pd.DataFrame(list(map(np.asarray, df1.pop('features').to_numpy())), index=df.index, columns=columns)
    ...: df1 = pd.concat([df1, features], axis=1)
Wall time: 13.9 ms

In [71]: M, N = 1000, 1000
    ...: labels = np.random.choice(['good', 'bad'], size=M)
    ...: ids = np.random.choice(list(string.ascii_lowercase), size=M)
    ...: features = np.empty((M,), dtype=object)
    ...: features[:] = list(map(list, np.random.randn(M, N)))
    ...: df = pd.DataFrame([labels, ids, features], index=['label', 'id', 'features']).T
    ...: df1 = df.copy()

In [72]: %%time
    ...: columns = [f"f{i:04d}" for i in range(N)]
    ...: features = pd.DataFrame(list(map(np.asarray, df1.pop('features').to_numpy())), index=df.index, columns=columns)
    ...: df1 = pd.concat([df1, features], axis=1)
Wall time: 627 ms

In [73]: df1.shape
Out[73]: (1000, 1002)

Edit: that is about 2x faster than the original per-column loop, timed here for comparison:

In [79]: df2 = df.copy()

In [80]: %%time
    ...: features = df2.pop('features')
    ...: for i in range(N):
    ...:     df2[f'f{i:04d}'] = features.map(lambda x: x[i])
    ...:     
Wall time: 1.46 s

In [81]: df1.equals(df2)
Out[81]: True

Edit 2: a faster way to build the DataFrame, roughly an 8x improvement over the original:

In [22]: df1 = df.copy()

In [23]: %%time
    ...: features = pd.DataFrame({f"f{i:04d}": np.asarray(row) for i, row in enumerate(df1.pop('features').to_numpy())})
    ...: df1 = pd.concat([df1, features], axis=1)
Wall time: 165 ms
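
To carry this back to the dask DataFrame from the question, the same wide construction can be applied once per partition; a minimal sketch, assuming ddf is the dask DataFrame from the question (after the frombuffer conversion), N is the known feature count, and the helper name and meta layout are illustrative rather than taken from the original post:

import numpy as np
import pandas as pd

N = 3  # assumed number of features; thousands in the real use case

def expand_features(part: pd.DataFrame) -> pd.DataFrame:
    # build every feature column for this partition in a single pass
    feats = pd.DataFrame(np.vstack(part['features'].to_numpy()),
                         index=part.index,
                         columns=[f'f{i + 1}' for i in range(N)])
    return pd.concat([part.drop(columns='features'), feats], axis=1)

# meta describes the output schema so dask can keep the graph lazy
meta = {'label': 'object', 'id': 'object',
        **{f'f{i + 1}': 'float64' for i in range(N)}}
wide = ddf.map_partitions(expand_features, meta=meta)

Each partition is then traversed only once regardless of how many feature columns there are; wide.head() or wide.compute() materializes the result.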
