Applying different functions to different items in a group object: Python pandas

Posted 2024-04-23 23:10:18


Suppose I have a DataFrame like the following:

In [1]: test_dup_df

Out[1]:
                  exe_price exe_vol flag 
2008-03-13 14:41:07  84.5    200     yes
2008-03-13 14:41:37  85.0    10000   yes
2008-03-13 14:41:38  84.5    69700   yes
2008-03-13 14:41:39  84.5    1200    yes
2008-03-13 14:42:00  84.5    1000    yes
2008-03-13 14:42:08  84.5    300     yes
2008-03-13 14:42:10  84.5    88100   yes
2008-03-13 14:42:10  84.5    11900   yes
2008-03-13 14:42:15  84.5    5000    yes
2008-03-13 14:42:16  84.5    3200    yes 

I want to group the duplicated rows at time 14:42:10 and apply different functions to exe_price and exe_vol (for example, sum exe_vol and compute the volume-weighted average of exe_price). I know I can do

In [2]: grouped = test_dup_df.groupby(level=0)

to group on the duplicated index and then use the first() or last() function to pick the first or last row, but that is not really what I want.

Is there a way to group the values and apply different (self-written) functions to different columns?
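
Conceptually, what I am after is something like this rough sketch (vwap is just a name I made up for the weighted-average helper, not an existing pandas function):

def vwap(sub):
    # volume-weighted average price for one group of duplicated timestamps
    return (sub['exe_price'] * sub['exe_vol']).sum() / sub['exe_vol'].sum()

summed_vol = test_dup_df.groupby(level=0)['exe_vol'].sum()
avg_price = test_dup_df.groupby(level=0).apply(vwap)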


3 Answers

I like @waitingkuo's answer because it is very clear and readable.

I'll keep this here anyway, since it does appear to be faster, at least with pandas version 0.10.0. That may (hopefully) change in the future, so be sure to rerun the benchmark, especially if you are using a different version of pandas.

import pandas as pd
import io
import timeit

data = '''\
date time       exe_price    exe_vol flag
2008-03-13 14:41:07  84.5    200     yes
2008-03-13 14:41:37  85.0    10000   yes
2008-03-13 14:41:38  84.5    69700   yes
2008-03-13 14:41:39  84.5    1200    yes
2008-03-13 14:42:00  84.5    1000    yes
2008-03-13 14:42:08  84.5    300     yes
2008-03-13 14:42:10  10    88100   yes
2008-03-13 14:42:10  100    11900   yes
2008-03-13 14:42:15  84.5    5000    yes
2008-03-13 14:42:16  84.5    3200    yes'''

df = pd.read_table(io.StringIO(data), sep=r'\s+', parse_dates=[[0, 1]],
                   index_col=0)


def func(subf):
    # aggregate one timestamp's rows: total volume and volume-weighted price
    exe_vol = subf['exe_vol'].sum()
    exe_price = ((subf['exe_price'] * subf['exe_vol']).sum()
                 / exe_vol)
    flag = True
    return pd.Series([exe_price, exe_vol, flag],
                     index=['exe_price', 'exe_vol', 'flag'])

def using_apply():
    return df.groupby(df.index).apply(func)

def using_helper_column():
    # precompute price*volume so the group aggregation can use plain sums
    df['weight'] = df['exe_price'] * df['exe_vol']
    grouped = df.groupby(level=0, group_keys=True)
    result = grouped.agg({'weight': 'sum', 'exe_vol': 'sum'})
    result['exe_price'] = result['weight'] / result['exe_vol']
    result['flag'] = True
    result = result.drop(['weight'], axis=1)
    return result

result = using_apply()
print(result)
result = using_helper_column()
print(result)

time_apply = timeit.timeit('m.using_apply()',
                      'import __main__ as m ',
                      number=1000)
time_helper = timeit.timeit('m.using_helper_column()',
                      'import __main__ as m ',
                      number=1000)
print('using_apply: {t}'.format(t = time_apply))
print('using_helper_column: {t}'.format(t = time_helper))

This yields

                     exe_vol  exe_price  flag
date_time                                    
2008-03-13 14:41:07      200      84.50  True
2008-03-13 14:41:37    10000      85.00  True
2008-03-13 14:41:38    69700      84.50  True
2008-03-13 14:41:39     1200      84.50  True
2008-03-13 14:42:00     1000      84.50  True
2008-03-13 14:42:08      300      84.50  True
2008-03-13 14:42:10   100000      20.71  True
2008-03-13 14:42:15     5000      84.50  True
2008-03-13 14:42:16     3200      84.50  True
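
As a quick sanity check on the collapsed 14:42:10 row (which uses the modified prices 10 and 100 from the input above):

(10 * 88100 + 100 * 11900) / (88100 + 11900)   # 2071000 / 100000 = 20.71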

Timing benchmarks:

using_apply: 3.0081038475
using_helper_column: 1.35300707817

I'm not too familiar with pandas, but in pure numpy you could do:

tot_vol = np.sum(grouped['exe_vol'])
avg_price = np.average(grouped['exe_price'], weights=grouped['exe_vol'])
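
To wire those reductions into the grouped DataFrame, one option is to call them from a function passed to apply; a minimal sketch, assuming pandas/numpy are imported as pd/np and numpy_agg is just an illustrative name:

import numpy as np
import pandas as pd

def numpy_agg(sub):
    # sub is the sub-frame for one timestamp; reduce each column with numpy
    tot_vol = np.sum(sub['exe_vol'].values)
    avg_price = np.average(sub['exe_price'].values, weights=sub['exe_vol'].values)
    return pd.Series({'exe_vol': tot_vol, 'exe_price': avg_price})

test_dup_df.groupby(level=0).apply(numpy_agg)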

Apply your own function:

In [12]: def func(x):
             exe_price = (x['exe_price']*x['exe_vol']).sum() / x['exe_vol'].sum()
             exe_vol = x['exe_vol'].sum()
             flag = True        
             return Series([exe_price, exe_vol, flag], index=['exe_price', 'exe_vol', 'flag'])


In [13]: test_dup_df.groupby(test_dup_df.index).apply(func)
Out[13]:
                    exe_price exe_vol  flag
date_time                                  
2008-03-13 14:41:07      84.5     200  True 
2008-03-13 14:41:37        85   10000  True
2008-03-13 14:41:38      84.5   69700  True
2008-03-13 14:41:39      84.5    1200  True
2008-03-13 14:42:00      84.5    1000  True
2008-03-13 14:42:08      84.5     300  True
2008-03-13 14:42:10     20.71  100000  True
2008-03-13 14:42:15      84.5    5000  True
2008-03-13 14:42:16      84.5    3200  True
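
One thing to be aware of: because the function returns a Series mixing floats, an integer and a bool, the resulting columns come back with object dtype (which is why exe_price prints as 85 rather than 85.0 above). If you need numeric columns afterwards, a cast along these lines should work (a sketch; out is just the variable holding the result):

out = test_dup_df.groupby(test_dup_df.index).apply(func)
out = out.astype({'exe_price': float, 'exe_vol': int, 'flag': bool})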
