How can I speed up Pandas df.loc[z, x] = y?

Posted 2024-05-29 05:08:58


I have identified one pandas command

timeseries.loc[z, x] = y

to be responsible for most of the time spent in the iteration. Now I am looking for better ways to speed it up. The loop covers not even 50,000 elements (the production target is 250,000 or more), yet it already needs 20 seconds.

Here is my code (ignore the top half, it is just the timing helper):

def populateTimeseriesTable(df, observable, timeseries):
    """
    Go through all rows of df and 
    put the observable into the timeseries 
    at correct row (symbol), column (tsMean).
    """

    print "len(df.index)=", len(df.index)  # show number of rows

    global bf, t
    bf = time.time()                       # set 'before' to now
    t = dict([(i,0) for i in range(5)])    # fill category timing with zeros

    def T(i):
        """
        timing helper: Add passed time to category 'i'. Then set 'before' to now.
        """
        global bf, t 
        t[i] = t[i] + (time.time()-bf)
        bf = time.time()        

    for i in df.index:             # this is the slow loop
        bf = time.time()

        sym = df["symbol"][i]
        T(0)

        tsMean = df["tsMean"][i]
        T(1)

        tsMean = tsFormatter(tsMean)
        T(2)

        o = df[observable][i]
        T(3)

        timeseries.loc[sym, tsMean] = o
        T(4)

    from pprint import pprint
    print "times needed (total = %.1f seconds) for each command:" % sum(t.values())
    pprint (t)

    return timeseries

with (not important, and not slow):

def tsFormatter(ts):
    "as human readable string, only up to whole seconds"
    return time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(ts))

--> The code to be optimized is in the for loop.

(t and T are just a helper dict & function, used for the timing.)

I have timed every step. The vast majority of the time:

len(df.index)= 47160
times needed (total = 20.2 seconds) for each command:
{0: 1.102,
 1: 0.741,
 2: 0.243,
 3: 0.792,
 4: 17.371}

is spent in the last step:

timeseries.loc[sym, tsMean] = o

I have already downloaded and installed PyPy - but sadly that doesn't support pandas yet.

Is there any way to speed up the filling of a two-dimensional array?

Thanks!


Edit: Sorry, I forgot to mention - "timeseries" is also a dataframe:

timeseries = pd.DataFrame({"name": titles}, index=index)

3 Answers

I always thought at was the fastest, but it is not - ix is faster:

import pandas as pd

df = pd.DataFrame({'A':[1,2,3],
                   'B':[4,5,6],
                   'C':[7,8,9],
                   'D':[1,3,5],
                   'E':[5,3,6],
                   'F':[7,4,3]})

print (df)
   A  B  C  D  E  F
0  1  4  7  1  5  7
1  2  5  8  3  3  4
2  3  6  9  5  6  3

print (df.at[2, 'B'])
6
print (df.ix[2, 'B'])
6
print (df.loc[2, 'B'])
6

In [77]: %timeit df.at[2, 'B']
10000 loops, best of 3: 44.6 µs per loop

In [78]: %timeit df.ix[2, 'B']
10000 loops, best of 3: 40.7 µs per loop

In [79]: %timeit df.loc[2, 'B']
1000 loops, best of 3: 681 µs per loop
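
Applied to the loop from the question, that simply means swapping .loc for .at in the scalar assignment. A minimal sketch, reusing the df, observable, timeseries and tsFormatter names from the question (assumption: the sym / tsMean labels already exist in timeseries, since .at is meant for fast scalar access to existing labels):

for i in df.index:
    sym = df["symbol"][i]
    tsMean = tsFormatter(df["tsMean"][i])
    o = df[observable][i]
    timeseries.at[sym, tsMean] = o   # scalar setter, much cheaper than .loc

If some tsMean columns may not exist yet, create them up front (e.g. by reindexing the columns) so the .at writes never need to enlarge the frame.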

EDIT:

I tried it with a larger, random df, and with the random.randint function the results are different:

df = pd.DataFrame(np.random.rand(10**7, 5), columns=list('ABCDE'))


In [4]: %timeit (df.ix[2, 'B'])
The slowest run took 25.80 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 20.7 µs per loop

In [5]: %timeit (df.ix[random.randint(0, 10**7), 'B'])
The slowest run took 9.42 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 28 µs per loop

If you are adding rows inside a loop, consider the performance issues: for roughly the first 1,000 to 2,000 records "my_df.loc" performs better, and it gradually becomes slower as the number of records in the loop grows.

If you plan to do this inside a big loop (say 10M records or so), you are better off using a mixture of "iloc" and "append": fill a temporary dataframe with iloc until its size reaches around 1000, then append it to the original dataframe and empty the temporary dataframe. This will boost your performance by around 10 times.
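
A minimal sketch of that batching idea with hypothetical column names (not code from the answer; note that on current pandas DataFrame.append is gone, so the chunk is attached with pd.concat, which has the same effect):

import pandas as pd

CHUNK = 1000                                     # flush size suggested above

big_df = pd.DataFrame(columns=["a", "b"])        # the frame being grown
tmp = pd.DataFrame(index=range(CHUNK), columns=["a", "b"])  # pre-allocated buffer
filled = 0

for i in range(10000):                           # hypothetical 10k-row workload
    tmp.iloc[filled] = [i, i * 2]                # cheap positional write into the buffer
    filled += 1
    if filled == CHUNK:                          # buffer full -> append it in one go
        big_df = pd.concat([big_df, tmp], ignore_index=True)
        filled = 0

if filled:                                       # append whatever is left over
    big_df = pd.concat([big_df, tmp.iloc[:filled]], ignore_index=True)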

UPDATE: starting from Pandas 0.20.1, the .ix indexer is deprecated in favor of the more strict .iloc and .loc indexers.
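
In other words, on pandas >= 0.20 the .ix calls from the example above would be written with the stricter indexers; roughly (a sketch using the small df from the first answer, where column 'B' sits at position 1):

# df.ix[2, 'B']          # deprecated (and removed in later pandas versions)
print (df.loc[2, 'B'])   # label-based replacement
print (df.at[2, 'B'])    # fast scalar access, label-based
print (df.iloc[2, 1])    # positional replacement
print (df.iat[2, 1])     # fast scalar access, positional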


@jezrael provided an interesting comparison, and I decided to repeat it with more indexing methods and a 10M-row DF (actually, the size doesn't matter in this particular case):

Setup:

In [15]: df = pd.DataFrame(np.random.rand(10**7, 5), columns=list('abcde'))

In [16]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10000000 entries, 0 to 9999999
Data columns (total 5 columns):
a    float64
b    float64
c    float64
d    float64
e    float64
dtypes: float64(5)
memory usage: 381.5 MB

In [17]: df.shape
Out[17]: (10000000, 5)

Timings:

In [37]: %timeit df.loc[random.randint(0, 10**7), 'b']
1000 loops, best of 3: 502 µs per loop

In [38]: %timeit df.iloc[random.randint(0, 10**7), 1]
1000 loops, best of 3: 394 µs per loop

In [39]: %timeit df.at[random.randint(0, 10**7), 'b']
10000 loops, best of 3: 66.8 µs per loop

In [41]: %timeit df.iat[random.randint(0, 10**7), 1]
10000 loops, best of 3: 32.9 µs per loop

In [42]: %timeit df.ix[random.randint(0, 10**7), 'b']
10000 loops, best of 3: 64.8 µs per loop

In [43]: %timeit df.ix[random.randint(0, 10**7), 1]
1000 loops, best of 3: 503 µs per loop

Results as a bar chart:

[bar chart comparing the timings of the indexing methods]

Timing data as a DF:

In [88]: r
Out[88]:
       method  timing
0         loc   502.0
1        iloc   394.0
2          at    66.8
3         iat    32.9
4    ix_label    64.8
5  ix_integer   503.0

In [89]: r.to_dict()
Out[89]:
{'method': {0: 'loc',
  1: 'iloc',
  2: 'at',
  3: 'iat',
  4: 'ix_label',
  5: 'ix_integer'},
 'timing': {0: 502.0,
  1: 394.0,
  2: 66.799999999999997,
  3: 32.899999999999999,
  4: 64.799999999999997,
  5: 503.0}}

Plotting:

import seaborn as sns
import matplotlib.pyplot as plt

ax = sns.barplot(data=r, x='method', y='timing')
ax.tick_params(labelsize=16)
[ax.annotate(str(round(p.get_height(), 2)), (p.get_x() + 0.2, p.get_height() + 5)) for p in ax.patches]
ax.set_xlabel('indexing method', size=20)
ax.set_ylabel('timing (microseconds)', size=20)
plt.show()
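
As a side note (not part of the original answer): the %timeit numbers above require IPython; in a plain script the same kind of per-lookup timing can be approximated with the standard timeit module, e.g. for the .at case on the same 10M-row df:

import random
import timeit

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10**7, 5), columns=list('abcde'))

# Time a single random .at lookup, averaged over many repetitions.
per_call = min(timeit.repeat(
    stmt="df.at[random.randint(0, 10**7 - 1), 'b']",
    globals={"df": df, "random": random},
    repeat=3, number=10000)) / 10000
print("approx. %.1f microseconds per .at lookup" % (per_call * 1e6))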
