Reading a large csv file and then splitting it causes an OOM error

Posted 2024-04-25 06:24:55


Hi, I'm writing an AWS Glue job that reads a csv file and then splits it by a specific column. Unfortunately, it runs into an OOM (Out of Memory) error. Please see the code below.

import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import boto3


# yesterday's date (yyyy-mm-dd)
Current_Date = datetime.now() - timedelta(days=1)
now = Current_Date.strftime('%Y-%m-%d')

# the day before yesterday (yyyy-mm-dd)
Previous_Date = datetime.now() - timedelta(days=2)
prev = Previous_Date.strftime('%Y-%m-%d')

# read the csv file whose name contains yesterday's date
filepath = "s3://bucket/file"+now+".csv.gz"

data = pd.read_csv(filepath, sep='|', header=None, compression='gzip')

# count the number of distinct dates (first 10 chars of column 10)
loop = 0
for i, x in data.groupby(data[10].str.slice(0, 10)):
    loop += 1

# if the number of distinct values of column 10 (last_update) is greater than or equal to 7
if loop >= 7:
    # loop over the dataframe and split it by the distinct values of column 10 (last_update)
    for i, x in data.groupby(data[10].str.slice(0, 10)):
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()), header=None, compression='gzip')

# if the number of distinct values of column 10 (last_update) is less than 7,
# filter the dataframe to the current and previous dates; a new dataframe is created
else:
    d = data[(data[10].str.slice(0, 10)==prev)|(data[10].str.slice(0, 10)==now)]
    # loop over the filtered dataframe and split it by the distinct values of column 10 (last_update)
    for i, x in d.groupby(d[10].str.slice(0, 10)):
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()), header=None, compression='gzip')

Solution: I fixed this by increasing the Glue job's maximum capacity.
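
For reference, the maximum capacity can also be raised programmatically rather than through the console. Below is a minimal boto3 sketch, assuming a Python shell job; the job name, role, and script location are illustrative placeholders, not values from this post (UpdateJob overwrites the whole job definition, so the role and command have to be supplied again):

import boto3

glue = boto3.client('glue')

# raise MaxCapacity (in DPUs); all names below are hypothetical placeholders
glue.update_job(
    JobName='my-glue-job',
    JobUpdate={
        'Role': 'my-glue-role',  # JobUpdate replaces the full definition,
        'Command': {             # so role and command must be re-supplied
            'Name': 'pythonshell',
            'ScriptLocation': 's3://bucket/script.py',
        },
        'MaxCapacity': 1.0,      # Python shell jobs allow 0.0625 or 1 DPU
    }
)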


Tags: of, csv, import, loop, for, data, datetime, date
1 answer
Netizen
#1 · Posted 2024-04-25 06:24:55

Not sure how large your file is, but if you read it in chunks you should be able to avoid the error. We have successfully tested this method with a 2.5 GB file. Also, if you are using a Python shell job, remember to update the Glue job's maximum capacity to 1.

# read the file in chunks instead of loading it all at once
data = pd.read_csv(filepath, sep='|', header=None, compression='gzip',
                   chunksize=1000, iterator=True)
for i, chunk in enumerate(data):
    # each chunk is a DataFrame of up to 1000 rows; process it here
    ...
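
Chunking solves the loading half of the problem; the split itself can also run chunk by chunk. Here is a rough sketch of one way to combine the two, appending each chunk's groups to local per-day files and uploading them at the end. The /tmp paths, bucket name, and key pattern are illustrative placeholders, not anything from the original post:

import pandas as pd
import boto3

filepath = "s3://bucket/file.csv.gz"  # placeholder, as in the question

reader = pd.read_csv(filepath, sep='|', header=None,
                     compression='gzip', chunksize=100000)

seen = set()
for chunk in reader:
    # split each chunk by the yyyy-mm-dd prefix of column 10 (last_update)
    for day, part in chunk.groupby(chunk[10].str.slice(0, 10)):
        # append, so rows for the same day from later chunks accumulate
        part.to_csv("/tmp/file_{}.csv".format(day), mode='a',
                    sep='|', header=False, index=False)
        seen.add(day)

# upload the assembled per-day files to S3
s3 = boto3.client('s3')
for day in seen:
    s3.upload_file("/tmp/file_{}.csv".format(day),
                   "bucket", "file_{}.csv".format(day))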
