Memory error when parsing a large JSON file


I'm trying to parse a 12 GB JSON file in Python, containing nearly 5 million lines (each line is an object), and store it in a database. I'm using ijson and multiprocessing to speed things up. Here is the code:

import json
from multiprocessing import Pool

import ijson
import numpy
import pandas as pd

# The Django models (Venues, Papers, Ratings) and the global DataFrame
# mydata are defined/imported elsewhere in the project.

def parse(paper):
    global mydata

    # Save the venue and the paper, then collect one row per author into mydata
    if 'type' not in paper["venue"]:
        venue = Venues(venue_raw=paper["venue"]["raw"])
        venue.save()
    else:
        venue = Venues(venue_raw=paper["venue"]["raw"], venue_type=paper["venue"]["type"])
        venue.save()
    paper1 = Papers(paper_id=paper["id"], paper_title=paper["title"], venue=venue)
    paper1.save()

    paper_authors = paper["authors"]
    paper_authors_json = json.dumps(paper_authors)
    obj = ijson.items(paper_authors_json, 'item')
    for author in obj:
        mydata = mydata.append({'author_id': author["id"], 'venue_raw': venue.venue_raw, 'year': paper["year"], 'number_of_times': 1}, ignore_index=True)

if __name__ == '__main__':
    p = Pool(4)

    filename = 'C:/Users/dintz/Documents/finaldata/dblp.v12.json'
    with open(filename, encoding='UTF-8') as infile:
        papers = ijson.items(infile, 'item')
        for paper in papers:
            p.apply_async(parse, (paper,))

    p.close()
    p.join()

    mydata = mydata.groupby(by=['author_id', 'venue_raw', 'year'], axis=0, as_index=False).sum()
    mydata = mydata.groupby(by=['author_id', 'venue_raw'], axis=0, as_index=False, group_keys=False).apply(lambda x: sum((1 + x.year - x.year.min()) * numpy.log10(x.number_of_times + 1)))
    df = mydata.index.to_frame(index=False)
    df = pd.DataFrame({'author_id': df["author_id"], 'venue_raw': df["venue_raw"], 'rating': mydata.values[:, 2]})

    for index, row in df.iterrows():
        author_id = row['author_id']
        venue = Venues.objects.get(venue_raw=row['venue_raw'])
        rating = Ratings(author_id=author_id, venue=venue, rating=row['rating'])
        rating.save()

But I get the following error and I don't know why (the error is shown in a screenshot attached to the original post).

Can anyone help me?


1 Answer

I've had to make some inferences and assumptions, but it looks like

  • you're using Django
  • you want to populate a SQL database with venue, paper and author data
  • you then want to do some analysis with pandas

Populating your SQL database can be done quite neatly with something like the following.

  • I've added the tqdm package so you get a progress indication.
  • This assumes there is a PaperAuthor model that links papers and authors (a hypothetical sketch of these models follows this list).
  • Unlike the original code, this does not save duplicate Venues in the database.
  • You can see I've replaced get_or_create and create with stubs, so this can be run without the database models (or indeed without Django), just with the dataset you're using.
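For reference, here is a hypothetical sketch of what the Django models behind those stubs might look like, inferred from the fields used in the code below; the class names match the stubs, but every field type here is an assumption and not part of the original answer.

# models.py (hypothetical sketch; all field types are assumptions)
from django.db import models


class Venue(models.Model):
    venue_raw = models.TextField()
    venue_type = models.CharField(max_length=64, null=True, blank=True)


class Paper(models.Model):
    paper_id = models.BigIntegerField(unique=True)
    paper_title = models.TextField()
    venue = models.ForeignKey(Venue, on_delete=models.CASCADE)


class PaperAuthor(models.Model):
    # Links a paper to one of its authors, keeping the year for later analysis.
    paper = models.ForeignKey(Paper, on_delete=models.CASCADE)
    author_id = models.BigIntegerField()
    year = models.IntegerField(null=True)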

On my machine this consumes hardly any memory, since the records are (or would be) dumped into the SQL database instead of into an ever-growing, fragmenting dataframe in memory.

The pandas processing is left as an exercise for the reader ;-), but I imagine reading that pre-processed data back out of the database would involve pd.read_sql() (a sketch of that follows after the code below).

import multiprocessing

import ijson
import tqdm


def get_or_create(model, **kwargs):
    # Actual Django statement:
    # return model.objects.get_or_create(**kwargs)
    return (None, True)


def create(model, **kwargs):
    # Actual Django statement:
    # return model.objects.create(**kwargs)
    return None


Venue = "Venue"
Paper = "Paper"
PaperAuthor = "PaperAuthor"


def parse(paper):
    venue_name = paper["venue"]["raw"]
    venue_type = paper["venue"].get("type")
    venue, _ = get_or_create(Venue, venue_raw=venue_name, venue_type=venue_type)
    paper_obj = create(Paper, paper_id=paper["id"], paper_title=paper["title"], venue=venue)
    for author in paper["authors"]:
        create(PaperAuthor, paper=paper_obj, author_id=author["id"], year=paper["year"])


def main():
    filename = "F:/dblp.v12.json"
    with multiprocessing.Pool() as p, open(filename, encoding="UTF-8") as infile:
        for result in tqdm.tqdm(p.imap_unordered(parse, ijson.items(infile, "item"), chunksize=64)):
            pass


if __name__ == "__main__":
    main()
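
Picking up the pd.read_sql() hint: below is a minimal sketch of how the rating computation from the question could then be run on the pre-processed records. It assumes a SQLite database file (Django's default db.sqlite3) and Django's default table names app_paperauthor and app_paper, where "app" stands for your actual app label; none of these names come from the original answer.

import sqlite3

import numpy as np
import pandas as pd

# Assumed names: Django's default database file and "<app_label>_<model>" tables.
con = sqlite3.connect("db.sqlite3")
mydata = pd.read_sql(
    """
    SELECT pa.author_id, p.venue_id, pa.year, COUNT(*) AS number_of_times
    FROM app_paperauthor pa
    JOIN app_paper p ON p.id = pa.paper_id
    GROUP BY pa.author_id, p.venue_id, pa.year
    """,
    con,
)

# Same weighting as the question's formula:
# (1 + year - min(year)) * log10(number_of_times + 1), summed per (author, venue).
mydata["rating"] = (
    1 + mydata["year"] - mydata.groupby(["author_id", "venue_id"])["year"].transform("min")
) * np.log10(mydata["number_of_times"] + 1)
ratings = mydata.groupby(["author_id", "venue_id"], as_index=False)["rating"].sum()

Because the counting happens in SQL, only the small aggregated frame ever lives in memory, which is the point of writing the records to the database first.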
