Extract and merge data from 3 large tsv/csv files

Published 2024-04-28 08:51:16


I have 3 large tsv files with the following structure:

 file1 : id,f1,f2,name,f3
 file2 : id,f4,blah1,f5
 file3 : id,f5,f6,blah2

I want to create a new file, built from columns extracted from the other files:

 result: id,name,blah1,blah2

At the moment I can't, because loading even one of these files in pandas | vaex crashes the process, since it tries to read the whole file into memory.

How can this be done?

I will then use the resulting file in vaex... I think it will still be around 1 GB.


f1 = vaex.read_csv('stuff.tsv',convert=True,sep='\t') 

Then:

f1.join(f2,left_on='id',right_on='id')

2 Answers

A strategy like this may make your job easier. It keeps a merged_items dict that tracks items by id and stores the values of name, blah1 and blah2. Then, using csv.reader, it iterates over each file line by line instead of loading everything at once, which keeps memory usage low. Finally, it writes the merged items back out line by line. You will need to adapt this to your specific use case, but it should be a decent start.

import csv

merged_items = {}

with open ('file1.csv','r') as csv_file:
    reader = csv.reader(csv_file)  # pass delimiter='\t' here if the input files are tab-separated
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        name = row[3]
        merged_items[row_id] = {'name':name}


with open ('file2.csv','r') as csv_file:
    reader = csv.reader(csv_file)
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        blah1 = row[2]
        merged_items[row_id]['blah1'] = blah1  # assumes every id in file2/file3 also appears in file1; otherwise this raises KeyError


with open ('file3.csv','r') as csv_file:
    reader = csv.reader(csv_file)
    next(reader) # skip first row
    for row in reader:
        row_id = row[0]
        blah2 = row[3]
        merged_items[row_id]['blah2'] = blah2

with open('output.csv','w', newline='') as output:
    writer = csv.writer(output, delimiter='\t') # change these options as you see fit
    for row_id, metadata in merged_items.items():
        writer.writerow([row_id, metadata['name'], metadata['blah1'], metadata['blah2']])
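
Since the question says the resulting file will be used in vaex, here is a minimal sketch of that follow-up step (my addition, not part of the original answer). It loads the tab-separated output.csv produced above with the same convert=True pattern used in the question; header=None and names are passed because the snippet above writes no header row, and this assumes vaex.read_csv forwards those keyword arguments to pandas.read_csv.

import vaex

# Load the merged TSV produced above. convert=True writes an HDF5 copy to disk,
# so the data ends up memory-mapped instead of being held entirely in RAM.
# header=None / names are needed because output.csv was written without a header row.
df = vaex.read_csv('output.csv', sep='\t', convert=True,
                   header=None, names=['id', 'name', 'blah1', 'blah2'])
print(df.head())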

"convert" does not load the file into memory... it works in chunks instead:

f1 = vaex.read_csv('stuff.tsv',convert=True,sep='\t') 
f2 = vaex.read_csv('stuff2.tsv',convert=True,sep='\t') 

fx1 = f1[['id','blah1']]
fx2 = f2[['id','blah2']]

Then:

ff = fx1.join(fx2,left_on='id',right_on='id')
ff.export_hdf5('file.hdf5')
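
A short usage note: the exported HDF5 file can be re-opened later with vaex.open, which memory-maps the data rather than reading it all into RAM.

import vaex

# Re-open the exported result in a later session; vaex memory-maps HDF5,
# so this does not load the whole dataset into memory.
ff = vaex.open('file.hdf5')
print(ff.head())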
