Python generator to lazily read a large CSV file and shuffle its rows

Posted on 2024-04-25 11:47:15


I want to write a function that yields the rows of a CSV file in shuffled order. The file is too large to fit in memory (~25 million rows).

How can I build a generator that yields the data row by row, but not in the order the rows appear in the CSV file?

Is it possible to randomize/shuffle the rows inside a lazy generator function?

def readCSV(csvname, shuffle=True):
    for row in open(csvname, "r"):
        if shuffle:
            # Do something here to shuffle the order of the rows --
            # but I don't know how to do this.
            pass
        yield row
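For context, one common approximation when a full in-memory shuffle is impossible is a fixed-size shuffle buffer: rows are only shuffled within a sliding window, so the result is not a uniform permutation, but memory use stays bounded. This is a sketch of that idea (the function name and `buffer_size` default are illustrative, not from the answer below):

```python
import random

def read_csv_shuffled(csvname, buffer_size=10000):
    """Yield rows in approximately random order using a fixed-size buffer.

    Rows are only shuffled within a window of about `buffer_size` rows,
    so this is not a uniform shuffle, but memory use stays bounded.
    """
    buffer = []
    with open(csvname, "r") as f:
        for row in f:
            buffer.append(row)
            if len(buffer) >= buffer_size:
                # Swap a randomly chosen buffered row into the last slot,
                # then yield it, keeping the buffer size constant.
                i = random.randrange(len(buffer))
                buffer[i], buffer[-1] = buffer[-1], buffer[i]
                yield buffer.pop()
    # Drain whatever is left at end of file in random order.
    random.shuffle(buffer)
    yield from buffer
```

A larger `buffer_size` gives a better approximation of a true shuffle at the cost of more memory. For a uniform shuffle of the whole file, an offset index as in the answer below is needed.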


1 Answer

#1 · Posted on 2024-04-25 11:47:15

This can be done by first creating an index for the large CSV file. Unless the data changes, this only has to be done once. The index holds the offsets in the file of all the newline characters.

A random row can then easily be read by first seeking to the required offset and reading in one line.

For example:

import random
import csv
import os
import io

def create_index(index_filename, csv_filename):
    with open(csv_filename, 'rb') as f_csv:
        index = 1
        line_indexes = []       # Use [0] if the file has no header row
        newline = ord('\n')     # file is read in binary; each line ends with \n
        
        while True:
            # Scan the file in large blocks for speed
            block = f_csv.read(io.DEFAULT_BUFFER_SIZE * 1000)
            
            if not block:
                break
            
            line_indexes.extend(offset + index for offset, c in enumerate(block) if c == newline)
            index += len(block)
        
        # If the file ends with a newline, the final offset points at EOF,
        # not at a row - drop it so readers never get an empty line.
        if line_indexes and line_indexes[-1] == f_csv.tell():
            line_indexes.pop()
                
    with open(index_filename, 'w') as f_index:
        f_index.write('\n'.join(map(str, line_indexes)))


def get_rows(count, index_filename, csv_filename):
    sys_random = random.SystemRandom()
    
    with open(index_filename) as f_index:
        line_indexes = list(map(int, f_index.read().splitlines()))

    row_count = len(line_indexes)
    
    # Open in binary so seek()/read() work in bytes, matching the
    # byte offsets stored in the index (text mode counts characters).
    with open(csv_filename, 'rb') as f_csv:
        for _ in range(count):
            line_number = sys_random.randint(0, row_count - 1)
            f_csv.seek(line_indexes[line_number])
            
            if line_number == row_count - 1:
                line = f_csv.read()
            else:
                line = f_csv.read(line_indexes[line_number + 1] - line_indexes[line_number])
            
            yield line_number, next(csv.reader(io.StringIO(line.decode('utf-8'))))


index_filename = 'index.txt'
csv_filename = 'input.csv'

create_index(index_filename, csv_filename)  # only needed ONCE

for row_number, row in get_rows(10, index_filename, csv_filename):
    print(f"Row {row_number}  {row}")

The same idea could also be used to start reading from a random row, or to read the rows in shuffled order.
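To read every row exactly once in shuffled order (rather than sampling with replacement as `get_rows` does), shuffle the list of offsets instead of the data. A sketch that reuses the same index-file format (the function name `shuffled_rows` is illustrative):

```python
import csv
import io
import random

def shuffled_rows(index_filename, csv_filename):
    """Yield every data row of the CSV exactly once, in random order,
    using the byte offsets stored in the index file."""
    with open(index_filename) as f_index:
        line_indexes = list(map(int, f_index.read().splitlines()))

    order = list(range(len(line_indexes)))
    random.shuffle(order)          # shuffle the small list of offsets, not the data

    with open(csv_filename, 'rb') as f_csv:
        file_size = f_csv.seek(0, 2)   # seek to end to learn the file size
        for line_number in order:
            start = line_indexes[line_number]
            end = (line_indexes[line_number + 1]
                   if line_number + 1 < len(line_indexes) else file_size)
            f_csv.seek(start)
            line = f_csv.read(end - start).decode('utf-8')
            yield line_number, next(csv.reader(io.StringIO(line)))
```

Only the offset list (one integer per row) is held in memory, so 25 million rows need on the order of a few hundred megabytes for the list, while the row data itself stays on disk.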

Obviously, seeking back and forth will not be as fast as reading the file sequentially, but it should be much faster than re-reading from the start of the file each time.
