Reading .h5 files is very slow


My data is stored in .h5 format. I use a data generator to fit the model, and it is very slow. A snippet of my code is given below:

def open_data_file(filename, readwrite="r"):
    return tables.open_file(filename, readwrite)

data_file_opened = open_data_file(os.path.abspath("../data/data.h5"))

train_generator, validation_generator, n_train_steps, n_validation_steps = get_training_and_validation_generators(
        data_file_opened,
        ......)

where:

def get_training_and_validation_generators(data_file, batch_size, ...):
    training_generator = data_generator(data_file, training_list,....)

The data generator function is shown below:

def data_generator(data_file, index_list,....):
    orig_index_list = index_list
    while True:
        x_list = list()
        y_list = list()
        if patch_shape:
            index_list = create_patch_index_list(orig_index_list, data_file, patch_shape,
                                                 patch_overlap, patch_start_offset,pred_specific=pred_specific)
        else:
            index_list = copy.copy(orig_index_list)

        while len(index_list) > 0:
            index = index_list.pop()
            add_data(x_list, y_list, data_file, index, augment=augment, augment_flip=augment_flip,
                     augment_distortion_factor=augment_distortion_factor, patch_shape=patch_shape,
                     skip_blank=skip_blank, permute=permute)
            if len(x_list) == batch_size or (len(index_list) == 0 and len(x_list) > 0):
                yield convert_data(x_list, y_list, n_labels=n_labels, labels=labels, num_model=num_model,overlap_label=overlap_label)
                x_list = list()
                y_list = list()

add_data() is shown below:

def add_data(x_list, y_list, data_file, index, augment=False, augment_flip=False, augment_distortion_factor=0.25,
             patch_shape=False, skip_blank=True, permute=False):
    '''
    add qualified x,y to the generator list
    '''
#     pdb.set_trace()
    data, truth = get_data_from_file(data_file, index, patch_shape=patch_shape)
    
    if np.sum(truth) == 0:
        return
    if augment:
        affine = np.load('affine.npy')
        data, truth = augment_data(data, truth, affine, flip=augment_flip, scale_deviation=augment_distortion_factor)

    if permute:
        if data.shape[-3] != data.shape[-2] or data.shape[-2] != data.shape[-1]:
            raise ValueError("To utilize permutations, data array must be in 3D cube shape with all dimensions having "
                             "the same length.")
        data, truth = random_permutation_x_y(data, truth[np.newaxis])
    else:
        truth = truth[np.newaxis]

    if not skip_blank or np.any(truth != 0):
        x_list.append(data)
        y_list.append(truth)

Model training:

def train_model(model, model_file,....):
    model.fit(training_generator,
                        steps_per_epoch=steps_per_epoch,
                        epochs=n_epochs,
                        verbose = 2,
                        validation_data=validation_generator,
                        validation_steps=validation_steps)

My dataset is very large: data.h5 is 55 GB. One epoch takes roughly 7000 seconds, and after about 6 epochs I get a segmentation fault. The batch size is set to 1, because otherwise a resource-exhausted error occurs. Is there an efficient way to read data.h5 in the generator so that training runs faster and does not lead to out-of-memory errors?


1 Answer

This is the start of my answer. I looked at your code, and you have a lot of calls that read the .h5 data. By my count, the generator makes 6 read calls for every index in training_list and validation_list. That works out to nearly 20k read calls in one training loop. It is not clear (to me) whether the generator is invoked on every training loop; if so, multiply that by 2268 loops.

How efficiently an HDF5 file can be read depends on the number of calls made to read the data (not just the amount of data). In other words, reading 1 GB of data in a single call is much faster than reading the same data in 1000 calls of 1 MB each. So the first thing we need to determine is how much time is spent reading data from the HDF5 file (to compare against your 7000 seconds).
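To illustrate the point, here is a minimal standalone sketch (not part of the original answer) that times one bulk read against row-by-row reads of the same PyTables array. The file name example.h5 and the array name data are made-up placeholders for the demo:

import time
import numpy as np
import tables as tb

# build a small demo file: 1000 rows of ~256 KB each (~256 MB total)
with tb.open_file('example.h5', 'w') as f:
    f.create_array(f.root, 'data', np.zeros((1000, 65536), dtype=np.float32))

with tb.open_file('example.h5', 'r') as f:
    ds = f.root.data

    start = time.time()
    _ = ds[:]                       # one call reads the entire array
    print(f'single bulk read : {time.time()-start:.3f} s')

    start = time.time()
    for i in range(ds.shape[0]):    # 1000 calls, one row per call
        _ = ds[i]
    print(f'row-by-row reads : {time.time()-start:.3f} s')

On most systems the bulk read finishes far sooner; the per-call overhead, not the data volume, dominates the row-by-row case.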

I isolated the PyTables calls that read the data file. From them I built a simple program that mimics the behavior of your generator function. Currently it does a single training loop over the entire sample list. Increase the n_train and n_epochs values if you want to run a longer test. (Note: the code syntax is correct, but without the file I cannot verify the logic. I think it is right, but you may need to fix a few small errors.)

See the code below. It should run standalone (all dependencies are imported). It prints basic timing data. Run it to benchmark your generator.

import tables as tb
import numpy as np
from random import shuffle 
import time

with tb.open_file('../data/data.h5', 'r') as data_file:

    n_train = 1
    n_epochs = 1
    loops = n_train*n_epochs
    
    for e_cnt in range(loops):  
        nb_samples = data_file.root.truth.shape[0]
        sample_list = list(range(nb_samples))
        shuffle(sample_list)
        split = 0.80
        n_training = int(len(sample_list) * split)
        training_list = sample_list[:n_training]
        validation_list = sample_list[n_training:]
        
        start = time.time()
        for index_list in [ training_list, validation_list ]:
            shuffle(index_list)
            x_list = list()
            y_list = list()
            
            while len(index_list) > 0:
                index = index_list.pop() 
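                # note: each sample pulled here triggers 6 separate reads from the
                # .h5 file (brain_width, t1, t1ce, flair, t2, truth)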
                
                brain_width = data_file.root.brain_width[index]
                x = np.array([modality_img[index,0,
                                           brain_width[0,0]:brain_width[1,0]+1,
                                           brain_width[0,1]:brain_width[1,1]+1,
                                           brain_width[0,2]:brain_width[1,2]+1] 
                              for modality_img in [data_file.root.t1,
                                                   data_file.root.t1ce,
                                                   data_file.root.flair,
                                                   data_file.root.t2]])
                y = data_file.root.truth[index, 0,
                                         brain_width[0,0]:brain_width[1,0]+1,
                                         brain_width[0,1]:brain_width[1,1]+1,
                                         brain_width[0,2]:brain_width[1,2]+1]    
                
                x_list.append(x)
                y_list.append(y)
    
        print(f'For loop:{e_cnt}')
        print(f'Time to read all data={time.time()-start:.2f}')
