keras flow_from_dataframe for semantic segmentation

Posted 2024-04-23 06:51:24


I am trying to use keras' flow_from_dataframe for semantic segmentation (the input is an image of shape (height, width, 3) and the label is also an image, of shape (height, width)), but I cannot get it to work.

As suggested here, I (uninstalled the existing one and) installed the latest keras-preprocessing library:

    pip install git+https://github.com/keras-team/keras-preprocessing.git

With the minimal example below I get the following error:

ValueError: Error when checking target: expected conv1 to have 4 dimensions, but got array with shape (1, 1)
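For context on that error: a Conv2D output layer for segmentation is 4-dimensional (batch, height, width, classes), so each target fed to it must be a per-pixel one-hot array, not a scalar class. A minimal shape check, using made-up dimensions and `np.eye` indexing as a NumPy stand-in for `tf.keras.utils.to_categorical`:

```python
import numpy as np

# Hypothetical dimensions for illustration
h, w, n_classes = 100, 200, 5

# A per-pixel integer label map for one image...
label = np.random.randint(0, n_classes, size=(h, w))

# ...one-hot encoded to (h, w, n_classes); with a batch axis
# prepended, this is the 4-D target that conv1 expects.
y = np.eye(n_classes)[label]
print(y.shape)  # (100, 200, 5)
```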

I am using the following versions in an Anaconda virtual environment on Windows 7 with PyCharm:

  • tensorflow 1.13.1
  • keras-preprocessing 1.0.9
  • keras 2.2.4
  • keras-applications 1.0.7
  • keras-base 2.2.4

I suspect the error lies in my use of flow_from_dataframe, since I was able to write my own keras data generator following this blog.

Any suggestions on how to set up flow_from_dataframe correctly?

A fully working example, which also generates random training data:

[code block from the original post not preserved here]

Tags: data, from, image, git, dataframe, width, height, here
1 Answer

#1 · Posted 2024-04-23 06:51:24

I have updated the code for TensorFlow 2.0.

Note: use scipy 1.2.0, since scipy.misc.toimage is deprecated in later versions.
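If pinning scipy 1.2.0 is not an option, a rough Pillow-based stand-in for the removed `scipy.misc.toimage` can be sketched as follows (the helper name `toimage_like` is mine, and this only covers the `cmin`/`cmax` rescaling path used below, not toimage's full behavior):

```python
import numpy as np
from PIL import Image  # Pillow, assumed installed


def toimage_like(arr, cmin=0, cmax=255):
    """Rough stand-in for the removed scipy.misc.toimage: linearly
    rescales arr from [cmin, cmax] to [0, 255] and returns an 8-bit
    PIL image (grayscale for 2-D input, RGB for (h, w, 3) input)."""
    arr = np.asarray(arr, dtype=np.float64)
    scaled = (arr - cmin) * 255.0 / (cmax - cmin)
    return Image.fromarray(np.clip(scaled, 0, 255).astype(np.uint8))
```

With `cmin=0, cmax=255` the rescaling is the identity, so small integer class labels written this way survive the round trip to PNG unchanged.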

import tensorflow as tf
import numpy as np
import pandas as pd
import os
import scipy.misc

def get_file_list(root_path):
    """
    # Returns:
        file_list: _list_, list of full paths to all files found
    """
    file_list = []
    for root, dirs, files in os.walk(root_path):
        for name in files:
            file_list.append(os.path.join(root, name))
    return file_list

def gen_rand_img_labels(n_rand_imgs, path_img, path_label):
    for i in range(n_rand_imgs):
        # random RGB image; scipy.misc.toimage returns a PIL image
        img_rand = np.random.randint(0, 256, size=img_dim)
        scipy.misc.toimage(img_rand, cmin=0, cmax=255).save(os.path.join(path_img, 'img{}.png'.format(i)))

        # random per-pixel class labels in [0, n_classes)
        label_rand = np.random.randint(0, n_classes, size=(img_dim[0], img_dim[1]))
        print('label_rand.shape: ', label_rand.shape)
        scipy.misc.toimage(label_rand, cmin=0, cmax=255).save(os.path.join(path_label, 'img{}.png'.format(i)))

if __name__ == "__main__":
    img_dim = (100, 200, 3)  # height, width, channels
    batch_size = 1
    nr_epochs = 1

    n_classes = 5
    n_rand_imgs = 10
    savepath_img = r''    # set to an existing, writable directory
    savepath_label = r''  # set to an existing, writable directory

    #  - generate random images and random labels and save them to disk
    gen_rand_img_labels(n_rand_imgs, savepath_img, savepath_label)


    #  - build Data Generator
    train_df = pd.DataFrame(columns=['path', 'label'])

    list_img_names = get_file_list(savepath_img)

    for fname in list_img_names:
        fname_pure = os.path.split(fname)[1]

        # read in png label file as numpy array
        y = scipy.misc.imread(os.path.join(savepath_label, fname_pure))
        y = tf.keras.utils.to_categorical(y, n_classes)
        print('shape y: {}'.format(y.shape))
        train_df.loc[len(train_df)] = [fname, y]

    datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255.0, validation_split=0.25)
    train_generator = datagen.flow_from_dataframe(
        dataframe=train_df,
        x_col="path",
        y_col="label",
        subset="training",
        batch_size=batch_size,
        class_mode="raw",
        target_size=(img_dim[0], img_dim[1]))


    valid_generator = datagen.flow_from_dataframe(
        dataframe=train_df,
        x_col="path",
        y_col="label",
        subset="validation",
        batch_size=batch_size,
        class_mode="raw",
        target_size=(img_dim[0], img_dim[1]))

    #  - create the model and train it
    input_ = tf.keras.Input(shape=img_dim)
    # softmax gives per-pixel class probabilities, as categorical_crossentropy expects
    x = tf.keras.layers.Conv2D(n_classes, (3, 3), activation='softmax', padding='same', name='conv1')(input_)
    model = tf.keras.Model(inputs=input_, outputs=[x])
    model.summary()

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=["accuracy"])

    # Train model
    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        epochs=nr_epochs)
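Once trained, the model's per-pixel class scores can be collapsed back into an integer label image with an argmax over the class axis. A minimal sketch, using a made-up prediction array with the same (batch, height, width, n_classes) shape as the model output above:

```python
import numpy as np

# Hypothetical model output: (batch, height, width, n_classes)
pred = np.random.rand(1, 100, 200, 5)

# one class index per pixel, shaped like the original label map
label_map = np.argmax(pred[0], axis=-1)
print(label_map.shape)  # (100, 200)
```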
