WiFi gesture recognition, dl, ml, python, cnn

Published 2024-04-19 13:20:34


I have data of shape (7500, 200, 30, 3),

where the 7500 samples (each a tensor of shape (200, 30, 3)) are CSI data (a kind of WiFi signal data used for gesture recognition) with 150 different labels (gestures), and the goal is classification. I am classifying with a CNN in Keras, and I am facing severe overfitting:

from keras.layers import (Input, Conv2D, BatchNormalization, MaxPooling2D,
                          Flatten, Dense, Dropout)
from keras.models import Model

def create_DL_model():
    # input layer
    csi = Input(shape=(200, 30, 3))
    # first feature extractor
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-01')(csi)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-02')(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-04')(x)
    x = BatchNormalization()(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-05', padding='same')(x)
    x = Conv2D(32, kernel_size=3, activation='relu', name='layer1-06', padding='same')(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu', name='layer-01')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer-02')(x)
    x = Conv2D(32, (3, 3), padding='same', activation='relu', name='layer-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer-04')(x)
    x = Flatten()(x)
    x = Dense(16, activation='relu')(x)
    # connect the dropout layer into the graph so it actually takes effect
    x = Dropout(0.50, seed=1)(x)
    probability = Dense(150, activation='softmax')(x)
    model = Model(inputs=csi, outputs=probability)
    model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return model

As you can see, I apply dropout to the dense layer and use early stopping and batch normalization to fight the overfitting, but as the plot shows, the problem persists.

[image: accuracy vs. epoch plot showing the gap between training and validation accuracy]

After cross-validation my accuracy is around 70% (some papers reach 90%, but we have 150 labels, so 90% already seems like a very good result; they used meta-learning, which I cannot use). Is there any approach you can recommend?
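For context, the early stopping mentioned above goes in as a Keras callback; here is a minimal sketch of the training call (X, y, the patience value, and the validation split are illustrative assumptions, not my exact setup):

from keras.callbacks import EarlyStopping

model = create_DL_model()
# stop once validation loss has not improved for 10 epochs and
# roll back to the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)
# X: (7500, 200, 30, 3) CSI samples, y: one-hot labels over 150 classes
model.fit(X, y, validation_split=0.2, epochs=100, batch_size=32,
          callbacks=[early_stop])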

Thanks a lot.


1 Answer

#1 · Posted on 2024-04-19 13:20:34

The accuracy vs. epoch plot indicates the overfitting problem in your model. It is caused by having very few training samples (7500/150 = 50 per class). One possible solution is to apply Data Augmentation, which allows you to build a powerful classifier using only a few training examples.

Data structure

Store your data according to the following structure:

data/
    train/
        class1/
            class1_img001.jpg
            class1_img002.jpg
        class2/
            class2_img001.jpg
            class2_img002.jpg
            ...
        class150/
            class150_img001.jpg
            class150_img002.jpg
            ...
    validation/
        class1/
            class1_img001.jpg
            class1_img002.jpg
        class2/
            class2_img001.jpg
            class2_img002.jpg
            ...
        class150/
            class150_img001.jpg
            class150_img002.jpg
            ...
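Since the CSI samples currently live in one (7500, 200, 30, 3) array rather than as files, they first have to be materialized in that layout. A minimal sketch, assuming hypothetical X_train/y_train and X_val/y_val arrays holding an already-made split with integer labels 0..149 (note that quantizing CSI values into 8-bit JPEGs is lossy, so keep the original arrays as well):

import os
from keras.preprocessing.image import array_to_img

def export_split(X, y, split_dir):
    for i, (sample, label) in enumerate(zip(X, y)):
        class_dir = os.path.join(split_dir, 'class%d' % (label + 1))
        os.makedirs(class_dir, exist_ok=True)
        # array_to_img rescales the (200, 30, 3) tensor into the 0-255 image range
        array_to_img(sample).save(os.path.join(class_dir, 'img%05d.jpg' % i))

export_split(X_train, y_train, 'data/train')
export_split(X_val, y_val, 'data/validation')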

Then you can do:

from keras.layers import (Input, Conv2D, BatchNormalization, MaxPooling2D,
                          Flatten, Dense, Dropout)
from keras.models import Model

def create_DL_model(img_height, img_width, channel):
    # input layer
    csi = Input(shape=(img_height, img_width, channel))
    # first feature extractor
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-01')(csi)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-02')(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer1-04')(x)
    x = BatchNormalization()(x)
    x = Conv2D(64, kernel_size=3, activation='relu', name='layer1-05', padding='same')(x)
    x = Conv2D(32, kernel_size=3, activation='relu', name='layer1-06', padding='same')(x)
    x = Conv2D(64, (3, 3), padding='same', activation='relu', name='layer-01')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer-02')(x)
    x = Conv2D(32, (3, 3), padding='same', activation='relu', name='layer-03')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2), name='layer-04')(x)
    x = Flatten()(x)
    x = Dense(16, activation='relu')(x)
    # connect the dropout layer into the graph so it actually takes effect
    x = Dropout(0.50, seed=1)(x)
    probability = Dense(150, activation='softmax')(x)
    model = Model(inputs=csi, outputs=probability)
    return model

from keras.preprocessing.image import ImageDataGenerator

batch_size = 32
img_height = 200
img_width = 30
channel = 3

model = create_DL_model(img_height, img_width, channel) 

# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
        rotation_range=40,
        width_shift_range=0.2,
        height_shift_range=0.2,
        rescale=1./255,
        shear_range=0.2,
        zoom_range=0.2,
        horizontal_flip=True,
        fill_mode='nearest')


# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1./255)

# this is a generator that will read pictures found in
# subfolers of 'data/train', and indefinitely generate
# batches of augmented image data
train_generator = train_datagen.flow_from_directory(
        'data/train',  # this is the target directory
        target_size=(img_height, img_width),  # all images will be resized
        batch_size=batch_size,
        class_mode='categorical')  # one-hot labels for categorical_crossentropy

# this is a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
        'data/validation',
        target_size=(img_height, img_width),
        batch_size=batch_size,
        class_mode='categorical')

model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

model.fit_generator(
        train_generator,
        steps_per_epoch=7500 // batch_size,
        epochs=50,
        validation_data=validation_generator,
        validation_steps=YOUR_VALIDATION_SIZE // batch_size)  # use YOUR_VALIDATION_SIZE as per your validation data
model.save('model-e50-b32.h5')  # always save your weights after or during training
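Alternatively, if you would rather not export the CSI tensors to disk at all, ImageDataGenerator can also augment in-memory numpy arrays through its flow method. A minimal sketch, assuming hypothetical X_train/y_train and X_val/y_val arrays with one-hot labels (flips and rotations may not be physically meaningful for CSI data, so pick the transforms carefully):

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   rescale=1./255)
val_datagen = ImageDataGenerator(rescale=1./255)

# flow() draws augmented batches directly from the arrays in memory
model.fit_generator(
        train_datagen.flow(X_train, y_train, batch_size=batch_size),
        steps_per_epoch=len(X_train) // batch_size,
        epochs=50,
        validation_data=val_datagen.flow(X_val, y_val, batch_size=batch_size),
        validation_steps=len(X_val) // batch_size)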

The number of CNN layers could also be reduced while monitoring the loss in accuracy, since we only have 7500 training images.
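A slimmer variant along those lines might look like this (a sketch only; the layer count and widths are assumptions to tune against validation accuracy):

from keras.layers import (Input, Conv2D, BatchNormalization, MaxPooling2D,
                          Flatten, Dense, Dropout)
from keras.models import Model

def create_small_model(img_height, img_width, channel):
    csi = Input(shape=(img_height, img_width, channel))
    # two conv blocks instead of seven conv layers
    x = Conv2D(32, kernel_size=3, activation='relu')(csi)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Conv2D(64, kernel_size=3, activation='relu')(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    x = Dropout(0.5)(x)
    probability = Dense(150, activation='softmax')(x)
    return Model(inputs=csi, outputs=probability)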

The code above is untested; please share any errors you hit for further suggestions.

More information about data augmentation and how to apply it is here.
