How can I use the output of a SimpleRNN layer as the input to another network?

Posted 2024-04-19 19:36:40


I have two identical CRNN networks, and I want to take the output of the last SimpleRNN layer from each of them and feed both outputs into another network, in a Siamese configuration. I am unable to feed in the outputs of these CRNN networks, and I get the error: unhashable type: 'Dimension'.

Full traceback:

    Traceback (most recent call last):
      File "full_adda.py", line 270, in <module>
        model_s.fit(([in_source, in_target]), Y_train, batch_size=128, epochs=epochs)
      File "/usr/lib64/python3.4/site-packages/keras/engine/training.py", line 1358, in fit
        batch_size=batch_size)
      File "/usr/lib64/python3.4/site-packages/keras/engine/training.py", line 1246, in _standardize_user_data
        check_array_lengths(x, y, sample_weights)
      File "/usr/lib64/python3.4/site-packages/keras/engine/training.py", line 222, in _check_array_lengths
        set_x = set_of_lengths(inputs)
      File "/usr/lib64/python3.4/site-packages/keras/engine/training.py", line 220, in set_of_lengths
        return set([0 if y is None else y.shape[0] for y in x])
    TypeError: unhashable type: 'Dimension'


    import numpy as np
    from sklearn.model_selection import train_test_split
    from keras.models import Sequential, Model
    from keras.layers import (Convolution1D, MaxPooling1D, SimpleRNN, Dense,
                              Dropout, Input, Lambda)
    from keras.optimizers import Adam, RMSprop
    from keras import backend as K

    np.random.seed(1337)
    for run in range(0, 1):
        print ('source network..')
        print('run: ' + str(run))
        for i in range(1,nb_class+1):
            class_ind = np.where(y_all==i)
            Xi_trn, Xi_val_test, Yi_trn, Yi_val_test = train_test_split(X_all[class_ind[0],:,:], Y_all[class_ind[0],:], train_size=100, test_size=200)
            Xi_val, Xi_tst, Yi_val, Yi_tst = train_test_split(Xi_val_test, Yi_val_test, train_size=20)
            if i==1:
                X_train, Y_train, X_val, Y_val, X_test, Y_test = Xi_trn, Yi_trn, Xi_val, Yi_val, Xi_tst, Yi_tst
            else:
                X_train = np.concatenate((X_train, Xi_trn), axis=0)
                Y_train = np.concatenate((Y_train, Yi_trn), axis=0)
                X_val = np.concatenate((X_val, Xi_val), axis=0)
                Y_val = np.concatenate((Y_val, Yi_val), axis=0)
                X_test = np.concatenate((X_test, Xi_tst), axis=0)
                Y_test = np.concatenate((Y_test, Yi_tst), axis=0)

        num_epoch = 100
        batch_size = 128
        learning_rate = 1e-4
        decay_every_epochs = 1000
        decay_every = decay_every_epochs*X_train.shape[0]/batch_size
        decay_by = 5.0
        reg = 0e-4

        print('Build model...')
        model = Sequential()                                                                                                   

        model.add(Convolution1D(filters=32,kernel_size=6,padding='same',activation='relu',input_shape=X_train.shape[1:]))
        model.add(MaxPooling1D(pool_size=2))
        model.add(Convolution1D(filters=32,kernel_size=6,padding='same',activation='relu'))
        model.add(MaxPooling1D(pool_size=2))
        model.add(SimpleRNN(256, return_sequences=True))
        model.add(SimpleRNN(512, return_sequences=False))
        model.add(Dense(nb_class,activation='softmax'))

        opt = Adam(lr=learning_rate)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
        print(model.summary())

        print('Train...')
        history=model.fit(X_train, Y_train, batch_size=batch_size, epochs=num_epoch, validation_data=(X_val,Y_val))

        model.save_weights(str(run)+'.h5')
        in_source = model.layers[5].output

    #Target Network

        print('Build model...')
        model_t = Sequential()
        model_t.add(Convolution1D(filters=32,kernel_size=6,padding='same',activation='relu',input_shape=X_train.shape[1:]))
        model_t.add(MaxPooling1D(pool_size=2))
        model_t.add(Convolution1D(filters=32,kernel_size=6,padding='same',activation='relu'))
        model_t.add(MaxPooling1D(pool_size=2))
        model_t.add(SimpleRNN(256, return_sequences=True))
        model_t.add(SimpleRNN(512, return_sequences=False))
        model_t.add(Dense(nb_class,activation='softmax'))

# Loading pre-trained Weights
        model_t.load_weights(str(run)+'.h5',by_name=True)

        opt_t = Adam(lr=learning_rate)
        model_t.compile(loss='categorical_crossentropy', optimizer=opt_t, metrics=['accuracy'])
        print(model_t.summary())
        in_target = model_t.layers[5].output

    # Siamese Network

        def euclidean_distance(vects):
            x_siam, y_siam = vects
            return K.sqrt(K.maximum(K.sum(K.square(x_siam - y_siam), axis=1, keepdims=True), K.epsilon()))


        def eucl_dist_output_shape(shapes):
            shape1, shape2 = shapes
            return (shape1[0], 1)


        def contrastive_loss(y_true, y_pred):
            '''Contrastive loss from Hadsell-et-al.'06
            http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
            '''
            margin = 1
            return K.mean(y_true * K.square(y_pred) +
                          (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))


        def create_base_network(input_dim):
            '''Base network to be shared (eq. to feature extraction).
            '''
            seq = Sequential()
            seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
            seq.add(Dropout(0.1))
            seq.add(Dense(128, activation='relu'))
            seq.add(Dropout(0.1))
            seq.add(Dense(128, activation='relu'))
            return seq

        input_dim = 512

        base_network = create_base_network(input_dim)

        input_a = Input(shape=(input_dim,))
        input_b = Input(shape=(input_dim,))

        processed_a = base_network(input_a)
        processed_b = base_network(input_b)

        distance = Lambda(euclidean_distance,
                          output_shape=eucl_dist_output_shape)([processed_a, processed_b])

        model_s = Model([input_a, input_b], distance)

    # siamese training
        rms = RMSprop()
        model_s.compile(loss=contrastive_loss, optimizer=rms)
        model_s.fit([in_source, in_target], Y_train,
                    batch_size=128,
                    epochs=num_epoch)

The Siamese network used here is the one given as a Keras example, so I am also using the same contrastive loss function. Please help me resolve this issue.


1 Answer

The arguments passed to the fit function must be concrete NumPy arrays, not symbolic tensors. I'm not a Keras expert, but I believe the `in_source` and `in_target` in your code are symbolic, which is why `y.shape[0]` yields a `Dimension` object instead of an integer.
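To illustrate the distinction: `model.layers[i].output` is a symbolic tensor, but you can wrap the intermediate layer in a new `Model` and call `predict()` to obtain concrete NumPy features. Below is a minimal sketch with a tiny stand-in network; the layer sizes and input shape are placeholders, not your actual CRNN.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical small stand-in for the CRNN; shapes are assumptions.
model = keras.Sequential([
    keras.Input(shape=(16, 8)),
    layers.SimpleRNN(32, return_sequences=False),
    layers.Dense(4, activation='softmax'),
])

# model.layers[0].output is symbolic; fit() cannot consume it directly.
# Wrapping the intermediate output in a Model and calling predict()
# yields a concrete NumPy array that fit() can accept.
feature_extractor = keras.Model(inputs=model.input,
                                outputs=model.layers[0].output)
X = np.random.rand(5, 16, 8).astype('float32')
features = feature_extractor.predict(X, verbose=0)
print(features.shape)  # (5, 32): one 32-dim feature vector per sample
```

Feeding `features` (rather than the symbolic layer output) into the Siamese `fit` call would avoid the `Dimension` error, at the cost of freezing the CRNN during Siamese training.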

I think the solution requires a substantial change to your code: you ultimately want a single model rather than the three you have now. You need one model whose inputs feed into the source and target stacks, whose stacks then connect symbolically to the base-network stacks, and whose output is the distance, just as you have now.
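A minimal sketch of that single-model idea, using the functional API: two input branches stand in for the source and target CRNN stacks, a shared base network processes both, and a Lambda layer computes the distance. The branch sizes and input shape here are assumptions chosen to keep the example small, not your actual architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def euclidean_distance(vects):
    # Same formula as the Keras Siamese example, written with tf ops.
    x, y = vects
    return tf.sqrt(tf.maximum(
        tf.reduce_sum(tf.square(x - y), axis=1, keepdims=True), 1e-7))

def make_stack():
    # Simplified stand-in for one CRNN stack; sizes are placeholders.
    inp = keras.Input(shape=(16, 8))
    x = layers.Conv1D(8, 3, padding='same', activation='relu')(inp)
    x = layers.SimpleRNN(32)(x)
    return inp, x

in_src, feat_src = make_stack()   # source branch
in_tgt, feat_tgt = make_stack()   # target branch

# Shared base network applied to both branches' features.
base = keras.Sequential([layers.Dense(16, activation='relu')])
distance = layers.Lambda(euclidean_distance)([base(feat_src), base(feat_tgt)])

# One end-to-end model: raw inputs in, distance out.
model_s = keras.Model([in_src, in_tgt], distance)

Xa = np.random.rand(4, 16, 8).astype('float32')
d = model_s.predict([Xa, Xa], verbose=0)
print(d.shape)  # (4, 1): one non-negative distance per pair
```

With this structure, `fit` receives the raw NumPy input arrays for both branches, and gradients flow through the whole network, so the CRNN stacks keep training during the Siamese phase.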
