How to implement dropout in TensorFlow

Posted on 2024-05-08 21:46:13


I applied dropout in my TensorFlow 3-layer NN implementation, but I get an error caused by the placeholder variable keep_prob.

TypeError: Cannot interpret feed_dict key as Tensor: Can not convert a int into a Tensor.

I wrote two functions: forward_propagation (implements the forward pass) and model (trains the parameters of the model). Here is a shortened implementation of both:

How should I pass the keep_prob value from the model function down to the forward_propagation function so that the model can be trained?


def forward_propagation(X, parameters, keep_prob):

    # Retrieve the parameters from the dictionary "parameters" 
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    #keep_prob = tf.placeholder(dtype=tf.float64)
    ### with dropout
    Z1 = tf.add(tf.matmul(W1, X), b1)                 # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                               # A1 = relu(Z1)
    A1 = tf.nn.dropout(A1, keep_prob)
    Z2 = tf.add(tf.matmul(W2, A1), b2)                # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                               # A2 = relu(Z2)
    A2 = tf.nn.dropout(A2, keep_prob)
    Z3 = tf.add(tf.matmul(W3, A2), b3)                # Z3 = np.dot(W3, A2) + b3
    ### with dropout

    return Z3

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, keep_prob = 1, print_cost = True, seed = 0):

    ops.reset_default_graph()                         
    tf.set_random_seed(seed)                          
    seed = seed                                       
    (n_x, m) = X_train.shape                          
    n_y = Y_train.shape[0]                            # n_y : output size
    costs = []                                      # To keep track of the cost


    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    keep_prob_ = tf.constant(keep_prob, dtype=tf.float32, name="keep_prob_")
    ### END CODE HERE ###

    # Initialize parameters
    parameters = initialize_parameters()


    # Forward propagation: Build the forward propagation in the tensorflow graph
    Z3 = forward_propagation(X, parameters, keep_prob)   # note: passes the Python number, not the tensor keep_prob_

    # Cost = loss function: Add cost function to tensorflow graph
    cost = compute_cost(Z3=Z3, Y=Y)

    # Backpropagation: Define the tensorflow optimizer. Use a GradientDescentOptimizer.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                       # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = fct_utils.random_mini_batches(X_train, Y_train, 
                                                        minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
                _ , minibatch_cost = sess.run([optimizer, cost], 
                                              feed_dict={Y: minibatch_Y, 
                                                         X: minibatch_X, 
                                                         keep_prob: keep_prob_})
                # ^ raises the TypeError: the int keep_prob is used as a feed_dict key

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every epoch
            if print_cost == True and epoch % 100 == 0:
                print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # let's save the parameters in a variable
        parameters = sess.run(parameters)
        print ("Parameters have been trained!")

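For reference, feed_dict keys must be tensors that exist in the graph, which is why a plain Python number as a key triggers the error above. A minimal sketch that reproduces the same TypeError (assuming TensorFlow 1.x):

import tensorflow as tf

x = tf.placeholder(tf.float32, name="x")
y = x * 2.0

with tf.Session() as sess:
    # OK: the key is a tensor, the value is a plain Python number
    print(sess.run(y, feed_dict={x: 3.0}))   # prints 6.0
    # TypeError: Cannot interpret feed_dict key as Tensor:
    # the key here is the int 1, which is not a tensor in the graph
    sess.run(y, feed_dict={1: 3.0})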
Tags: tf, tensorflow, a1, a2, size, train, seed
1 Answer
User
#1 · Posted on 2024-05-08 21:46:13

The positions of keep_prob and keep_prob_ should be swapped: in your code keep_prob is an int and keep_prob_ is a tensor, and only a tensor can be a key in feed_dict.

feed_dict={Y: minibatch_Y, X: minibatch_X, keep_prob_: keep_prob}
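Note that this alone only fixes the TypeError: model still builds the graph with the Python value (Z3 = forward_propagation(X, parameters, keep_prob)), so the fed value never reaches tf.nn.dropout. You also need to pass the tensor keep_prob_ into forward_propagation. The usual TF 1.x pattern is to make keep_prob a placeholder, build the graph against that tensor, and feed the Python float at run time. A minimal self-contained sketch of that pattern (the shapes and variable names here are illustrative, not taken from the question):

import numpy as np
import tensorflow as tf

# Placeholder with a default: runs with keep_prob = 1.0 (no dropout) unless fed.
keep_prob_ = tf.placeholder_with_default(1.0, shape=(), name="keep_prob")

X = tf.placeholder(tf.float32, shape=(4, None), name="X")
W1 = tf.get_variable("W1", shape=(3, 4))
b1 = tf.get_variable("b1", shape=(3, 1), initializer=tf.zeros_initializer())

A1 = tf.nn.relu(tf.add(tf.matmul(W1, X), b1))
A1 = tf.nn.dropout(A1, keep_prob_)   # the graph depends on the tensor, not on a Python number

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x = np.random.randn(4, 5).astype(np.float32)
    # training: the tensor is the key, the Python float is the value
    print(sess.run(A1, feed_dict={X: x, keep_prob_: 0.5}))
    # evaluation: nothing fed, keep_prob_ falls back to its default of 1.0
    print(sess.run(A1, feed_dict={X: x}))

Wired this way, forward_propagation would receive keep_prob_ instead of the number, and the same graph can run with dropout during training and without it at evaluation time.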
