TensorFlow neural network model

Posted 2024-04-23 21:08:25


I'm new to TensorFlow and working through a beginner tutorial on building a neural network. At first the program wouldn't run at all because of the following error:

Cannot feed value of shape (165,) for Tensor 'Placeholder_107:0', which has shape '(?, 1)'

I figured this was related to the tensor shapes, so I reshaped the variable Y with reshape(-1, 1). After that the program ran, but during training the cost and accuracy don't seem to change at all - the cost just stays at zero. Can anyone tell me what I'm doing wrong?
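
For illustration, a minimal sketch of that shape fix (the 165 comes from the error message; the zeros array is just a stand-in for the encoded labels - LabelEncoder returns a 1-D array, while the y_ placeholder expects a column vector):

import numpy as np

Y = np.zeros(165)      # 1-D labels, shape (165,) - what the error complains about
Y = Y.reshape(-1, 1)   # column vector, shape (165, 1) - matches the (?, 1) placeholder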

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf

# Importing the dataset
dataset = pd.read_csv('sonar.csv')
X = dataset.iloc[:, 0:60].values
Y = dataset.iloc[:, 60]


from sklearn.preprocessing import LabelEncoder
labely = LabelEncoder()
Y = labely.fit_transform(Y)

Y = Y.reshape(-1,1)

from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size = 0.2)

# hyperparameters
learnRate = 0.3
eps = 10
costHistory = np.empty(shape=[1], dtype=float)
input1 = X.shape[1]
output = 1

hiddenLayer1 = 16
hiddenLayer2 = 16
hiddenLayer3 = 16
hiddenLayer4 = 16

x1 = tf.placeholder(tf.float32,[None,input1])
w = tf.Variable(tf.zeros([input1,output]))
b = tf.Variable(tf.zeros([output]))
y_ = tf.placeholder(tf.float32,[None,output])


#------------------------------------------------


def multi_perceptron(x1, weight, bias):
    layer1 = tf.matmul(x1, weight['h1']) + bias['b1']
    layer1 = tf.nn.sigmoid(layer1)

    layer2 = tf.matmul(layer1, weight['h2']) + bias['b2']
    layer2 = tf.nn.sigmoid(layer2)

    layer3 = tf.matmul(layer2, weight['h3']) + bias['b3']
    layer3 = tf.nn.sigmoid(layer3)

    layer4 = tf.matmul(layer3, weight['h4']) + bias['b4']
    layer4 = tf.nn.relu(layer4)

    outputLayer = tf.matmul(layer4, weight['out']) + bias['out']
    return outputLayer

weight = {
            'h1' : tf.Variable(tf.truncated_normal([input1, hiddenLayer1])),
            'h2' : tf.Variable(tf.truncated_normal([hiddenLayer1, hiddenLayer2])),
            'h3' : tf.Variable(tf.truncated_normal([hiddenLayer2, hiddenLayer3])),
            'h4' : tf.Variable(tf.truncated_normal([hiddenLayer3, hiddenLayer4])),
            'out' : tf.Variable(tf.truncated_normal([hiddenLayer4, output]))
        }

bias = {
         'b1' : tf.Variable(tf.truncated_normal([hiddenLayer1])),
         'b2' : tf.Variable(tf.truncated_normal([hiddenLayer2])),
         'b3' : tf.Variable(tf.truncated_normal([hiddenLayer3])),
         'b4' : tf.Variable(tf.truncated_normal([hiddenLayer4])),
         'out' : tf.Variable(tf.truncated_normal([output]))   
        }

init = tf.global_variables_initializer()
y = multi_perceptron(x1,weight,bias)


costFuction = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = y,labels = y_))
trainStep = tf.train.GradientDescentOptimizer(learnRate).minimize(costFuction)

with tf.Session() as sesh:
    sesh.run(init)

    errHistory = []
    accHistory = []

    for e in range(eps):
        sesh.run(trainStep, feed_dict = {x1:xtrain, y_:ytrain})
        cost = sesh.run(costFuction, feed_dict = {x1:xtrain, y_:ytrain})
        costHistory = np.append(costHistory, cost)
        correctPred = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
        accu = tf.reduce_mean(tf.cast(correctPred, tf.float32))
        print('epoch: ', eps, ' - ', 'cost: ', cost, '-Training Accuracy: ', accu)        

This is the output:

epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_364:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_365:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_366:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_367:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_368:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_369:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_370:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_371:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_372:0", shape=(), dtype=float32)
epoch:  10  -  cost:  0.0 -Training Accuracy:  Tensor("Mean_373:0", shape=(), dtype=float32)

2 Answers

Since you are doing binary classification, you need to change the cost function from softmax_cross_entropy_with_logits to sigmoid_cross_entropy_with_logits. With a single output unit, the softmax over one logit is always 1, so the softmax cross-entropy is identically zero - that is why your cost stays at 0.0 and nothing trains.
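
A minimal sketch of that change, keeping the variable names from the question:

# sigmoid_cross_entropy_with_logits applies the sigmoid internally,
# so y should remain raw logits (no activation on the output layer)
costFuction = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=y, labels=y_))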

In that case, to compute the accuracy you need to threshold the prediction instead of using argmax - with a single output column, argmax along axis 1 is always 0, so it tells you nothing. Also note that your print statement shows the accuracy Tensor object itself; you have to evaluate it (e.g. with accu.eval() or sesh.run) to get a number:

correctPred = tf.equal(y_, tf.cast(y > 0, tf.float32))  # y holds logits: y > 0 means sigmoid(y) > 0.5
accu = tf.reduce_mean(tf.cast(correctPred, tf.float32))
print('epoch: ', e+1, ' - ', 'cost: ', cost, '- Training Accuracy: ', accu.eval(feed_dict = {x1:xtrain, y_:ytrain}))

Also, the cost does not converge with the learning rate you set (0.3). Re-run with a lower learning rate, e.g. 0.05.
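
Putting both suggestions together, a sketch of the adjusted optimizer and training loop (it assumes the sigmoid loss defined above and reuses the question's variable names):

learnRate = 0.05  # lower learning rate, as suggested above
trainStep = tf.train.GradientDescentOptimizer(learnRate).minimize(costFuction)

correctPred = tf.equal(y_, tf.cast(y > 0, tf.float32))  # threshold the logits at 0
accu = tf.reduce_mean(tf.cast(correctPred, tf.float32))

with tf.Session() as sesh:
    sesh.run(tf.global_variables_initializer())
    for e in range(eps):
        sesh.run(trainStep, feed_dict={x1: xtrain, y_: ytrain})
        # evaluate cost and accuracy so numbers (not Tensor objects) are printed
        cost, acc = sesh.run([costFuction, accu],
                             feed_dict={x1: xtrain, y_: ytrain})
        print('epoch:', e + 1, '- cost:', cost, '- training accuracy:', acc)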

Regards, Chandra
