Error in backpropagation in a Python neural network

Posted 2024-04-25 05:46:29


The damn thing just won't learn. Sometimes the weights seem to become NaN.

I haven't played with different numbers of hidden layers/inputs/outputs, but the bug appears consistent across different hidden-layer sizes.

from __future__ import division
import numpy
import matplotlib.pyplot
import random

class Net:
    def __init__(self, *sizes):
        sizes = list(sizes)
        sizes[0] += 1
        self.sizes = sizes
        self.weights = [numpy.random.uniform(-1, 1, (sizes[i+1],sizes[i])) for i in range(len(sizes)-1)]

    @staticmethod
    def activate(x):    
        return 1/(1+numpy.exp(-x))


    def y(self, x_):
        x = numpy.concatenate(([1], numpy.atleast_1d(x_.copy())))
        o = [x] #o[i] is the (activated) output of hidden layer i, "hidden layer 0" is inputs
        for weight in self.weights[:-1]:
            x = weight.dot(x)
            x = Net.activate(x)
            o.append(x)
        o.append(self.weights[-1].dot(x))
        return o    

    def __call__(self, x):
        return self.y(x)[-1]

    def delta(self, x, t):
        o = self.y(x)
        delta = [(o[-1]-t) * o[-1] * (1-o[-1])]
        for i, weight in enumerate(reversed(self.weights)):
            delta.append(weight.T.dot(delta[-1]) * o[-i-2] * (1-o[-i-2]))
        delta.reverse()
        return o, delta            

    def train(self, inputs, outputs, epochs=100, rate=.1):
        for epoch in range(epochs):
            pairs = zip(inputs, outputs)
            random.shuffle(pairs)
            for x, t in pairs: #shuffle? subset? 
                o, d = self.delta(x, t)
                for layer in range(len(self.sizes)-1):
                    self.weights[layer] -=  rate * numpy.outer(o[layer+1], d[layer])


n = Net(1, 4, 1)
x = numpy.linspace(0, 2*3.14, 10)
t = numpy.sin(x)
matplotlib.pyplot.plot(x, t, 'g')
matplotlib.pyplot.plot(x, map(n, x), 'r')
n.train(x, t)
print n.weights
matplotlib.pyplot.plot(x, map(n, x), 'b')
matplotlib.pyplot.show()

2 answers

I fixed it! Thanks for the suggestions. I worked through the numbers by hand and found that my o and delta values were correct, but I was multiplying the wrong ones together. That's why I now use numpy.outer(d[layer+1], o[layer]) instead of numpy.outer(o[layer+1], d[layer]).

I was also skipping the update of one layer. That's why I changed for layer in range(self.hidden_layers) to for layer in range(self.hidden_layers+1).

I'll add that I caught one bug before posting: my output-layer delta was incorrect because my network (intentionally) does not activate the final output, but the delta was being computed as if it did.
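To spell that out (my own notation, not part of the original answer): with squared error E = ½(y − t)² and an unactivated, linear output y, the output delta is simply ∂E/∂y = y − t. If the output went through the sigmoid, the delta would instead be (y − t)·y·(1 − y), which is what the commented-out line in the code below computes. Since the final layer is deliberately linear, delta = [o[-1]-t] is the correct starting point for backpropagation.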

Debug first with a network that has one hidden layer and one hidden unit, then move on to a model with 2 inputs, 3 hidden layers of 2 neurons each, and 2 outputs.
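For reference, a quick sketch (mine, not from the original answer) of how those two debugging configurations map onto the Net constructor defined in the code below; the variable names are hypothetical:

# The two architectures mentioned above, as constructor calls
# (arguments are: input size, hidden layer sizes..., output size).
tiny = Net(1, 1, 1)          # 1 input, one hidden layer with 1 unit, 1 output
bigger = Net(2, 2, 2, 2, 2)  # 2 inputs, 3 hidden layers of 2 units each, 2 outputs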

from __future__ import division
import numpy
import scipy
import scipy.special
import matplotlib.pyplot
#from pylab import *

#numpy.random.seed(23)

def nmap(f, x):
    return numpy.array(map(f, x))

class Net:
    def __init__(self, *sizes):
        self.hidden_layers = len(sizes)-2
        self.weights = [numpy.random.uniform(-1, 1, (sizes[i+1],sizes[i])) for i in range(self.hidden_layers+1)]

    @staticmethod
    def activate(x):
        return scipy.special.expit(x)
        #return 1/(1+numpy.exp(-x))

    @staticmethod
    def activate_(x):
        s = scipy.special.expit(x)
        return s*(1-s)

    def y(self, x):
        o = [numpy.array(x)] #o[i] is the (activated) output of hidden layer i, "hidden layer 0" is inputs and not activated
        for weight in self.weights[:-1]:
            o.append(Net.activate(weight.dot(o[-1])))
        o.append(self.weights[-1].dot(o[-1]))
#        for weight in self.weights:
#            o.append(Net.activate(weight.dot(o[-1])))
        return o

    def __call__(self, x):
        return self.y(x)[-1]

    def delta(self, x, t):
        x = numpy.array(x)
        t = numpy.array(t)
        o = self.y(x)
        #delta = [(o[-1]-t) * o[-1] * (1-o[-1])]
        delta = [o[-1]-t]
        for i, weight in enumerate(reversed(self.weights)):
            delta.append(weight.T.dot(delta[-1]) * o[-i-2] * (1-o[-i-2]))
        delta.reverse() #surely i need this
        return o, delta

    def train(self, inputs, outputs, epochs=1000, rate=.1):
        errors = []
        for epoch in range(epochs):
            for x, t in zip(inputs, outputs): #shuffle? subset?
                o, d = self.delta(x, t)
                for layer in range(self.hidden_layers+1):
                    grad = numpy.outer(d[layer+1], o[layer])
                    self.weights[layer] -=  rate * grad

        return errors

    def rmse(self, inputs, outputs):
        # flatten the (n, 1, 1) predictions so the subtraction lines up with outputs
        return ((outputs - nmap(self, inputs).ravel())**2).sum()**.5/len(inputs)



n = Net(1, 8, 1)
X = numpy.linspace(0, 2*3.1415, 10)
T = numpy.sin(X)
Y = map(n, X)
Y = numpy.array([y[0,0] for y in Y])
matplotlib.pyplot.plot(X, T, 'g')
matplotlib.pyplot.plot(X, Y, 'r')
print 'output successful'
print n.rmse(X, T)
errors = n.train(X, T)
print 'tried to train successfully'
print n.rmse(X, T)
Y = map(n, X)
Y = numpy.array([y[0,0] for y in Y])
matplotlib.pyplot.plot(X, Y, 'b')
matplotlib.pyplot.show()

I haven't hunted for the specific bug in your code, but could you try the following to narrow the problem down further? Otherwise searching for the needle in the haystack gets tedious.

1) Try a real dataset where you know what to expect, e.g. MNIST, and/or standardize your data, because your weights can end up as NaN if they get too small.
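A minimal sketch of the standardization part (my own code, not from the original answer; the X array mirrors the one used elsewhere in this post):

import numpy

X = numpy.linspace(0, 2 * numpy.pi, 10)
X_std = (X - X.mean()) / X.std()   # zero mean, unit variance inputs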

2) Try different learning rates and plot the cost function vs. the number of epochs to check whether you are converging. It should look something like this (note that I used minibatch learning and averaged the minibatches for each epoch):

[image: training cost decreasing over the epochs and flattening out as it converges]
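A minimal sketch of what that could look like here (my own code, assuming the Net class, its rmse() method, and the X, T arrays from the first answer's script): train one epoch at a time and record the RMSE after each pass.

import matplotlib.pyplot

n = Net(1, 8, 1)
errors = []
for epoch in range(1000):
    n.train(X, T, epochs=1)        # a single pass over the training pairs
    errors.append(n.rmse(X, T))    # cost after this epoch

matplotlib.pyplot.plot(errors)
matplotlib.pyplot.xlabel('epoch')
matplotlib.pyplot.ylabel('RMSE')
matplotlib.pyplot.show()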

3) I see you are using a sigmoid activation. Your implementation is correct, but to make it numerically more stable, replace 1.0 / (1.0 + np.exp(-z)) with expit(z) from scipy.special (the same function, but more efficient).

4) Do gradient checking. Here you compare the analytical gradient from backpropagation with a numerically approximated gradient:

∂J/∂w ≈ (J(w + ε) − J(w)) / ε

Or, a better way that yields a more accurate approximation of the gradient is to compute the symmetric (or centered) difference quotient given by the two-point formula:

∂J/∂w ≈ (J(w + ε) − J(w − ε)) / (2ε)
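A rough sketch of such a gradient check against the Net class from the first answer (my own code, not part of the original answer; check_gradients, eps and the squared-error cost are choices I made): perturb each weight by ±eps, recompute the cost, and compare the centered-difference estimate with the backprop gradient.

import numpy

def check_gradients(net, x, t, eps=1e-5):
    o, d = net.delta(x, t)
    for layer in range(net.hidden_layers + 1):
        analytic = numpy.outer(d[layer + 1], o[layer])    # backprop gradient
        numeric = numpy.zeros_like(net.weights[layer])
        for idx in numpy.ndindex(*net.weights[layer].shape):
            orig = net.weights[layer][idx]
            net.weights[layer][idx] = orig + eps
            j_plus = 0.5 * ((net(x) - t) ** 2).sum()      # J(w + eps)
            net.weights[layer][idx] = orig - eps
            j_minus = 0.5 * ((net(x) - t) ** 2).sum()     # J(w - eps)
            net.weights[layer][idx] = orig                # restore the weight
            numeric[idx] = (j_plus - j_minus) / (2 * eps) # centered difference
        print layer, numpy.abs(analytic - numeric).max()  # should be very small if backprop is correct

#check_gradients(n, X[3], T[3])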

PS: If you are interested and find it useful, I have a working plain-NumPy neural network implementation here.
