How can I implement theano.tensor.Lop in TensorFlow?

Posted 2024-06-16 11:22:06


Recently, I have been rewriting some Theano code in TensorFlow. However, I ran into a problem: I don't know how to express the Lop operator in TensorFlow. The screenshot below shows the API of theano.tensor.Lop.

[Screenshot: theano.tensor.Lop API documentation]
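
For context (a summary of the Theano docs, not part of the original post): T.Lop(f, wrt, eval_points) computes the "L-operator", i.e. the vector-Jacobian product v^T * (df/dwrt), where v is eval_points. A minimal sketch of its behavior:

    import theano
    import theano.tensor as T

    x = T.vector('x')
    f = x ** 2                    # Jacobian of f w.r.t. x is diag(2 * x)
    v = T.vector('v')
    vjp = T.Lop(f, x, v)          # v^T * (df/dx); same shape as x
    print(theano.function([x, v], vjp)([1., 2.], [3., 4.]))  # -> [ 6. 16.]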

Here is the original Theano code:

import theano.tensor as T

def svgd_gradient(X0):

    hidden, _, mse = discrim(X0)
    grad = -1.0 * T.grad(mse.sum(), X0)

    kxy, neighbors, h = rbf_kernel(hidden)  #TODO

    coff = T.exp( - T.sum((hidden[neighbors] - hidden)**2, axis=1) / h**2 / 2.0 )
    v = coff.dimshuffle(0, 'x') * (-hidden[neighbors] + hidden) / h**2

    X1 = X0[neighbors]
    hidden1, _, _ = discrim(X1)
    dxkxy = T.Lop(hidden1, X1, v)  # vector-Jacobian product: v^T * d(hidden1)/d(X1)

    svgd_grad = grad + dxkxy / 2.
    return grad, svgd_grad, dxkxy

I tried the following approach, but the dimensions come out wrong:

def svgd_gradient(self, x0):
    hidden, _, mse = self.discriminator(x0)
    # tf.gradients returns a list; take its single element
    grad = -tf.gradients(tf.reduce_sum(mse), x0)[0]

    kxy, neighbors, h = self.rbf_kernel(hidden)

    # tf.gather is the TensorFlow analogue of Theano's hidden[neighbors]
    coff = tf.exp(-tf.reduce_sum((tf.gather(hidden, neighbors) - hidden)**2, axis=1) / h**2 / 2.0)
    v = tf.expand_dims(coff, axis=1) * (-tf.gather(hidden, neighbors) + hidden) / h**2

    x1 = tf.gather(x0, neighbors)
    hidden1, _, _ = self.discriminator(x1, reuse=True)
    dxkxy = self.Lop(hidden1, x1, v)

    svgd_grad = grad + dxkxy / 2
    return grad, svgd_grad, dxkxy

def Lop(self, f, wrt, v):
    # This is where the shapes go wrong: tf.gradients(f, wrt) already sums
    # over all components of f, so multiplying the result by v afterwards
    # is not the vector-Jacobian product that Theano's T.Lop computes.
    Lop = tf.multiply(tf.gradients(f, wrt), v)
    return Lop
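
For what it's worth, Theano's T.Lop(f, wrt, v) weights the components of f by v before differentiating, and tf.gradients exposes exactly this vector-Jacobian product through its grad_ys argument. A minimal sketch of a replacement Lop, assuming TensorFlow 1.x graph mode as in the code above:

    import tensorflow as tf

    def Lop(f, wrt, v):
        # tf.gradients with grad_ys=v back-propagates v through f,
        # yielding the vector-Jacobian product v^T * (df/dwrt) --
        # the same quantity Theano's T.Lop computes.
        return tf.gradients(ys=f, xs=wrt, grad_ys=v)[0]

With this version, dxkxy = self.Lop(hidden1, x1, v) should come out with the same shape as x1, matching the Theano original.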
