I noticed that TensorFlow's automatic differentiation does not give the same values as finite differences when the loss function converts its input to a numpy array in order to compute the output. Here is a minimal working example of the problem:
import tensorflow as tf
import numpy as np
def lossFn(inputTensor):
    # Input is a rank-2 square tensor
    return tf.linalg.trace(inputTensor @ inputTensor)
def lossFnWithNumpy(inputTensor):
    # Same function, but converts the input to a numpy array before computing the trace
    inputArray = inputTensor.numpy()
    return tf.linalg.trace(inputArray @ inputArray)
N = 2
tf.random.set_seed(0)
randomTensor = tf.random.uniform([N, N])
# Prove that the two functions give the same output; evaluates to exactly zero
print(lossFn(randomTensor) - lossFnWithNumpy(randomTensor))
theoretical, numerical = tf.test.compute_gradient(lossFn, [randomTensor])
# These two values match
print(theoretical[0])
print(numerical[0])
theoretical, numerical = tf.test.compute_gradient(lossFnWithNumpy, [randomTensor])
# The theoretical value is [0 0 0 0]
print(theoretical[0])
print(numerical[0])
The function tf.test.compute_gradient computes the "theoretical" gradient using automatic differentiation and the numerical gradient using finite differences. As the code shows, if .numpy() is used inside the loss function, automatic differentiation does not compute a gradient.

Can someone explain why?
From the guide Introduction to Gradients and Automatic Differentiation:

The numpy value is converted back into a constant tensor in the call to tf.linalg.trace, and TensorFlow cannot compute gradients with respect to a constant. In other words, calling .numpy() takes the value out of the computation graph, so the gradient tape loses the connection back to the input, and the theoretical gradient comes out as zero.
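This tape-breaking behavior can be demonstrated directly with tf.GradientTape, without going through tf.test.compute_gradient. A minimal sketch (the variable names are illustrative): when every op stays inside TensorFlow the gradient of trace(X @ X) is the non-None tensor 2 * transpose(X), but once .numpy() is called the result re-enters the graph as a constant and the tape returns None.

```python
import tensorflow as tf

x = tf.Variable(tf.random.uniform([2, 2]))

# Gradient flows: every operation stays inside TensorFlow,
# so the tape can trace the loss back to x.
with tf.GradientTape() as tape:
    loss = tf.linalg.trace(x @ x)
grad = tape.gradient(loss, x)
print(grad)  # a tensor equal to 2 * tf.transpose(x)

# Gradient is broken: .numpy() leaves the graph, and the numpy
# result is converted back into a constant with no link to x.
with tf.GradientTape() as tape:
    xNumpy = x.numpy()
    loss = tf.linalg.trace(xNumpy @ xNumpy)
gradBroken = tape.gradient(loss, x)
print(gradBroken)  # None
```

tape.gradient returning None (rather than zeros) is how GradientTape reports a disconnected target; tf.test.compute_gradient instead reports the disconnection as an all-zero theoretical Jacobian.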