Implementing gradient penalty loss with TensorFlow 2



Good morning,

I am trying to implement the improved WGAN described in this paper for 1D data: https://arxiv.org/pdf/1704.00028.pdf

It has already been implemented as an example in the keras-contrib GitHub repository: https://github.com/keras-team/keras-contrib/blob/master/examples/improved_wgan.py However, that implementation of the gradient penalty loss no longer works with TF2: K.gradients() returns [None], which leads to

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:505 train_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:467 train_step  **
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:143 __call__
        losses = self.call(y_true, y_pred)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:246 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    <ipython-input-7-4f0896d0107b>:104 gradient_penalty_loss
        gradients_sqr = K.square(gradients)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:2189 square
        return math_ops.square(x)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_math_ops.py:9964 square
        "Square", x=x, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:488 _apply_op_helper
        (input_name, err))

    ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.

Here is a complete example reproducing the problem: https://colab.research.google.com/drive/11dcMKoiCigTnEn7QvmjqLNrJdmFztByT

Does anyone know what has changed? Any idea how to fix it?

Update: the following ignores the error while the backward graph is being built, and training then appears to run:

import numpy as np
from tensorflow.keras import backend as K

def gradient_penalty_loss(y_true, y_pred, averaged_samples):
  # In TF2 eager mode K.gradients() returns [None] here.
  gradients = K.gradients(y_pred, averaged_samples)[0]
  try:
    gradients_sqr = K.square(gradients)
  except ValueError:
    # K.square(None) raises, so the whole penalty is silently replaced by 0.
    print("Gradients returned None")
    return 0
  # Per-sample L2 norm of the gradients over all non-batch axes.
  gradients_sqr_sum = K.sum(gradients_sqr, axis=np.arange(1, len(gradients_sqr.shape)))
  gradient_l2_norm = K.sqrt(gradients_sqr_sum)
  # Penalize any deviation of the norm from 1.
  gradient_penalty = K.square(1 - gradient_l2_norm)

  return K.mean(gradient_penalty)

Nevertheless, the loss keeps getting higher and higher, so is the gradient penalty loss simply being ignored? (loss plot attached)


1 Answer

If you do what you suggested in your update, TF will simply ignore the loss function.
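In TF2 the gradient itself has to be taken with tf.GradientTape rather than K.gradients. Below is a minimal sketch of the penalty term written that way; the critic model and the interpolated averaged_samples tensor are assumed to be produced inside the training step, and the function name is only illustrative (it is not code from the keras-contrib example):

import tensorflow as tf

def gradient_penalty_loss_tf2(critic, averaged_samples):
    # averaged_samples: random interpolation between a real and a fake batch;
    # it must be built inside the training step so the tape can watch it.
    with tf.GradientTape() as tape:
        tape.watch(averaged_samples)
        y_pred = critic(averaged_samples, training=True)
    gradients = tape.gradient(y_pred, averaged_samples)
    # Per-sample L2 norm over all non-batch axes, penalised for deviating from 1.
    gradients_sqr_sum = tf.reduce_sum(tf.square(gradients),
                                      axis=list(range(1, len(gradients.shape))))
    gradient_l2_norm = tf.sqrt(gradients_sqr_sum)
    return tf.reduce_mean(tf.square(1.0 - gradient_l2_norm))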

With TensorFlow 2 it no longer seems possible to implement this the old way. I ended up restructuring my code to fit the new way of building models. My suggestions:

  1. Create the generator and discriminator models with Keras
  2. Join them by subclassing tf.keras.Model, like the WGAN at https://github.com/timsainb/tensorflow2-generative-models (see the sketch below)
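To make point 2 concrete, here is a rough, untested sketch (not code taken from that repository) of a subclassed tf.keras.Model whose custom train_step computes the Wasserstein critic loss plus the gradient penalty with nested GradientTapes; the generator and critic are assumed to be ordinary Keras models for 1D data of shape (batch, features), built elsewhere:

import tensorflow as tf

class WGANGP(tf.keras.Model):
    def __init__(self, generator, critic, latent_dim, gp_weight=10.0):
        super().__init__()
        self.generator = generator
        self.critic = critic
        self.latent_dim = latent_dim
        self.gp_weight = gp_weight

    def compile(self, c_optimizer, g_optimizer):
        super().compile()
        self.c_optimizer = c_optimizer
        self.g_optimizer = g_optimizer

    def train_step(self, real_samples):
        batch_size = tf.shape(real_samples)[0]
        noise = tf.random.normal([batch_size, self.latent_dim])

        # Critic update: Wasserstein loss plus the weighted gradient penalty.
        with tf.GradientTape() as tape:
            fake_samples = self.generator(noise, training=True)
            real_scores = self.critic(real_samples, training=True)
            fake_scores = self.critic(fake_samples, training=True)

            # Gradient penalty on random interpolations (nested tape, so the
            # second-order gradients reach the critic weights).
            alpha = tf.random.uniform([batch_size, 1], 0.0, 1.0)
            interpolated = alpha * real_samples + (1.0 - alpha) * fake_samples
            with tf.GradientTape() as gp_tape:
                gp_tape.watch(interpolated)
                interp_scores = self.critic(interpolated, training=True)
            grads = gp_tape.gradient(interp_scores, interpolated)
            norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1))
            gp = tf.reduce_mean(tf.square(norm - 1.0))

            c_loss = (tf.reduce_mean(fake_scores) - tf.reduce_mean(real_scores)
                      + self.gp_weight * gp)
        c_grads = tape.gradient(c_loss, self.critic.trainable_variables)
        self.c_optimizer.apply_gradients(zip(c_grads, self.critic.trainable_variables))

        # Generator update: maximise the critic score of generated samples.
        with tf.GradientTape() as tape:
            g_loss = -tf.reduce_mean(
                self.critic(self.generator(noise, training=True), training=True))
        g_grads = tape.gradient(g_loss, self.generator.trainable_variables)
        self.g_optimizer.apply_gradients(zip(g_grads, self.generator.trainable_variables))

        return {"c_loss": c_loss, "g_loss": g_loss}

Compiled with two optimizers and trained with fit(), the penalty is part of the critic loss computed under the tape, so it can no longer be silently dropped the way the try/except workaround drops it.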
