Keras custom loss function per group of tensors

Posted on 2024-04-26 18:12:29


I am writing a custom loss function that needs to compute a ratio over the predicted values of each group. As a simplified example, here are my data and model code:

import pandas as pd
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Flatten


def main():
    df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"],
                      data=[[5, 10, "a", "1", 0],
                            [30, 20, "a", "1", 1],
                            [50, 40, "a", "1", 0],
                            [15, 20, "a", "2", 0],
                            [25, 30, "b", "2", 1],
                            [35, 40, "b", "1", 0],
                            [10, 80, "b", "1", 1]])
    features = ["feature_1", "feature_2"]
    conds_and_label = ["condition_1", "condition_2", "label"]
    X = df[features]
    Y = df[conds_and_label]
    model = my_model(input_shape=len(features))
    model.fit(X, Y, epochs=10, batch_size=128)
    model.evaluate(X, Y)


def custom_loss(conditions, y_pred):  # this is what I need help with
    conds = ["condition_1", "condition_2"]
    conditions["label_pred"] = y_pred
    g = conditions.groupby(by=conds,
                           as_index=False).apply(lambda x: x["label_pred"].sum() /
                                                           len(x)).reset_index(name="pred_ratio")
    # true_ratios will be a constant, external DataFrame. Simplified example here:
    true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"],
                               data=[["a", "1", 0.1],
                                     ["a", "2", 0.2],
                                     ["b", "1", 0.8],
                                     ["b", "2", 0.9]])
    merged = pd.merge(g, true_ratios, on=conds)
    merged["diff"] = merged["pred_ratio"] - merged["true_ratio"]
    return K.mean(K.abs(merged["diff"]))


def joint_loss(conds_and_label, y_pred):
    y_true = conds_and_label[:, 2]
    conditions = tf.gather(conds_and_label, [0, 1], axis=1)
    loss_1 = standard_loss(y_true=y_true, y_pred=y_pred)  # not shown
    loss_2 = custom_loss(conditions=conditions, y_pred=y_pred)
    return 0.5 * loss_1 + 0.5 * loss_2


def my_model(input_shape=None):
    model = Sequential()
    model.add(Dense(units=2, activation="relu", input_shape=(input_shape,)))
    model.add(Dense(units=1, activation='sigmoid'))
    model.add(Flatten())
    model.compile(loss=joint_loss, optimizer="Adam",
                  metrics=[joint_loss, custom_loss, "accuracy"])
    return model

What I need help with is the custom_loss function. As you can see, it is currently written as if the inputs were DataFrames. However, the inputs will be Keras tensors (with the TensorFlow backend), so I am trying to figure out how to convert the current code in custom_loss to use Keras/TF backend functions. For example, I searched online but could not find a way to do a groupby in Keras/TF to get the ratios I need.

Some context/explanation that may be helpful:

  1. My main loss function is joint_loss, which consists of standard_loss (not shown) and custom_loss. I only need help converting custom_loss.
  2. What custom_loss does is (illustrated with a small pandas sketch after this list):
    1. Group by the two condition columns (these two columns define the groups in the data)
    2. Get the ratio of predicted 1s to the total number of samples in each group of the batch
    3. Compare each pred_ratio to a set of true_ratios and take the difference
    4. Compute the mean absolute error over those differences
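
For concreteness, here is roughly what those steps produce on the toy data above in plain pandas (the predicted labels are made up, just to show the target output):

import pandas as pd

# same condition columns as the toy frame above, plus made-up predicted labels
df = pd.DataFrame({"condition_1": ["a", "a", "a", "a", "b", "b", "b"],
                   "condition_2": ["1", "1", "1", "2", "2", "1", "1"],
                   "label_pred":  [0, 1, 1, 0, 1, 0, 1]})

pred_ratios = (df.groupby(["condition_1", "condition_2"], as_index=False)["label_pred"]
                 .mean()
                 .rename(columns={"label_pred": "pred_ratio"}))
print(pred_ratios)
#   condition_1 condition_2  pred_ratio
# 0           a           1    0.666667
# 1           a           2    0.000000
# 2           b           1    0.500000
# 3           b           2    1.000000

These pred_ratio values would then be merged with true_ratios on the two condition columns, and the loss is the mean absolute difference between the two ratio columns.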

1 Answer

I eventually figured out a solution, although I would welcome some feedback on it (on certain parts in particular). Here is the solution:

import pandas as pd
import tensorflow as tf
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from tensorflow.python.ops import gen_array_ops


def main():
    df = pd.DataFrame(columns=["feature_1", "feature_2", "condition_1", "condition_2", "label"],
                      data=[[5, 10, "a", "1", 0],
                            [30, 20, "a", "1", 1],
                            [50, 40, "a", "1", 0],
                            [15, 20, "a", "2", 0],
                            [25, 30, "b", "2", 1],
                            [35, 40, "b", "1", 0],
                            [10, 80, "b", "1", 1]])
    df = pd.concat([df] * 500)  # making data artificially larger
    true_ratios = pd.DataFrame(columns=["condition_1", "condition_2", "true_ratio"],
                               data=[["a", "1", 0.1],
                                     ["a", "2", 0.2],
                                     ["b", "1", 0.8],
                                     ["b", "2", 0.9]])
    features = ["feature_1", "feature_2"]
    conditions = ["condition_1", "condition_2"]
    conds_ratios_label = conditions + ["true_ratio", "label"]
    df = pd.merge(df, true_ratios, on=conditions, how="left")
    X = df[features]
    Y = df[conds_ratios_label]
    # need to convert strings to ints because tensors can't mix strings with floats/ints
    mapping_1 = {"a": 1, "b": 2}
    mapping_2 = {"1": 1, "2": 2}
    Y.replace({"condition_1": mapping_1}, inplace=True)
    Y.replace({"condition_2": mapping_2}, inplace=True)
    X = tf.convert_to_tensor(X)
    Y = tf.convert_to_tensor(Y)
    model = my_model(input_shape=len(features))
    model.fit(X, Y, epochs=1, batch_size=64)
    print()
    print(model.evaluate(X, Y))


def custom_loss(conditions, true_ratios, y_pred):
    # push the sigmoid probabilities toward hard 0/1 values in a differentiable way
    y_pred = tf.sigmoid((y_pred - 0.5) * 1000)
    # treat each (condition_1, condition_2) row as one key and find the unique groups
    uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0])
    num_unique = tf.size(count)
    # sum the predictions within each group, then divide by the group sizes
    sums = tf.math.unsorted_segment_sum(data=y_pred, segment_ids=idx, num_segments=num_unique)
    lengths = tf.cast(count, tf.float32)
    pred_ratios = tf.divide(sums, lengths)
    # compare the mean predicted ratio to the mean true ratio
    mean_pred_ratios = tf.math.reduce_mean(pred_ratios)
    mean_true_ratios = tf.math.reduce_mean(true_ratios)
    diff = mean_pred_ratios - mean_true_ratios
    return K.mean(K.abs(diff))


def standard_loss(y_true, y_pred):
    return tf.losses.binary_crossentropy(y_true=y_true, y_pred=y_pred)


def joint_loss(conds_ratios_label, y_pred):
    y_true = conds_ratios_label[:, 3]
    true_ratios = conds_ratios_label[:, 2]
    conditions = tf.gather(conds_ratios_label, [0, 1], axis=1)
    loss_1 = standard_loss(y_true=y_true, y_pred=y_pred)
    loss_2 = custom_loss(conditions=conditions, true_ratios=true_ratios, y_pred=y_pred)
    return 0.5 * loss_1 + 0.5 * loss_2


def my_model(input_shape=None):
    model = Sequential()
    model.add(Dropout(0, input_shape=(input_shape,)))
    model.add(Dense(units=2, activation="relu"))
    model.add(Dense(units=1, activation='sigmoid'))
    model.add(Flatten())
    model.compile(loss=joint_loss, optimizer="Adam",
                  metrics=[joint_loss, "accuracy"],  # had to remove custom_loss because it takes 3 args now
                  run_eagerly=True)
    return model


if __name__ == '__main__':
    main()

The main update is custom_loss. I removed the creation of the true_ratios DataFrame from custom_loss and instead attach it to Y in main. custom_loss now takes 3 arguments, one of which is the true_ratios tensor. I had to use gen_array_ops.unique_with_counts_v2 and unsorted_segment_sum to get the per-group sums for each combination of conditions. I then get the length of each group in order to build pred_ratios (the ratio computed from y_pred for each group). Finally, I take the mean predicted ratio and the mean true ratio, and the absolute difference between them is my custom loss.
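
To show the grouping step in isolation, here is a small standalone example (toy tensors and values of my own, assuming TF 2.x with eager execution):

import tensorflow as tf
from tensorflow.python.ops import gen_array_ops

# hypothetical batch: 5 samples, conditions already encoded as integers
conditions = tf.constant([[1, 1],
                          [1, 1],
                          [1, 2],
                          [2, 1],
                          [2, 1]])
y_pred = tf.constant([0.9, 0.1, 0.8, 0.2, 0.7])  # already pushed toward 0/1

# axis=[0] treats each row (a pair of condition values) as one key,
# which is what plain tf.unique (1-D only) cannot do
uniques, idx, count = gen_array_ops.unique_with_counts_v2(conditions, [0])

# sum of predictions per group, in the order of `uniques`, then divide by group sizes
sums = tf.math.unsorted_segment_sum(y_pred, segment_ids=idx, num_segments=tf.size(count))
pred_ratios = sums / tf.cast(count, tf.float32)

print(uniques.numpy())      # [[1 1] [1 2] [2 1]]
print(pred_ratios.numpy())  # [0.5  0.8  0.45]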

A few things worth noting:

  1. Because the last layer of my model is a sigmoid, my y_pred values are probabilities between 0 and 1. I therefore need to convert them to 0s and 1s in order to compute the ratios required in the custom loss. At first I tried tf.round, but I realized it is not differentiable. So I replaced it with y_pred = tf.sigmoid((y_pred - 0.5) * 1000) inside custom_loss. This essentially pushes all y_pred values to 0 or 1, but in a differentiable way. It feels a bit "hacky", so please let me know if you have any feedback on this (see the sketch after this list).
  2. I noticed that my model only works with run_eagerly=True in model.compile(). Otherwise I get this error: "ValueError: Dimensions must be equal, but are 1 and 2 for ...". I am not sure why this happens, but the error originates from the line where I use tf.unsorted_segment_sum.
  3. unique_with_counts_v2 does not actually exist in the public TensorFlow API, but it does exist in the source code. I needed it to be able to group by multiple conditions (not just a single one).
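
To make point 1 concrete, here is a tiny illustration of the rounding trick (my own toy values, assuming TF 2.x with eager execution):

import tensorflow as tf

y_pred = tf.constant([0.3, 0.499, 0.501, 0.7])

with tf.GradientTape(persistent=True) as tape:
    tape.watch(y_pred)
    hard = tf.round(y_pred)                    # [0., 0., 1., 1.], but rounding has no gradient
    soft = tf.sigmoid((y_pred - 0.5) * 1000)   # ~[0., 0.27, 0.73, 1.], a smooth version of rounding

print(tape.gradient(hard, y_pred))  # None: no gradient flows back through tf.round
print(tape.gradient(soft, y_pred))  # ~[0., 196.6, 196.6, 0.]: differentiable, but only near 0.5

So gradients only reach samples whose predictions are close to the 0.5 threshold, which is part of why this feels hacky to me; a smaller factor than 1000 would widen that window at the cost of a less step-like output.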

Feel free to comment if you have any feedback on this, either in general or on the points above.
