Training a graph neural network (GNN) with spektral to create embeddings

Posted 2024-06-17 10:58:57


I am working on a graph neural network (GNN) that produces an embedding of an input graph, to be used in other applications such as reinforcement learning.

I started from the TUDataset classification with GIN example in the spektral library and modified it to split the network into two parts. The first part produces the embedding and the second part performs the classification. My goal is to train this network with supervised learning on a dataset with graph labels, such as TUDataset, and after training use the first part (the embedding generator) in other applications.

I get different results on two different datasets. With this new approach, TUDataset shows better loss and accuracy, while another local dataset shows a significant increase in loss.

Can I get any feedback on whether my approach to creating the embeddings is appropriate, or any suggestions for further improvement?

Here is the code I use to generate the graph embeddings:

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.metrics import categorical_accuracy
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam

from spektral.data import DisjointLoader
from spektral.datasets import TUDataset
from spektral.layers import GINConv, GlobalAvgPool

################################################################################
# PARAMETERS
################################################################################
learning_rate = 1e-3  # Learning rate
channels = 128  # Hidden units
layers = 3  # GIN layers
epochs = 300  # Number of training epochs
batch_size = 32  # Batch size

################################################################################
# LOAD DATA
################################################################################
dataset = TUDataset("PROTEINS", clean=True)

# Parameters
F = dataset.n_node_features  # Dimension of node features
n_out = dataset.n_labels  # Dimension of the target

# Train/test split
idxs = np.random.permutation(len(dataset))
split = int(0.9 * len(dataset))
idx_tr, idx_te = np.split(idxs, [split])
dataset_tr, dataset_te = dataset[idx_tr], dataset[idx_te]

loader_tr = DisjointLoader(dataset_tr, batch_size=batch_size, epochs=epochs)
loader_te = DisjointLoader(dataset_te, batch_size=batch_size, epochs=1)

################################################################################
# BUILD MODEL
################################################################################
class GIN0(Model):
    def __init__(self, channels, n_layers):
        super().__init__()
        self.conv1 = GINConv(channels, epsilon=0, mlp_hidden=[channels, channels])
        self.convs = []
        for _ in range(1, n_layers):
            self.convs.append(
                GINConv(channels, epsilon=0, mlp_hidden=[channels, channels])
            )
        # Pool node features into one vector per graph; the output of
        # dense1 below is the graph embedding
        self.pool = GlobalAvgPool()
        self.dense1 = Dense(channels, activation="relu")

    def call(self, inputs):
        x, a, i = inputs
        x = self.conv1([x, a])
        for conv in self.convs:
            x = conv([x, a])
        x = self.pool([x, i])
        return self.dense1(x)


# Build model: GIN0 produces the graph embedding, model_op classifies it
model = GIN0(channels, layers)
model_op = Sequential()
model_op.add(Dropout(0.5, input_shape=(channels,)))
model_op.add(Dense(n_out, activation="softmax"))
opt = Adam(learning_rate=learning_rate)
loss_fn = CategoricalCrossentropy()


################################################################################
# FIT MODEL
################################################################################
@tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True)
def train_step(inputs, target):
    with tf.GradientTape(persistent=True) as tape:
        node2vec = model(inputs, training=True)
        predictions = model_op(node2vec, training=True)
        loss = loss_fn(target, predictions)
        loss += sum(model.losses)
    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    gradients2 = tape.gradient(loss, model_op.trainable_variables)
    opt.apply_gradients(zip(gradients2, model_op.trainable_variables))
    acc = tf.reduce_mean(categorical_accuracy(target, predictions))
    return loss, acc


print("Fitting model")
current_batch = 0
model_lss = model_acc = 0
for batch in loader_tr:
    lss, acc = train_step(*batch)

    model_lss += lss.numpy()
    model_acc += acc.numpy()
    current_batch += 1
    if current_batch == loader_tr.steps_per_epoch:
        model_lss /= loader_tr.steps_per_epoch
        model_acc /= loader_tr.steps_per_epoch
        print("Loss: {}. Acc: {}".format(model_lss, model_acc))
        model_lss = model_acc = 0
        current_batch = 0

################################################################################
# EVALUATE MODEL
################################################################################
def tolist(predictions):
    # Convert each row of predicted class probabilities to a plain tuple
    return [tuple(float(v) for v in row) for row in predictions]


loss_data = []
print("Testing model")
model_lss = model_acc = 0
for batch in loader_te:
    inputs, target = batch
    node2vec = model(inputs, training=False)
    predictions = model_op(node2vec, training=False)
    predictions_list = tolist(predictions)
    loss_data.append(list(zip(target, predictions_list)))
    model_lss += loss_fn(target, predictions)
    model_acc += tf.reduce_mean(categorical_accuracy(target, predictions))
model_lss /= loader_te.steps_per_epoch
model_acc /= loader_te.steps_per_epoch
print("Done. Test loss: {}. Test acc: {}".format(model_lss, model_acc))
for batchi in loss_data:
    for item in batchi:
        print(list(item), '\n')

Tags: from, import, self, target, model, batch, loader, dataset
1 Answer

#1 · Posted 2024-06-17 10:58:57

Your approach to generating the graph embeddings is correct: the GIN0 model returns a vector for a given input graph.
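For example, here is a minimal sketch (reusing the model, dataset_te, and batch_size defined in the question's code, after the training loop has run) of how the trained embedding part could be used on its own:

# Sketch: extract one embedding vector per graph with the trained GIN0 model
embed_loader = DisjointLoader(dataset_te, batch_size=batch_size, epochs=1)

all_embeddings = []
for inputs, _ in embed_loader:
    emb = model(inputs, training=False)  # shape: (graphs_in_batch, channels)
    all_embeddings.append(emb.numpy())
all_embeddings = np.concatenate(all_embeddings, axis=0)  # (n_graphs, channels)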

However, this part of the code looks odd:

gradients = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(gradients, model.trainable_variables))
gradients2 = tape.gradient(loss, model_op.trainable_variables)
opt.apply_gradients(zip(gradients2, model_op.trainable_variables))

What this does is compute the gradients in two separate passes, once for model's weights and once for model_op's, which is why the tape has to be persistent. It works, but it is redundant: both sets of weights can be updated with a single gradient call.

When you compute the loss inside a tf.GradientTape context, every computation used to arrive at the final value is tracked. That means if you call loss = foo(bar(x)) and then use that loss for a training step, the weights of both foo and bar will be updated.
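Concretely, the two passes can be collapsed into one gradient call over the concatenated variable lists, so the tape no longer needs to be persistent. A sketch, reusing model, model_op, opt, loss_fn, and loader_tr from the question:

@tf.function(input_signature=loader_tr.tf_signature(), experimental_relax_shapes=True)
def train_step(inputs, target):
    with tf.GradientTape() as tape:  # non-persistent: one gradient call below
        predictions = model_op(model(inputs, training=True), training=True)
        loss = loss_fn(target, predictions) + sum(model.losses)
    # One call updates the embedding network and the classifier together
    variables = model.trainable_variables + model_op.trainable_variables
    gradients = tape.gradient(loss, variables)
    opt.apply_gradients(zip(gradients, variables))
    return loss, tf.reduce_mean(categorical_accuracy(target, predictions))

This computes the same gradients as the original two-pass version, just in a single pass.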

Other than that, I don't see anything wrong with the code, so the difference you observe mostly comes down to the local dataset you are using.

Cheers
