An efficient and friendly graph neural network library for TensorFlow 1.x and 2.x.


Inspired by rusty1s/pytorch_geometric, we build a GNN library for TensorFlow.

Efficient and Friendly

We use the message-passing mechanism to implement graph neural networks, which is more efficient than dense-matrix based implementations and more friendly than sparse-matrix based ones. In addition, we provide simple and elegant APIs for complex GNN operations. The following example constructs a graph and applies a multi-head Graph Attention Network (GAT) to it:

# coding=utf-8
import numpy as np
import tf_geometric as tfg
import tensorflow as tf

graph = tfg.Graph(
    x=np.random.randn(5, 20),  # 5 nodes, 20 features
    edge_index=[[0, 0, 1, 3], [1, 2, 2, 1]]  # 4 undirected edges
)
print("Graph Desc: \n", graph)

graph.convert_edge_to_directed()  # pre-process edges
print("Processed Graph Desc: \n", graph)
print("Processed Edge Index:\n", graph.edge_index)

# Multi-head Graph Attention Network (GAT)
gat_layer = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)
output = gat_layer([graph.x, graph.edge_index])
print("Output of GAT: \n", output)

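Beyond a single forward pass, a GAT layer can be trained like any other Keras layer. The following is a minimal training sketch under TensorFlow 2.x eager execution; the dummy node labels, the single attention head, and the choice of optimizer are illustrative assumptions, not part of the library's own examples.

# coding=utf-8
# A minimal node-classification training sketch (TensorFlow 2.x eager mode assumed).
import numpy as np
import tensorflow as tf
import tf_geometric as tfg

graph = tfg.Graph(
    x=np.random.randn(5, 20).astype(np.float32),  # 5 nodes, 20 features
    edge_index=[[0, 0, 1, 3], [1, 2, 2, 1]]  # 4 undirected edges
)
graph.convert_edge_to_directed()

num_classes = 3  # illustrative assumption
# A single attention head, so the layer output can be used directly as class logits
gat_layer = tfg.layers.GAT(units=num_classes, num_heads=1)
labels = tf.constant([0, 1, 2, 1, 0])  # dummy node labels, purely illustrative
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)

for step in range(20):
    with tf.GradientTape() as tape:
        logits = gat_layer([graph.x, graph.edge_index])
        losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
        loss = tf.reduce_mean(losses)
    grads = tape.gradient(loss, gat_layer.trainable_variables)
    optimizer.apply_gradients(zip(grads, gat_layer.trainable_variables))
    print("step:", step, "loss:", loss.numpy())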

Installation

Requirements:

  • Operating System: Windows / Linux / macOS
  • Python: version >= 3.5
  • Python packages:
    • tensorflow / tensorflow-gpu: >= 1.14.0 or >= 2.0.0b1
    • numpy >= 1.17.4
    • networkx >= 2.1
    • scipy >= 1.1.0

Install with one of the following commands:

pip install -U tf_geometric # this will not install the tensorflow/tensorflow-gpu package

pip install -U tf_geometric[tf1-cpu] # this will install TensorFlow 1.x CPU version

pip install -U tf_geometric[tf1-gpu] # this will install TensorFlow 1.x GPU version

pip install -U tf_geometric[tf2-cpu] # this will install TensorFlow 2.x CPU version

pip install -U tf_geometric[tf2-gpu] # this will install TensorFlow 2.x GPU version
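After installing, a quick import check confirms that tf_geometric can be loaded and which TensorFlow build it will run against (a minimal sanity check, not an official verification script):

# Minimal post-install sanity check
import tensorflow as tf
import tf_geometric as tfg

print("TensorFlow version:", tf.__version__)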

OOP and Functional APIs

We provide both OOP and functional APIs, with which you can build some cool things.

# coding=utf-8
import os

# Enable GPU 0
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tf_geometric as tfg
import tensorflow as tf
import numpy as np
from tf_geometric.utils.graph_utils import convert_edge_to_directed


# ==================================== Graph Data Structure ====================================
# In tf_geometric, graph data can be either individual Tensors or Graph objects
# A graph usually consists of x (node features), edge_index and edge_weight (optional)

# Node Features => (num_nodes, num_features)
x = np.random.randn(5, 20).astype(np.float32)  # 5 nodes, 20 features

# Edge Index => (2, num_edges)
# Each column of edge_index (u, v) represents a directed edge from u to v.
# Note that it does not cover the edge from v to u. You should provide (v, u) to cover it.
# This is not convenient for users.
# Thus, we allow users to provide edge_index in undirected form and convert it later.
# That is, you only need to provide (u, v), and it is converted to (u, v) and (v, u) with the `convert_edge_to_directed` method.
edge_index = np.array([
    [0, 0, 1, 3],
    [1, 2, 2, 1]
])

# Edge Weight => (num_edges)
edge_weight = np.array([0.9, 0.8, 0.1, 0.2]).astype(np.float32)

# Make the edge_index directed such that we can use it as the input of GCN
edge_index, [edge_weight] = convert_edge_to_directed(edge_index, [edge_weight])

# We can convert these numpy arrays to TensorFlow Tensors and pass them to gnn functions
outputs = tfg.nn.gcn(
    tf.Variable(x),
    tf.constant(edge_index),
    tf.constant(edge_weight),
    tf.Variable(tf.random.truncated_normal([20, 2]))  # GCN Weight
)
print(outputs)

# Usually, we use a graph object to manage this information
# edge_weight is optional; you can set it to None if you don't need it
graph = tfg.Graph(x=x, edge_index=edge_index, edge_weight=edge_weight)

# You can easily convert these numpy arrays to Tensors with the Graph object API
graph.convert_data_to_tensor()

# Then, we can use them without too much manual conversion
outputs = tfg.nn.gcn(
    graph.x,
    graph.edge_index,
    graph.edge_weight,
    tf.Variable(tf.random.truncated_normal([20, 2])),  # GCN Weight
    cache=graph.cache  # GCN uses caches to avoid re-computing the normalized edge information
)
print(outputs)

# For algorithms that deal with batches of graphs, we can pack a batch of graphs into a BatchGraph object
# A BatchGraph wraps a batch of graphs into a single graph, where each node has a unique index and a graph index.
# The node_graph_index is the index of the corresponding graph for each node in the batch.
# The edge_graph_index is the index of the corresponding graph for each edge in the batch.
batch_graph = tfg.BatchGraph.from_graphs([graph, graph, graph, graph])

# We can reversely split a BatchGraph object back into Graph objects
graphs = batch_graph.to_graphs()

# Graph pooling algorithms often rely on such a batch data structure
# Most of them accept a BatchGraph's data as input and output a feature vector for each graph in the batch
outputs = tfg.nn.mean_pool(batch_graph.x, batch_graph.node_graph_index, num_graphs=batch_graph.num_graphs)
print(outputs)

# We provide some advanced graph pooling operations such as topk_pool
node_score = tfg.nn.gcn(
    batch_graph.x,
    batch_graph.edge_index,
    batch_graph.edge_weight,
    tf.Variable(tf.random.truncated_normal([20, 1])),  # GCN Weight
    cache=batch_graph.cache  # GCN uses caches to avoid re-computing the normalized edge information
)
node_score = tf.reshape(node_score, [-1])
topk_node_index = tfg.nn.topk_pool(batch_graph.node_graph_index, node_score, ratio=0.6)
print(topk_node_index)


# ==================================== Built-in Datasets ====================================
# All graph data are in numpy format
train_data, valid_data, test_data = tfg.datasets.PPIDataset().load_data()

# We can convert them into TensorFlow format
test_data = [graph.convert_data_to_tensor() for graph in test_data]


# ==================================== Basic OOP API ====================================
# OOP Style GCN (Graph Convolutional Network)
gcn_layer = tfg.layers.GCN(units=20, activation=tf.nn.relu)

for graph in test_data:
    # The cache can speed up GCN by caching the normalized edge information
    outputs = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
    print(outputs)

# OOP Style GAT (Multi-head Graph Attention Network)
gat_layer = tfg.layers.GAT(units=20, activation=tf.nn.relu, num_heads=4)
for graph in test_data:
    outputs = gat_layer([graph.x, graph.edge_index])
    print(outputs)


# ==================================== Basic Functional API ====================================
# Functional Style GCN
# The functional API is more flexible for advanced algorithms
# You can pass both data and parameters to functional APIs
gcn_w = tf.Variable(tf.random.truncated_normal([test_data[0].num_features, 20]))
for graph in test_data:
    outputs = tfg.nn.gcn(graph.x, graph.edge_index, graph.edge_weight, gcn_w, activation=tf.nn.relu)
    print(outputs)


# ==================================== Advanced OOP API ====================================
# All APIs are implemented in Map-Reduce style
# This is a GCN without weight normalization and transformation.
# Create your own GNN layer by subclassing the MapReduceGNN class
class NaiveGCN(tfg.layers.MapReduceGNN):

    def map(self, repeated_x, neighbor_x, edge_weight=None):
        return tfg.nn.identity_mapper(repeated_x, neighbor_x, edge_weight)

    def reduce(self, neighbor_msg, node_index, num_nodes=None):
        return tfg.nn.sum_reducer(neighbor_msg, node_index, num_nodes)

    def update(self, x, reduced_neighbor_msg):
        return tfg.nn.sum_updater(x, reduced_neighbor_msg)


naive_gcn = NaiveGCN()

for graph in test_data:
    print(naive_gcn([graph.x, graph.edge_index, graph.edge_weight]))


# ==================================== Advanced Functional API ====================================
# All APIs are implemented in Map-Reduce style
# This is a GCN without weight normalization and transformation
# Just pass the mapper/reducer/updater functions to the functional API
for graph in test_data:
    outputs = tfg.nn.aggregate_neighbors(
        x=graph.x,
        edge_index=graph.edge_index,
        edge_weight=graph.edge_weight,
        mapper=tfg.nn.identity_mapper,
        reducer=tfg.nn.sum_reducer,
        updater=tfg.nn.sum_updater
    )
    print(outputs)
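The map-reduce style also makes it easy to plug in your own message functions. The sketch below reuses the test_data list loaded in the script above and passes a custom mapper that scales each neighbor message by its edge weight, together with the built-in sum reducer and updater. The weighted_mapper helper is hypothetical and merely assumes the same (repeated_x, neighbor_x, edge_weight) signature as tfg.nn.identity_mapper in the NaiveGCN class above.

# coding=utf-8
import tensorflow as tf
import tf_geometric as tfg

# Hypothetical custom mapper: scale each neighbor's features by the weight of the connecting edge.
# It assumes the (repeated_x, neighbor_x, edge_weight) signature used by NaiveGCN above.
def weighted_mapper(repeated_x, neighbor_x, edge_weight=None):
    if edge_weight is None:
        return neighbor_x
    return neighbor_x * tf.expand_dims(edge_weight, axis=-1)

# test_data is the list of PPI graphs loaded in the script above
for graph in test_data:
    outputs = tfg.nn.aggregate_neighbors(
        x=graph.x,
        edge_index=graph.edge_index,
        edge_weight=graph.edge_weight,
        mapper=weighted_mapper,  # custom message function
        reducer=tfg.nn.sum_reducer,  # built-in sum aggregation
        updater=tfg.nn.sum_updater  # built-in updater that adds the node's own features
    )
    print(outputs)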
