Asymmetric Learning for Graph Neural Network based Link Prediction
ACM Transactions on Knowledge Discovery from Data (IF 3.6). Pub Date: 2024-02-27. DOI: 10.1145/3640347
Kai-Lang Yao, Wu-Jun Li

Link prediction is a fundamental problem in many graph-based applications, such as protein-protein interaction prediction. Recently, graph neural networks (GNNs) have been widely used for link prediction. However, existing GNN-based link prediction (GNN-LP) methods suffer from a scalability problem during training on large-scale graphs, which has received little attention from researchers. In this paper, we first analyze the computational complexity of existing GNN-LP methods, revealing that one cause of the scalability problem is their symmetric learning strategy, which applies the same class of GNN models to learn representations for both head nodes and tail nodes. We then propose a novel method, called asymmetric learning (AML), for GNN-LP. More specifically, AML applies a GNN model to learn head-node representations while applying a multi-layer perceptron (MLP) model to learn tail-node representations. To the best of our knowledge, AML is the first GNN-LP method to adopt an asymmetric learning strategy for node representation learning. Furthermore, we design a novel model architecture and apply a row-wise mini-batch sampling strategy to ensure promising model accuracy and training efficiency for AML. Experiments on three real large-scale datasets show that AML is 1.7×–7.3× faster in training than baselines with a symmetric learning strategy, while having almost no accuracy loss.
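The core idea described above can be illustrated with a minimal sketch: encode head nodes with a graph-propagation layer (which must touch neighbors) and tail nodes with a plain per-node MLP (which needs no graph structure), then score a candidate link by an inner product. This is only a toy NumPy illustration of the asymmetric encoding strategy, not the authors' actual AML architecture; all names (`head_repr`, `tail_repr`, `link_score`), the toy graph, and the untrained random weights are assumptions for exposition, and the paper's row-wise mini-batch sampling and training procedure are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes with 4-dim features; hidden dim 8.
num_nodes, feat_dim, hid_dim = 6, 4, 8
X = rng.normal(size=(num_nodes, feat_dim))

# Adjacency with self-loops, row-normalized (simple GCN-style propagation).
A = np.eye(num_nodes)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[u, v] = A[v, u] = 1.0
A = A / A.sum(axis=1, keepdims=True)

# Asymmetric encoders (randomly initialized; training omitted).
W_gnn = rng.normal(size=(feat_dim, hid_dim))
W_mlp = rng.normal(size=(feat_dim, hid_dim))

def head_repr(X, A):
    """GNN-style head encoder: aggregate neighbors, then transform (ReLU)."""
    return np.maximum(A @ X @ W_gnn, 0.0)

def tail_repr(X):
    """MLP tail encoder: per-node transform only, no neighbor aggregation."""
    return np.maximum(X @ W_mlp, 0.0)

H, T = head_repr(X, A), tail_repr(X)

def link_score(u, v):
    """Score a directed candidate link u -> v by inner product."""
    return float(H[u] @ T[v])

print(link_score(0, 1))
```

The efficiency intuition is visible even in this sketch: `tail_repr` costs one matrix multiply per node with no dependence on the graph, so tail representations avoid the neighbor-expansion cost that dominates symmetric GNN-LP training.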




Updated: 2024-03-01