Variational Graph Auto-Encoder Based Inductive Learning Method for Semi-Supervised Classification
arXiv - CS - Machine Learning. Pub Date: 2024-03-26, DOI: arxiv-2403.17500
Hanxuan Yang, Zhaoxin Yu, Qingchao Kong, Wei Liu, Wenji Mao

Graph representation learning is a fundamental research problem across many application domains, and the inductive learning setting is particularly challenging because it requires models to generalize to graph structures unseen during inference. In recent years, graph neural networks (GNNs) have emerged as powerful graph models for inductive learning tasks such as node classification, yet they typically rely heavily on annotated nodes under a fully supervised training setting. Compared with GNN-based methods, variational graph auto-encoders (VGAEs) are known to generalize better, as they capture the internal structural information of graphs independently of node labels, and they have achieved prominent performance on multiple unsupervised learning tasks. However, there is still a lack of work leveraging the VGAE framework for inductive learning, owing to the difficulties of training the model in a supervised manner and of avoiding over-fitting to the proximity information of graphs. To address these problems and improve the performance of VGAEs for inductive graph representation learning, in this work we propose the Self-Label Augmented VGAE model. To leverage label information during training, our model takes node labels as one-hot encoded inputs and then performs label reconstruction in model training. To overcome the scarcity of node labels in semi-supervised settings, we further propose the Self-Label Augmentation Method (SLAM), which uses pseudo labels generated by our model, together with a node-wise masking approach, to enrich the label information. Experiments on benchmark inductive learning graph datasets verify that our proposed model achieves promising results on node classification, with particular superiority under semi-supervised learning settings.
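The two label-handling ideas in the abstract (node-wise masking of known labels for reconstruction, and adopting high-confidence pseudo labels for unlabeled nodes) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `slam_augment`, the thresholds, and the NumPy formulation are assumptions made for illustration only.

```python
import numpy as np

def slam_augment(one_hot_labels, labeled_mask, pred_probs,
                 mask_rate=0.3, confidence=0.9, rng=None):
    """Illustrative self-label augmentation step.

    1) Node-wise masking: hide a random subset of known labels so the
       model must reconstruct them during training.
    2) Pseudo-labelling: adopt high-confidence model predictions as
       labels for unlabeled nodes.

    Returns the augmented one-hot label input and a boolean mask of
    nodes whose labels are visible to the model.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n, c = one_hot_labels.shape
    x = one_hot_labels.astype(float).copy()
    visible = labeled_mask.copy()

    # Step 1: randomly mask a fraction of the labeled nodes.
    drop = labeled_mask & (rng.random(n) < mask_rate)
    x[drop] = 0.0
    visible[drop] = False

    # Step 2: pseudo-label unlabeled nodes the model is confident about.
    conf = pred_probs.max(axis=1)
    pseudo = (~labeled_mask) & (conf >= confidence)
    x[pseudo] = np.eye(c)[pred_probs.argmax(axis=1)[pseudo]]
    visible[pseudo] = True
    return x, visible
```

In a training loop this would be re-applied each epoch, so the model sees different masked subsets and the pseudo-label set grows as predictions sharpen; the exact schedule in the paper may differ.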

Updated: 2024-03-27