Low-Dimensional Invariant Embeddings for Universal Geometric Learning
Foundations of Computational Mathematics (IF 3) Pub Date: 2024-02-08, DOI: 10.1007/s10208-024-09641-2
Nadav Dym, Steven J. Gortler

This paper studies separating invariants: mappings on \(D\)-dimensional domains which are invariant to an appropriate group action and which separate orbits. The motivation for this study comes from the usefulness of separating invariants in proving universality of equivariant neural network architectures. We observe that in several cases the cardinality of the separating invariants proposed in the machine learning literature is much larger than the dimension \(D\); as a result, the theoretical universal constructions based on these separating invariants are unrealistically large. Our goal in this paper is to resolve this issue. We show that when a continuous family of semi-algebraic separating invariants is available, separation can be obtained by randomly selecting \(2D+1\) of these invariants. We apply this methodology to obtain an efficient scheme for computing separating invariants for several classical group actions which have been studied in the invariant-learning literature. Examples include matrix-multiplication actions on point clouds by permutations, rotations, and various other linear groups. Often the requirement of invariant separation is relaxed and only generic separation is required; in this case, we show that only \(D+1\) invariants are needed. More importantly, generic invariants are often significantly easier to compute, as we illustrate by discussing generic and full separation for weighted graphs. Finally, we outline an approach for proving that separating invariants can also be constructed when the random parameters have finite precision.
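To make the random-selection idea concrete, here is a minimal sketch in Python for the permutation action on point clouds. It assumes a family of sort-based invariants of the form \(f_{a,b}(X) = \langle b, \mathrm{sort}(Xa)\rangle\), which are permutation invariant because sorting discards the row order; the claim illustrated is that \(2D+1\) randomly drawn members of such a family separate orbits almost surely, with \(D = nd\) the ambient dimension. The function names and the particular parametric family are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def sort_invariants(X, params):
    """Evaluate permutation-invariant features f_{a,b}(X) = <b, sort(X a)>.

    X      : (n, d) point cloud; the group S_n acts by permuting rows.
    params : list of (a, b) pairs with a in R^d, b in R^n.
    Sorting X @ a removes the dependence on row order, so each
    feature is invariant under any permutation of the points.
    """
    return np.array([b @ np.sort(X @ a) for a, b in params])

def make_embedding(n, d, rng):
    # Ambient dimension D = n * d; per the paper's result, 2D + 1
    # randomly selected invariants from the family separate orbits
    # almost surely over the choice of parameters.
    D = n * d
    params = [(rng.standard_normal(d), rng.standard_normal(n))
              for _ in range(2 * D + 1)]
    return lambda X: sort_invariants(X, params)

rng = np.random.default_rng(0)
n, d = 5, 3
embed = make_embedding(n, d, rng)

X = rng.standard_normal((n, d))
P = rng.permutation(n)                     # random reordering of the points
print(np.allclose(embed(X), embed(X[P])))  # True: the embedding is invariant
```

For generic (rather than full) separation, the same sketch would use only \(D+1\) random parameter pairs, at the cost of separating only points outside a measure-zero exceptional set.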



Updated: 2024-02-09