Scaling the Maximum Flow Computation on GPUs
International Journal of Parallel Programming (IF 1.5) Pub Date: 2022-11-15, DOI: 10.1007/s10766-022-00740-7
Jash Khatri , Arihant Samar , Bikash Behera , Rupesh Nasre

Maximum flow is one of the fundamental problems in graph theory, with several applications such as bipartite matching, image segmentation, disjoint paths, and network connectivity. Goldberg–Tarjan's well-known Push Relabel (PR) algorithm computes the maximum s–t (source–target) flow on a directed weighted graph. The PR algorithm has been effectively parallelized on GPUs. However, even with the GPU-parallel PR algorithm, computing the maximum flow remains time-consuming for large graphs. For error-tolerant applications of maximum flow, computing an approximate maximum flow value is sufficient. In this work, we propose multiple techniques for improving the performance of the push-relabel algorithm on GPUs, with such error-tolerant applications in mind. Our techniques improve performance by carefully reducing the impact of the particular property that hampers the GPU-parallel PR algorithm, and they provide tunable knobs to control the amount of approximation introduced and the corresponding performance achieved. Finally, we propose the Pull Relabel algorithm, the natural symmetric counterpart of the Push Relabel algorithm. We further combine both algorithms into a Pull-Push Relabel maxflow algorithm and analyze its behavior on dynamically changing graphs. We demonstrate the effectiveness of the proposed algorithm and techniques on several real-world and synthetic graphs from the DIMACS Challenge, SNAP, Konect, and the Network Repository, along with three maximum flow applications (Maximum Bipartite Matching, Team Elimination Problem, and Supply–Demand Problem). The proposals achieve 1.05\(\times\) to 94.83\(\times\) speedup over the exact GPU-parallel push-relabel algorithm, and 14.29\(\times\), 40.40\(\times\), and 32.41\(\times\) speedups on the three applications, respectively.
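For readers unfamiliar with the baseline algorithm the paper builds on, the following is a minimal sequential sketch of the generic push-relabel method, not the authors' GPU-parallel implementation; the graph representation, function name, and the bipartite-matching demo at the bottom are illustrative only.

```python
from collections import defaultdict

def push_relabel_max_flow(n, edges, s, t):
    """Generic (sequential) push-relabel maximum flow sketch.

    n     : number of vertices, labelled 0..n-1
    edges : iterable of (u, v, capacity) tuples for directed edges
    s, t  : source and sink vertices
    Returns the maximum s-t flow value.
    """
    # Residual capacities; parallel edges are merged additively.
    cap = defaultdict(int)
    adj = [set() for _ in range(n)]
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # reverse residual edge

    height = [0] * n
    excess = [0] * n
    height[s] = n  # the source starts at height n

    def push(u, v):
        # Move as much excess as the residual edge (u, v) allows.
        delta = min(excess[u], cap[(u, v)])
        cap[(u, v)] -= delta
        cap[(v, u)] += delta
        excess[u] -= delta
        excess[v] += delta

    # Saturate all edges leaving the source.
    excess[s] = sum(cap[(s, v)] for v in adj[s])
    for v in list(adj[s]):
        if cap[(s, v)] > 0:
            push(s, v)

    # Repeatedly discharge any active vertex (excess > 0, not s or t).
    active = [u for u in range(n) if u not in (s, t) and excess[u] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v in adj[u]:
                # Admissible edge: residual capacity and exactly one level downhill.
                if cap[(u, v)] > 0 and height[u] == height[v] + 1:
                    push(u, v)
                    if v not in (s, t) and excess[v] > 0 and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                # Relabel: lift u just above its lowest residual neighbour.
                height[u] = 1 + min(height[v] for v in adj[u] if cap[(u, v)] > 0)
    return excess[t]

if __name__ == "__main__":
    # Toy instance of the Maximum Bipartite Matching reduction: unit-capacity
    # edges from a source (4) to the left side {0, 1}, across the bipartite
    # graph, and from the right side {2, 3} to a sink (5).
    edges = [(4, 0, 1), (4, 1, 1), (0, 2, 1), (0, 3, 1), (1, 2, 1),
             (2, 5, 1), (3, 5, 1)]
    print(push_relabel_max_flow(6, edges, 4, 5))  # maximum matching size: 2
```

The GPU-parallel PR algorithm studied in the paper discharges many active vertices concurrently; the sequential sketch above only conveys the push and relabel operations that the proposed approximation techniques and the Pull Relabel counterpart build upon.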



Updated: 2022-11-15