Published by De Gruyter, August 22, 2023

The deal.II Library, Version 9.5

  • Daniel Arndt, Wolfgang Bangerth, Maximilian Bergbauer, Marco Feder, Marc Fehling, Johannes Heinz, Timo Heister, Luca Heltai, Martin Kronbichler, Matthias Maier, Peter Munch, Jean-Paul Pelteret, Bruno Turcksin, David Wells, and Stefano Zampini

Abstract

This paper provides an overview of the new features of the finite element library deal.II, version 9.5.

Mathematics Subject Classification: 65M60; 65N30; 65Y05

Funding statement: deal.II and its developers are financially supported through a variety of funding sources:

  • D. Arndt and B. Turcksin: Research sponsored by the Laboratory Directed Research and Development Program of Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy.
  • W. Bangerth and T. Heister were partially supported by the Computational Infrastructure for Geodynamics initiative (CIG), through the National Science Foundation (NSF) under Awards No. EAR-1550901 and EAR-2149126 via The University of California – Davis.
  • W. Bangerth and M. Fehling were partially supported by Award OAC-1835673 as part of the Cyberinfrastructure for Sustained Scientific Innovation (CSSI) program.
  • W. Bangerth was also partially supported by Awards DMS-1821210 and EAR-1925595.
  • M. Bergbauer was supported by the German Research Foundation (DFG) under the project “High-Performance Cut Discontinuous Galerkin Methods for Flow Problems and Surface-Coupled Multiphysics Problems”, Grant Agreement No. 456365667.
  • J. Heinz was supported by the European Union’s Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska–Curie Grant Agreement No. [812719].
  • T. Heister was also partially supported by NSF Awards OAC-2015848, DMS-2028346, and EAR-1925575.
  • L. Heltai and M. Feder were partially supported by the Italian Ministry of University and Research (MUR), under the grant MUR PRIN 2022 No. 2022WKWZA8 “Immersed methods for multiscale and multiphysics problems (IMMEDIATE)”.
  • M. Kronbichler and P. Munch were partially supported by the German Ministry of Education and Research, project “PDExa: Optimized software methods for solving partial differential equations on exascale supercomputers”, and by the Bayerisches Kompetenznetzwerk für Technisch-Wissenschaftliches Hoch- und Höchstleistungsrechnen (KONWIHR), projects “High-order matrix-free finite element implementations with hybrid parallelization and improved data locality” and “Fast and scalable finite element algorithms for coupled multiphysics problems and non-matching grids”.
  • M. Maier was partially supported by NSF Award DMS-2045636 and by the Air Force Office of Scientific Research under grant/contract number FA9550-23-1-0007.
  • D. Wells was supported by NSF Award OAC-1931516.
  • S. Zampini was supported by the KAUST Extreme Computing Research Center.
  • Clemson University is acknowledged for the generous allotment of compute time on the Palmetto cluster.
  • The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper: http://www.tacc.utexas.edu
  • This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575, accessed through the CIG Science Gateway and Community Codes for the Geodynamics Community MCA08X011 allocation.

Received: 2023-07-29
Accepted: 2023-08-01
Published Online: 2023-08-22
Published in Print: 2023-09-07

© 2023 Walter de Gruyter GmbH, Berlin/Boston
