Lattice enumeration via linear programming

  • Published in: Numerische Mathematik

Abstract

Given a positive integer d and \({{\varvec{a}}}_{1},\dots ,{{\varvec{a}}}_{r}\) a family of vectors in \({{\mathbb {R}}}^d\), \(\{k_1{{\varvec{a}}}_{1}+\dots +k_r{{\varvec{a}}}_{r}: k_1,\dots ,k_r \in {{\mathbb {Z}}}\}\subset {{\mathbb {R}}}^d\) is the so-called lattice generated by the family. In high dimensional integration, prescribed lattices are used for constructing reliable quadrature schemes. The quadrature points are the lattice points lying on the integration domain, typically the unit hypercube \([0,1)^d\) or a rescaled shifted hypercube. It is crucial to have a cost-effective method for enumerating lattice points within such domains. Undeniably, the lack of such fast enumeration procedures hinders the applicability of lattice rules. Existing enumeration procedures exploit intrinsic properties of the lattice at hand, such as periodicity, orthogonality, recurrences, etc. In this paper, we unveil a general-purpose fast lattice enumeration algorithm based on linear programming (named FLE-LP).

Algorithm 1
Algorithm 2
Fig. 1


Notes

  1. The number \(\textrm{Nm}(\cdot )\) is a lattice invariant; it does not depend on the generating matrix \({{\varvec{A}}}\).

  2. \({{{\varvec{A}}}}'\) and \({{{\varvec{A}}}}''\) are not necessarily sub-matrices of \({{\varvec{A}}}\), and the number of rows is irrelevant.

  3. Note that the entries of the diagonal matrices \({{\varvec{D}}}_j\) are \(>0\) for every j.

  4. For example, the procedure returned \({{\underline{k}}}_{j+1}=p+1\) but \({{\overline{k}}}_{j+1}=p\) for some \(p\in {{\mathbb {Z}}}\), or returned nothing.

  5. In exact arithmetic, the programs are always feasible and bounded; see Sect. 4.

  6. Vertical chaining, as opposed to the horizontal chaining seen later.

  7. Dead ends are formalized in the complexity analysis; cf. Sect. 1.3.

  8. If this arises while solving for one, there is no need to check for the other.

  9. Otherwise the program is infeasible.

  10. Horizontal chaining.

  11. These tableaux data are implemented as global variables.

  12. The code is publicly available at https://github.com/Belloliva/SLE-LP.

References

  1. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  2. Bremner, M.R.: Lattice Basis Reduction: An Introduction to the LLL Algorithm and Its Applications. CRC Press, Boca Raton (2011)

  3. Bungartz, H.J., Griebel, M.: Sparse grids. Acta Numer. 13, 147–269 (2004)

  4. Chkifa, M.A.: On generation and enumeration of orthogonal Chebyshev–Frolov lattices. Hiroshima Math. J. 52(2), 235–253 (2022)

  5. Diamond, S., Boyd, S.: CVXPY: a Python-embedded modeling language for convex optimization. J. Mach. Learn. Res. 17(1), 2909–2913 (2016)

  6. Dick, J., Kuo, F.Y., Sloan, I.H.: High-dimensional integration: the quasi-Monte Carlo way. Acta Numer. 22, 133–288 (2013)

  7. Dick, J., Pillichshammer, F., Suzuki, K., Ullrich, M., Yoshiki, T.: Lattice-based integration algorithms: Kronecker sequences and rank-1 lattices. Ann. Mat. Pura Appl. 197(1), 109–126 (2018)

  8. Frolov, K.K.: Upper error bounds for quadrature formulas on function classes. Dokl. Akad. Nauk SSSR 231, 818–821 (1976)

  9. Gruber, P.M.: Convex and Discrete Geometry. Springer, Berlin (2007)

  10. Kacwin, C.: Realization of the Frolov cubature formula via orthogonal Chebyshev–Frolov lattices. Master's thesis, Mathematisch-Naturwissenschaftliche Fakultät der Rheinischen Friedrich-Wilhelms-Universität, Bonn (2016)

  11. Kacwin, C., Oettershagen, J., Ullrich, M., Ullrich, T.: Numerical performance of optimized Frolov lattices in tensor product reproducing kernel Sobolev spaces. Found. Comput. Math. 21(3), 849–889 (2021)

  12. Kacwin, C., Oettershagen, J., Ullrich, T.: On the orthogonality of the Chebyshev–Frolov lattice and applications. Monatsh. Math. 184(3), 425–441 (2017)

  13. Nguyen, V.K., Ullrich, M., Ullrich, T.: Change of variable in spaces of mixed smoothness and numerical integration of multivariate functions on the unit cube. Constr. Approx. 46, 69–108 (2017)

  14. Potra, F.A., Wright, S.J.: Interior-point methods. J. Comput. Appl. Math. 124(1), 281–302 (2000)

  15. Sloan, I.H., Joe, S.: Lattice Methods for Multiple Integration. Oxford Science Publications, Oxford (1994)

  16. Sullivan, T.J.: Introduction to Uncertainty Quantification. Springer, Berlin (2015)

  17. Suzuki, K., Yoshiki, T.: Enumeration of the Chebyshev–Frolov lattice points in axis-parallel boxes. Hiroshima Math. J. 49(1), 139–159 (2019)

  18. Temlyakov, V.N.: Approximation of Periodic Functions. Nova Science, New York (1994)

  19. Temlyakov, V.N.: Cubature formulas, discrepancy, and nonlinear approximation. J. Complex. 19(3), 352–391 (2003)

  20. Ullrich, M.: On upper error bounds for quadrature formulas on function classes by K.K. Frolov. In: Cools, R., Nuyens, D. (eds.) Monte Carlo and Quasi-Monte Carlo Methods, pp. 571–582. Springer International Publishing, Cham (2016)

  21. Ullrich, M., Ullrich, T.: The role of Frolov's cubature formula for functions with bounded mixed derivative. SIAM J. Numer. Anal. 54(2), 969–993 (2017)

  22. Vanderbei, R.J.: Linear Programming: Foundations and Extensions. Springer, Berlin (2004)


Acknowledgements

I would like to thank the anonymous referees for the meticulous reviewing and insightful feedback provided on the preliminary version of the paper. Their attention to detail, thoroughness, and comprehensive comments have proven invaluable in shaping the final version.

Author information

Correspondence to Moulay Abdellah Chkifa.


Appendices

Appendix A: Simplex tableaux batch initialization

We are given a \(d\times d \) nonsingular matrix \({{\varvec{U}}}=(u_{i,j})_{1\le i,j\le d}\). We apply a “bottom-up constrained” Gaussian elimination to \({{\varvec{U}}}\) in d iterations, whose elementary row operations are

  • swap two rows,

  • multiply a row by a non-zero scalar,

  • add to one row a non-zero scalar multiple of another,

  • at iteration k, only the \({k \, \textrm{trailing}}\) rows can be operated upon.

The first three operations are the standard elementary row operations. The first operation is crossed out on purpose, since row swaps are not allowed here. The fourth operation enforces bottom-up elimination, for reasons discussed shortly. Our goal is that after executing the instructions of iteration k, the \(k\times d\) matrix consisting of the k trailing rows of \({{\varvec{U}}}\) has, up to a permutation of columns, a \(k\times k\) identity sub-matrix. Basically, we produce nested subsets \(\{j_d,\dots ,j_{d-k+1}\}\) of \(\{1,\dots ,d\}\) for \(k=1,2,\dots \) such that after executing iteration k, the columns of \({{\varvec{U}}}\) indexed by \(j_d,\dots ,j_{d-k+1}\) coincide respectively with the unit vectors \({{\varvec{e}}}_d,\dots ,{{\varvec{e}}}_{d-k+1}\) of \({{\mathbb {R}}}^d\) on the k trailing coordinates.

For the sake of clarity, we denote the rows of \({{\varvec{U}}}\) by \(L_1,\dots ,L_d\). The goal we have set is achieved using the following algorithm:

  • Iteration 1:

    • find \(j_d\) the lowest index j s.t \(u_{d,j} \ne 0\) and do \(L_d \leftarrow L_d/u_{d,j_d}\);

  • ...

  • Iteration k: we have \(\{j_{d},\dots ,j_{l+1}\}\) already computed with \(l = d-k+1\), then

    • do \(L_{l} \leftarrow L_{l} -\sum _{m=l+1}^{d} u_{l,j_m} L_{m}\);

    • find \(j_l\) the lowest index j s.t \(u_{l,j} \ne 0\) and do \(L_l \leftarrow L_l/u_{l,j_l}\);

    • do \(L_{m} \leftarrow L_{m} - u_{m,j_l} L_{l} \) for \(m=l+1,\dots ,d\);

It is easily checked that this reduction process is well defined (since \(\textrm{rank}( {{\varvec{U}}})=d\)), is unique (since we choose the lowest j), and achieves the desired goal.
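To make the iterations concrete, here is a minimal NumPy sketch of the bottom-up constrained elimination described above. The function name, the 0-based indexing, and the pivot tolerance are our own choices; the paper's published implementation may differ.

```python
import numpy as np

def bottom_up_elimination(U, tol=1e-12):
    """Bottom-up constrained Gaussian elimination without row swaps.

    After iteration k, the k trailing rows of U contain, up to a column
    permutation, a k x k identity sub-matrix.  Returns the reduced matrix
    and the pivot columns [j_d, ..., j_1] as 0-based indices.
    Assumes U is square and nonsingular, so a pivot always exists.
    """
    U = np.asarray(U, dtype=float).copy()
    d = U.shape[0]
    pivot = {}  # 0-based row index -> its pivot column j
    for k in range(1, d + 1):
        l = d - k  # 0-based index of the row treated at iteration k
        # L_l <- L_l - sum_m u_{l, j_m} L_m over the already-reduced rows
        for m in range(l + 1, d):
            U[l] -= U[l, pivot[m]] * U[m]
        # pivot: lowest column index with a nonzero entry in row l
        j = next(j for j in range(d) if abs(U[l, j]) > tol)
        U[l] /= U[l, j]
        pivot[l] = j
        # eliminate column j from the trailing rows
        for m in range(l + 1, d):
            U[m] -= U[m, j] * U[l]
    return U, [pivot[r] for r in range(d - 1, -1, -1)]
```

Note that the trailing rows already carry zeros in each other's pivot columns, so applying the subtractions sequentially gives the same result as the simultaneous update written in the algorithm.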

At iteration k, we can simultaneously perform the prescribed instructions and provide a basic solution to \({{\varvec{Q}}}{{\varvec{y}}}={{\varvec{e}}}_1\) where \({{\varvec{Q}}}\) is the \(k\times d\) matrix of k trailing rows of (original) \({{\varvec{U}}}\) and \({{\varvec{e}}}_1\) considered in \({{\mathbb {R}}}^{k}\). To be more precise, up to iteration k we only operate on rows \(l+1,\dots ,d\) (with \(l=d-k+1\)) of \({{\varvec{U}}}\), hence \({{\varvec{Q}}}{{\varvec{y}}}= {{\varvec{e}}}_1\) where \( {{\varvec{Q}}}\) is the matrix of k trailing rows of (current) \({{\varvec{U}}}\). We then apply instructions of iteration k while reflecting them to the right hand side \({{\varvec{e}}}_1\), thus obtaining a system \({{\varvec{Q}}}^{(k)} {{\varvec{y}}}= {{\varvec{c}}}^{(k)}\). The columns of \({{\varvec{Q}}}^{(k)}\) that are indexed in \({{\mathcal {B}}}_k=\{\pi (1),\dots , \pi (k)\} \subset \{1,\dots ,d\}\) where \( \pi (i):= j_{(l-1)+i}\), are equal to unit vectors \({{\varvec{e}}}_1,\dots ,{{\varvec{e}}}_{k}\) of \({{\mathbb {R}}}^{k}\).

At iteration k, the above also yields the equivalence

$$\begin{aligned} {{\varvec{L}}}{{\varvec{w}}}= {{\varvec{c}}}\quad \quad \iff \quad \quad {{\varvec{L}}}^{(k)}{{\varvec{w}}}= {{\varvec{c}}}^{(k)}, \end{aligned}$$
(51)

where \({{\varvec{L}}}= ({{\varvec{Q}}}~|- {{\varvec{Q}}})\) and \({{\varvec{L}}}^{(k)}= ({{\varvec{Q}}}^{(k)}~|- {{\varvec{Q}}}^{(k)} )\) in \({{\mathbb {R}}}^{k\times (2d)}\) and \({{\varvec{c}}}= {{\varvec{e}}}_1\) considered in \({{\mathbb {R}}}^k\). We associate to \({{\varvec{c}}}^{(k)}\) the subsets \({\underline{{{\mathcal {B}}}}}_k=\{{\underline{\pi }}(1),\dots ,{\underline{\pi }}(k) \}\), \({\overline{{{\mathcal {B}}}}}_k=\{{\overline{\pi }}(1),\dots ,{\overline{\pi }}(k) \}\) of \(\{1,\dots ,2d\}\), and the \(k\times k\) diagonal matrix \({{\varvec{S}}}=(s_{i,j})\) of signs according to: for \(i=1,\dots ,k\)

$$\begin{aligned} \begin{array}{llll} {\underline{\pi }}(i) = \pi (i)+d,&{} \quad {\overline{\pi }}(i) = \pi (i),&{} \quad s_{i,i}=+1,&{} \quad \text{ if } \quad ({{\varvec{c}}}^{(k)})_i>0,\\ {\underline{\pi }}(i) =\pi (i),&{} \quad {\overline{\pi }}(i) = \pi (i)+d,&{} \quad s_{i,i}=-1&{} \quad \text{ if } \quad ({{\varvec{c}}}^{(k)})_i \le 0. \end{array} \end{aligned}$$
(52)

The columns of \(-{{\varvec{S}}}\times {{\varvec{L}}}^{(k)}\) that are indexed in \({\underline{{{\mathcal {B}}}}}_k\) and the columns of \({{\varvec{S}}}\times {{\varvec{L}}}^{(k)}\) that are indexed in \({\overline{{{\mathcal {B}}}}}_k\) are equal to the unit vectors \({{\varvec{e}}}_1,\dots ,{{\varvec{e}}}_{k}\) of \({{\mathbb {R}}}^{k}\). Moreover \({{\varvec{S}}}\times {{\varvec{c}}}^{(k)} = |{{\varvec{c}}}^{(k)}|\), where \(|\cdot |\) denotes the coordinate-wise absolute value. Finally, we reformulate the equivalence above as

$$\begin{aligned} \begin{array}{lcr} {{\varvec{L}}}{{\varvec{w}}}= -{{\varvec{c}}}\quad \quad &{}\iff &{} \quad \quad -{{\varvec{S}}}{{\varvec{L}}}^{(k)} {{\varvec{w}}}= |{{\varvec{c}}}^{(k)}|, \\ {{\varvec{L}}}{{\varvec{w}}}= {{\varvec{c}}}\quad \quad &{}\iff &{} \quad \quad {{\varvec{S}}}{{\varvec{L}}}^{(k)} {{\varvec{w}}}= |{{\varvec{c}}}^{(k)}|. \\ \end{array} \end{aligned}$$
(53)

We have thus derived initial simplex tableaux for both systems \( {{\varvec{L}}}{{\varvec{w}}}= -{{\varvec{c}}}\) and \({{\varvec{L}}}{{\varvec{w}}}= {{\varvec{c}}}\).

Given a \(d\times d \) nonsingular matrix \({{\varvec{A}}}=(a_{i,j})_{1\le i,j\le d}\), we apply the above to the matrix \({{\varvec{U}}}= {{\varvec{A}}}^\top \). By storing (taking snapshots of) the two obtained tableaux at every iteration \(k=1,2,\dots ,d\), we fulfill the goal of gathering the simplex tableaux for \(j=d-1,d-2,\dots ,0\) needed to initiate the simplex tableau algorithm for the programs in (34) and in (36) (\(j=0\)).
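The batch initialization can be sketched by running the elimination on \({{\varvec{U}}}= {{\varvec{A}}}^\top\) while propagating the right-hand side, recording at each iteration k the transformed vector \({{\varvec{c}}}^{(k)}\) and the sign diagonal of (52). This is an illustrative sketch under our own naming conventions, not the paper's published code.

```python
import numpy as np

def batch_tableau_rhs(A, tol=1e-12):
    """Bottom-up elimination on U = A^T; at every iteration k, record the
    transformed RHS c^(k) of the system Q y = e_1 (Q = k trailing rows of
    the original U) and the sign diagonal s_{i,i} from (52)."""
    U = np.asarray(A, dtype=float).T.copy()
    d = U.shape[0]
    pivot = {}
    snapshots = []          # one (c^(k), signs) pair per iteration
    c = np.zeros(d)         # c[l:] holds the current RHS over trailing rows
    for k in range(1, d + 1):
        l = d - k
        c[l] = 1.0          # fresh RHS e_1 in R^k: 1 on row l, 0 below
        c[l + 1:] = 0.0
        for m in range(l + 1, d):
            U[l] -= U[l, pivot[m]] * U[m]
        j = next(j for j in range(d) if abs(U[l, j]) > tol)
        c[l] /= U[l, j]     # mirror the row operations on the RHS
        U[l] /= U[l, j]
        pivot[l] = j
        for m in range(l + 1, d):
            c[m] -= U[m, j] * c[l]
            U[m] -= U[m, j] * U[l]
        ck = c[l:].copy()
        signs = np.where(ck > 0, 1.0, -1.0)  # s_{i,i} = +1 iff (c^(k))_i > 0
        snapshots.append((ck, signs))
    return snapshots
```

Each snapshot pairs with the corresponding columns of \(\pm {{\varvec{S}}}{{\varvec{L}}}^{(k)}\) to seed the two tableaux of (53).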

Appendix B: Lattice enumeration in general axis-parallel boxes

Let \({{\varvec{A}}}\in {{\mathbb {R}}}^{d \times r}\) be a general matrix with columns \({{\varvec{a}}}_{1},\dots ,{{\varvec{a}}}_{r} \in {{\mathbb {R}}}^d\), and consider the associated lattice \({{\varvec{A}}}{{\mathbb {Z}}}^r:=\{k_1{{\varvec{a}}}_{1}+\dots +k_r{{\varvec{a}}}_{r}: k_1,\dots ,k_r \in {{\mathbb {Z}}}\}\subset {{\mathbb {R}}}^d\). Of course, we may assume \({{\varvec{a}}}_{j} \ne {{\varvec{0}}}\) for all j. The enumeration of the lattice \({{\varvec{A}}}{{\mathbb {Z}}}^r\) in axis-parallel boxes \([{{\varvec{u}}},{{\varvec{v}}}]\) with \({{\varvec{u}}},{{\varvec{v}}}\in {{\mathbb {R}}}^d\) can be investigated as done in the paper. However, we first observe the following.

Lemma 1

We assume \(u_i<v_i\) for all \(1\le i\le d\). If \({{\varvec{a}}}_{1},\dots ,{{\varvec{a}}}_{r}\) are linearly dependent and the interior of hypercube \([{{\varvec{u}}},{{\varvec{v}}}]\) contains at least one lattice point \({{\varvec{x}}}\in {{\varvec{A}}}{{\mathbb {Z}}}^r\), then \([{{\varvec{u}}},{{\varvec{v}}}]\) contains an infinite number of lattice points.

Proof

There exist \(\lambda _1,\dots ,\lambda _r\), not all zero, such that \(\sum \lambda _j {{\varvec{a}}}_{j}=0\). Up to permuting the columns and normalizing the \(\lambda _j\), we may assume \(\lambda _1=-1\). Hence \(k_1 {{\varvec{a}}}_{1} + \sum _{j=2}^r k_j {{\varvec{a}}}_{j} = \sum _{j=2}^r (k_1 \lambda _j + k_j) {{\varvec{a}}}_{j} \) for any \(k_1,\dots ,k_r \in {{\mathbb {Z}}}\). By simultaneous Diophantine approximation, see e.g. [9], there exist infinitely many \(k_1\ne 0\) and \(k_2,\dots ,k_r\) in \({{\mathbb {Z}}}\) s.t.

$$\begin{aligned} \big | k_1 \lambda _j + k_{j} \big | \le |k_1|^{-1/(r-1)},\quad \quad j=2,\dots ,r. \end{aligned}$$

This implies that \(\Vert {{\varvec{A}}}{{\varvec{k}}}\Vert _\infty \le C |k_1|^{-1/(r-1)}\) with \(C:= \sum _{j=2}^r \Vert {{\varvec{a}}}_{j} \Vert _\infty \). Therefore, there is an infinite number of lattice points \({{\varvec{A}}}{{\varvec{k}}}\) arbitrarily near \({{\varvec{0}}}\). The same holds arbitrarily near the lattice point \({{\varvec{x}}}\) which is interior to \([{{\varvec{u}}},{{\varvec{v}}}]\). The proof is complete. \(\square \)
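The mechanism of the proof can be illustrated numerically. With the linearly dependent columns \({{\varvec{a}}}_1 = \sqrt{2}\,(1,1)\) and \({{\varvec{a}}}_2 = (1,1)\) (our own choice, not from the paper), the lattice point \(k_1 {{\varvec{a}}}_1 + k_2 {{\varvec{a}}}_2 = (k_1\sqrt{2}+k_2)(1,1)\) is driven arbitrarily close to \({{\varvec{0}}}\) by taking \(k_1=q\), \(k_2=-p\) for continued-fraction convergents p/q of \(\sqrt{2}\):

```python
import math

def near_zero_lattice_points(n):
    """Return (k1, k2, ||A k||_inf-style error) for n lattice points
    k1*a1 + k2*a2 with a1 = sqrt(2)*(1,1), a2 = (1,1), using the
    convergents p/q = 1/1, 3/2, 7/5, 17/12, ... of sqrt(2)."""
    p, q = 1, 1
    pts = []
    for _ in range(n):
        # the point equals (q*sqrt(2) - p) * (1, 1); record its size
        pts.append((q, -p, abs(q * math.sqrt(2) - p)))
        p, q = p + 2 * q, p + q  # recurrence for the next convergent
    return pts
```

The recorded errors decrease steadily, exhibiting infinitely many distinct lattice points in any neighborhood of the origin, as the lemma asserts.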

The setting where \({{\varvec{a}}}_1,\dots ,{{\varvec{a}}}_{r}\) are linearly dependent can therefore be ignored for enumeration purposes. We assume that \(r\le d\) and \({{\varvec{a}}}_{1},\dots ,{{\varvec{a}}}_{r}\) are linearly independent, in which case r is the rank of \({{\varvec{A}}}\). The whole analysis of Sect. 4 holds with \(-{{\varvec{1}}}\) and \(+{{\varvec{1}}}\) replaced by \({{\varvec{u}}}\) and \({{\varvec{v}}}\) respectively. The pairs of interval endpoints \({{\underline{\alpha }}}_1\), \({{\overline{\alpha }}}_1\) and \({{\underline{\alpha }}}(\cdot )\), \({{\overline{\alpha }}}(\cdot )\), optimal values of linear programs in (30) and (31), are now replaced by

$$\begin{aligned}&{{\underline{\alpha }}}_1:=\inf \limits _{{{\varvec{u}}}\le {{\varvec{A}}}x \le {{\varvec{v}}}} x_{1}, \quad \quad \quad \quad {{\overline{\alpha }}}_1:=\sup \limits _{{{\varvec{u}}}\le {{\varvec{A}}}x \le {{\varvec{v}}}} x_{1}, \end{aligned}$$
(54)
$$\begin{aligned}&{{\underline{\alpha }}}({{\varvec{a}}}):= \inf \limits _{\begin{array}{c} {{\varvec{u}}}\le {{\varvec{A}}}x \le {{\varvec{v}}}\\ {{\varvec{x}}}_{[j]} = {{\varvec{a}}} \end{array}} x_{j+1}, \quad \quad \quad \quad {{\overline{\alpha }}}({{\varvec{a}}}):= \sup \limits _{\begin{array}{c} {{\varvec{u}}}\le {{\varvec{A}}}x \le {{\varvec{v}}}\\ {{\varvec{x}}}_{[j]} = {{\varvec{a}}} \end{array}} x_{j+1}, \end{aligned}$$
(55)

where \(j=1,\dots ,r-1\).

In the present case, the Moore-Penrose inverse \({{\varvec{A}}}^{\dagger }=({{\varvec{A}}}^{\top } {{\varvec{A}}})^{-1} {{\varvec{A}}}^{\top }\) is a left inverse of \({{\varvec{A}}}\). We use \({{\varvec{B}}}= ({{\varvec{A}}}^{\dagger })^{\top } \in {{\mathbb {R}}}^{d\times r}\) and \({{\varvec{B}}}_{[j]}\in {{\mathbb {R}}}^{d\times j}\) the submatrix of leading j columns of \({{\varvec{B}}}\). By letting \({{\varvec{A}}}{{\varvec{x}}}= {{\varvec{y}}}\) (\(\Longrightarrow \) \({{\varvec{x}}}= {{\varvec{B}}}^{\top }{{\varvec{y}}}\)) above, we obtain that \({{\underline{\beta }}}_1 \le {{\underline{\alpha }}}_1 \le {{\overline{\alpha }}}_1 \le {{\overline{\beta }}}_1\) and \({{\underline{\beta }}}({{\varvec{a}}}) \le {{\underline{\alpha }}}({{\varvec{a}}}) \le {{\overline{\alpha }}}({{\varvec{a}}}) \le {{\overline{\beta }}}({{\varvec{a}}})\) for any \({{\varvec{a}}}\in {{\mathbb {R}}}^j\), where

$$\begin{aligned}&{{\underline{\beta }}}_1:=\inf \limits _{{{\varvec{u}}}\le {{\varvec{y}}}\le {{\varvec{v}}}} {{\varvec{b}}}_{1}^\top {{\varvec{y}}}, \quad \quad \quad \quad {{\overline{\beta }}}_1:=\sup \limits _{{{\varvec{u}}}\le {{\varvec{y}}}\le {{\varvec{v}}}} {{\varvec{b}}}_{1}^\top {{\varvec{y}}}, \end{aligned}$$
(56)
$$\begin{aligned}&{{\underline{\beta }}}({{\varvec{a}}}):= \inf \limits _{\begin{array}{c} {{\varvec{u}}}\le {{\varvec{y}}}\le {{\varvec{v}}}\\ {{\varvec{B}}}_{[j]}^\top {{\varvec{y}}}= {{\varvec{a}}} \end{array}} {{\varvec{b}}}_{j+1}^\top {{\varvec{y}}}, \quad \quad \quad \quad {{\overline{\beta }}}({{\varvec{a}}}):= \sup \limits _{\begin{array}{c} {{\varvec{u}}}\le {{\varvec{y}}}\le {{\varvec{v}}}\\ {{\varvec{B}}}_{[j]}^\top {{\varvec{y}}}= {{\varvec{a}}} \end{array}} {{\varvec{b}}}_{j+1}^\top {{\varvec{y}}}. \end{aligned}$$
(57)

Unless \(r=d\), the programs in (54), (55) and their counterparts in (56), (57) are not equivalent; the \(\beta \) values are only estimates of the \(\alpha \) values. By reformulation, duality, etc., other linear programs can be derived as described in the paper.
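The box-constrained programs (56) admit closed-form optima: the inf/sup of \({{\varvec{b}}}_1^\top {{\varvec{y}}}\) over \({{\varvec{u}}}\le {{\varvec{y}}}\le {{\varvec{v}}}\) is obtained coordinate-wise by sending \(y_i\) to whichever endpoint the sign of \((b_1)_i\) dictates. A small sketch (function name is ours):

```python
import numpy as np

def beta1_bounds(B, u, v):
    """Optimal values of the programs (56): inf and sup of b_1^T y over
    the box u <= y <= v, computed coordinate-wise from the signs of b_1."""
    b1 = np.asarray(B, dtype=float)[:, 0]
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    lo = np.sum(np.where(b1 > 0, b1 * u, b1 * v))  # inf: push y against b_1
    hi = np.sum(np.where(b1 > 0, b1 * v, b1 * u))  # sup: push y along b_1
    return lo, hi
```

When \({{\varvec{b}}}_1 \ge 0\), this reduces to \({{\varvec{b}}}_1^\top {{\varvec{u}}}\) and \({{\varvec{b}}}_1^\top {{\varvec{v}}}\), matching the \(r=d\) remark below.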

We are mainly interested in dual programs which are expedient for parametric linear programming. First, we assume we have computed \({{\underline{\alpha }}}_1\) and \({{\overline{\alpha }}}_1\) (if \(r=d\), they are equal to \({{\underline{\beta }}}_1={{\varvec{b}}}_{1}^\top {{\varvec{u}}}\) and \({{\overline{\beta }}}_1={{\varvec{b}}}_{1}^\top {{\varvec{v}}}\) respectively). Then, for \({{\varvec{a}}}\in {{\mathbb {R}}}^j\) with \(j\ge 1\), we have by duality

$$\begin{aligned} {{\underline{\alpha }}}({{\varvec{a}}}) = \sup _{\begin{array}{c} {{\varvec{M}}}^\top {{\varvec{w}}}= {{\varvec{c}}}\\ {{\varvec{w}}}\le 0 \end{array} } {{\varvec{b}}}^\top {{\varvec{w}}}, \quad \quad \quad \quad \quad \quad {{\overline{\alpha }}}({{\varvec{a}}}) = \inf _{\begin{array}{c} {{\varvec{M}}}^\top {{\varvec{w}}}= {{\varvec{c}}}\\ {{\varvec{w}}}\ge 0 \end{array} } {{\varvec{b}}}^\top {{\varvec{w}}}, \end{aligned}$$
(58)

where \({{\varvec{M}}}\) and \({{\varvec{c}}}\) are as in (32) with \({{\varvec{A}}}_{\overline{ [j]}}=({{\varvec{a}}}_{j+1}|\dots |{{\varvec{a}}}_{r})\), but now \({{\varvec{b}}}=\left( \begin{array}{r} {{\varvec{u}}}- {{\varvec{A}}}_{{ [j]}} {{\varvec{a}}}\\ -{{\varvec{v}}}+{{\varvec{A}}}_{{ [j]}} {{\varvec{a}}}\end{array} \right) \). Sequential enumeration based on the idea of chaining the simplex algorithm can be unrolled.
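For reference, the primal programs (55) can also be solved directly with an off-the-shelf LP solver, which is convenient for cross-checking the dual programs (58). The sketch below uses SciPy's `linprog`; the function name and the dense constraint assembly are our own, and no chaining or warm-starting is attempted.

```python
import numpy as np
from scipy.optimize import linprog

def alpha_interval(A, u, v, a, j):
    """Optimal values (55): inf and sup of x_{j+1} subject to
    u <= A x <= v and x_[j] = a (first j coordinates of x fixed),
    for 1 <= j <= r-1, solved as two primal LPs."""
    A = np.asarray(A, dtype=float)
    d, r = A.shape
    # u <= A x <= v  as  [A; -A] x <= [v; -u]
    A_ub = np.vstack([A, -A])
    b_ub = np.concatenate([np.asarray(v, float), -np.asarray(u, float)])
    # x_[j] = a  as equality constraints on the leading j coordinates
    A_eq = np.zeros((j, r))
    A_eq[:, :j] = np.eye(j)
    b_eq = np.asarray(a, dtype=float)
    c = np.zeros(r)
    c[j] = 1.0  # objective x_{j+1} (0-based index j)
    free = [(None, None)] * r
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=free)
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=free)
    return lo.fun, -hi.fun
```

In exact arithmetic these values agree with the optima of the dual programs (58) by strong LP duality.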



Cite this article

Chkifa, M.A. Lattice enumeration via linear programming. Numer. Math. 156, 71–106 (2024). https://doi.org/10.1007/s00211-023-01376-6

