Article

Evaluating the Gilbert–Varshamov Bound for Constrained Systems †

School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637121, Singapore
* Author to whom correspondence should be addressed.
† The paper was presented in part at the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022.
Entropy 2024, 26(4), 346; https://doi.org/10.3390/e26040346
Submission received: 28 February 2024 / Revised: 11 April 2024 / Accepted: 17 April 2024 / Published: 19 April 2024
(This article belongs to the Special Issue Discrete Math in Coding Theory)

Abstract

We revisit the well-known Gilbert–Varshamov (GV) bound for constrained systems. In 1991, Kolesnik and Krachkovsky showed that the GV bound can be determined via the solution of an optimization problem. Later, in 1992, Marcus and Roth modified the optimization problem and improved the GV bound in many instances. In this work, we provide explicit numerical procedures to solve these two optimization problems and, hence, compute the bounds. We then show that the procedures can be further simplified when we plot the respective curves. In the case where the graph presentation comprises a single state, we provide explicit formulas for both bounds.

1. Introduction

From early applications in magnetic recording systems to recent applications in DNA-based data storage [1,2,3,4] and energy harvesting [5,6,7,8,9,10], constrained codes have played a central role in enhancing reliability in many data storage and communications systems (see also [11] for an overview). Specifically, for most data storage systems, certain substrings are more prone to errors than others. Thus, by forbidding the appearance of such strings, that is, by imposing constraints on the codewords, the user is able to reduce the likelihood of error. We refer to the collection of words that satisfy the constraints as the constrained space $S$.
To further reduce the error probability, one can impose certain distance constraints on the codebook. In this work, we focus on the Hamming metric and consider the maximum size of a codebook whose words belong to the constrained space $S$ and whose pairwise distances are at least some value $d$. Specifically, we study one of the most well-known and fundamental lower bounds on this quantity—the Gilbert–Varshamov (GV) bound.
To determine the GV bound, one requires two quantities: the size of the constrained space, $|S|$, and, also, the ball volume, that is, the number of words within distance $d-1$ of a “center” word. In the case where the space is unconstrained, i.e., $S = \{0,1\}^n$, the ball volume does not depend on the center. Then, the GV bound is simply $|S|/V$, where $V$ is the ball volume of a center. However, for most constrained systems, the ball volume varies with the center. Nevertheless, Kolesnik and Krachkovsky showed that the GV lower bound can be generalized to $|S|/(4\overline{V})$, where $\overline{V}$ is the average ball volume [12]. This was further improved by Gu and Fuja to $|S|/\overline{V}$ in [13] (see pp. 242–243 in [11] for additional details). In the same paper [12], Kolesnik and Krachkovsky showed that the asymptotic rate of the average ball volume can be computed via an optimization problem. Later, Marcus and Roth modified the optimization problem by including an additional constraint and variable [14], and the resulting bound, which we refer to as the GV-MR bound, improves on the usual GV bound. Furthermore, in most cases, the improvement is strictly positive.
However, in the three decades since, very few works have evaluated these bounds for specific constrained systems. To the best of our knowledge, in all works that numerically computed the GV bound and/or GV-MR bound, the constrained systems of interest have, at most, eight states [15]. In [15], the authors wrote that “evaluation of the bound required considerable computation”, referring to the GV-MR bound.
In this paper, we revisit the optimization problems defined by Kolesnik and Krachkovsky [12] and Marcus and Roth [14] and develop a suite of explicit numerical procedures that solve these problems. In particular, to demonstrate the feasibility of our methods, we evaluate and plot the GV and GV-MR bounds for a constrained system involving 120 states (see Figure 1b).
We provide a high-level description of our approach. For both optimization problems, we first characterize the optimal solutions as roots of certain equations. Then, using the celebrated Newton–Raphson iterative procedure, we proceed to find the roots of these equations. However, as the latter equations involve the largest eigenvalues of certain matrices, each Newton–Raphson iteration requires the (partial) derivatives of these eigenvalues (in some variables). To resolve this, we modify another celebrated iterative procedure—the power iteration method—and the resulting procedures compute the GV and GV-MR bounds efficiently for a specific relative distance $\delta$. Interestingly, if we plot the bounds for $0 \le \delta \le 1$, the numerical procedure can be further simplified. Specifically, by exploiting certain properties of the optimal solutions, we provide procedures that use fewer Newton–Raphson iterations.
Parts of this paper were presented at the 2022 IEEE International Symposium on Information Theory (ISIT 2022) [16]. In the next section, we provide the formal definitions and state the optimization problems that compute the GV bound.

2. Preliminaries

Let $\Sigma = \{0, 1\}$ be the binary alphabet and let $\Sigma^n$ denote the set of all words of length $n$ over $\Sigma$. A labeled graph $G = (V, E, L)$ is a finite directed graph with states $V$, edges $E \subseteq V \times V$, and an edge labeling $L : E \to \Sigma^s$ for some $s \ge 1$. Here, we use $v_i \xrightarrow{\sigma} v_j$ to mean that there is an edge from $v_i$ to $v_j$ with label $\sigma$. The labeled graph $G$ is deterministic if, for each state, the outgoing edges have distinct labels.
A constrained system $S$ is, then, the set of all words obtained by reading the labels of paths in a labeled graph $G$. We say that $G$ is a graph presentation of $S$. We further denote the set of all length-$n$ words of $S$ by $S_n$. Alternatively, $S_n$ is the set of all words obtained by reading the labels of $(n/s)$-length paths in $G$. Then, the capacity of $S$, denoted by $\mathrm{Cap}(S)$, is given by $\mathrm{Cap}(S) \triangleq \limsup_{n\to\infty} \log|S_n|/n$. It is well-known that $\mathrm{Cap}(S)$ corresponds to the logarithm of the largest eigenvalue of the adjacency matrix $A_G$ (see, for example, [11]). Here, $A_G$ is a $(|V| \times |V|)$-matrix whose rows and columns are indexed by $V$. For each entry $(u, v) \in V \times V$, we set the corresponding entry to be one if $(u, v)$ is an edge, and zero otherwise.
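To make this concrete, the capacity computation is a one-liner once $A_G$ is known. The following Python sketch is illustrative only: the two-state graph it uses (which forbids the substring 11 and is not one of the systems studied in this paper) is a hypothetical example whose capacity is the base-2 logarithm of the golden ratio.

```python
import numpy as np

# Adjacency matrix of a hypothetical two-state graph forbidding "11":
# state 0 can go to states 0 and 1; state 1 can only go to state 0.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Cap(S) = log2 of the dominant eigenvalue of the adjacency matrix.
capacity = np.log2(max(np.linalg.eigvals(A).real))
print(f"Cap(S) ~ {capacity:.4f}")  # ~0.6942, log2 of the golden ratio
```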
Every constrained system can be presented by a deterministic graph $G$. Furthermore, any deterministic graph can be transformed into a primitive deterministic graph $H$ such that the capacity of $G$ is the same as the capacity of the constrained system presented by some irreducible component (maximal irreducible subgraph) of $H$ (see, for example, Marcus et al. [11]). It should be noted that a graph $G$ is primitive if there exists a positive integer $\ell$ such that $(A_G)^\ell$ is strictly positive. Therefore, we henceforth assume that our graphs are deterministic and primitive. When $|V| = 1$, we call this a single-state graph presentation and study these graphs in Section 5.
For $x, y \in S$, $d_H(x, y)$ is the Hamming distance between $x$ and $y$. We fix $1 \le d \le n$, and a fundamental problem in coding theory is finding the largest subset $C$ of $S_n$ such that $d_H(x, y) \ge d$ for all distinct $x, y \in C$. Let $A(n, d; S)$ denote the size of the largest such subset $C$.
In terms of asymptotic rates, we fix $0 \le \delta \le 1$, and our task is to find the highest attainable rate, denoted by $R(\delta; S)$, which is given by $R(\delta; S) \triangleq \limsup_{n\to\infty} \log A(n, \delta n; S)/n$.

2.1. Review of Gilbert–Varshamov Bound

To define the GV bound, we need to determine the total ball size. Specifically, for $x \in S_n$ and $0 \le r \le n$, we define $V(x, r; S) \triangleq |\{y \in S_n : d_H(x, y) \le r\}|$. We further define $T(n, d; S) = \sum_{x \in S_n} V(x, d-1; S)$. Then, the GV bound, as given by Gu and Fuja [13,17], states that there exists an $(n, d; S)$ code of size at least $|S_n|^2 / T(n, d; S)$.
In terms of asymptotic rates, there exists a family of ( n , δ n ; S ) codes such that their rates approach
$$R_{\mathrm{GV}}(\delta) = 2\,\mathrm{Cap}(S) - T(\delta), \qquad (1)$$
where $T(\delta) \triangleq \limsup_{n\to\infty} \log T(n, \delta n; S)/n$.
In this paper, our main task is to determine $R_{\mathrm{GV}}(\delta)$ efficiently. We observe that since $\mathrm{Cap}(S) = T(0)$, it suffices to find efficient ways of determining $T(\delta)$. It turns out that $T(\delta)$ can be found via the solution of a convex optimization problem. Specifically, given a labeled graph $G = (V, E, L)$, we define its product graph $G \times G = (V_\times, E_\times, D)$ as follows:
  • $V_\times \triangleq V \times V$.
  • For $(v_i, v_j), (v_k, v_\ell) \in V_\times$ and $(\sigma_1, \sigma_2) \in \Sigma^s \times \Sigma^s$, we draw an edge $(v_i, v_j) \xrightarrow{(\sigma_1, \sigma_2)} (v_k, v_\ell)$ if and only if both $v_i \xrightarrow{\sigma_1} v_k$ and $v_j \xrightarrow{\sigma_2} v_\ell$ belong to $E$.
  • Then, we label the edges in $E_\times$ with the function $D : E_\times \to \tfrac{1}{s}\mathbb{Z}_{\ge 0}$, where $D\big((v_i, v_j) \xrightarrow{(\sigma_1, \sigma_2)} (v_k, v_\ell)\big) = d_H(\sigma_1, \sigma_2)/s$.
A stationary Markov chain $P$ on a graph $G = (V, E, L)$ is a probability distribution function $P : E \to [0, 1]$ such that $\sum_{e \in E} P(e) = 1$ and, for any state $u$ of $G$, the sum of the probabilities of the outgoing edges equals the sum of the probabilities of the incoming edges. We denote by $\mathcal{M}(G)$ the set of all stationary Markov chains on $G$. For a state $u \in V$, let $E_u$ denote the set of outgoing edges from $u$ in $G$. The state vector $\boldsymbol{\pi}^T = (\pi_u)_{u \in V}$ of a stationary Markov chain $P$ on $G$ is defined by $\pi_u = \sum_{e \in E_u} P(e)$. The entropy rate of a stationary Markov chain is defined by
$$H(P) = -\sum_{u \in V} \sum_{e \in E_u} \pi_u \frac{P(e)}{\pi_u} \log \frac{P(e)}{\pi_u}.$$
Furthermore, T ( δ ) can be obtained by solving the following optimization problem [12,14]:
$$T(\delta) = \sup\Big\{H(P) : P \in \mathcal{M}(G \times G),\ \sum_{e \in E_\times} P(e)\,D(e) \le \delta\Big\}. \qquad (2)$$
To this end, we consider the dual problem of (2). Specifically, we define a $(|V|^2 \times |V|^2)$-distance matrix $T_{G\times G}(y)$ whose rows and columns are indexed by $V_\times$. For each entry indexed by $e \in V_\times \times V_\times$, we set the entry to be zero if $e \notin E_\times$, and we set it to be $y^{D(e)}$ if $e \in E_\times$. Then, the dual problem can be stated in terms of the dominant eigenvalue of the matrix $T_{G\times G}(y)$.
By applying the reduction techniques from [14], we can reduce the problem size by a factor of two. Formally, in the case of $s = 1$, we define a $\binom{|V|+1}{2} \times \binom{|V|+1}{2}$-reduced distance matrix $B_{G\times G}(y)$ whose rows and columns are indexed by $V^{(2)} \triangleq \{(v_i, v_j) : 1 \le i \le j \le |V|\}$ using the following procedure.
Two states $s_1 = (v_i, v_j)$ and $s_2 = (v_k, v_\ell)$ in $G \times G$ are said to be equivalent if $v_i = v_\ell$ and $v_j = v_k$. The matrix $B_{G\times G}(y)$ is then obtained by merging all pairs of equivalent states $s_1$ and $s_2$. That is, we add the column indexed by $s_2$ to the column indexed by $s_1$ and then remove the row and column indexed by $s_2$. It should be noted that it may be possible to reduce the size of the matrix $B_{G\times G}(y)$ further. However, for ease of exposition, we do not consider this in this work.
Following this procedure, we observe that the entries of the matrix $B_{G\times G}(y)$ can be described by the rules in Table 1. Moreover, the dominant eigenvalue of $B_{G\times G}(y)$ is the same as that of $T_{G\times G}(y)$. Then, by strong duality, computing (2) is equivalent to solving the following dual problem [18,19] (see also [20]):
$$T(\delta) = \inf\{-\delta \log y + \log \Lambda(B_{G\times G}(y)) : 0 \le y \le 1\}. \qquad (3)$$
Here, we use $\Lambda(M)$ to denote the dominant eigenvalue of a matrix $M$. To simplify further, we write $\Lambda(y; B) \triangleq \Lambda(B_{G\times G}(y))$.
Since the objective function in (3) is convex, it follows from standard calculus that any local minimum solution $y^*$ in the interval $[0, 1]$ is also a global minimum solution. Furthermore, $y^*$ is a zero of the first derivative of the objective function. If we consider the numerator of this derivative, then $y^*$ is a root of the function
$$F(y) \triangleq y\,\Lambda'(y; B) - \delta\,\Lambda(y; B). \qquad (4)$$
In Corollary 1, we show that there is exactly one $y^*$ such that $F(y^*) = 0$ and that $F'(y)$ is strictly positive for all values of $y$. Therefore, to evaluate the GV bound for a fixed $\delta$, it suffices to determine $y^*$.
Later, Marcus and Roth [14] improved the GV bound (1) by considering certain subsets of the constrained space $S$. This entails including an additional constraint in the optimization problem (2) and, correspondingly, an additional variable in the dual problem (3). Specifically, they considered certain subsets $S(p) \subseteq S$ where each symbol in the words of $S(p)$ appears with a certain frequency dependent on the parameter $p$. We describe this in more detail in Section 4.

2.2. Our Contributions

(A)
In Section 3, we develop numerical procedures to compute $T(\delta)$ for a fixed $\delta$ and, hence, determine the GV bound (1). Our procedure modifies the well-known power iteration method to compute the derivatives of $\Lambda(y; B)$. After that, using these derivatives, we apply the classical Newton–Raphson method to determine the root of (4). In the same section, we also study procedures to plot the GV curve, that is, the set $\{(\delta, R_{\mathrm{GV}}(\delta)) : 0 \le \delta \le 1\}$. Here, we demonstrate that the GV curve can be plotted without any Newton–Raphson iterations.
(B)
In Section 4, we then develop similar power iteration methods and numerical procedures to compute the GV-MR bound. Similar to the GV curve, we also provide a plotting procedure that uses significantly fewer Newton–Raphson iterations.
(C)
In Section 5, we provide explicit formulas for the computation of the GV bound and GV-MR bound for graph presentations that have exactly one state but multiple parallel edges.
(D)
In Section 6, we validate our methods by computing the GV and the GV-MR bounds for some specific constrained systems. For comparison purposes, we also plot a simple lower bound that is obtained by using an upper estimate of the ball size. From the plots in Figure 1, Figure 2 and Figure 3, it is also clear that the GV and GV-MR bounds are significantly better. We also observe that the GV bound and GV-MR bound for subblock energy-constrained codes (SECCs) obtained through our procedures improve on the GV-type bound given by Tandon et al. (Proposition 12 in [21]).

3. Evaluating the Gilbert–Varshamov Bound

In this section, we first describe a numerical procedure that solves (3) and, hence, determines $R_{\mathrm{GV}}(\delta)$ for fixed values of $\delta$. Then, we show that the procedure can be simplified when we compute the GV curve, that is, the set of points $\{(\delta, R_{\mathrm{GV}}(\delta)) : \delta \in \langle 0, 1\rangle\}$. Here, we abuse notation and use $\langle a, b\rangle$ to denote the interval $\{x : a \le x \le b\}$ if $a < b$, and the interval $\{x : b \le x \le a\}$ otherwise.
Below, we provide a formal description of our procedure to obtain the GV bound for a fixed relative distance $\delta$.
Procedure 1 (GV bound for fixed relative distance).
Input: Adjacency matrix $A_G$, reduced distance matrix $B_{G\times G}(y)$, and relative minimum distance $\delta$
Output: The GV bound, that is, $R_{\mathrm{GV}}(\delta)$ as defined in (1)
(1)
Apply the Newton–Raphson method to obtain $y^*$ such that $F(y^*)$ is approximately zero.
  • Fix a tolerance value $\epsilon$.
  • Set $t = 0$ and pick an initial guess $0 \le y_0 \le 1$.
  • While $|y_t - y_{t-1}| > \epsilon$,
    Compute the next guess $y_{t+1}$ as follows:
$$y_{t+1} = y_t - \frac{F(y_t)}{F'(y_t)} = y_t - \frac{y_t\,\Lambda'(y_t; B) - \delta\,\Lambda(y_t; B)}{(1 - \delta)\,\Lambda'(y_t; B) + y_t\,\Lambda''(y_t; B)}.$$
    In this step, apply the power iteration method to compute $\Lambda(y_t; B)$, $\Lambda'(y_t; B)$, and $\Lambda''(y_t; B)$.
    Increment $t$ by one.
  • Set $y^* \leftarrow y_t$.
(2)
Determine $R_{\mathrm{GV}}(\delta)$ using $y^*$. Specifically, compute $T(\delta) \leftarrow -\delta\log y^* + \log\Lambda(y^*; B)$, $\mathrm{Cap}(S) \leftarrow \log\Lambda(A_G)$, and $R_{\mathrm{GV}}(\delta) \leftarrow 2\,\mathrm{Cap}(S) - T(\delta)$.
Throughout Section 3 and Section 4, we illustrate our numerical procedures via a running example using the class of sliding window-constrained codes (SWCCs). Formally, we fix a window length $L$ and window weight $w$, and say that a binary word satisfies the $(L, w)$-sliding window weight constraint if the number of ones in every $L$ consecutive bits is at least $w$. We refer to the collection of words that meet this constraint as an $(L, w)$-SWCC constrained system. The class of SWCCs was introduced by Tandon et al. for the application of simultaneous energy and information transfer [7,10]. Later, Immink and Cai [8,9] studied encoders for this constrained system and provided a simple graph presentation that uses only $\binom{L}{w}$ states.
In the next example, we illustrate how the numerical procedure can be used to compute the GV bound when $\delta = 0.1$.
Example 1.
Let $L = 3$ and $w = 2$, and consider the $(3, 2)$-SWCC constrained system. From [8], we have the following graph presentation with states 011, 101, and 110:
[Figure: graph presentation of the $(3,2)$-SWCC constrained system.]
Then, the corresponding adjacency and reduced distance matrices are as follows:
$$A_G = \begin{pmatrix} 1 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{pmatrix}, \qquad B_{G\times G}(y) = \begin{pmatrix} 1 & 2y & 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 & y & 0\\ 1 & y & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
To determine the GV bound at $\delta = 0.1$, we first approximate the optimal point $y^*$ at which $-\delta\log y + \log\Lambda(y; B)$ is minimized.
We apply the Newton–Raphson method to find a zero of the function $F(y)$. With the initial guess $y_0 = 0.3$, we apply the power iteration method to determine
$$\Lambda(0.3; B) = 1.659, \qquad \Lambda'(0.3; B) = 0.694, \qquad \Lambda''(0.3; B) = 0.183.$$
Then, we compute that $y_1 \approx 0.238$. Repeating the computations, we have that $y_2 \approx 0.238$. Since $|y_2 - y_1|$ is less than the tolerance value $10^{-5}$, we set $y^* = 0.238$. Hence, we have that $T(0.1) = 0.9$. Applying the power iteration method to either $A_G$ or $B_{G\times G}(0)$, we compute the capacity of the $(3,2)$-SWCC constrained system to be $\mathrm{Cap}(S) = 0.551$. Then, the GV bound is given by $R_{\mathrm{GV}}(0.1) = 2(0.551) - 0.9 = 0.202$.
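The following Python sketch mirrors Procedure 1 on Example 1. As a simplification, it computes $\Lambda(y; B)$ with a dense eigensolver and approximates $\Lambda'$ and $\Lambda''$ by central finite differences instead of the modified power iterations of Appendix A; all logarithms are base 2, and the helper names are ours, not the paper's.

```python
import numpy as np

def B(y):
    """Reduced distance matrix B_{GxG}(y) of the (3,2)-SWCC system."""
    return np.array([[1, 2*y, 0, 1, 0, 0],
                     [0, 0,   1, 0, y, 0],
                     [1, y,   0, 0, 0, 0],
                     [0, 0,   0, 0, 0, 1],
                     [0, 0,   1, 0, 0, 0],
                     [1, 0,   0, 0, 0, 0]], dtype=float)

def Lam(y):
    """Dominant eigenvalue Lambda(y; B)."""
    return max(np.linalg.eigvals(B(y)).real)

def newton_gv(delta, y0=0.3, eps=1e-5, h=1e-6):
    """Step 1 of Procedure 1: Newton-Raphson on F(y) = y*L'(y) - delta*L(y)."""
    y = y0
    while True:
        L0 = Lam(y)
        L1 = (Lam(y + h) - Lam(y - h)) / (2 * h)          # ~ Lambda'
        L2 = (Lam(y + h) - 2 * L0 + Lam(y - h)) / h**2    # ~ Lambda''
        y_next = y - (y * L1 - delta * L0) / ((1 - delta) * L1 + y * L2)
        if abs(y_next - y) <= eps:
            return y_next
        y = y_next

A = np.array([[1, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
cap = np.log2(max(np.linalg.eigvals(A).real))             # ~0.551

delta = 0.1
y_star = newton_gv(delta)                                 # expect ~0.238
T = -delta * np.log2(y_star) + np.log2(Lam(y_star))       # expect ~0.9
print(y_star, 2 * cap - T)                                # R_GV(0.1) ~ 0.202
```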
We discuss the convergence issues arising from Procedure 1. We observe that there are two different iterative processes in Step 1, namely, (a) the power iteration method to compute the values Λ ( y t ; B ) , Λ ( y t ; B ) , and Λ ( y t ; B ) , and (b) the Newton–Raphson method that determines the zero of F ( y ) .
(a)
We recall that $\Lambda(y; B)$ is the largest eigenvalue of the reduced distance matrix $B_{G\times G}(y)$. If we apply naive methods to compute this dominant eigenvalue, the computational cost increases very rapidly with the matrix size. Specifically, if $G$ has $M$ states, then the reduced distance matrix has dimensions $\Theta(M^2) \times \Theta(M^2)$ and finding its characteristic equation takes $O(M^6)$ time. Even then, determining the exact roots of characteristic equations of degree at least five is generally impossible. Therefore, we turn to numerical procedures like the ubiquitous power iteration method [22]. However, the standard power iteration method is only able to compute the dominant eigenvalue $\Lambda(y; B)$. Nevertheless, we can modify the power iteration method to compute $\Lambda'(y; B)$ and higher order derivatives. In Appendix A, we demonstrate that, under certain mild assumptions, the modified power iteration method always converges. Moreover, using the sparsity of the reduced distance matrix, each iteration can be completed in $O(M^2)$ time.
(b)
Next, we discuss whether we can guarantee that $y_t$ converges to $y^*$ as $t$ approaches infinity. Even though the Newton–Raphson method converges in all our numerical experiments, we are unable to demonstrate that it always converges for $F(y)$. Nevertheless, we can circumvent this issue if we are interested in plotting the GV curve. Specifically, if our objective is to determine the curve $\{(\delta, R_{\mathrm{GV}}(\delta)) : \delta \in \langle 0, 1\rangle\}$, it turns out that we do not need to implement the Newton–Raphson iterations, and we discuss this next.
We fix some constrained system $S$ and define its corresponding GV curve to be the set of points $\mathrm{GV}(S) \triangleq \{(\delta, R_{\mathrm{GV}}(\delta)) : \delta \in \langle 0, 1\rangle\}$. Here, we demonstrate that the GV curve can be plotted without any Newton–Raphson iterations.
To this end, we observe that when $F(y^*) = 0$, we have that $\delta = y^*\,\Lambda'(y^*; B)/\Lambda(y^*; B)$. Hence, we abuse notation and define the function
$$\delta(y) \triangleq y\,\Lambda'(y; B)/\Lambda(y; B). \qquad (5)$$
We further define $\delta_{\max} \triangleq \delta(1) = \Lambda'(1; B)/\Lambda(1; B)$. In this section, we prove the following theorem.
Theorem 1.
Let $G$ be the graph presentation for the constrained system $S$. If we define the function
$$\rho_{\mathrm{GV}}(y) \triangleq 2\,\mathrm{Cap}(S) + \delta(y)\log y - \log\Lambda(y; B), \qquad (6)$$
then the corresponding GV curve is given by
$$\mathrm{GV}(S) = \{(\delta(y), \rho_{\mathrm{GV}}(y)) : y \in [0, 1]\} \cup \{(\delta, 0) : \delta \ge \delta_{\max}\}. \qquad (7)$$
Before we prove Theorem 1, we discuss its implications. It should be noted that to compute $\delta(y)$ and $\rho_{\mathrm{GV}}(y)$, it suffices to determine $\Lambda(y; B)$ and $\Lambda'(y; B)$ using the modified power iteration methods described in Appendix A. In other words, no Newton–Raphson iterations are required. We also have additional computational savings, as we do not need to apply the power iteration method to compute the second derivative $\Lambda''(y; B)$.
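As an illustration, the sketch below traces the GV curve exactly as Theorem 1 prescribes, reusing the hypothetical helpers B, Lam, and cap from the previous snippet; $\Lambda'$ is again approximated by a central difference rather than by Power Iteration I.

```python
def gv_curve(num=100, h=1e-6):
    """Sweep y over (0, 1] and return the (delta(y), rho_GV(y)) pairs of
    Theorem 1; no Newton-Raphson iterations are needed."""
    pts = []
    for y in np.linspace(1e-3, 1.0, num):
        L0 = Lam(y)
        L1 = (Lam(y + h) - Lam(y - h)) / (2 * h)
        d = y * L1 / L0                                # delta(y), Eq. (5)
        rho = 2 * cap + d * np.log2(y) - np.log2(L0)   # rho_GV(y), Eq. (6)
        pts.append((d, max(rho, 0.0)))
    return pts
```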
Example 2.
We continue our example and plot the GV curve for the $(3,2)$-SWCC constrained system in Figure 1a. Before plotting, we observe that when $y = 0$, we have $(\delta(0), \rho(0)) = (0, 0.551) = (0, \mathrm{Cap}(S))$, as expected. When $y = 1$, we have $\delta(1) = \delta_{\max} = 0.313$. Indeed, both $\rho(1)$ and $R_{\mathrm{GV}}(\delta_{\max})$ are equal to zero and we have that $R_{\mathrm{GV}}(\delta) = 0$ for $\delta \ge \delta_{\max}$.
Next, we compute a set of 100 points on the GV curve. If we apply Procedure 1 to compute $R_{\mathrm{GV}}(\delta)$ for 100 values of $\delta$ in the interval $[0, \delta_{\max}]$, we require 275 Newton–Raphson iterations and 6900 power iterations to find these points. In contrast, applying Theorem 1, we compute $(\delta(y), \rho(y))$ for 100 values of $y$ in the interval $[0, 1]$. This does not require any Newton–Raphson iterations and involves only 2530 power iterations.
To prove Theorem 1, we demonstrate the following lemmas. Our first lemma is immediate from the definitions of R GV , δ , and ρ in (1), (5), and (6), respectively.
Lemma 1.
$R_{\mathrm{GV}}(\delta(y)) = \rho(y)$ for all $y \in [0, 1]$.
The next lemma studies the behaviour of both δ and ρ as functions in y.
Lemma 2.
In terms of $y$, the functions $\delta(y)$ and $\rho(y)$ are monotone increasing and decreasing, respectively. Furthermore, we have that $(\delta(0), \rho(0)) = (0, \mathrm{Cap}(S))$, $(\delta(1), \rho(1)) = (\delta_{\max}, 0)$, and $R_{\mathrm{GV}}(\delta) = 0$ for $\delta \ge \delta_{\max}$.
Proof. 
To simplify notation, we write $\Lambda(y; B)$, $\Lambda'(y; B)$, and $\Lambda''(y; B)$ as $\Lambda$, $\Lambda'$, and $\Lambda''$, respectively.
First, we show that $\delta'(y)$ is positive for $0 \le y < 1$. Differentiating the expression in (5), we have that $\delta'(y) > 0$ is equivalent to
$$\Lambda(\Lambda' + y\Lambda'') - y(\Lambda')^2 > 0. \qquad (8)$$
We recall that (3) is a convex minimization problem. Hence, the second order derivative of the objective function is always positive. In other words,
$$\frac{\delta}{y^2} + \frac{\Lambda''\Lambda - (\Lambda')^2}{\Lambda^2} > 0.$$
Substituting $\delta$ with $y\Lambda'/\Lambda$ and multiplying by $y\Lambda^2$, we obtain (8), as desired.
Next, we show that $\rho$ is monotone decreasing. We recall that $\rho(y) = R_{\mathrm{GV}}(\delta(y)) = 2\,\mathrm{Cap}(S) - T(\delta(y))$. Since $T(\delta)$ yields the asymptotic rate of the total ball size, we have that as $y$ increases, $\delta(y)$ increases and, so, $T(\delta(y))$ increases. Therefore, $\rho(y)$ decreases, as desired.
Next, we show that $\rho(1) = 0$. When $y = 1$, we have from (6) that $\rho(1) = 2\,\mathrm{Cap}(S) - \log\Lambda(1; B)$. Now, we recall that $B_{G\times G}(y)$ shares the same dominant eigenvalue as the matrix $T_{G\times G}(y)$ [12]. Furthermore, it can be verified that when $y = 1$, $T_{G\times G}(1)$ is the tensor product of $A_G$ with itself. That is, $T_{G\times G}(1) = A_G \otimes A_G$. It then follows from standard linear algebra that $\Lambda(1; B) = \Lambda(1; T) = \Lambda(A_G)^2$. Thus, $\log\Lambda(1; B) = 2\,\mathrm{Cap}(S)$ and $\rho(1) = 0$. In this instance, we also have that $T(\delta_{\max}) = 2\,\mathrm{Cap}(S)$.
Finally, for $\delta \ge \delta_{\max}$, we have that $T(\delta) = T(\delta_{\max}) = 2\,\mathrm{Cap}(S)$ and, thus, $R_{\mathrm{GV}}(\delta) = 0$, as required.    □
Theorem 1 is then immediate from Lemmas 1 and 2.
We have the following corollary that immediately follows from Lemma 2. This corollary then implies that y * yields the global minimum for the optimization problem.
Corollary 1.
When $0 \le \delta \le \delta_{\max} = \Lambda'(1; B)/\Lambda(1; B)$, the function $F(y) \triangleq y\,\Lambda'(y; B) - \delta\,\Lambda(y; B)$ has a unique zero in $\langle 0, 1\rangle$. Furthermore, $F'(y)$ is strictly positive for all $y \in \langle 0, 1\rangle$.

4. Evaluating Marcus and Roth’s Improvement of the Gilbert–Varshamov Bound

In [14], Marcus and Roth improved the GV lower bound for most constrained systems by considering subsets $S(p)$ of $S$, where $p$ is some parameter. Here, we focus on the case $s = 1$ and set $p$ to be the normalized frequency of edges whose labels equal one. Specifically, we set $S(p) \triangleq \{x \in S : \mathrm{wt}(x) = p|x|\}$.
Next, let $S_n(p)$ be the set of all words/paths of length $n$ in $S(p)$ and we define $S(p) \triangleq \limsup_{n\to\infty}\frac{1}{n}\log|S_n(p)|$.
Similar to before, we define $T(p, \delta) = \limsup_{n\to\infty}\frac{1}{n}\log T(n, \delta n; S(p))$. Since $S_n(p)$ is a subset of $S_n$, it follows from the usual GV argument that there exists a family of $(n, \delta n; S)$ codes whose rates approach $2S(p) - T(p, \delta)$ for all $0 \le p \le 1$. Therefore, we have the following lower bound on asymptotic achievable code rates:
$$R_{\mathrm{MR}}(\delta) = \sup\{2S(p) - T(p, \delta) : 0 \le p \le 1\}. \qquad (9)$$
Now, a key result from [14] is that both S ( p ) and T ( p , δ ) can be obtained via two different convex optimization problems. For succinctness, we state the dual formulations of these optimization problems.
First, $S(p)$ can be obtained from the following problem:
$$S(p) = \inf\{-p\log z + \log\Lambda(C_G(z)) : z \ge 0\}. \qquad (10)$$
Here, $C_G(z)$ is the $(|V| \times |V|)$-matrix whose rows and columns are indexed by $V$. For each entry indexed by $e$, we set $(C_G(z))_e$ to be zero if $e \notin E$, and $z^{L(e)}$ if $e \in E$.
As before, we simplify notation by writing $\Lambda(z; C) \triangleq \Lambda(C_G(z))$. Again, following the convexity of (10), we are interested in finding the zero of the following function:
$$G_1(z) \triangleq z\,\Lambda'(z; C) - p\,\Lambda(z; C). \qquad (11)$$
Next, $T(p, \delta)$ can be obtained via the following optimization:
$$T(p, \delta) = \inf\{-2p\log x - \delta\log y + \log\Lambda(D_{G\times G}(x, y)) : x \ge 0,\ 0 \le y \le 1\}. \qquad (12)$$
Here, $D_{G\times G}(x, y)$ is a $\binom{|V|+1}{2} \times \binom{|V|+1}{2}$-reduced distance matrix indexed by $V^{(2)}$. To define the entry of the matrix $D_{G\times G}(x, y)$ indexed by $((v_i, v_j), (v_k, v_\ell))$, we look at the vertices $v_i$, $v_j$, $v_k$, and $v_\ell$ and follow the rules given in Table 2.
Again, we write $\Lambda(x, y; D) \triangleq \Lambda(D_{G\times G}(x, y))$. Furthermore, following the convexity of (12), we have that if the optimal solution is attained at $x$ and $y$, then
$$G_2(x, y) \triangleq x\,\Lambda_x(x, y; D) - 2p\,\Lambda(x, y; D) = 0, \qquad (13)$$
$$G_3(x, y) \triangleq y\,\Lambda_y(x, y; D) - \delta\,\Lambda(x, y; D) = 0. \qquad (14)$$
To this end, we consider the function $\Delta(x) \triangleq \Lambda_y(x, 1; D)/\Lambda(x, 1; D)$ for $x > 0$ and set $\delta_{\max} \triangleq \sup\{\Delta(x) : x > 0\}$. As with the previous section, we develop a numerical procedure to solve the optimization problem (9). To do so, we have the following critical observation.
Theorem 2.
For a given $\delta < \delta_{\max}$, consider the optimization problem
$$\sup\{-2p\log z + 2\log\Lambda(z; C) + 2p\log x + \delta\log y - \log\Lambda(x, y; D) : G_1(z) = G_2(x, y) = G_3(x, y) = 0\}.$$
If $(p^*, x^*, y^*, z^*)$ is an optimal solution, then $x^* = z^*$. Furthermore, if $0 \le p^* \le 1$, then $x^*, z^* \ge 0$ and $0 \le y^* \le 1$.
Proof. 
Let $\lambda_1$, $\lambda_2$, and $\lambda_3$ be real-valued variables and define $\mathcal{L}(p, x, y, z, \lambda_1, \lambda_2, \lambda_3) \triangleq G(p, x, y, z) + \lambda_1 G_1(z) + \lambda_2 G_2(x, y) + \lambda_3 G_3(x, y)$, where $G(p, x, y, z)$ denotes the objective function. Using the Lagrangian multiplier theorem, we have that $\partial\mathcal{L}/\partial p = \partial\mathcal{L}/\partial x = \partial\mathcal{L}/\partial y = \partial\mathcal{L}/\partial z = 0$ for any optimal solution. Solving these equations with the constraints $G_1(z) = G_2(x, y) = G_3(x, y) = 0$, we have that $\lambda_1 = \lambda_2 = \lambda_3 = 0$ and $x = z$ for any optimal solution.
Now, when $p^* \in [0, 1]$, using $G_1(z) = 0$, we define $z(p)$ implicitly via $p = z\,\Lambda'(z; C)/\Lambda(z; C)$. Then, proceeding as with the proof of Lemma 2, we see that $z(p)$ is monotone increasing with $z(0) = 0$. Therefore, $z^* = z(p^*) \ge 0$.
Similarly, given $p^*$ and $x^*$, we use $G_3(x^*, y) = 0$ to define $\delta(y) = y\,\Lambda_y(x^*, y; D)/\Lambda(x^*, y; D)$. Again, we can proceed as with the proof of Lemma 2 to show that $\delta(y)$ is monotone increasing. Furthermore, since $\delta(y^*) < \delta_{\max} = \delta(1)$, we have that $y^* \in [0, 1]$.    □
Therefore, to determine $R_{\mathrm{MR}}(\delta)$ for any fixed $\delta$, it suffices to find $x$, $y$, $z$, and $p$ such that $G_1(z) = G_2(x, y) = G_3(x, y) = 0$ and $x = z$.
Now, the optimization in Theorem 2 does not constrain the values of $p$. Furthermore, for certain constrained systems, there are instances where $p$ falls outside the interval $[0, 1]$. In this case, instead of solving the optimization problem (9), we set $p$ to be either zero or one, and we solve the corresponding optimization problems (10) and (12). Specifically, if we have $p^* < 0$, then we set $p^* = 0$ and $x^* = 0$, while if $p^* > 1$, then we set $p^* = 1$ and $x^* = \infty$. Hence, the resulting rates that we obtain are a lower bound for the GV-MR bound.
Procedure 2 ( R MR ( δ ) for fixed δ δ max ).
Input: Matrices $C_G(z)$ and $D_{G\times G}(x, y)$, and relative minimum distance $\delta$
Output: $R_{\mathrm{MR}}(\delta)$ or $R_{\mathrm{LB}}(\delta)$, where $R_{\mathrm{LB}}(\delta) \le R_{\mathrm{MR}}(\delta)$
(1)
Apply the Newton–Raphson method to obtain $p^*$, $x^*$, and $y^*$ such that $G_1(x^*)$, $G_2(x^*, y^*)$, and $G_3(x^*, y^*)$ are approximately zero. Specifically, do the following:
  • Fix a tolerance value $\epsilon$.
  • Set $t = 0$ and pick an initial guess $p_0 \ge 0$, $x_0 \ge 0$, $0 \le y_0 \le 1$.
  • While $|p_t - p_{t-1}| + |x_t - x_{t-1}| + |y_t - y_{t-1}| > \epsilon$,
    Compute the next guess $(p_{t+1}, x_{t+1}, y_{t+1})$:
$$\begin{pmatrix} p_{t+1}\\ x_{t+1}\\ y_{t+1} \end{pmatrix} = \begin{pmatrix} p_t\\ x_t\\ y_t \end{pmatrix} - \begin{pmatrix} \partial G_1/\partial p & \partial G_1/\partial x & \partial G_1/\partial y\\ \partial G_2/\partial p & \partial G_2/\partial x & \partial G_2/\partial y\\ \partial G_3/\partial p & \partial G_3/\partial x & \partial G_3/\partial y \end{pmatrix}^{-1} \begin{pmatrix} G_1(x_t)\\ G_2(x_t, y_t)\\ G_3(x_t, y_t) \end{pmatrix}.$$
    Here, apply the power iteration method to compute $\Lambda(x_t; C)$, $\Lambda'(x_t; C)$, $\Lambda''(x_t; C)$, $\Lambda(x_t, y_t; D)$, $\Lambda_x(x_t, y_t; D)$, $\Lambda_y(x_t, y_t; D)$, $\Lambda_{xx}(x_t, y_t; D)$, $\Lambda_{yy}(x_t, y_t; D)$, and $\Lambda_{xy}(x_t, y_t; D)$.
    Increment $t$ by one.
  • Set $p^* \leftarrow p_t$, $x^* \leftarrow x_t$, $y^* \leftarrow y_t$.
(2A)
If $0 \le p^* \le 1$, set $R_{\mathrm{MR}}(\delta) \leftarrow 2\log\Lambda(x^*; C) + \delta\log y^* - \log\Lambda(x^*, y^*; D)$.
(2B)
Otherwise,
  • If $p^* < 0$, set $p^* \leftarrow 0$, $x^* \leftarrow 0$, and $y^* \leftarrow$ the solution of $G_3(0, y) = 0$.
  • If $p^* > 1$, set $p^* \leftarrow 1$, $x^* \leftarrow \infty$, and $y^* \leftarrow$ the solution of $G_3(\infty, y) = 0$.
Finally, set $R_{\mathrm{LB}}(\delta) \leftarrow 2\log\Lambda(x^*; C) + \delta\log y^* - \log\Lambda(x^*, y^*; D)$.
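As a hedged alternative to hand-coding the Jacobian in Step 1, the sketch below solves $G_1 = G_2 = G_3 = 0$ (with $z = x$, as Theorem 2 permits) using scipy.optimize.fsolve, for the $(3,2)$-SWCC matrices that appear in Example 3 below. The eigenvalue partials are replaced by central finite differences, and the initial guess may need tuning; this is a sketch of the idea, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import fsolve

def C(z):      # C_G(z) for the (3,2)-SWCC system (Example 3 below)
    return np.array([[z, 1, 0], [0, 0, z], [z, 0, 0]], dtype=float)

def D(x, y):   # D_{GxG}(x, y) for the same system
    return np.array([[x*x, 2*x*y, 0, 1, 0, 0],
                     [0, 0, x*x, 0, x*y, 0],
                     [x*x, x*y, 0, 0, 0, 0],
                     [0, 0, 0, 0, 0, x*x],
                     [0, 0, x*x, 0, 0, 0],
                     [x*x, 0, 0, 0, 0, 0]], dtype=float)

def lam(M):
    return max(np.linalg.eigvals(M).real)

def G(v, delta, h=1e-6):
    """G1, G2, G3 of (11), (13), (14) with z = x; partials by differences."""
    p, x, y = v
    lc, ld = lam(C(x)), lam(D(x, y))
    lcx = (lam(C(x + h)) - lam(C(x - h))) / (2 * h)
    ldx = (lam(D(x + h, y)) - lam(D(x - h, y))) / (2 * h)
    ldy = (lam(D(x, y + h)) - lam(D(x, y - h))) / (2 * h)
    return [x * lcx - p * lc, x * ldx - 2 * p * ld, y * ldy - delta * ld]

delta = 0.1
p, x, y = fsolve(G, x0=[0.5, 0.5, 0.5], args=(delta,))
if 0 <= p <= 1:   # Step 2A
    print(2 * np.log2(lam(C(x))) + delta * np.log2(y) - np.log2(lam(D(x, y))))
```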
Remark 1.
Let $p^*$ be the value computed at Step 1. When $p^*$ falls outside the interval $[0, 1]$, we set $p^* \in \{0, 1\}$, and we argued earlier that the value $R_{\mathrm{LB}}(\delta)$ returned at Step 2B is, at most, $R_{\mathrm{MR}}(\delta)$. Nevertheless, we conjecture that $R_{\mathrm{LB}}(\delta) = R_{\mathrm{MR}}(\delta)$.
As before, we develop a plotting procedure that minimizes the use of Newton–Raphson iterations.
We note that there are three scenarios for $\Delta(x)$. If $\Delta(x)$ is monotone decreasing, then $\delta_{\max} = \lim_{x\to 0}\Delta(x)$ and we set $x^\# = 0$. If $\Delta(x)$ is monotone increasing, then $\delta_{\max} = \lim_{x\to\infty}\Delta(x)$ and we set $x^\# = \infty$. Otherwise, $\Delta(x)$ is maximized at some positive value and we set $x^\#$ to be this value. Next, to obtain the GV-MR curve (see Remark 2), we iterate over $x \in \langle 1, x^\#\rangle$. It should be noted that if $y(x^\#) < 1$ or, equivalently, $\delta(x^\#) < \delta_{\max}$, we obtain a lower bound on the GV-MR curve by iterating over $y \in \langle y(x^\#), 1\rangle$. Similar to Theorem 1, we define
$$\rho_{\mathrm{MR}}(x) \triangleq 2\log\Lambda(x; C) + \delta(x)\log y(x) - \log\Lambda(x, y(x); D), \qquad (15)$$
and
$$\rho_{\mathrm{LB}}(y) \triangleq 2\log\Lambda(x^\#; C) + \delta(y)\log y - \log\Lambda(x^\#, y; D). \qquad (16)$$
Finally, we state the following analogue of Theorem 1.
Theorem 3.
We define $\delta_{\max}$ and $x^\#$ as before. For $x \in \langle 1, x^\#\rangle$, we set
$$p(x) \triangleq x\,\Lambda'(x; C)/\Lambda(x; C), \qquad y(x) \triangleq \text{the solution of } G_2(x, y) = 0, \qquad \delta(x) \triangleq y(x)\,\Lambda_y(x, y(x); D)/\Lambda(x, y(x); D).$$
If $y(x^\#) < 1$, then for $y \in \langle y(x^\#), 1\rangle$, we set
$$\delta(y) \triangleq y\,\Lambda_y(x^\#, y; D)/\Lambda(x^\#, y; D).$$
Then, the corresponding GV-MR curve is given by
$$\{(\delta(x), \rho_{\mathrm{MR}}(x)) : x \in \langle 1, x^\#\rangle\} \cup \{(\delta(y), \rho_{\mathrm{LB}}(y)) : y \in \langle y(x^\#), 1\rangle\} \cup \{(\delta, 0) : \delta \ge \delta_{\max}\}, \qquad (17)$$
where $\rho_{\mathrm{MR}}$ and $\rho_{\mathrm{LB}}$ are defined in (15) and (16), respectively.
Example 3.
We continue our example and evaluate the GV-MR bound for the $(3,2)$-SWCC constrained system. In this case, the matrices of interest are
$$C_G(z) = \begin{pmatrix} z & 1 & 0\\ 0 & 0 & z\\ z & 0 & 0 \end{pmatrix} \quad\text{and}\quad D_{G\times G}(x, y) = \begin{pmatrix} x^2 & 2xy & 0 & 1 & 0 & 0\\ 0 & 0 & x^2 & 0 & xy & 0\\ x^2 & xy & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & x^2\\ 0 & 0 & x^2 & 0 & 0 & 0\\ x^2 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
Here, we observe that $\Delta(x)$ is a monotone decreasing function and, so, we set $x^\# = 0.01$ and $\delta_{\max} = \lim_{x\to 0}\Delta(x) \approx 0.426$. If we apply Procedure 2 to compute $R_{\mathrm{MR}}(\delta)$ for 100 points in $[0, \delta_{\max}]$, we require 437 Newton–Raphson iterations and 85,500 power iterations. In contrast, we use Theorem 3 to compute $(\delta(x), \rho_{\mathrm{MR}}(x))$ for 100 values of $x$ in the interval $\langle 1, x^\#\rangle$. This requires 323 Newton–Raphson iterations and involves 22,296 power iterations. The resulting GV-MR curve is given in Figure 1a.
Remark 2.
Strictly speaking, the GV-MR curve described by (17) may not be equal to the curve defined by the optimization problem (15). Nevertheless, the curve provides a lower bound for the optimal asymptotic code rates and we conjecture that the GV-MR curve described by (17) is a lower bound for the curve defined by the optimization problem (15).

5. Single-State Graph Presentation

In this section, we focus on graph presentations that have exactly one state. Here, we allow these single-state graph presentations to contain parallel edges, and their labels may be binary strings of length greater than one. For these constrained systems, the procedures to evaluate the GV bound and its MR improvement can be greatly simplified. This is because the matrices $B_{G\times G}(y)$, $C_G(z)$, and $D_{G\times G}(x, y)$ all have dimensions one by one. Therefore, determining their respective dominant eigenvalues is straightforward and does not require the power iteration method. The results in this section follow directly from the previous sections, and our objective is to provide explicit formulas whenever possible.
Formally, let $S$ be the constrained system with graph presentation $G = (V, E, L)$ such that $|V| = 1$ and $L : E \to \Sigma^s$ with $s \ge 1$. (Existing methods that determine the GV bound for constrained systems with $|V| \ge 1$ assume that the edge labels are single letters, i.e., $s = 1$. In other words, the previous methods developed in [12,14] do not apply.)
We further define $\alpha_t \triangleq \#\{(\boldsymbol{x}, \boldsymbol{y}) \in L(E)^2 : d_H(\boldsymbol{x}, \boldsymbol{y}) = t\}$ for $0 \le t \le s$. Then, the corresponding adjacency and reduced distance matrices are as follows:
$$A_G = |E| \quad\text{and}\quad B_{G\times G}(y) = \sum_{t \ge 0} \alpha_t y^t.$$
Then, we compute the capacity using its definition as $\mathrm{Cap}(S) = (\log|E|)/s$.
To compute $T(\delta)$, we consider the following extension of the optimization problem (3) for the case $s \ge 1$:
$$T(\delta) = \frac{1}{s}\inf\{-\delta s\log y + \log\Lambda(y; B) : 0 \le y \le 1\} = \frac{1}{s}\inf\Big\{-\delta s\log y + \log\sum_{t\ge 0}\alpha_t y^t : 0 \le y \le 1\Big\}. \qquad (18)$$
As before, following the convexity of the objective function in (18), we have that the optimal $y$ is the zero (in the interval $[0, 1]$) of the function
$$F(y) \triangleq \sum_{t\ge 0}(t - \delta s)\,\alpha_t\,y^t. \qquad (19)$$
So, for fixed values of δ , we can use the Newton–Raphson procedure to compute the root y of (19), and, hence, evaluate R GV ( δ ) . It should be noted that the power iteration method is not required in this case.
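Since $F(y)$ in (19) is a polynomial, the fixed-$\delta$ computation collapses to ordinary root-finding. The sketch below uses numpy.roots on the $(3,2)$-SECC data treated in Example 4 below, where $(\alpha_0, \alpha_1, \alpha_2) = (4, 6, 6)$ and $s = 3$.

```python
import numpy as np

alpha, s, delta = [4, 6, 6], 3, 1 / 3
# Coefficients of F(y) = sum_t (t - delta*s) * alpha_t * y^t,
# listed highest degree first, as numpy.roots expects.
coeffs = [(t - delta * s) * a for t, a in enumerate(alpha)][::-1]
y_star = next(r.real for r in np.roots(coeffs)
              if abs(r.imag) < 1e-12 and 0 <= r.real <= 1)
print(y_star)   # sqrt(2/3) ~ 0.8165
```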
On the other hand, to plot the GV curve, we have the following corollary of Theorem 1.
Corollary 2.
Let $G$ be the single-state graph presentation for a constrained system $S$. Then, the corresponding GV curve is given by
$$\mathrm{GV}(S) \triangleq \{(\delta, R_{\mathrm{GV}}(\delta)) : \delta \in [0, 1]\} = \{(\delta(y), \rho(y)) : y \in [0, 1]\} \cup \{(\delta, 0) : \delta \ge \delta_{\max}\}, \qquad (20)$$
where
$$\delta_{\max} = \frac{\sum_{t\ge 0} t\,\alpha_t}{s\,|E|^2}, \qquad \delta(y) = \frac{\sum_{t\ge 0} t\,\alpha_t y^t}{s\sum_{t\ge 0}\alpha_t y^t}, \qquad \rho(y) = \frac{1}{s}\left(\log\frac{|E|^2}{\sum_{t\ge 0}\alpha_t y^t} + \frac{\sum_{t\ge 0} t\,\alpha_t y^t}{\sum_{t\ge 0}\alpha_t y^t}\log y\right).$$
We illustrate this evaluation procedure via an example of the class of subblock energy-constrained codes (SECCs). Formally, we fix a subblock length $L$ and energy constraint $w$. A binary word $\boldsymbol{x}$ of length $mL$ is said to satisfy the $(L, w)$-subblock energy constraint if, when we partition $\boldsymbol{x}$ into $m$ subblocks of length $L$, the number of ones in every subblock is at least $w$. We refer to the collection of words that meet this constraint as an $(L, w)$-SECC constrained system. The class of SECCs was introduced by Tandon et al. for the application of simultaneous energy and information transfer [7]. Later, in [21], a GV-type bound was introduced (see Proposition 12 in [21] and also (28)), and we make comparisons with the GV bound (20) in the following example.
Example 4.
Let $L = 3$ and $w = 2$, and consider the $(3, 2)$-SECC constrained system. It is straightforward to observe that the graph presentation is as follows, with a single state. Here, $s = L = 3$.
[Figure: single-state graph presentation of the $(3,2)$-SECC constrained system; the four parallel edges are labeled by the length-3 words of weight at least two.]
Then, the corresponding adjacency and reduced distance matrices are as follows:
$$A_G = 4, \qquad B_{G\times G}(y) = 4 + 6y + 6y^2.$$
First, we determine the GV bound at $\delta = 1/3$. We observe that $F(y) = -4 + 6y^2$ and, so, the optimal point $y$ for (18) is $\sqrt{2/3}$ (the unique zero of $F(y)$ in the interval $[0, 1]$). Hence, we have that $T(1/3) \approx 1.327$. On the other hand, the capacity of the $(3, 2)$-SECC constrained system is $\mathrm{Cap}(S) = 2/3$. Therefore, the GV bound is given by $R_{\mathrm{GV}}(1/3) = 0.006$.
In contrast, the GV-type lower bound given by Proposition 12 in [21] is zero for $\delta > 0.174$. Hence, the evaluation of the GV bound yields a significantly better lower bound. In fact, we can show that $R_{\mathrm{GV}}(\delta) > 0$ for all $\delta < \delta_{\max} = 3/8$.
To plot the GV curve, using the fact that $\delta_{\max} = 3/8$, we have that
$$\mathrm{GV}(S) = \left\{\left(\frac{y + 2y^2}{2 + 3y + 3y^2},\ \frac{1}{3}\left(\log\frac{8}{2 + 3y + 3y^2} + \frac{3y + 6y^2}{2 + 3y + 3y^2}\log y\right)\right) : y \in [0, 1]\right\} \cup \left\{(\delta, 0) : \delta \ge \frac{3}{8}\right\}.$$
We plot the curve in Section 6.
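For completeness, the closed-form curve above is easy to trace numerically; the following sketch sweeps $y$ and emits $(\delta(y), \rho(y))$ points, with no eigensolver or root-finding involved.

```python
import numpy as np

def gv_point(y):
    """One point of the (3,2)-SECC GV curve displayed above."""
    num = 4 + 6 * y + 6 * y**2        # sum_t alpha_t * y^t
    der = 6 * y + 12 * y**2           # sum_t t * alpha_t * y^t
    delta = der / (3 * num)
    rho = (np.log2(16 / num) + (der / num) * np.log2(y)) / 3
    return delta, rho

curve = [gv_point(y) for y in np.linspace(1e-3, 1.0, 100)]
# Endpoints: delta ~ 0 with rho ~ 2/3 = Cap(S); delta = 3/8 with rho = 0.
```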
From this example, we see that our methods yield better lower bounds on the asymptotic coding rates for a specific pair $(L, w)$. It remains open to determine how much improvement can be achieved for general pairs of $L$ and $w$.
Next, we evaluate the GV-MR bound. To this end, we consider some proper subset $P \subsetneq E$ and define
$$\alpha_t \triangleq \#\{(\boldsymbol{x}, \boldsymbol{y}) \in L(E)^2 : d_H(\boldsymbol{x}, \boldsymbol{y}) = t,\ \boldsymbol{x}, \boldsymbol{y} \in P\},$$
$$\beta_t \triangleq \#\{(\boldsymbol{x}, \boldsymbol{y}) \in L(E)^2 : d_H(\boldsymbol{x}, \boldsymbol{y}) = t,\ (\boldsymbol{x} \in P, \boldsymbol{y} \notin P) \text{ or } (\boldsymbol{x} \notin P, \boldsymbol{y} \in P)\},$$
$$\gamma_t \triangleq \#\{(\boldsymbol{x}, \boldsymbol{y}) \in L(E)^2 : d_H(\boldsymbol{x}, \boldsymbol{y}) = t,\ \boldsymbol{x}, \boldsymbol{y} \notin P\}.$$
Then, we consider the following matrices:
$$C_G(z) = |E| - |P| + |P|z \quad\text{and}\quad D_{G\times G}(x, y) = \sum_{t\ge 0}(\alpha_t x^2 + \beta_t x + \gamma_t)\,y^t.$$
Setting $p$ to be the normalized frequency of edges in $P$, we obtain $S(p)$ by solving the optimization problem (10).
Specifically, we have that
$$S(p) = \frac{1}{s}\big(H(p) + p\log|P| + (1 - p)\log(|E| - |P|)\big),$$
and this value is achieved when
$$z = \frac{p\,(|E| - |P|)}{(1 - p)\,|P|}.$$
To compute $T(p, \delta)$, we consider the following extension of the optimization problem (12) for the case $s \ge 1$:
$$T(p, \delta) = \frac{1}{s}\inf\Big\{-2p\log x - \delta s\log y + \log\Lambda(x, y; D) : x \ge 0,\ 0 \le y \le 1\Big\} = \frac{1}{s}\inf\Big\{-2p\log x - \delta s\log y + \log\sum_{t\ge 0}(\alpha_t x^2 + \beta_t x + \gamma_t)y^t : x \ge 0,\ 0 \le y \le 1\Big\}. \qquad (23)$$
As before, following the convexity of the objective function in (23), we have that the optimal $x$ and $y$ are the zeroes of the functions
$$G_2(x, y) \triangleq 2(1 - p)\Big(\sum_{t\ge 0}\alpha_t y^t\Big)x^2 + (1 - 2p)\Big(\sum_{t\ge 0}\beta_t y^t\Big)x - 2p\sum_{t\ge 0}\gamma_t y^t, \qquad G_3(x, y) \triangleq \sum_{t\ge 0}(t - \delta s)(\alpha_t x^2 + \beta_t x + \gamma_t)\,y^t. \qquad (24)$$
So, for fixed values of $p$ and $\delta$, we can use the Newton–Raphson procedure to compute the roots $x$ and $y$ of (24) and, hence, evaluate the bound $2S(p) - T(p, \delta)$. It should be noted that the power iteration method is not required in this case. We find $x^\#$ as defined in Section 4 and set
$$\rho_{\mathrm{MR}}(x) \triangleq \frac{1}{s}\Big(2\log(|E| - |P| + |P|x) - \log\sum_{t\ge 0}(\alpha_t x^2 + \beta_t x + \gamma_t)\,y(x)^t\Big) + \delta(x)\log y(x). \qquad (25)$$
Furthermore, if y ( x # ) < 1 , we set
$$\rho_{\mathrm{LB}}(y) \triangleq \frac{1}{s}\Big(2\log(|E| - |P| + |P|x^\#) - \log\sum_{t\ge 0}\big(\alpha_t (x^\#)^2 + \beta_t x^\# + \gamma_t\big)\,y^t\Big) + \delta(y)\log y. \qquad (26)$$
Next, to plot the GV-MR curve, we have the following corollary of Theorem 3.
Corollary 3.
Let $G$ be the single-state graph presentation for a constrained system $S$. For $x \in \langle 1, x^\#\rangle$, we set
$$p(x) = \frac{|P|\,x}{(|E| - |P|) + |P|\,x}, \qquad \delta(x) = \frac{\sum_{t\ge 1} t\,(\alpha_t x^2 + \beta_t x + \gamma_t)\,y(x)^t}{s\sum_{t\ge 0}(\alpha_t x^2 + \beta_t x + \gamma_t)\,y(x)^t},$$
where $y(x)$ is the smallest root of the equation
$$2(|E| - |P|)\Big(\sum_{t\ge 0}\alpha_t y^t\Big)x + \big(|E| - |P| - |P|x\big)\sum_{t\ge 0}\beta_t y^t - 2|P|\sum_{t\ge 0}\gamma_t y^t = 0.$$
If $y(x^\#) < 1$, then for $y \in \langle y(x^\#), 1\rangle$, we set
$$\delta(y) = \frac{\sum_{t\ge 1} t\,\big(\alpha_t (x^\#)^2 + \beta_t x^\# + \gamma_t\big)\,y^t}{s\sum_{t\ge 0}\big(\alpha_t (x^\#)^2 + \beta_t x^\# + \gamma_t\big)\,y^t}.$$
Then, the corresponding GV-MR curve is given by
$$\{(\delta(x), \rho_{\mathrm{MR}}(x)) : x \in \langle 1, x^\#\rangle\} \cup \{(\delta(y), \rho_{\mathrm{LB}}(y)) : y \in \langle y(x^\#), 1\rangle\} \cup \{(\delta, 0) : \delta \ge \delta_{\max}\},$$
where $\rho_{\mathrm{MR}}$ and $\rho_{\mathrm{LB}}$ are defined in (25) and (26), respectively.
Example 5.
We continue our example and evaluate the GV-MR bound for the ( 3 , 2 ) -SECC constrained system. We have the following single-state graph presentation:
[Figure: single-state graph presentation of the $(3,2)$-SECC constrained system, with the weight-two labels forming the subset $P$.]
Then, the matrices of interest are:
$$C_G(z) = 1 + 3z, \qquad D_{G\times G}(x, y) = (3 + 6y^2)x^2 + 6xy + 1.$$
Since $C_G(z)$ and $D_{G\times G}(x, y)$ are both $1 \times 1$ matrices, we have $\Lambda(z; C) = 1 + 3z$ and $\Lambda(x, y; D) = (3 + 6y^2)x^2 + 6xy + 1$. Then, $G_1(z) = 3z - p(1 + 3z)$, $G_2(x, y) = 3(1 + 2y^2)(1 - p)x^2 + 3xy(1 - 2p) - p$, and $G_3(x, y) = 4x^2y^2 - 3\delta(1 + 2y^2)x^2 + 2xy(1 - 3\delta) - \delta$. Now, we apply Theorem 2 and express $p$, $y$, and $\delta$ in terms of $x$, where $x \in \langle 1, x^\#\rangle$ with $x^\# = \infty$:
$$p = \frac{3x}{1 + 3x}, \qquad y = \frac{x - 1}{2x}, \qquad \delta = \frac{2x(x - 1)}{9x^2 - 1}.$$
Now, we observe that $y(x^\#) = 1/2$. Since $y$ can still increase to one, once we reach the boundary $p = 1$, we apply the bound with $p = 1$ and $x = z = x^\#$. Hence, at the boundary, we solve the following problem:
$$2S(1) = \tfrac{2}{3}\log 3,$$
$$T(1, \delta) = \tfrac{1}{3}\inf\big\{-2\log x - 3\delta\log y + \log\big(3(1 + 2y^2)x^2 + 6xy + 1\big) : 1/2 \le y \le 1,\ x = x^\#\big\} = \tfrac{1}{3}\inf\big\{-3\delta\log y + \log 3 + \log(1 + 2y^2) : 1/2 \le y \le 1\big\},$$
$$R_{\mathrm{MR}}(\delta) = 2S(1) - T(1, \delta).$$
By setting $F(y) = -3\delta(1 + 2y^2) + 4y^2 = 0$, we get $\delta = 4y^2/\big(3(1 + 2y^2)\big)$, where $y \in [1/2, 1]$, and we plot the respective curve.

6. Numerical Plots

In this section, we apply our numerical procedures to compute the GV and the GV-MR bounds for some specific constrained systems. In particular, we consider the $(L, w)$-SWCC constrained systems defined in Section 3, the ubiquitous $(d, k)$-runlength limited systems (see, for example, p. 3 in [11]), and the $(L, w)$-subblock energy-constrained codes recently introduced in [7]. In addition to the GV and GV-MR curves, we also plot a simple lower bound. For each $\delta \in \langle 0, 1/2\rangle$, any ball of radius $\delta n$ has size at most $2^{nH(\delta)}$, where $H$ is the binary entropy function. So, for any constrained system $S$, we have that $T(\delta) \le \tilde{T}(\delta) \triangleq \mathrm{Cap}(S) + H(\delta)$. Therefore, we have that
$$R(\delta; S) \ge \mathrm{Cap}(S) - H(\delta). \qquad (28)$$
From the plots in Figure 1, Figure 2 and Figure 3, it is also clear that the computations of (7) and (17) yield a significantly better lower bound.

6.1. ( L , w ) -Sliding Window Constrained Codes

We fix $L$ and $w$. We recall from Section 3 that a binary word satisfies the $(L, w)$-sliding window weight constraint if the number of ones in every $L$ consecutive bits is at least $w$, and the $(L, w)$-SWCC constrained system refers to the collection of words that meet this constraint. From [8,9], we have a simple graph presentation that uses only $\binom{L}{w}$ states. To validate our methods, we choose $(L, w) \in \{(3, 2), (10, 7)\}$, and the corresponding graph presentations have 3 and 120 states, respectively. Applying the plotting procedures described in Theorems 1 and 3, we obtain Figure 1.

6.2. ( d , k ) -Runlength Limited Codes

Next, we revisit the ubiquitous runlength constraint. We fix $d$ and $k$. We say that a binary word satisfies the $(d, k)$-RLL constraint if each run of zeroes in the word has a length of at least $d$ and at most $k$. Here, we allow the first and last runs of zeroes to have a length of less than $d$. We refer to the collection of words that meet this constraint as a $(d, k)$-RLL constrained system. It is well known that a $(d, k)$-RLL constrained system has a graph presentation with $k + 1$ states (see, for example, [11]). Here, we choose $(d, k) \in \{(1, 3), (3, 7)\}$ to validate our methods and apply Theorems 1 and 3 to obtain Figure 2. For $(d, k) = (3, 7)$, we corroborate our results with those derived in [15]. Specifically, Winick and Yang determined the GV bound (1) for the $(3, 7)$-RLL constraint and remarked that the “evaluation of the (GV-MR) bound required considerable computation” for “a small improvement”. In Table 3, we verify this statement.

6.3. ( L , w ) -Subblock Energy-Constrained Codes

We fix $L$ and $w$. We recall from Section 5 that a binary word satisfies the $(L, w)$-subblock energy constraint if each subblock of length $L$ has a weight of at least $w$, and the $(L, w)$-SECC constrained system refers to the collection of words that meet this constraint. Then, the corresponding graph presentation has a single state with $\sum_{i=w}^{L}\binom{L}{i}$ parallel edges, where each edge is labeled by a word of length $L$ and weight at least $w$. We apply the methods in Section 5 to determine the GV and GV-MR bounds.
For the GV bound, we provide the explicit formula for α t and proceed as in Example 4.
$$\alpha_t = \binom{L}{t}\left(|E| - \sum_{j=1}^{t}\sum_{k=0}^{\lceil j/2\rceil - 1}\binom{L - t}{w - j + k}\binom{t}{k}\right).$$
Similarly, for GV-MR bound, we provide the explicit formula for α t , β t , and γ t and proceed as in Example 5.
$$\alpha_t = \begin{cases}\displaystyle\binom{L}{w}\binom{L - w}{t/2}\binom{w}{t/2} & \text{if } t \text{ is even},\\ 0 & \text{otherwise};\end{cases}$$
$$\beta_t = 2\binom{L}{w}\sum_{j=0}^{\lfloor t/2\rfloor}\binom{L - w}{t - j}\binom{w}{j} - 2\alpha_t;$$
$$\gamma_t = \binom{L}{t}\left(|E| - \sum_{j=1}^{t}\sum_{k=0}^{\lceil j/2\rceil - 1}\binom{L - t}{w - j + k}\binom{t}{k}\right) - \alpha_t - \beta_t.$$
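As a sanity check on these counting formulas, the helper below evaluates them directly (with binomials of out-of-range arguments treated as zero) and, for $(L, w) = (3, 2)$ with $P$ the set of weight-$w$ labels, recovers the coefficients of $D_{G\times G}(x, y)$ from Example 5.

```python
from math import comb

def binom(n, k):
    """comb with out-of-range k treated as zero."""
    return comb(n, k) if 0 <= k <= n else 0

def secc_counts(L, w):
    """alpha_t, beta_t, gamma_t for the (L,w)-SECC presentation,
    following the formulas above."""
    E = sum(binom(L, i) for i in range(w, L + 1))   # number of edges
    alpha, beta, gamma = [], [], []
    for t in range(L + 1):
        a = binom(L, w) * binom(L - w, t // 2) * binom(w, t // 2) \
            if t % 2 == 0 else 0
        b = 2 * binom(L, w) * sum(binom(L - w, t - j) * binom(w, j)
                                  for j in range(t // 2 + 1)) - 2 * a
        total = binom(L, t) * (E - sum(binom(L - t, w - j + k) * binom(t, k)
                                       for j in range(1, t + 1)
                                       for k in range((j + 1) // 2)))
        alpha.append(a); beta.append(b); gamma.append(total - a - b)
    return alpha, beta, gamma

print(secc_counts(3, 2))  # ([3, 0, 6, 0], [0, 6, 0, 0], [1, 0, 0, 0])
```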
In Figure 3, we plot the GV bound and GV-MR bounds. We remark that the simple lower bound (28) corresponds to Proposition 12 in [21].

Author Contributions

Conceptualization, K.G. and H.M.K.; software, K.G.; writing—original draft preparation, K.G.; writing—review and editing, K.G. and H.M.K. All authors have read and agreed to the published version of the manuscript.

Funding

The work of Han Mao Kiah was supported by the Ministry of Education, Singapore, under MOE AcRF Tier 2 Grant MOE-T2EP20121-0007 and MOE AcRF Tier 1 Grant RG19/23.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would also like to thank the assistant editor for her skillful handling and the anonymous reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. Power Iteration Method for Derivatives of Dominant Eigenvalues

Throughout this appendix, we assume that $A$ is a diagonalizable matrix with dominant eigenvalue $\lambda_1$ whose corresponding eigenspace has dimension one. Let $e_1$ be the unit eigenvector with positive entries in this space. Then, the power iteration method is a well-known numerical procedure that finds the dominant eigenvalue $\lambda_1$ and the corresponding eigenvector $e_1$ efficiently.
Now, in the preceding sections, the entries in the matrix A are given functions in either one or two variables and, thus, the dominant eigenvalue λ 1 is a function in the same variables. Moreover, the numerical procedures in these sections require us to compute the higher order (partial) derivatives of this dominant eigenvalue function λ 1 . To the best of our knowledge, we are unaware of any algorithms or numerical procedures that estimate the values of these derivatives. Hence, in this appendix, we modify the power iteration method to compute these estimates.
Formally, let $A$ be an irreducible nonnegative diagonalizable square matrix with dominant eigenvalue $\lambda_1$ and corresponding unit eigenvector $e_1$. Since $A$ is diagonalizable, $A$ has $n$ eigenvectors $e_1, e_2, \ldots, e_n$ that form an orthonormal basis for $\mathbb{R}^n$. Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the corresponding eigenvalues and, so, we have that
$$A e_i = \lambda_i e_i \quad\text{for all } i = 1, 2, \ldots, n. \qquad (A1)$$
Since $A$ is irreducible, the dominant eigenspace has dimension one and, also, the dominant eigenvalue is real and positive. Therefore, we can assume that $\lambda_1 > |\lambda_2| \ge \cdots \ge |\lambda_n|$.
We first assume that the entries of $A$ are functions in the variable $z$. Hence, $\lambda_i$ and the entries of $e_i$ are functions in $z$ too. Power Iteration I then evaluates both $\lambda_1$ and $\lambda_1'$ for some fixed value of $z$, while Power Iteration II additionally evaluates the second order derivative $\lambda_1''$.
The case where the entries of $A$ are functions in two variables $x$ and $y$ is discussed at the end of the appendix. Here, Power Iteration III evaluates higher order partial derivatives of $\lambda_1$ for certain fixed values of $x$ and $y$. For ease of exposition, we provide detailed proofs of correctness for Power Iteration I; the proofs can be extended to Power Iteration II and Power Iteration III.
We continue our discussion where the entries of $A$ are univariate functions in $z$. We differentiate each entry of $A$ with respect to $z$ to obtain the matrix $A'$. Furthermore, for all $1 \le i \le n$, we differentiate each entry of the eigenvectors $e_i$ and the eigenvalue $\lambda_i$ to obtain $e_i'$ and $\lambda_i'$, respectively. Specifically, it follows from (A1) that
$$A' e_i + A e_i' = \lambda_i' e_i + \lambda_i e_i' \quad\text{for all } i = 1, 2, \ldots, n. \qquad (A2)$$
Then, the following procedure computes both $\lambda_1$ and $\lambda_1'$.
Power Iteration I.
Input: Irreducible nonnegative diagonalizable matrix A
Output: Estimates of $\lambda_1$ and $\lambda_1'$
(1)
Initialize $q^{(0)}$ such that all its entries are strictly positive, and set $k = 1$.
  • Fix a tolerance value $\epsilon$.
  • While $\|q^{(k)} - q^{(k-1)}\| > \epsilon$, set
$$\lambda^{(k)} = \|A q^{(k-1)}\|, \qquad q^{(k)} = \frac{A q^{(k-1)}}{\lambda^{(k)}},$$
$$\mu^{(k)} = \|A' q^{(k-1)} + A r^{(k-1)}\| - \lambda^{(k)}\,\|r^{(k-1)}\|, \qquad r^{(k)} = \frac{A r^{(k-1)} + A' q^{(k-1)} - \mu^{(k)} q^{(k-1)}}{\lambda^{(k)}},$$
    and increment $k$ by one.
(2)
Set $\lambda_1 \leftarrow \lambda^{(k)}$ and $\lambda_1' \leftarrow \mu^{(k)}$.
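A compact NumPy rendition of Power Iteration I is sketched below. One simplifying assumption: the normalization here is the entrywise sum rather than the Euclidean norm (a valid choice because all iterates stay positive and the sum is linear), which makes the update for $\mu^{(k)}$ exact at the fixed point; the convergence behaviour follows the same telescoping argument as in Theorem A1.

```python
import numpy as np

def power_iteration_I(A, dA, eps=1e-12, max_iter=100_000):
    """Estimate lambda_1 and lambda_1' for an irreducible nonnegative
    diagonalizable A, where dA is the entrywise derivative of A.
    Assumption: normalization by the entrywise sum (all iterates > 0)."""
    n = A.shape[0]
    q = np.full(n, 1.0 / n)   # q^(0): strictly positive, sums to one
    r = np.zeros(n)           # r^(0): running estimate of e_1'
    lam = mu = 0.0
    for _ in range(max_iter):
        Aq = A @ q
        lam = Aq.sum()                                # lambda^(k)
        mu = (dA @ q + A @ r).sum() - lam * r.sum()   # mu^(k)
        r = (A @ r + dA @ q - mu * q) / lam           # r^(k)
        q_new = Aq / lam                              # q^(k)
        if np.abs(q_new - q).sum() <= eps:
            q = q_new
            break
        q = q_new
    return lam, mu

# Check against Example 1: B(y) at y = 0.3 and its entrywise derivative;
# expect approximately (1.659, 0.694) for (Lambda, Lambda').
y = 0.3
B = np.array([[1, 2*y, 0, 1, 0, 0], [0, 0, 1, 0, y, 0],
              [1, y, 0, 0, 0, 0],   [0, 0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0, 0],   [1, 0, 0, 0, 0, 0]], dtype=float)
dB = np.zeros((6, 6)); dB[0, 1] = 2; dB[1, 4] = 1; dB[2, 1] = 1
print(power_iteration_I(B, dB))
```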
Theorem A1.
If $A$ is an irreducible nonnegative diagonalizable matrix and $q^{(0)}$ has positive components with unit norm, then, as $k \to \infty$, we have
$$\lambda^{(k)} \to \lambda_1, \qquad q^{(k)} \to e_1, \qquad \mu^{(k)} \to \lambda_1'.$$
Here, $q^{(k)} \to e_1$ means that $\|q^{(k)} - e_1\| \to 0$ as $k \to \infty$.
Before we present the proof of Theorem A1, we remark that the usual power iteration method computes only $\lambda^{(k)}$ and $q^{(k)}$. Then, it is well-known (see, for example, [22]) that $\lambda^{(k)}$ and $q^{(k)}$ tend to $\lambda_1$ and $e_1$, respectively.
Now, since $\{e_i\}$ spans $\mathbb{R}^n$, we can write $q^{(0)} = \sum_{i=1}^n \alpha_i e_i$ for any initial vector $q^{(0)}$. The next technical lemma provides closed formulas for $\lambda^{(k)}$, $q^{(k)}$, $\mu^{(k)}$, and $r^{(k)}$ in terms of $\lambda_i$, $e_i$, and $\alpha_i$.
Lemma A1.
Let $q^{(0)} = \sum_{i=1}^n \alpha_i e_i$. Then,
$$q^{(k)} = \frac{\sum_{i=1}^n \alpha_i \lambda_i^k e_i}{\big\|\sum_{i=1}^n \alpha_i \lambda_i^k e_i\big\|}, \qquad (A3)$$
$$\lambda^{(k)} = \frac{\big\|\sum_{i=1}^n \alpha_i \lambda_i^k e_i\big\|}{\big\|\sum_{i=1}^n \alpha_i \lambda_i^{k-1} e_i\big\|}, \qquad (A4)$$
$$r^{(k)} = \frac{\sum_{i=1}^n \Big[(\alpha_i' e_i + \alpha_i e_i')\lambda_i^k + \Big(k\lambda_i' - \sum_{j=1}^{k}\mu^{(j)}\Big)\alpha_i \lambda_i^{k-1} e_i\Big]}{\big\|\sum_{i=1}^n \alpha_i \lambda_i^k e_i\big\|}, \qquad (A5)$$
$$\mu^{(k)} = \frac{\Big\|\sum_{i=1}^n \Big[(\alpha_i' e_i + \alpha_i e_i')\lambda_i^{k-1}(\lambda_i - \lambda^{(k)}) + \alpha_i \lambda_i^{k-1}\lambda_i' e_i + \Big((k-1)\lambda_i' - \sum_{j=1}^{k-1}\mu^{(j)}\Big)\alpha_i \lambda_i^{k-2}(\lambda_i - \lambda^{(k)}) e_i\Big]\Big\|}{\big\|\sum_{i=1}^n \alpha_i \lambda_i^{k-1} e_i\big\|}. \qquad (A6)$$
Proof.
Since $q^{(k)}$ is defined recursively as $q^{(k)} = \frac{A q^{(k-1)}}{\lambda^{(k)}} = \frac{A q^{(k-1)}}{\|A q^{(k-1)}\|}$, we have that
$$q^{(k)} = \frac{A^k q^{(0)}}{\|A^k q^{(0)}\|}.$$
Then, it follows from Equation (A1) that
$$A^k q^{(0)} = A^k \sum_{i=1}^n \alpha_i e_i = \sum_{i=1}^n \alpha_i (A^k e_i) = \sum_{i=1}^n \alpha_i \lambda_i^k e_i,$$
and, so, we obtain (A3). Similarly, from (A1), we have that
$$\lambda^{(k)} = \|A q^{(k-1)}\| = \frac{\|A^k q^{(0)}\|}{\|A^{k-1} q^{(0)}\|} = \frac{\|\sum_{i=1}^n \alpha_i \lambda_i^k e_i\|}{\|\sum_{i=1}^n \alpha_i \lambda_i^{k-1} e_i\|},$$
as required for (A4).
Next, we note that $r^{(0)} = \sum_{i=1}^n \alpha_i' e_i + \sum_{i=1}^n \alpha_i e_i'$. Then, using the recursive definition of $r^{(k)}$, we have
$$r^{(k)} = \frac{A^k r^{(0)} + \sum_{j=0}^{k-1} A^j A' A^{k-j-1} q^{(0)} - \big(\sum_{j=1}^{k}\mu^{(j)}\big) A^{k-1} q^{(0)}}{\|A^k q^{(0)}\|}.$$
Then, from (A1), we have
$$A^k r^{(0)} = A^k \Big(\sum_{i=1}^n \alpha_i' e_i + \sum_{i=1}^n \alpha_i e_i'\Big) = \sum_{i=1}^n \alpha_i' \lambda_i^k e_i + \sum_{i=1}^n \alpha_i (A^k e_i'),$$
and, from (A2),
$$A' \sum_{i=1}^n \alpha_i \lambda_i^{k-j-1} e_i = \sum_{i=1}^n \alpha_i \lambda_i^{k-j-1}(A' e_i) = \sum_{i=1}^n \alpha_i \lambda_i^{k-j-1}(\lambda_i' e_i + \lambda_i e_i' - A e_i').$$
Therefore, using (A1) again,
$$\sum_{j=0}^{k-1} A^j A' \sum_{i=1}^n \alpha_i \lambda_i^{k-j-1} e_i = \sum_{j=0}^{k-1} A^j \sum_{i=1}^n \alpha_i \lambda_i^{k-j-1}(\lambda_i' e_i + \lambda_i e_i' - A e_i') = k\sum_{i=1}^n \alpha_i \lambda_i^{k-1}\lambda_i' e_i + \sum_{i=1}^n \alpha_i \lambda_i^k e_i' - \sum_{i=1}^n \alpha_i (A^k e_i').$$
Therefore, we obtain (A5).
Finally, we recall that $\mu^{(k)}$ is defined as
$$\mu^{(k)} = \|A' q^{(k-1)} + A r^{(k-1)}\| - \lambda^{(k)}\,\|r^{(k-1)}\|.$$
Then, by replacing $r^{(k-1)}$ and $q^{(k-1)}$ using (A5) and (A3), respectively, and then using Equation (A2), we obtain (A6). □
    Finally, we are ready to demonstrate the correctness of Power Iteration I.
Proof of Theorem A1.
Since $A$ is an irreducible nonnegative diagonalizable matrix, $\lambda_1$ is real and positive and there exists $0 < \epsilon < 1$ such that $|\lambda_i|/\lambda_1 < \epsilon$ for all $i = 2, 3, \ldots, n$ (see, for example, [11]). For purposes of brevity, we write
$$\Phi_k = \sum_{i=1}^n \alpha_i \lambda_i^k e_i$$
and, so, we can rewrite (A3) as
$$q^{(k)} = \frac{\Phi_k}{\|\Phi_k\|} = \frac{\lambda_1^k}{\|\Phi_k\|}\cdot\frac{\Phi_k}{\lambda_1^k} = \frac{\lambda_1^k}{\|\Phi_k\|}\Big(\alpha_1 e_1 + \sum_{i=2}^n \alpha_i \frac{\lambda_i^k}{\lambda_1^k} e_i\Big).$$
Now, since $|\lambda_i|^k/\lambda_1^k \le \epsilon^k$ for all $i = 2, \ldots, n$, we have that
$$\Big\|\frac{\Phi_k}{\lambda_1^k} - \alpha_1 e_1\Big\| \le C_1 \epsilon^k \quad\text{for some constant } C_1. \qquad (A11)$$
Then, using the triangle inequality, we have that as $k \to \infty$, $\|\Phi_k\|/\lambda_1^k - \alpha_1 \to 0$ and, thus, $\lambda_1^k/\|\Phi_k\| \to 1/\alpha_1$. Therefore, $q^{(k)} \to e_1$, as required.
It should be noted that since $\lambda_1^k/\|\Phi_k\|$ tends to a finite limit, $\lambda_1^k/\|\Phi_k\|$ is bounded above by some constant. In other words, we have that
$$\frac{\lambda_1^k}{\|\Phi_k\|} \le C_2 \quad\text{for some constant } C_2. \qquad (A12)$$
Next, we show the following inequality:
$$|\lambda^{(k)} - \lambda_1| \le C_3 \epsilon^{k-1} \quad\text{for some constant } C_3. \qquad (A13)$$
Using (A4), we have that
$$\big|\|\Phi_k\| - \lambda_1\|\Phi_{k-1}\|\big| \le \|\Phi_k - \lambda_1\Phi_{k-1}\| = \lambda_1^k\,\Big\|\sum_{i=2}^n \alpha_i\Big(\frac{\lambda_i^k}{\lambda_1^k} - \frac{\lambda_i^{k-1}}{\lambda_1^{k-1}}\Big)e_i\Big\|.$$
Now, observe that $\big|\frac{\lambda_i^k}{\lambda_1^k} - \frac{\lambda_i^{k-1}}{\lambda_1^{k-1}}\big| \le 2\epsilon^{k-1}$ for $i = 2, \ldots, n$. Since $\lambda_1^{k-1}/\|\Phi_{k-1}\| \le C_2$, we obtain (A13) after applying the triangle inequality.
Again, to reduce clutter, we introduce the following abbreviations:
$$D_k = \sum_{i=1}^n (\alpha_i' e_i + \alpha_i e_i')\lambda_i^{k-1}(\lambda_i - \lambda^{(k)}), \qquad E_k = \sum_{i=1}^n \alpha_i \lambda_i^{k-1}\lambda_i' e_i, \qquad F_k = \sum_{i=1}^n \Big((k-1)\lambda_i' - \sum_{j=1}^{k-1}\mu^{(j)}\Big)\alpha_i \lambda_i^{k-2}(\lambda_i - \lambda^{(k)}) e_i.$$
Thus, we can rewrite (A6) as
$$\mu^{(k)} = \frac{\|D_k + E_k + F_k\|}{\|\Phi_{k-1}\|} \le \lambda_1' + \frac{\|D_k\|}{\|\Phi_{k-1}\|} + \frac{\|E_k - \lambda_1'\Phi_{k-1}\|}{\|\Phi_{k-1}\|} + \frac{\|F_k\|}{\|\Phi_{k-1}\|}.$$
Next, we bound each of the summands on the right-hand side. Specifically, we show the following inequalities:
$$\frac{\|D_k\|}{\|\Phi_{k-1}\|} + \frac{\|E_k - \lambda_1'\Phi_{k-1}\|}{\|\Phi_{k-1}\|} \le C_4 \epsilon^{k-1} \quad\text{for some constant } C_4, \qquad (A14)$$
$$\frac{\|F_k\|}{\|\Phi_{k-1}\|} \le C_5 (k-1)\epsilon^{k-1} + C_5 \sum_{j=1}^{k-1} \mu^{(j)} \epsilon^{k-1} \quad\text{for some constant } C_5. \qquad (A15)$$
To demonstrate (A14), we consider
$$\frac{\|D_k\|}{\lambda_1^{k-1}} = \Big\|\sum_{i=1}^n (\alpha_i' e_i + \alpha_i e_i')\frac{\lambda_i^{k-1}}{\lambda_1^{k-1}}(\lambda_i - \lambda^{(k)})\Big\| \le \|\alpha_1' e_1 + \alpha_1 e_1'\|\,|\lambda_1 - \lambda^{(k)}| + \epsilon^{k-1}\sum_{i=2}^n \|\alpha_i' e_i + \alpha_i e_i'\|\,|\lambda_i - \lambda^{(k)}|.$$
We use (A13) to bound the first summand by some constant multiple of $\epsilon^{k-1}$. On the other hand, we have $|\lambda_i - \lambda^{(k)}| \le |\lambda_i - \lambda_1| + |\lambda_1 - \lambda^{(k)}| \le \max\{|\lambda_i - \lambda_1| : 2 \le i \le n\} + C_3\epsilon^{k-1}$ for $2 \le i \le n$. In other words, the second summand is also bounded by some constant multiple of $\epsilon^{k-1}$. Next, we consider
$$\frac{\|E_k - \lambda_1'\Phi_{k-1}\|}{\lambda_1^{k-1}} = \Big\|\sum_{i=1}^n \alpha_i \frac{\lambda_i^{k-1}}{\lambda_1^{k-1}}(\lambda_i' - \lambda_1') e_i\Big\| \le \epsilon^{k-1}\sum_{i=2}^n |\alpha_i (\lambda_i' - \lambda_1')|,$$
and, so, $\|E_k - \lambda_1'\Phi_{k-1}\|/\lambda_1^{k-1}$ is also bounded by a multiple of $\epsilon^{k-1}$. Therefore, since $\lambda_1^{k-1}/\|\Phi_{k-1}\| \le C_2$, we have (A14). Using similar methods, we can establish (A15).
Next, we apply (A14) and then recursively apply (A15) until the right-hand side is free of the $\mu^{(i)}$'s. Then, it follows that
$$\mu^{(k)} \le \big(\lambda_1' + C_4\epsilon^{k-1} + C_5(k-1)\epsilon^{k-1}\big)\prod_{j=2}^{k-1}\big(1 + C_5\epsilon^{k-j}\big) + C_5\epsilon^{k-1}\sum_{i=1}^{k-1}\big(\lambda_1' + C_4\epsilon^{k-i-1} + C_5(k-i-1)\epsilon^{k-i-1}\big)\prod_{j=2}^{i}\big(1 + C_5\epsilon^{k-j}\big). \qquad (A16)$$
Furthermore, since $\prod_{j=2}^{i}(1 + C_5\epsilon^{k-j}) \le \prod_{j=2}^{k-1}(1 + C_5\epsilon^{k-j})$ for $i \le k - 1$, we can rewrite (A16) as
$$\mu^{(k)} \le \prod_{j=2}^{k-1}\big(1 + C_5\epsilon^{k-j}\big)\Bigg(\lambda_1' + C_4\epsilon^{k-1} + C_5(k-1)\epsilon^{k-1} + C_5\epsilon^{k-1}\sum_{i=1}^{k-1}\big(\lambda_1' + C_4\epsilon^{k-i-1} + C_5(k-i-1)\epsilon^{k-i-1}\big)\Bigg).$$
Next, it follows from standard calculus that $\prod_{j=2}^{k-1}(1 + C_5\epsilon^{k-j}) < e^{C_5/(1-\epsilon)}$. Furthermore, since $\epsilon < 1$, we have $\sum_{j=0}^{k-2}\epsilon^j < \frac{1}{1-\epsilon}$ and $\sum_{j=0}^{k-2} j\epsilon^j < \frac{1}{(1-\epsilon)^2}$. Putting everything together, we have
$$\mu^{(k)} \le e^{\frac{C_5}{1-\epsilon}}\Bigg(\lambda_1' + C_4\epsilon^{k-1} + C_5(k-1)\epsilon^{k-1} + C_5\epsilon^{k-1}\Big((k-1)\lambda_1' + \frac{C_4}{1-\epsilon} + \frac{C_5}{(1-\epsilon)^2}\Big)\Bigg).$$
As $k \to \infty$, since $\epsilon < 1$, we have $\epsilon^k \to 0$ and $k\epsilon^k \to 0$. Therefore, $\limsup_{k\to\infty}\mu^{(k)} \le \lambda_1'$. Using similar methods, we have that $\liminf_{k\to\infty}\mu^{(k)} \ge \lambda_1'$ and, so, $\lim_{k\to\infty}\mu^{(k)} = \lambda_1'$, as required. □
Next, we modify Power Iteration I so as to compute the higher-order derivatives. We omit a detailed proof, as it is similar to the proof of Theorem A1; a numerical sketch of the procedure is given after the listing below.
Power Iteration II.
Input: Irreducible nonnegative diagonalizable matrix $A$.
Output: Estimates of $\lambda_1$, $\lambda_1'$, and $\lambda_1''$.
(1) Initialize $q^{(0)}$ such that all its entries are strictly positive.
• Fix a tolerance value $\epsilon$.
• While $\|q^{(k)} - q^{(k-1)}\| > \epsilon$, set
$$\begin{aligned} \lambda^{(k)} &= \|A q^{(k-1)}\|, & q^{(k)} &= \frac{A q^{(k-1)}}{\lambda^{(k)}}, \\ \mu^{(k)} &= \|A' q^{(k-1)} + A r^{(k-1)} - \lambda^{(k)} r^{(k-1)}\|, & r^{(k)} &= \frac{A r^{(k-1)} + A' q^{(k-1)} - \mu^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \\ \nu^{(k)} &= \|A'' q^{(k-1)} + 2 A' r^{(k-1)} + A s^{(k-1)} - \lambda^{(k)} s^{(k-1)} - 2 \mu^{(k)} r^{(k-1)}\|, & s^{(k)} &= \frac{A'' q^{(k-1)} + 2 A' r^{(k-1)} + A s^{(k-1)} - 2 \mu^{(k)} r^{(k-1)} - \nu^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \end{aligned}$$
and increment $k$ by one.
(2) Set $\lambda_1 \leftarrow \lambda^{(k)}$, $\lambda_1' \leftarrow \mu^{(k)}$, and $\lambda_1'' \leftarrow \nu^{(k)}$.
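To make the procedure concrete, the following is a minimal NumPy sketch of Power Iteration II. It assumes that $A$ and its entrywise derivatives $A'$ and $A''$ are supplied as numerical matrices evaluated at the point of interest, and that the auxiliary vectors $r^{(0)}$ and $s^{(0)}$ start at zero; the function and variable names are ours, not part of the paper.

```python
import numpy as np

def power_iteration_ii(A, dA, d2A, tol=1e-12, max_iter=10_000):
    """Estimate lambda_1 and its first two derivatives (Power Iteration II).

    A, dA, d2A: square arrays holding A, A', A'' at the point of interest.
    Returns (lam, mu, nu), the estimates of lambda_1, lambda_1', lambda_1''.
    """
    n = A.shape[0]
    q = np.full(n, 1.0 / np.sqrt(n))  # positive entries, unit norm
    r = np.zeros(n)                   # auxiliary vector carrying first-derivative data
    s = np.zeros(n)                   # auxiliary vector carrying second-derivative data
    lam = mu = nu = 0.0
    for _ in range(max_iter):
        q_prev, r_prev, s_prev = q, r, s
        lam = np.linalg.norm(A @ q_prev)
        q = (A @ q_prev) / lam
        mu = np.linalg.norm(dA @ q_prev + A @ r_prev - lam * r_prev)
        r = (A @ r_prev + dA @ q_prev - mu * q_prev) / lam
        nu = np.linalg.norm(d2A @ q_prev + 2 * dA @ r_prev + A @ s_prev
                            - lam * s_prev - 2 * mu * r_prev)
        s = (d2A @ q_prev + 2 * dA @ r_prev + A @ s_prev
             - 2 * mu * r_prev - nu * q_prev) / lam
        if np.linalg.norm(q - q_prev) <= tol:
            break
    return lam, mu, nu

# Sanity check: A(t) = [[1, t], [t, 1]] has lambda_1(t) = 1 + t, so at t = 0.5
# the output should approach (1.5, 1.0, 0.0).
print(power_iteration_ii(np.array([[1.0, 0.5], [0.5, 1.0]]),
                         np.array([[0.0, 1.0], [1.0, 0.0]]),
                         np.zeros((2, 2))))
```

The stopping rule on $\|q^{(k)} - q^{(k-1)}\|$ follows the listing above; in practice, one may additionally monitor the changes in $\mu^{(k)}$ and $\nu^{(k)}$ before terminating.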
Theorem A2.
If $A$ is an irreducible nonnegative diagonalizable matrix and $q^{(0)}$ has positive components with unit norm, then, as $k \to \infty$, we have
$$\lambda^{(k)} \to \lambda_1, \qquad q^{(k)} \to e_1, \qquad \mu^{(k)} \to \lambda_1', \qquad \nu^{(k)} \to \lambda_1''.$$
    Finally, we end this appendix with a power iteration method that computes the partial derivatives when the elements of the given matrix are bivariate functions.
Power Iteration III.
Input: Irreducible nonnegative diagonalizable matrix $A$.
Output: Estimates of $\lambda_1$, $(\lambda_1)_x$, $(\lambda_1)_y$, $(\lambda_1)_{xx}$, $(\lambda_1)_{yy}$, and $(\lambda_1)_{xy}$.
(1) Initialize $q^{(0)}$ such that all its entries are strictly positive.
• Fix a tolerance value $\epsilon$.
• While $\|q^{(k)} - q^{(k-1)}\| > \epsilon$, set
$$\begin{aligned} \lambda^{(k)} &= \|A q^{(k-1)}\|, \qquad q^{(k)} = \frac{A q^{(k-1)}}{\lambda^{(k)}}, \\ \lambda_x^{(k)} &= \|A_x q^{(k-1)} + A q_x^{(k-1)} - \lambda^{(k)} q_x^{(k-1)}\|, \qquad q_x^{(k)} = \frac{A_x q^{(k-1)} + A q_x^{(k-1)} - \lambda_x^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \\ \lambda_y^{(k)} &= \|A_y q^{(k-1)} + A q_y^{(k-1)} - \lambda^{(k)} q_y^{(k-1)}\|, \qquad q_y^{(k)} = \frac{A_y q^{(k-1)} + A q_y^{(k-1)} - \lambda_y^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \\ \lambda_{xx}^{(k)} &= \|A_{xx} q^{(k-1)} + 2 A_x q_x^{(k-1)} + A q_{xx}^{(k-1)} - \lambda^{(k)} q_{xx}^{(k-1)} - 2 \lambda_x^{(k)} q_x^{(k-1)}\|, \\ q_{xx}^{(k)} &= \frac{A_{xx} q^{(k-1)} + 2 A_x q_x^{(k-1)} + A q_{xx}^{(k-1)} - 2 \lambda_x^{(k)} q_x^{(k-1)} - \lambda_{xx}^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \\ \lambda_{yy}^{(k)} &= \|A_{yy} q^{(k-1)} + 2 A_y q_y^{(k-1)} + A q_{yy}^{(k-1)} - \lambda^{(k)} q_{yy}^{(k-1)} - 2 \lambda_y^{(k)} q_y^{(k-1)}\|, \\ q_{yy}^{(k)} &= \frac{A_{yy} q^{(k-1)} + 2 A_y q_y^{(k-1)} + A q_{yy}^{(k-1)} - 2 \lambda_y^{(k)} q_y^{(k-1)} - \lambda_{yy}^{(k)} q^{(k-1)}}{\lambda^{(k)}}, \\ \lambda_{xy}^{(k)} &= \|A_{xy} q^{(k-1)} + A_x q_y^{(k-1)} + A_y q_x^{(k-1)} + A q_{xy}^{(k-1)} - \lambda^{(k)} q_{xy}^{(k-1)} - \lambda_x^{(k)} q_y^{(k-1)} - \lambda_y^{(k)} q_x^{(k-1)}\|, \\ q_{xy}^{(k)} &= \frac{A_{xy} q^{(k-1)} + A_x q_y^{(k-1)} + A_y q_x^{(k-1)} + A q_{xy}^{(k-1)} - \lambda_{xy}^{(k)} q^{(k-1)} - \lambda_x^{(k)} q_y^{(k-1)} - \lambda_y^{(k)} q_x^{(k-1)}}{\lambda^{(k)}}, \end{aligned}$$
and increment $k$ by one.
(2) Set $\lambda_1 \leftarrow \lambda^{(k)}$, $(\lambda_1)_x \leftarrow \lambda_x^{(k)}$, $(\lambda_1)_y \leftarrow \lambda_y^{(k)}$, $(\lambda_1)_{xx} \leftarrow \lambda_{xx}^{(k)}$, $(\lambda_1)_{yy} \leftarrow \lambda_{yy}^{(k)}$, and $(\lambda_1)_{xy} \leftarrow \lambda_{xy}^{(k)}$.
Theorem A3.
If $A$ is an irreducible nonnegative diagonalizable matrix and $q^{(0)}$ has positive components with unit norm, then, as $k \to \infty$, we have $\lambda_{xx}^{(k)} \to (\lambda_1)_{xx}$, $\lambda_{yy}^{(k)} \to (\lambda_1)_{yy}$, and $\lambda_{xy}^{(k)} \to (\lambda_1)_{xy}$.
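As with Power Iteration II, the bivariate procedure translates directly into code. The following is a minimal NumPy sketch of Power Iteration III, under the assumption that $A$ and its partial derivatives $A_x, A_y, A_{xx}, A_{yy}, A_{xy}$ are supplied as numerical matrices evaluated at the point $(x, y)$ of interest, with all auxiliary vectors started at zero; again, the names are ours.

```python
import numpy as np

def power_iteration_iii(A, Ax, Ay, Axx, Ayy, Axy, tol=1e-12, max_iter=10_000):
    """Estimate lambda_1 and its partial derivatives (Power Iteration III).

    All arguments are square arrays: A and its partials evaluated at (x, y).
    Returns (lam, (lx, ly), (lxx, lyy, lxy)).
    """
    n = A.shape[0]
    q = np.full(n, 1.0 / np.sqrt(n))                        # positive entries, unit norm
    qx, qy = np.zeros(n), np.zeros(n)                       # first-order auxiliary vectors
    qxx, qyy, qxy = np.zeros(n), np.zeros(n), np.zeros(n)   # second-order auxiliary vectors
    for _ in range(max_iter):
        qp, qxp, qyp, qxxp, qyyp, qxyp = q, qx, qy, qxx, qyy, qxy
        lam = np.linalg.norm(A @ qp)
        q = (A @ qp) / lam
        lx = np.linalg.norm(Ax @ qp + A @ qxp - lam * qxp)
        qx = (Ax @ qp + A @ qxp - lx * qp) / lam
        ly = np.linalg.norm(Ay @ qp + A @ qyp - lam * qyp)
        qy = (Ay @ qp + A @ qyp - ly * qp) / lam
        lxx = np.linalg.norm(Axx @ qp + 2 * Ax @ qxp + A @ qxxp
                             - lam * qxxp - 2 * lx * qxp)
        qxx = (Axx @ qp + 2 * Ax @ qxp + A @ qxxp
               - 2 * lx * qxp - lxx * qp) / lam
        lyy = np.linalg.norm(Ayy @ qp + 2 * Ay @ qyp + A @ qyyp
                             - lam * qyyp - 2 * ly * qyp)
        qyy = (Ayy @ qp + 2 * Ay @ qyp + A @ qyyp
               - 2 * ly * qyp - lyy * qp) / lam
        lxy = np.linalg.norm(Axy @ qp + Ax @ qyp + Ay @ qxp + A @ qxyp
                             - lam * qxyp - lx * qyp - ly * qxp)
        qxy = (Axy @ qp + Ax @ qyp + Ay @ qxp + A @ qxyp
               - lxy * qp - lx * qyp - ly * qxp) / lam
        if np.linalg.norm(q - qp) <= tol:
            break
    return lam, (lx, ly), (lxx, lyy, lxy)
```

When the curves of the main text are plotted, such estimates can stand in for the partial derivatives of the dominant eigenvalue whenever closed-form expressions are unavailable.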

    References

1. Yazdi, S.M.H.T.; Kiah, H.M.; Garcia-Ruiz, E.; Ma, J.; Zhao, H.; Milenkovic, O. DNA-Based Storage: Trends and Methods. IEEE Trans. Mol. Biol. Multi-Scale Commun. 2015, 1, 230–248.
2. Immink, K.A.S.; Cai, K. Efficient balanced and maximum homopolymer-run restricted block codes for DNA-based data storage. IEEE Commun. Lett. 2019, 23, 1676–1679.
3. Nguyen, T.T.; Cai, K.; Immink, K.A.S.; Kiah, H.M. Capacity-Approaching Constrained Codes with Error Correction for DNA-Based Data Storage. IEEE Trans. Inf. Theory 2021, 67, 5602–5613.
4. Kovačević, M.; Vukobratović, D. Asymptotic Behavior and Typicality Properties of Runlength-Limited Sequences. IEEE Trans. Inf. Theory 2022, 68, 1638–1650.
5. Popovski, P.; Fouladgar, A.M.; Simeone, O. Interactive joint transfer of energy and information. IEEE Trans. Commun. 2013, 61, 2086–2097.
6. Fouladgar, A.M.; Simeone, O.; Erkip, E. Constrained codes for joint energy and information transfer. IEEE Trans. Commun. 2014, 62, 2121–2131.
7. Tandon, A.; Motani, M.; Varshney, L.R. Subblock-constrained codes for real-time simultaneous energy and information transfer. IEEE Trans. Inf. Theory 2016, 62, 4212–4227.
8. Immink, K.A.S.; Cai, K. Block Codes for Energy-Harvesting Sliding-Window Constrained Channels. IEEE Commun. Lett. 2020, 24, 2383–2386.
9. Immink, K.A.S.; Cai, K. Properties and Constructions of Energy-Harvesting Sliding-Window Constrained Codes. IEEE Commun. Lett. 2020, 24, 1890–1893.
10. Wu, T.Y.; Tandon, A.; Varshney, L.R.; Motani, M. Skip-sliding window codes. IEEE Trans. Commun. 2021, 69, 2824–2836.
11. Marcus, B.H.; Roth, R.M.; Siegel, P.H. An Introduction to Coding for Constrained Systems. Lecture Notes, 2001. Available online: https://ronny.cswp.cs.technion.ac.il/wp-content/uploads/sites/54/2016/05/chapters1-9.pdf (accessed on 1 October 2020).
12. Kolesnik, V.D.; Krachkovsky, V.Y. Generating functions and lower bounds on rates for limited error-correcting codes. IEEE Trans. Inf. Theory 1991, 37, 778–788.
13. Gu, J.; Fuja, T. A generalized Gilbert-Varshamov bound derived via analysis of a code-search algorithm. IEEE Trans. Inf. Theory 1993, 39, 1089–1093.
14. Marcus, B.H.; Roth, R.M. Improved Gilbert-Varshamov bound for constrained systems. IEEE Trans. Inf. Theory 1992, 38, 1213–1221.
15. Winick, K.A.; Yang, S.H. Upper bounds on the size of error-correcting runlength-limited codes. Eur. Trans. Telecommun. 1996, 37, 273–283.
16. Goyal, K.; Kiah, H.M. Evaluating the Gilbert-Varshamov Bound for Constrained Systems. In Proceedings of the 2022 IEEE International Symposium on Information Theory (ISIT), Espoo, Finland, 26 June–1 July 2022; pp. 1348–1353.
17. Tolhuizen, L.M.G.M. The generalized Gilbert-Varshamov bound is implied by Turán's theorem. IEEE Trans. Inf. Theory 1997, 43, 1605–1606.
18. Luenberger, D.G. Introduction to Linear and Nonlinear Programming; Addison-Wesley: Reading, MA, USA, 1973.
19. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
20. Kashyap, N.; Roth, R.M.; Siegel, P.H. The Capacity of Count-Constrained ICI-Free Systems. In Proceedings of the 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 7–12 July 2019; pp. 1592–1596.
21. Tandon, A.; Kiah, H.M.; Motani, M. Bounds on the size and asymptotic rate of subblock-constrained codes. IEEE Trans. Inf. Theory 2018, 64, 6604–6619.
22. Stewart, G.W. Introduction to Matrix Computations; Computer Science and Applied Mathematics; Academic Press: New York, NY, USA, 1973.
Figure 1. Lower bounds for optimal asymptotic code rates $R(\delta; \mathcal{S})$ for the class of sliding-window constrained codes.
Figure 2. Lower bounds for optimal asymptotic code rates $R(\delta; \mathcal{S})$ for the class of runlength-limited codes.
Figure 3. Lower bounds for optimal asymptotic code rates $R(\delta; \mathcal{S})$, where $\mathcal{S}$ is the class of $(3, 2)$-SECCs (subblock energy-constrained codes).
Table 1. We set the $((v_i, v_j), (v_k, v_\ell))$ entry of the matrix $B_{G \times G}(y)$ according to the subgraph induced by the states $v_i, v_j, v_k$, and $v_\ell$. Here, $\bar{\sigma}$ denotes the complement of $\sigma$.
[Table 1 body: the entry of $B_{G \times G}(y)$ takes the value 0, 1, $y$, or $2y$, according to which of the pictured subgraphs is induced by $\{v_i, v_j, v_k, v_\ell\}$; the subgraph diagrams are omitted here.]
Table 2. We set the $((v_i, v_j), (v_k, v_\ell))$ entry of the matrix $D_{G \times G}(x, y)$ according to the subgraph induced by the states $v_i, v_j, v_k$, and $v_\ell$.
[Table 2 body: the entry of $D_{G \times G}(x, y)$ takes the value 0, 1, $x^2$, $xy$, or $2xy$, according to which of the pictured subgraphs is induced by $\{v_i, v_j, v_k, v_\ell\}$; the subgraph diagrams are omitted here.]
Table 3. Comparison of the GV-MR bound with the lower bound of [15] for $(3, 7)$-RLL constrained systems.
δ | GV-MR Bound (Equation (15)) | GV Bound [15] (see Equation (1))
0 | 0.406 | 0.406
0.05 | 0.255 | 0.225
0.1 | 0.163 | 0.163
0.15 | 0.095 | 0.094
0.2 | 0.048 | 0.044
0.25 | 0.018 | 0.012