Open Access, CC BY 4.0 license. Published online by De Gruyter, February 28, 2024

Adaptive Multi-level Algorithm for a Class of Nonlinear Problems

  • Dongho Kim, Eun-Jae Park and Boyoon Seo

Abstract

In this article, we propose an adaptive mesh-refining strategy based on the multi-level algorithm and derive a unified a posteriori error estimate for a class of nonlinear problems. We show that the multi-level algorithm on adaptive meshes retains the quadratic convergence of Newton's method across mesh levels, which is numerically validated. Our framework facilitates the use of the general theory established for a linear problem associated with given nonlinear equations. In particular, existing a posteriori error estimates for the linear problem can be utilized to find reliable error estimators for the given nonlinear problem. As applications of our theory, we consider the pseudostress-velocity formulation of the Navier–Stokes equations and the standard Galerkin formulation of semilinear elliptic equations. Reliable and efficient a posteriori error estimators for both approximations are derived. Finally, several numerical examples are presented to test the performance of the algorithm and the validity of the theory developed.

MSC 2020: 65N15; 65N30; 65N50; 76D05

1 Introduction

Adaptive mesh-refining plays an important practical role in the accurate and efficient computation of numerical solutions of partial differential equations. A posteriori error estimates are an essential ingredient of adaptivity. They are composed of known quantities such as the given data of the problem and the computed numerical solutions. Various a posteriori error estimates for finite element methods have been studied for second-order elliptic problems in [1, 2, 8, 17, 19], for the Stokes problem in [9, 26], and for the Navier–Stokes problem in [12, 14]. In particular, nonlinear elliptic equations were considered in [19], where a priori and a posteriori error estimates measured in the L^m(Ω)-norm were derived in the framework of Brezzi–Rappaz–Raviart (BRR) (cf. [4]). The framework was designed for the approximation of branches of nonsingular solutions for a class of nonlinear problems. The theories developed by BRR were applied to study mixed approximations of scalar elliptic problems with gradient nonlinearities in [16, 19]; their application to the velocity-pressure formulation of the Navier–Stokes equations can be found in [13]. The authors of [12, 14] constructed a posteriori error estimators for the two-level method for the Navier–Stokes equations.

In this paper, we derive a unified a posteriori error estimate for the approximate solutions of the multi-level algorithm developed in [22] for a class of nonlinear problems in the framework of BRR. The multi-level algorithm, which extends the two-grid idea, is an efficient numerical method designed to resolve nonlinearities. The two-grid algorithm (see [20, 27, 28]) is applied first, and in the subsequent process uniform mesh refinement is exploited to deliver the desired accuracy and convergence; as the mesh is refined, the solution on a given mesh serves as an accurate starting iterate on the next mesh level. The multi-level algorithm has quadratic convergence in the sense of equation (2.5) in Theorem 2.5. We apply this strategy to compute numerical solutions efficiently by adaptive mesh refinement. We emphasize that the multi-level algorithm on adaptive meshes retains the quadratic convergence of Newton's method across mesh levels, which is validated by the numerical results presented in the last section. The BRR framework facilitates the use of the general theory established for a linear problem associated with given nonlinear equations. In particular, existing a posteriori error estimates for the linear problem can be utilized to find reliable error estimators for the nonlinear problem. We mention that this approach is quite general and can be applied to any numerical scheme for nonlinear problems satisfying the BRR framework.

As applications, we consider the pseudostress-velocity formulation of the stationary Navier–Stokes equations (NSE) and the standard Galerkin formulation of semilinear elliptic equations (SEE). Finite element methods for the velocity-pressure and the stress-velocity-pressure formulations of the NSE have been analyzed for decades (cf. [3, 13]). The stress-velocity-pressure formulation consists of the original physical equations for incompressible Newtonian flows, induced by the conservation of momentum and the constitutive law. Mixed finite element methods for the pseudostress-velocity formulation have recently been established for the Stokes [6, 10] and the Navier–Stokes equations [5, 21]. The pseudostress-velocity formulation has several advantages. Unlike the stress tensor, the pseudostress is nonsymmetric and can be approximated by the RT_k elements, the Raviart–Thomas elements of index k ≥ 0. Moreover, approximations of the pressure, the velocity gradient, and even the stress can be obtained algebraically from the approximate pseudostress.

For adaptivity, we derive reliable and efficient residual-based a posteriori error estimators for our multi-level algorithm for the NSE. We mention existing works on the adaptive two-level algorithm. The authors in [12, 14] derived reliable a posteriori error estimators for the two-level methods for the velocity-pressure formulation of the NSE. In comparison to usual adaptive finite element methods [10, 18, 19, 23, 25], the estimators for the two-level algorithm contain an additional term which measures the difference between coarse- and fine-grid solutions. If this additional term measures the difference between consecutive levels and is of higher order (and negligible), then the link between levels does not affect the computation, which is computationally more advantageous. Indeed, this additional term is negligible in our multi-level setting, in contrast to [14]. See Remark 3.19 for further details. As the second application of the unified framework, we consider the standard Galerkin method for the SEE and derive reliable a posteriori error estimators. It turns out that the behavior of the additional term arising from the SEE is similar to that of the NSE. Note that our algorithm can be applied to other problems such as elliptic problems with gradient nonlinearities (cf. [16, 19, 20, 24]).

The remainder of this paper is organized as follows. In the next section, we introduce approximation of branches of nonsingular solutions based on the BRR theory and multi-level algorithm analyzed in [22]. We derive a unified a posteriori error estimate for the approximate solutions of the multi-level algorithm for a class of nonlinear problems in the framework of BRR. Section 3 and Section 4 are devoted to applying the algorithms to the NSE and the SEE, respectively, and constructing a posteriori error estimators of residual type. Section 5 presents our adaptive multi-level algorithm. In Section 6, numerical results are presented to show the performance of the algorithms and to validate the theory developed in this paper. Concluding remarks are given in the last section.

2 BRR Theory and Multi-level Algorithm

In this section, we start with the abstract approximation theory of Brezzi, Rappaz and Raviart for nonlinear problems developed in [4, 7, 13]. Then we introduce the multi-level algorithm analyzed in [22], which is a method to solve resulting nonlinear algebraic equations.

From here to Remark 2.6, we use the same setting as described in [22]. Let 𝑋, 𝒳 and 𝑌 be real Banach spaces with the norms ‖·‖_X, ‖·‖_𝒳 and ‖·‖_Y, respectively, and assume that X ⊂ 𝒳 with continuous imbedding. Let Λ ⊂ ℝ be a compact interval and denote by L(X; Y) the set of all linear and continuous operators from 𝑋 into 𝑌. For a given C^2-map G : Λ × X → Y, we consider the following nonlinear problem: find (ν, ϕ) ∈ Λ × X such that

(2.1) F ( ν , ϕ ) := ϕ + S G ( ν , ϕ ) = 0 ,

where S ∈ L(Y; X) is a linear operator independent of 𝜈. The set {(ν, ϕ(ν)) : ν ∈ Λ} is called a branch of solutions of (2.1) if F(ν, ϕ(ν)) = 0 and the map ν ↦ ϕ(ν) is continuous from Λ into 𝑋. If the Fréchet derivative D_ϕF(ν, ϕ(ν)) of 𝐹 with respect to 𝜙 is an isomorphism from 𝑋 onto 𝒳 for all ν ∈ Λ, then the branch of solutions {(ν, ϕ(ν)) : ν ∈ Λ} is called nonsingular. To approximate the solution of problem (2.1), we introduce an operator S_h ∈ L(Y; X) intended to approximate the linear operator 𝑆, where h > 0 is a real parameter which will tend to zero. The approximation of nonlinear problem (2.1) is to find ϕ_h ∈ X such that

(2.2) F_h(ν, ϕ_h) := ϕ_h + S_h G(ν, ϕ_h) = 0.

We assume the following hypotheses.

  (H1) 𝐺 is a C^2-mapping from Λ × X into 𝑌, and the second-order Fréchet derivatives of 𝐺 are bounded on all bounded subsets of Λ × X.

  (H2) There exists another Banach space Z ⊂ Y, with continuous imbedding, such that

    D_ϕG(ν, ϕ) ∈ L(X; Z) for all ν ∈ Λ and all ϕ ∈ X.

  (H3) For all y ∈ Y, lim_{h→0} ‖(S − S_h)y‖_X = 0.

  (H4) lim_{h→0} ‖S − S_h‖_{Z;X} = 0.

Under these hypotheses, the existence of a unique branch of nonsingular solutions of problem (2.2), its a priori error estimate and its convergence property are stated below. We skip the proof since it is very similar to that of [13, Theorem IV.3.3]. We introduce the following quantities:
μ(ν) := ‖D_ϕF(ν, ϕ(ν))^{-1}‖_{𝒳;X}, μ̄ := sup_{ν∈Λ} μ(ν),
(2.3) L(ψ^*(ν), ξ) := sup_{ν∈Λ, ψ∈B(ψ^*(ν),ξ)} ‖D²_ϕG(ν, ψ)‖_{X²;Y},
where B(ψ^*(ν), ξ) := {ψ ∈ X : ‖ψ^*(ν) − ψ‖_X ≤ ξ}.

Theorem 2.1

Assume that (H1)–(H4) hold and that {(ν, ϕ(ν)) : ν ∈ Λ} is a branch of nonsingular solutions of (2.1). Then there exist ξ̄ > 0 and h̄(ξ̄) > 0 such that, for all h ≤ h̄(ξ̄), there is a C^2-function ϕ_h : Λ → X satisfying that {(ν, ϕ_h(ν)) : ν ∈ Λ} is a branch of nonsingular solutions of (2.2), ϕ_h(ν) is the only solution of (2.2) in the ball B(ϕ(ν), ξ̄) for all ν ∈ Λ, and, for K_1 := 4μ(ν) and a constant C_S > 0, the following estimates hold:

‖ϕ(ν) − ϕ_h(ν)‖_X ≤ K_1 ‖F_h(ν, ϕ(ν))‖_X = K_1 ‖(S − S_h)G(ν, ϕ(ν))‖_X for all ν ∈ Λ,
‖D_ϕF_h(ν, ϕ_h(ν))^{-1}‖_{𝒳;X} ≤ 2 ‖D_ϕF_h(ν, ϕ(ν))^{-1}‖_{𝒳;X} ≤ 4 ‖D_ϕF(ν, ϕ(ν))^{-1}‖_{𝒳;X} ≤ K_1,
‖S_h‖_{Z;X} ≤ C_S.

Note that, since Λ is compact, the function μ(ν) is bounded above on Λ. Thus, if needed, we can take K_1 as a constant independent not only of ℎ but also of 𝜈, i.e., K_1 := 4μ̄. We refer to [22] for the proofs of Lemma 2.2, Theorem 2.5 and Theorem 2.7 below.

Lemma 2.2

Assume that (H1)–(H4) hold and that D_ϕF(ν, ϕ̃) for ϕ̃ ∈ X is an isomorphism from 𝑋 onto 𝒳. Then there exists ξ^* = ξ^*(ϕ̃) > 0 such that if ψ ∈ B(ϕ̃, ξ) ⊂ X for 0 < ξ ≤ ξ^*, then D_ϕF(ν, ψ) is an isomorphism from 𝑋 onto 𝒳, and there exists h^* = h^*(ϕ̃) > 0 such that if 0 < h ≤ h^*, then D_ϕF_h(ν, ϕ̃) is an isomorphism from 𝑋 onto 𝒳, with the respective bounds

(2.4) ‖D_ϕF(ν, ψ)^{-1}‖_{𝒳;X} ≤ 2 ‖D_ϕF(ν, ϕ̃)^{-1}‖_{𝒳;X},
‖D_ϕF_h(ν, ϕ̃)^{-1}‖_{𝒳;X} ≤ 2 ‖D_ϕF(ν, ϕ̃)^{-1}‖_{𝒳;X}.

Remark 2.3

When 𝐹 is replaced by F_h for ℎ small enough in Lemma 2.2, estimate (2.4) also holds. That is, if D_ϕF_h(ν, ϕ̃) for ϕ̃ ∈ X is an isomorphism from 𝑋 onto 𝒳, then there exists ξ^* = ξ^*(ϕ̃) > 0 such that, for ψ ∈ B(ϕ̃, ξ) ⊂ X with 0 < ξ ≤ ξ^*, D_ϕF_h(ν, ψ) is an isomorphism from 𝑋 onto 𝒳 with the bound

‖D_ϕF_h(ν, ψ)^{-1}‖_{𝒳;X} ≤ 2 ‖D_ϕF_h(ν, ϕ̃)^{-1}‖_{𝒳;X}.

We introduce the multi-level version of the two-grid algorithm. For j ≥ 0, let h_j ≤ h̄, with h̄ given in Theorem 2.1, be mesh sizes, let X_j := X_{h_j} be finite element spaces such that X_j ⊂ X, and let F_j(ν, ϕ) := ϕ + S_jG(ν, ϕ), where the operator S_j := S_{h_j} is intended to approximate the linear operator 𝑆.

Algorithm 1 (Multi-level algorithm)

  1. (Nonlinear solver) Solve nonlinear system on initial coarse mesh.

    Find ϕ_0(ν) ∈ X_0 such that F_0(ν, ϕ_0(ν)) = 0.

  2. (Linear solver) Update on each finer mesh level 𝑗 with one Newton iteration.

    For j = 1, 2, …, J, find ϕ_j(ν) ∈ X_j such that D_ϕF_j(ν, ϕ_{j−1}(ν))(ϕ_j(ν) − ϕ_{j−1}(ν)) = −F_j(ν, ϕ_{j−1}(ν)).
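To make the two steps of Algorithm 1 concrete, the following sketch applies them to a simple one-dimensional semilinear model problem −u″ + u³ = f with homogeneous Dirichlet data, discretized by second-order finite differences. This is only an illustration of the structure (a full Newton solve on the coarsest grid, then a single Newton step per refinement level); the finite-difference setting, the manufactured solution and all function names are our own assumptions, not part of the paper.

```python
import numpy as np

# Model problem (our own illustration): -u'' + u^3 = f on (0,1), u(0) = u(1) = 0.

def residual(u, h, f):
    # F(u)_i = -(u_{i+1} - 2 u_i + u_{i-1}) / h^2 + u_i^3 - f_i
    U = np.concatenate(([0.0], u, [0.0]))          # homogeneous Dirichlet data
    return -(U[2:] - 2.0*U[1:-1] + U[:-2]) / h**2 + u**3 - f

def jacobian(u, h):
    n = len(u)
    J = np.zeros((n, n))
    np.fill_diagonal(J, 2.0/h**2 + 3.0*u**2)       # derivative of F(u)_i w.r.t. u_i
    idx = np.arange(n - 1)
    J[idx, idx + 1] = J[idx + 1, idx] = -1.0/h**2
    return J

def newton_solve(n, f_func, iters=30):
    # Step 1: full Newton solve on the coarsest mesh.
    h = 1.0/(n + 1)
    x = np.linspace(h, 1.0 - h, n)
    u, f = np.zeros(n), f_func(x)
    for _ in range(iters):
        u -= np.linalg.solve(jacobian(u, h), residual(u, h, f))
    return x, u

def prolong(x_old, u_old, x_new):
    # Transfer the previous-level solution to the finer grid (linear interpolation).
    return np.interp(x_new, np.concatenate(([0.0], x_old, [1.0])),
                     np.concatenate(([0.0], u_old, [0.0])))

# Manufactured solution u(x) = sin(pi x), hence f = pi^2 sin(pi x) + sin^3(pi x).
exact = lambda x: np.sin(np.pi*x)
f_func = lambda x: np.pi**2*np.sin(np.pi*x) + np.sin(np.pi*x)**3

x, u = newton_solve(7, f_func)                     # coarsest mesh, h = 1/8
for j in range(1, 5):                              # Step 2: one Newton step per level
    n = 2**j * 8 - 1                               # uniformly refined grids
    h = 1.0/(n + 1)
    x_new = np.linspace(h, 1.0 - h, n)
    u0, f = prolong(x, u, x_new), f_func(x_new)
    u = u0 - np.linalg.solve(jacobian(u0, h), residual(u0, h, f))
    x = x_new
    err = np.max(np.abs(u - exact(x)))
    print(f"level {j}: n = {n}, max error = {err:.2e}")
```

On such a run, the error at each level stays dominated by the discretization error of the current grid, which is the behavior one expects when the linearization error is quadratically small in the sense of (2.5).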

Remark 2.4

The two-grid scheme in [27, 28] is based on passing information between finite element equations defined on two grids of different mesh sizes. In the first step, the nonlinear problem itself is solved in a coarse space, i.e., a finite-dimensional space on a coarse grid. In the second step, the nonlinear problem is linearized locally at the solution obtained in the coarse space, and the linearized problem is solved in a fine space. For better performance, this process can be iterated on a sequence of linearized problems with increasing dimensions. In the second step of the multi-level algorithm in [14], the nonlinear problem is linearized locally at the solution ϕ_0 obtained in the fixed coarse space of Step 1. By contrast, in our multi-level algorithm, the nonlinear problem is linearized locally at the solution obtained on the immediately preceding mesh. The advantages of our algorithm are discussed in detail in Remark 3.19.

Theorem 2.5 (A Priori Error Bound)

Assume that (H1)–(H4) hold. Let ϕ_{h_j}(ν) be nonsingular solutions of (2.2) on X_j, and let ϕ_j(ν) be the solutions obtained from Step 2 of Algorithm 1. Then there exist ξ > 0 with ξ ≤ ξ̄ given in Theorem 2.1 and h_0^* > 0 such that, for h_j ≤ h_0^*, ϕ_{h_j}(ν) ∈ B(ϕ(ν), ξ/2) and ϕ_j(ν) belongs to the ball B(ϕ_{h_j}(ν), ξ/2) for j ≥ 1. Moreover, for the positive constant K_2 := 4μ̄ C_S L(ϕ, ξ̄), independent of the mesh sizes h_j and 𝜈, we have the following quadratic relation:

(2.5) ‖ϕ_{h_j}(ν) − ϕ_j(ν)‖_X ≤ K_2 ‖ϕ_{h_j}(ν) − ϕ_{j−1}(ν)‖_X²,

and, for K_3 := max{K_1, 2K_1²K_2, 2K_2}, we have the a priori estimate

‖ϕ(ν) − ϕ_j(ν)‖_X ≤ K_3 (‖(S − S_j)G(ν, ϕ(ν))‖_X + ‖(S − S_j)G(ν, ϕ(ν))‖_X² + ‖ϕ(ν) − ϕ_{j−1}(ν)‖_X²).

Remark 2.6

It should be noted that the above algorithm can be applied even if 𝒯_j is not a uniform refinement of 𝒯_{j−1}, so adaptive mesh refinement can be exploited along with this algorithm. The multi-level algorithm retains the quadratic convergence in the sense of equation (2.5) even on adaptive meshes. This is confirmed by the numerical examples in Section 6.

Now we derive an a posteriori error bound for the solutions obtained from the multi-level algorithm. First, we recall the following theorem, presented and proved in [7, Theorem 2.1]. Let F : X → X be a C¹-mapping and let v ∈ X be such that DF(v) ∈ L(X; X) is an isomorphism. We introduce the notation

ϵ := ‖F(v)‖_X, γ := ‖DF(v)^{-1}‖_{X;X}, L(α) := sup_{w∈B(v,α)} ‖DF(v) − DF(w)‖_{X;X}.

Theorem 2.7

We assume that 2γL(2γϵ) ≤ 1. Then F(x) = 0 has a unique solution 𝑢 in the ball B(v, 2γϵ), and DF(u) ∈ L(X; X) is invertible with

‖DF(u)^{-1}‖_{X;X} ≤ 2γ.

Moreover, every w ∈ B(v, 2γϵ) satisfies

‖w − u‖_X ≤ 2γ ‖F(w)‖_X.

Theorem 2.8 (A Posteriori Error Bound)

Assume that (H1)–(H4) hold. Let ϕ(ν) be a nonsingular solution of (2.1) and let ϕ_j(ν) be the approximate solution produced by Algorithm 1. Then there exists 0 < h_1^* ≤ h_0^*, with h_0^* given in Theorem 2.5, such that, for all h_j ≤ h_1^*, we have ϕ_{h_j}(ν) ∈ B(ϕ(ν), ξ/2) and ϕ_j(ν) ∈ B(ϕ_{h_j}(ν), ξ/2) for 𝜉 given in Theorem 2.5, and we have a residual-type a posteriori error bound with K_1 := 4μ(ν):

(2.6) ‖ϕ(ν) − ϕ_j(ν)‖_X ≤ 2 ‖D_ϕF(ν, ϕ_j(ν))^{-1}‖_{𝒳;X} ‖F(ν, ϕ_j(ν))‖_X ≤ K_1 ‖F(ν, ϕ_j(ν))‖_X.

Proof

The proof follows by applying Theorem 2.7. First, notice that ϕ_j(ν) ∈ B(ϕ(ν), ξ) for h_j ≤ h_0^* given in Theorem 2.5 and that ϕ_j(ν) → ϕ(ν) strongly in 𝑋 as h_j → 0, thanks to Theorem 2.5 and (H3). By Lemma 2.2, D_ϕF(ν, ϕ_j(ν)) is an isomorphism from 𝑋 onto 𝒳 with

(2.7) ‖D_ϕF(ν, ϕ_j(ν))^{-1}‖_{𝒳;X} ≤ 2 ‖D_ϕF(ν, ϕ(ν))^{-1}‖_{𝒳;X} =: γ.

To apply Theorem 2.7, we set

ϵ_j := ‖F(ν, ϕ_j(ν))‖_X, γ_j := ‖D_ϕF(ν, ϕ_j(ν))^{-1}‖_{𝒳;X}, L_j(α) := sup_{w∈B(ϕ_j(ν),α)} ‖D_ϕF(ν, ϕ_j(ν)) − D_ϕF(ν, w)‖_{X;𝒳}.

From the mean value theorem and (H1), we have, for C_1 = ‖S‖_{Y;X} L(ϕ_j(ν), α) with L(·,·) defined in (2.3),

‖D_ϕF(ν, ϕ_j(ν)) − D_ϕF(ν, w)‖_{X;𝒳} ≤ ‖S‖_{Y;X} ‖D_ϕG(ν, ϕ_j(ν)) − D_ϕG(ν, w)‖_{X;Y} ≤ C_1 ‖ϕ_j(ν) − w‖_X,

which yields that there exist 0 < h_0 ≤ h_0^* and 0 < α < ξ such that, for h_j ≤ h_0,

(2.8) L_j(α) ≤ 1/(2γ) and ‖ϕ(ν) − ϕ_j(ν)‖_X < α.

Applying Theorem 2.5, the mean value theorem and (H1) to the identity

F(ν, ϕ_j(ν)) = F(ν, ϕ_j(ν)) − F(ν, ϕ(ν)) = ϕ_j(ν) − ϕ(ν) + S(G(ν, ϕ_j(ν)) − G(ν, ϕ(ν))),

we can find h_1^* ≤ h_0 such that, for all h_j ≤ h_1^*,

(2.9) 2γϵ_j ≤ α.

From (2.7)–(2.9), we have

2γ_j L_j(2γ_jϵ_j) ≤ 2γ L_j(2γϵ_j) ≤ 2γ L_j(α) ≤ 1.

Applying Theorem 2.7 with v = ϕ_j(ν) and h_j ≤ h_1^*, we conclude that there exists a unique ψ ∈ B(ϕ_j, 2γ_jϵ_j) such that

F(ψ) = 0 and ‖ψ − ϕ_j(ν)‖_X ≤ 2 ‖D_ϕF(ν, ϕ_j(ν))^{-1}‖_{𝒳;X} ‖F(ν, ϕ_j(ν))‖_X.

From the uniqueness of 𝜓 in B(ϕ_j, α) ⊂ B(ϕ_j, ξ) and the facts that F(ν, ϕ(ν)) = 0 and ϕ ∈ B(ϕ_j, ξ), we have ψ = ϕ(ν), and (2.6) is proved. ∎

3 Application 1: Navier–Stokes Equations

In this section, we recall the a priori error estimate and analyze the a posteriori error for the multi-level algorithm for the pseudostress-velocity formulation of the NSE. The a priori error was analyzed in [21]; the following details are given for the reader's convenience. First, we introduce some notation and function spaces. Let div and ∇ denote the divergence and gradient operators, respectively. For v = (v_1, …, v_d) ∈ ℝ^d and τ = (τ_ij)_{d×d} ∈ ℝ^{d×d}, let τ_i = (τ_{i1}, …, τ_{id}) denote the 𝑖th row of 𝜏 for i = 1, …, d, and define

div τ := (div τ_1, …, div τ_d), v·τ := (v·τ_1, …, v·τ_d), ∇v := (∇v_1, …, ∇v_d)^t = (∂v_i/∂x_j)_{d×d}, δ := the d × d identity matrix.

Given data 𝒇 and 𝒈, the stationary incompressible NSE for the unknown velocity 𝒖 and pressure 𝑝 read

(3.1) −νΔu + u·∇u + ∇p = f in Ω,
(3.2) div u = 0 in Ω,
(3.3) u = g on ∂Ω.

The compatibility conditions read

∫_{∂Ω} g·n ds = 0 and ∫_Ω p dx = 0,

where 𝒏 is the outward unit normal vector to the boundary.

Let A : ℝ^{d×d} → ℝ^{d×d} be the deviatoric operator

Aτ := τ − (1/d)(tr τ) δ.

The pseudostress tensor σ := ν∇u − pδ allows the pseudostress-velocity formulation of NSE (3.1)–(3.2):

(3.4) A(σ/ν) − ∇u = 0 in Ω,
(3.5) div σ − u·A(σ/ν) = −f in Ω.

Equation (3.4) is obtained from

tr(∇u) = div u = 0 and tr σ = −dp.

The compatibility condition ∫_Ω p dx = 0 implies

∫_Ω tr σ dx = 0.

Scaling the pseudostress 𝝈 and the data 𝒇 by σ/ν → σ and f/ν → f, (3.4)–(3.5) may be rewritten as

(3.6) Aσ − ∇u = 0 in Ω,
(3.7) div σ − (1/ν) u·Aσ = −f in Ω.
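As noted in the introduction, the pressure and the velocity gradient can be recovered algebraically from the (unscaled) pseudostress σ = ν∇u − pδ: since div u = 0 forces tr σ = −dp, one gets p = −tr σ/d and ∇u = Aσ/ν. A minimal numerical sketch of this post-processing (the tensor entries and parameter values below are invented for illustration):

```python
import numpy as np

def deviatoric(tau):
    # A(tau) = tau - (1/d) tr(tau) * I, the trace-free part of tau
    d = tau.shape[0]
    return tau - np.trace(tau)/d * np.eye(d)

def recover(sigma, nu):
    # sigma = nu * grad(u) - p * I with tr(grad u) = div u = 0
    d = sigma.shape[0]
    p = -np.trace(sigma)/d           # from tr(sigma) = -d * p
    grad_u = deviatoric(sigma)/nu    # from A(sigma) = nu * grad(u)
    return p, grad_u

# sanity check with a manufactured trace-free velocity gradient
nu, p = 0.1, 2.5
grad_u = np.array([[1.0, 3.0],
                   [-2.0, -1.0]])    # trace-free, i.e. div u = 0
sigma = nu*grad_u - p*np.eye(2)
p_rec, g_rec = recover(sigma, nu)
print(p_rec, g_rec)                  # p_rec ~ 2.5, g_rec ~ grad_u
```

In practice this recovery is applied cellwise to the discrete pseudostress, which is one of the advantages of the formulation mentioned above.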

Assume that Ω is a bounded, open, connected subset of ℝ^d (d = 2 or 3) whose boundary ∂Ω is a convex polygon/polyhedron. We use standard notation and definitions for the Sobolev spaces W^{r,p}(Ω) and W^{r,p}(∂Ω) for integers r ≥ 0 and p ∈ [1, ∞]. The associated norms are denoted by ‖·‖_{r,p} := ‖·‖_{r,p,Ω} and ‖·‖_{r,p,∂Ω}. For r = 0, W^{r,p}(Ω) coincides with L^p(Ω). Moreover, the space W^{r,2}(Ω) will be written H^r(Ω). Let

H(div; Ω)^d = {τ ∈ L²(Ω)^{d×d} : div τ ∈ L²(Ω)^d},
W^{0,3}(div; Ω)^d = {τ ∈ W^{0,3}(Ω)^{d×d} : div τ ∈ W^{0,3}(Ω)^d},
L²_0(Ω) = {q ∈ L²(Ω) : ∫_Ω q dx = 0}.

We define spaces

Ĥ(div; Ω)^d = {τ ∈ H(div; Ω)^d : ∫_Ω tr τ dx = 0},
Ŵ^{0,3}(div; Ω)^d = {τ ∈ W^{0,3}(div; Ω)^d : ∫_Ω tr τ dx = 0},
Ĥ^r(Ω)^{d×d} = {τ ∈ H^r(Ω)^{d×d} : ∫_Ω tr τ dx = 0}.

Then the mixed variational problem of the pseudostress-velocity formulation (3.6)–(3.7) is to find

(σ, u) ∈ Ŵ^{0,3}(div; Ω)^d × L²(Ω)^d

such that

(3.8) (Aσ, τ) + (div τ, u) = g^*(τ) for all τ ∈ Ĥ(div; Ω)^d,
(3.9) (div σ, v) − (1/ν)(u·Aσ, v) = f^*(v) for all v ∈ L²(Ω)^d,

where (·,·) denotes the L²(Ω)^{d×d} or L²(Ω)^d inner product, g^*(τ) = ∫_{∂Ω} (n·τ)·g ds and f^*(v) = −∫_Ω f·v dx. Note that if we use the trial space Ŵ^{0,3}(div; Ω)^d × L²(Ω)^d, then the nonlinear term u·Aσ is guaranteed to be in L²(Ω)^d by the imbedding theorem (see [5, Remark 2.1]). The well-posedness and uniqueness of system (3.8)–(3.9) are established by the following well-known theorem (see [5]).

Theorem 3.1

Let 𝒇 be in L²(Ω)^d. For g ∈ H^{3/2}(∂Ω)^d, system (3.8)–(3.9) has a solution (σ, u) which belongs to H¹(Ω)^{d×d} × H²(Ω)^d. Moreover, if ν > ν_0(Ω; f, g) (see [5] or [13] for the specific value of ν_0), then the solution (σ, u) is unique.

3.1 BRR Framework for NSE

We let

G(ν, ϕ) := (−g, −(1/ν) u·Aσ + f)

for ϕ := (σ, u) and Y := H^{3/2}(∂Ω)^d × L²(Ω)^d. Let Λ ⊂ (0, ∞) be a compact interval. We take Z = Y; then condition (H4) implies condition (H3). Note that the Fréchet derivative of 𝐺 at (ν, ϕ) = (ν, (σ, u)) is

D_ϕG(ν, ϕ)(ψ) = (0, −(1/ν)(u·Aτ + v·Aσ))

for ψ = (τ, v). To guarantee that D_ϕG(ν, ϕ)(ψ) ∈ Z for all 𝜓, we let X := L³(Ω)^{d×d} × L⁶(Ω)^d. In fact, note that, for any ψ = (τ, v) ∈ X,

‖u·Aτ + v·Aσ‖_{0,2} ≤ ‖u·Aτ‖_{0,2} + ‖v·Aσ‖_{0,2} ≤ ‖u‖_{0,6} ‖τ‖_{0,3} + ‖v‖_{0,6} ‖σ‖_{0,3} ≤ (‖σ‖²_{0,3} + ‖u‖²_{0,6})^{1/2} (‖τ‖²_{0,3} + ‖v‖²_{0,6})^{1/2}.

Here we used the triangle inequality and Hölder's inequality. Thus hypothesis (H2) holds. From its definition, 𝐺 clearly belongs to the C²-class, so hypothesis (H1) holds.

We let 𝒳 := L²(Ω)^{d×d} × L²(Ω)^d and introduce the linear solution operator

S : Y → 𝒳, (g^*, f^*) ↦ S(g^*, f^*) = (σ, u)

defined by the solution of

(3.10) (Aσ, τ) + (div τ, u) = ∫_{∂Ω} (n·τ)·g^* ds for all τ ∈ Ĥ(div; Ω)^d,
(3.11) (div σ, v) = ∫_Ω f^*·v dx for all v ∈ L²(Ω)^d.
The following result is well known (see [5, Lemma 5.1]).

Lemma 3.2

For any (g^*, f^*) ∈ Y, the Stokes equations (3.10)–(3.11) have a unique solution (σ, u) = S(g^*, f^*), which lies in H¹(Ω)^{d×d} × H²(Ω)^d ⊂ X.

Clearly, the pair (ν, ϕ) ∈ Λ × X is a solution of

(3.12) F ( ν , ϕ ) := ϕ + S G ( ν , ϕ ) = 0

if and only if the function ϕ = ( σ , u ) is a solution of (3.8)–(3.9).

Theorem 3.3

For any (g, f) ∈ Y, there is a ν_0(Ω; f, g) such that if ν > ν_0(Ω; f, g), where ν_0 is the fixed value stated in Theorem 3.1, then nonlinear problem (3.12) has a branch of nonsingular solutions.

Proof

See [21]. ∎

We now consider mixed finite element approximations. Let {𝒯_h} be a family of shape-regular triangulations of Ω̄ by triangles (d = 2) or tetrahedra (d = 3) of diameter h_T, and let h := max_{T∈𝒯_h} h_T denote the mesh size. Associated with 𝒯_h, we define a finite-dimensional subspace of 𝑋 by

X_h := RT̂_k^d(𝒯_h) × P_k^d(𝒯_h),

where P_k^d(𝒯_h) := ∏_{i=1}^d P_k(𝒯_h) (i.e., a product space of polynomials of degree 𝑘) and

RT̂_k^d(𝒯_h) := {τ ∈ RT_k^d(𝒯_h) : ∫_Ω tr τ dx = 0}

for RT_k^d(𝒯_h) := ∏_{i=1}^d RT_k(𝒯_h) (i.e., a product of Raviart–Thomas spaces of order 𝑘).

For generic constants 𝐶 independent of the local or global mesh sizes and for any expressions 𝐴 and 𝐵, we use the short notation

A ≲ B to abbreviate A ≤ CB.

We introduce a projection operator over the space 𝑊, where W := Ĥ¹(Ω)^{d×d} or W := Ĥ(div; Ω)^d ∩ L^s(Ω)^{d×d} for some s > 2. Let Π̃_h denote the Raviart–Thomas projection operator (see [3]) associated with the degrees of freedom onto RT̂_k^d(𝒯_h) + span{δ}. We define Π_h : W → RT̂_k^d(𝒯_h) by

Π_h τ = Π̃_h τ − (1/(d|Ω|)) (∫_Ω tr(Π̃_h τ) dx) δ for all τ ∈ W,

where |Ω| = ∫_Ω dx. Then we have ∫_Ω tr(Π_h τ) dx = 0. We note that the operator Π̃_h reproduces normal components on element edges in the sense that, for E ∈ ℰ_h (the set of all edges of 𝒯_h), for v_h ∈ P_k^d(𝒯_h) and for τ ∈ W,

(3.13) ⟨v_h, n_E·(τ − Π̃_h τ)⟩_E = 0.

In general, however, this relation does not hold for the operator Π_h.
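The mean-trace correction defining Π_h from Π̃_h is a purely algebraic post-processing step: subtract the constant (∫_Ω tr(Π̃_h τ) dx)/(d|Ω|) times δ. A small sketch of just this normalization on a piecewise-constant tensor field (the cell values and areas are invented; this is not the Raviart–Thomas projection itself):

```python
import numpy as np

def normalize_trace(tau_cells, areas, d=2):
    # Subtract (integral of tr(tau) over Omega) / (d * |Omega|) times the identity
    # on every cell, so the corrected field has zero mean trace.
    traces = np.einsum('kii->k', tau_cells)            # tr(tau) on each cell
    c = (traces * areas).sum() / (d * areas.sum())     # the constant to remove
    return tau_cells - c * np.eye(d)

# two cells of a partition of a unit-area domain (illustrative values)
tau = np.array([[[1.0, 0.5], [0.0, 3.0]],
                [[2.0, -1.0], [1.0, 0.0]]])
areas = np.array([0.3, 0.7])
out = normalize_trace(tau, areas)
mean_trace = (np.einsum('kii->k', out) * areas).sum()
print(mean_trace)   # ~0: zero mean trace after the correction (up to rounding)
```

Subtracting a constant multiple of δ changes each cell trace by the same amount d·c, which is exactly what drives the weighted mean trace to zero.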

Let P_h be the L²-projection onto P_k^d(𝒯_h), with the well-known approximation property

(3.14) ‖P_h v − v‖_{0,p,T} ≲ h_T^r |v|_{r,p,T}, 0 ≤ r ≤ k + 1, for all v ∈ W^{r,p}(T)^d.

Then the following lemma holds (see [6] for details).

Lemma 3.4

The commutativity property div Π_h = P_h div holds. Furthermore, we have

(3.15) ‖τ − Π_h τ‖_{0,2} ≲ h^r |τ|_{r,2} for 1 ≤ r ≤ k + 1,
‖div τ − div(Π_h τ)‖_{0,2} ≲ h^r |div τ|_{r,2} for 0 ≤ r ≤ k + 1.

The mixed finite element method for approximating the solution of system (3.8)–(3.9) reads: find (σ_h, u_h) ∈ X_h such that

(3.16) (Aσ_h, τ) + (u_h, div τ) = g^*(τ) for all τ ∈ RT̂_k^d(𝒯_h),
(3.17) (div σ_h, v) − (1/ν)(u_h·Aσ_h, v) = f^*(v) for all v ∈ P_k^d(𝒯_h).

Next, we define a discrete solution operator S_h : Y → X_h by

S_h(g^*, f^*) = (σ_h, u_h) ∈ X_h,

where (σ_h, u_h) ∈ X_h is the solution of the discrete counterpart of (3.10)–(3.11),

(3.18) (Aσ_h, τ) + (u_h, div τ) = ∫_{∂Ω} (n·τ)·g^* ds for all τ ∈ RT̂_k^d(𝒯_h),
(3.19) (div σ_h, v) = ∫_Ω f^*·v dx for all v ∈ P_k^d(𝒯_h).

Now, the discrete nonlinear problem corresponding to (3.12) is to find ϕ_h = (σ_h, u_h) ∈ X_h such that

(3.20) F_h(ν, ϕ_h) := ϕ_h + S_h G(ν, ϕ_h) = 0.

Note that ϕ_h = (σ_h, u_h) is the solution of (3.20) if and only if ϕ_h ∈ X_h satisfies (3.16)–(3.17).

Theorem 3.5

For any (g^*, f^*) ∈ Y, let (σ, u) = S(g^*, f^*) and (σ_h, u_h) = S_h(g^*, f^*) be the solutions of (3.10)–(3.11) and (3.18)–(3.19), respectively. Then

lim_{h→0} ‖S − S_h‖_{Z;X} = 0.

Moreover, if (σ, u) ∈ H^r(Ω)^{d×d} × H^r(Ω)^d for 1 ≤ r ≤ k + 1, then we have

‖S(g^*, f^*) − S_h(g^*, f^*)‖_X ≲ h^r (‖σ‖_{r,2} + ‖u‖_{r,2}) ≲ h^r (‖f^*‖_{r−2,2} + ‖g^*‖_{r−1/2,2,∂Ω}).

Proof

See [21]. ∎

Theorem 3.5 implies that hypotheses (H3) and (H4) hold. We get the following theorem.

Theorem 3.6

Let (g, f) ∈ Y. Assume that ν > ν_0(Ω; f, g), where ν_0 is the fixed value stated in Theorem 3.3, and that (ν, ϕ(ν)) = (ν, (σ(ν), u(ν))) is a branch of nonsingular solutions of (3.12). Then, for ℎ sufficiently small, there exist a neighborhood 𝒪 of the origin in 𝑋 and a unique C²-function ν ↦ ϕ_h(ν) ∈ X_h such that {(ν, ϕ_h(ν)) : ν ∈ Λ} is a branch of nonsingular solutions of (3.20) and ϕ(ν) − ϕ_h(ν) ∈ 𝒪.

Moreover, assume that (σ(ν), u(ν)) ∈ H^r(Ω)^{d×d} × H^r(Ω)^d for r = 1, 2, …, k + 1. Then we have

‖ϕ(ν) − ϕ_h(ν)‖_X := ‖σ(ν) − σ_h(ν)‖_{0,3} + ‖u(ν) − u_h(ν)‖_{0,6} ≲ h^r (‖σ(ν)‖_{r,2} + ‖u(ν)‖_{r,2}),

where ϕ_h(ν) = (σ_h(ν), u_h(ν)).

Proof

See [21]. ∎

3.2 Error Estimates for Multi-level Algorithm for NSE

Now we apply the multi-level algorithm to the NSE in the pseudostress-velocity formulation. Note that the Fréchet derivative of 𝐹 at (ν, ϕ) = (ν, (σ, u)) is

D_ϕF(ν, ϕ)(ψ) = (I + S D_ϕG(ν, ϕ))(ψ) = ψ + S(0, −(1/ν)(u·Aτ + v·Aσ))

for any ψ = (τ, v) ∈ X.

We generate a shape-regular triangulation 𝒯_{j+1} from 𝒯_j (j ≥ 1) with newest-vertex bisection or longest-edge bisection. For convenience, we omit 𝜈 from the functions ϕ_j(ν).

Algorithm 2 (Multi-level algorithm for NSE)

  1. (Nonlinear solver) Find ϕ_0 = (σ_0, u_0) ∈ X_0, where X_0 := RT̂_k^d(𝒯_0) × P_k^d(𝒯_0), such that

    (3.21) (Aσ_0, τ) + (div τ, u_0) = g^*(τ) for all τ ∈ RT̂_k^d(𝒯_0),
    (3.22) (div σ_0, v) − (1/ν)(u_0·Aσ_0, v) = f^*(v) for all v ∈ P_k^d(𝒯_0).

  2. (Linear solver) For 1 ≤ j ≤ J, find ϕ_j = (σ_j, u_j) ∈ X_j := RT̂_k^d(𝒯_j) × P_k^d(𝒯_j) such that

    (3.23) (Aσ_j, τ) + (div τ, u_j) = g^*(τ) for all τ ∈ RT̂_k^d(𝒯_j),
    (3.24) (div σ_j, v) − (1/ν)(u_{j−1}·Aσ_j + u_j·Aσ_{j−1}, v) = −(1/ν)(u_{j−1}·Aσ_{j−1}, v) + f^*(v) for all v ∈ P_k^d(𝒯_j).
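The structure of (3.24) comes from the standard Newton linearization of the product term u·Aσ at the previous-level pair (u_{j−1}, σ_{j−1}), which produces the three terms u_{j−1}·Aσ_j + u_j·Aσ_{j−1} − u_{j−1}·Aσ_{j−1}. A scalar sanity check (b(u, s) = u·s is our stand-in for the bilinear term; the numbers are arbitrary) shows that this linearization agrees with b to second order in the increments:

```python
def b(u, s):
    return u * s                      # scalar stand-in for the bilinear term

def b_newton(u, s, u0, s0):
    # Newton/Taylor linearization of b at (u0, s0)
    return u0*s + u*s0 - u0*s0

u0, s0 = 1.3, -0.7
du = ds = 1e-2
err = abs(b(u0 + du, s0 + ds) - b_newton(u0 + du, s0 + ds, u0, s0))
print(err)   # equals du*ds = 1e-4 up to rounding: second order in the increment
```

This quadratic-in-the-increment defect is exactly what drives the quadratic relation (2.5) across levels.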

If 𝒯_j with mesh size h_j is a uniform refinement of the previous triangulation 𝒯_{j−1} with mesh size h_{j−1}, then, from the results of Theorem 2.5 and Theorem 3.6, the following theorem for Algorithm 1 can easily be proved.

Theorem 3.7 (A Priori Error Estimates)

Let (g, f) ∈ Y. Assume that ν > ν_0(Ω; f, g), where ν_0 is the fixed value stated in Theorem 3.3, and that (ν, ϕ(ν)) = (ν, (σ(ν), u(ν))) is a branch of nonsingular solutions of (3.12). Let ϕ_0(ν) = (σ_0(ν), u_0(ν)) be the solution of (3.21)–(3.22) and ϕ_j(ν) = (σ_j(ν), u_j(ν)), 1 ≤ j ≤ J, the solutions of (3.23)–(3.24). If (σ(ν), u(ν)) ∈ H^r(Ω)^{d×d} × H^r(Ω)^d for r = 1, 2, …, k + 1, then we have

‖ϕ(ν) − ϕ_j(ν)‖_X = ‖σ(ν) − σ_j(ν)‖_{0,3} + ‖u(ν) − u_j(ν)‖_{0,6} ≲ (h_{j−1}^{2r} + h_j^r)(‖σ(ν)‖_{r,2} + ‖u(ν)‖_{r,2}).

Proof

See [21]. ∎

Theorem 2.8 indeed provides the reliability bound. For brevity, 𝜈 is dropped from the functions ϕ_j(ν) and ϕ̄_j(ν). First, observe that

(3.25) ‖F(ν, ϕ_j)‖ = ‖ϕ_j + SG(ν, ϕ_j)‖ = ‖ϕ_j − ϕ̄_j‖_{L²(Ω)^{d×d} × L²(Ω)^d},

where ϕ̄_j := (σ̄, ū) ∈ H¹(Ω)^{d×d} × H²(Ω)^d is the unique solution of the pseudostress-velocity formulation of the Stokes problem

(3.26) (Aσ̄, τ) + (ū, div τ) = g^*(τ) for all τ ∈ Ĥ(div; Ω)^d,
(3.27) (div σ̄, v) = (1/ν)(u_j·Aσ_j, v) + f^*(v) for all v ∈ L²(Ω)^d.

The following error equations immediately follow from (3.23)–(3.24) and (3.26)–(3.27):

(Aσ̄ − Aσ_j, τ) + (ū − u_j, div τ) = 0 for all τ ∈ RT̂_k^d(𝒯_j),
(3.28) (div σ̄ − div σ_j, v) = (1/ν)((u_j − u_{j−1})·(Aσ_j − Aσ_{j−1}), v) for all v ∈ P_k^d(𝒯_j).

Remark 3.8

Equation (3.25) says that an a posteriori error estimate for the NSE is controlled by one for the Stokes equations, with the additional term appearing on the right-hand side of (3.28). Thus we can apply existing a posteriori error estimates for the pseudostress-velocity formulation of the Stokes equations to the NSE. Indeed, we apply the residual-based a posteriori error estimates for the two-dimensional Stokes equations analyzed by Carstensen, Kim and Park in [10].

Let us describe the well-known results used throughout this paper. Assume that Ω is a bounded Lipschitz domain with polygonal boundary. Then there exists a constant C = C(Ω) such that the following Poincaré inequalities hold:

(3.29) ‖v − v_Ω‖_{0,2} ≤ C ‖∇v‖_{0,2} for all v ∈ H¹(Ω)^d,
‖v‖_{0,2} ≤ C ‖∇v‖_{0,2} for all v ∈ H¹_0(Ω)^d,

where v_Ω is the average of 𝒗 over Ω, i.e., v_Ω = (1/|Ω|) ∫_Ω v(y) dy.

A general (nodal) interpolation operator cannot be applied to H¹ functions, since their point values need not be defined. But Clément's interpolation, which defines a continuous interpolating polynomial using averages of 𝒗 instead of point values, can be applied to H¹ functions. The nodal values of the Clément interpolant depend on the values of 𝒗 on adjacent elements through an averaging process. We now consider the error estimates for Clément's interpolation operator. We define S⁰(𝒯_j)^d ⊂ L²(Ω)^d as the space of piecewise constant vector functions and S¹(𝒯_j)^d ⊂ H¹(Ω)^d or S¹_0(𝒯_j)^d ⊂ H¹_0(Ω)^d as the spaces of continuous, piecewise affine vector functions. For each 𝒯_j, let ℰ_j denote the set of all edges (faces) of 𝒯_j. In what follows, h_T and h_E stand for the diameters of the triangle T ∈ 𝒯_j and the edge E ∈ ℰ_j, respectively. For any T ∈ 𝒯_j and E ∈ ℰ_j, we set

ω_T := ⋃{T′ ∈ 𝒯_j : T̄′ ∩ T̄ ≠ ∅} and ω_E := ⋃{T′ ∈ 𝒯_j : E ⊂ T̄′}.

Note that, since the triangulation 𝒯_j is shape-regular, the maximal number of elements in ω_T is bounded by C_M independently of ℎ. Clément's interpolation operator (see [11]) I_j : H¹(Ω)^d → S¹(𝒯_j)^d satisfies the following estimates:

(3.30) ‖v − I_j v‖_{m,2,T} ≲ h_T^{1−m} ‖v‖_{1,2,ω_T} for all v ∈ H¹(Ω)^d, m = 0, 1, and all T ∈ 𝒯_j,
(3.31) ‖v − I_j v‖_{0,2,E} ≲ h_E^{1/2} ‖v‖_{1,2,ω_E} for all v ∈ H¹(Ω)^d, all E ⊂ ∂T and all T ∈ 𝒯_j.

Finally, we state the usual inverse inequalities introduced in [25], which we will use to derive local lower bounds for the error. We need slightly modified results whose proofs follow easily from the ideas of [25]; we omit them.

Theorem 3.9

There exist constants C_1, \ldots, C_5, depending only on the shape of the elements, such that the following inequalities hold for all polynomials v, w \in P_k(T)^{d^*} and d \in P_k(E)^{d^*}, d^* = d or d \times d, and for real numbers p, q with 1 \le p \le \infty and 1/p + 1/q = 1:

(3.32) C_1 \| v \|_{0,p,T} \le \sup_{w \in P_k(T)^{d^*}} \frac{\int_T v \cdot b_T w \, dx}{\| w \|_{0,q,T}} \le \| v \|_{0,p,T},
(3.33) C_2 \| d \|_{0,2,E} \le \| b_E^{1/2} d \|_{0,2,E} \le \| d \|_{0,2,E},
(3.34) \| b_E P_{\mathrm{ext}} d \|_{0,q,\omega_E} \le C_3 h_T^{1/q} \| d \|_{0,q,E},
(3.35) \| \nabla (b_T v) \|_{0,q,T} \le C_4 h_T^{-1} \| b_T v \|_{0,q,T},
(3.36) \| \nabla (b_E P_{\mathrm{ext}} d) \|_{0,q,\omega_E} \le C_5 h_T^{-1} \| b_E P_{\mathrm{ext}} d \|_{0,q,T}.
Here, for the barycenters x_T of T and x_E of E, the two bubble functions b_T, b_E \in C(\bar{T}, \mathbb{R}) satisfy b_T \in P_3(T), b_T(x_T) = 1 and b_T = 0 on \partial T, and b_E \in P_2(T), b_E(x_E) = 1 and b_E = 0 on \partial T \setminus E, respectively, and P_{\mathrm{ext}} : C(E, \mathbb{R}^{d^*}) \to C(\omega_E, \mathbb{R}^{d^*}) is an extension operator with P_{\mathrm{ext}} w|_E = w.

Let R_j(f) := f + \operatorname{div} \sigma_j - \frac{1}{\nu} u_j \cdot A\sigma_j, \bar{e}_j = \bar{u} - u_j and \bar{\varepsilon}_j = \bar{\sigma} - \sigma_j, and let \Sigma := \hat{H}(\operatorname{div};\Omega)^d and V := L^2(\Omega)^d. For simplicity, we let P_j := P_{h_j} be the L^2-projection onto P_k^d(T_{h_j}).

Lemma 3.10

For the error \bar{e}_j (1 \le j \le J), we have

\| \bar{e}_j \|_{0,2}^2 \lesssim \sum_{T \in T_j} h_T^2 \| R_j(f) \|_{0,2,T}^2 + \sum_{T \in T_j} h_T^2 \| A\sigma_j - \nabla u_j \|_{0,2,T}^2 + \sum_{E \in E_j \cap \partial\Omega} h_E \| g - g_j \|_{0,2,E}^2 + \| (u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}) \|_{0,2,\Omega}^2,

where g j is an approximation of 𝒈 so that g j , n ( η Π ̃ j η ) = 0 .

Proof

Since \bar{e} \in L^2(\Omega)^d, there exist unique \eta \in H^1(\Omega)^{d \times d} and z \in H_0^1(\Omega)^d \cap H^2(\Omega)^d such that

(3.37) ( A η , τ ) + ( div τ , z ) = 0 for all τ Σ ,
( div η , v ) = ( e ̄ , v ) for all v V ,
and

(3.38) z 2 , 2 e ̄ 0 , 2 , η 1 , 2 e ̄ 0 , 2 .

Then, by integration by parts, (3.26) and the relation div Π j = P j div , we get

e ̄ 0 , 2 2 = ( u ̄ u j , div η ) = ( A σ ̄ , η ) + g , n η ( u j , div ( Π j η ) ) .

Equations (3.23), (3.37), (3.27) and (3.24) imply

(3.39) e ̄ 0 , 2 2 = ( A σ ̄ A σ j , η ) ( A σ j , η ) + ( A σ j , Π j η ) + g , n ( η Π j η ) = ( div ( σ ̄ σ j ) , z ) + ( A σ j , Π j η η ) + g , n ( η Π j η ) = ( R j ( f ) , z P j z ) 1 ν ( ( u j u j 1 ) A ( σ j σ j 1 ) , P j z ) + ( A σ j , Π j η η ) + g g j , n ( η Π ̃ j η ) .

The last term of (3.39) is obtained by applying the compatibility condition \int_{\partial\Omega} g \cdot n \, ds = 0 and relation (3.13) to the term \langle g, n(\eta - \Pi_j \eta) \rangle. To estimate the trilinear term ((u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}), P_j z), we use the Hölder inequality and equation (3.14) to obtain

| ( ( u j u j 1 ) A ( σ j σ j 1 ) , P j z ) | ( u j u j 1 ) A ( σ j σ j 1 ) 0 , 2 , Ω P j z 0 , 2 , Ω ( u j u j 1 ) A ( σ j σ j 1 ) 0 , 2 , Ω z 1 , 2 , Ω .

Applying estimates (3.14), (3.15) and (3.38) to the above equation (3.39), we conclude the proof. ∎

To derive the error bound for \bar{\varepsilon} = \bar{\sigma} - \sigma_j, we formulate a Helmholtz decomposition suitable for our problems. For this, we first define the curl operator for various kinds of functions. When d = 2, we define the curl operator for a scalar function q, a vector function v = (v_1, v_2) and a tensor function \psi = (\psi_{ij})_{2 \times 2}, where \psi_i = (\psi_{i1}, \psi_{i2}) denotes its i-th row for i = 1, 2, by

\operatorname{curl} q := \Big( \frac{\partial q}{\partial x_2}, -\frac{\partial q}{\partial x_1} \Big), \quad \operatorname{curl} v := -\frac{\partial v_1}{\partial x_2} + \frac{\partial v_2}{\partial x_1}, \quad \operatorname{Curl} v := \begin{pmatrix} \operatorname{curl} v_1 \\ \operatorname{curl} v_2 \end{pmatrix}, \quad \operatorname{Curl} \psi := ( \operatorname{curl} \psi_1, \operatorname{curl} \psi_2 ).

When d = 3 , we define the curl operator for a vector function v = ( v 1 , v 2 , v 3 ) and a tensor function ψ = ( ψ i j ) 3 × 3 , ψ i = ( ψ i 1 , ψ i 2 , ψ i 3 ) by

\operatorname{curl} v := \Big( \frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3},\ \frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1},\ \frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2} \Big), \quad \operatorname{Curl} \psi := \begin{pmatrix} \operatorname{curl} \psi_1 \\ \operatorname{curl} \psi_2 \\ \operatorname{curl} \psi_3 \end{pmatrix}.
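The identity behind these definitions that is used repeatedly below (e.g., Curl(A\sigma) = Curl(\nabla u) = 0 in Lemma 3.17) can be checked numerically. The following sketch (our own illustration, with an arbitrary smooth q) uses central differences; discrete mixed derivatives built from central differences commute, so the 2D scalar curl of the gradient vanishes up to floating-point roundoff.

```python
import math

def ddx(f, x, y, h=1e-3):
    # central difference in the first variable
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def ddy(f, x, y, h=1e-3):
    # central difference in the second variable
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

q = lambda x, y: math.exp(x) * math.sin(y) + x * x * y  # arbitrary smooth scalar field

# grad q, then the 2D scalar curl: curl v = dv2/dx1 - dv1/dx2
gx = lambda x, y: ddx(q, x, y)
gy = lambda x, y: ddy(q, x, y)
rot_grad = ddx(gy, 0.3, 0.7) - ddy(gx, 0.3, 0.7)  # discrete curl(grad q)
print(abs(rot_grad))  # vanishes up to roundoff
```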

We notice that, for ψ H 1 ( ω ) d × d and r H 1 ( ω ) d * , d * = 2 for d = 2 and d * = 3 × 3 for d = 3 , with the Lipschitz boundary ω ,

(3.40) (\operatorname{Curl} \psi, r)_\omega - (\psi, \operatorname{Curl} r)_\omega = \langle \gamma_t \psi, r \rangle_{\partial\omega},

where \gamma_t \psi denotes t \cdot \psi|_{\partial\omega} if d = 2, and (\gamma_t \psi)_i = n \times \psi_i|_{\partial\omega} if d = 3. Here, when d = 2, t = (t_1, t_2) is the tangential vector on \partial\omega oriented so that t_1 = -n_2, t_2 = n_1, where n = (n_1, n_2) is the unit outer normal vector to \partial\omega; when d = 3, (\gamma_t \psi)_i is the i-th row of the tensor \gamma_t \psi.

Lemma 3.11

For the error \bar{\varepsilon}, let s := \frac{1}{|\Omega|} \int_\Omega \operatorname{tr} \bar{\varepsilon} \, dx. Then there exist z \in H_0^1(\Omega)^d \cap H^2(\Omega)^d with \operatorname{div} z = \operatorname{tr}\bar{\varepsilon} - s, \tilde{p} \in L_0^2(\Omega) and r \in H^1(\Omega)^{d^*}/\mathbb{R}^{d^*}, d^* = 2 for d = 2 and d^* = 3 \times 3 for d = 3, such that

(3.41) \bar{\varepsilon} = \nabla z - \tilde{p}\delta + \operatorname{Curl} r,

with the estimate

(3.42) \| \nabla z \|_{0,2} + | r |_{1,2} + \| \tilde{p} \|_{0,2} \le C \| \bar{\varepsilon} \|_{0,2},

where C is a positive constant independent of \bar{\varepsilon}.

Proof

We consider the following Stokes problem: find z H 0 1 ( Ω ) d H 2 ( Ω ) d and p ̃ L 0 2 ( Ω ) such that

(\nabla z, \nabla w) - (\tilde{p}, \operatorname{div} w) = (\bar{\varepsilon}, \nabla w) \quad \text{for all } w \in H_0^1(\Omega)^d, \qquad (q, \operatorname{div} z) = (q, \operatorname{tr}\bar{\varepsilon} - s) \quad \text{for all } q \in L_0^2(\Omega).

The solutions 𝒛 and p ̃ satisfy

\Delta z - \nabla \tilde{p} = \operatorname{div} \bar{\varepsilon} \quad \text{and} \quad \operatorname{div} z = \operatorname{tr}\bar{\varepsilon} - s.

Hence \bar{\varepsilon} - \nabla z + \tilde{p}\delta is a divergence-free tensor. Moreover, \bar{\varepsilon} - \nabla z + \tilde{p}\delta satisfies

\langle (\bar{\varepsilon} - \nabla z + \tilde{p}\delta)_i \cdot n, 1 \rangle_{\partial\Omega} = 0, \quad i = 1, \ldots, d,

where (\bar{\varepsilon} - \nabla z + \tilde{p}\delta)_i is the i-th row of the tensor \bar{\varepsilon} - \nabla z + \tilde{p}\delta. These conditions imply that there exist stream functions r_i \in H^1(\Omega) for d = 2, or r_i \in H^1(\Omega)^3 satisfying \operatorname{div} r_i = 0 and \operatorname{curl} r_i \in H(\operatorname{div};\Omega) for d = 3, such that (\bar{\varepsilon} - \nabla z + \tilde{p}\delta)_i = \operatorname{curl} r_i (see [13, Theorems 3.1 and 3.4]). Thus, we conclude that, for the two-dimensional vector r = (r_1, r_2) or for the 3 \times 3 tensor r = (r_1^t, r_2^t, r_3^t)^t, we have \bar{\varepsilon} = \nabla z - \tilde{p}\delta + \operatorname{Curl} r. The regularity theory for the Stokes problem (cf. [13, Theorem 5.4]) yields \| \nabla z \|_{0,2} + \| \tilde{p} \|_{0,2} \le C \| \bar{\varepsilon} \|_{0,2}, where the positive constant C is independent of \bar{\varepsilon} (see [13]). From this inequality and the fact that | r |_{1,2} = \| \operatorname{Curl} r \|_{0,2} for d = 2 and | r |_{1,2} \le C ( \| \operatorname{div} r \|_{0,2} + \| \operatorname{Curl} r \|_{0,2} ) = C \| \operatorname{Curl} r \|_{0,2} for d = 3, we easily get | r |_{1,2} \le C \| \bar{\varepsilon} \|_{0,2}. ∎

We define the jump [ ] for E E j and for v H 1 ( T j ) d by

[v]|_E := (v|_{T_+})|_E - (v|_{T_-})|_E \quad \text{if } E \subset \bar{T}_+ \cap \bar{T}_- \ (E \in E_j,\ T_+, T_- \in T_j),

and n E points from T + into its neighbor element T .

Lemma 3.12

For the error \bar{\varepsilon}_j (1 \le j \le J), we have

\| \bar{\varepsilon} \|_{0,2}^2 \lesssim \sum_{T \in T_j} h_T^2 \| R_j(f) \|_{0,2,T}^2 + \sum_{T \in T_j} h_T^2 \| \operatorname{Curl}(A\sigma_j) \|_{0,2,T}^2 + \sum_{E \in E_j \setminus \partial\Omega} h_E \| [\gamma_t A\sigma_j] \|_{0,2,E}^2 + \sum_{E \in E_j \cap \partial\Omega} h_E \| \gamma_t(A\sigma_j - \nabla g) \|_{0,2,E}^2 + \| (u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}) \|_{0,2,\Omega}^2.

Proof

In the Helmholtz decomposition (3.41) of ε ̄ ,

ε ̄ = z p ̃ δ + Curl r ,

we choose r satisfying \int_\Omega r \, dx = 0 in order to apply Poincaré's inequality (3.29) later.

From the above decomposition and the identity div z = tr ε ̄ s , we get

\operatorname{tr}\bar{\varepsilon} = \operatorname{div} z - \operatorname{tr}(\tilde{p}\delta - \operatorname{Curl} r) = \operatorname{tr}\bar{\varepsilon} - s - \operatorname{tr}(\tilde{p}\delta - \operatorname{Curl} r).

Thus \operatorname{tr}(\tilde{p}\delta - \operatorname{Curl} r) = -s. Since \int_\Omega \operatorname{tr}\bar{\sigma} \, dx = 0 and \int_\Omega \operatorname{tr}\sigma_j \, dx = 0, we have

(3.43) (\operatorname{tr}\bar{\varepsilon}, \operatorname{tr}(\tilde{p}\delta - \operatorname{Curl} r)) = -s (\operatorname{tr}\bar{\sigma} - \operatorname{tr}\sigma_j, 1) = 0.

From (3.43), we have the error decomposition

(3.44) (\bar{\varepsilon}, \bar{\varepsilon}) = (\bar{\varepsilon}, \nabla z - \tilde{p}\delta + \operatorname{Curl} r) = (\bar{\varepsilon}, \nabla z) - (\operatorname{tr}\bar{\varepsilon}, \tilde{p}) + (A\bar{\varepsilon}, \operatorname{Curl} r) + \frac{1}{d} ((\operatorname{tr}\bar{\varepsilon})\delta, \operatorname{Curl} r) = (\bar{\varepsilon}, \nabla z) + (A\bar{\varepsilon}, \operatorname{Curl} r) - \frac{1}{d} (\operatorname{tr}\bar{\varepsilon}, \operatorname{tr}(\tilde{p}\delta)) + \frac{1}{d} (\operatorname{tr}\bar{\varepsilon}, \operatorname{tr}\operatorname{Curl} r) = (\bar{\varepsilon}, \nabla z) + (A\bar{\varepsilon}, \operatorname{Curl} r) = (I) + (II).

Integration by parts, (3.24) and (3.27) imply

(I) = (\bar{\varepsilon}, \nabla z) = (R_j(f), z - P_j z) + \frac{1}{\nu} ((u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}), P_j z).

Using inequality (3.14), Cauchy–Schwarz inequality and Sobolev imbedding theorem, we have

(3.45) (I) \lesssim h_j \| R_j(f) \|_{0,2,\Omega} \| \nabla z \|_{0,2,\Omega} + \| (u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}) \|_{0,2,\Omega} \| z \|_{2,2,\Omega}.

We now consider the second term (II) of (3.44). For the Clément operator I_j, we have \operatorname{div}\operatorname{Curl}(I_j r) = 0 in the weak sense and \operatorname{Curl}(I_j r) \in H(\operatorname{div};\Omega)^{d \times d}. For an arbitrary r \in H^1(\Omega)^{d^*}, \operatorname{Curl}(I_j r) is not necessarily in \widehat{RT}_k^d(T_j) since it may not satisfy the constraint \int_\Omega \operatorname{tr}(\operatorname{Curl}(I_j r)) \, dx = 0. But \operatorname{Curl}(I_j r) - \tilde{s}\delta belongs to the space \widehat{RT}_k^d(T_j), where

\tilde{s} = \frac{1}{d \, |\Omega|} \int_\Omega \operatorname{tr}(\operatorname{Curl}(I_j r)) \, dx.

We find, from (3.23) and (3.26),

(A\bar{\varepsilon}, \operatorname{Curl}(I_j r) - \tilde{s}\delta) = -(\bar{e}, \operatorname{div}(\operatorname{Curl}(I_j r) - \tilde{s}\delta)) = 0.

Moreover, from the fact that (A\bar{\varepsilon}, \tilde{s}\delta) = (\operatorname{tr}(A\bar{\varepsilon}), \tilde{s}) = 0, we get (A\bar{\varepsilon}, \operatorname{Curl}(I_j r)) = 0. By the identity A\bar{\sigma} = \nabla\bar{u} and integration by parts (3.40), the second term (II) of (3.44) becomes

(II) = (A\bar{\varepsilon}, \operatorname{Curl}(r - I_j r)) = -(A\sigma_j, \operatorname{Curl}(r - I_j r)) - \langle \gamma_t \nabla\bar{u}, r - I_j r \rangle.

Using integration by parts (3.40) and estimates (3.30)–(3.31), recalling that the number of elements in \omega_T is bounded by the constant C_M, and using Poincaré's inequality (3.29), we obtain

(3.46) (II) = -\sum_{T \in T_j} (\operatorname{Curl}(A\sigma_j), r - I_j r)_T + \sum_{T \in T_j} \langle \gamma_t A\sigma_j, r - I_j r \rangle_{\partial T} - \langle \gamma_t \nabla g, r - I_j r \rangle \lesssim \Big( \Big( \sum_{T \in T_j} h_T^2 \| \operatorname{Curl}(A\sigma_j) \|_{0,2,T}^2 \Big)^{1/2} + \Big( \sum_{E \in E_j \setminus \partial\Omega} h_E \| [\gamma_t A\sigma_j] \|_{0,2,E}^2 \Big)^{1/2} + \Big( \sum_{E \in E_j \cap \partial\Omega} h_E \| \gamma_t(A\sigma_j - \nabla g) \|_{0,2,E}^2 \Big)^{1/2} \Big) | r |_{1,2}.

Putting (3.45) and (3.46) into (3.44) and using estimate (3.42), we conclude the proof. ∎

Combining Corollary 2.8 and Lemmas 3.10–3.12, we obtain the following reliability estimate.

Theorem 3.13 (Reliable A Posteriori Error Estimates)

For the solution ϕ = ( σ , u ) of problem (3.8)–(3.9) and the approximate solution ϕ j = ( σ j , u j ) of problem (3.23)–(3.24), we have, for 1 j J ,

\| \phi - \phi_j \|_X^2 \lesssim \eta_j^2 + \| (u_j - u_{j-1}) \cdot A(\sigma_j - \sigma_{j-1}) \|_{0,2,\Omega}^2,

where the error estimator η j is defined by η j 2 := T T j η j ( T ) 2 with the local error estimator η j ( T ) given by

\eta_j(T)^2 := h_T^2 \| R_j(f) \|_{0,2,T}^2 + h_T^2 \| A\sigma_j - \nabla u_j \|_{0,2,T}^2 + h_T^2 \| \operatorname{Curl}(A\sigma_j) \|_{0,2,T}^2 + \sum_{E \subset \partial T \setminus \partial\Omega} h_E \| [\gamma_t A\sigma_j] \|_{0,2,E}^2 + \sum_{E \subset \partial T \cap \partial\Omega} h_E \big( \| g - g_j \|_{0,2,E}^2 + \| \gamma_t(A\sigma_j - \nabla g) \|_{0,2,E}^2 \big).

Remark 3.14

We note that physical quantities such as pressure, gradient velocity and stress can be expressed as follows:

p = -\frac{\nu}{d} \operatorname{tr} \sigma, \qquad \nabla u = A\sigma, \qquad \tilde{\sigma} = \nu ( \sigma + (\nabla u)^t ) = \nu ( \sigma + (A\sigma)^t ) = \nu ( A\sigma + \sigma^t ).

From these identities, the approximation of pressure, gradient velocity and stress can be defined by

p_j := -\frac{\nu}{d} \operatorname{tr} \sigma_j, \qquad (\nabla u)_j := A\sigma_j, \qquad \tilde{\sigma}_j := \nu ( A\sigma_j + (\sigma_j)^t ).

Then the following relations hold:

p p j 0 , 3 = ν d tr σ tr σ j 0 , 3 ν d σ σ j 0 , 3 , u ( u ) j 0 , 3 = A σ A σ j 0 , 3 σ σ j 0 , 3 , σ ̃ σ ̃ j 0 , 3 ν A σ A σ j 0 , 3 + ν σ t ( σ j ) t 0 , 3 2 ν σ σ j 0 , 3 .

Now, we derive an efficient a posteriori error estimate. Let f j P k ( T ) d be a polynomial approximation of 𝒇.

Lemma 3.15

For each T T j ( 1 j J ), we have

h_T \| R_j(f) \|_{0,2,T} \lesssim h_T \| u - u_j \|_{0,6,T} + \| \sigma - \sigma_j \|_{0,3,T} + h_T \| f - f_j \|_{0,2,T}.

Proof

Using inverse inequality (3.32), the identity \operatorname{div}\sigma = \frac{1}{\nu} u \cdot A\sigma - f and integration by parts, we have

C 1 f j + div σ j 1 ν u j A σ j 0 , 2 , T 2 ( f j + div σ j 1 ν u j A σ j , b T R j ( f j ) ) T = ( f j f + div σ j div σ 1 ν u j A σ j + 1 ν u A σ , b T R j ( f j ) ) T = ( f j f 1 ν u j A σ j + 1 ν u A σ , b T R j ( f j ) ) T + ( σ j σ , b T R j ( f j ) ) T .

By the Hölder inequality, inverse inequality (3.32) and (3.35), and the mean value theorem, we get

(3.47) C 1 f j + div σ j 1 ν u j A σ j 0 , 2 , T 2 ( 1 ν u A σ 1 ν u j A σ j 0 , 2 , T + + f f j 0 , 2 , T ) R j ( f ) 0 , 2 , T + h T 1 C 4 σ σ j 0 , 3 , T b T R j ( f ) 0 , 3 / 2 , T ( u u j 0 , 6 , T + σ σ j 0 , 3 , T ) R j ( f ) 0 , 2 , T + ( h T 1 C 4 σ σ j 0 , 3 , T + f f j 0 , 2 , T ) R j ( f ) 0 , 2 , T

With an application of the triangle inequality to the identity

R_j(f) = f + \operatorname{div}\sigma_j - \frac{1}{\nu} u_j \cdot A\sigma_j = (f - f_j) + R_j(f_j),

we complete the proof by (3.47). ∎

Since ( A σ j ) | T P k + 1 ( T ) d × d , we can apply inverse inequalities (3.32)–(3.36) to the polynomial function ( A σ j ) | T .

Lemma 3.16

For each T T j ( 1 j J ), we have

h_T \| A\sigma_j - \nabla u_j \|_{0,2,T} \le C_1^{-2} C_4 \big( h_T \| \sigma - \sigma_j \|_{0,3,T} + \| u - u_j \|_{0,6,T} \big).

Proof

By inequality (3.32) and integration by parts, we have

(3.48) C 1 2 A σ j u j 0 , 2 , T 2 ( A σ j u j , b T ( A σ j u j ) ) T = ( A σ A σ j , b T ( A σ j u j ) ) T + ( u u j , b T ( A σ j u j ) ) T = ( A σ A σ j , b T ( A σ j u j ) ) T + ( u u j , div ( b T ( A σ j u j ) ) ) T A σ A σ j 0 , 3 , T A σ j u j 0 , 3 / 2 , T + u u j 0 , 6 , T | b T ( A σ j u j ) | 1 , 6 / 5 , T .

Applying inverse inequality (3.35),

(3.49) | b T ( A σ j u j ) | 1 , 6 / 5 , T C 4 h T 1 A σ j u j 0 , 6 / 5 , T .

Putting (3.49) into (3.48) and using the relation A σ A σ j 0 , 3 , T σ σ j 0 , 3 , T , we get

h_T \| A\sigma_j - \nabla u_j \|_{0,2,T} \le C_1^{-2} C_4 \big( h_T \| \sigma - \sigma_j \|_{0,3,T} + \| u - u_j \|_{0,6,T} \big). ∎

Lemma 3.17

For each T T j ( 1 j J ), we have

h_T \| \operatorname{Curl}(A\sigma_j) \|_{0,2,T} \le C_1^{-2} C_4 \| \sigma - \sigma_j \|_{0,3,T}.

Proof

Inequality (3.32), the identity Curl ( A σ ) = Curl ( u ) = 0 and integration by parts imply

(3.50) C 1 2 Curl ( A σ j ) 0 , 2 , T 2 b T 1 / 2 Curl ( A σ j ) 0 , 2 , T 2 = ( Curl ( A σ A σ j ) , b T Curl ( A σ j ) ) T = ( A σ A σ j , Curl ( b T Curl ( A σ j ) ) ) T σ σ j 0 , 3 , T | b T Curl ( A σ j ) | 1 , 3 / 2 , T .

We obtain, from inverse inequality (3.35),

(3.51) | b T Curl ( A σ j ) | 1 , 3 / 2 , T C 4 h T 1 Curl ( A σ j ) 0 , 3 / 2 , T C 4 h T 1 Curl ( A σ j ) 0 , 2 , T .

Hence, combining (3.50) and (3.51), we conclude the proof. ∎

Lemma 3.18

For each E E j ( 1 j J ),

h_E^{1/2} \| [\gamma_t A\sigma_j] \|_{0,2,E} \le C \| \sigma - \sigma_j \|_{0,3,\omega_E},

where C = C_2^{-2} C_3 \max\{ C_5, C_1^{-2} C_4 \}.

Proof

Set d := [ γ t A σ j ] , which is a polynomial along 𝐸. Note that [ γ t A σ j ] = [ γ t A ( σ σ j ) ] on each edge 𝐸 in the weak sense and P ext d | E = d . Using inequality (3.33), integration by parts (3.40) and the identity Curl ( A σ ) = 0 , we have, for any edge E Γ and for T i ω E , i = 1 , 2 ,

(3.52) C 2 2 d 0 , 2 , E 2 b E 1 / 2 d 0 , 2 , E 2 = b E P ext d , [ γ t A ( σ σ j ) ] E = ( A ( σ σ j ) , Curl ( b E P ext d ) ) ω E i = 1 , 2 ( b E P ext d , Curl ( A σ j ) ) T i σ σ j 0 , 3 , ω E | b E P ext d | 1 , 3 / 2 , ω E + b E P ext d 0 , 2 , ω E ( i = 1 , 2 Curl ( A σ j ) 0 , 2 , T i ) .

From Lemma 3.17, we have

(3.53) i = 1 , 2 Curl ( A σ j ) 0 , 2 , T i C 1 2 C 4 i = 1 , 2 h T i 1 σ σ j 0 , 3 , T i .

Application of (3.34), (3.36) and (3.53) to (3.52) yield

h_E^{1/2} \| d \|_{0,2,E} \le C \| \sigma - \sigma_j \|_{0,3,\omega_E},

where C = C_2^{-2} C_3 \max\{ C_5, C_1^{-2} C_4 \}. ∎

Remark 3.19

By the Hölder inequality and the triangle inequality, we have

( u j u j 1 ) A ( σ j σ j 1 ) 0 , 2 , Ω u j u j 1 0 , 6 , Ω σ j σ j 1 0 , 3 , Ω ( u j u 0 , 6 , Ω + u u j 1 0 , 6 , Ω ) ( σ j σ 0 , 3 , Ω + σ σ j 1 0 , 3 , Ω ) .

Assume that h_{j-1}^2 \le h_j for j > 0. Then, by the result of Theorem 3.7,

\| u_j - u \|_{0,6,\Omega} + \| u - u_{j-1} \|_{0,6,\Omega} \lesssim ( h_{j-1}^{2r} + h_j^r ) + ( h_{j-2}^{2r} + h_{j-1}^r ) \lesssim h_j^r + h_{j-1}^r \lesssim h_{j-1}^r.

Similarly,

σ j σ 0 , 3 , Ω + σ σ j 1 0 , 3 , Ω h j 1 r .

Hence

( u j u j 1 ) A ( σ j σ j 1 ) 0 , 2 , Ω = O ( h j 1 2 r ) ,

which shows the asymptotic behavior of the additional term. Note that the additional term

( u j u j 1 ) A ( σ j σ j 1 ) 0 , 2 , Ω = O ( h j 1 2 r )

is of higher order compared to the other error estimators and the error \| u_j - u \|_{0,6,\Omega} + \| \sigma_j - \sigma \|_{0,3,\Omega} = O(h_j^r) if we take h_j = O(h_{j-1}^{2-\epsilon}) for some positive \epsilon. In practice, adaptive mesh refinement via newest vertex bisection or red refinement satisfies the relation \frac{1}{4} h_{j-1} \le h_j \le \frac{1}{2} h_{j-1}, which implies that the additional term is negligible in the adaptive algorithm, in contrast to the result in [14], where this additional term is not negligible as the refinements proceed. The reason is that, in the second step of the two-grid algorithm of [14], the nonlinear problem is linearized locally at the solution obtained in a fixed coarse space. When the linearized problem in the second step is solved on a fine mesh, the additional term becomes relatively large as the mesh refinement proceeds. This issue is resolved by our multi-level algorithm, where the nonlinear problem is linearized locally at the solution obtained on the immediately preceding mesh.
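A quick numerical illustration of this asymptotic claim (purely schematic; the rate r and the halving refinement are our own example values): with h_j = h_{j-1}/2, the ratio h_{j-1}^{2r}/h_j^r = (2 h_{j-1})^r decays geometrically, so the additional term is eventually dominated by the O(h_j^r) error.

```python
r = 1.0                                 # hypothetical convergence rate
h = [1.0 / 2**j for j in range(1, 11)]  # halving refinement: h_j = h_{j-1}/2
# ratio of the additional term O(h_{j-1}^{2r}) to the error O(h_j^r)
ratios = [h[j - 1] ** (2 * r) / h[j] ** r for j in range(1, len(h))]
print(ratios)  # equals (2*h_{j-1})^r: geometric decay toward zero
```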

Combining Lemmas 3.15–3.18, we arrive at the following efficiency estimate.

Theorem 3.20 (Efficient A Posteriori Error Estimate)

For the solution ϕ = ( σ , u ) of problem (3.8)–(3.9) and the approximate solution ϕ j = ( σ j , u j ) to problem (3.23)–(3.24), we have

\eta_j \lesssim \| \phi - \phi_j \|_X + \operatorname{osc}_k(f, T_j),

where

\operatorname{osc}_k(f, T_j)^2 := \sum_{T \in T_j} \min_{f_j|_T \in P_k(T)^d} h_T^2 \| f - f_j \|_{0,T}^2.

Note that the derived a posteriori error estimator is efficient and reliable, i.e., the efficiency index

EI := η j / ϕ ϕ j X

and its reciprocal are bounded for all triangulations.

4 Application 2: Semilinear Elliptic Equations

We consider the semilinear boundary value problems of the form

(4.1) -\Delta u + G(\lambda, u) = 0 \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega,

where G : D ( G ) R × H 0 1 ( Ω ) L 2 ( Ω ) is a given mapping. This problem has been treated in [7] for d = 2 . Multiplying by test functions and integrating by parts, we obtain the following weak formulation of problem (4.1): find u H 0 1 ( Ω ) such that

(4.2) \int_\Omega \nabla u \cdot \nabla v \, dx + \int_\Omega G(\lambda, u) v \, dx = 0 \quad \text{for all } v \in H_0^1(\Omega).

4.1 BRR Framework for SEE

We define the continuous linear operator S : f L 2 ( Ω ) S f = w H 0 1 ( Ω ) H 2 ( Ω ) which maps 𝑓 onto the solution 𝑤 of the following problem: given a function f L 2 ( Ω ) , find the unique solution w H 0 1 ( Ω ) H 2 ( Ω ) such that

\int_\Omega \nabla w \cdot \nabla v \, dx = \int_\Omega f v \, dx \quad \text{for all } v \in H_0^1(\Omega).

Introducing the operator 𝑆, problem (4.2) can be written in the equivalent way

(4.3) F ( λ , u ) = u + S G ( λ , u ) = 0 .

In order to approximate the solution of problem (4.3), given a finite-dimensional space X h H 0 1 ( Ω ) , we define the linear operator S h : f L 2 ( Ω ) S h f = w h X h such that

\int_\Omega \nabla w_h \cdot \nabla v_h \, dx = \int_\Omega f v_h \, dx \quad \text{for all } v_h \in X_h.

Then the approximation of nonlinear problem (4.3) is to find ( λ , u h ) ( R × X h ) D ( G ) such that

(4.4) F h ( λ , u h ) = u h + S h G ( λ , u h ) = 0 .

We analyze problem (4.1) with G(\lambda, u) := G(u) = u^\alpha - f for \alpha = 2 or 3 and a given function f \in L^2(\Omega); the case \alpha = 3 was analyzed in [20]. The following details are given only for the reader's convenience. In order to apply Theorem 2.1, we take X = X := H_0^1(\Omega) and Y := Z := L^2(\Omega). Since the mapping G is C^\infty and D^2 G is bounded on all bounded subsets of X := H_0^1(\Omega), hypothesis (H1) holds. And since H^1(\Omega) is continuously imbedded in L^p(\Omega) with 1 \le p \le 6 for d \le 3 by the Sobolev imbedding theorem, the function u^\alpha belongs to L^2(\Omega), i.e., G(u) \in Y and DG(u) \in \mathcal{L}(X; Z) for all u \in X. Hence, hypothesis (H2) holds. Let X_h \subset H_0^1(\Omega) be a conforming finite element space of piecewise polynomials of degree k \ge 1 defined on the triangulation T_h. It is well known that, for f^* \in H^{r-2}(\Omega) with 2 \le r \le k+1,

(4.5) ( S S h ) f * s , 2 C h r s S f * r , 2 C h r s f * r 2 , 2 , s = 0 , 1 ,

where \| \cdot \|_{r,2} denotes the norm of H^r(\Omega) and \| \cdot \|_{0,2} that of L^2(\Omega); this yields that hypotheses (H3)–(H4) hold. From [7, Theorem 3.1], we have a unique nonsingular solution u \in H_0^1(\Omega) \cap H^2(\Omega) of problem (4.1). By Theorem 2.1 and (4.5), there exists a unique nonsingular solution u_h of (4.4) such that \| u - u_h \|_{1,2} \le C \| (S - S_h) G(u) \|_{1,2} \le C h \| u \|_{2,2}. We note that the error bound \| u - u_h \|_{0,2} \le C h^2 \| u \|_{2,2} was also derived in [20].

4.2 Error Estimates for Multi-level Algorithm for SEE

We generate a shape-regular triangulation T_{j+1} from T_j (j \ge 1) with newest vertex bisection or longest edge bisection. For simplicity, we let X_j := X_{h_j}. Then, for problem (4.1) with G(\lambda, u) = u^\alpha - f, the multi-level algorithm reads as follows.

Algorithm 3 (Multi-level algorithm for SEE)

  1. (Nonlinear solver) Find u 0 X 0 such that

    (4.6) \int_\Omega \nabla u_0 \cdot \nabla v \, dx + \int_\Omega u_0^\alpha v \, dx = \int_\Omega f v \, dx \quad \text{for all } v \in X_0.

  2. (Linear solver) For 1 \le j \le J, find u_j \in X_j such that

    (4.7) \int_\Omega \nabla u_j \cdot \nabla v \, dx + \alpha \int_\Omega u_{j-1}^{\alpha-1} u_j v \, dx = \int_\Omega \big( (\alpha - 1) u_{j-1}^\alpha v + f v \big) \, dx \quad \text{for all } v \in X_j.
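The two steps of the multi-level algorithm can be sketched on a one-dimensional finite-difference analogue of (4.1) (our own illustrative stand-in, not the finite element method analyzed above): Newton's method, i.e., repeated solves of the linearization (4.7), is run to convergence on the coarsest grid only, and each finer level performs a single linearized solve starting from the interpolated previous solution.

```python
import math

def thomas(sub, diag, sup, rhs):
    # Thomas algorithm for tridiagonal systems
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def linearized_step(uprev, f, n, alpha=2):
    # one solve of (4.7): -u'' + alpha*uprev^(alpha-1)*u = (alpha-1)*uprev^alpha + f
    h = 1.0 / n
    sub = [-1.0 / h**2] * (n - 1)
    sup = [-1.0 / h**2] * (n - 1)
    diag = [2.0 / h**2 + alpha * uprev[i] ** (alpha - 1) for i in range(1, n)]
    rhs = [(alpha - 1) * uprev[i] ** alpha + f(i * h) for i in range(1, n)]
    return [0.0] + thomas(sub, diag, sup, rhs) + [0.0]

def refine(u):
    # linear interpolation onto the uniformly refined grid
    fine = []
    for i in range(len(u) - 1):
        fine += [u[i], 0.5 * (u[i] + u[i + 1])]
    return fine + [u[-1]]

# manufactured solution u = sin(pi x) for -u'' + u^2 = f, u(0) = u(1) = 0
f = lambda x: math.pi**2 * math.sin(math.pi * x) + math.sin(math.pi * x) ** 2

n, u = 8, [0.0] * 9
for _ in range(20):               # Step 1: Newton iteration on the coarse mesh
    u = linearized_step(u, f, n)

errors = []
for _ in range(3):                # Step 2: one linearized solve per refined mesh
    u, n = refine(u), 2 * n
    u = linearized_step(u, f, n)
    errors.append(max(abs(u[i] - math.sin(math.pi * i / n)) for i in range(n + 1)))
print(errors)                     # max-norm errors decrease level by level
```

For \alpha = 2, each Newton step for -u'' + u^2 = f coincides with the linearized problem (4.7) frozen at the current iterate, so Step 1 above realizes (4.6) and Step 2 performs exactly one such solve per level.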

Theorem 4.1 (A Priori Error Estimates)

Let f \in Y. Let u be a solution of (4.1), u_0 the solution of (4.6), and u_j the solution of (4.7). Assume that u \in H^r(\Omega) for 2 \le r \le k+1. Then, for a constant C > 0 that depends on \| u \|_{2,2} and is independent of the mesh sizes, we have

\| u - u_j \|_{s,2} \le C ( h_{j-1}^{2(r-s)} + h_j^{r-s} ), \quad s = 0, 1.

Proof

See [20, 22]. ∎

Remark 4.2

By referring to [20], we can prove that the result of (2.5) holds for X = L^2(\Omega) as well as for X = H_0^1(\Omega). Moreover, we see that if the exact solution u is in W_p^2(\Omega) with p > 2 for d = 2 and p = 12/5 for d = 3, then the following result holds:

(4.8) \| u_{h_j} - u_j \|_{1,2} \le C \| u_{h_j} - u_{j-1} \|_{0,p}^2 \ (\le C h_{j-1}^4),

which shows an improvement to O(h_{j-1}^4) compared with the O(h_{j-1}^2) obtained via (2.5).

From Theorem 2.8, we observe that

\| F(u_j) \|_X = \| u_j + S G(u_j) \|_X = \| u_j - \bar{u} \|_X,

where u ̄ H 0 1 ( Ω ) H 2 ( Ω ) is the unique solution for the Galerkin formulation of the linear elliptic equations

(4.9) (\nabla \bar{u}, \nabla v) = (f - u_j^\alpha, v) \quad \text{for all } v \in X.

The following error equation immediately follows from (4.7)–(4.9): for w_j := u_j^\alpha - \alpha u_j u_{j-1}^{\alpha-1} + (\alpha - 1) u_{j-1}^\alpha,

(4.10) (\nabla(\bar{u} - u_j), \nabla v) = -(w_j, v) \quad \text{for all } v \in X_j.

Let R_j(f) := f + \Delta u_j - u_j^\alpha and \bar{e} = \bar{u} - u_j.

Theorem 4.3 (Reliable A Posteriori Error Estimates)

For the error \bar{e}, we have | \bar{e} |_{1,2}^2 \lesssim \eta_j^2 + A_j^2, where the error estimator \eta_j is defined by \eta_j^2 := \sum_{T \in T_j} \eta_j(T)^2 with the local error estimator \eta_j(T) given by

\eta_j(T)^2 := h_T^2 \| R_j(f) \|_{0,2,T}^2 + \sum_{E \subset \partial T \setminus \partial\Omega} h_E \| [\partial u_j / \partial n] \|_{0,2,E}^2

and A_j = \| u_j - u_{j-1} \|_{0,3,\Omega}^2 for \alpha = 2 and A_j = \| u_j - u_{j-1} \|_{0,4,\Omega}^2 \| u_j + 2u_{j-1} \|_{0,3,\Omega} for \alpha = 3.

Proof

By integration by parts and (4.10), we get

| \bar{e} |_{1,2}^2 = (\nabla(\bar{u} - u_j), \nabla(\bar{e} - P_j \bar{e})) + (\nabla(\bar{u} - u_j), \nabla P_j \bar{e}) = \sum_{T \in T_j} \Big( (R_j(f), \bar{e} - P_j \bar{e})_T - \frac{1}{2} \sum_{E \subset \partial T \setminus \partial\Omega} \langle [\partial u_j / \partial n], \bar{e} - P_j \bar{e} \rangle_E \Big) - (w_j, P_j \bar{e}),

where P_j is the L^2-projection onto X_j. Using the error estimates of the L^2-projection, we conclude the proof. In fact, to estimate the term (w_j, P_j \bar{e}), we use the Hölder inequality and the Sobolev imbedding theorem. Then we have, for \alpha = 2,

| (w_j, P_j \bar{e}) | \le \| (u_j - u_{j-1})^2 \|_{0,3/2} \| P_j \bar{e} \|_{0,3} \lesssim \| u_j - u_{j-1} \|_{0,3}^2 \, | \bar{e} |_{1,2},

and for α = 3 ,

| (w_j, P_j \bar{e}) | \le \| (u_j - u_{j-1})^2 \|_{0,2} \| u_j + 2u_{j-1} \|_{0,3} \| P_j \bar{e} \|_{0,6} \lesssim \| u_j - u_{j-1} \|_{0,4}^2 \| u_j + 2u_{j-1} \|_{0,3} \, | \bar{e} |_{1,2}. ∎

Remark 4.4

Assume that h_{j-1}^2 \le h_j for j > 0. Then, by the triangle inequality, the Sobolev imbedding theorem (H^1(\Omega) is continuously imbedded in L^p(\Omega) for p \in [1, 6]) and Theorem 4.1, we have, for q = 3 or 4,

\| u_j - u_{j-1} \|_{0,q}^2 \lesssim \| u_j - u \|_{0,q}^2 + \| u - u_{j-1} \|_{0,q}^2 \lesssim \| u_j - u \|_{1,2}^2 + \| u - u_{j-1} \|_{1,2}^2 \lesssim ( h_{j-1}^{2(r-1)} + h_j^{r-1} )^2 + ( h_{j-2}^{2(r-1)} + h_{j-1}^{r-1} )^2 = O( h_{j-1}^{2(r-1)} ),

and

u j + 2 u j 1 0 , 3 u j + 2 u j 1 3 u 1 , 2 + 3 u 1 , 2 C ,

where a constant 𝐶 depends on u 2 , 2 and is independent of mesh sizes. As in the case of the Navier–Stokes equations, the additional term w j is of higher order O ( h j 1 2 ( r 1 ) ) compared to the other error estimates and the error u j u 1 , 2 = O ( h j r 1 ) if we take h j = O ( h j 1 2 ϵ ) for some positive 𝜖. Efficient a posteriori error estimates can be derived in a way similar to those addressed in the NSE using the usual inverse inequalities introduced in [25].

5 Adaptive Multi-level Algorithm

We introduce an adaptive multi-level algorithm for nonlinear problems based on the local error estimator \eta_j(T) and the global error estimator \eta_j derived in Sections 3 and 4. The adaptive procedure for the nonlinear problem goes as follows: find \phi_0 from Step 1 of Algorithm 1 on a given initial (uniform) mesh T_0, form a mesh T_1 uniformly refined from T_0, and generate an adaptively refined mesh T_{j+1} from T_j (j \ge 1) by repeating the following process:

SOLVE → ESTIMATE → MARK → REFINE.

The step SOLVE performs Step 2 of Algorithm 1 on the mesh T_j (j \ge 1). This process is iterated until the global error estimator \eta_j becomes smaller than a given tolerance. We summarize the adaptive multi-level algorithm for nonlinear problems as Algorithm 4.
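The loop can be sketched generically as follows. This is a schematic one-dimensional illustration with a hypothetical local indicator standing in for \eta_j(T) and Dörfler bulk marking; the function f, the marking parameter theta, and the tolerance are our own example choices, not quantities from the analysis above.

```python
import math

def estimate(xs, f):
    # hypothetical local indicator (stand-in for eta_j(T)): midpoint deviation
    # of f from its piecewise-linear interpolant, scaled by sqrt(interval length)
    return [abs(f(0.5 * (a + b)) - 0.5 * (f(a) + f(b))) * math.sqrt(b - a)
            for a, b in zip(xs, xs[1:])]

def mark(eta, theta=0.5):
    # Doerfler (bulk) marking: smallest set M with
    # sum_{T in M} eta(T)^2 >= theta * sum_T eta(T)^2
    order = sorted(range(len(eta)), key=lambda i: -eta[i])
    total = sum(e * e for e in eta)
    acc, M = 0.0, []
    for i in order:
        M.append(i)
        acc += eta[i] ** 2
        if acc >= theta * total:
            break
    return set(M)

def refine(xs, M):
    # bisect every marked interval
    out = [xs[0]]
    for i, (a, b) in enumerate(zip(xs, xs[1:])):
        if i in M:
            out.append(0.5 * (a + b))
        out.append(b)
    return out

f = lambda x: math.atan(50 * (x - 0.3))  # steep interior layer at x = 0.3
xs = [i / 8 for i in range(9)]
for _ in range(15):                      # SOLVE -> ESTIMATE -> MARK -> REFINE
    eta = estimate(xs, f)
    if math.sqrt(sum(e * e for e in eta)) < 1e-3:
        break
    xs = refine(xs, mark(eta))
print(len(xs))  # the mesh concentrates near the layer
```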

Algorithm 4 (Adaptive multi-level algorithm)

Figure 1

Multi-level process computing \phi_j \in X_j \subset X, j = 0, 1, \ldots, J, with J the finest level.

For the reader's convenience, we present Figure 1 to outline the (uniform/adaptive) multi-level algorithm. The details of the uniform multi-level algorithm can be found in [22]. Figure 1 compares the multi-level algorithm with a standard nonlinear solver (e.g., the Newton–Raphson algorithm). The nonlinear solver finds discrete nonlinear solutions by iterating on the fixed space X_J, whose dimension is much higher than that of X_0. The multi-level algorithm finds the solution \phi_0 with the nonlinear solver on the low-dimensional space X_0 and then repeatedly finds \phi_j by only one iteration on X_j with starting value \phi_{j-1} for j = 1, \ldots, J. The notations \{ \phi_k^{h_0} \}_{k=0}^{k_0} and \{ \phi_k^{h_J} \}_{k=0}^{k_J} denote the iteration sequences obtained from the nonlinear solver on the spaces X_0 and X_J, respectively. The thickness of the arrows indicates the computational cost at the given level; thicker means more costly.

6 Numerical Experiments

In this section, we perform various numerical experiments to test our adaptive multi-level algorithm and to validate the theory developed in this paper. We present the convergence of various errors and indicators on both uniform and adaptive meshes. In particular, we numerically verify that the adaptive multi-level algorithm exhibits quadratic convergence in the sense of (2.5).

6.1 Semilinear Elliptic Equations

We approximate the semilinear elliptic problem (4.1) with G ( λ , u ) = u α f , α = 2 , 3 , by using a conforming finite element space of piecewise linear polynomials. We introduce the following quantities: for 1 j J ,

E_{j,1}(T) := h_T \| R_j(f) \|_{0,2,T}, \qquad E_{j,2}(T) := \Big( \frac{1}{2} \sum_{E \subset \partial T \setminus \partial\Omega} h_E \| [\partial u_j / \partial n] \|_{0,2,E}^2 \Big)^{1/2}, \qquad E_{j,3}(T) := \begin{cases} \| u_j - u_{j-1} \|_{0,3,T}^2, & \alpha = 2, \\ \| u_j - u_{j-1} \|_{0,4,T}^2 \, \| u_j + 2u_{j-1} \|_{0,3,T}, & \alpha = 3. \end{cases}

We define the local error estimator η j ( T ) by

η j ( T ) := ( E j , 1 ( T ) 2 + E j , 2 ( T ) 2 + E j , 3 ( T ) 2 ) 1 / 2 ,

the global error estimator η j by

η j := ( T T j η j ( T ) 2 ) 1 / 2 ,

and efficiency indices EI j by

EI j := η j | u u j | 1 , 2 .

To show the performance, we consider two test examples: one with a boundary layer, the other with a corner singularity.

Example 6.1 (A Solution with a Boundary Layer)

We consider SEE (4.1) with

G ( λ , u ) = u α f , α = 2 , 3 , and Ω = ( 0 , 1 ) × ( 0 , 1 ) .

The exact solution 𝑢,

u(x, y) = \frac{1}{2} x y (1 - x)(1 - y) e^{10(x + y - 1)},

specifies data 𝑓.

In Table 1, we present the convergence history of the multi-level algorithm and the nonlinear solver on uniform meshes. Here, N_{el} denotes the number of elements of T_j. The solution obtained from the multi-level algorithm is as accurate as that of the nonlinear solver at each level j. However, the errors on the low levels (j \le 2) exhibit non-optimal convergence behavior, since the meshes on these levels are in the pre-asymptotic range. The errors attain the optimal order of convergence under further uniform refinement (j \ge 3). However, with uniform refinement, for high levels j \ge 3, the number of elements of T_j grows rapidly and the finite element space X_j requires too many degrees of freedom, which motivates the adaptive multi-level algorithm.
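The CO column can be reproduced from the tabulated errors: since the mesh size halves between levels, the observed convergence order is log2 of the ratio of consecutive errors. A minimal check using the L2 errors of the multi-level algorithm for alpha = 2 from Table 1:

```python
import math

# L2 errors of the multi-level solution for alpha = 2 on uniform meshes (Table 1)
errors = [6.6901e-1, 2.0875e-1, 5.3730e-2, 1.3509e-2, 3.3828e-3, 8.4606e-4]
# observed convergence order between consecutive levels (h is halved each time)
orders = [math.log2(e0 / e1) for e0, e1 in zip(errors, errors[1:])]
print([round(o, 2) for o in orders])  # [1.68, 1.96, 1.99, 2.0, 2.0]
```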

In Table 2 and Figures 2 and 3, we present numerical results from the adaptive multi-level algorithm. Efficiency indices EI_j = \eta_j / | u - u_j |_{1,2} are presented in Table 2. The history of the errors | u - u_j |_{1,2} and \eta_j, and the efficiency indices EI_j, are plotted on a logarithmic scale in Figure 2, where the x-axis represents the number of elements. The numerical data for EI_j show that the a posteriori error estimate is numerically reliable and efficient. The superiority of the adaptive multi-level algorithm can be clearly seen in Tables 1 and 2. Indeed, e.g., when \alpha = 2, the accuracy on the uniform mesh with N_{el} = 262,144 in Table 1 is comparable to the accuracy on the adaptive mesh with N_{el} = 36,448 in Table 2 (see boldface in Tables 1 and 2). Figure 3 shows the convergence orders of the indicators. The symbols in the legend of Figure 3 represent the indicators

E j , k := ( T T j E j , k ( T ) 2 ) 1 / 2 , k = 1 , 2 , 3 .

In Figure 3, we observe that the additional term E j , 3 is of higher order than the other indicators for both uniform and adaptive meshes as mentioned in Remark 4.4.

Table 1

Convergence orders (CO) of various errors on uniform meshes for Example 6.1

The columns are: \alpha, j, N_{el} (1/h_j); then, for the multi-level algorithm, \| u - u_j \|_{0,2}, CO, | u - u_j |_{1,2}, CO; and, for the nonlinear solver, \| u - u_{h_j} \|_{0,2}, CO, | u - u_{h_j} |_{1,2}, CO.
2 0 256(8) 6.6901e−1 1.9446e+1 6.6901e−1 1.9446e+1
1 1024(16) 2.0875e−1 1.68 1.1322e+1 0.78 2.0263e−1 1.72 1.1314e+1 0.78
2 4096(32) 5.3730e−2 1.96 5.8497e+0 0.95 5.3255e−2 1.93 5.8494e+0 0.95
3 16384(64) 1.3509e−2 1.99 2.9470e+0 0.99 1.3482e−2 1.98 2.9470e+0 0.99
4 65536(128) 3.3828e−3 2.00 1.4762e+0 1.00 3.3812e−3 2.00 1.4762e+0 1.00
5 262144(256) 8.4606e−4 2.00 7.3843e−1 1.00 8.4596e−4 2.00 7.3843e−1 1.00

3 0 256(8) 6.1077e−1 1.9426e+1 6.1077e−1 1.9426e+1
1 1024(16) 2.1849e−1 1.48 1.1368e+1 0.77 1.8097e−1 1.75 1.1307e+1 0.78
2 4096(32) 5.1247e−2 2.09 5.8541e+0 0.96 4.7263e−2 1.94 5.8500e+0 0.95
3 16384(64) 1.2075e−2 2.09 2.9472e+0 0.99 1.1936e−2 1.99 2.9471e+0 0.99
4 65536(128) 2.9973e−3 2.01 1.4762e+0 1.00 2.9917e−3 2.00 1.4762e+0 1.00
5 262144(256) 7.4873e−4 2.00 7.3844e−1 1.00 7.4840e−4 2.00 7.3844e−1 1.00
Table 2

Efficiency indices on adaptive meshes for Example 6.1

The columns, reported separately for \alpha = 2 and \alpha = 3, are: j, N_{el}, | u - u_j |_{1,2}, \eta_j, EI_j.
0 256 1.9446e+1 0 256 1.9426e+1
1 1024 1.1320e+1 5.7659e+1 5.09 1 1024 1.1368e+1 5.8250e+1 5.12
2 1076 7.5222e+0 3.8374e+1 5.10 2 1076 7.5233e+0 3.8552e+1 5.12
3 1260 5.1540e+0 2.5581e+1 4.96 3 1257 5.2785e+0 2.6042e+1 4.93
9 20447 7.4173e−1 3.6179e+0 4.88 9 19876 7.4923e−1 3.6707e+0 4.90
10 36448 5.3738e−1 2.6609e+0 4.95 10 35236 5.4581e−1 2.7078e+0 4.96
Figure 2

The history of errors | u u j | 1 , 2 and η j , and EI j on adaptive meshes for Example 6.1

(a) α = 2

(b) α = 3

Figure 3

Convergence plots of indicators E j , k for k = 1 , 2 , 3 for Example 6.1

(a) α = 2, uniform mesh
(b) α = 2, adaptive mesh
(c) α = 3, uniform mesh
(d) α = 3, adaptive mesh

To examine the quadratic convergence behavior in the sense of (2.5) for $X=L^{2}(\Omega)$ and in the sense of (4.8), we investigate the differences between $u_{j-1}$ and $u_h^{j}$ and between $u_{j}$ and $u_h^{j}$ in the $L^{p}$- and $H^{1}$-norms, together with their ratios. In Figure 4, we present these quantities and the ratios $R_L^{j}$ and $R_H^{j}$, where

$$d_L^{j-1} := \|u_h^{j}-u_{j-1}\|_{0,2},\quad d_L^{j} := \|u_h^{j}-u_{j}\|_{0,2},\quad R_L^{j} := \frac{\|u_h^{j}-u_{j}\|_{0,2}}{\|u_h^{j}-u_{j-1}\|_{0,2}^{2}},$$
$$d_H^{j-1} := \|u_h^{j}-u_{j-1}\|_{0,4},\quad d_H^{j} := |u_h^{j}-u_{j}|_{1,2},\quad R_H^{j} := \frac{|u_h^{j}-u_{j}|_{1,2}}{\|u_h^{j}-u_{j-1}\|_{0,4}^{2}}.$$

The ratios R L j and R H j are almost constant, which means that our adaptive multi-level algorithm retains quadratic convergence of Newton’s method across different mesh levels.
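The near-constancy of these ratios can be checked mechanically. The following minimal sketch (our own illustration, not the authors' code; the helper name is hypothetical) computes the ratios $d^{j}/(d^{j-1})^{2}$ from a pair of difference sequences:

```python
# Newton's method converges quadratically: d^j ≈ C (d^{j-1})^2, so the
# ratio R^j = d^j / (d^{j-1})^2 should stay roughly constant across levels j.
def quadratic_ratios(d_prev, d_curr):
    """d_prev[j] = ||u_h^j - u_{j-1}||, d_curr[j] = ||u_h^j - u_j||."""
    return [d / dp**2 for dp, d in zip(d_prev, d_curr)]

# Synthetic sequences with C = 0.2: the recovered ratios are (nearly) constant.
ratios = quadratic_ratios([1e-1, 1e-2], [2e-3, 2e-5])
```

A flat plot of these ratios over 𝑗, as in Figure 4, is exactly what signals that quadratic convergence is retained across mesh levels.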

Figure 4

The history of quadratic convergence and various differences for Example 6.1

(a) α = 2, uniform mesh
(b) α = 2, adaptive mesh
(c) α = 3, uniform mesh
(d) α = 3, adaptive mesh

Figure 5

Meshes and u 10 generated by the adaptive multi-level algorithm for Example 6.1

(a) j = 0
(b) j = 1
(c) j = 2
(d) j = 10
(e) Graph of u_10

We present the adaptive meshes T j for j = 0 , 1 , 2 , 10 and the approximate solution u 10 on the final mesh T 10 for α = 2 in Figure 5. We see that the adaptively designed meshes properly express the characteristics of the solution with a boundary layer in the upper right corner.

Example 6.2

Example 6.2 (A Solution with a Corner Singularity)

We consider SEE (4.1) with $G(\lambda,u)=u^{\alpha}-f$, $\alpha=2,3$, and the 𝐿-shaped domain $\Omega=(-1,1)\times(-1,1)\setminus[0,1)\times(-1,0]$. The exact solution 𝑢 in polar coordinates $(r,\theta)$,

$$u(r,\theta)=r^{2/3}\sin(2\theta/3)+r^{\beta}\sin(2\theta/3),$$

specifies the data 𝑓. On the Dirichlet boundary $\Gamma_{D}:=\{0\}\times[-1,0]\cup[0,1]\times\{0\}$, $u=0$, and on the Neumann boundary $\Gamma_{N}:=\partial\Omega\setminus\Gamma_{D}$, $\nabla u\cdot\boldsymbol{n}=g_{N}$ for the unit outward normal vector $\boldsymbol{n}$.

We note that $\Delta v=0$ for $v:=r^{2/3}\sin(2\theta/3)\in H^{s}(\Omega)$ with $s<5/3$, and that, for $\beta>1$, $w:=r^{\beta}\sin(2\theta/3)\in H^{2}(\Omega)$ and $G(u)=u^{\alpha}-f=\Delta w\in L^{2}(\Omega)$. We take $\beta=1.01$ for the numerical test. Note also that a priori error estimates for the Poisson model case on the 𝐿-shaped domain are well known (see [15, Remark 4.1, Example 4.5 and Problems 4.7]); for any $\epsilon>0$,

(6.1) $\|u-u_h\|_{1,2}\lesssim h^{2/3-\epsilon}\,\|u\|_{H^{5/3-\epsilon}(\Omega)},\qquad \|u-u_h\|_{0,2}\lesssim h^{4/3-\epsilon}\,\|u\|_{H^{5/3-\epsilon}(\Omega)}.$
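The harmonicity claim for $v=r^{2/3}\sin(2\theta/3)$ can be sanity-checked numerically. The sketch below (our own check, not part of the paper) evaluates the polar Laplacian $\Delta v = v_{rr} + v_r/r + v_{\theta\theta}/r^{2}$ by central differences at a point away from the re-entrant corner:

```python
import math

def laplacian_polar(v, r, th, h=1e-4):
    """Second-order central differences for Δv = v_rr + (1/r) v_r + (1/r^2) v_θθ."""
    v_rr = (v(r + h, th) - 2.0 * v(r, th) + v(r - h, th)) / h**2
    v_r = (v(r + h, th) - v(r - h, th)) / (2.0 * h)
    v_tt = (v(r, th + h) - 2.0 * v(r, th) + v(r, th - h)) / h**2
    return v_rr + v_r / r + v_tt / r**2

v = lambda r, th: r ** (2.0 / 3.0) * math.sin(2.0 * th / 3.0)
# The residual is tiny (only discretization error): v is harmonic away from the corner.
residual = abs(laplacian_polar(v, 0.7, 1.0))
```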

To take the Neumann part of the boundary condition into account, we modify indicator E j , 2 as follows:

$$E_{j,2}(T):=\Big(\tfrac{1}{2}\sum_{E\subset\partial T\setminus\partial\Omega}h_E\,\big\|[\nabla u_j\cdot\boldsymbol{n}]\big\|_{0,2,E}^{2}+\sum_{E\subset\partial T\cap\Gamma_N}h_E\,\big\|\nabla u_j\cdot\boldsymbol{n}-g_N\big\|_{0,2,E}^{2}\Big)^{1/2}.$$

Table 3

Convergence orders (CO) of various errors on uniform meshes for Example 6.2

Multi-level algorithm Nonlinear solver

𝑗   N_el (1/h_j)   ||u − u_j||_{0,2}   CO   |u − u_j|_{1,2}   CO   ||u − u_h^j||_{0,2}   CO   |u − u_h^j|_{1,2}   CO
0 96 2.3413e−2 2.3451e−1 2.3413e−2 2.3451e−1
1 384 9.7023e−3 1.27 1.4761e−1 0.67 9.7329e−3 1.27 1.4760e−1 0.67
2 1536 3.9676e−3 1.29 9.1644e−2 0.69 3.9724e−3 1.29 9.1644e−2 0.69
3 6144 1.5829e−3 1.33 5.6561e−2 0.70 1.5836e−3 1.33 5.6561e−2 0.70
4 24576 6.2119e−4 1.35 3.4866e−2 0.70 6.2129e−4 1.35 3.4866e−2 0.70
5 98304 2.4160e−4 1.36 2.1517e−2 0.70 2.4161e−4 1.36 2.1517e−2 0.70
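The CO columns can be reproduced directly: on uniform meshes the mesh size halves at each level, so the observed order between consecutive levels is the base-2 logarithm of the error ratio. A small sketch (our own, not the authors' code), applied to the $H^{1}$-seminorm errors of the multi-level algorithm in Table 3:

```python
import math

def convergence_orders(errors):
    """Observed orders between consecutive uniform refinements (h -> h/2)."""
    return [math.log2(e_prev / e) for e_prev, e in zip(errors, errors[1:])]

# |u - u_j|_{1,2} for the multi-level algorithm in Table 3 (Example 6.2):
errors = [2.3451e-1, 1.4761e-1, 9.1644e-2, 5.6561e-2, 3.4866e-2, 2.1517e-2]
orders = [round(c, 2) for c in convergence_orders(errors)]
# orders == [0.67, 0.69, 0.7, 0.7, 0.7], matching the CO column and the
# expected rate h^{2/3} of (6.1).
```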
Figure 6

Convergence orders of errors and etas on adaptive meshes for Example 6.2

Table 4

Errors and etas, and efficiency indices on adaptive meshes for Example 6.2

𝑗   N_el   |u − u_j|_{1,2}   η_j   EI_j
0 96 2.3451e−1
1 384 1.4761e−1 4.8126e−1 3.26
2 415 1.1532e−1 3.7244e−1 3.23
3 548 8.9599e−2 2.8853e−1 3.22
4 991 6.5174e−2 2.0990e−1 3.22
5 1875 4.6944e−2 1.5114e−1 3.22
6 3705 3.3205e−2 1.0794e−1 3.25
7 7177 2.3774e−2 7.7373e−2 3.25
8 13858 1.7006e−2 5.5668e−2 3.27
Figure 7

The convergence of the efficiency indices EI_j and the ratios R_L^j and R_H^j for Example 6.2

(a) Uniform convergence
(b) Adaptive convergence
(c) Uniform ratios
(d) Adaptive ratios

Figure 8

Meshes and u 6 generated by the adaptive multi-level algorithm for Example 6.2

(a) j = 0
(b) j = 1
(c) j = 2
(d) j = 6
(e) Graph of u_6

We only provide results for α = 2, as the case α = 3 shows similar behavior. We observe from Table 3 that the convergence orders on the uniform meshes are on par with those for the Poisson problem (cf. (6.1)). In Table 4 and Figure 6, we display the history of the errors on the adaptive meshes for each level 𝑗, which shows the full rate of convergence. The superiority of the adaptive multi-level algorithm can be clearly seen in Tables 3 and 4. Indeed, the accuracy on the uniform mesh with N_el = 24,576 in Table 3 is comparable to the accuracy on the adaptive mesh with N_el = 3,705 in Table 4 (see boldface in Tables 3 and 4).

In Figure 7 (a) and (b), we present the indicators and their convergence orders on uniform and adaptive meshes, respectively. The convergence order of the additional term E_{j,3} is higher than that of the other indicators E_{j,1} and E_{j,2}. The ratios R_L^j and R_H^j are almost constant in Figure 7 (c) and (d), which means that our adaptive multi-level algorithm retains the quadratic convergence of Newton's method across different mesh levels.

We present adaptive meshes T j for j = 0 , 1 , 2 , 6 and the approximate solution u 6 on the mesh T 6 in Figure 8. We see that the adaptively designed meshes well reflect the corner singularity of the solution.

6.2 Navier–Stokes Equations

Now, we consider the Navier–Stokes equations (3.1)–(3.3). We use the lowest-order Raviart–Thomas finite element space $X_j=\widehat{RT}{}_0^{2}(\mathcal{T}_j)\times P_0^{2}(\mathcal{T}_j)$. We introduce the following error indicators: for $1\le j\le J$,

$$\begin{aligned}
E_{j,1}(T) &:= h_T\,\|R_j(\boldsymbol{f})\|_{0,2,T}, &\qquad
E_{j,2}(T) &:= h_T\,\|A\boldsymbol{\sigma}_j-\nabla\boldsymbol{u}_j\|_{0,2,T},\\
E_{j,3}(T) &:= h_T\,\|\operatorname{Curl}(A\boldsymbol{\sigma}_j)\|_{0,2,T}, &\qquad
E_{j,4}(T) &:= \Big(\tfrac{1}{2}\sum_{E\subset\partial T\setminus\partial\Omega}h_E\,\|[\gamma_t(A\boldsymbol{\sigma}_j)]\|_{0,2,E}^{2}\Big)^{1/2},\\
E_{j,5}(T) &:= \Big(\sum_{E\subset\partial T\cap\partial\Omega}h_E\,\|\gamma_t(A\boldsymbol{\sigma}_j-\nabla\boldsymbol{g})\|_{0,2,E}^{2}\Big)^{1/2}, &\qquad
E_{j,6}(T) &:= \Big(\sum_{E\subset\partial T\cap\partial\Omega}h_E\,\|\boldsymbol{g}-\boldsymbol{g}_j\|_{0,2,E}^{2}\Big)^{1/2},\\
E_{j,7}(T) &:= \|(\boldsymbol{u}_j-\boldsymbol{u}_{j-1})\cdot A(\boldsymbol{\sigma}_j-\boldsymbol{\sigma}_{j-1})\|_{0,6/5,T}.
\end{aligned}$$

We define the local error estimator $\eta_j(T)$ by

$$\eta_j(T):=\Big(\sum_{k=1}^{7}E_{j,k}(T)^{2}\Big)^{1/2},$$

the global error estimator $\eta_j$ by

$$\eta_j:=\Big(\sum_{T\in\mathcal{T}_j}\eta_j(T)^{2}\Big)^{1/2},$$

and the efficiency index $\mathrm{EI}_j$ by

$$\mathrm{EI}_j:=\frac{\eta_j}{\|\phi-\phi_j\|_{X}}.$$
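The aggregation of the seven local indicators into $\eta_j(T)$, $\eta_j$, and $\mathrm{EI}_j$ is a plain root-sum-of-squares. A hedged sketch (the function names are ours, not from the paper's code):

```python
import math

def local_estimator(indicators_on_T):
    """eta_j(T) from the local indicators E_{j,1}(T), ..., E_{j,7}(T)."""
    return math.sqrt(sum(E**2 for E in indicators_on_T))

def global_estimator(indicators):
    """indicators: one list [E_{j,1}(T), ..., E_{j,7}(T)] per element T."""
    return math.sqrt(sum(local_estimator(E_T)**2 for E_T in indicators))

def efficiency_index(eta_j, error_j):
    """EI_j = eta_j / ||phi - phi_j||_X."""
    return eta_j / error_j

# Level j = 1 of Table 6: eta_1 = 1.164e+1, ||phi - phi_1||_X = 4.556065e+0,
# giving EI_1 ≈ 2.554 as reported there.
ei_1 = efficiency_index(1.164e+1, 4.556065e+0)
```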

Example 6.3

Example 6.3 (A Solution with a Boundary Layer)

We consider NSE (3.1)–(3.3) with $\Omega=(0,1)\times(0,1)$ and $\nu=1$. The exact solutions $\boldsymbol{u}=(u_1,u_2)$ and 𝑝,

$$u_1(x,y)=\tfrac{1}{2}x^{2}y(x-1)^{2}(y-1)\,(4y-10y+10y^{2}-2)\,e^{10(x+y-1)},$$
$$u_2(x,y)=-\tfrac{1}{2}xy^{2}(x-1)(y-1)^{2}\,(4x-10x+10x^{2}-2)\,e^{10(x+y-1)},$$
$$p(x,y)=\cos(\pi x)\cos(\pi y),$$

specify the data 𝒇 and 𝒈.

Table 5 shows the convergence history of the multi-level algorithm and the nonlinear solver on uniform meshes. We observe that the errors ||ϕ − ϕ_j||_X at low levels (j ≤ 2) exhibit non-optimal convergence behavior, since the meshes at low levels are in the pre-asymptotic range. The errors attain the optimal order of convergence after further uniform refinement (j ≥ 3). However, under uniform refinement at high levels j ≥ 3, the number of elements in T_j increases rapidly and the finite element space X_j requires too many degrees of freedom, as in Example 6.1; here the adaptive multi-level algorithm is beneficial.

Table 5

Convergence orders (CO) of errors on uniform meshes for Example 6.3

Multi-level algorithm Nonlinear solver

𝑗   N_el (1/h_j)   ||ϕ − ϕ_j||_X   CO   ||ϕ − ϕ_h^j||_X   CO
0 256(8) 7.903738e+0 7.903738e+0
1 1024(16) 4.556065e+0 0.795 4.556080e+0 0.795
2 4096(32) 2.436524e+0 0.903 2.436521e+0 0.903
3 16384(64) 1.245096e+0 0.969 1.245096e+0 0.969
4 65536(128) 6.262103e−1 0.992 6.262102e−1 0.992
5 262144(256) 3.135742e−1 0.998 3.135742e−1 0.998
Table 6

Efficiency indices on adaptive meshes for Example 6.3

𝑗   N_el   ||ϕ − ϕ_j||_X   η_j   EI_j
0 256 7.903738e+0
1 1024 4.556065e+0 1.164e+1 2.554
2 1206 2.472931e+0 4.728e+0 1.912
3 1486 1.525288e+0 2.924e+0 1.917
4 2260 1.137552e+0 2.025e+0 1.780
5 3492 7.797460e−1 1.455e+0 1.866
6 6130 5.987671e−1 1.066e+0 1.780
7 10614 4.118791e−1 7.717e−1 1.874
8 20130 3.183675e−1 5.678e−1 1.783
9 35736 2.221120e−1 4.165e−1 1.875
Figure 9

Various behaviors for Example 6.3

(a) EI_j on adaptive mesh
(b) Convergence of indicators on uniform mesh
(c) Convergence of indicators on adaptive mesh
(d) Quadratic convergence behavior

Figure 10

Meshes and | u 6 | generated by the adaptive multi-level algorithm for Example 6.3

(a) j = 0
(b) j = 1
(c) j = 2
(d) j = 6
(e) Graph of |u_6|

We present results for the adaptive multi-level algorithm applied to Example 6.3, featuring a boundary layer, in Table 6. The superiority of the adaptive multi-level algorithm can be clearly seen in Tables 5 and 6. Indeed, the accuracy on the uniform mesh with N_el = 65,536 in Table 5 is comparable to the accuracy on the adaptive mesh with N_el = 6,130 in Table 6 (see boldface in Tables 5 and 6). We present the history of the errors ||ϕ − ϕ_j||_X and η_j, and the efficiency indices EI_j, in Figure 9 (a).

Figure 9 (b) and (c) illustrate the history of the indicators on uniform and adaptive meshes, respectively. Here, the indicators E_{j,k} in Figure 9 (b) and (c) are defined by

$$E_{j,k}:=\Big(\sum_{T\in\mathcal{T}_j}E_{j,k}(T)^{2}\Big)^{1/2},\qquad k=1,2,\ldots,7.$$

We observe that the additional term E_{j,7} has a smaller value and converges at a higher order than the other indicators on both uniform and adaptive meshes, as mentioned in Remark 3.19. Similar behavior was observed in the case of SEE.

To examine the quadratic convergence behavior in the sense of (2.5), we investigate the difference between $\phi_{j-1}$ and $\phi_h^{j}$ and the difference between $\phi_{j}$ and $\phi_h^{j}$, together with their ratios. These quantities are displayed in Figure 9 (d), where the ratios $R_X^{j}$ are plotted with a bold line. The notation in the legend of Figure 9 (d) is defined by

$$d_X^{j-1}:=\|\phi_h^{j}-\phi_{j-1}\|_{X},\quad d_X^{j}:=\|\phi_h^{j}-\phi_{j}\|_{X},\quad R_X^{j}:=\frac{\|\phi_h^{j}-\phi_{j}\|_{X}}{\|\phi_h^{j}-\phi_{j-1}\|_{X}^{2}}.$$

We find that the ratios R X j are almost constant, which means that our adaptive multi-level algorithm maintains quadratic convergence of Newton’s method across different mesh levels.

In Figure 10, we exhibit the meshes T_j for j = 0, 1, 2, 6 generated by the adaptive multi-level algorithm and a graph of the Euclidean norm |u_6| of the final approximate solution on T_6. We see that the adaptively designed meshes properly capture the boundary layer of the solution in the upper right corner.

7 Conclusion

The adaptive multi-level algorithm analyzed in this paper can be used to efficiently solve nonlinear problems with singularities or layers. We start with a nonlinear system on a very coarse mesh and then solve linearized systems on successively refined meshes with adaptivity. The BRR framework makes it possible to use the general theory established for a linear problem associated with the given nonlinear equations. In particular, a posteriori error estimates for the linear problem can be utilized to find reliable error estimators for the nonlinear problem.

As applications, the Navier–Stokes equations and the semilinear elliptic equations are considered. Reliable and efficient residual-type a posteriori error estimators are constructed for the adaptive multi-level algorithm for the considered problems. With a few modifications, we apply existing a posteriori error estimators for the linear Stokes and elliptic equations corresponding to the NSE and SEE, respectively. Note that the multi-level algorithm with uniform meshes has quadratic convergence in the sense of (2.5). We emphasize that the multi-level algorithm with adaptive meshes retains the quadratic convergence of Newton's method across different mesh levels, which is validated by the numerical results presented in this paper.

Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Award Identifier / Grant number: NRF-2021R1F1A106243413

Award Identifier / Grant number: NRF-2022R1A2B5B02002481

Award Identifier / Grant number: NRF-2020R1I1A1A01070361

Funding statement: Dongho Kim was supported by NRF-2021R1F1A106243413. The research of Eun-Jae Park was supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (NRF-2022R1A2B5B02002481). Boyoon Seo was supported by NRF-2020R1I1A1A01070361.

References

[1] M. Ainsworth and J. T. Oden, A Posteriori Error Estimation in Finite Element Analysis, Pure Appl. Math. (New York), Wiley-Interscience, New York, 2000. 10.1002/9781118032824Search in Google Scholar

[2] I. Babuška and T. Strouboulis, The Finite Element Method and its Reliability, Numer. Math. Sci. Comput., Oxford University, New York, 2001. 10.1093/oso/9780198502760.001.0001Search in Google Scholar

[3] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods, Springer Ser. Comput. Math. 15, Springer, New York, 1991. 10.1007/978-1-4612-3172-1Search in Google Scholar

[4] F. Brezzi, J. Rappaz and P.-A. Raviart, Finite-dimensional approximation of nonlinear problems. I. Branches of nonsingular solutions, Numer. Math. 36 (1980/81), no. 1, 1–25. 10.1007/BF01395985Search in Google Scholar

[5] Z. Cai, C. Wang and S. Zhang, Mixed finite element methods for incompressible flow: Stationary Navier–Stokes equations, SIAM J. Numer. Anal. 48 (2010), no. 1, 79–94. 10.1137/080718413Search in Google Scholar

[6] Z. Cai and Y. Wang, A multigrid method for the pseudostress formulation of Stokes problems, SIAM J. Sci. Comput. 29 (2007), no. 5, 2078–2095. 10.1137/060661429Search in Google Scholar

[7] G. Caloz and J. Rappaz, Numerical analysis for nonlinear and bifurcation problems, Handbook of Numerical Analysis. Vol. V, North-Holland, Amsterdam (1997), 487–637. 10.1016/S1570-8659(97)80004-XSearch in Google Scholar

[8] C. Carstensen, A posteriori error estimate for the mixed finite element method, Math. Comp. 66 (1997), no. 218, 465–476. 10.1090/S0025-5718-97-00837-5Search in Google Scholar

[9] C. Carstensen and S. A. Funken, A posteriori error control in low-order finite element discretisations of incompressible stationary flow problems, Math. Comp. 70 (2001), no. 236, 1353–1381. 10.1090/S0025-5718-00-01264-3Search in Google Scholar

[10] C. Carstensen, D. Kim and E.-J. Park, A priori and a posteriori pseudostress-velocity mixed finite element error analysis for the Stokes problem, SIAM J. Numer. Anal. 49 (2011), no. 6, 2501–2523. 10.1137/100816237Search in Google Scholar

[11] P. Clément, Approximation by finite element functions using local regularization, RAIRO Oper. Res. 9 (1975), no. R-2, 77–84. 10.1051/m2an/197509R200771Search in Google Scholar

[12] V. Ervin, W. Layton and J. Maubach, A posteriori error estimators for a two-level finite element method for the Navier–Stokes equations, Numer. Methods Partial Differential Equations 12 (1996), no. 3, 333–346. 10.1002/(SICI)1098-2426(199605)12:3<333::AID-NUM4>3.0.CO;2-PSearch in Google Scholar

[13] V. Girault and P.-A. Raviart, Finite Element Methods for Navier–Stokes equations, Springer Ser. Comput. Math. 5, Springer, Berlin, 1986. 10.1007/978-3-642-61623-5Search in Google Scholar

[14] V. John, Residual a posteriori error estimates for two-level finite element methods for the Navier-Stokes equations, Appl. Numer. Math. 37 (2001), no. 4, 503–518. 10.1016/S0168-9274(00)00058-1Search in Google Scholar

[15] C. Johnson, Numerical Solution of Partial Differential Equations by the Finite Element Method, Cambridge University, Cambridge, 1987. Search in Google Scholar

[16] D. Kim and E.-J. Park, Primal mixed finite-element approximation of elliptic equations with gradient nonlinearities, Comput. Math. Appl. 51 (2006), no. 5, 793–804. 10.1016/j.camwa.2006.03.006Search in Google Scholar

[17] D. Kim and E.-J. Park, A posteriori error estimator for expanded mixed hybrid methods, Numer. Methods Partial Differential Equations 23 (2007), no. 2, 330–349. 10.1002/num.20178Search in Google Scholar

[18] D. Kim and E.-J. Park, A posteriori error estimators for the upstream weighting mixed methods for convection diffusion problems, Comput. Methods Appl. Mech. Engrg. 197 (2008), no. 6–8, 806–820. 10.1016/j.cma.2007.09.009Search in Google Scholar

[19] D. Kim and E.-J. Park, A priori and a posteriori analysis of mixed finite element methods for nonlinear elliptic equations, SIAM J. Numer. Anal. 48 (2010), no. 3, 1186–1207. 10.1137/090747002Search in Google Scholar

[20] D. Kim, E.-J. Park and B. Seo, A unified framework for two-grid methods for a class of nonlinear problems, Calcolo 55 (2018), no. 4, Paper No. 45. 10.1007/s10092-018-0287-ySearch in Google Scholar

[21] D. Kim, E.-J. Park and B. Seo, Optimal error estimates for the pseudostress formulation of the Navier–Stokes equations, Appl. Math. Lett. 78 (2018), 24–30. 10.1016/j.aml.2017.10.017Search in Google Scholar

[22] D. Kim, E.-J. Park and B. Seo, Convergence of multi-level algorithms for a class of nonlinear problems, J. Sci. Comput. 84 (2020), no. 2, Paper No. 34. 10.1007/s10915-020-01287-wSearch in Google Scholar

[23] K. Y. Kim, A posteriori error analysis for locally conservative mixed methods, Math. Comp. 76 (2007), no. 257, 43–66. 10.1090/S0025-5718-06-01903-XSearch in Google Scholar

[24] F. A. Milner and E.-J. Park, Mixed finite-element methods for Hamilton–Jacobi–Bellman-type equations, IMA J. Numer. Anal. 16 (1996), no. 3, 399–412. 10.1093/imanum/16.3.399Search in Google Scholar

[25] R. Verfürth, A posteriori error estimates for nonlinear problems. Finite element discretizations of elliptic equations, Math. Comp. 62 (1994), no. 206, 445–475. 10.1090/S0025-5718-1994-1213837-1Search in Google Scholar

[26] R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques, Teubner-Wiley, Stuttgart, 1996. Search in Google Scholar

[27] J. Xu, A novel two-grid method for semilinear elliptic equations, SIAM J. Sci. Comput. 15 (1994), no. 1, 231–237. 10.1137/0915016Search in Google Scholar

[28] J. Xu, Two-grid discretization techniques for linear and nonlinear PDEs, SIAM J. Numer. Anal. 33 (1996), no. 5, 1759–1777. 10.1137/S0036142992232949Search in Google Scholar

Received: 2023-03-31
Revised: 2023-12-04
Accepted: 2024-02-07
Published Online: 2024-02-28

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
