Abstract
We consider an optimization problem for the eigenvalues of a multi-material elastic structure that was previously introduced by Garcke et al. (Adv. Nonlinear Anal. 11:159–197, 2022). There, the elastic structure is represented by a vector-valued phase-field variable, and a corresponding optimality system consisting of a state equation and a gradient inequality was derived. In the present paper, we pass to the sharp-interface limit in this optimality system by the technique of formally matched asymptotics. To this end, we derive suitable Lagrange multipliers that allow us to formulate the gradient inequality as a pointwise equality. Afterwards, we introduce inner and outer expansions, relate them by suitable matching conditions and formally pass to the sharp-interface limit by comparing the leading order terms in the state equation and in the gradient equality. Furthermore, the relation between these formally derived first-order conditions and results of Allaire and Jouve (Comput. Methods Appl. Mech. Eng. 194:3269–3290, 2005) obtained in the framework of classical shape calculus is discussed. Finally, we provide numerical simulations for a variety of examples. In particular, we illustrate the sharp-interface limit and also consider a joint optimization problem of simultaneous compliance and eigenvalue optimization.
1 Introduction
The goal of structural shape and topology optimization is to find the optimal distribution of materials in a prescribed region, the so-called design domain. Here, in addition to pure shape optimization, also the topology of the structure is to be optimized. This includes the formation of holes (void regions) in the structure as well as the merging and splitting of connected material components. In many applications, certain properties of the materials (such as their elastic properties) as well as additional side conditions (e.g., volume constraints or support conditions) need to be taken into account within the optimization problem.
Besides the optimization of shape and topology, the optimization of eigenvalues is an important task in engineering science to make structures robust against vibrations. It has been observed that structures are less susceptible to vibrations if their principal eigenvalue is large, see [18, Section 2], [4] and also [45] for concrete examples and further references. Heuristically, this can be explained by the fact that larger principal eigenvalues are associated with higher temporal frequencies which correspond to smaller wavelengths of the oscillations.
The traditional mathematical tool to handle shape optimization problems is the calculus of shape derivatives based on boundary variations (see, e.g., [5, 6, 35, 59, 66, 67]). However, this approach requires frequent remeshing, which leads to high computational costs, and it cannot deal with topological changes; see also [62] for a comprehensive discussion. In some situations, it is possible to handle topology changes by means of homogenization methods (see, e.g., [3]) or variants of this approach such as the SIMP method (see, e.g., [18, 26]). A drawback of this method occurring in applications to spectral problems is the phenomenon of so-called localized eigenmodes (also often referred to as spurious eigenmodes), see [7, 18, 29, 63]. In this context, localized eigenmodes are eigenfunctions which are supported only in the void regions and pollute the spectrum with low eigenvalues. Especially in recent times, the level-set method has become a popular approach for topology optimization problems. After the method was developed in [61], it has been used extensively in the literature (see, e.g., [7, 10, 30, 55, 60, 62]). Although the level-set method is capable of dealing with topological changes, difficulties can arise if voids are to be created.
In this paper, we consider an optimization problem that was introduced in [45]. There, the authors employed a different method to optimize the shape and the topology as well as a finite selection of eigenvalues of an elastic structure, namely the so-called (multi-)phase-field approach. This method for shape and topology optimization was first developed in [26] and subsequently used frequently in the literature. We refer the reader to [11, 19, 20, 22, 27, 31, 32, 34, 36,37,38, 58, 64] to at least mention some of the various contributions.
In [45], an elastic structure consisting of \(N-1\) materials is described by a multi-phase-field variable. This is a vector-valued function \({\varvec{\varphi }}:\Omega \rightarrow \mathbb {R}^N\) whose components \(\varphi ^1,...,\varphi ^{N-1}\) represent the volume fractions of the materials, and \(\varphi ^N\) represents the void (i.e., the region where no material is present). In particular, the components of \({\varvec{\varphi }}\) are restricted to attain their values only in the interval [0, 1]. In most parts of the design domain, the materials are expected to appear in their pure form, meaning that the corresponding component of the multi-phase-field \({\varvec{\varphi }}\) attains the value one, whereas all other components are zero. These regions are separated by diffuse interfaces, which are thin layers between the pure phases whose thickness is proportional to a small parameter \(\varepsilon >0\). In particular, \({\varvec{\varphi }}\) is expected to exhibit a continuous transition between the values zero and one at these diffuse interfaces. The main advantage of the phase-field approach in the context of shape and topology optimization is that topological changes (such as merging or splitting of material components or the creation of holes) during the optimization process can be handled without any problems. The optimization problem in [45] is formulated as a minimization problem for an objective functional which involves a selection of eigenvalues as well as a Ginzburg–Landau type penalisation term for the phase-field. For this problem, the existence of at least one global minimizer was established and a first-order necessary optimality condition for local minimizers was derived. A detailed mathematical formulation of the optimization problem from [45] will be presented in Sect. 2.
The main goal of this paper is to derive the sharp-interface limit of the aforementioned optimization problem from [45]. This means that we want to send the parameter \(\varepsilon \), which is related to the thickness of the diffuse interface, to zero. In this way, we can relate the diffuse-interface approach from [45] to the physically reasonable scenario of sharp interfaces. In particular, one of our key goals is to show that minimizers of the problem in the diffuse-interface framework converge to minimizers of a corresponding sharp-interface optimization problem.
Qualitatively, there are two ways to deal with this passage to the limit: the rigorous investigation of the \(\Gamma \)-limit of the involved cost functional, and the formal method of matched asymptotic expansions. For a rigorous discussion of the sharp-interface limit of diffuse-interface models describing elastic systems, we refer the reader to [8, 21]. There, the void is modeled as a further material having low but non-degenerate stiffness, which is crucial for the analysis. Up to now, to the best of our knowledge, there is no rigorous \(\Gamma \)-limit analysis for spectral problems in the case of degenerating stiffness in the void regions.
As a first step towards the task of dealing with this delicate problem, the sharp-interface \(\Gamma \)-limit for an optimization problem involving a selection of eigenvalues of the Dirichlet Laplacian was rigorously established in [44]. A relation between the minimization of the principal eigenvalue of the Dirichlet Laplacian on the phase-field level and the Faber–Krahn inequality on the sharp-interface level was discussed in [54].
In order to understand the sharp-interface limit, we thus intend to apply the technique of formally matched asymptotic expansions to the optimization problem from [45]. This technique has already been applied to various phase-field models (especially of Allen–Cahn or Cahn–Hilliard type), see, e.g., [1, 2, 16, 20, 28, 42, 46, 47, 53]. For comprehensive overviews of this technique, we refer to [39, 41, 56].
The basic strategy of this formal approach is as follows: We assume that the phase-field as well as the corresponding eigenvalues and eigenfunctions each possess an inner asymptotic expansion and an outer asymptotic expansion, both given by a power series with respect to the interface parameter \(\varepsilon \). The inner expansions approximate the aforementioned quantities “close” to the diffuse interface where the phase-transition takes place, whereas the outer expansions approximate these quantities in regions that are “far” away from the interface where only the pure phases are present.
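Schematically, and in generic notation that may differ from the paper's Sect. 4 (\(\varphi_k\) for the outer coefficients, \(\Phi_k\) for the inner ones, \(d\) the signed distance to the limit interface, \(s\) a tangential coordinate), these two ansätze read:

```latex
% Outer expansion, valid away from the interface:
\varphi_\varepsilon(x) \approx \sum_{k\ge 0} \varepsilon^{k}\, \varphi_k(x),
\qquad
% Inner expansion, valid near the interface, written in the
% rescaled normal coordinate z = d(x)/\varepsilon:
\varphi_\varepsilon(x) \approx \sum_{k\ge 0} \varepsilon^{k}\, \Phi_k(z,s).
```

Analogous expansions are assumed for the eigenvalues and eigenfunctions.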
Plugging the outer expansions into the eigenvalue equation on the diffuse-interface level, a comparison of the leading order terms leads to limit eigenvalue equations on the sharp-interface level. At this point we will include a discussion about localized eigenmodes. As also mentioned above, in numerical simulations the formation of eigenmodes that are supported only in void areas and produce eigenvalues which pollute the low part of the spectrum (which we are interested in) is a major problem. We will see that our asymptotic approach is able to deal with such localized eigenmodes. More precisely, we will see that if such modes appear, then the corresponding eigenvalues will diverge to infinity as \(\varepsilon \rightarrow 0\). Thus, if \(\varepsilon >0\) is sufficiently small, localized eigenmodes do not affect the lower part of the spectrum that is considered in our optimization problem.
The inner expansions are used to describe the aforementioned quantities in tubular neighborhoods around the interfaces. The distinction between inner and outer regions is needed as we expect the phase-field to change its values rapidly in regions close to the interface. This is because the diffuse interface will become infinitesimally thin as \(\varepsilon \rightarrow 0\). Here, the main idea is to introduce a rescaled coordinate system which takes the \(\varepsilon \)-scaling of this region into account.
After studying these two forms of expansions separately, it is crucial to match both expansions in a so-called intermediate region. This means we compare both expansions by exploiting the two different coordinate systems they are formulated in. Plugging these relations into the optimality system and comparing the leading order terms, we obtain boundary conditions for the previously obtained limit eigenvalue equations. We observe that the boundary condition on the free boundary will essentially be of homogeneous Neumann type. Furthermore, we use the inner expansions to derive a limit equality from the strong formulation of the gradient inequality. The limit eigenvalue equations together with this gradient equality will then constitute the optimality system of a corresponding sharp-interface optimization problem. This will be justified from the viewpoint of classical shape calculus (see, e.g., [7]) by relating the limit of the gradient inequality to the shape derivative of the associated cost functional.
However, in order to apply the technique of formally matched asymptotics, we first need to reformulate the gradient inequality on the diffuse-interface level as a pointwise gradient equality by introducing suitable Lagrange multipliers. Under an additional regularity assumption on the involved eigenfunctions, this is achieved by employing a regularization technique following the ideas of [23], in which we eventually pass to the limit. A key benefit of this strategy is that it provides an explicit construction of the Lagrange multipliers arising from the constraints of our optimization problem. This specific knowledge about the Lagrange multipliers will turn out to be essential for the asymptotic analysis. As a byproduct, we also prove that the phase-field variable \({\varvec{\varphi }}\) solving the original gradient inequality is actually \(H^2\)-regular under the aforementioned assumption of suitably regular eigenfunctions.
The present paper is structured as follows. In Sect. 2, we first introduce the theory that is necessary to formulate the diffuse-interface optimization problem along with its first-order necessary optimality condition. The derivation of the strong formulation of the gradient inequality will then be performed in Sect. 3. Using the outer expansions, we derive the state equations of the limit problem in Sect. 4.1. In order to construct inner expansions we first analyze in Sect. 4.3 how the involved differential operators are reformulated in a suitable rescaled coordinate system. Suitable matching conditions connecting the outer expansions with the inner expansions are then derived in Sect. 5. In Sect. 6, we use the inner expansions to derive boundary conditions on the free boundary in the sharp-interface setting as well as the sharp-interface limit of the gradient inequality. Then, in Sect. 7, we comprehensively state the limit optimality system, and in Sect. 8, the first-order necessary optimality condition on the sharp-interface level is related to classical shape calculus. Finally, in Sect. 9, we present several numerical solutions for concrete optimization problems on the diffuse-interface level. In this context, we also discuss suitable choices of the model parameters. In particular, we observe that our results using the phase-field approach compare very well to similar numerical results obtained in [7] by means of the level-set method combined with classical shape calculus on the sharp-interface level.
2 Formulation of the Problem
In this section, we recall the framework introduced in [45] in order to formulate and understand the optimality system our analysis is based on. To this end, we first introduce the key assumptions which shall hold throughout the paper.
2.1 General Assumptions
-
(A1)
The design domain \(\Omega \subset \mathbb {R}^d\) is a bounded Lipschitz domain with \(d\in \mathbb {N}\) and outer unit normal vector field \(\varvec{n}\). Its boundary is split into two disjoint parts: A homogeneous Dirichlet boundary \(\Gamma _D\) with strictly positive \((d-1)\)-dimensional Hausdorff measure and a homogeneous Neumann boundary \(\Gamma _0\). We define
$$\begin{aligned} H^1_D(\Omega ;\mathbb {R}^d){:}{=}\left\{ \varvec{\eta }\in H^1(\Omega ;\mathbb {R}^d)\;|\;\varvec{\eta }=\varvec{0}\text { a.e.}~\text {on }\Gamma _D\right\} . \end{aligned}$$ -
(A2)
The potential \(\psi : \mathbb {R}^N \rightarrow \mathbb {R}_+\cup \left\{ +\infty \right\} \) attains exactly N global minima, each of value 0, at the points \(\varvec{e}_i\), i.e.,
$$\begin{aligned} \min \psi = \psi (\varvec{e}_i) = 0 \quad \text {for all}\, i\in \{1,...,N\}, \end{aligned}$$where \(\varvec{e}_i\in \mathbb {R}^N\) denotes the i-th standard basis vector in \(\mathbb {R}^N\). Additionally, we assume \(\psi \) to be decomposed into \(\psi (\varvec{\varphi })=\psi _0(\varvec{\varphi })+I_{\varvec{G}}(\varvec{\varphi })\) with \(\psi _0\in C^1(\mathbb {R}^N,\mathbb {R})\) and the indicator functional
$$\begin{aligned} I_{\varvec{G}}(\varvec{\varphi })= {\left\{ \begin{array}{ll} 0&{}\text {if } \varvec{\varphi }\in \varvec{G},\\ +\infty &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$where \(\varvec{G}:=\mathbb {R}^N_{+}\cap \Sigma ^N\) with
$$\begin{aligned} \Sigma ^N&:=\left\{ \varvec{\xi }\in \mathbb {R}^N \;\left| \;\, \sum _{i=1}^{N}\xi ^{i}=1\right. \right\} , \end{aligned}$$(2.1)$$\begin{aligned} \mathbb {R}^N_{+}&:=\left\{ \left. \varvec{\xi }\in \mathbb {R}^N \,\right| \, \forall i\in \left\{ 1,\dots , N\right\} :\; \xi ^{i}\ge 0 \right\} . \end{aligned}$$(2.2)The set \(\varvec{G}\) is referred to as the Gibbs simplex. A prototype example for the continuous part \(\psi _0\) would be \(\psi _0({\varvec{\varphi }})=\frac{1}{2}(1-{\varvec{\varphi }}\cdot {\varvec{\varphi }})\) (cf. [20]).
-
(A3)
The function \(\Psi : \left( \mathbb {R}_{>0}\right) ^l\rightarrow \mathbb {R}\) is continuously differentiable.
2.2 The Phase-Field Variable
To describe the material distribution of \((N-1)\) different materials in the design domain \(\Omega \), we introduce the phase-field \(\varvec{\varphi }:\Omega \rightarrow \mathbb {R}^N\). Its components \(\varphi ^i\), \(i=1,...,N-1\), represent the materials, whereas \(\varphi ^N\) represents the void. We expect \(\varvec{\varphi }\) to change its values continuously at the diffuse interface. From a physical point of view, this means that the considered materials can be mixed in the interfacial region. In order for the phase-field to behave in a physically reasonable way, we impose suitable constraints. First of all, we fix the total amount of each material by the mean value constraint
with \(m^i\in (0,1)\) and \(\varvec{m}\in \Sigma ^N\) (cf. (2.1)). The constraint \(\varvec{m}\in \Sigma ^N\) is a consequence of the physical assumption that the individual volume fractions \(\varphi ^i\) need to sum up to 1 at each point in the domain. Furthermore, it is physically reasonable to assume that each volume fraction attains its values only in the interval [0, 1]. This property is incorporated by assuming that any \({\varvec{\varphi }}\) belongs to the set of admissible phase-fields
with
Here, \(\varvec{G}\) is the Gibbs simplex that was introduced in (A2).
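For illustration, membership in the Gibbs simplex amounts to two pointwise checks, nonnegativity of all components and the sum constraint; a minimal sketch, where the tolerance `tol` is our own implementation detail:

```python
def in_gibbs_simplex(phi, tol=1e-12):
    """Check whether a vector phi in R^N lies in the Gibbs simplex
    G = R^N_+ intersected with Sigma^N: all components nonnegative
    and summing to one."""
    return all(p >= -tol for p in phi) and abs(sum(phi) - 1.0) <= tol

# Pure phases (standard basis vectors) and interfacial mixtures lie in G:
in_gibbs_simplex([1.0, 0.0, 0.0])   # pure phase
in_gibbs_simplex([0.3, 0.5, 0.2])   # mixture on the diffuse interface
in_gibbs_simplex([0.7, 0.5, -0.2])  # not admissible: negative component
```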
2.3 The Ginzburg–Landau Energy
In order to make our optimization problem well-posed, we need to include a regularizing term for the phase-field in the cost functional. For this purpose, we use the so-called Ginzburg–Landau energy
for all \({\varvec{\varphi }}\in H^1(\Omega ;\mathbb {R}^N)\). Here, the parameter \(\varepsilon >0\) is related to the thickness of the diffuse interface and is therefore usually chosen very small. In the sharp-interface limit, we intend to (formally) send this parameter to zero. Due to assumption (A2), the potential \(\psi \) enforces the phase-field \(\varvec{\varphi }\) to attain its values only in the Gibbs simplex. However, as we already include the Gibbs simplex constraint in the set of admissible phase-fields, it suffices to merely consider the regular part \(\psi _0\) of the potential \(\psi \) in the Ginzburg–Landau energy as long as \({\varvec{\varphi }}\in \varvec{\mathcal {G}}\). This means that
for all \({\varvec{\varphi }}\in \varvec{\mathcal {G}}\).
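As a one-dimensional finite-difference sketch of this functional, assuming the common normalization \(\int _\Omega \frac{\varepsilon }{2}|\nabla {\varvec{\varphi }}|^2+\frac{1}{\varepsilon }\psi _0({\varvec{\varphi }})\,\textrm{d}x\) and the prototype potential \(\psi _0({\varvec{\varphi }})=\frac{1}{2}(1-{\varvec{\varphi }}\cdot {\varvec{\varphi }})\) from (A2) (the paper's exact normalization may differ):

```python
def ginzburg_landau_energy(phi, h, eps):
    """Finite-difference approximation of the Ginzburg-Landau energy of a
    vector-valued phase-field sampled on a uniform 1D grid.

    phi : list of R^N vectors (one per grid point)
    h   : grid spacing
    eps : interface-thickness parameter
    Uses the prototype potential psi_0(p) = 0.5 * (1 - p . p)."""
    psi0 = lambda p: 0.5 * (1.0 - sum(c * c for c in p))
    # Squared gradient via forward differences, summed over components.
    grad2 = sum(
        sum((b - a) ** 2 for a, b in zip(phi[j], phi[j + 1])) / h ** 2
        for j in range(len(phi) - 1)
    )
    pot = sum(psi0(p) for p in phi)
    return h * (0.5 * eps * grad2 + pot / eps)
```

For a constant pure phase the energy vanishes, while profiles transitioning between phases pay both a gradient and a potential cost; balancing the two contributions is what produces an interfacial energy concentrated on the interface as \(\varepsilon \rightarrow 0\).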
2.4 The Elasticity Tensor and the Density Function
As we intend to consider an elastic structure, we next introduce the two tensors of linear elasticity, which will be used to formulate the state equation. The strain tensor of a vector-valued function \(\varvec{u}\in H^1_D(\Omega ;\mathbb {R}^d)\) is given as
The elasticity tensor \(\mathbb {C}: \mathbb {R}^N\rightarrow \mathbb {R}^{d\times d\times d\times d}\) is a fourth order tensor with the following properties.
-
(B1)
\(\mathbb {C}_{ijkl}\in C^{1,1}_\text {loc}(\mathbb {R}^N;\mathbb {R})\).
-
(B2)
\(\mathbb {C}\) is symmetric, i.e.,
$$\begin{aligned} \mathbb {C}_{ijkl}=\mathbb {C}_{jikl}=\mathbb {C}_{ijlk}=\mathbb {C}_{klij}\,, \end{aligned}$$for \(i,j,k,l=1,\dots ,d\).
-
(B3)
\(\mathbb {C}\) is coercive for any fixed \(\varepsilon >0\), i.e., there exists \(\theta _\varepsilon >0\) such that
$$\begin{aligned} \theta _\varepsilon \left| \mathcal {B}\right| ^2\le \mathbb {C}({\varvec{\varphi }})\, \mathcal {B}:\mathcal {B}\,, \end{aligned}$$for all \({\varvec{\varphi }}\in \mathbb {R}^N\) and all symmetric matrices \(\mathcal {B}\in \mathbb {R}^{d\times d}\). For two matrices \(\mathcal {A},\mathcal {B}\in \mathbb {R}^{d\times d}\) this product is defined as
$$\begin{aligned} \mathcal {A}:\mathcal {B}{:}{=}\sum _{i,j=1}^{d}\mathcal {A}_{ij}\mathcal {B}_{ij}\;. \end{aligned}$$
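To fix this notation in computable form: the strain tensor is the usual symmetrized displacement gradient \(\mathcal {E}(\varvec{u})=\frac{1}{2}(\nabla \varvec{u}+\nabla \varvec{u}^{T})\), and the sketch below (with hypothetical numerical values) evaluates it together with the Frobenius product from (B3):

```python
def strain(grad_u):
    """Symmetric part of a d x d displacement gradient:
    E(u) = 0.5 * (grad u + (grad u)^T)."""
    d = len(grad_u)
    return [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(d)]
            for i in range(d)]

def frobenius(A, B):
    """Frobenius product A : B = sum_{i,j} A_ij B_ij."""
    return sum(a * b for row_a, row_b in zip(A, B)
               for a, b in zip(row_a, row_b))

G = [[1.0, 2.0], [0.0, 3.0]]  # hypothetical displacement gradient
E = strain(G)                 # symmetric by construction
# Coercivity (B3) bounds C(phi) E : E from below by theta * (E : E).
```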
The component specific densities are modeled by a density function \(\rho : \mathbb {R}^N\rightarrow \mathbb {R}\) with the following properties.
-
(C1)
\(\rho \in C^{1,1}_\text {loc}(\mathbb {R}^N;\mathbb {R})\,\).
-
(C2)
\(\rho \) is uniformly positive for any fixed \(\varepsilon >0\), i.e., there is a constant \(\rho _{0,\varepsilon }>0\) such that \(\rho ({\varvec{\varphi }})\ge \rho _{0,\varepsilon }\) for all \({\varvec{\varphi }}\in \mathbb {R}^N\).
As in [20], we want \(\mathbb {C}\) and \(\rho \) to possess a decomposition that reflects the material specific elasticity and density of the \(N-1\) materials. Therefore, for \({\varvec{\varphi }}\in \varvec{G}\), we set
for any \(k,l\in \mathbb {N}\) and any \(\alpha _M,\alpha _V,\beta _M,\beta _V \in C^{1,1}_{\text {loc}}(\mathbb {R})\) with
In this way, we have
The positivity conditions are necessary to satisfy the assumptions (B3) and (C2). The global non-negativity of \(\alpha _M,\beta _M\) will be essential in avoiding spurious eigenmodes, see Sect. 4.2.
This means, for \(i\in \{1,...,N-1\}\), we choose component specific but constant elasticity tensors \(\mathbb {C}^i\in \mathbb {R}^{d\times d\times d\times d}\) and densities \(\rho ^i >0\). As the void obviously has neither a stiffness nor a density, we approximate the void components by some fixed elasticity tensor \(\tilde{\mathbb {C}}^N\in \mathbb {R}^{d\times d\times d\times d}\) and density \(\tilde{\rho }^N > 0\) that are multiplied by the small interface parameter \(\varepsilon \) that was introduced in Sect. 2.3 in the context of the Ginzburg–Landau energy. Of course, these constant prefactors need to be chosen such that the assumptions (B2), (B3) and (C2) are satisfied, see [45].
Even though an adequate scaling of the void components \(\tilde{\mathbb {C}}^N\) and \(\tilde{\rho }^N\) with respect to \(\varepsilon \) combined with an appropriate choice of interpolation functions \(\alpha _M,\alpha _V,\beta _M,\beta _V\) will be crucial for the numerical simulations in order to avoid spurious eigenmodes, see also Sect. 4.2, we emphasize that our formal analysis works for any kind of decomposition as in (2.5) as long as the void components are scaled with \(\varepsilon ^p\) for some \(p\in \mathbb {N}\). Thus, in terms of our analysis, we will work with the general decomposition in (2.5), but we will also justify in the framework of asymptotic expansions how a specific choice of k, l and \(\alpha _M,\alpha _V,\beta _M,\beta _V\) in (2.5) is capable of dealing with localized eigenmodes, see Sect. 4.2.
As in [20] and [45], we extend the definition (2.5) to the whole hyperplane \(\Sigma ^N\) by introducing a cut-off function for a small parameter \(\omega >0\). We define
where \(a_{\omega }\) and \(b_{\omega }\) are monotonically increasing \(C^{1,1}\) functions that are constructed in such a way that \(\sigma _{\omega }\) is also a \(C^{1,1}\) function. Then we consider the extensions
where
denotes the \(\ell ^2\) projection of \(\mathbb {R}^N\) onto the convex set \(\Sigma ^N\). Note that in order for (C2) to be satisfied, the demanded positivity of the interpolation functions in (2.7) is in general not enough, as we allow \(\alpha _V,\beta _V\) to become negative outside the unit interval. However, the special choice
for \({\varvec{\varphi }}\in \Sigma ^N\), which will be used in the numerical simulations, see also (9.3), satisfies (C2) due to the fact that the \(\ell ^1\) and \(\ell ^2\) norms are equivalent on \(\mathbb {R}^N\). In our analysis, we will nevertheless stick to the general decomposition (2.5) for the sake of full generality. The tensor \(\mathbb {C}\) is dealt with analogously.
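Since \(\Sigma ^N\) from (2.1) is an affine hyperplane, its \(\ell ^2\) projection has the closed form \(p^{\Sigma }(\varvec{\xi })=\varvec{\xi }-\frac{1}{N}\big (\sum _{i}\xi ^{i}-1\big )\varvec{1}\); a minimal sketch of this formula (our own elementary derivation, not taken from the paper):

```python
def project_sigma(xi):
    """l^2 projection of xi in R^N onto the hyperplane
    Sigma^N = { v in R^N : sum_i v^i = 1 }."""
    N = len(xi)
    shift = (sum(xi) - 1.0) / N  # excess mass, distributed equally
    return [x - shift for x in xi]

p = project_sigma([0.5, 0.7, 0.1])  # excess 0.3 is removed uniformly
# p sums to one, and projecting again leaves it unchanged.
```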
To conclude this subsection, let us introduce some further notation. For \({\varvec{\varphi }}\in L^{\infty }(\Omega ;\mathbb {R}^N)\), we define a weighted scalar product on \(L^2(\Omega ;\mathbb {R}^d)\) by
and a weighted scalar product on \(H^1_D(\Omega ;\mathbb {R}^d)\) by
In the following, we write \(L^2_{{\varvec{\varphi }}}(\Omega ;\mathbb {R}^d)\) in order to emphasize the fact that we equip \(L^2(\Omega ;\mathbb {R}^d)\) with the scalar product \((\cdot ,\cdot )_{\rho ({\varvec{\varphi }})}\).
2.5 The State Equation
We now introduce the system of equations describing the elastic structure, which will be referred to as the state equation. It reads as
and its weak formulation is given by
for all \(\varvec{\eta }\in H^1_D(\Omega ;\mathbb {R}^d)\). In [45], using classical spectral theory, it was shown that for any \({\varvec{\varphi }}\in L^\infty (\Omega ,\mathbb {R}^N)\), there exists a sequence of eigenvalues (with multiple eigenvalues being repeated according to their multiplicity) which can be ordered as
This comprises all eigenvalues of (2.11). Moreover, the corresponding eigenfunctions
can be chosen as an orthonormal basis of \(L^2_{\varvec{\varphi }}(\Omega ;\mathbb {R}^d)\), meaning that
for all \(i,j\in \mathbb {N}\). This property will be crucial when considering the formal asymptotics of the eigenfunctions. In the following, when we talk about eigenvalues and eigenfunctions, we will always refer to the pairs \((\lambda _i^{\varepsilon ,{\varvec{\varphi }}},{\varvec{w}}_i^{\varepsilon ,{\varvec{\varphi }}})\) with \(i\in \mathbb {N}\), which have the aforementioned properties.
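After discretization, the structure of (2.11), an elasticity operator on the left and the weight \(\rho ({\varvec{\varphi }})\) on the right, leads to a generalized matrix eigenvalue problem \(K\varvec{w}=\lambda M\varvec{w}\). As a toy illustration of the weighted orthonormality above (our own \(2\times 2\) example with a diagonal, lumped mass matrix, not the discretization used in the paper):

```python
import math

def generalized_eig_2x2(K, m):
    """Solve K w = lam * M w for a symmetric 2x2 stiffness K and a
    diagonal mass matrix M = diag(m), a lumped analogue of the weight
    rho(phi). Returns eigenvalues in ascending order and eigenvectors
    that are orthonormal w.r.t. the M-weighted inner product."""
    s = [1.0 / math.sqrt(mi) for mi in m]  # M^{-1/2}
    # Equivalent standard problem A v = lam v with A = M^{-1/2} K M^{-1/2}.
    a = s[0] * s[0] * K[0][0]
    b = s[0] * s[1] * K[0][1]
    c = s[1] * s[1] * K[1][1]
    mean = 0.5 * (a + c)
    disc = math.sqrt((0.5 * (a - c)) ** 2 + b * b)
    lams = [mean - disc, mean + disc]
    vecs = []
    for lam in lams:
        if abs(b) > 1e-14:
            vx, vy = b, lam - a          # (a - lam) vx + b vy = 0
        elif abs(a - lam) <= abs(c - lam):
            vx, vy = 1.0, 0.0            # A is diagonal here
        else:
            vx, vy = 0.0, 1.0
        n = math.hypot(vx, vy)
        # Transform back: w = M^{-1/2} v is orthonormal w.r.t. M.
        vecs.append([s[0] * vx / n, s[1] * vy / n])
    return lams, vecs

lams, (w1, w2) = generalized_eig_2x2([[2.0, 1.0], [1.0, 2.0]], [1.0, 1.0])
```

The returned eigenvectors satisfy \(w_i^T M w_j = \delta _{ij}\), mirroring the orthonormality of the eigenfunctions in \(L^2_{\varvec{\varphi }}(\Omega ;\mathbb {R}^d)\).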
2.6 The Optimization Problem and the Gradient Inequality
Finally, we are in a position to state the optimization problem
with
for some \(l\in \mathbb {N}\), where \(n_1,\dots ,n_l\in \mathbb {N}\) indicate a selection of eigenvalues. Here, \(\gamma >0\) is a fixed constant related to surface tension.
Remark 2.1
It is worth mentioning that we do not need any boundedness assumption on \(\Psi \) in order to prove the existence of a minimizer to (\(\mathcal {P}^{\varepsilon }_{l}\)) in the same way as in [45, Theorem 6.1]. In analogy to [44, Lemma 3.7], one can show that there are constants \(C_{1,\varepsilon },\,C_{2,\varepsilon } > 0\) depending only on the choice of \(\mathbb {C}_{\varepsilon }\) and \(\rho _{\varepsilon }\) such that
Here, \(\lambda _k^M\) denotes the k-th eigenvalue of the problem (2.11) with \(\mathbb {C}\equiv \textrm{Id}\) and \(\rho \equiv 1\). Qualitatively speaking, \(\lambda _k^M\) denotes an eigenvalue in the situation when the whole design domain is occupied by one material.
In [45, Theorem 6.2], the following first-order necessary optimality condition was derived.
Theorem 2.2
Let \({\varvec{\varphi }}\in \mathcal {\varvec{\mathcal {G}}}^{\varvec{m}}\) be a local minimizer of (\(\mathcal {P}^{\varepsilon }_{l}\)), i.e., there exists \(\delta >0\) such that \(J_{l}^{\varepsilon }(\varvec{\varphi }) \le J_{l}^{\varepsilon }(\varvec{\zeta })\) for all \(\varvec{\zeta } \in \mathcal {\varvec{\mathcal {G}}}^{\varvec{m}}\) with \(\left\| \varvec{\zeta }-{\varvec{\varphi }}\right\| _{H^1(\Omega ;\mathbb {R}^N)\cap L^\infty (\Omega ;\mathbb {R}^N)}<\delta \). We further assume that the eigenvalues \(\lambda _{n_1}^{\varepsilon ,{\varvec{\varphi }}},\dots ,\lambda _{n_l}^{\varepsilon ,{\varvec{\varphi }}}\) are simple. Then the gradient inequality
holds for all \(\tilde{\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\).
The upcoming sharp-interface analysis will be concerned with passing to the limit in the state equation (\(SE^\varepsilon \)) as well as in the gradient inequality (\(GI^\varepsilon \)).
3 Analysis of the Gradient Inequality
In this section, we will show, under a suitable regularity assumption on the eigenfunctions involved in (\(GI^\varepsilon \)), that there exists a solution of the above gradient inequality which even possesses the regularity \(\varvec{\varphi }\in H^2(\Omega ; \mathbb {R}^N)\). This will be carried out by applying a regularization process to the non-smooth potential \(\psi \), a technique that was employed in a similar fashion in [15, 23, 25, 40]. Our approach mainly follows the ideas of [23].
We regularize the gradient inequality in order to deal with the indicator functional \(I_{\varvec{G}}\) contained in the definition of the potential \(\psi \). This will yield a sequence of \(H^2\)-regular approximating phase-fields \((\varvec{\varphi }_{\delta })_{\delta >0}\) solving regularized equations and converging to the desired phase-field \(\varvec{\varphi }\). Another convenient aspect of this procedure is that it will generate Lagrange multipliers that will allow us to transform the gradient inequality into an equality. This strong formulation of (\(GI^\varepsilon \)) will be the starting point for our asymptotic analysis in Sect. 6.
3.1 Regularization of the Potential \(\psi \) and Rewriting the Constraints
We notice that \(\varvec{\varphi }\in \varvec{\mathcal {G}}^{\varvec{m}}\) needs to satisfy the constraint
for almost every \(x\in \Omega \) and \(i=1,\dots ,N\). To deal with this constraint, we regularize the potential appearing in the Ginzburg–Landau energy, which was initially given as
Definition 3.1
For \(\delta >0\) we define the regularized potential
where
Remark 3.2
We see that the regularization now approximates the indicator functional \(I_{\mathbb {R}^N_{+}}\) by the function \(\frac{1}{\delta }\hat{\psi }\). For \(\delta \searrow 0\), exactly the negative parts of the components of \(\varvec{\varphi }\) are penalized.
To deal with the remaining constraints hidden in \(\varvec{\mathcal {G}}^{\varvec{m}}\), namely the integral constraint and the sum constraint \(\sum _{i=1}^{N}\varphi ^i=1\) a.e. in \(\Omega \), we introduce linear orthogonal projections.
Definition 3.3
Let us define the linear orthogonal projections
with \(L^2_0(\Omega ;\mathbb {R}^N){:}{=}\big \{\varvec{u}\in L^2(\Omega ;\mathbb {R}^N)\,\big |\, \int _{\Omega }\varvec{u}\,\text {d}x=\varvec{0}\big \}\) and
where \(\varvec{1}=(1,\dots ,1)^T\in \mathbb {R}^N\) and
To simplify the notation, we further define the composition \(P{:}{=}P_{T\Sigma }\circ P_{\int }=P_{\int }\circ P_{T\Sigma }\).
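On a spatially sampled field, the two projections and their composition \(P\) can be sketched as follows (a discrete analogue of Definition 3.3 that we use for illustration, not the paper's function-space operators):

```python
def P_int(field):
    """Discrete analogue of the projection onto L^2_0: subtract the
    spatial mean of each of the N components, so every component
    has zero 'integral' over the grid."""
    n, N = len(field), len(field[0])
    means = [sum(v[i] for v in field) / n for i in range(N)]
    return [[v[i] - means[i] for i in range(N)] for v in field]

def P_tangent(field):
    """Discrete analogue of P_{T Sigma}: at every grid point, remove
    the component along 1 = (1,...,1)^T, so the N entries sum to zero
    (the tangent space of the hyperplane Sigma^N)."""
    N = len(field[0])
    return [[vi - sum(v) / N for vi in v] for v in field]

def P(field):
    """Composition P = P_{T Sigma} o P_int = P_int o P_{T Sigma}."""
    return P_tangent(P_int(field))
```

Since both operators are linear and act on different "directions" (the spatial mean versus the pointwise component sum), they commute, and each is idempotent, which the discrete sketch reproduces.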
Remark 3.4
Note that for the constraint \({\varvec{\varphi }}({\varvec{x}})\in \mathbb {R}^N_{+}\), we cannot introduce a linear orthogonal projection as there is no vector space corresponding to this constraint. Thus, the approximation of the indicator functional in Definition 3.1 is actually necessary.
3.2 Smoothness Assumption and Rewriting the Gradient Inequality
In order to obtain a suitable regularization of the gradient inequality, we need to find a way to test (\(GI^\varepsilon \)) with arbitrary functions in \(H^1(\Omega ;\mathbb {R}^N)\) and not only in \(H^1(\Omega ;\mathbb {R}^N)\cap L^{\infty }(\Omega ;\mathbb {R}^N)\). For this reason and to obtain higher regularity of the phase-field, we will need to assume higher regularity of the corresponding eigenfunctions.
We fix a parameter \(\varepsilon >0\) as well as a solution \({\varvec{\varphi }}^\varepsilon \in \varvec{\mathcal {G}}^{\varvec{m}}\subset H^1(\Omega ;\mathbb {R}^N)\cap L^{\infty }(\Omega ;\mathbb {R}^N)\) of (\(GI^\varepsilon \)). For a cleaner presentation we omit the superscript \(\varepsilon \) in the eigenvalues and eigenfunctions.
A priori, the term
is well defined only for \(\varvec{\eta }\in L^{\infty }(\Omega ;\mathbb {R}^N)\) as the expression \(\mathcal {E}({\varvec{w}}_{n_r}):\mathcal {E}({\varvec{w}}_{n_r})\) merely belongs to \(L^1(\Omega )\). However, in order to consider a suitable regularized problem associated to (\(GI^\varepsilon \)), we need this term to be an element of \(L^2(\Omega )\). For this purpose, we require the regularity \(\varvec{w}_{n_r}\in W^{1,4}(\Omega ;\mathbb {R}^d)\).
Therefore, we now make the following crucial regularity assumption which shall hold for the rest of this paper.
-
(R)
For \(r=1,\dots ,l\), let the eigenfunctions \(\varvec{w}_{n_r}\) involved in (\(GI^\varepsilon \)) belong to \(W^{1,4}(\Omega ;\mathbb {R}^d)\).
Remark 3.5
Note that there exists a regularity theory for the equations of linear and nonlinear elasticity, see, e.g., [52, 65]. However, due to the fact that the coefficient \(\mathbb {C}({\varvec{\varphi }})\) is merely essentially bounded, we could only prove the existence of an (in general arbitrarily small) parameter \(\iota >0\) such that
Note that there exist counterexamples going back to De Giorgi for linear systems of elliptic PDEs (see, e.g., [17, Section 4.1]) providing unbounded solutions \(\varvec{u}\in W^{1,2}(B;\mathbb {R}^d)\) for \(d\ge 3\) to a system of the form
where \(\mathcal {A}\) is bounded and coercive and B denotes the unit ball. In particular, in the physically relevant case \(d=3\), where \(W^{1,4}(\Omega ;\mathbb {R}^d)\hookrightarrow C^0(\overline{\Omega };\mathbb {R}^d)\), the condition \({\varvec{w}}_{n_r}\in W^{1,4}(\Omega ;\mathbb {R}^d)\) is a genuine assumption, as unbounded eigenfunctions might exist.
In the following, let \((\cdot ,\cdot )\) denote the classical scalar product on \(L^2(\Omega ;\mathbb {R}^N)\). Recalling
for \(\varvec{\eta }\in L^{2}(\Omega ;\mathbb {R}^N)\), we have
Note that the term in the last line is to be understood as
Thus, the projection of this term is well defined and the \(L^2\) regularity of this object is ensured by the assumptions (R) and (B1). For later purposes, we point out that a straightforward computation reveals
where
To have a more concise notation, we will write
Analogously, we use the notation
for the density term. To reformulate the gradient inequality (\(GI^\varepsilon \)), we further define the function
By means of assumption (R), we infer \(\varvec{f}^{{\varvec{\varphi }}}\in L^{2}(\Omega ;\mathbb {R}^N)\). This is crucial for the subsequent analysis, especially for the absorption argument at the end of Lemma 3.10, and therefore, assumption (R) cannot be dropped. As \({\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\) is fixed, we write \(\varvec{f}=\varvec{f}^{{\varvec{\varphi }}}\) in the following. Using this notation, we obtain:
Proposition 3.6
The gradient inequality (\(GI^\varepsilon \)) is equivalent to
3.3 The Regularized Problem and its Limit
Now that we have introduced the regularized potential and suitable orthogonal projections and have made the necessary regularity assumption, we can formulate a regularized problem approximating our initially fixed solution \({\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\) of (\(GI^\varepsilon \)), which will eventually provide the desired \(H^2\)-regularity of \({\varvec{\varphi }}\).
Using all the previously introduced notation, we are now in a position to state the so-called regularized problem.
Definition 3.7
Let
We say that \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) is a solution to the regularized problem if it solves
Before proving the existence of a solution to (RE), we recall some properties proven in [23] that will be important for the upcoming analysis.
Proposition 3.8
Let \(\hat{\psi }\) be as defined in (3.2). Then the following properties hold true.
-
(a)
The weak derivative fulfills
$$\begin{aligned} \nabla \hat{\psi }=\hat{\varvec{\phi }} \end{aligned}$$(3.9)where, for \(\varvec{\xi }\in \mathbb {R}^N\), \(\hat{\phi }^i(\varvec{\xi }){:}{=}\hat{\phi }^i({\xi }^i){:}{=}2\left[ \xi ^i\right] _{-}\) with \([{s}]_{-}{:}{=}\min (s,0)\) for all \(s\in \mathbb {R}\).
-
(b)
Monotonicity: \(\hat{\varvec{\phi }}\) is non-decreasing in each component, i.e.,
$$\begin{aligned} 0\le \left( \hat{\phi }^i(r)-\hat{\phi }^i(s)\right) (r-s) \end{aligned}$$(3.10)for all \(r,s\in \mathbb {R}\) and \(i=1,\dots ,N\).
-
(c)
Convexity: \(\hat{\psi }\) is convex, i.e.,
$$\begin{aligned} (\varvec{\xi }-\varvec{\eta })\cdot \hat{\varvec{\phi }}(\varvec{\eta }) \le \hat{\psi }(\varvec{\xi })-\hat{\psi }(\varvec{\eta }) \end{aligned}$$(3.11)for all \(\varvec{\xi },\varvec{\eta }\in \mathbb {R}^N\).
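Assuming the concrete form \(\hat{\psi }(\varvec{\xi })=\sum _{i=1}^N [\xi ^i]_-^2\) (which is consistent with the gradient formula (3.9), the lower bound \(\hat{\psi }\ge 0\), and the estimate \(\hat{\psi }(\varvec{\xi })\le |\varvec{\xi }|^2\) used later in the existence proof), the properties (a)–(c) can be checked numerically on random samples. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_hat(xi):
    # psi_hat(xi) = sum_i [xi^i]_-^2, the quadratic obstacle-type penalty
    return np.sum(np.minimum(xi, 0.0) ** 2)

def phi_hat(xi):
    # gradient: phi_hat^i(xi) = 2 [xi^i]_-  (cf. (3.9))
    return 2.0 * np.minimum(xi, 0.0)

N = 4
for _ in range(1000):
    xi, eta = rng.normal(size=N), rng.normal(size=N)
    # (b) componentwise monotonicity (3.10)
    assert np.all((phi_hat(xi) - phi_hat(eta)) * (xi - eta) >= 0.0)
    # (c) convexity / subgradient inequality (3.11)
    assert (xi - eta) @ phi_hat(eta) <= psi_hat(xi) - psi_hat(eta) + 1e-12
    # upper bound used in the existence proof: psi_hat(xi) <= |xi|^2
    assert psi_hat(xi) <= xi @ xi + 1e-12
```

The checks mirror the three statements of Proposition 3.8 componentwise; the sample size and dimension are arbitrary choices.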
Using these results, we now prove the well-posedness result for the regularized problem. In order to show \(H^2\)-regularity of the solution \(\varvec{\varphi }_{\delta }\), we need the following regularity assumption on the design domain which shall hold for the rest of the paper.
-
(D)
In addition to (A1), we assume that \(\Omega \) has at least one of the following properties:
-
(i)
The boundary \(\partial \Omega \) is of class \(C^{1,1}\).
-
(ii)
\(\Omega \) is convex.
The well-posedness result for the regularized problem (RE) reads as follows.
Lemma 3.9
For \(\delta >0\) there exists a unique solution \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\subset H^1(\Omega ;\mathbb {R}^N)\) of (RE). The solution possesses the regularity \(\varvec{\varphi }_{\delta }\in H^2(\Omega ;\mathbb {R}^N)\) and it holds
Proof
First of all, we want to show that there exists at most one solution to (RE). To this end, we assume that there are two solutions \({\varvec{\varphi }}_{\delta ,1},{\varvec{\varphi }}_{\delta ,2}\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\). Then, by subtracting the corresponding equations, we obtain
for all \(\varvec{\eta }\in H^1(\Omega ;\mathbb {R}^N)\). Testing with \({\varvec{\varphi }}_{\delta ,1}-{\varvec{\varphi }}_{\delta ,2}\in L^2_{T\Sigma }(\Omega ;\mathbb {R}^N)\cap L_0^2(\Omega ;\mathbb {R}^N)\), we can drop the projection P in the second term. Using the monotonicity property (3.10), we infer
This yields \({\varvec{\varphi }}_{\delta ,1}={\varvec{\varphi }}_{\delta ,2}\) as these functions have identical mean value.
In order to prove the existence of a solution, we consider a suitable minimization problem. Therefore, we define the functional
for all \(\varvec{\xi }\in H^1(\Omega ;\mathbb {R}^N)\). If we can now show that there exists a \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) that solves the minimization problem
the existence result is proven, since then the Gâteaux derivative \(I_{\delta }'(\varvec{\varphi }_{\delta })\), which is given by
for all directions \(\varvec{\eta }\in H^1(\Omega ;\mathbb {R}^N)\cap L^2_{T\Sigma }(\Omega ;\mathbb {R}^N)\cap L^2_{0}(\Omega ;\mathbb {R}^N)\), vanishes. By applying the projections \(P_{T\Sigma }\) and \(P_{\int }\) to any \(\varvec{\eta }\in H^1(\Omega ;\mathbb {R}^N)\) and then moving them to the other argument of the \(L^2\) scalar product, it follows that solving (3.13) is equivalent to solving (RE).
Note that there is no need to project the gradient term. This is justified as follows. By construction, we have
On the other hand, we compute
and therefore, the entries in each column are identical. Now, we compute
We see that this term vanishes since, by construction, \(\sum _{i=1}^{N}\varphi _{\delta }^i=1\) a.e. in \(\Omega \) due to \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\). In other words, \(\partial _i \varvec{\varphi }_{\delta }\in L^2_{T\Sigma }(\Omega ;\mathbb {R}^N)\).
As the gradient term is invariant under the addition of constants, we can also omit the projection \(P_{\int }\).
It remains to show that there exists a minimizer of (3.13). By construction, \(\hat{\psi }\ge 0\). Furthermore, using Young’s inequality, we find a constant \(C>0\) such that
This is obtained by absorbing the quantity \(\left\| \varvec{\xi }\right\| _{L^2}^2\) by the term \(\left\| \nabla \varvec{\xi }\right\| _{L^2}^2\) which controls the whole \(H^1(\Omega ;\mathbb {R}^N)\)-norm as all \(\varvec{\xi }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) have a fixed mean value. Hence, \(I_{\delta }\) is bounded from below on \(\tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) and thus, the infimum exists. Consequently, we find a minimizing sequence \(({\varvec{\varphi }}_{\delta ,k})_{k\in \mathbb {N}}\subset \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) such that
In particular, \(\Vert {\varvec{\varphi }}_{\delta ,k}\Vert _{H^1(\Omega ;\mathbb {R}^N)}\) remains bounded and thus, there exists \(\varvec{\varphi }_{\delta }\in H^1(\Omega ;\mathbb {R}^N)\) such that the convergences
hold along a non-relabeled subsequence. From these convergences, we deduce \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\). Using the estimate \(\hat{\psi }({\varvec{\varphi }}_{\delta ,k})\le |{\varvec{\varphi }}_{\delta ,k}|^2\) a.e. in \(\Omega \) along with the generalized majorized convergence theorem of Lebesgue (see [69, p. 1015]), and further employing the weak lower semicontinuity of norms, we conclude
This implies that \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) is a minimizer.
Now that we have shown the existence of a solution \(\varvec{\varphi }_{\delta }\in H^1(\Omega ;\mathbb {R}^N)\) to (RE), it remains to prove that it possesses the desired regularity \(H^2(\Omega ;\mathbb {R}^N)\). Since \(\varvec{\varphi }_{\delta }\) is a weak solution of (RE), it can be interpreted as a weak solution of
with
Due to assumption (D), elliptic regularity theory (see e.g., [49, Theorem 2.4.2.7] in the case (D)(i) or [49, Theorem 3.2.3.1] in the case (D)(ii), respectively) yields \(\varvec{\varphi }_{\delta }\in H^2(\Omega ;\mathbb {R}^N)\). In particular, using this regularity, we conclude from (RE) that \(\varvec{\varphi }_{\delta }\) satisfies (PRE). \(\square \)
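The coercivity step in the proof above rests on a Poincaré-type inequality for functions with fixed mean value: \(\Vert \varvec{\xi }-\overline{\varvec{\xi }}\Vert _{L^2}^2\le C\Vert \nabla \varvec{\xi }\Vert _{L^2}^2\), where the best constant is the reciprocal of the first nonzero Neumann eigenvalue of \(-\Delta \); on the model interval \((0,1)\) this eigenvalue is \(\pi ^2\). A 1D finite-difference sketch (grid size and random sample are ad hoc choices):

```python
import numpy as np

n = 200                       # grid points on (0,1); arbitrary resolution
h = 1.0 / n
# 1D Neumann finite-difference Laplacian (corner rows carry a single link)
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
K[0, 0] = K[-1, -1] = 1.0 / h**2

mu = np.linalg.eigvalsh(K)
mu2 = mu[1]                   # first nonzero eigenvalue, approximately pi^2
assert abs(mu2 - np.pi**2) < 0.05

# Poincaré inequality for a mean-free sample: ||u||^2 <= (1/mu2) ||u'||^2
rng = np.random.default_rng(1)
u = rng.normal(size=n); u -= u.mean()
grad_sq = np.sum(np.diff(u)**2) / h   # discrete Dirichlet energy
assert h * np.sum(u**2) <= grad_sq / mu2 + 1e-12
```

The inequality holds exactly in the discrete setting because a mean-free vector is orthogonal to the kernel of the Neumann matrix, which is spanned by the constant vector.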
As we want to pass to the limit in the regularized equation, we need some uniform bounds to apply classical compactness results.
Lemma 3.10
Let \(\varvec{\varphi }_{\delta }\in H^2(\Omega ;\mathbb {R}^N)\) be the solution of (RE). Then there exists a constant \(C>0\) such that
for all \(\delta >0\).
Proof
By the previous lemma, we know that \(\varvec{\varphi }_{\delta }\) minimizes \(I_{\delta }\) (see (3.12)) over \(\tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\) (see (3.8)). Thus, we have
If we now choose any \(\varvec{\xi }\in \varvec{\mathcal {G}}^{\varvec{m}}\subset \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\), we know that it is additionally componentwise non-negative and therefore \(\hat{\psi }(\varvec{\xi })=0\) a.e. in \(\Omega \). In view of definition (3.12), this yields
where \(C>0\) is a constant independent of \(\delta \). Recalling the absorption trick (3.16), we obtain
which will be needed at the end of the proof. Furthermore, using the definition of \(\hat{\psi }\) (see (3.2)), we deduce that
which directly leads to (3.20).
We notice that \(\frac{1}{\delta }\hat{\varvec{\phi }}(\varvec{\varphi }_{\delta })\) is weakly differentiable (cf. [48, Lemma 7.6]) and belongs to \(H^1(\Omega ;\mathbb {R}^N)\). In order to prove (3.21), we test (RE) with \(\varvec{\eta }=\frac{1}{\delta }\hat{\varvec{\phi }}(\varvec{\varphi }_{\delta })\). We obtain
Applying [48, Lemma 7.6] to \(\hat{\varvec{\phi }}\), we further deduce
since for a.e. x in \(\Omega \) either \(\nabla \hat{\varvec{\phi }}(\varvec{\varphi }_{\delta })(x)=\varvec{0}\) or \(\nabla \hat{\varvec{\phi }}(\varvec{\varphi }_{\delta })(x)=\nabla \varvec{\varphi }_{\delta }(x)\).
Applying Hölder’s inequality in (3.24) and an absorption argument, which crucially requires \(\varvec{f}\in L^2(\Omega ;\mathbb {R}^N)\), we thus infer
and thus,
As we now have bounded both the right-hand side of (PRE) and \(\varvec{\varphi }_{\delta }\) itself in \(L^{2}(\Omega ;\mathbb {R}^N)\) uniformly in \(\delta \) (see (3.23)), we can again apply elliptic regularity theory (see [49, Theorem 2.3.1.5] or [49, Theorem 3.2.3.1]) to deduce (3.19). \(\square \)
In order to reformulate (PRE) by means of Lagrange multipliers that are expected to converge in the weak sense, we need to get rid of the projection in (3.21). This is done analogously to [23, Theorem 2.1] and therefore, we only present the statement of the result without a proof.
Lemma 3.11
There exists a constant \(C>0\) such that
Now, we introduce suitable Lagrange multipliers and pass to the limit in the regularized equation.
Theorem 3.12
The initially chosen solution \({\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\) of (\(GI^\varepsilon \)) possesses the regularity \({\varvec{\varphi }}\in H^2(\Omega ;\mathbb {R}^N)\). Furthermore, there are Lagrange multipliers \(\varvec{\Lambda }, \varvec{\mu }\in L^2(\Omega ;\mathbb {R}^N)\) and \( \varvec{\vartheta }\in \mathbb {R}^N\) such that
with
Proof
In this proof, we will again use the notation \(\varvec{f} = \varvec{f}^{\varvec{\varphi }}\). From (3.19) we deduce the existence of a function \(\overline{{\varvec{\varphi }}}\in H^2(\Omega ;\mathbb {R}^N)\) such that
as \(\delta \rightarrow 0\) along a non-relabeled subsequence. This directly implies that \(\overline{{\varvec{\varphi }}}(x)\in \mathbb {R}^N_{+}\) for almost all \(x\in \Omega \). Hence, since \(\varvec{\varphi }_{\delta }\in \tilde{\varvec{\mathcal {G}}}^{\varvec{m}}\), we know that \(\overline{{\varvec{\varphi }}}\in \varvec{\mathcal {G}}^{\varvec{m}}\).
Recalling the definition of \(\varvec{f}\) in (3.6), we now define the Lagrange multipliers of the regularized problem as
The reason why we do not reformulate the projection term \(P_{T\Sigma }\varvec{f}\) by means of a Lagrange multiplier is that this is a term depending on x, which will produce terms of order \(\mathcal {O}(\frac{1}{\varepsilon ^2})\) when we consider the inner expansions in Sect. 6 due to the involved derivative of eigenfunctions. Recalling Definition 3.3, we have
Hence, we can write (RE) as
We point out that the Lagrange multipliers are constructed in such a way that the factor \(\frac{1}{\varepsilon }\) corresponding to the scaling of the original potential \(\psi \) is still present. This will be important in the next sections for the sharp-interface asymptotics.
We know from Lemma 3.11 that \(\varvec{\Lambda }_{\delta },\varvec{\mu }_{\delta }\in L^2(\Omega ;\mathbb {R}^N)\) and \(\varvec{\vartheta }_{\delta }\in \mathbb {R}^N\) are bounded uniformly in \(\delta \). Hence, we find a subsequence and \(\varvec{\Lambda },\varvec{\mu }\in L^2(\Omega ;\mathbb {R}^N)\) and \(\varvec{\vartheta }\in \mathbb {R}^N\) such that
as \(\delta \rightarrow 0\). We additionally know from the definition of \(\hat{\varvec{\phi }}\) in (3.9) that \(\varvec{\mu }\ge 0\) componentwise as weak convergence in \(L^2(\Omega ;\mathbb {R}^N)\) preserves non-negativity. Furthermore, from the construction in (3.30) we directly deduce (3.26) and (3.28).
Passing to the limit in (3.31), we infer
Thus, the regularity \(\overline{{\varvec{\varphi }}}\in H^2(\Omega ;\mathbb {R}^N)\) and integration by parts yield the equation
If we can now show that for our initially fixed solution \({\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\) of (\(GI^\varepsilon \)) it holds \({\varvec{\varphi }}=\overline{{\varvec{\varphi }}}\), the proof is complete.
Let us consider the test function \(\varvec{\eta }{:}{=}\overline{{\varvec{\varphi }}}-{\varvec{\varphi }}\in H^1(\Omega ;\mathbb {R}^N)\cap L^{\infty }(\Omega ;\mathbb {R}^N)\). Due to (3.26), we have \(\big (\varvec{\Lambda },\varvec{\eta }\big )=0\), as \(\sum _{i=1}^{N}\eta ^i=0\) because of \(\overline{{\varvec{\varphi }}},{\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\). In view of (3.28), we know that \(\big (\varvec{\vartheta },\varvec{\eta }\big )=0\), because by construction \(\int _{\Omega } \varvec{\eta }\,\text {d}x=0\).
As already mentioned, we have \(\varvec{\mu }_{\delta }\ge 0\). Hence, using the monotonicity (3.10), we infer
Using the convergences (3.29) and (3.32), we deduce \((\varvec{\mu },\overline{{\varvec{\varphi }}})\le 0\). Recalling \(\varvec{\mu }\ge 0\) and that \(\overline{{\varvec{\varphi }}}\in \varvec{\mathcal {G}}^{\varvec{m}}\) is componentwise non-negative, we already deduce \((\varvec{\mu },\overline{{\varvec{\varphi }}})=0\).
As \({\varvec{\varphi }}\in \varvec{\mathcal {G}}^{\varvec{m}}\) is likewise componentwise non-negative, we have \((\varvec{\mu },{\varvec{\varphi }})\ge 0\). Combining these results and testing (3.33) with our particular choice \(\varvec{\eta }=\overline{{\varvec{\varphi }}}-{\varvec{\varphi }}\), we get
Considering the gradient inequality (3.7) tested with \(\tilde{{\varvec{\varphi }}}=\overline{{\varvec{\varphi }}}\in \varvec{\mathcal {G}}^{\varvec{m}}\), we have
Hence, by subtracting both inequalities, we infer
As \(\int _{\Omega }\overline{{\varvec{\varphi }}}-{\varvec{\varphi }}\,\text {d}x=0\), this gives us the desired identity \({\varvec{\varphi }}=\overline{{\varvec{\varphi }}}\in H^2(\Omega ;\mathbb {R}^N)\).
From the previous reasoning we know
Furthermore, we know that \(\varvec{\mu },{\varvec{\varphi }}\ge 0\) componentwise and thus, each summand in the above equality has to vanish. This verifies (3.27). \(\square \)
In the following, we use the above knowledge to show that our asymptotic expansions will produce a state equation and a gradient equality in the sharp-interface limit.
4 Asymptotic Expansions
As mentioned above, we will now perform the procedure of sharp-interface asymptotics. To this end, we start by analyzing outer and inner expansions approximating the quantities involved in our problem. The outer expansions approximate these quantities in regions far away from the interfacial layers; they will be used to derive the state equation in the sharp-interface limit. The inner expansions are used in regions close to the interfacial layers where the phase transition takes place; they will provide boundary conditions for the equations obtained in the sharp-interface limit. As these layers are expected to scale proportionally to \(\varepsilon \), a rescaling is needed here. By comparing the leading order equations, we will obtain jump conditions at the phase interfaces within the design domain and a sharp-interface version of the gradient equality (\(GS^\varepsilon \)).
In the following, we choose \(\left( {\varvec{\varphi }}^{\varepsilon }\right) _{\varepsilon >0}\subset \varvec{\mathcal {G}}^{\varvec{m}}\) as a sequence of minimizers of the optimization problem (\(\mathcal {P}^{\varepsilon }_{l}\)). For \(r=1,\dots ,l\), \(\left( {\varvec{w}}_{n_r}^{\varepsilon },\lambda _{n_r}^{\varepsilon }\right) _{\varepsilon >0}\subset H^1_D(\Omega ;\mathbb {R}^d)\times \mathbb {R}\) denotes the corresponding sequence of \(L^2_{{\varvec{\varphi }}}(\Omega ;\mathbb {R}^d)\)-normalized eigenfunctions and eigenvalues, which are non-trivial solutions of the state equation (\(SE^\varepsilon \)) involved in the optimization problem (\(\mathcal {P}^{\varepsilon }_{l}\)).
4.1 Outer Expansions
As in [20], we first consider the asymptotic expansions in regions “far” away from the interface. To this end, we assume expansions of the form
for all \(x\in \Omega \). Furthermore, in order to be compatible with the constraints on the phase-field formulated in Sect. 2.2, we demand that \(\varvec{\varphi }_0(x)\in \varvec{G}\) and \(\varvec{\varphi }_k(x)\in T\Sigma \) for all \(k\ge 1\) and all \(x\in \Omega \). As we are concerned with a formal limit process, we assume all the appearing quantities to possess a suitable regularity such that we can write the state equation (\(SE^\varepsilon \)) in its strong formulation.
Using standard arguments relying on the \(\Gamma \)-convergence of the Ginzburg–Landau energy in [14], we can partition the domain as
where \(\mathcal {N}\subset \Omega \) is a Lebesgue null set. In general, the sets \(\Omega _i\) are only sets of finite perimeter. This follows from the boundedness of the Ginzburg–Landau energy, the inequality [14, (3.1)] and [14, Proposition 2.2]. Nevertheless, for our asymptotic analysis we assume them to be smooth enough.
With this knowledge, we are in a position to derive the limit state equation resulting from (\(SE^\varepsilon \)) in the framework of outer asymptotic expansions.
Claim 4.1
Recall the scaling of \(\mathbb {C}\) and \(\rho \) in (2.5), i.e.,
for \({\varvec{\varphi }}\in \varvec{G}\). Then, for \(r\in \left\{ 1,\dots ,l\right\} \), we obtain that the pair \((\lambda _{0,n_r},{\varvec{w}}_{0,n_r})\) fulfills the eigenvalue equations in the material regions
for \(i=1,\dots , N-1\). Furthermore, the normalization condition (2.13) is transferred to the limit eigenfunction \({\varvec{w}}_{0,n_r}\) meaning that
In particular, the eigenfunction \({\varvec{w}}_{0,n_r}\) is non-trivial in \(\Omega _i\) for at least one index \(i\in \{1,\dots ,N-1\}\). Thus, \({\varvec{w}}_{0,n_r}\) cannot be a localized eigenmode as it is not supported only in the void region \(\Omega _N\).
Remark 4.2
-
(a)
Of course, the eigenvalue \(\lambda _{n_r}^\varepsilon \) could degenerate in the limit, i.e., \(\lambda _{0,n_r}=0\). This does not contradict the normalization (4.4) because \({\varvec{w}}_{0,n_r}\) could potentially be a non-trivial constant in each material region \(\Omega _i\). If each material region \(\Omega _i\) shares a sufficiently nice part of the boundary with \(\Gamma _D\), one can use Korn’s inequality (see, e.g., [33, Theorem 6.15-4] or [69, Theorem 62.13]) to deduce that \({\varvec{w}}_{0,n_r}=0\) in each \(\Omega _i\), which would then indeed contradict (4.5). The inner expansions will provide us with boundary conditions that allow us to refine this statement, see Sect. 7 (especially Remark 7.1).
-
(b)
In the case \(\lambda _{0,n_r}>0\), even though the limit eigenvalue equations (\({SE}^{i}_0\)) hold for all \(i\in \{1,\dots ,N-1\}\), the eigenfunction \({\varvec{w}}_{0,n_r}\) could potentially be non-trivial only in one particular material region \(\Omega _i\) but vanish in all other material regions \(\Omega _j\) with \(j\in \{1,\dots ,N-1\}\setminus \{i\}\). This means that a non-trivial equation might hold only in one single material region.
Let us now verify Claim 4.1, assuming that outer expansions of the form (4.1) exist. For the sake of a cleaner presentation, we fix the index \(n_r\in \mathbb {N}\) and omit the subscript \(n_r\) in the following. In the spirit of formal asymptotics, we consider the state equation (\(SE^\varepsilon \)), i.e.,
and the normalization condition
resulting from (2.13). Then, we plug in the asymptotic expansions (4.1) and consider each resulting order in \(\varepsilon \) separately.
We deduce that (4.5) reads to order \(\mathcal {O}(1)\) as
which proves (4.4). As a consequence, \({\varvec{w}}_0\) has to be non-trivial in \(\Omega _i\) for at least one index \(i\in \{1,\dots ,N-1\}\).
Eventually, we compare the contributions of order \(\mathcal {O}(1)\) in the state equation. We obtain
which reads for each phase
for \(i=1,\dots ,N-1\). The remaining boundary conditions on the outer boundary \(\Gamma \) follow directly by plugging the asymptotic expansion into (\(SE^\varepsilon \)). This completes the argument.
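The order-by-order comparison performed above can be mimicked by truncated polynomial arithmetic in \(\varepsilon \). The sketch below (with purely illustrative coefficient values) multiplies the expansions of \(\rho ({\varvec{\varphi }}^\varepsilon )\) and \(|{\varvec{w}}^\varepsilon |^2\) and reads off the \(\mathcal {O}(1)\) and \(\mathcal {O}(\varepsilon )\) coefficients, which is the discrete analogue of collecting the leading orders in the normalization condition:

```python
def series_mul(a, b):
    """Multiply two truncated power series in eps given as coefficient lists."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

# illustrative coefficients of rho(phi^eps) = rho_0 + eps*rho_1 + ...
rho = [2.0, 3.0, 1.0]
# ... and of the (scalar stand-in for the) eigenfunction w^eps = w_0 + eps*w_1 + ...
w = [5.0, 7.0, 0.5]

w_sq = series_mul(w, w)          # |w^eps|^2 order by order
prod = series_mul(rho, w_sq)     # rho(phi^eps) * |w^eps|^2

# O(1) term: rho_0 * w_0^2, the integrand of the limit normalization
assert prod[0] == rho[0] * w[0] ** 2
# O(eps) term mixes the first-order corrections of both expansions
assert prod[1] == rho[1] * w[0] ** 2 + 2 * rho[0] * w[0] * w[1]
```

The same bookkeeping underlies the comparison of orders in the state equation; only the coefficients there are functions rather than numbers.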
4.2 Intermezzo on Spurious Eigenmodes
As already mentioned in the introduction, we now want to analytically justify the model that will be chosen for the numerical computations in order to avoid spurious eigenmodes. As we have seen in the above reasoning, assuming outer expansions of the form (4.1) and the general decomposition of \(\mathbb {C}\) and \(\rho \) as in (4.3), we recover the desired limit system. In this subsection, in order to discuss spurious eigenmodes, we consider (4.3) with the specific choices \(k=1\), \(l=2\), and in addition to (2.5) and (2.6), we choose the interpolation functions \(\alpha _M\) and \(\beta _M\) such that
In numerical simulations the phenomenon of spurious eigenmodes is a serious issue, see [7, 18, 29, 63]. The problem is that if the model parameters are not chosen correctly, eigenmodes that are supported only in the void region can actually emerge. Of course, the associated eigenvalues are unphysical as the void should not contribute to the resonance behavior of the structure. Nevertheless, even though spurious eigenmodes might not be avoidable in numerical simulations, they do not pose any problem if their associated eigenvalues are large, since then they do not affect the part of the spectrum that is involved in the optimization problem (\(\mathcal {P}^{\varepsilon }_{l}\)). For this reason, as also observed in the aforementioned literature, the key idea is to choose the scaling and interpolation in (4.3) in such a way that spurious eigenmodes will only produce large eigenvalues or, more precisely, eigenvalues \(\lambda ^\varepsilon \) with \(\lambda ^\varepsilon \rightarrow \infty \) as \(\varepsilon \rightarrow 0\). In particular, this means that by using an adequate interplay of scaling and interpolation, spurious eigenmodes will not enter the sharp-interface limit as their eigenvalues leave the considered part of the spectrum.
In order to allow for spurious eigenmodes in our asymptotic expansions, we have to include terms of negative order in \(\varepsilon \).
Claim 4.3
Assume the following outer asymptotic expansions
for an arbitrary \(m\in \mathbb {N}\). Let \(\mathbb {C}\) and \(\rho \) be given as in (4.3) with \(k=1\), \(l=2\), i.e.,
for \({\varvec{\varphi }}\in \varvec{G}\). Then, for \(r\in \left\{ 1,\dots ,l\right\} \), we obtain \({\varvec{w}}_{k,n_r}=\varvec{0}\) and \(\lambda _{k,n_r}=0\) for \(k<-1\) and the pair \((\lambda _{-1,n_r},{\varvec{w}}_{-1,n_r})\) fulfills
Remark 4.4
The asymptotic analysis in the argumentation of this subsection is crucially based on the interplay of the non-negative interpolation functions \(\alpha _M\) and \(\beta _M\), see (4.7), and the different \(\varepsilon \)-scaling of the void components in (4.3). Note that these two features are also important for our numerical experiments in Sect. 9, where the quadratic interpolation of \(\mathbb {C}\) and \(\rho \) as well as the relatively lower scaling in \(\varepsilon \) of the void contribution of \(\rho \) compared to the void contribution of \(\mathbb {C}\) are crucial to obtain meaningful results. It has also already been observed in the literature that a relatively lower scaling of mass compared to stiffness is an appropriate choice to deal with localized eigenmodes, see [7, 29, 63].
We now argue why Claim 4.3 is true. To this end, we consider the state equation (\(SE^\varepsilon \)) and the normalization (4.5). First of all, we note that plugging the asymptotic expansion of \({\varvec{\varphi }}^\varepsilon \) into (4.9) yields
As a first step, let us show that \({\varvec{w}}_{k}=\varvec{0}\) in \(\Omega \) for \(k=-m,-m+1,\dots ,-2\). To this end, let us start with the contribution of lowest order \(\mathcal {O}(\varepsilon ^{-2m})\) in (4.5), which reads as
This implies that \({\varvec{w}}_{-m}=0\) in \(\Omega \backslash \Omega _N\), or in other words, \({\varvec{w}}_{-m}\) is localized in the void region. Now, we consider (4.5) to the order \(\mathcal {O}(\varepsilon ^{-2m+2})\). We have
Here, we used that \(-2m+2<0\). As \({\varvec{w}}_{-m}\) is localized in the void we infer
Thus, due to the non-negativity of the first summand in (4.12) we deduce
Using the crucial global non-negativity of \(\beta _M\) as assumed in (4.7), we have \(\overline{\rho }({\varvec{\varphi }}_1)\ge 0\), see (4.9). Moreover, \(\varvec{\varphi }_0=\varvec{e}_N\) in \(\Omega _N\) and we thus deduce from (2.6)
Hence, since \(\tilde{\rho }^N\) is positive, we infer \({\varvec{w}}_{-m}=\varvec{0}\) in \(\Omega \). These steps can now be repeated until the critical order \(\mathcal {O}(1)\) is reached because up to this order, the normalization equation (4.5) possesses a trivial left-hand side. This shows \({\varvec{w}}_{k}=\varvec{0}\) for \(k=-m,-m+1,\dots ,-2\). As in (4.11), we additionally conclude that \({\varvec{w}}_{-1}=\varvec{0}\) in \(\Omega \backslash \Omega _N\).
With this knowledge, we are in a position to show \(\lambda _{k}=0\) for \(k=-m,-m+1,\dots ,-2\). To this end, let us consider the energy associated with (\(SE^\varepsilon \)), i.e.,
Due to the fact that \({\varvec{w}}_{k}=\varvec{0}\) in \(\Omega \) for \(k=-m,-m+1,\dots ,-2\) and \({\varvec{w}}_{-1}=\varvec{0}\) in \(\Omega \backslash \Omega _N\), we deduce that the right-hand side is of leading order \(\mathcal {O}(\varepsilon ^{-1})\). This directly implies \(\lambda _{k}=0\) for \(k=-m,-m+1,\dots ,-2\) as well as
It remains to show that \((\lambda _{-1},{\varvec{w}}_{-1})\) solves the desired limit eigenvalue problem. To this end, we consider the state equation (\(SE^\varepsilon \)) to order \(\mathcal {O}(1)\)
In \(\Omega _N\) this simplifies to
To sum up this intermezzo, we have seen that even if spurious eigenmodes are not excluded, an appropriate choice of the model parameters forces the associated eigenvalues to leave the considered part of the spectrum in the limit \(\varepsilon \rightarrow 0\). Hence, the spurious modes do not affect our optimization problem.
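The predicted blow-up \(\lambda ^\varepsilon \sim \varepsilon ^{-1}\) of void-localized eigenvalues can be observed directly on a toy model: in a pure void region with stiffness scaled by \(\varepsilon \) and density by \(\varepsilon ^2\) (the choice \(k=1\), \(l=2\) from Claim 4.3), the eigenvalue problem is a fixed problem rescaled by \(\varepsilon ^{-1}\). A 1D finite-difference sketch (grid, interval, and Dirichlet conditions are ad hoc choices):

```python
import numpy as np

n = 100
h = 1.0 / (n + 1)
# stiffness matrix of -u'' on (0,1) with homogeneous Dirichlet data
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def void_eigs(eps):
    # void scaling with k = 1, l = 2: stiffness eps*K, density eps^2;
    # generalized problem (eps*K) u = lam (eps^2) u  =>  lam = eig(K)/eps
    return np.linalg.eigvalsh(eps * K) / eps**2

lam_coarse = void_eigs(1e-2)
lam_fine = void_eigs(1e-3)

# the smallest spurious eigenvalue behaves like pi^2/eps ...
assert abs(lam_fine[0] * 1e-3 - np.pi**2) < 0.05
# ... so it scales like 1/eps and leaves any fixed spectral window
assert np.isclose(lam_fine[0] / lam_coarse[0], 10.0, rtol=1e-8)
```

This reproduces the order \(\mathcal {O}(\varepsilon ^{-1})\) of the pair \((\lambda _{-1},{\varvec{w}}_{-1})\) in Claim 4.3 in the simplest possible setting; the coupled multi-material problem adds interface conditions but not a different scaling.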
4.3 Inner Expansions
In the interfacial regions, i.e., in layers separating two outer regions, we need to rescale our coordinate system in order to take into account that \(\varvec{\varphi }^{\varepsilon }\) changes rapidly in directions perpendicular to the interface.
Therefore, for all \(i,j=1,\dots ,N\), we write \(\Gamma =\Gamma _{ij}\) to denote the sharp interface separating \(\Omega _i\) and \(\Omega _j\). Moreover, let \(\varvec{n}_{\Gamma _{ij}}\) denote the unit normal vector field on \(\Gamma \) pointing from \(\Omega _i\) to \(\Omega _j\). In the following, we omit these indices to provide a cleaner presentation.
We now introduce a suitable coordinate system that fits the geometry of the interface. The following discussion can be found, e.g., in [1] and thus we only give the key steps needed for our analysis. Let us choose a local parametrization
of \(\Gamma \), where U is an open subset of \(\mathbb {R}^{d-1}\). We further define \(\varvec{\nu }:= \varvec{n}_\Gamma \circ \varvec{\gamma }\).
As we want to describe a whole neighborhood surrounding the local part of the interface \({\varvec{\gamma }(U)\subset \Gamma }\), we introduce the signed distance function relative to \(\Omega _i\) which satisfies \(d(x)>0\) if \(x\in \Omega _j\) and \(d(x)<0\) if \(x\in \Omega _i\). For more details concerning the signed distance function we refer the reader to [48, Sec. 14.6]. By introducing the rescaled distance coordinate \(z(x){:}{=}\frac{1}{\varepsilon }d(x)\in \mathbb {R}\) we define for fixed \(z\in \mathbb {R}\) and sufficiently small \(\varepsilon >0\) the \((d-1)\)-dimensional submanifold
which describes a translation of \(\Gamma \) in the direction \(\varvec{\nu }\). Here, for x belonging to a sufficiently thin tubular neighborhood around \(\varvec{\gamma }(U)\), \(\varvec{s}(x)\) is the unique point in U such that \(\varvec{\gamma }(\varvec{s}(x))\in \Gamma \) is the orthogonal projection of x onto \(\Gamma \). The summand
shifts the point \(\varvec{\gamma }\big (\varvec{s}(x)\big )\) back onto x. Hence, a sufficiently thin tubular neighborhood around \(\varvec{\gamma }(U)\) can be expressed by the coordinate system \((\varvec{s},z)\).
Now we can express the transformation of differential operators with respect to the coordinate transformation \(x\mapsto (\varvec{s}(x),z(x))\). Therefore, let us consider an arbitrary scalar function
It holds
where \(\nabla _{\Gamma _{\varepsilon z}}\) stands for the surface gradient on \(\Gamma _{\varepsilon z}\). Proceeding analogously, we deduce that the divergence of a vector-valued function \(\varvec{j}(x)=\hat{\varvec{j}}(\varvec{s}(x),z(x))\) can be expressed as
where \(\nabla _{\Gamma _{\varepsilon z}}\cdot \hat{\varvec{j}}\) stands for the surface divergence on \(\Gamma _{\varepsilon z}\). Furthermore, the full gradient of a vector-valued function is given by
where \(\otimes \) denotes the dyadic product that is defined as \(\varvec{a}\otimes \varvec{b}=(a_ib_j)_{i,j=1}^{d}\) for all \(\varvec{a},\varvec{b}\in \mathbb {R}^d\). Analogously, for a matrix-valued function
we apply formula (4.18) to each component of the row-wise defined divergence \(\nabla _{x}\cdot \mathcal {A}\). We obtain
For the Laplacian we obtain the representation
Here \(\mathcal {W}\) denotes the Weingarten map associated with \(\Gamma \) that is given by
see, e.g., [39, Appendix B]. Its non-trivial eigenvalues \(\kappa _1,\dots ,\kappa _{d-1}\) are the principal curvatures of \(\Gamma \) and its spectral norm can be expressed as
Furthermore, \(\kappa \) denotes the mean curvature, which is defined as the sum of the principal curvatures of \(\Gamma \). Note that in view of (4.22), \(\kappa \) can be expressed as
which will be important for later purposes.
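For a circle of radius R in the case \(d=2\), the signed distance is \(d(x)=|x|-R\), the extended unit normal is \(\varvec{\nu }(x)=x/|x|\), and \(\nabla \cdot \varvec{\nu }=1/|x|\), so on \(\Gamma \) the single principal curvature, and hence \(\kappa \), equals 1/R (up to the sign convention fixed by the choice of the signed distance). A finite-difference check, with an ad hoc step size:

```python
import numpy as np

def nu(x):
    # extended unit normal of a circle centered at the origin:
    # nu = grad d with d(x) = |x| - R (independent of R)
    return x / np.linalg.norm(x)

def divergence(f, x, h=1e-5):
    # central finite-difference divergence in 2D
    e = np.eye(2)
    return sum((f(x + h * e[i])[i] - f(x - h * e[i])[i]) / (2 * h)
               for i in range(2))

R = 2.0
p = np.array([R, 0.0])          # a point on the interface Gamma
kappa = divergence(nu, p)       # equals 1/|x| = 1/R on Gamma for d = 2
assert abs(kappa - 1.0 / R) < 1e-8
```

The same computation in \(d=3\) for a sphere would return \(2/R\), the sum of the two principal curvatures, consistent with the trace formula for the Weingarten map.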
To conclude this section, we introduce the inner expansions that we will work with in the next section. To this end, we make the ansatz
where we assume \(\varvec{\Phi }_0(\varvec{s}(x),z(x))\in \varvec{G}\) and \(\varvec{\Phi }_k(\varvec{s}(x),z(x))\in T\Sigma ^N\) for all \(k\ge 1\). In the next section, we will relate these inner expansions to the outer expansions that were introduced before.
Remark 4.5
Note that the eigenvalues \(\lambda _{n_r}^\varepsilon \) do not depend locally on \(x\in \Omega \) and thus, their inner expansion simply equals their outer expansion.
5 The Matching Conditions
So far, we have constructed outer expansions, which are supposed to hold inside the material regions \(\Omega _i\) for \(i=1,\dots , N\), as well as inner expansions, which are supposed to hold in a tubular neighborhood around the sharp interfaces \(\Gamma _{ij}\). Note that due to the construction in the previous section, the thickness of this tubular neighborhood is proportional to \(\varepsilon \). In order to be compatible, both expansions must agree, via suitable matching conditions, in an intermediate region. This region is approximately given by all points \(x\in \Omega \) with \(\text {dist}(x,\Gamma )\le \varepsilon ^{\theta }\) for some fixed \(\theta \in (0,1)\). This means we stretch the tubular neighborhood on which the inner expansions were constructed from a thickness proportional to \(\varepsilon \) to a thickness proportional to \(\varepsilon ^{\theta }\) and relate both expansions in this region. The matching conditions will be expressed as limit conditions for the inner expansions as \(\varepsilon \rightarrow 0\) or, equivalently, as \(z\rightarrow \pm \infty \), depending on the side from which we approach the interface. This procedure is standard in the context of formally matched asymptotics, so we only state the matching conditions; for the computations, see [41].
Using the notation
for the lowest order term we have the matching condition
For the term of order \(\mathcal {O}(\varepsilon )\) we have
for all \(x = \varvec{\gamma }(\varvec{s})\in \Gamma =\Gamma _{ij}\). Note that here the symbol \(\,\approx \,\) means that the difference of the left-hand side and the right-hand side as well as all its derivatives with respect to z tend to zero as \(z\rightarrow \pm \infty \). In particular (5.3) provides us with
The analogous relations also hold true for the expansions of \(\varvec{w}^\varepsilon _{n_r}\).
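For orientation, the generic matching conditions of this type (a sketch consistent with the uses made of (5.2)–(5.4) below, with \({\varvec{\varphi }}_0^{\pm }\), \({\varvec{\varphi }}_1^{\pm }\) denoting the outer expansion terms evaluated on either side of \(\Gamma \)) take the following form:

```latex
% Generic matching conditions (sketch): as z \to \pm\infty,
\varvec{\Phi}_0(\varvec{s}, z) \;\approx\; {\varvec{\varphi}}_0^{\pm}(x),
\qquad
\varvec{\Phi}_1(\varvec{s}, z) \;\approx\; {\varvec{\varphi}}_1^{\pm}(x)
  + \big(\nabla {\varvec{\varphi}}_0^{\pm}(x)\,\varvec{\nu}\big)\, z,
\qquad
\partial_z \varvec{\Phi}_1(\varvec{s}, z) \;\approx\; \nabla {\varvec{\varphi}}_0^{\pm}(x)\,\varvec{\nu}.
```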
In the following, we will also see that the jump across the interfaces \(\Gamma _{ij}\) is an important quantity. It is defined by
for any \(x = \varvec{\gamma }(\varvec{s})\in \Gamma =\Gamma _{ij}\).
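A definition of the jump consistent with this convention (our reconstruction of the omitted display, with \(\varvec{\nu }\) pointing from \(\Omega _i\) into \(\Omega _j\)) is:

```latex
% Jump of a quantity \varvec{v} across \Gamma_{ij} (reconstruction):
\big[\varvec{v}\big]_{i}^{j}(x)
  \,{:}{=}\, \lim_{\delta \searrow 0}
  \Big( \varvec{v}\big(x + \delta\,\varvec{\nu}\big)
      - \varvec{v}\big(x - \delta\,\varvec{\nu}\big) \Big).
```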
Now, we have made all the necessary computations to analyze the state equations and the gradient equality near the interfaces \(\Gamma _{ij}\). In particular, we are able to investigate their behavior as \(\varepsilon \rightarrow 0\).
6 Comparison of the Leading Order Terms
Now, we want to apply our knowledge about the inner and outer expansions to the optimality system consisting of (\(SE^\varepsilon \)) and (\(GS^\varepsilon \)). This means we apply the formulas for the differential operators discussed in Sect. 4.3 to the optimality system, compare terms of the same order in \(\varepsilon \) and apply the matching conditions. In this section, we will suppress the index \(n_r\) to provide a clearer notation.
6.1 Comparison of the Leading Order Terms in the State Equation
We point out that our state equation (\(SE^\varepsilon \)) differs from the one in [20] only in terms of the right-hand side. In contrast to [20], where the right-hand side is just a given function \(\varvec{f}\), our right-hand side is given by
In particular, it depends on the phase-field \({\varvec{\varphi }}\) as well as the corresponding eigenvalue \(\lambda ^{\varepsilon ,{\varvec{\varphi }}}\) and its associated eigenfunction \(\varvec{w}^{\varepsilon ,\varvec{\varphi }}\). Recall that the inner expansion of \(\lambda ^{\varepsilon ,\varvec{\varphi }}\) equals its outer expansion (cf. Remark 4.5) as the eigenvalue does not depend locally on \(x\in \Omega \). As no derivatives of \(\rho \), \(\varvec{w}^{\varepsilon ,\varvec{\varphi }}\) or \(\varvec{\varphi }\) are involved, we conclude that the inner expansion of (6.1) possesses only summands of non-negative orders in \(\varepsilon \). As the discussion of the left-hand side of the state equation works exactly as in [20], we can thus proceed in a completely analogous manner. We will therefore only summarize the most important results.
For the functions \({\textbf {W}}_0\) and \({\textbf {W}}_1\) involved in the inner expansion of the eigenfunction, we deduce the following relations:
for all \(x = \varvec{\gamma }(\varvec{s})\in \Gamma _{ij}\). Here and in the remainder of this paper, the expression “around \(\Gamma _{ij}\)” means that the statement is valid in a sufficiently thin tubular neighborhood around \(\Gamma _{ij}\) where our inner expansions hold. We thus arrive at the jump condition
However, we point out that the jump condition on an interface between a material region and a void region (i.e., \(i=N\) or \(j=N\)) is negligible as we do not have any information about the behavior of \(\varvec{w}_0\) in the void. In other words, we will obtain a closed system of PDEs forming the state equations of the sharp-interface problem in Sect. 7 without needing this additional jump condition at the material-void boundary.
For the function \(\overline{\mathbb {C}}(\varvec{\Phi }_0(z))\), where \(\varvec{\Phi }_0\) is the lowest order term of the inner expansion of the phase-field, we obtain:
Here, the convergence (6.9) follows due to the additional factor \(\varepsilon \) in the void contribution of \(\overline{\mathbb {C}}({\varvec{\varphi }}^\varepsilon )\) (see (4.3)).
Eventually, we obtain that
holds on each \(\Gamma _{ij}\) with \(i\ne N\), where
6.2 Comparison of the Leading Order Terms in the Gradient Equality
Now, we want to analyze the gradient equality (\(GS^\varepsilon \)), which reads as
Here, we recall that the Lagrange multipliers were constructed in Theorem 3.12 in such a way that their sum appearing in the gradient equality (6.11) is scaled by the factor \(\tfrac{1}{\varepsilon }\). We now assume the Lagrange multipliers to have the following inner asymptotic expansions:
Furthermore, in order to deal with the nonlinear terms in (6.11) involving \(\mathbb {C}^\prime \), \(\rho ^{\prime }, \psi _0^\prime , \partial _{\lambda _{n_r}} \Psi \), we perform a (componentwise) Taylor expansion around the leading order term \(\varvec{\Phi }_0\) to obtain the inner expansions
We now take a closer look at the quantities \(\mathbb {C}^\prime (\varvec{\Phi }_0)\) and \(\rho ^{\prime }(\varvec{\Phi }_0)\). To this end, we recall the definition of \(\rho \) in (2.9), which reads as
Thus, it is clear that \(\rho ' = \overline{\rho }' + \mathcal {O}(\varepsilon )\). Note that we can write the projection \(P_{\Sigma }\) as
for \({\varvec{\varphi }}\in \mathbb {R}^N\), where \({\textbf {1}} = (1,...,1)^T \in \mathbb {R}^N\). For the partial derivatives with respect to \(\varphi ^j\) with \(j\in \{1,...,N\}\), we thus obtain
and therefore,
where \(\delta _{ij}\) denotes the Kronecker delta. Inserting \(\varvec{\Phi }_0\) (which belongs pointwise to \(\varvec{G}\subset \Sigma ^N\), so no projection is necessary) and recalling that \(\sigma _{\omega }\) is the identity on [0, 1] (cf. (2.8)), using (2.6) we arrive at
Thus, considering the inner expansion of \(\rho ^\prime ({\varvec{\varphi }}^\varepsilon )\) to the lowest order \(\mathcal {O}(1)\) gives
In particular, \(\overline{\rho }^\prime (\varvec{\Phi }_0)\in T\Sigma ^N\). The function \(\mathbb {C}^\prime (\varvec{\Phi }_0)\) can be expressed analogously. Altogether, this allows us to drop the projection acting on the left-hand side in (6.11) when considering only the lowest order contributions.
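Since these projection formulas drive much of the subsequent computation, a quick numerical sanity check may be helpful. The following sketch (all names hypothetical) verifies, for the formula \(P_{\Sigma }({\varvec{\varphi }})={\varvec{\varphi }}+\tfrac{1}{N}(1-\sum _i\varphi ^i){\textbf {1}}\) stated above, that the Jacobian has entries \(\delta _{ij}-\tfrac{1}{N}\) and maps every vector into the tangent space \(T\Sigma ^N=\{\sum _i v^i=0\}\):

```python
import numpy as np

def P_sigma(phi):
    """Projection onto the affine hyperplane {sum(phi) = 1}, as in the text:
    P(phi) = phi + (1/N) * (1 - sum(phi)) * ones."""
    N = len(phi)
    return phi + (1.0 - phi.sum()) / N * np.ones(N)

N = 4
phi = np.array([0.3, -0.1, 0.5, 0.8])

# P_sigma indeed lands on the hyperplane {sum = 1}.
assert abs(P_sigma(phi).sum() - 1.0) < 1e-12

# The Jacobian is constant: dP^i/dphi^j = delta_ij - 1/N,
# i.e. the matrix I - (1/N) 11^T; check against central finite differences.
jac = np.eye(N) - np.ones((N, N)) / N
eps = 1e-7
for j in range(N):
    e = np.zeros(N); e[j] = 1.0
    fd = (P_sigma(phi + eps * e) - P_sigma(phi - eps * e)) / (2 * eps)
    assert np.allclose(fd, jac[:, j], atol=1e-6)

# Consequently the derivative maps any vector into T Sigma^N = {sum = 0}.
v = np.array([1.0, 2.0, -3.0, 0.5])
assert abs((jac @ v).sum()) < 1e-12
print("projection derivative checks passed")
```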
Now that we have considered all the quantities appearing in (6.11), we begin with our formal asymptotics. First of all, applying formula (4.19) to the lowest-order contribution \({\textbf {W}}_0\) of the inner expansion of \(\varvec{w}^\varepsilon \), we find that
Comparing the contributions of order \(\mathcal {O}(\varepsilon ^{-2})\) in (6.11), we use (6.15) to obtain
This equation is obviously fulfilled since \(\partial _z{\textbf {W}}_0\) vanishes according to (6.3).
Let us now consider (6.11) to order \(\mathcal {O}(\varepsilon ^{-1})\). First of all, we infer from (6.3) and (6.15) that the left-hand side has no contribution of order \(\mathcal {O}(\varepsilon ^{-1})\). We thus have
where we used the formula (4.21) to compute the Laplacian. Multiplying (6.16) by \(\partial _z\varvec{\Phi }_0\) and integrating with respect to z from \(-\infty \) to \(\infty \), we deduce
Now, we consider each of the terms in (6.17) separately. First of all, we see
where the last equality follows from the matching condition (5.2).
As \(\varvec{\Phi }_0\in \varvec{G}\) pointwise, we know that \(\partial _{z}\varvec{\Phi }_0\in T\Sigma ^N\) pointwise. Hence, we obtain
For the last equality, we used the fact that \(\psi _0\) vanishes on \(\varvec{e}_i\) for \(i=1,\dots ,N\) along with the matching condition (5.2). We have thus shown that the right-hand side of (6.17) vanishes.
Recall from (3.26) that all components of \(\varvec{\Lambda }^{\varepsilon }\) are identical. It is therefore natural to assume that every term in the inner expansion of \(\varvec{\Lambda }^\varepsilon \) also has this property. Thus, recalling that \(\partial _{z}\varvec{\Phi }_0\in T\Sigma ^N\) pointwise, we infer
where \(\Lambda _0\) denotes an arbitrary component of \(\varvec{\Lambda }_0\).
Recall from Theorem 3.12 that \(\varvec{\vartheta }^{\varepsilon }\in \mathbb {R}^N\) is constant. Thus, assuming that this property transfers to the inner expansion, i.e., that \(\varvec{\vartheta }_0\) is independent of z, we infer by means of the matching condition (5.2) that
Eventually, we want to justify that the remaining Lagrange multiplier fulfills
To this end, we recall (3.27), which tells us for \(i=1,\dots ,N\) that
Using [48, Lemma 7.7], we infer that for all \(i\in \{1,\dots ,N\}\),
Using (4.17) and comparing the terms of order \(\mathcal {O}(\varepsilon ^{-1})\), we deduce
for all \(i\in \{1,\dots ,N\}\). In particular, by multiplying with \(\varvec{\nu }\) and integrating with respect to z from \(-\infty \) to \(\infty \), we arrive at
for all \(i\in \{1,\dots ,N\}\). This proves (6.22).
Combining (6.18)–(6.22), we conclude from (6.17) that
for all \(i,j=1,\dots ,N\), meaning that all components of \(\varvec{\vartheta }_0\) are equal. Since \(\varvec{\vartheta }^\varepsilon \in T\Sigma ^N\) in (3.28), we also assume \(\varvec{\vartheta }_0\in T\Sigma ^N\). This implies that \(\varvec{\vartheta }_0 = \varvec{0}\) and thus, (6.16) can be rewritten as
Let now \(\tilde{z}\in \mathbb {R}\) be arbitrary. Multiplying (6.25) by \(\partial _{z}\varvec{\Phi }_0\) and integrating with respect to \(\tilde{z}\) from \(-\infty \) to \(\tilde{z}\), we obtain
Here, the last equality holds because of (6.20) and (6.22). By the fundamental theorem of calculus, we thus have
for all \(\tilde{z}\in \mathbb {R}\). We further know from the matching condition (5.2) that the left-hand side vanishes as \(\tilde{z}\rightarrow \pm \infty \). This entails
and thus, we arrive at
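The identity arrived at here is the classical equipartition of energy along the optimal profile. As a plausibility check, the following scalar sketch (a hypothetical model: a smooth double well \(\psi (u)=\tfrac{1}{4}(1-u^2)^2\) in place of the paper's multi-well obstacle potential \(\psi _0\)) verifies equipartition for the explicit tanh profile:

```python
import numpy as np

# Scalar analogue (hypothetical illustration, not the paper's setting):
# double well psi(u) = (1 - u^2)^2 / 4 with minima at u = +-1 and
# optimal transition profile Phi(z) = tanh(z / sqrt(2)).
psi  = lambda u: (1 - u**2)**2 / 4
dpsi = lambda u: -u * (1 - u**2)

z = np.linspace(-8, 8, 2001)
Phi = np.tanh(z / np.sqrt(2))
dPhi = (1 - Phi**2) / np.sqrt(2)       # exact derivative of the tanh profile

# Profile equation Phi'' = psi'(Phi), the scalar analogue of the
# leading-order equation for Phi_0:
ddPhi = -Phi * (1 - Phi**2)
assert np.allclose(ddPhi, dpsi(Phi), atol=1e-12)

# Equipartition of energy, pointwise in z:
#   (1/2) |Phi'|^2 = psi(Phi).
assert np.allclose(0.5 * dPhi**2, psi(Phi), atol=1e-12)
print("equipartition verified for the tanh profile")
```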
In order to obtain further information, we next show that (6.25) can be interpreted as the first-order optimality condition of a particular optimization problem that is similar to the minimization of the one-dimensional Ginzburg–Landau energy. Therefore, we first assume that
possesses a minimizer, which we call \(\varvec{\theta }_{ij}\). This means that \(\varvec{\theta }_{ij}\) is a geodesic with respect to the degenerate metric induced by the potential \(\psi _0\) that connects the values \(\varvec{e}_i\) and \(\varvec{e}_j\). Now, proceeding as in [68, proof of formula (15)], this geodesic can be used to construct a minimizer \(\varvec{\Phi }\) of the problem
This means that \(\varvec{\Phi }\) describes an optimal transition between the values \(\varvec{e}_i\) and \(\varvec{e}_j\). As in [68, proof of formula (15)], we further see that \(\varvec{\Phi }\) solves (6.25) and (6.27), where \(\varvec{\Lambda }_0+\varvec{\mu }_0\) is the Lagrange multiplier for the Gibbs–Simplex constraint. Consequently, choosing \(\varvec{\Phi }_0=\varvec{\Phi }\) we have found a solution of (6.25) and (6.27). Moreover, [68, formula (15)] states that \(2\sigma _{ij}\) is exactly the value of the minimum sought in (6.29).
As the minimizer \(\varvec{\Phi }_0=\varvec{\Phi }\) of (6.29) satisfies (6.27), we further conclude
which will be important for later purposes.
Finally, we consider (6.11) to the order \(\mathcal {O}\left( 1\right) \). Using (4.21) to reformulate the term \(\gamma \varepsilon \Delta \varvec{\varphi }\), employing (6.3), and recalling that (6.14) holds analogously for \(\overline{\mathbb {C}}^\prime (\varvec{\Phi }_0)\), we conclude
We now multiply this equation by \(\partial _{z}\varvec{\Phi }_0\) and integrate with respect to z from \(-\infty \) to \(\infty \). Let us consider each term of the resulting equation separately. Analogously to (6.20) and (6.21), we obtain
Considering the Lagrange multiplier \(\varvec{\mu }\), we recall (6.23)
Due to (4.17), its contribution of leading order \(\mathcal {O}(1)\) in inner coordinates is given by
for all \(i\in \{1,\dots ,N\}\). Multiplying this identity by \(\varvec{\nu }\) and integrating the resulting equation with respect to z, we infer
Furthermore, applying integration by parts twice and using that due to the matching condition (5.2) all derivatives of \(\varvec{\Phi }_0\) with respect to z tend to 0 as \(z\rightarrow \pm \infty \), we obtain
As \(\partial _{z}\varvec{\Phi }_0\) attains its values only in \(T\Sigma ^N\), we deduce
due to the symmetry of the Hessian matrix. Moreover, recalling that \({\textbf {W}}_{0}\) is independent of z due to (6.3), a simple computation yields
Furthermore, by the definition of the dyadic product, it holds
Now we use (6.3) (which directly entails \(\partial _z\nabla _{\Gamma }{\textbf {W}}_0=\varvec{0}\)), (6.4) and \(\partial _{z}\varvec{\nu }=\varvec{0}\) to deduce
by means of the product rule and integration by parts.
Collecting (6.32)–(6.39) and recalling (6.30), we eventually obtain
on \(\Gamma _{ij}\), where \((...)^{\text {sym}}\) abbreviates \(\big (\nabla _{\Gamma }{\textbf {W}}_{0,n_r}+\partial _{z}{\textbf {W}}_{1,n_r}\otimes \varvec{\nu }\big )^{\text {sym}}\). Next, we want to show that
Differentiating (6.25) with respect to z, multiplying by \(\varvec{\Phi }_1\) and integrating the resulting equation with respect to z, we deduce
Thus, in order to prove (6.41), it suffices to show
By means of the product rule, the left-hand side can be reformulated as
Now as \(\varvec{\Phi }_1,\partial _z\varvec{\Phi }_1 \in T\Sigma ^N\) pointwise, we know as in (6.20)
Thus, as we want to prove (6.42), it remains to show
Recalling once more formula (3.27), we infer
for all \(i\in \{1,\dots ,N\}\). Hence, for the \(\mathcal {O}(1)\)-contribution and the \(\mathcal {O}(\varepsilon )\)-contribution of the inner expansions, we obtain the relations
respectively, for all \(i\in \{1,\dots ,N\}\) and \(z\in \mathbb {R}\). Now, the first equation in (6.44) implies that for any \(z\in \mathbb {R}\) with \(\Phi _0^i(z)\ne 0\), we have \(\mu _0^i(z)=0\) and thus also \(\mu _0^i(z)\Phi _1^i(z)=0\). On the other hand, for all \(z\in \mathbb {R}\) with \(\Phi _0^i(z)= 0\), we infer from the second equation in (6.44) that \(\mu _0^i(z)\Phi _1^i(z)=0\). Combining both statements, we conclude
This proves (6.43). By the above considerations, this verifies (6.42) which in turn implies equation (6.41).
To conclude this section, we recall the definition of the jump, see (5.5). Moreover, we recall from (4.23) that the mean curvature of \(\Gamma _{ij}\) is given by \(\kappa _{ij} = - \nabla _{\Gamma _{ij}} \cdot \varvec{n}_{\Gamma _{ij}}\). Using the matching conditions (5.4), (6.5) and (6.6), we finally infer from (6.40) that
on \(\Gamma _{ij}\) for all \(i,j\in \{1,\dots ,N-1\}\). In the case \(j=N\) and \(i\ne N\), equation (6.40) simplifies to
on \(\Gamma _{iN}\) by the matching in (6.9) and (6.10).
To conclude this section, we note that, according to [20, Section 5.3] or [28, Section 2.4], equation (6.25) induces a further solvability condition, namely an angle condition at triple junctions. To see this, let us assume that the regions \(\Omega _i\), \(\Omega _j\), \(\Omega _k\) (with i, j, k pairwise different) meet at a triple point \(m_{ijk}\) in the two-dimensional case or along a triple curve \(m_{ijk}\) in the three-dimensional case. Then the angle condition is expressed via the normals of the three meeting interfaces as follows
This relation immediately implies the angle condition
where \(\theta _{ij}\) denotes the angle between \(\varvec{n}_{\Gamma _{jk}}\) and \(\varvec{n}_{\Gamma _{ki}}\). For the choice \(\psi _0({\varvec{\varphi }})=\frac{1}{2}(1-{\varvec{\varphi }}\cdot {\varvec{\varphi }})\) we deduce that the transition energies denoted by \(\sigma _{ij},\sigma _{jk},\sigma _{ki}\) are always equal. This follows by exploiting the symmetry of this potential in (6.28). Thus, in this case, we know that triple junctions always occur at a \(120^\circ \) contact angle.
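The generic form of such a triple-junction condition, as known from multi-phase Allen–Cahn systems (our reconstruction of the omitted displays, not quoted from the source), is a force balance of the weighted normals together with the resulting sine rule:

```latex
% Force balance at the triple junction m_{ijk} and resulting angle
% condition (generic reconstruction):
\sigma_{ij}\,\varvec{n}_{\Gamma_{ij}}
 + \sigma_{jk}\,\varvec{n}_{\Gamma_{jk}}
 + \sigma_{ki}\,\varvec{n}_{\Gamma_{ki}} = \varvec{0}
\quad \text{on } m_{ijk},
\qquad
\frac{\sigma_{ij}}{\sin\theta_{ij}}
 = \frac{\sigma_{jk}}{\sin\theta_{jk}}
 = \frac{\sigma_{ki}}{\sin\theta_{ki}}.
```

For equal transition energies, the sine rule forces all three angles to be equal, which matches the \(120^\circ \) statement above.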
7 The Sharp-Interface Problem
Now we are in a position to state the complete problem that is obtained from (\(SE^\varepsilon \)) and (\(GI^\varepsilon \)) in the sharp-interface situation.
7.1 The Sharp-Interface Limit of the State Equation
To this end, we recall that the domain \(\Omega \) is partitioned into N regions \(\Omega _i\) for \(i=1,\dots ,N\) representing the presence of the i-th material (\(i<N\)) or void (\(i=N\)) in its pure form. These regions are separated by interfaces \(\Gamma _{ij}\). Furthermore, we have chosen \(\varvec{n}_{\Gamma _{ij}}\) to be the unit normal vector field on \(\Gamma _{ij}\) pointing from \(\Omega _i\) into the region \(\Omega _j\). This means that
To capture the behavior of a function \(\varvec{v}\) across the interface \(\Gamma _{ij}\), we defined its jump by
for all \(x\in \Gamma _{ij}\), see (5.5).
Combining the equations (\({SE}^{i}_0\)) derived in Claim 4.1 and the jump conditions obtained in (6.5) and (6.10), we obtain the system
for \(i,j=1,\dots ,N-1\) and \(r=1,\dots ,l\), as the sharp-interface limit of the state equation (\(SE^\varepsilon \)). Here, \({\varvec{w}}_{0,n_r}\) is normalized in the material regions, i.e.,
Furthermore, we infer from (6.10) that
for all \(i\in \{1,\dots ,N-1\}\) and each \(r\in \{1,\dots ,l\}\). However, this condition does not provide any additional information as we do not know how \({\varvec{w}}_{0,n_r}\) behaves in the void region. In particular, we see that by interpreting (\({SE}^{ij}_r\)) as one system of PDEs in the material region \(\bigcup _{i=1}^{N-1}\Omega _i\), the homogeneous Neumann boundary condition in the fourth line of (\({SE}^{ij}_r\)) is enough to obtain a closed system.
Combining the Neumann type jump condition on \(\Gamma _{ij}\) stated in the second line of (\({SE}^{ij}_r\)) with the normality condition (7.1), we are able to obtain the relation
with
where \(\mathbb {1}_{\Omega _i}\) denotes the characteristic function on \(\Omega _i\). This means that the eigenvalue \(\lambda _{0,n_r}\) in the sharp-interface setting is indeed solely determined by an eigenvalue equation on the material region \(\Omega ^M\) but does not have any contribution from the void region.
To verify (7.3), we test (\({SE}^{ij}_r\)) with \({\varvec{w}}_{0,n_r}\) and integrate by parts. This yields
for all \(i\in \{1,\dots ,N-1\}\), where \(\varvec{n}_{\Gamma _i}\) stands for the outer unit normal vector field of \(\partial \Omega _i\). Noticing that the outer unit normal vector simply switches its sign on neighboring boundaries, we now use the second and the fourth line of (\({SE}^{ij}_r\)) to infer
Thus, summing the equations (7.4) from \(i=1\) to \(N-1\) and using property (7.1), we conclude
By the linearity of the integral, this directly proves (7.3).
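The argument above is the infinite-dimensional version of the elementary fact that a generalized eigenvalue equals the Rayleigh quotient of its eigenvector. A minimal discrete sketch (hypothetical random matrices standing in for the stiffness operator and the \(\rho \)-weighted mass operator) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: K ~ stiffness matrix (symmetric positive definite,
# like the elasticity bilinear form), M ~ rho-weighted mass matrix.
A = rng.standard_normal((6, 6))
K = A @ A.T + 6 * np.eye(6)
B = rng.standard_normal((6, 6))
M = B @ B.T + 6 * np.eye(6)

# Solve the generalized problem K w = lambda M w via M^{-1/2} K M^{-1/2}.
Mvals, Mvecs = np.linalg.eigh(M)
M_inv_sqrt = Mvecs @ np.diag(Mvals**-0.5) @ Mvecs.T
lam, V = np.linalg.eigh(M_inv_sqrt @ K @ M_inv_sqrt)
w = M_inv_sqrt @ V[:, 0]              # eigenvector of the original pencil

# Rayleigh quotient representation: lambda = (w^T K w) / (w^T M w);
# with the normalization w^T M w = 1 this reduces to lambda = w^T K w.
w = w / np.sqrt(w @ M @ w)
assert abs(w @ M @ w - 1.0) < 1e-10
assert abs(w @ K @ w - lam[0]) < 1e-8
print("smallest eigenvalue equals its Rayleigh quotient:", lam[0])
```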
Remark 7.1
As a refinement of Remark 4.2 (a), we now see that as long as at least one of the material regions \(\Omega _{1},\dots , \Omega _{N-1}\) shares a sufficiently nice part of its boundary with \(\Gamma _D\), we can apply Korn's inequality in order to deduce that all \(\lambda _{0,n_r}\) are strictly positive. From a physical point of view, this is reasonable: if the material region \(\Omega ^M\) of the structure is not attached to some fixed boundary, the shape can freely move within the design domain just by translation without exhibiting any vibrations.
7.2 The Sharp-Interface Limit of the First-Order Optimality Condition
Now let us turn to the limit of the gradient inequality (\(GI^\varepsilon \)). For the sake of completeness, let us restate our final results from the previous section, i.e., (6.45) and (6.46). We have
on \(\Gamma _{ij}\) for all \(i,j=1,\dots ,N-1\), and
on \(\Gamma _{iN}\) for all \(i=1,\dots ,N-1\) (i.e., in the case \(j=N\)).
Here \(\sigma _{ij}\) is defined as in (6.28) and stands for the total energy of a transition across the interface \(\Gamma _{ij}\). The vector \(\varvec{\vartheta }_1\in \mathbb {R}^N\) denotes the \(\mathcal {O}(\varepsilon )\)-contribution of the Lagrange multiplier resulting from the integral constraint that is hidden in the condition \({\varvec{\varphi }}^\varepsilon \in \varvec{\mathcal {G}}^{\varvec{m}}\) (cf. Theorem 3.12).
Recalling (6.47), we additionally have the triple junction condition at any junction \(m_{ijk}\) with pairwise different \(i,j,k\in \{1,\dots ,N\}\)
7.3 The Sharp-Interface Optimality System in the Case of Only One Material
We now want to state the above equations for the simplest case of only one material (i.e., \(N=2\)), as this is the scenario we further study in the subsequent sections.
In this case, we have \(\Omega =\Omega ^{M}\cup \Omega ^{V}\), where \(\Omega ^{M}\) and \(\Omega ^{V}\) denote the material and the void parts of the domain, respectively. We now denote the interface separating the two phases by \(\Gamma _{MV}\), its outer unit normal vector field by \(\varvec{n}_{\Gamma _{MV}}\) and its mean curvature by \(\kappa _{MV} = - \nabla _{\Gamma _{MV}}\cdot \varvec{n}_{\Gamma _{MV}}\). Using the notation \(\Gamma _D^{M}{:}{=}\Gamma _D\cap \partial \Omega ^M\) and \(\Gamma _0^{M}{:}{=}\Gamma _0\cap \partial \Omega ^M\), we obtain from (\({SE}^{ij}_r\)), (7.1) and (7.2) the state equation
for \(r=1,\dots ,l\), along with the first-order necessary optimality condition
on \(\Gamma _{MV}\). This means that the functions \(\varvec{w}_{0,n_r}\) are eigenfunctions to the eigenvalues \(\lambda _{0,n_r}\) which essentially solve the eigenvalue problem for the elasticity equation subject to a homogeneous Neumann boundary condition on the shape \(\Omega ^M\).
Remark 7.2
Note that, in general, one cannot predict the behavior of solutions to (\({SE}^{MV}_r\)). If \(\Omega ^M\) is merely a set of finite perimeter that does not have a Lipschitz boundary, or if \(\Gamma _{MV}\cap \Gamma _D^M=\emptyset \), the classical spectral theory (as applied in Sect. 2.5) does not provide us with an infinite sequence of positive eigenvalues. Nevertheless, as we want to consider a well-posed minimization problem and want to calculate shape derivatives associated with this problem, we assume that these issues do not occur. In particular, we always assume \(\Omega ^M\) to be sufficiently smooth and \(\partial \Omega ^M\) to have a suitably nice intersection with \(\Gamma ^M_D\) such that an infinite sequence of positive eigenvalues actually exists (see also Remark 7.1).
8 Relating the First-Order Optimality Condition to Classical Shape Calculus
We now want to compare the above results, especially (7.7), to the results in [7], which were obtained using shape calculus. Our goal is to justify that the gradient equality (7.7) is indeed the first-order condition of a sharp-interface eigenvalue optimization problem, which is formally the limit of the diffuse-interface problem we started with. To this end, we need to adapt the notation of [7] to our setting.
As above, consider the situation \(N=2\), i.e., \(\Omega =\Omega ^M\cup \Omega ^V\). Denote by \(P_{\Omega }(\Omega ^M)\) the perimeter of the shape \(\Omega ^M\) within the design domain \(\Omega \), which is given by the Hausdorff measure \(\mathcal {H}^{d-1}(\partial \Omega ^M\cap \Omega )\) provided that \(\Omega ^M\) is non-empty and sufficiently smooth. Furthermore, we consider a prescribed mass \(m=\big |\Omega ^M\big |<|\Omega |\). In order to be consistent with the notation used in the previous chapters, we choose \(\varvec{m}=(m_1,m_2)^T\in \Sigma ^2\) with \(m_1 = m|\Omega |^{-1}\) and \(m_2=1-m_1\). Then the sharp-interface structural optimization problem that we intend to approximate via our diffuse-interface problem (\(\mathcal {P}^{\varepsilon }_{l}\)) reads as
This system is the sharp-interface limit problem associated with the diffuse-interface problem (\(\mathcal {P}^{\varepsilon }_{l}\)), where the side condition is exactly the sharp-interface state equation (\({SE}^{MV}_r\)) and the perimeter \(\sigma _{MV}P_{\Omega }(\Omega ^M)\) is the rigorous \(\Gamma \)-limit of the Ginzburg–Landau energy, see [14]. We recall that the constant \(\sigma _{MV}\) we obtained in (6.28) is exactly the one obtained in [14] in terms of the rigorous \(\Gamma \)-limit, which is denoted by \(d(\varvec{e}_i,\varvec{e}_j)\) there. In particular, \(\sigma _{MV}\) is independent of the shape \(\Omega ^M\).
In case an ambiguity might arise, we indicate the shape dependency explicitly in the eigenfunctions and eigenvalues, i.e., we write \((\lambda _{n_r}(\Omega ^M),{\varvec{w}}_{n_r}(\Omega ^M))\) for \(r=1,\dots ,l\). Now, we want to apply the calculus of shape derivatives from [7, Thm. 2.5] to our situation. We obtain the following statement.
Theorem 8.1
Let \(\Omega ^M\) be a smooth bounded open set and let \(\varvec{\theta }\in W^{1,\infty }(\mathbb {R}^d,\mathbb {R}^d)\) with \({\varvec{\theta }\cdot \varvec{n}_{\partial \Omega ^M}=0}\) on \(\partial \Omega ^M\backslash \Gamma _{MV}\). We further assume that for \(r=1,\dots ,l\), the eigenfunctions \(\varvec{w}_{n_r}(\Omega ^M)\) in \(({SE}^{MV}_r)\) are sufficiently smooth, say \(\varvec{w}_{n_r}(\Omega ^M)\in H^2(\Omega ^M;\mathbb {R}^d)\).
Then, if the involved eigenvalues \(\lambda _{n_r}\) for \(r=1,\dots ,l\) are all simple, the shape derivative of J at the shape \(\Omega ^M\) in the direction \(\varvec{\theta }\) fulfills the equation
Here, the shape derivative of J at a shape \(\Omega ^M\) is defined as the Fréchet derivative of the functional
evaluated at \(\varvec{\zeta }=\varvec{0}\).
Remark 8.2
-
(a)
Note that the simplicity of the eigenvalues is crucial here. Only then is it guaranteed that the eigenvalues and eigenfunctions depend on the domain \(\Omega ^M\) in a differentiable way. For a comprehensive overview of the differentiability of spectral quantities with respect to the domain, we refer to [50, Section 5.7].
-
(b)
For \(\varvec{\zeta }\in W^{1,\infty }(\mathbb {R}^d;\mathbb {R}^d)\), the mapping
$$\begin{aligned} T_{\varvec{\zeta }}:\mathbb {R}^d\rightarrow \mathbb {R}^d, \quad x \mapsto (\textrm{Id}+\varvec{\zeta })(x), \end{aligned}$$is invertible if \(\left\| \varvec{\zeta }\right\| _{W^{1,\infty }}<1\), and it holds \((\textrm{Id}+\varvec{\zeta })^{-1}-\textrm{Id}\in W^{1,\infty }(\mathbb {R}^d;\mathbb {R}^d)\) with
$$\begin{aligned} \left\| (\textrm{Id}+\varvec{\zeta })^{-1}-\textrm{Id}\right\| _{W^{1,\infty }}\le \left\| \varvec{\zeta }\right\| _{W^{1,\infty }}(1-\left\| \varvec{\zeta }\right\| _{W^{1,\infty }})^{-1}. \end{aligned}$$This means the family \((T_{\varvec{\zeta }})_{\varvec{\zeta }\in W^{1,\infty }}\) describes diffeomorphic perturbations of \(\Omega ^M\) “close” to \(\Omega ^M\) if \(\left\| \varvec{\zeta }\right\| _{W^{1,\infty }}\) is small, motivating the definition of the shape derivative above. For a detailed discussion of this concept, we refer to [50, Section 5.2].
Proof
We proceed analogously to [7, Theorem 2.5]. In the following, \(\Omega _{\varvec{\zeta }}=(\textrm{Id}+\varvec{\zeta })(\Omega ^M)\) denotes the perturbation of \(\Omega ^M\) associated with a sufficiently small \(\varvec{\zeta }\in W^{1,\infty }(\mathbb {R}^d;\mathbb {R}^d)\). First of all, for \(\varvec{v}_{n_r}\in H^1(\mathbb {R}^d;\mathbb {R}^d)\) with \(r=1,\dots ,l\), we introduce the Lagrangian
For the partial Fréchet derivatives of the Lagrangian with respect to \(\varvec{v}_{n_r}\) for \(r=1,\dots ,l\) at the point \((\Omega _{\varvec{\zeta }},\varvec{w}_{n_1}(\Omega _{\varvec{\zeta }}),\dots , \varvec{w}_{n_l}(\Omega _{\varvec{\zeta }}))\), we obtain
This is simply due to the fact that the derivative of the Rayleigh quotient
evaluated at an eigenfunction \(\varvec{w}_n=\varvec{w}_n(\Omega _{\varvec{\zeta }})\) reads as
and this vanishes due to (\({SE}^{MV}_r\)).
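In our notation, the computation alluded to here is presumably the following (a sketch, not quoted from the source; \(\mathcal {E}\) denotes the symmetrized gradient, and the normalization factor depends on conventions):

```latex
% Rayleigh quotient and its derivative (sketch):
R(\varvec{v})
  = \frac{\int_{\Omega_{\varvec{\zeta}}} \mathbb{C}\,
      \mathcal{E}(\varvec{v}) : \mathcal{E}(\varvec{v}) \,\mathrm{d}x}
    {\int_{\Omega_{\varvec{\zeta}}} \rho\, \varvec{v}\cdot\varvec{v} \,\mathrm{d}x},
\qquad
R'(\varvec{w}_n)[\varvec{v}]
  = \frac{2}{\int_{\Omega_{\varvec{\zeta}}} \rho\,\lvert\varvec{w}_n\rvert^2 \,\mathrm{d}x}
    \left( \int_{\Omega_{\varvec{\zeta}}} \mathbb{C}\,
      \mathcal{E}(\varvec{w}_n) : \mathcal{E}(\varvec{v}) \,\mathrm{d}x
    - \lambda_n \int_{\Omega_{\varvec{\zeta}}} \rho\, \varvec{w}_n\cdot\varvec{v} \,\mathrm{d}x \right).
```

The derivative vanishes for every admissible \(\varvec{v}\) precisely because \(\varvec{w}_n\) is an eigenfunction to the eigenvalue \(\lambda _n\).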
On the other hand, recalling the definition of J in (\(\mathcal {P}_l^0\)), we obviously have
as the eigenvalues can be expressed by the corresponding Rayleigh quotients. Note that due to the differentiability of eigenfunctions as discussed in Remark 8.2, we can now apply the chain rule. Thus, using (8.2) we infer that the shape derivative is given by
Applying the formulas for shape derivatives in [7, Lemma 2.3], we deduce
where \(\kappa _M\) denotes the mean curvature of \(\partial \Omega ^M\). By the assumption \({\varvec{\theta }\cdot \varvec{n}_{\partial \Omega ^M}=0}\) on \(\partial \Omega ^M\backslash \Gamma _{MV}\), the boundary integrals vanish on \(\partial \Omega ^M\backslash \Gamma _{MV}\) and we thus arrive at (8.1). Note that in [7], the mean curvature is defined as \(\kappa =\nabla _{\partial \Omega ^M}\cdot \varvec{n}_{\partial \Omega ^M}\), whereas (in accordance with (4.23)) our mean curvature is given by \(\kappa =-\nabla _{\partial \Omega ^M}\cdot \varvec{n}_{\partial \Omega ^M}\). This explains the negative sign of our term involving \(\kappa _M\). \(\square \)
Remark 8.3
The preceding theorem shows that using the approach of classical shape calculus and additionally taking the volume constraint \(\big |\Omega ^M\big |=m\) into account, we recover the gradient equality (7.7) since the volume constraint produces a Lagrange multiplier as in our previous analysis. This justifies our formal approach from the viewpoint of classical shape calculus since (7.7) can be interpreted as the first-order necessary optimality condition of the shape optimization problem (\(\mathcal {P}_l^0\)).
9 Numerical Examples
In the following, we present numerical results that illustrate the applicability of our approach to find optimal topologies. After a brief introduction of the numerical method, we investigate the dependence of solutions on the parameter \(\varepsilon \) in Sect. 9.1. To this end, we study a particular setting of an elastic beam that is known from the literature (cf. [7]). In Sect. 9.2, we consider a joint optimization of \(\lambda _1\) and \(\lambda _2\) for this beam setup, and in Sect. 9.3, we investigate an extended optimization problem where we optimize the shape and topology of this beam not only with respect to its first eigenvalue but also with respect to its compliance.
As in Sects. 7.3 and 8, we restrict ourselves to the case of only two phases, i.e., material and void. In this situation, the vector-valued phase-field \({\varvec{\varphi }}=(\varphi ^1,\varphi ^2)\) can be represented by a scalar order parameter
and in the Ginzburg–Landau energy \(E^\varepsilon (\varphi ) = \int _\Omega \frac{\varepsilon }{2}|\nabla \varphi |^2 +\psi (\varphi ) \,\text {d}x\), we choose
where \(I_{[-1,1]}\) is the indicator functional of the interval \([-1,1]\). This means that \(\varphi \) attains its values in \([-1,1]\), where “1” represents the material and “\(-1\)” represents the void. The elastic tensor \(\mathbb {C}(\varphi )\) is now defined as
for Lamé parameters \(\mu , \ell >0\) and the quadratic interpolation function \(\alpha (\varphi )\) satisfying \(\alpha (1) = 1\), \(\alpha (-1) = \underline{\alpha }\varepsilon \), and \(\alpha ^\prime (-1) = 0\) for some constant \(\underline{\alpha }\). The eigenvalue equation is given by
with the quadratic interpolation function \(\beta (\varphi )\) satisfying \(\beta (1) = 1\), \(\beta (-1) = \underline{\beta }\varepsilon ^2\) and \({\beta ^\prime (-1) = 0}\), as well as an additional density function \(\rho \) that may depend on the spatial variable. Unless stated otherwise, we use \(\underline{\alpha }= 2\cdot 10^{-4}\) and \(\underline{\beta }= 10^{-4}\). Note that this choice of interpolation functions exactly reflects the choice (2.10) and the scaling choice \(k=1, l=2\) discussed in Sect. 4.2. More precisely, we have
and analogously for \(\alpha \).
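The explicit coefficients of these quadratics are not displayed in the text, but the three conditions \(\alpha (1) = 1\), \(\alpha (-1) = \underline{\alpha }\varepsilon \) and \(\alpha ^\prime (-1) = 0\) determine them uniquely. The following minimal sketch (our reconstruction; the function name is ours) solves for the coefficients and applies equally to \(\beta \) with \(\beta (-1) = \underline{\beta }\varepsilon ^2\):

```python
def quad_interp(val_at_minus_one):
    """Quadratic p(t) = a*t**2 + b*t + c with p(1) = 1,
    p(-1) = val_at_minus_one and p'(-1) = 0 (our reconstruction)."""
    a = (1.0 - val_at_minus_one) / 4.0       # solves the three conditions
    b = 2.0 * a                              # from p'(-1) = -2a + b = 0
    c = (1.0 + 3.0 * val_at_minus_one) / 4.0
    return lambda t: a * t * t + b * t + c

eps = 0.02
alpha = quad_interp(2e-4 * eps)   # alpha(-1) = underline_alpha * eps
beta = quad_interp(1e-4 * eps**2)  # beta(-1) = underline_beta * eps^2
```

The choice \(b = 2a\) enforces the vanishing slope at \(\varphi = -1\), so the stiffness degrades smoothly into the void phase.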
Numerical Solution Method. The numerical implementation is based on linear finite elements for all functions, provided by the finite element package FEniCS [9, 57] together with the PETSc linear algebra backend [12, 13]. For the eigenvalue problem, we use the package SLEPc [51]. The optimization problem is solved by the VMPT method proposed in [24]. In our case, it can be understood as an extension of the projected gradient method to the space \(H^1(\Omega ) \cap L^\infty (\Omega )\). We refer to [24, 43, 44] for more details.
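The core linear-algebra task behind this pipeline is a generalized symmetric eigenvalue problem \(K\varvec{w} = \lambda M\varvec{w}\), with the stiffness matrix weighted by \(\alpha (\varphi )\) and the mass matrix weighted by \(\beta (\varphi )\rho \). As a self-contained illustration (not the paper's FEniCS/SLEPc code), the sketch below solves the 1D model \(-w'' = \lambda w\) on \((0,1)\) with homogeneous Dirichlet data, using SciPy's dense symmetric solver in place of SLEPc:

```python
import numpy as np
from scipy.linalg import eigh  # dense stand-in for SLEPc's iterative solvers

def smallest_eigenpair(n=200):
    # linear finite elements on (0,1) with a lumped (diagonal) mass matrix
    h = 1.0 / (n + 1)
    K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h  # stiffness
    M = h * np.eye(n)                                             # lumped mass
    lam, W = eigh(K, M)   # generalized eigenproblem K w = lam M w
    return lam[0], W[:, 0]
```

The smallest eigenvalue approximates \(\pi ^2\); in the paper's setting, SLEPc solves the analogous (sparse, 2D, \(\varphi \)-weighted) problem at every optimization step.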
9.1 Numerical Investigation of the Sharp-Interface Limit \(\varepsilon \rightarrow 0\)
In this section, to illustrate the sharp-interface limit, we present numerical results for a sequence of decreasing values of \(\varepsilon \).
We use the setup from [7, Sec. 7.1] to find a cantilever beam with maximal first eigenvalue, i.e., we choose \(\Psi (\lambda _1) = -\lambda _1\). Our computational domain is given by \(\Omega = (0,2)\times (0,1)\). The Young’s modulus is \(E = 1\) and Poisson’s ratio is \(\nu = 0.3\) leading to \(\mu \approx 0.38\) and \(\ell \approx 0.58\). We define the subset \(\Omega _{\rho } = (1.9,2.0)\times (0.45,0.55)\) and set \(\rho (x) = 1\) if \(x \not \in \Omega _\rho \) and \(\rho (x) = 100\) if \(x \in \Omega _\rho \). We also fix \(\varphi (x) = 1\) for all \(x \in \Omega _\rho \). The beam is supposed to be attached to the wall at the left boundary of \(\Omega \), i.e., at \(\Gamma _D = \{ (0,\eta )\mid \eta \in (0,1)\} \subset \partial \Omega \). This leads to the boundary condition \(w = 0\) on \(\Gamma _D\). We further set \(\Gamma _0 = \partial \Omega \setminus \Gamma _D\) and we fix \(\gamma = 10^{-4}\) and \(\int _\Omega \varphi = 0\).
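The quoted Lamé parameters follow from the standard conversion \(\mu = E/(2(1+\nu ))\) and \(\ell = E\nu /((1+\nu )(1-2\nu ))\) (the 3D/plane-strain formulas, which reproduce the stated values); a quick check:

```python
def lame_from_young_poisson(E, nu):
    # standard conversion from Young's modulus / Poisson's ratio
    mu = E / (2.0 * (1.0 + nu))
    ell = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return mu, ell

mu, ell = lame_from_young_poisson(1.0, 0.3)  # mu ~ 0.3846, ell ~ 0.5769
```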
As in [7, Sec. 7.1], we start our optimization process with a checkerboard-type initial function given by \(\varphi _0(x) = {{\,\textrm{sign}\,}}\big (v(x)\big )\, \big |v(x)\big |^{0.3}\) with \(v(x) = \cos (3\pi x_1)\cos (4\pi x_2)\) for all \(x\in \Omega \). We want to emphasize that this problem is expected to have many local minima and thus, the choice of the initial function can significantly influence the shape and topology of the local minimizer found by our numerical method.
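The initialization can be evaluated pointwise on a grid; a minimal NumPy sketch (function and grid names are ours):

```python
import numpy as np

def phi0(x1, x2):
    # checkerboard-type initial phase field on Omega = (0,2) x (0,1)
    v = np.cos(3.0 * np.pi * x1) * np.cos(4.0 * np.pi * x2)
    # the exponent 0.3 pushes the profile towards +-1 while keeping its sign
    return np.sign(v) * np.abs(v) ** 0.3

x1, x2 = np.meshgrid(np.linspace(0.0, 2.0, 101), np.linspace(0.0, 1.0, 51))
phi = phi0(x1, x2)  # values lie in [-1, 1]
```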
We now solve the optimization problem for a decreasing sequence of values of \(\varepsilon \). In Table 1, we present the values of \(\varepsilon \) together with the corresponding value of the Ginzburg–Landau energy \(E^\varepsilon (\varphi ) = \int _\Omega \frac{\varepsilon }{2}|\nabla \varphi |^2 + \frac{1}{2\varepsilon }(1-\varphi ^2) \,\text {d}x\) and the eigenvalue \(\lambda _1\). Recall that the values of the Ginzburg–Landau energy converge to a weighted perimeter of the shape in the sharp-interface limit \(\varepsilon \rightarrow 0\). In Fig. 1, we present the zero level lines of the (locally optimal) shapes we obtain for different values of \(\varepsilon \). Here, we started with \(\varepsilon =0.08\) and used each local optimum as the initial value for the subsequent simulation.
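For the double-obstacle potential, the weight appearing in this perimeter limit is the classical Modica–Mortola constant \(c_0 = \int _{-1}^{1}\sqrt{2\psi (s)}\,\textrm{d}s = \int _{-1}^{1}\sqrt{1-s^2}\,\textrm{d}s = \pi /2\) (up to the paper's normalization). A quick numerical check of this constant (our illustration, not code from the paper):

```python
import numpy as np

def perimeter_weight(num=20001):
    # c0 = int_{-1}^{1} sqrt(2*psi(s)) ds  with  psi(s) = (1 - s^2)/2
    s = np.linspace(-1.0, 1.0, num)
    f = np.sqrt(np.clip(1.0 - s * s, 0.0, None))
    h = s[1] - s[0]
    # composite trapezoidal rule; endpoint values vanish
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))
```

The result approaches \(\pi /2 \approx 1.5708\) as the grid is refined.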
9.2 Optimization of a Beam
As a first test, we illustrate the influence of the regularization strength \(\gamma \) on the resulting structure. The parameter \(\gamma \) acts as a weight for the penalization of the length of the interface between void and material. Thus, a smaller value of \(\gamma \) is expected to lead to thinner structures containing more braces. Using the same setup as before, we solve the optimization problem for the cantilever beam again, but this time we fix \(\varepsilon =0.02\). We perform two simulations with \(\gamma \in \{10^{-4},10^{-5}\}\). The smaller \(\gamma \) is chosen, the finer the structures we expect. We also expect to reach a larger value of \(\lambda _1\), because less regularization is used.
In Fig. 2, we present the structures found for these parameters: on the left the result for \(\gamma =10^{-4}\) and on the right the result for \(\gamma =10^{-5}\). As expected, it is clearly visible that the structure obtained for the smaller value of \(\gamma \) is finer and contains more braces. Additionally, decreasing \(\gamma \) also leads to sharper corners.
In a second test for the beam setup, we compare the numerical results for different choices of \(\Psi (\lambda _1,\lambda _2)\) as a linear combination of \(\lambda _1\) and \(\lambda _2\). We set \(\gamma =10^{-4}\) and use the solution shown in Fig. 2 as the initialization of the optimization method. In Fig. 3, we present numerical results for this setting with the choice \(\Psi (\lambda _1,\lambda _2) = -\lambda _1 - \alpha \lambda _2\) for \(\alpha \in \{10^{-2},2\cdot 10^{-2},6\cdot 10^{-2},10^{-1}\}\). Moreover, in Table 2 we list the corresponding values of \(\lambda _1\) and \(\lambda _2\). Here, \(\alpha =0\) corresponds to the result shown in Fig. 2 on the left.
9.3 Joint Optimization of Compliance and Principal Eigenvalue
In this subsection, we extend the problem by using a linear combination of compliance and the first eigenvalue as objective. For any given \(\varphi \in H^1(\Omega )\cap L^\infty (\Omega )\), the compliance problem is to find a displacement field \(\varvec{u}_c^\varphi \in H^1(\Omega ;\mathbb {R}^d)\) satisfying
which minimizes the objective \(\int _{\Gamma _g}\varvec{g}\cdot \varvec{u}_c^\varphi \).
Combining this with our eigenvalue optimization problem for \(\Psi (\lambda _1) = -\alpha \lambda _1\) for some \(\alpha > 0\), we arrive at
This means that we are looking for a structure that simultaneously minimizes the compliance with respect to a given force \(\varvec{g}\) and maximizes the first eigenvalue \(\lambda _1\). The optimization problem (9.5) is actually a special case of the compliance and eigenvalue optimization problems studied in [45] (with the quantities therein being chosen as \(N=2\), \(\alpha =1\), \(\beta =0\), \(\varvec{f} \equiv {\textbf {0}}\), \(J_0 \equiv 0\) and \(U_c = H^1(\Omega )\), i.e., \(S_0=S_1=\emptyset \)). More details about the formulation of the problem (9.5) can be found in [45, Section 2.7]. For the optimality system of (9.5), which we are going to solve numerically, we refer to [45, Theorem 7.2].
For the sharp-interface limit of compliance optimization problems without eigenvalue optimization (such as (9.5), where \(\alpha \) is set to zero), we refer to [20]. As our sharp-interface analysis for eigenvalue optimization problems without compliance optimization relies on the same expansions as in [20], both approaches can be combined to formally derive the sharp-interface limit of problem (9.5).
For our numerical computations, we use the same setup as in Sect. 9.2 for the beam example and fix \(\gamma =1\cdot 10^{-3}\). Moreover, the exterior force is \(\varvec{g} = (0,-1)^T\) and acts on \(\Gamma _g = \{ (2.0,y)\mid y \in (0.45,0.55)\}\). Note that \(\Gamma _g\) belongs to the boundary of the domain \(\Omega _\rho \) on which we assume a higher value of the density \(\rho \).
In Fig. 4, we show numerical results for this setting for different values of \(\alpha \). We observe that the structures become finer when we increase the influence of the principal eigenvalue. In Table 3, we present the corresponding values of the compliance and of \(\lambda _1\) for these shapes. As expected, we obtain a larger compliance when we increase the weight \(\alpha \) of the principal eigenvalue. Simultaneously, we also obtain larger values of the principal eigenvalue. It is worth mentioning that these results compare very well with the ones obtained in [7], where a level-set method was used to directly tackle the sharp-interface problem (see especially Fig. 2 and Fig. 5 in [7]).
References
Abels, H., Garcke, H., Grün, G.: Thermodynamically consistent, frame indifferent diffuse interface models for incompressible two-phase flows with different densities. Math. Models Methods Appl. Sci. 22(3), 1150013, 40 (2012)
Abels, H., Liu, Y.: Sharp interface limit for a Stokes/Allen-Cahn system. Arch. Ration. Mech. Anal. 229(1), 417–502 (2018)
Allaire, G.: Shape Optimization by the Homogenization Method. In: Bloch, A., Epstein, C.L., Goriely, A., Greengard, L. (eds.) Applied Mathematical Sciences. Springer-Verlag, New York (2002)
Allaire, G., Aubry, S., Jouve, F.: Eigenfrequency optimization in optimal design. Comput. Methods Appl. Mech. Eng. 190(28), 3565–3579 (2001)
Allaire, G., Dapogny, C., Delgado, G., Michailidis, G.: Multi-phase structural optimization via a level set method. ESAIM Control Optim. Calc. Var. 20(2), 576–611 (2014)
Allaire, G., Jakabčin, L.: Taking into account thermal residual stresses in topology optimization of structures built by additive manufacturing. Math. Models Methods Appl. Sci. 28(12), 2313–2366 (2018)
Allaire, G., Jouve, F.: A level-set method for vibration and multiple loads structural optimization. Comput. Methods Appl. Mech. Eng. 194(30–33), 3269–3290 (2005)
Almi, S., Stefanelli, U.: Topology optimization for incremental elastoplasticity: a phase-field approach. SIAM J. Control Optim. 59(1), 339–364 (2021)
Alnæs, M., Blechta, J., Hake, J., Johansson, A., Kehlet, B., Logg, A., Richardson, C., Ring, J., Rognes, M., Wells, G.: The FEniCS project version 1.5. Arch. Num. Softw. 3, 100 (2015)
Antunes, P.R.S., Oudet, E.: Numerical results for extremal problem for eigenvalues of the Laplacian. In: Henrot, A. (ed.) Shape Optimization and Spectral Theory, pp. 398–411. De Gruyter Open, Warsaw (2017)
Auricchio, F., Bonetti, E., Carraturo, M., Hömberg, D., Reali, A., Rocca, E.: A phase-field-based graded-material topology optimization with stress constraint. Math. Models Methods Appl. Sci. 30(8), 1461–1483 (2020)
Balay, S., Abhyankar, S., Adams, M., Brown, J., Brune, P., Buschelman, K., Dalcin, L., Eijkhout, V., Gropp, W., Kaushik, D., Knepley, M., May, D., McInnes, L.C., Mills, R.T., Munson, T., Rupp, K., Sanan, P., Smith, B., Zampini, S., Zhang, H., Zhang, H.: PETSc users manual. Tech. Rep. ANL-95/11 - Revision 3.9, Argonne National Laboratory, (2018)
Balay, S., Gropp, W.D., McInnes, L.C., Smith, B.F.: Efficient management of parallelism in object-oriented numerical software libraries. In: Arge, E., Bruaset, A.M., Langtangen, H.P. (eds.) Modern Software Tools for Scientific Computing, pp. 163–202. Birkhäuser Press, Basel (1997)
Baldo, S.: Minimal interface criterion for phase transitions in mixtures of Cahn-Hilliard fluids. Ann. Inst. H. Poincaré Anal. Non Linéaire 7(2), 67–90 (1990)
Barrett, J.W., Blowey, J.F.: An error bound for the finite element approximation of a model for phase separation of a multi-component alloy. IMA J. Numer. Anal. 16(2), 257–287 (1996)
Barrett, J.W., Garcke, H., Nürnberg, R.: On sharp interface limits of Allen-Cahn/Cahn-Hilliard variational inequalities. Discret. Contin. Dyn. Syst. Ser. S 1(1), 1–14 (2008)
Beck, L.: Elliptic regularity theory, vol. 19 of Lecture Notes of the Unione Matematica Italiana. Springer, Cham; Unione Matematica Italiana, Bologna (2016). A first course
Bendsøe, M., Sigmund, O.: Topology Optimization. Springer-Verlag, Berlin (2003). (Theory, methods and applications)
Blank, L., Farshbaf-Shaker, M., Garcke, H., Rupprecht, C., Styles, V.: Multi-material phase field approach to structural topology optimization. Trends in PDE constrained optimization. Internat. Ser. Numer. Math. 165, 231–246 (2014)
Blank, L., Garcke, H., Farshbaf-Shaker, M.H., Styles, V.: Relating phase field and sharp interface approaches to structural topology optimization. ESAIM Control Optim. Calc. Var. 20(4), 1025–1058 (2014)
Blank, L., Garcke, H., Hecht, C., Rupprecht, C.: Sharp interface limit for a phase field model in structural optimization. SIAM J. Control Optim. 54(3), 1558–1584 (2016)
Blank, L., Garcke, H., Sarbu, L., Srisupattarawanit, T., Styles, V., Voigt, A.: Phase-field approaches to structural topology optimization. Constrained optimization and optimal control for partial differential equations. Internat. Ser. Numer. Math. 160, 245–256 (2012)
Blank, L., Garcke, H., Sarbu, L., Styles, V.: Nonlocal Allen-Cahn systems: analysis and a primal-dual active set method. IMA J. Numer. Anal. 33(4), 1126–1155 (2013)
Blank, L., Rupprecht, C.: An extension of the projected gradient method to a Banach space setting with application in structural topology optimization. SIAM J. Control Optim. 55(3), 1481–1499 (2017)
Blowey, J.F., Elliott, C.M.: The Cahn-Hilliard gradient theory for phase separation with non-smooth free energy. I. Mathematical analysis. European J. Appl. Math. 2(3), 233–280 (1991)
Bourdin, B., Chambolle, A.: Design-dependent loads in topology optimization. ESAIM Control Optim. Calc. Var. 9, 19–48 (2003)
Bourdin, B., Chambolle, A.: The phase-field method in optimal design. In: Bendsøe, M., Olhoff, N., Sigmund, O. (eds.) IUTAM Symposium on Topological Design Optimization of Structures, Machines and Materials, pp. 207–215. Springer, Netherlands (2006)
Bronsard, L., Garcke, H., Stoth, B.: A multi-phase Mullins-Sekerka system: matched asymptotic expansions and an implicit time discretisation for the geometric evolution problem. Proc. Roy. Soc. Edinburgh Sect. A 128(3), 481–506 (1998)
Bucur, D., Martinet, E., Oudet, E.: Maximization of Neumann Eigenvalues. Arch. Ration. Mech. Anal. 247(2), 19 (2023)
Burger, M.: A framework for the construction of level set methods for shape optimization and reconstruction. Interfaces Free Bound. 5(3), 301–329 (2003)
Burger, M., Stainko, R.: Phase-field relaxation of topology optimization with local stress constraints. SIAM J. Control Optim. 45(4), 1447–1466 (2006)
Carraturo, M., Rocca, E., Bonetti, E., Hömberg, D., Reali, A., Auricchio, F.: Graded-material design based on phase-field and topology optimization. Comput. Mech. 64(6), 1589–1600 (2019)
Ciarlet, P.G.: Linear and nonlinear functional analysis with applications. Society for Industrial and Applied Mathematics, Philadelphia, PA (2013)
Dedè, L., Borden, M., Hughes, T.: Isogeometric analysis for topology optimization with a phase field model. Arch. Comput. Methods Eng. 19(3), 427–465 (2012)
Delfour, M.C., Zolésio, J.-P.: Shapes and geometries, second ed., vol. 22 of Advances in Design and Control. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (2011). Metrics, analysis, differential calculus, and optimization
Dondl, P., Poh, P.S.P., Rumpf, M., Simon, S.: Simultaneous elastic shape optimization for a domain splitting in bone tissue engineering. Proc. R. Soc. A. 475(2227), 20180718, 17 (2019)
Ebeling-Rump, M., Hömberg, D., Lasarzik, R., Petzold, T.: Topology optimization subject to additive manufacturing constraints. J. Math. Ind. 11, 19 (2021)
Eck, C., Garcke, H., Knabner, P.: Mathematical modeling. Springer Undergraduate Mathematics Series. Springer, Cham (2017)
Elliott, C.M., Luckhaus, S.: A generalised diffusion equation for phase separation of a multi-component mixture with interfacial free energy. In: Preprint SFB 256 University Bonn, vol. 195 (1991)
Fife, P.C.: Dynamics of internal layers and diffusive interfaces, vol. 53 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (1988)
Garcke, H., Hecht, C., Hinze, M., Kahle, C., Lam, K.F.: Shape optimization for surface functionals in Navier-Stokes flow using a phase field approach. Interfaces Free Bound. 18(2), 219–261 (2016)
Garcke, H., Hinze, M., Kahle, C., Lam, K.F.: A phase field approach to shape optimization in Navier-Stokes flow with integral state constraints. Adv. Comput. Math. 44(5), 1345–1383 (2018)
Garcke, H., Hüttl, P., Kahle, C., Knopf, P., Laux, T.: Phase-field methods for spectral shape and topology optimization. ESAIM Control Optim. Calc. Var. 29, 57 (2023)
Garcke, H., Hüttl, P., Knopf, P.: Shape and topology optimization involving the eigenvalues of an elastic structure: a multi-phase-field approach. Adv. Nonlinear Anal. 11(1), 159–197 (2022)
Garcke, H., Lam, K.F., Nürnberg, R., Sitka, E.: A multiphase Cahn-Hilliard-Darcy model for tumour growth with necrosis. Math. Models Methods Appl. Sci. 28(3), 525–577 (2018)
Garcke, H., Lam, K.F., Sitka, E., Styles, V.: A Cahn-Hilliard-Darcy model for tumour growth with chemotaxis and active transport. Math. Models Methods Appl. Sci. 26(6), 1095–1148 (2016)
Gilbarg, D., Trudinger, N.S.: Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin (2001). (Reprint of the 1998 edition)
Grisvard, P.: Elliptic problems in nonsmooth domains, vol. 69 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, (2011). Reprint of the 1985 original
Henrot, A., Pierre, M.: Shape variation and optimization, vol. 28 of EMS Tracts in Mathematics. European Mathematical Society (EMS), Zürich, (2018). A geometrical analysis, English version of the French publication with additions and updates
Hernandez, V., Roman, J.E., Vidal, V.: SLEPc: a scalable and flexible toolkit for the solution of eigenvalue problems. ACM Trans. Math. Software 31(3), 351–362 (2005)
Herzog, R., Meyer, C., Wachsmuth, G.: Integrability of displacement and stresses in linear and nonlinear elasticity with mixed boundary conditions. J. Math. Anal. Appl. 382(2), 802–813 (2011)
Hilhorst, D., Kampmann, J., Nguyen, T.N., Van Der Zee, K.G.: Formal asymptotic limit of a diffuse-interface tumor-growth model. Math. Models Methods Appl. Sci. 25(6), 1011–1043 (2015)
Hüttl, P., Knopf, P., Laux, T.: A phase-field version of the Faber–Krahn theorem. Preprint: arXiv:2207.10946 [math.AP] (2022)
Kao, C.Y., Osher, S., Yablonovitch, E.: Maximizing band gaps in two-dimensional photonic crystals by using level set methods. Appl. Phys. B 81(2), 235–244 (2005)
Kevorkian, J., Cole, J.D.: Multiple scale and singular perturbation methods. Applied Mathematical Sciences, vol. 114. Springer-Verlag, New York (1996)
Logg, A., Mardal, K.-A., Wells, G.: Eds. Automated solution of differential equations by the finite element method - The FEniCS Book, vol. 84 of Lecture Notes in Computational Science and Engineering. Springer, Cham (2012)
Marino, M., Auricchio, F., Reali, A., Rocca, E., Stefanelli, U.: Mixed variational formulations for structural topology optimization based on the phase-field approach. Struct. Multidiscip. Optim. 64(4), 2627–2652 (2021)
Murat, F., Simon, S.: Etudes de problèmes d’optimal design. In: Lecture Notes in Computer Science, vol. 41, pp. 54–62. Springer-Verlag, Berlin (1976)
Osher, S., Santosa, F.: Level set methods for optimization problems involving geometry and constraints: I. Frequencies of a two-density inhomogeneous drum. J. Comput. Phys. 171(1), 272–288 (2001)
Osher, S., Sethian, J.: Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79(1), 12–49 (1988)
Oudet, E.: Numerical minimization of eigenmodes of a membrane with respect to the domain. ESAIM Control Optim. Calc. Var. 10(3), 315–330 (2004)
Pedersen, N.L.: Maximization of eigenvalues using topology optimization. Struct. Multidiscip. Optim. 20(1), 2–11 (2000)
Penzler, P., Rumpf, M., Wirth, B.: A phase-field model for compliance shape optimization in nonlinear elasticity. ESAIM Control Optim. Calc. Var. 18(1), 229–258 (2012)
Shi, P., Wright, S.: Higher integrability of the gradient in linear elasticity. Math. Ann. 299(3), 435–448 (1994)
Simon, J.: Differentiation with respect to the domain in boundary value problems. Numer. Funct. Anal. Optim. 2(7–8), 649–687 (1980)
Sokolowski, J., Zolesio, J.-P.: Introduction to shape optimization: shape sensitivity analysis, vol. 16. Springer Series in Computational Mathematics. Springer-Verlag, Berlin Heidelberg (1992)
Sternberg, P.: Vector-valued local minimizers of nonconvex variational problems. Rocky Mountain J. Math. 21(2), 799–807 (1991)
Zeidler, E.: Nonlinear Functional Analysis and its Applications. II/B: Nonlinear Monotone Operators. Springer-Verlag, New York (1990). Translated from the German by the author and Leo F. Boron
Acknowledgements
Harald Garcke, Paul Hüttl and Patrik Knopf were partially supported by the RTG 2339 “Interfaces, Complex Structures, and Singular Limits” of the German Science Foundation (DFG). The support is gratefully acknowledged.
Funding
Open Access funding enabled and organized by Projekt DEAL. This work received support from Deutsche Forschungsgemeinschaft, Grant No. (321821685).
Ethics declarations
Competing Interests
The authors have not disclosed any competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Garcke, H., Hüttl, P., Kahle, C. et al. Sharp-Interface Limit of a Multi-phase Spectral Shape Optimization Problem for Elastic Structures. Appl Math Optim 89, 24 (2024). https://doi.org/10.1007/s00245-023-10093-3
Keywords
- Shape and topology optimization
- Structural optimization
- Eigenvalue problem
- Sharp-interface limit
- Formally matched asymptotics
- Phase-field models
- Linear elasticity