SLIDE 1

Adaptive wavelet methods for space-time variational formulations of evolutionary PDEs

Rob Stevenson, Korteweg-de Vries Institute for Mathematics, University of Amsterdam

SLIDE 2

Motivation

Consider the heat equation, implicit time discretization, and AFEM for solving the sequence of elliptic problems. Issues:

  • How to distribute 'grid points' optimally over space and time?
  • Often the class of possible space-time grids doesn't include ones that are suited for singularities that are local in space and time.
  • Inherently sequential.
  • When the whole time evolution is needed, as with problems of optimal control, huge memory requirements.

Aim: an adaptive solver for the problem as a whole that is quasi-optimal within a class of trial spaces containing one that gives the best possible rate allowed by the order. Furthermore, using that the space-time cylinder is a product domain, we apply (adaptive) tensor product approximation (cf. sparse grids). So we solve the time evolution at the complexity of an optimal solver for the stationary problem.

SLIDE 3

Topics

  • Optimal adaptive wavelet method for solving well-posed (non-)linear operator equations.
  • New approximate residual evaluation scheme.
  • To avoid $C^1$ wavelets: FOSLS formulations.
  • Applications: elliptic; stationary NSE; parabolic; instationary NSE.
  • Some numerical results.

SLIDE 4

Well-posed operator equations

Let $X$, $Y$ be separable Hilbert spaces (over $\mathbb{R}$), and let $B \in \mathcal{L}\mathrm{is}(X, Y')$. Given $f \in Y'$, we seek $u \in X$ s.t. $Bu = f$. Examples:

  • $(Bw)(v) := \int_\Omega \nabla w \cdot \nabla v\,dx$, $X = Y := H^1_0(\Omega)$ (Poisson problem);
  • $(B(\vec{w}, p))(\vec{v}, q) := \int_\Omega \nabla\vec{w} : \nabla\vec{v} - p\,\operatorname{div}\vec{v} - q\,\operatorname{div}\vec{w}\,dx$, $X = Y := H^1_0(\Omega)^n \times L_2(\Omega)/\mathbb{R}$ (stationary Stokes problem);
  • $(Bw)(v) := \frac{1}{4\pi}\int_{\partial\Omega}\int_{\partial\Omega} \frac{(w(y)-w(x))(v(y)-v(x))}{|x-y|^3}\,dx\,dy$, $X = Y := H^{1/2}(\partial\Omega)/\mathbb{R}$ (hypersingular boundary integral equation);
  • $(Bw)(v) := \int_0^T\!\int_\Omega -w\,\frac{\partial v}{\partial t} + \nabla w\cdot\nabla v\,dx\,dt$ (heat equation), $X := L_2(I; H^1_0(\Omega))$, $Y := L_2(I; H^1_0(\Omega)) \cap H^1_{0,\{T\}}(I; H^{-1}(\Omega))$.

SLIDE 5

Reformulation as a well-posed bi-infinite matrix-vector equation

Let $\Psi^X = \{\psi^X_\lambda : \lambda \in \vee_X\}$, $\Psi^Y = \{\psi^Y_\lambda : \lambda \in \vee_Y\}$ be Riesz bases for $X$, $Y$ (we have wavelet bases in mind). That is, the synthesis operator
$$F_X : \mathbf{c} \mapsto \mathbf{c}^\top\Psi^X := \sum_{\lambda\in\vee_X} c_\lambda \psi^X_\lambda \in \mathcal{L}\mathrm{is}(\ell_2(\vee_X), X),$$
and so its adjoint, the analysis operator,
$$F_X' : g \mapsto g(\Psi^X) := [g(\psi^X_\lambda)]_{\lambda\in\vee_X} \in \mathcal{L}\mathrm{is}(X', \ell_2(\vee_X))$$
(analogously for $F_Y$). Then
$$Bu = f \iff \underbrace{F_Y' B F_X}_{\mathbf{B}}\,\underbrace{F_X^{-1}u}_{\mathbf{u}} = \underbrace{F_Y' f}_{\mathbf{f}},$$
where $\mathbf{B} = (B\Psi^X)(\Psi^Y) \in \mathcal{L}\mathrm{is}(\ell_2(\vee_X), \ell_2(\vee_Y))$ (infinite "stiffness" matrix) and $\mathbf{f} = f(\Psi^Y) \in \ell_2(\vee_Y)$ (infinite "load" vector).

SLIDE 6

Least squares problems

Let $B \in \mathcal{L}(X, Y')$ with $\|B\,\cdot\,\|_{Y'} \eqsim \|\cdot\|_X$. Then
$$\operatorname*{argmin}_{u\in X} \tfrac12\|Bu - f\|^2_{Y'} \iff \operatorname*{argmin}_{\mathbf{u}\in\ell_2(\vee_X)} \tfrac12\|\mathbf{B}\mathbf{u} - \mathbf{f}\|^2_{\ell_2(\vee_Y)} \iff \mathbf{B}^\top(\mathbf{B}\mathbf{u} - \mathbf{f}) = 0,$$
where $\mathbf{B}^\top\mathbf{B} \in \mathcal{L}\mathrm{is}(\ell_2(\vee_X), \ell_2(\vee_X))$.

We will apply these normal equations also when $B \in \mathcal{L}\mathrm{is}(X, Y')$ and thus $Bu = f$ is well-posed ($\mathbf{B} \in \mathcal{L}\mathrm{is}(\ell_2(\vee_X), \ell_2(\vee_Y))$), but not $\mathbf{B} = \mathbf{B}^\top > 0$, because not $X = Y$, $\Psi^X = \Psi^Y$, and $B = B' > 0$.
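As a finite-dimensional toy illustration (my addition, not from the slides; the matrix and sizes are made up), the normal equations replace a well-posed but non-symmetric system by a symmetric positive definite one with the same solution, at the price of squaring the condition number:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A well-posed but non-symmetric "stiffness" matrix B, standing in for a
# finite section of the bi-infinite wavelet system, and a load vector f.
B = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)

u_direct = np.linalg.solve(B, f)

# Normal equations: B^T B is symmetric positive definite, so solvers for
# SPD systems (e.g. conjugate gradients) become applicable.
u_normal = np.linalg.solve(B.T @ B, B.T @ f)
assert np.allclose(u_direct, u_normal)

# The price: the spectral condition number is squared.
print(np.linalg.cond(B.T @ B) / np.linalg.cond(B) ** 2)  # ~1.0
```

Squaring the condition number slows, but does not destroy, the fixed contraction rate of awgm applied to the normal equations.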

SLIDE 7

Adaptive Wavelet-Galerkin scheme ($\mathbf{B}\mathbf{u} = \mathbf{f}$)

([CDD01]) Let $\mathbf{B} = \mathbf{B}^\top > 0$; otherwise apply the scheme to the normal equations.

Goal: to generate a sequence of approximations to $\mathbf{u}$ that, whenever for some $s > 0$, $\|\mathbf{u}\|_{\mathcal{A}^s} := \sup_N N^s\|\mathbf{u} - \mathbf{u}_N\| < \infty$, converges with this rate $s$, at linear cost. (Here $\mathbf{u}_N$ is a best approximation to $\mathbf{u}$ with $\#\operatorname{supp}\mathbf{u}_N \le N$.)

Notations: $\Lambda \subseteq \vee$, $I_\Lambda : \ell_2(\Lambda) \to \ell_2(\vee)$, $R_\Lambda = I_\Lambda^\top : \ell_2(\vee) \to \ell_2(\Lambda)$, $\mathbf{B}_\Lambda := R_\Lambda\mathbf{B}I_\Lambda$, $\mathbf{u}_\Lambda := \mathbf{B}_\Lambda^{-1}R_\Lambda\mathbf{f}$, $|||\cdot||| := \langle\mathbf{B}\cdot,\cdot\rangle^{1/2}$ (we identify $\mathbf{u}_\Lambda$ with $I_\Lambda\mathbf{u}_\Lambda$).

awgm:
  • Solve $\mathbf{B}_{\Lambda_i}\mathbf{u}_{\Lambda_i} = R_{\Lambda_i}\mathbf{f}$;
  • $\Lambda_{i+1} \supset \Lambda_i$ by bulk chasing on $\mathbf{f} - \mathbf{B}\mathbf{u}_{\Lambda_i}$;
  • repeat with $i := i+1$.

(This is the familiar solve-estimate-mark-refine loop known from AFEM, with the role of the a posteriori estimator played by the residual vector. Convergence and optimality are not restricted to elliptic problems.)
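A minimal sketch of the awgm loop on a finite SPD system (my illustration: exact solves and exact residuals on a synthetic matrix and load; not the optimal-complexity scheme of the talk):

```python
import numpy as np

# Idealized awgm: solve on the active set, bulk-chase on the residual,
# enlarge the set, repeat. All data here is synthetic.
n, theta = 400, 0.5
rng = np.random.default_rng(1)
B = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -0.5), 1)
     + np.diag(np.full(n - 1, -0.5), -1))         # SPD, kappa(B) <= 3
f = rng.permutation(np.arange(1, n + 1) ** -2.0)  # compressible load

u = np.zeros(n)
active = np.zeros(n, dtype=bool)                  # the set Lambda_i
for _ in range(30):
    r = f - B @ u                                 # residual = error estimator
    if np.linalg.norm(r) < 1e-12:
        break
    # Bulk chasing: smallest set capturing a theta-fraction of ||r||.
    order = np.argsort(-np.abs(r))
    m = np.searchsorted(np.sqrt(np.cumsum(r[order] ** 2)),
                        theta * np.linalg.norm(r)) + 1
    active[order[:m]] = True
    # Galerkin solve on the enlarged set.
    idx = np.flatnonzero(active)
    u[:] = 0.0
    u[idx] = np.linalg.solve(B[np.ix_(idx, idx)], f[idx])

print(np.count_nonzero(active), np.linalg.norm(f - B @ u))
```

By Prop 1 below, each such step contracts the energy norm error by at least a fixed factor depending on $\theta$ and $\kappa(\mathbf{B})$.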

SLIDE 8

awgm

Prop 1 ([CDD01]). Let $\theta \in (0,1]$, $\Lambda \subset \Xi \subset \vee$, s.t. $\|R_\Xi(\mathbf{f} - \mathbf{B}\mathbf{u}_\Lambda)\| \ge \theta\|\mathbf{f} - \mathbf{B}\mathbf{u}_\Lambda\|$. Then
$$|||\mathbf{u} - \mathbf{u}_\Xi||| \le \big[1 - \kappa(\mathbf{B})^{-1}\theta^2\big]^{1/2}\,|||\mathbf{u} - \mathbf{u}_\Lambda|||.$$

Prop 2 ([GHS07]). If $\theta < \kappa(\mathbf{B})^{-1/2}$ and $\Xi$ is the smallest set satisfying the bulk chasing criterion, then $\#(\Xi\setminus\Lambda) \le N$ for the smallest $N$ s.t.
$$|||\mathbf{u} - \mathbf{u}_N||| \le \big[1 - \theta^2\kappa(\mathbf{B})\big]^{1/2}\,|||\mathbf{u} - \mathbf{u}_\Lambda|||.$$

Corollary 1. awgm realizes the optimal rate $s$ ($N \lesssim \|\mathbf{u} - \mathbf{u}_\Lambda\|^{-1/s}$ & linear convergence), but in this form it is not implementable.

Thm 1. awgm with approximate evaluation of the residual $\mathbf{f} - \mathbf{B}\mathbf{u}_\Lambda$ and approximate solution of $\mathbf{B}_\Lambda\mathbf{u}_\Lambda = R_\Lambda\mathbf{f}$ within a sufficiently small but fixed relative tolerance $\delta$ also converges with the optimal rate $s$; and if such an approximate evaluation of $\mathbf{f} - \mathbf{B}\mathbf{w}$ for $\mathbf{w} \in \ell_2(\Lambda)$ takes $\mathcal{O}(\|\mathbf{u} - \mathbf{w}\|^{-1/s} + \#\Lambda)$ operations (cost condition), then the scheme has optimal computational complexity.

SLIDE 9

Nonlinear operator equations

([XZ03, Ste14]) Theorem 1 generalizes to equations $F(u) = 0$ for $F : X \supset \operatorname{dom}(F) \to Y'$, written in coordinates as $\mathbf{F}(\mathbf{u}) = 0$ where $\mathbf{F} := F_Y' \circ F \circ F_X$, assuming that $X = Y$, $DF(u) \in \mathcal{L}\mathrm{is}(X, X')$ and $DF(u) = DF(u)' > 0$; or to
$$\operatorname*{argmin}_{u\in\operatorname{dom}(F)} \tfrac12\|F(u)\|^2_{Y'},$$
written as $D\mathbf{F}(\mathbf{u})^\top\mathbf{F}(\mathbf{u}) = 0$, only assuming that $\|DF(u)(\cdot)\|_{Y'} \eqsim \|\cdot\|_X$.
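A generic toy illustration (hypothetical, unrelated to wavelets) of driving $D\mathbf{F}(\mathbf{u})^\top\mathbf{F}(\mathbf{u})$ to zero for an overdetermined nonlinear system by Gauss-Newton iteration:

```python
import numpy as np

# Hypothetical toy: find u in R^2 with F(u) = 0 for an overdetermined
# smooth F: R^2 -> R^3 that does admit an exact zero, via the normal
# equations DF(u)^T F(u) = 0 solved by Gauss-Newton.
def F(u):
    x, y = u
    return np.array([x + y - 3.0, x * y - 2.0, x - y + 1.0])

def DF(u):
    x, y = u
    return np.array([[1.0, 1.0], [y, x], [1.0, -1.0]])

u = np.array([0.5, 0.5])
for _ in range(20):
    J, r = DF(u), F(u)
    u = u - np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton step

print(u, np.linalg.norm(F(u)))  # converges to (1, 2), a zero of F
```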

SLIDE 10

Verification of the cost condition

The valid approximate evaluation of $\mathbf{f} - \mathbf{B}\mathbf{w}$ from [CDD01] exploits the near-sparsity of $\mathbf{f}$, $\mathbf{B}$, and $\mathbf{w}$ ($\mathbf{u} \in \mathcal{A}^s \rightsquigarrow \mathbf{w} \in \mathcal{A}^s$). It depends non-linearly on $\mathbf{w}$. A quantitatively better scheme is obtained by not splitting the residual:

Ex 1 (Poisson 1D). With $w := \mathbf{w}^\top\Psi$, approximate
$$\mathbf{f} - \mathbf{B}\mathbf{w} = \Big[\int_0^1 f\psi_\lambda - w'\psi_\lambda'\,dx\Big]_{\lambda\in\vee}$$
within a tolerance $\delta\|u - w\|_{H^1(0,1)}$. Assuming $\Psi \subset H^2(0,1)$,
$$\mathbf{f} - \mathbf{B}\mathbf{w} = \Big[\int_0^1 (f + w'')\psi_\lambda\,dx\Big]_{\lambda\in\vee},$$
and $f + w''$ is piecewise polynomial w.r.t. some mesh¹, with $\|\cdot\|_{H^{-1}(0,1)}$-norm $\lesssim \|u - w\|_{H^1(0,1)}$. By dropping all $\psi_\lambda$ whose levels exceed the level of the local mesh by a fixed increment, the resulting linear approximate residual evaluation scheme meets the accuracy requirement.

By putting a tree constraint on the wavelet index sets, the multi- to locally single-scale transforms are of linear complexity, and so the approximate residual evaluation takes $\mathcal{O}(\#\operatorname{supp}\mathbf{w})$ operations, thus meeting the cost condition.

¹modulo data oscillation

SLIDE 11

Verification of the cost condition

  • The scheme applies whenever the operator applies to any wavelet in a mild sense, and it also applies to semi-linear PDEs. One vanishing moment suffices.
  • Instead of applying $C^1$ wavelets, we advocate writing a second order PDE as a (well-posed) first order system least squares (FOSLS) problem. This is always possible.

SLIDE 12

FOSLS for semi-linear 2nd order PDEs

Let $F : X \supset \operatorname{dom}(F) \to Y'$. For some separable Hilbert space $P$, let $F = F_0 + F_1F_2$, where $F_2 \in \mathcal{L}(X, P)$, $F_1 \in \mathcal{L}(P, Y')$. Then
$$F(u) = 0 \iff H(u, \theta) := (F_0(u) + F_1\theta,\; \theta - F_2u) = 0.$$

Thm 2 ([RS16]). $\|DF(u)v\|_{Y'} \eqsim \|v\|_X \implies \|DH(u,\theta)(v,\eta)\|_{Y'\times P} \eqsim \|(v,\eta)\|_{X\times P}$.

So the solution of $F(u) = 0$ can be found as
$$\operatorname*{argmin}_{(u,\theta)\in\operatorname{dom}(F)\times P} \tfrac12\|H(u,\theta)\|^2_{Y'\times P},$$
i.e., as the solution of $DH(u,\theta)^\top H(u,\theta) = 0$.

SLIDE 13

Application: Semi-linear elliptic equation

$$-\triangle u + N(u) = f \ \text{ on } \Omega \subset \mathbb{R}^n, \qquad u = 0 \ \text{ at } \partial\Omega.$$
($-\triangle u$ may read as $-\nabla\cdot A\nabla u + b\cdot\nabla u + cu$; other (inhomogeneous) boundary conditions can be used.)

Let the standard variational formulation with $X = Y = H^1_0(\Omega)$ be well-posed (i.e., a solution $u$ exists, and the linearized operator at $u$ is in $\mathcal{L}\mathrm{is}(X, Y')$). Take $P = L_2(\Omega)^n$, $F_2u := \nabla u$, $(F_1\theta)(v) := \int_\Omega \theta\cdot\nabla v\,dx$. Well-posed FOSLS:
$$\operatorname*{argmin}_{(u,\theta)\in X\times P} \tfrac12\Big\{\Big\|v \mapsto \int_\Omega \theta\cdot\nabla v + (N(u) - f)v\,dx\Big\|^2_{Y'} + \|\theta - \nabla u\|^2_P\Big\}.$$

SLIDE 14

Application: Semi-linear elliptic equation

Equip $X$, $Y$, $P$ with wavelet Riesz bases $\Psi^X$, $\Psi^Y$, $\Psi^P$. To solve
$$0 = \begin{pmatrix} \big\langle \nabla\Psi^X,\, \nabla u - \theta\big\rangle_{L_2(\Omega)^n} \\ -\big\langle \Psi^P,\, \nabla u - \theta\big\rangle_{L_2(\Omega)^n} \end{pmatrix} + \begin{pmatrix} \big\langle DN(u)\Psi^X,\, \Psi^Y\big\rangle_{L_2(\Omega)} \\ \big\langle \Psi^P,\, \nabla\Psi^Y\big\rangle_{L_2(\Omega)^n} \end{pmatrix}\Big(\big\langle \Psi^Y,\, N(u) - f\big\rangle_{L_2(\Omega)} + \big\langle \nabla\Psi^Y,\, \theta\big\rangle_{L_2(\Omega)^n}\Big),$$
which, under the additional condition $\Psi^P \subset H(\operatorname{div};\Omega)$ and with $u$, $\theta$ from the span of the wavelets, reads as
$$0 = \begin{pmatrix} \big\langle \nabla\Psi^X,\, \nabla u - \theta\big\rangle_{L_2(\Omega)^n} \\ -\big\langle \Psi^P,\, \nabla u - \theta\big\rangle_{L_2(\Omega)^n} \end{pmatrix} + \begin{pmatrix} \big\langle DN(u)\Psi^X,\, \Psi^Y\big\rangle_{L_2(\Omega)} \\ \big\langle \Psi^P,\, \nabla\Psi^Y\big\rangle_{L_2(\Omega)^n} \end{pmatrix}\big\langle \Psi^Y,\, N(u) - f - \operatorname{div}\theta\big\rangle_{L_2(\Omega)}.$$

SLIDE 15

Application: Semi-linear elliptic equation

Numerical experiment

(Nikolaos Rekatsinas (KdVI, Amsterdam))

$-\triangle u + u^3 = f$ on $\Omega \subset \mathbb{R}^2$, $u = 0$ at $\partial\Omega$. L-shaped domain. FOSLS. Finite element wavelets: continuous piecewise linears for $\Psi^Y$ and $\Psi^P$, continuous piecewise quadratics for $\Psi^X$.

Figure 1: Approximate solution of $-\triangle u + u^3 = 1$ on the L-shaped domain, $u = 0$ on the boundary, as a linear combination of 202 wavelets.

SLIDE 16

Application: Semi-linear elliptic equation

Figure 2: Norm of the residual vector vs. number of wavelets, and the optimal slope $(3-1)/2 = 1$.

SLIDE 17

Application: Semi-linear elliptic equation

Figure 3: Centers of the supports of the wavelets ($\# = 5447$) that were selected.

SLIDE 18

Application: Stationary Navier-Stokes

For $n \in \{2,3\}$,
$$\begin{cases} -\triangle\vec{u} + \nu^{-3/2}(\vec{u}\cdot\nabla)\vec{u} + \nabla p = \vec{f} & \text{on } \Omega,\\ \operatorname{div}\vec{u} = 0 & \text{on } \Omega,\\ \vec{u} = 0 & \text{on } \partial\Omega.\end{cases}$$
The standard variational formulation is well-posed with $X = Y = H^1_0(\Omega)^n \times L_2(\Omega)/\mathbb{R}$. On $H^1_0(\Omega)^n \times H^1_0(\Omega)^n$,
$$\int_\Omega \nabla\vec{u} : \nabla\vec{v} - \operatorname{div}\vec{u}\,\operatorname{div}\vec{v} - \operatorname{curl}\vec{u}\cdot\operatorname{curl}\vec{v}\,dx = 0.$$
Take $P = L_2(\Omega)^{2n-3}$ (recall $F = F_0 + F_1F_2$), $F_2(\vec{u}, p) = \operatorname{curl}\vec{u}$, $(F_1\vec{\omega})(\vec{v}, q) = \int_\Omega \vec{\omega}\cdot\operatorname{curl}\vec{v}\,dx$. Well-posed FOSLS:
$$\operatorname*{argmin}_{(\vec{u},p,\vec{\omega})\in X\times P} \tfrac12\Big\{\Big\|\vec{v} \mapsto \int_\Omega \vec{\omega}\cdot\operatorname{curl}\vec{v} - p\operatorname{div}\vec{v} + \nu^{-3/2}(\vec{u}\cdot\nabla)\vec{u}\cdot\vec{v} - \vec{f}\cdot\vec{v}\,dx\Big\|^2_{H^{-1}(\Omega)^n} + \|\operatorname{div}\vec{u}\|^2_{L_2(\Omega)} + \|\vec{\omega} - \operatorname{curl}\vec{u}\|^2_{L_2(\Omega)^{2n-3}}\Big\}.$$
(This gives an effortless proof of [CMM95, Thm. 2.1].)

SLIDE 19

Application: Stationary Navier-Stokes

Equip each of $* \in \{H^1_0(\Omega)^n, L_2(\Omega)/\mathbb{R}, L_2(\Omega)^{2n-3}\}$ with a wavelet Riesz basis $\Psi^*$, and apply awgm to
$$0 = \begin{pmatrix}\big\langle \operatorname{div}\Psi^{(H^1_0)^n}, \operatorname{div}\vec{u}\big\rangle_{L_2(\Omega)}\\ 0\\ 0\end{pmatrix} + \begin{pmatrix}\big\langle \operatorname{curl}\Psi^{(H^1_0)^n}, \operatorname{curl}\vec{u} - \vec{\omega}\big\rangle_{L_2(\Omega)^{2n-3}}\\ 0\\ \big\langle \Psi^{L_2^{2n-3}}, \vec{\omega} - \operatorname{curl}\vec{u}\big\rangle_{L_2(\Omega)^{2n-3}}\end{pmatrix} + \begin{pmatrix}\big\langle \nu^{-3/2}\big((\vec{u}\cdot\nabla)\Psi^{(H^1_0)^n} + (\Psi^{(H^1_0)^n}\cdot\nabla)\vec{u}\big), \Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)^n}\\ -\big\langle \Psi^{L_2/\mathbb{R}}, \operatorname{div}\Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)}\\ \big\langle \Psi^{L_2^{2n-3}}, \operatorname{curl}\Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)^{2n-3}}\end{pmatrix}\Big(\big\langle \Psi^{(H^1_0)^n}, \nu^{-3/2}(\vec{u}\cdot\nabla)\vec{u} - \vec{f}\big\rangle_{L_2(\Omega)^n} + \big\langle \operatorname{curl}\Psi^{(H^1_0)^n}, \vec{\omega}\big\rangle_{L_2(\Omega)^{2n-3}} - \big\langle \operatorname{div}\Psi^{(H^1_0)^n}, p\big\rangle_{L_2(\Omega)}\Big),$$
which, under the conditions $\Psi^{L_2/\mathbb{R}} \subset H^1(\Omega)$, $\Psi^{L_2^{2n-3}} \subset H(\operatorname{curl};\Omega)$, and with $\vec{u}$, $\vec{\omega}$, $p$ from the span of the wavelets, reads as

SLIDE 20

Application: Stationary Navier-Stokes

$$\begin{pmatrix}\big\langle \operatorname{div}\Psi^{(H^1_0)^n}, \operatorname{div}\vec{u}\big\rangle_{L_2(\Omega)}\\ 0\\ 0\end{pmatrix} + \begin{pmatrix}\big\langle \operatorname{curl}\Psi^{(H^1_0)^n}, \operatorname{curl}\vec{u} - \vec{\omega}\big\rangle_{L_2(\Omega)^{2n-3}}\\ 0\\ \big\langle \Psi^{L_2^{2n-3}}, \vec{\omega} - \operatorname{curl}\vec{u}\big\rangle_{L_2(\Omega)^{2n-3}}\end{pmatrix} + \begin{pmatrix}\big\langle \nu^{-3/2}\big((\vec{u}\cdot\nabla)\Psi^{(H^1_0)^n} + (\Psi^{(H^1_0)^n}\cdot\nabla)\vec{u}\big), \Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)^n}\\ -\big\langle \Psi^{L_2/\mathbb{R}}, \operatorname{div}\Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)}\\ \big\langle \Psi^{L_2^{2n-3}}, \operatorname{curl}\Psi^{(H^1_0)^n}\big\rangle_{L_2(\Omega)^{2n-3}}\end{pmatrix}\big\langle \Psi^{(H^1_0)^n}, \nu^{-3/2}(\vec{u}\cdot\nabla)\vec{u} - \vec{f} + \operatorname{curl}'\vec{\omega} + \nabla p\big\rangle_{L_2(\Omega)^n}.$$

SLIDE 21

Application: Parabolic problems

With $\Omega \subset \mathbb{R}^n$, $I := (0,T)$,
$$\begin{cases} \frac{\partial u}{\partial t} - \nabla\cdot A\nabla u + Nu = g & \text{on } I\times\Omega,\\ u = 0 & \text{on } I\times\partial\Omega,\\ u(0,\cdot) = h & \text{on } \Omega,\end{cases}\qquad(1)$$
where $\xi^\top A(\cdot)\xi \eqsim \|\xi\|^2$ and $N$ is a bounded first order PDO. With
$$X := L_2(I; H^1_0(\Omega)) \cap H^1(I; H^{-1}(\Omega)),\qquad Y = Y_1\times Y_2 := L_2(I; H^1_0(\Omega))\times L_2(\Omega),$$
$$(Bu)(v) := \int_I\!\int_\Omega \Big(\frac{\partial u}{\partial t} + Nu\Big)v_1 + A\nabla u\cdot\nabla v_1\,dx\,dt + \int_\Omega u(0,\cdot)v_2\,dx$$
satisfies $B \in \mathcal{L}\mathrm{is}(X, Y')$. With $P := L_2(I; L_2(\Omega)^n)$: $F_2u = A\nabla u$, $(F_1\vec{p})(v) = \int_I\!\int_\Omega \vec{p}\cdot\nabla v_1\,dx\,dt$.

SLIDE 22

Application: Parabolic problems

Well-posed FOSLS (seems new):
$$\operatorname*{argmin}_{(u,\vec{p})\in X\times P} \tfrac12\Big\{\Big\|v_1 \mapsto \int_I\!\int_\Omega \Big(\frac{\partial u}{\partial t} + Nu - g\Big)v_1 + \vec{p}\cdot\nabla v_1\,dx\,dt\Big\|^2_{Y_1'} + \|u(0,\cdot) - h\|^2_{L_2(\Omega)} + \|\vec{p} - A\nabla u\|^2_P\Big\}.$$

Equip each of $* \in \{X, Y_1, P\}$ with a wavelet Riesz basis $\Psi^*$, and apply awgm to
$$0 = \begin{pmatrix}\big\langle (\tfrac{\partial}{\partial t} + N)\Psi^X, \Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)}\\ \big\langle \Psi^P, \nabla\Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix}\Big(\big\langle \Psi^{Y_1}, \tfrac{\partial u}{\partial t} + Nu - g\big\rangle_{L_2(I\times\Omega)} + \big\langle \nabla\Psi^{Y_1}, \vec{p}\big\rangle_{L_2(I\times\Omega)^n}\Big) + \begin{pmatrix}\big\langle \Psi^X(0,\cdot), u(0,\cdot) - h\big\rangle_{L_2(\Omega)}\\ 0\end{pmatrix} + \begin{pmatrix}\big\langle -A\nabla\Psi^X, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\\ \big\langle \Psi^P, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix},$$
which, under the additional condition $\Psi^P \subset L_2(I; H(\operatorname{div};\Omega))$, and with

SLIDE 23

Application: Parabolic problems

$u$, $\vec{p}$ from the span of the wavelets, reads as
$$\begin{pmatrix}\big\langle (\tfrac{\partial}{\partial t} + N)\Psi^X, \Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)}\\ \big\langle \Psi^P, \nabla\Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix}\big\langle \Psi^{Y_1}, \tfrac{\partial u}{\partial t} + Nu - g - \operatorname{div}\vec{p}\big\rangle_{L_2(I\times\Omega)} + \begin{pmatrix}\big\langle \Psi^X(0,\cdot), u(0,\cdot) - h\big\rangle_{L_2(\Omega)}\\ 0\end{pmatrix} + \begin{pmatrix}\big\langle -A\nabla\Psi^X, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\\ \big\langle \Psi^P, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix}.$$
SLIDE 24

Application: Parabolic problems

Tensor product bases

Bases are needed for $X = L_2(I; H^1_0(\Omega)) \cap H^1(I; H^{-1}(\Omega))$, $Y = L_2(I; H^1_0(\Omega)) \times L_2(\Omega)$, $P = L_2(I; L_2(\Omega)^n)$. Let, properly scaled, $\Sigma$ be a (wavelet) Riesz basis for $H^1_0(\Omega)$ and $H^{-1}(\Omega)$, and, properly scaled, $\Theta$ a (wavelet) Riesz basis for $L_2(I)$ and $H^1(I)$. Then, properly scaled, $\Theta\otimes\Sigma$ is a Riesz basis for $X$ (tensor product or anisotropic wavelets). The bases for $Y_1$ and $P$ are constructed similarly.

Advantage: in any case for sufficiently smooth solutions, convergence rates are possible as for the stationary problem.

The index set is $\vee_X = \vee_\Theta\times\vee_\Sigma$. We restrict to subsets that satisfy a double-tree constraint.

For elliptic problems, anisotropic regularity results ([CDN12]) combined with approximation results ([DS10]) show that on 2- and 3-dimensional polytopes, for sufficiently smooth forcing, best (piecewise) multi-tree tensor approximation yields optimal rates as for 1D problems.

We conjecture that with sufficiently smooth forcing, best double-tree tensor approximation yields rates as for the stationary problem, i.e., that the singularities induced by the bottom corners of the space-time cylinder are sufficiently smooth.

SLIDE 25

Application: Parabolic problems

Residual evaluation with tensor product bases

Returning to the residual evaluation
$$\begin{pmatrix}\big\langle (\tfrac{\partial}{\partial t} + N)\Psi^X, \Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)}\\ \big\langle \Psi^P, \nabla\Psi^{Y_1}\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix}\big\langle \Psi^{Y_1}, \tfrac{\partial u}{\partial t} + Nu - g - \operatorname{div}\vec{p}\big\rangle_{L_2(I\times\Omega)} + \begin{pmatrix}\big\langle \Psi^X(0,\cdot), u(0,\cdot) - h\big\rangle_{L_2(\Omega)}\\ 0\end{pmatrix} + \begin{pmatrix}\big\langle -A\nabla\Psi^X, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\\ \big\langle \Psi^P, \vec{p} - A\nabla u\big\rangle_{L_2(I\times\Omega)^n}\end{pmatrix}:$$
for $u$ and $\vec{p}$ from the span of tensor product wavelets with indices in double-trees $\Lambda^X$ and $\Lambda^P$, the restriction of this residual vector to double-trees $\bar\Lambda^X$ and $\bar\Lambda^P$ with $\#(\bar\Lambda^X\cup\bar\Lambda^P) \lesssim \#(\Lambda^X\cup\Lambda^P)$ achieves a relative tolerance $\delta$. How to do this evaluation in linear complexity? A multi- to locally single-scale transform, followed by applying the 'stiffness' matrix in single scale, followed by the transpose of the multi- to locally single-scale transform, doesn't work.

SLIDE 26

Application: Parabolic problems

Unidirectional scheme

Let $\Lambda$ be a 'sparse grid', and suppose we want to apply $R_\Lambda(B\otimes A)I_\Lambda$.

[BZ96]: Write $A = L + U$, where $L$ transfers information only from finer to coarser levels and $U$ only from coarser to finer ones. Then
$$R_\Lambda(B\otimes A)I_\Lambda = R_\Lambda(B\otimes\mathrm{Id})I_\Lambda\,R_\Lambda(\mathrm{Id}\otimes L)I_\Lambda + R_\Lambda(\mathrm{Id}\otimes U)I_\Lambda\,R_\Lambda(B\otimes\mathrm{Id})I_\Lambda.$$
This can be applied in linear complexity when $A$ and $B$ are sparse in single-scale coordinates. Generalization to multi-trees in [KS14].
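The gain from the tensor product structure can already be seen without the sparse-grid restriction (my sketch under made-up sizes; the actual unidirectional scheme additionally keeps all intermediates inside $\Lambda$):

```python
import numpy as np

# Sketch: applying a tensor-product operator B (x) A without ever forming
# the Kronecker matrix -- the basic idea behind unidirectional application.
rng = np.random.default_rng(2)
m, n = 30, 40
B = rng.standard_normal((m, m))   # operator acting in the time direction
A = rng.standard_normal((n, n))   # operator acting in the space direction
V = rng.standard_normal((m, n))   # coefficients, viewed as an m x n array

# Naive: O((mn)^2) work after forming the Kronecker product.
y_naive = np.kron(B, A) @ V.reshape(-1)

# Structured: two passes of small matrix products, O(mn(m + n)) work.
# First apply A in the space direction, then B in the time direction.
y_fast = (B @ (V @ A.T)).reshape(-1)

assert np.allclose(y_naive, y_fast)
```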

SLIDE 27

Numerical results: heat equation

(Nabi Chegini (Univ. of Tafresh, Iran)). Heat equation in one spatial dimension. No FOSLS. Special quartic wavelets that yield a truly sparse stiffness matrix.

Figure 4: Right-hand side $g = 1$ and initial condition $u_0 = 0$. $\|\mathbf{B}\mathbf{u}_\varepsilon - \mathbf{f}\|/\|\mathbf{f}\|$ vs. $N = \#\operatorname{supp}\mathbf{u}_\varepsilon$ for the awgm (solid), full-grid (dashed) and sparse-grid method (dashed-dotted). The dotted line is a multiple of $N^{-5}(\log N)^{5.5}$.

SLIDE 28

Numerical results: heat equation

Figure 5: Heat equation in $n = 1$ spatial dimension, right-hand side $g = 1$ and initial condition $u_0 = 1$. $\|\mathbf{B}\mathbf{u}_\varepsilon - \mathbf{f}\|/\|\mathbf{f}\|$ vs. $N = \#\operatorname{supp}\mathbf{u}_\varepsilon$ for the awgm (solid). The dotted line is a multiple of $N^{-5}(\log N)^{5.5}$.

SLIDE 29

Numerical results: heat equation

Figure 6: Heat equation in $n = 1$ spatial dimension and right-hand side $g = 1$. Centers of the supports of the wavelets selected by the awgm. Left: $u_0 = 0$ and $\#\mathbf{u}_\varepsilon = 13420$. Right: $u_0 = 1$ and $\#\mathbf{u}_\varepsilon = 13917$. A zoom-in near $t = 0$ is given in the bottom row.

SLIDE 30

Instationary (N)SE

(With Ch. Schwab)
$$\begin{cases} \frac{\partial u}{\partial t} - \nu\Delta u + \nabla p = g & \text{on } I\times\Omega,\\ \operatorname{div}u = h & \text{on } I\times\Omega,\\ u = 0 & \text{on } I\times\partial\Omega,\\ u(0,\cdot) = 0 & \text{on } \Omega,\\ \int_\Omega p\,dx = 0.\end{cases}\qquad(2)$$
For $h = 0$ this can be reduced to a parabolic problem for the velocities, but then the arising spaces will be spaces of divergence-free functions. We enforce the incompressibility constraint via a Lagrange multiplier: saddle point formulation.

Space-time variational form: with
$$c(u,v) := \int_I\!\int_\Omega \frac{\partial u}{\partial t}\cdot v + \nu\nabla u : \nabla v\,dx\,dt,\quad d(p,v) := -\int_I\!\int_\Omega p\operatorname{div}v\,dx\,dt,\quad f(v,q) := \int_I\!\int_\Omega g\cdot v + h\,q\,dx\,dt,$$
find $(u,p)$ in some suitable space such that
$$(S(u,p))(v,q) := c(u,v) + d(p,v) + d(q,u) = f(v,q)$$
for all $(v,q)$ from another suitable space.

SLIDE 31

(N)SE

For $\delta \in \{0,T\}$, define
$$\breve{H}^s_{0,\{\delta\}}(I) := [L_2(I), H^1_{0,\{\delta\}}(I)]_s,\qquad H^s(\Omega) := [L_2(\Omega), H^2(\Omega)\cap H^1_0(\Omega)]_{s/2},\qquad \bar{H}^s(\Omega) := [(H^1(\Omega)/\mathbb{R})', H^1(\Omega)/\mathbb{R}]_{(s+1)/2},$$
$$U^s_\delta := L_2(I; H^{2s}(\Omega)^n) \cap \breve{H}^s_{0,\{\delta\}}(I; L_2(\Omega)^n),\qquad P^s_\delta := \big(L_2(I; \bar{H}^{2s-1}(\Omega)') \cap \breve{H}^{1-s}_{0,\{\delta\}}(I; \bar{H}^1(\Omega)')\big)'.$$

Thm 3 ([SS15]). For $\Omega \subset \mathbb{R}^n$ a bounded Lipschitz domain and $s \in (\tfrac14, \tfrac34)$, it holds that $S \in \mathcal{L}\mathrm{is}(U^s_0\times P^s_T, (U^{1-s}_T\times P^{1-s}_0)')$.

  • For $\partial\Omega \in C^2$, the result is also valid for $s \in [0,1]$. $s \in \{0,1\}$ avoids fractional Sobolev spaces, but $U^1_\delta$ involves $H^2(\Omega)$.
  • For $s \in (\tfrac14, \tfrac34)$, all arising spaces can be 'conveniently' equipped with wavelet Riesz bases.
  • This generalizes to the NSE for $n = 2$; for $n = 3$ we need '$s$' $> \tfrac34$, which requires a smooth $\partial\Omega$ or convex domains, and $C^1$ wavelets.
  • Well-posed FOSLS.

SLIDE 32

(N)SE

Main ingredients of the proof of Thm 3 (boundedness of $S^{-1}$):

  • $[U^0_\delta, U^1_\delta]_s \simeq U^s_\delta$, $[P^0_\delta, P^1_\delta]_s \simeq P^s_\delta$ ($s \in [0,1]$).
  • The right-inverse $\operatorname{div}^+$ from [Bog79] satisfies $\operatorname{div}^+ \in \mathcal{L}(\bar{H}^{-1}(\Omega), L_2(\Omega)^n)$ and, for $s \in [0, \tfrac34)$, $\operatorname{div}^+ \in \mathcal{L}(\bar{H}^{2s-1}(\Omega), H^{2s}(\Omega)^n)$, and so $\mathrm{Id}\otimes\operatorname{div}^+ \in \mathcal{L}((P^{1-s}_\delta)', U^s_\delta)$, so $\mathrm{Id}\otimes\operatorname{div} \in \mathcal{L}(U^s_\delta, (P^{1-s}_\delta)')$ is surjective, i.e.,
$$\inf_{0\ne q\in P^{1-s}_\delta}\ \sup_{0\ne u\in U^s_\delta}\ \frac{d(u,q)}{\|u\|_{U^s_\delta}\,\|q\|_{P^{1-s}_\delta}} > 0.$$
It remains to show that $(Cu)(v) := c(u,v)$ is boundedly invertible between $\{u\in U^s_0 : d(P^{1-s}_0, u) = 0\}$ and $\big(\{u\in U^{1-s}_T : d(P^s_T, u) = 0\}\big)'$.
  • Using $\operatorname{div}^+$ again, the first space (the second is similar) satisfies
$$\simeq L_2(I; H^{2s}(\operatorname{div}0;\Omega)) \cap \breve{H}^s_{0,\{0\}}(I; H_0(\operatorname{div}0;\Omega)) \simeq L_2(I; [H_0(\operatorname{div}0;\Omega), D(A)]_s) \cap \breve{H}^s_{0,\{0\}}(I; H_0(\operatorname{div}0;\Omega)),$$
using elliptic regularity of the stationary Stokes operator $A$ on divergence-free functions.
  • The proof is completed by maximal regularity results (e.g. [PS01]):
$$C \in \mathcal{L}\mathrm{is}\big(L_2(I; D(A))\cap H^1_{0,\{\alpha\}}(I; H_0(\operatorname{div}0;\Omega)),\ L_2(I; H_0(\operatorname{div}0;\Omega))\big),$$
$$C \in \mathcal{L}\mathrm{is}\big(L_2(I; H_0(\operatorname{div}0;\Omega)),\ \big(L_2(I; D(A))\cap H^1_{0,\{\beta\}}(I; H_0(\operatorname{div}0;\Omega))\big)'\big),$$
and interpolation.

SLIDE 33

Conclusions

  • The adaptive wavelet method solves general well-posed operator equations with the best possible rate in linear complexity.
  • Besides optimal 'preconditioning', the main advantage is that the 'efficient and reliable' a posteriori estimator, being the residual of the operator equation in wavelet coordinates, is not restricted to elliptic problems.
  • Tensor product wavelets for time-dependent problems give a complexity reduction.
  • Well-posedness of space-time variational formulations of evolution problems is not only of interest for wavelet methods.

Thanks for your patience/attention!

SLIDE 34

References

[Bog79] M. E. Bogovskiĭ. Solution of the first boundary value problem for an equation of continuity of an incompressible medium. Dokl. Akad. Nauk SSSR, 248(5):1037–1040, 1979.

[BZ96] R. Balder and Ch. Zenger. The solution of multidimensional real Helmholtz equations on sparse grids. SIAM J. Sci. Comput., 17(3):631–646, 1996.

[CDD01] A. Cohen, W. Dahmen, and R. DeVore. Adaptive wavelet methods for elliptic operator equations: Convergence rates. Math. Comp., 70:27–75, 2001.

[CDD03] A. Cohen, W. Dahmen, and R. DeVore. Sparse evaluation of compositions of functions using multiscale expansions. SIAM J. Math. Anal., 35(2):279–303, 2003.

[CDN12] M. Costabel, M. Dauge, and S. Nicaise. Analytic regularity for linear elliptic systems in polygons and polyhedra. Math. Models Methods Appl. Sci., 22(8):1250015, 2012.

[CMM95] Z. Cai, T. A. Manteuffel, and S. F. McCormick. First-order system least squares for velocity-vorticity-pressure form of the Stokes equations, with application to linear elasticity. Electron. Trans. Numer. Anal., 3:150–159, 1995.

[CS15] N. G. Chegini and R. P. Stevenson. An adaptive wavelet method for semi-linear first order system least squares. Comput. Math. Appl., August 2015. DOI: 10.1515/cmam-2015-0023.

[DS10] M. Dauge and R. P. Stevenson. Sparse tensor product wavelet approximation of singular functions. SIAM J. Math. Anal., 42(5):2203–2228, 2010.

[GHS07] T. Gantumur, H. Harbrecht, and R. P. Stevenson. An optimal adaptive wavelet method without coarsening of the iterands. Math. Comp., 76:615–629, 2007.

[KS14] S. Kestler and R. P. Stevenson. Fast evaluation of system matrices w.r.t. multi-tree collections of tensor product refinable basis functions. J. Comput. Appl. Math., 260:103–116, 2014.

[PS01] J. Prüss and R. Schnaubelt. Solvability and maximal regularity of parabolic evolution equations with coefficients continuous in time. J. Math. Anal. Appl., 256(2):405–430, 2001.

[RS16] N. Rekatsinas and R. Stevenson. Adaptive wavelet methods for first order least squares. Technical report, Korteweg-de Vries Institute, 2016. In preparation.

[SS09] Ch. Schwab and R. P. Stevenson. A space-time adaptive wavelet method for parabolic evolution problems. Math. Comp., 78:1293–1318, 2009.

[SS15] Ch. Schwab and R. P. Stevenson. Fractional space-time variational formulations of (Navier-)Stokes equations. Technical report, December 2015.

[Ste14] R. P. Stevenson. Adaptive wavelet methods for linear and nonlinear least-squares problems. Found. Comput. Math., 14(2):237–283, 2014.

[XZ03] Y. Xu and Q. Zou. Adaptive wavelet methods for elliptic operator equations with nonlinear terms. Adv. Comput. Math., 19(1-3):99–146, 2003. Challenges in computational mathematics (Pohang, 2001).