SLIDE 1

A priori and a posteriori analyses of the DPG method

Jay Gopalakrishnan

Portland State University

ICERM Workshop on Robust Discretization and Fast Solvers for Computable Multi-Physics Models, Brown University, May 2013

Thanks: AFOSR, NSF

Jay Gopalakrishnan 1/38

SLIDE 2

Contents

Principal collaborator in DPG research: Leszek Demkowicz.

◮ Three avenues to DPG methods
◮ A priori error analysis
◮ A posteriori error analysis
◮ Fast solvers
◮ Examples

SLIDE 3

Three avenues to DPG methods

DPG methods can be reached as:
◮ Petrov-Galerkin with optimal test space
◮ Least-squares Galerkin method
◮ Mixed Galerkin method

SLIDE 4

“Petrov-Galerkin” schemes (PG)

PG schemes are distinguished by different trial and test (Hilbert) spaces.

The problem: P.D.E. + boundary conditions.
↓
Variational form: Find x in a trial space X satisfying
  b(x, y) = ℓ(y)   for all y in a test space Y.
↓
Discretization: Find x_h in a discrete trial space X_h ⊂ X satisfying
  b(x_h, y_h) = ℓ(y_h)   for all y_h in a discrete test space Y_h ⊂ Y.

For PG schemes, X_h ≠ Y_h in general.

SLIDE 5

Elements of theory

Variational formulation: Exact inf-sup condition
  C ‖x‖_X ≤ sup_{y∈Y} |b(x, y)| / ‖y‖_Y
+ a uniqueness condition ⟹ wellposedness.

Babuška-Brezzi theory: Discrete inf-sup condition
  C ‖x_h‖_X ≤ sup_{y_h∈Y_h} |b(x_h, y_h)| / ‖y_h‖_Y
⟹ ‖x − x_h‖_X ≤ C inf_{w_h∈X_h} ‖x − w_h‖_X.

Difficulty: Exact inf-sup condition ⇏ Discrete inf-sup condition.

SLIDE 6

Elements of theory (continued; same setup as SLIDE 5)

Difficulty: the exact inf-sup condition does not imply the discrete inf-sup condition.

Is there a way to find a stable test space for any given trial space (thus giving a stable method automatically)?

SLIDE 7

The ideal method

Pick any X_h ⊆ X. The ideal DPG method finds x_h ∈ X_h such that
  b(x_h, y) = ℓ(y),  ∀y ∈ Y_h^opt := T(X_h),
where T : X → Y is defined by
  (Tw, y)_Y = b(w, y),  ∀w ∈ X, y ∈ Y.   [Demkowicz+G 2011]

SLIDE 8

The ideal method — rationale

Q: For a given x, which function y maximizes |b(x, y)| / ‖y‖_Y ?
A: y = Tx is the maximizer. ← Optimal test function.

DPG idea: If the discrete test space contains the optimal test functions, then the exact inf-sup condition ⟹ the discrete inf-sup condition.

SLIDE 9

The ideal method — quasioptimality

[A.1] {w ∈ X : b(w, y) = 0 ∀y ∈ Y} = {0}.
[A.2] ∃ C1, C2 > 0 such that
  C1 ‖y‖_Y ≤ sup_{w∈X} |b(w, y)| / ‖w‖_X ≤ C2 ‖y‖_Y.

Theorem (DPG quasioptimality). [A.1–A.2] ⟹
  ‖x − x_h‖_X ≤ (C2/C1) inf_{w_h∈X_h} ‖x − w_h‖_X.

SLIDE 10

The ideal method — computability

But... can we really compute Tx? For a few problems, Tx can be calculated in closed form. When Tx cannot be hand calculated, we overcome two difficulties:

◮ Redesign the formulation so that T is local (by hybridization).
◮ Approximate T by a computable (finite-rank) T^r.
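Once T is approximated by a finite-rank map, the whole scheme reduces to a few lines of linear algebra. The sketch below is illustrative and not from the talk: the bases, the matrices, and the helper `dpg_solve` are assumptions. With a Gram matrix G of the Y inner product on a test basis {y_i} and a form matrix B[i,j] = b(x_j, y_i), the trial-to-test map is G⁻¹B and the DPG system is BᵀG⁻¹B x = BᵀG⁻¹ℓ.

```python
import numpy as np

def dpg_solve(G, B, l):
    """Solve the discrete DPG system: Gram matrix G, form matrix B, load l."""
    Ginv_B = np.linalg.solve(G, B)   # columns = optimal test functions T x_j
    A = B.T @ Ginv_B                 # DPG stiffness matrix (symmetric positive definite)
    return np.linalg.solve(A, Ginv_B.T @ l)

# Toy check: when Y^opt_h = X_h and G = B (the standard-FEM situation of
# SLIDE 12), DPG reduces to the ordinary Galerkin solve B x = l.
B = np.array([[2.0, -1.0], [-1.0, 2.0]])
l = np.array([1.0, 0.0])
x = dpg_solve(B.copy(), B, l)
assert np.allclose(B @ x, l)
```

The same routine is reused, with different G and B, for every concrete example below.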

SLIDE 11

Notation: the ideal DPG method = iDPG method.

SLIDE 12

Trivial Example 1: standard FEM is an iDPG method

Problem: Given F ∈ H^{-1}(Ω), find u ∈ H^1_0(Ω) solving
  ∫ ∇u · ∇v = F(v),  ∀v ∈ H^1_0(Ω).

(Recall the iDPG definition of SLIDE 7.)

SLIDE 13

Trivial Example 1 (continued)

Set X = Y = H^1_0(Ω) and (v, y)_Y = ∫ ∇v · ∇y.

SLIDE 14

Trivial Example 1 (continued)

Then (·,·)_Y = b(·,·) ⟹ T = identity, so Y_h^opt = X_h: the iDPG method coincides with the standard FEM.

SLIDE 15

Next

Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method
◮ Mixed Galerkin method

A priori error analysis
◮ Ideal DPG method ✦
◮ Practical DPG method

A posteriori error analysis
Fast solvers
Examples
◮ Example 1 (Standard FEM) ✦
◮ Example 2 (L2-based least-squares)
◮ Example 3 (An ODE)
◮ Example 4 (Diffusion)
◮ Example 5 (Stokes)

SLIDE 16

Trivial Example 2: the L2-based least-squares method is an iDPG method

Problem: Given f ∈ L^2(Ω) and a linear continuous bijective A : X → L^2(Ω), find u ∈ X satisfying Au = f.

SLIDE 17

Trivial Example 2 (continued)

Set Y = L^2(Ω),  b(x, y) = (Ax, y)_Y,  ℓ(y) = (f, y)_Y.

SLIDE 18

Trivial Example 2 (continued)

Then (Tw, y)_Y = (Aw, y)_Y ⟹ T = A ⟹ Y_h^opt = A X_h ⟹ the iDPG equations become the normal equations:
  (A x_h, A w_h)_Y = (f, A w_h)_Y,  ∀w_h ∈ X_h.
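A finite-dimensional stand-in makes the identification with the normal equations concrete. Everything below is an illustrative assumption: take X = Y = Rⁿ with the Euclidean inner product and an invertible matrix A, so that T = A and the iDPG equations read AᵀA x = Aᵀf.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)  # well-conditioned, invertible
f = rng.normal(size=4)

x = np.linalg.solve(A.T @ A, A.T @ f)        # normal equations A^T A x = A^T f
assert np.allclose(A @ x, f)                 # with X_h = X the residual vanishes
```

With a proper subspace X_h the same normal equations return the residual minimizer over X_h instead of the exact solution.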

SLIDE 19

The least-squares avenue

DPG methods:
◮ Petrov-Galerkin with optimal test space
◮ Least-squares Galerkin method ← this avenue next
◮ Mixed Galerkin method

SLIDE 20

Definitions

Riesz map R_Y : Y → Y^*:  (R_Y y)(v) = (y, v)_Y,  ∀y, v ∈ Y.
Operator generated by the form, B : X → Y^*:  (Bx)(y) = b(x, y),  ∀x ∈ X, y ∈ Y.
Trial-to-test operator T : X → Y, defined by (Tw, y)_Y = b(w, y), ∀w ∈ X, y ∈ Y  ⟹  T = R_Y^{-1} B.
Energy norm on X:  |||z|||_X := ‖Tz‖_Y.

SLIDE 21

Residual minimization

Theorem (DPG methods are least-squares methods). The following are equivalent:
i) x_h ∈ X_h is the unique solution of the ideal DPG method.
ii) x_h is the best approximation to x from X_h in the energy norm:
  |||x − x_h|||_X = inf_{z_h∈X_h} |||x − z_h|||_X.
iii) x_h minimizes the residual:
  x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y^*}.

Proof of (i) ⟺ (ii).
  b(x − x_h, y_h) = 0 ∀y_h ∈ Y_h^opt
  ⟺ b(x − x_h, T z_h) = 0 ∀z_h ∈ X_h
  ⟺ (T(x − x_h), T z_h)_Y = 0 ∀z_h ∈ X_h.

SLIDE 22

Residual minimization (continued)

Proof of (ii) ⟺ (iii).
  |||x − z_h|||_X = ‖T(x − z_h)‖_Y = ‖R_Y^{-1} B(x − z_h)‖_Y = ‖B(x − z_h)‖_{Y^*} = ‖ℓ − B z_h‖_{Y^*}.

SLIDE 23

Example 3: An ODE — Pavlovian integration by parts, or not?

1D transport equation: u′ = f in (0, 1), u(0) = u_0 (inflow b.c.).

Variational form: Find u ∈ H^1 with u(0) = u_0 satisfying
  ∫_0^1 u′ v  [= b(u, v)]  =  ∫_0^1 f v  [= ℓ(v)],   ∀v ∈ L^2.

SLIDE 24

Example 3 (continued)

Ultra-weak form: Find u ∈ L^2 and a number û_1 ∈ R satisfying
  −∫_0^1 u v′ + û_1 v(1)  [= b((u, û_1), v)]  =  ∫_0^1 f v + u_0 v(0)  [= ℓ(v)],   ∀v ∈ H^1.

SLIDE 25

Example 3 (continued)

For the variational form, DPG gives least squares with Au = u′. For the ultra-weak form, DPG gives something new.

SLIDE 26

One-dimensional results using a spectral trial space

[Figure: convergence results.] [Click here to download FEniCS code.]
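The ultra-weak form of SLIDE 24 can be discretized end to end in a few lines. The sketch below is an illustrative stand-in for the spectral trial space (monomial bases, exact monomial integrals, f = 1, u_0 = 0 are all assumptions): trial space span{x^j, j ≤ p} plus the number û_1, test space Y^r = P_r with the H^1(0,1) inner product (v, y)_Y = ∫ v′y′ + vy.

```python
import numpy as np

p, r = 2, 3
f_mono = np.array([1.0])     # f = 1, so the exact solution is u = x, u1_hat = 1
u0 = 0.0

ip = lambda m, n: 1.0 / (m + n + 1)            # \int_0^1 x^m x^n dx

# Gram matrix of the H^1(0,1) inner product on the monomial test basis
G = np.array([[i * k * ip(i - 1, k - 1) if i and k else 0.0
               for k in range(r + 1)] for i in range(r + 1)])
G += np.array([[ip(i, k) for k in range(r + 1)] for i in range(r + 1)])

# B[i, j] = b(x^j, x^i) = -\int_0^1 x^j (x^i)';  last column: u1_hat * v(1)
B = np.zeros((r + 1, p + 2))
for i in range(r + 1):
    for j in range(p + 1):
        B[i, j] = -i * ip(i - 1, j) if i else 0.0
    B[i, p + 1] = 1.0                           # v(1) = 1 for every monomial

# l[i] = \int_0^1 f x^i + u0 * v(0)
l = np.array([sum(c * ip(m, i) for m, c in enumerate(f_mono))
              for i in range(r + 1)])
l[0] += u0                                      # v(0) = 1 only for x^0

Ginv_B = np.linalg.solve(G, B)                  # optimal test functions
x = np.linalg.solve(B.T @ Ginv_B, Ginv_B.T @ l)

# u-coefficients (0, 1, 0) and u1_hat = 1 recover the exact solution u = x
assert np.allclose(x, [0.0, 1.0, 0.0, 1.0])
```

Since the exact solution lies in the trial space and the residual can be driven to zero, the DPG minimizer reproduces it exactly.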

SLIDE 27

Next

Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method ✦
◮ Mixed Galerkin method

A priori error analysis
◮ Ideal DPG method ✦
◮ Practical DPG method

A posteriori error analysis
Fast solvers
Examples
◮ Example 1 (Standard FEM) ✦
◮ Example 2 (L2-based least-squares) ✦
◮ Example 3 (An ODE) ✦
◮ Example 4 (Diffusion)
◮ Example 5 (Stokes)

SLIDE 28

The practical method

Ideal: pick any X_h ⊆ X; find x_h ∈ X_h such that b(x_h, y) = ℓ(y), ∀y ∈ Y_h^opt := T(X_h).

Practical: pick any X_h ⊆ X; the practical DPG method finds x_h^r ∈ X_h, using a (finite-dimensional) Y^r ⊆ Y, such that
  b(x_h^r, y) = ℓ(y),  ∀y ∈ Y_h^r := T^r(X_h),
where T^r : X → Y^r is defined by (T^r w, y)_Y = b(w, y), ∀w ∈ X, y ∈ Y^r.

SLIDE 29

The practical method (continued)

Ideal:  x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y^*}.
Practical:  x_h^r = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{(Y^r)^*}.

SLIDE 30

Error analysis of the practical DPG method

[A.1] {w ∈ X : b(w, y) = 0 ∀y ∈ Y} = {0}.
[A.2] ∃ C1, C2 > 0 such that C1 ‖y‖_Y ≤ sup_{w∈X} |b(w, y)| / ‖w‖_X ≤ C2 ‖y‖_Y.
[A.3] ∃ Π : Y → Y^r and C_Π > 0 such that for all w_h ∈ X_h and y ∈ Y,
  b(w_h, y − Πy) = 0,  ‖Πy‖_Y ≤ C_Π ‖y‖_Y.

Theorem (A priori estimates for the practical DPG method [G+Qiu 2013]). [A.1–A.3] ⟹
  ‖x − x_h^r‖_X ≤ (C2 C_Π / C1) inf_{w_h∈X_h} ‖x − w_h‖_X.

SLIDE 31

The ‘D’ in ‘DPG’

For the residual minimization x_h = arg min_{z_h∈X_h} ‖ℓ − B z_h‖_{Y^*} to be feasible, the dual norm ‖·‖_{Y^*} must be easily computable!

“Negative-norm least-squares” uses multigrid or other operators spectrally equivalent to the dual norm [Bramble+Pasciak+Lazarov 1997]. DPG methods instead reformulate the problem to localize the dual-norm computation (to parallel element-by-element computations): they use a discontinuous test space Y = ∏_{K∈mesh} Y(K), whose Riesz map is invertible locally, element by element.

SLIDE 32

Example 4: The Dirichlet problem — a new weak form for the old Laplacian

Find u:  −Δu = f in Ω,  u = 0 on ∂Ω.

Let Ω_h be a mesh of Ω and K ∈ Ω_h be a mesh element. Then:
  ∫_K ∇u · ∇v − ∫_{∂K} (n · ∇u) v = ∫_K f v.

This allows the test function v ∈ Y to be in a “broken” Sobolev space Y = H^1(Ω_h) := ∏_{K∈Ω_h} H^1(K).

SLIDE 33

Example 4 (continued)

Summing over elements and treating the flux as an independent unknown q̂_n:
  Σ_{K∈Ω_h} ( ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ) = ∫ f v.

This allows the test function v ∈ Y to be in the “broken” Sobolev space Y = H^1(Ω_h) := ∏_{K∈Ω_h} H^1(K).

SLIDE 34

Functional setting for the Laplacian

Want X and Y to make B : X → Y^* a continuous bijection, i.e., the form b(x, y) = (Bx)(y) on X × Y must satisfy a uniqueness and an inf-sup condition. Set
  b((u, q̂_n), v) = Σ_{K∈Ω_h} ( ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ).
We seek u in H^1_0(Ω) and q̂_n in H^{-1/2}(∂Ω_h).

SLIDE 35

Functional setting for the Laplacian (continued)

Definition (of H^{-1/2}(∂Ω_h), the space of numerical fluxes). Define the element-by-element trace operator tr_n by
  tr_n : H(div, Ω) → ∏_{K∈Ω_h} H^{-1/2}(∂K),  (tr_n r)|_{∂K} = r · n|_{∂K},
and set H^{-1/2}(∂Ω_h) = ran(tr_n).

SLIDE 36

Functional setting for the Laplacian (continued)

Theorem. With X = H^1_0(Ω) × H^{-1/2}(∂Ω_h) and Y = H^1(Ω_h), the operator B is a continuous bijection and has a continuous inverse. [Demkowicz+G 2013]

SLIDE 37

Discrete spaces for the Laplacian

Trial subspace X_h ⊆ X ≡ H^1_0(Ω) × H^{-1/2}(∂Ω_h): approximate u by Lagrange finite elements of degree ≤ p + 1 on every K ∈ Ω_h, and q̂_n by polynomials of degree ≤ p on every mesh edge.

Test subspace Y^r ⊆ H^1(Ω_h): set, for some r ≥ 0, Y^r = {v : v|_K ∈ P_r(K), ∀K ∈ Ω_h}.

SLIDE 38

Discrete spaces for the Laplacian (continued)

(Recall the practical DPG method of SLIDE 28: find x_h^r ∈ X_h with b(x_h^r, y) = ℓ(y) for all y ∈ Y_h^r = T^r(X_h).)

SLIDE 39

Discrete spaces for the Laplacian (continued)

Computation of T^r is local. Applying (T^r w, y)_Y = b(w, y):
  (T^r(u, q̂_n), y)_{H^1(Ω_h)} = b((u, q̂_n), y),  ∀y ∈ Y^r
  ⟹ (T^r(u, q̂_n), y)_{H^1(K)} = ∫_K ∇u · ∇y − ∫_{∂K} q̂_n y,  ∀K ∈ Ω_h.

SLIDE 40

Discrete spaces for the Laplacian (continued)

To prove optimal convergence, we must choose r so that [A.3] holds.

Recall [A.3]: ∃ Π : Y → Y^r and C_Π > 0 such that for all w_h ∈ X_h and y ∈ Y, b(w_h, y − Πy) = 0 and ‖Πy‖_Y ≤ C_Π ‖y‖_Y.

SLIDE 41

Discrete spaces for the Laplacian (continued)

Theorem (Verification of [A.3]). Let Ω_h be a simplicial shape-regular finite element mesh in N space dimensions. For any p ≥ 0, whenever r ≥ p + N, there exists a continuous Π : Y → Y^r such that for all (w_h, ŝ_{n,h}) ∈ X_h,
  ∫_K ∇w_h · ∇(v − Πv) − ∫_{∂K} ŝ_{n,h} (v − Πv) = 0,  ∀K ∈ Ω_h.

SLIDE 42

Next

Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method ✦
◮ Mixed Galerkin method

A priori error analysis
◮ Ideal DPG method ✦
◮ Practical DPG method ✦

A posteriori error analysis
Fast solvers
Examples
◮ Example 1 (Standard FEM) ✦
◮ Example 2 (L2-based least-squares) ✦
◮ Example 3 (An ODE) ✦
◮ Example 4 (Diffusion) ✦
◮ Example 5 (Stokes)

SLIDE 43

Preconditioning

Abstractly,
  b(x_h^r, y) = ℓ(y)  ∀y ∈ Y_h^r = T^r(X_h)
  ⟹ b(x_h^r, T^r z_h) = ℓ(T^r z_h)  ∀z_h ∈ X_h
  ⟹ (T^r x_h^r, T^r z_h)_Y = ℓ(T^r z_h)  ∀z_h ∈ X_h.

Lemma. [A.1–A.3] ⟹ (C1/C_Π) ‖x‖_X ≤ ‖T^r x‖_Y ≤ C2 ‖x‖_X for all x ∈ X_h. This implies that any preconditioner spectrally equivalent to the (·,·)_X inner product is also a preconditioner for the practical DPG method.
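The lemma has a matrix-level reading that is easy to check numerically. In the sketch below (all matrices are illustrative stand-ins), G is the Gram matrix of the Y inner product, M the Gram matrix of the X inner product, and A = BᵀG⁻¹B the DPG stiffness matrix; spectral equivalence means the generalized eigenvalues of (M, A) lie in a fixed positive interval, so M-preconditioned CG converges at a mesh-independent rate.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 5                                          # dim Y^r > dim X_h
G = (lambda Q: Q @ Q.T + n * np.eye(n))(rng.normal(size=(n, n)))  # Y Gram, SPD
M = (lambda Q: Q @ Q.T + m * np.eye(m))(rng.normal(size=(m, m)))  # X Gram, SPD
B = rng.normal(size=(n, m))                          # form matrix, full column rank

A = B.T @ np.linalg.solve(G, B)                      # DPG stiffness matrix
lam = np.sort(np.linalg.eigvals(np.linalg.solve(M, A)).real)

# Eigenvalues of M^{-1} A are real and positive: A is SPD and M^{-1} A is
# similar to the SPD matrix M^{-1/2} A M^{-1/2}.
assert lam[0] > 0.0
```

In the lemma's notation, these eigenvalues stay within [(C1/C_Π)², C2²] uniformly in the mesh, which is the preconditioning claim.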

SLIDE 44

Example: A BDDC preconditioner

Recall b((u, q̂_n), v) = Σ_{K∈Ω_h} ( ∫_K ∇u · ∇v − ∫_{∂K} q̂_n v ), with X = H^1_0(Ω) × H^{-1/2}(∂Ω_h).

Implementation in NGSolve with Lukas Kogler & Joachim Schöberl:
1. Statically condense the stiffness matrix to u|_{∂Ω_h} and q̂_n.
2. Apply a BDDC preconditioner as follows:
   a. Do a wire-basket coarse solve.
   b. Add inverses of small blocks of u|_{∂Ω_h}-unknowns on each interface.
   c. Add inverses of small blocks of q̂_n-unknowns on each interface.

SLIDE 45

Example: A BDDC preconditioner (continued)

Numbers of preconditioned conjugate gradient iterations on a small fixed 8 × 8 mesh:

  p + 1   diagonal   BDDC
    4       142       60
    5       159       65
    6       180       77
    7       202       78
    8       209       88
    9       243       90

SLIDE 46

Next

Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method ✦
◮ Mixed Galerkin method

A priori error analysis
◮ Ideal DPG method ✦
◮ Practical DPG method ✦

A posteriori error analysis
Fast solvers ✦
Examples
◮ Example 1 (Standard FEM) ✦
◮ Example 2 (L2-based least-squares) ✦
◮ Example 3 (An ODE) ✦
◮ Example 4 (Diffusion) ✦
◮ Example 5 (Stokes)

SLIDE 47

Built-in error estimator in DPG methods

Results for Carter’s flat-plate problem (courtesy of Jesse Chan). Adaptivity shows no preasymptotics. [Figure: adaptive mesh and solution, iteration 0.]

Supersonic flow impinging on a flat plate (Ma = 3, Re = 1000). Petrov-Galerkin implementation in the Camellia package with h-adaptivity, p = 2, starting with a mesh of just two elements.

SLIDE 48

Built-in error estimator in DPG methods (continued)

[Figure: adaptive mesh and solution, iteration 5.]

SLIDE 49

Built-in error estimator in DPG methods (continued)

[Figure: adaptive mesh and solution, iteration 10.]

SLIDE 50

The mixed method approach

DPG methods:
◮ Petrov-Galerkin with optimal test space
◮ Least-squares Galerkin method
◮ Mixed Galerkin method ← this avenue next

SLIDE 51

Error representation function

Residual: ρ = ℓ − B x_h. Error representation function: ε^r = R_{Y^r}^{-1}(ℓ − B x_h). It can be practically computed by
  (ε^r, y)_Y = ℓ(y) − b(x_h, y),  ∀y ∈ Y^r.
Error estimator: η = ‖ε^r‖_Y.  [Demkowicz+G+Niemi 2012]

◮ Petrov-Galerkin: solve for ε^r by local postprocessing.
◮ Least-squares: ε^r is the Riesz inverse of the residual.
◮ Mixed method: ε^r is one of the variables.
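At the matrix level (illustrative bases as before, nothing here is from the talk), the error representation function solves G e = l − B x_h, and the estimator is η = ‖ε^r‖_Y = √(eᵀG e):

```python
import numpy as np

def error_estimator(G, B, l, xh):
    """eta = ||eps_r||_Y, where G e = l - B xh gives the coefficients of eps_r."""
    e = np.linalg.solve(G, l - B @ xh)
    return float(np.sqrt(e @ (G @ e)))

# For the exact discrete solution of a square, consistent system the residual
# l - B x vanishes, so eta = 0; any other candidate gives a positive estimate.
G = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
l = np.array([1.0, 2.0])
x = np.linalg.solve(B, l)
assert abs(error_estimator(G, B, l, x)) < 1e-12
assert error_estimator(G, B, l, np.zeros(2)) > 0.0
```

This is exactly the quantity that drives the adaptive refinements shown in the flat-plate and Laplace experiments.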

SLIDE 52

DPG as a mixed method

Theorem (Reinterpretation of DPG as a mixed method). The following are equivalent:
i) x_h ∈ X_h solves the practical DPG method.
ii) x_h ∈ X_h and ε^r ∈ Y^r solve the mixed formulation
  (ε^r, y)_Y + b(x_h, y) = ℓ(y)  ∀y ∈ Y^r,   (1a)
  b(z_h, ε^r) = 0  ∀z_h ∈ X_h.   (1b)

Proof. (i) ⟹ (ii): Eq. (1a) is just the definition of ε^r. For (1b),
  b(z_h, ε^r) = (T^r z_h, ε^r)_Y = (T^r z_h, R_{Y^r}^{-1}(ℓ − B x_h))_Y = (T^r z_h, T^r(x − x_h))_Y = b(x − x_h, T^r z_h) = 0.
(ii) ⟹ (i): Similar.

SLIDE 53

DPG as a mixed method (continued)

[Dahmen+Huang+Schwab+Welper 2012] studied similar mixed formulations and found techniques other than localization by discontinuous spaces to make the method practical.
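The mixed formulation (1a)-(1b) has a direct saddle-point analogue at the matrix level. In the sketch below (all matrices are illustrative assumptions), the x-component of the saddle-point solve coincides with the least-squares/normal-equation DPG solution, and the e-component is the error representation function:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4                               # dim Y^r = 6, dim X_h = 4
Q = rng.normal(size=(n, n))
G = Q @ Q.T + n * np.eye(n)               # SPD Gram matrix of the Y inner product
B = rng.normal(size=(n, m))               # form matrix, full column rank
l = rng.normal(size=n)

# Mixed (saddle-point) solve:  [G, B; B^T, 0] [e; x] = [l; 0]
K = np.block([[G, B], [B.T, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([l, np.zeros(m)]))
e, x_mixed = sol[:n], sol[n:]

# Normal-equation (least-squares) solve: B^T G^{-1} B x = B^T G^{-1} l
Ginv_B = np.linalg.solve(G, B)
x_ls = np.linalg.solve(B.T @ Ginv_B, Ginv_B.T @ l)

assert np.allclose(x_mixed, x_ls)          # same DPG solution
assert np.allclose(B.T @ e, np.zeros(m))   # (1b): eps_r is b-orthogonal to X_h
```

Eliminating e from the first block row reproduces the normal equations, which is the algebraic content of the equivalence theorem.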

SLIDE 54

Recall the previous assumptions

[A.1] {w ∈ X : b(w, y) = 0 ∀y ∈ Y} = {0}.
[A.2] ∃ C1, C2 > 0 such that C1 ‖y‖_Y ≤ sup_{w∈X} |b(w, y)| / ‖w‖_X ≤ C2 ‖y‖_Y.
[A.3] ∃ Π : Y → Y^r and C_Π > 0 such that for all w_h ∈ X_h and y ∈ Y, b(w_h, y − Πy) = 0 and ‖Πy‖_Y ≤ C_Π ‖y‖_Y.

Optimal a priori estimates followed from these assumptions. We now show that a posteriori error estimators also follow from the same assumptions [A.1–A.3].

SLIDE 55

A posteriori error estimates

Theorem (Reliability & efficiency of the DPG error estimator [Carstensen+Demkowicz+G 2014]). Suppose [A.1–A.3] hold. Let F ∈ Y^*, x = B^{-1}F, let x_h ∈ X_h be the DPG solution, let η = ‖F − B x_h‖_{(Y^r)^*} = ‖ε^r‖_Y be the error estimator, and set osc(F) := ‖F ∘ (1 − Π)‖_{Y^*}. Then
  C1² ‖x − x_h‖_X² ≤ η² + (C_Π η + osc(F))²,   ← Reliability
  η² ≤ C2² ‖x − x_h‖_X².   ← Efficiency

“Efficiency” is trivial in least-squares methods. The proof of “reliability” uses Π critically.

SLIDE 56

A posteriori error estimates (continued)

The same bounds, stated with x̃_h ∈ X_h and η̃ = ‖F − B x̃_h‖_{(Y^r)^*}:
  C1² ‖x − x̃_h‖_X² ≤ η̃² + (C_Π η̃ + osc(F))²,   ← Reliability
  η̃² ≤ C2² ‖x − x̃_h‖_X².   ← Efficiency

“Efficiency” is trivial in least-squares methods. The proof of “reliability” uses Π critically.

SLIDE 57

Error estimator in the Laplace example

[Figure: adaptive mesh, iteration 0.] Results for the Dirichlet problem with f(x, y) = e^{−100(x² + y²)} shown aside. There is no need to code an error estimator for driving adaptivity in DPG methods. The mixed formulation is standard Galerkin, so it is easily implementable in codes without support for Petrov-Galerkin forms.

[Click here to download FEniCS code for this experiment.]

SLIDE 58

Error estimator in the Laplace example (continued)

[Figure: adaptive mesh, iteration 6.]

SLIDE 59

Error estimator in the Laplace example (continued)

[Figure: adaptive mesh, iteration 11.]

SLIDE 60

Example 5: Stresses in Stokes flow

Second-order system:
  (1/2) Δu − ∇p = f in Ω,  ∇ · u = 0 in Ω.
No-slip B.C.: u = 0 on ∂Ω. For uniqueness: (p, 1)_Ω = 0.

Convert to a first-order system:
  σ + p δ − ε(u) = 0   (definition of the true fluid stress σ),
  ∇ · σ = f   (since ∇ · σ = (1/2) Δu − ∇p).

SLIDE 61

Example 5 (continued)

Apply the deviatoric Dτ = τ − (tr τ / N) δ to the first-order system
  σ + p δ − ε(u) = 0,  ∇ · σ = f.

SLIDE 62

Example 5 (continued)

The deviatoric eliminates the pressure:
  Dσ − ε(u) = 0,  ∇ · σ = f.  And (tr σ, 1)_Ω = 0.

SLIDE 63

Example 5 (continued)

DPG form with x = (σ, u, û, σ̂_n, α) and y = (τ, v, ω):
  b(x, y) = (Dσ, τ)_Ω + (u, ∇ · τ)_{Ω_h} − ⟨û, τn⟩_{∂Ω_h} + (α, tr τ)_Ω + (σ, ε(v))_{Ω_h} − ⟨σ̂_n, v⟩_{∂Ω_h} + (tr σ, ω)_Ω.

SLIDE 64

Spaces for the Stokes example

Trial and test spaces:
  X = L^2(Ω; S) × L^2(Ω)^N × H^{1/2}(∂Ω_h)^N × H^{-1/2}(∂Ω_h)^N × R,
  Y = H(div, Ω_h; S) × H^1(Ω_h)^N × R.

Discrete spaces:
  X_h = {(σ, u, û, σ̂_n, α) ∈ X : σ|_K ∈ P_p(K; S), u|_K ∈ P_p(K)^N ∀ elements K; û|_F ∈ P_{p+1}(F)^N, σ̂_n|_F ∈ P_p(F)^N ∀ interfaces F; α ∈ R},
  Y^r = {(τ, v, ω) ∈ Y : ω ∈ R; τ|_K ∈ P_{p+2}(K; S), v|_K ∈ P_{p+N}(K)^N ∀ elements K}.

SLIDE 65

A priori and a posteriori estimates for the Stokes example

Theorem. Suppose Ω_h is a shape-regular simplicial mesh of Ω and p ≥ 0. Then [A.1–A.3] hold for the Stokes example. Consequently, there exist mesh-independent constants c1, ..., c4 > 0 such that
  ‖x − x_h‖_X ≤ c1 min_{ξ_h∈X_h} ‖x − ξ_h‖_X,
  c4 ‖x − x_h‖_X² − c2 osc(F)² ≤ η² ≤ c3 ‖x − x_h‖_X².

Verification of [A.3] uses degrees of freedom of symmetric-matrix polynomials from [G+Guzmán 2011]. The proof proceeds by taking the incompressible limit of a similar elasticity discretization.

SLIDE 66

Stokes solution on L-shaped domain

Osborn’s singular solution: u = curl(a₊s₊ + a₋s₋ + c₊ − c₋), where
  s± = r^{1+z} sin((z ± 1)θ),  c± = r^{1+z} cos((z ± 1)θ),
  a± = −z cot(3zπ/2)/(z ± 1),  z² = sin²(3zπ/2)  [z = root with smallest real part].

Results from an h-adaptive algorithm with η as the estimator and p = 2. [Figures: σ_xy, σ_xx, σ_yy.]

SLIDE 67

Stokes solution on L-shaped domain (continued)

[Figure: log-log plot of the error ‖x − x_h‖_X and the estimator η against the number of degrees of freedom N, for h-adaptivity on the L-shaped domain; both decay like N^{−1.50}.] [Figures: σ_xy, σ_xx, σ_yy.]

SLIDE 68

Stokes solution on L-shaped domain (continued)

[Figure: effectivity index ρ = η / ‖x − x_h‖_X against the number of degrees of freedom during the adaptive process; values stay between roughly 0.4 and 1.]

SLIDE 69

Stokes solution on L-shaped domain (continued)

[Figure: effectivity ρ̃ = η̃ / ‖x − x̃_h‖_X for the adaptive iterates after x_h is randomly perturbed by 5%; values stay between roughly 0.4 and 1.]

SLIDE 70

Conclusion

Three avenues to DPG methods
◮ Petrov-Galerkin with optimal test functions ✦
◮ Least-squares Galerkin method ✦
◮ Mixed Galerkin method ✦

A priori error analysis
◮ Ideal DPG method ✦
◮ Practical DPG method ✦

A posteriori error analysis ✦
Fast solvers ✦
Examples
◮ Example 1 (Standard FEM) ✦
◮ Example 2 (L2-based least-squares) ✦
◮ Example 3 (An ODE) ✦
◮ Example 4 (Diffusion) ✦
◮ Example 5 (Stokes) ✦