The Fast Sweeping Method for Convex Hamilton-Jacobi Equations and Beyond
Hongkai Zhao, UC Irvine
Highlights of the fast sweeping method (FSM)
◮ Simplicity: a Gauss-Seidel type of iterative method.
◮ Optimal complexity: a finite number of iterations, independent of mesh size.
◮ A truly nonlinear method.
◮ The idea and iterative approach can be extended to more general problems.
A simple Hamilton-Jacobi (HJ) equation: the eikonal equation
Isotropic case: |∇u(x)| = c(x), u(x) = 0, x ∈ S. The viscosity solution u(x) is the first arrival time for a front starting at S with propagation speed v(x) = 1/c(x).
(Figure: expanding fronts T = 0, T₁, T₂ with normal velocity vₙ = f.)
Anisotropic case: [∇u(x)·M(x)∇u(x)]^(1/2) = 1, where M(x) is a symmetric positive definite (SPD) matrix.
The HJ equation is a nonlinear hyperbolic PDE. The proper notion of weak solution is the viscosity solution.
Different interpretations of the viscosity solution (1)
u is a viscosity subsolution (supersolution) if for every φ ∈ C₀^∞(Ω) such that u − φ attains a local maximum (minimum) at some x₀ ∈ Ω, we have H(x₀, ∇φ(x₀)) ≤ 0 (≥ 0). A viscosity solution is both a viscosity sub- and supersolution.
(Figure: (a) test function φ touching u from above at a subsolution point; (b) φ touching u from below at a supersolution point.)
◮ A general definition using test functions; useful for mathematical proofs.
◮ Non-constructive and difficult to use for designing numerical schemes.
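A standard one-dimensional example (added here for illustration) shows how the test-function definition selects the correct weak solution of |u′| − 1 = 0 on (−1, 1) with u(±1) = 0:

```latex
% Both u(x) = 1 - |x| and v(x) = |x| - 1 satisfy |w'| = 1 a.e., but only u is
% the viscosity solution.
% Subsolution test for u at x_0 = 0: if u - \varphi has a local maximum at 0,
% then \varphi'(0) \in D^+u(0) = [-1, 1], so
%   H(0, \varphi'(0)) = |\varphi'(0)| - 1 \le 0.
% The supersolution test at 0 is vacuous: D^-u(0) = \emptyset.
% For v, take \varphi \equiv -1: v - \varphi has a local minimum at 0 with
% \varphi'(0) = 0, but H(0, 0) = -1 < 0 violates the requirement H \ge 0.
\[
  u(x) = 1 - |x| \ \text{is the viscosity solution;}\qquad
  v(x) = |x| - 1 \ \text{is not.}
\]
```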
Different interpretations of the viscosity solution (2)
Let u^ε be the solution to H(x, ∇u^ε(x)) = ε∆u^ε(x); as ε → 0, u^ε → u, the viscosity solution.
◮ Singular perturbation ⇒ the equation type changes ⇒ more global dependence.
◮ Using numerical viscosity ⇒ Lax-Friedrichs scheme.
◮ General and useful for non-convex Hamiltonians.
Different interpretations of the viscosity solution (3)
Convex Hamiltonian: the viscosity solution is the optimal control solution
u(x) = inf_{y∈∂Ω} [ g(y) + l(x, y) ],
l(x, y) = inf_ξ { ∫₀¹ ρ(ξ(t), −ξ′(t)) dt : ξ ∈ C^{0,1}([0, 1], Ω), ξ(0) = x, ξ(1) = y },
where ρ(x, q) = max_{p: H(x,p)=0} ⟨p, q⟩.
(Figure: domain Ω with boundary Γ and admissible paths from x with directions α.)
◮ Characteristics for the hyperbolic PDE ⇒ locally optimal paths.
◮ When characteristics intersect, take the optimal value among all characteristics ⇔ viscosity solution.
◮ If Ω is convex and H is homogeneous in space ⇒ the optimal path is a straight line and l(x, y) = max_{p: H(p)=0} (x − y) · p.
Numerical methods
Two crucial ingredients:
◮ An appropriate discretization (local solver) that
  ◮ converges to the right viscosity solution (monotone),
  ◮ follows characteristics (upwind),
  ◮ deals with non-smoothness and has high order accuracy if possible (nonlinear).
◮ Solving the large nonlinear system by iterative methods:
  ◮ Jacobi iteration (pseudo time marching): explicit, but needs many iterations due to the finite speed of propagation and the CFL condition.
  ◮ Gauss-Seidel iteration: an efficient nonlinear solver based on upwind monotone schemes and the causality of the PDE.
Other related works
◮ Danielsson's algorithm (1980).
◮ Schneider et al., post sweeping (1992).
◮ Boué and Dupuis, fast sweeping method for stochastic control (1999).
◮ Jameson and Caughey use alternating sweeping to solve the linearized steady compressible Euler equations (2001).
◮ Tsitsiklis (1995) and Sethian (1996), fast marching method; Sethian and Vladimirsky (2001), ordered upwind scheme.
The FSM on rectangular grids
Solve |∇u(x)| = c(x), with u(x) given for x ∈ S.
◮ Local solver (nonlinear equation): upwind and causal,
  [(u_{i,j} − u_{xmin})⁺/h]² + [(u_{i,j} − u_{ymin})⁺/h]² = c²_{i,j},  i, j = 1, 2, . . . ,  (1)
  u_{xmin} = min(u_{i−1,j}, u_{i+1,j}),  u_{ymin} = min(u_{i,j−1}, u_{i,j+1})  (causality).
◮ Initial guess: u_{i,j} = u(x_{i,j}) given for x_{i,j} ∈ S; u_{i,j} = ∞ for x_{i,j} ∉ S.
◮ G-S iterations with four alternating orderings:
  (1) i = 1:I, j = 1:J;  (2) i = I:1, j = 1:J;  (3) i = I:1, j = J:1;  (4) i = 1:I, j = J:1.
  Let ū be the solution to (1) using the current values of u_{i±1,j}, u_{i,j±1}; then
  u^{new}_{i,j} = min(u^{old}_{i,j}, ū)  (control interpretation).
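The local solver (1) and the four alternating orderings can be sketched in a few lines. This is an illustrative implementation, not the original code; function and variable names, and the large value standing in for ∞, are assumptions:

```python
import numpy as np

BIG = 1e10  # large value standing in for the "infinity" initial guess


def fast_sweep_eikonal(c, h, source_mask, source_vals, n_sweeps=4):
    """Fast sweeping for |grad u| = c on a uniform grid (illustrative sketch).

    Local solver: [(u - uxmin)^+ / h]^2 + [(u - uymin)^+ / h]^2 = c_{ij}^2,
    swept with the four alternating Gauss-Seidel orderings.
    """
    I, J = c.shape
    u = np.full((I, J), BIG)
    u[source_mask] = source_vals           # exact data on S, "infinity" elsewhere

    def solve_local(a, b, ch):
        # Solve the quadratic for u given uxmin = a, uymin = b, and ch = c*h.
        if abs(a - b) >= ch:
            return min(a, b) + ch          # only the smaller neighbor is upwind
        return 0.5 * (a + b + np.sqrt(2.0 * ch * ch - (a - b) ** 2))

    orderings = [(range(I), range(J)),
                 (range(I - 1, -1, -1), range(J)),
                 (range(I - 1, -1, -1), range(J - 1, -1, -1)),
                 (range(I), range(J - 1, -1, -1))]
    for s in range(n_sweeps):
        i_rng, j_rng = orderings[s % 4]
        for i in i_rng:
            for j in j_rng:
                if source_mask[i, j]:
                    continue               # boundary data stays fixed
                uxmin = min(u[i - 1, j] if i > 0 else BIG,
                            u[i + 1, j] if i < I - 1 else BIG)
                uymin = min(u[i, j - 1] if j > 0 else BIG,
                            u[i, j + 1] if j < J - 1 else BIG)
                # Gauss-Seidel update: keep the smaller of old and new values.
                u[i, j] = min(u[i, j], solve_local(uxmin, uymin, c[i, j] * h))
    return u
```

With c ≡ 1 and a single source point, four sweeps reproduce the distance function up to O(h|log h|), consistent with the error estimates stated later in the talk.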
The FSM in 1D
In 1D there are two directions of characteristics, left or right, so two sweeps are needed to compute the numerical solution.
Example: the distance function to two points x₁, x₂: |uₓ| = 1, u(x₁) = u(x₂) = 0.
(Figures: the computed solution after the first sweep, left to right, and after the second sweep, right to left.)
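A minimal 1D sketch (grid size and source locations chosen for illustration) showing that two sweeps recover the distance to two points exactly:

```python
import numpy as np

# |u_x| = 1 with u(x1) = u(x2) = 0 on a uniform grid: the exact solution is
# the distance to {x1, x2}; one left-to-right and one right-to-left sweep
# recover it.
n, h = 101, 0.01
src = (25, 75)                      # grid indices of x1, x2 (illustrative)
u = np.full(n, np.inf)
for k in src:
    u[k] = 0.0
for rng in (range(n), range(n - 1, -1, -1)):   # sweep 1: L-to-R, sweep 2: R-to-L
    for i in rng:
        if i in src:
            continue                # boundary data stays fixed
        umin = min(u[i - 1] if i > 0 else np.inf,
                   u[i + 1] if i < n - 1 else np.inf)
        u[i] = min(u[i], umin + h)  # 1D local solver for c = 1
```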
The fast sweeping algorithm in 2D
Facts:
◮ The characteristics have infinitely many directions. ◮ However all directions can be classified into four groups: up-right,
up-left, down-left and down-right. Each sweeping order covers one group of characteristics.
(Figure: the four sweeping orderings i=1:I, j=1:J; i=1:I, j=J:1; i=I:1, j=1:J; i=I:1, j=J:1, and the group of characteristics each covers: (a) distance to a point; (b) distance to a circle.)
Convergence and error estimates
Theorem 1: The solution of the fast sweeping algorithm converges monotonically to the solution of the discretized system.
Let d(x, S) be the distance function to S and u^h(x, S) the numerical solution.
Theorem 2: For a single data point S = {x₀}, u^h(x, x₀) converges after 2ⁿ sweeps in n dimensions, and
d(x, x₀) ≤ u^h(x, x₀) ≤ d(x, x₀) + O(h|log h|)  (sharp!).
Theorem 3: For arbitrary S in n dimensions, after 2ⁿ iterations,
u(x_{i,j}, S) ≤ u^h(x_{i,j}, S) ≤ d(x_{i,j}, S) + O(h|log h|),
where u(x_{i,j}, S) is the solution to the discretized system.
General eikonal equation
|∇u(x)| = c(x); denote H(x, p) = |p| − c(x), where p = ∇u. The characteristic equations are
ẋ = ∇ₚH = ∇u / c(x),
ṗ = −∇ₓH = ∇c(x),
u̇ = ∇u · ẋ = c(x).
Each characteristic can be segmented into a finite number of pieces, which are covered successively by the G-S sweeps.
(Figure: a characteristic from x₀ segmented into pieces 1-5, each covered by one sweep ordering.)
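The characteristic ODEs can be integrated directly. The sketch below uses a simple Euler step and an illustrative smooth speed c(x) = 1 + 0.5 x₁; the function name, step size, and speed are assumptions. Along the ray, |p| = c(x) and u grows at rate c:

```python
import numpy as np

def trace_characteristic(x0, p0, dt=1e-3, steps=2000):
    """Euler integration of the characteristic ODEs for |grad u| = c.

    xdot = p / c (= grad u / c), pdot = grad c, udot = c, with the
    illustrative smooth speed c(x) = 1 + 0.5 * x[0].
    """
    c = lambda x: 1.0 + 0.5 * x[0]
    gc = lambda x: np.array([0.5, 0.0])   # grad c is constant for this c
    x = np.asarray(x0, dtype=float).copy()
    p = np.asarray(p0, dtype=float).copy()
    u = 0.0
    for _ in range(steps):
        x = x + dt * p / c(x)             # xdot = p / c, and |xdot| = 1
        p = p + dt * gc(x)                # pdot = grad c
        u = u + dt * c(x)                 # udot = c
    return x, p, u
```

Starting from x₀ = (0, 0) with p₀ = (1, 0), the relation |p| = c(x) is preserved along the ray, which is the invariant |ẋ| = 1 used on the next slide.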
Total direction variation along a characteristic
|ẋ| = |∇u / c(x)| = 1,
ẍ = (∇²u ẋ)/c − (∇u/c)(∇c · ẋ)/c = (I − Pₙ) ∇c(x)/c(x),
where Pₙ is the projection onto n = ∇u/|∇u|, so ẍ is the curvature vector.
Let K = max_{x∈Ω} |∇c(x)|/c(x), D = the diameter of the domain, and c_M (c_m) the maximum (minimum) of c(x). The total direction variation is bounded by
∫_Γ |ẍ| ds ≤ ∫_Γ |∇c(x)|/c(x) ds ≤ K ∫_Γ ds ≤ K length(Γ) ≤ D K c_M/c_m
⇒ the maximum number of turns is bounded by (D K / 2π)(c_M/c_m).
Maximum length of a characteristic
Let the characteristic Γ join x₀ ∈ S and x ∈ Ω. Then
c_m ∫_Γ ds ≤ ∫_Γ c(s) ds = u(x) ≤ ∫_{[x₀, x]} c(s) ds ≤ c_M |x − x₀|.
Hence length(Γ) = ∫_Γ ds ≤ D c_M/c_m, where D is the diameter of the domain and c_M (c_m) is the maximum (minimum) of c(x). The maximum number of turns is bounded by (D K / 2π)(c_M/c_m).
FSM on unstructured meshes (Luo, Qian, Zhang & Zhao)
Two key ingredients:
◮ Monotone upwind scheme.
◮ Systematic ordering covering all characteristics.

Ordering on unstructured meshes
The key point: a systematic ordering such that each characteristic can be covered in a finite number of sweep orderings.
◮ Order all vertices according to their distance to a few reference points and sweep back and forth.
(Figure: a characteristic covered by sweeps ordered by the distance to reference points A, B, C.)
Discretization using optimal control (Bornemann & Rasch)
Piecewise linear approximation on a local mesh Ω^h_C around the vertex C.
◮ Local constant approximation of the Hamiltonian: H^h_C(p) = H(C, p).
◮ Local optimal control problem:
  u^h(C) = inf_{y ∈ ∂Ω^h_C} [ l^h_C(C, y) + g^h_C(y) ].
◮ g^h_C(y) is the piecewise linear interpolation of u^h at the vertices on ∂Ω^h_C.
◮ The optimal path is a straight line:
  l^h_C(x, y) = max_{p: H^h_C(p)=0} (x − y) · p.
(Figure: local mesh Ω^h_C around vertex C with neighboring vertices A, B.)
Properties of the discretization
The discretization is of the implicit form F^h_C(C, u^h_C, u^h_A, u^h_B) = 0. The scheme is
◮ consistent: if u is linear with ∇u = p, then H^h_C(C, ∇u) = H(C, p);
◮ upwind and monotone:
  0 ≤ ∂u^h_C/∂u^h_A, ∂u^h_C/∂u^h_B ≤ 1,  ∂u^h_C/∂u^h_A + ∂u^h_C/∂u^h_B = 1.
◮ For the anisotropic eikonal equation [∇u(x)·M(x)∇u(x)]^(1/2) = 1, the implicit relation can be solved explicitly (Kao, Osher & Tsai 05; Bornemann & Rasch 06; Qian, Zhang & Z. 07):
  u^h_C = u^h_C(C, u^h_A, u^h_B).
Convergence
Theorem
◮ There exists a unique solution to the discretized nonlinear system.
◮ The fast sweeping iteration converges monotonically to the unique solution.
◮ The unique solution of the discretized system converges to the viscosity solution as h → 0.

Two special cases on rectangular grids:
(Figure: (a) five-point stencil around C; (b) nine-point stencil around C.)
Nine-point stencils yield more accurate solutions due to better angle resolution.
Remarks on discretization
◮ Lax-Friedrichs scheme: centered difference + explicit numerical viscosity ⇒ monotone but not upwind!
  H( (u⁺ₓ + u⁻ₓ)/2, (u⁺_y + u⁻_y)/2 ) − α(u⁺ₓ − u⁻ₓ)/2 − β(u⁺_y − u⁻_y)/2.
◮ Efficient and unconditionally stable implicit schemes can easily be designed for the time dependent HJ equation u_t + H(x, ∇u) = 0, since the CFL condition for explicit schemes may not be easy to estimate for nonlinear problems.
(Figure: (a) stencil of an explicit scheme; (b) stencil of an implicit scheme, at grid points i − 1, i, i + 1 and time levels tⁿ⁻¹, tⁿ, tⁿ⁺¹.)
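For the eikonal Hamiltonian H(∇u) = |∇u| − c, the Lax-Friedrichs numerical Hamiltonian above can be sketched as follows (the function name is an assumption, and the defaults α = β = 1 are illustrative bounds on |∂H/∂p| and |∂H/∂q| for this H):

```python
import numpy as np

def lf_hamiltonian(up, um, vp, vm, c, alpha=1.0, beta=1.0):
    """Lax-Friedrichs numerical Hamiltonian for H(grad u) = |grad u| - c.

    up/um, vp/vm are the one-sided differences u_x^+, u_x^-, u_y^+, u_y^-.
    Centered average plus explicit numerical viscosity with coefficients
    alpha >= max|H_p|, beta >= max|H_q|: monotone, but not upwind.
    """
    pbar = 0.5 * (up + um)                 # centered approximation of u_x
    qbar = 0.5 * (vp + vm)                 # centered approximation of u_y
    return (np.hypot(pbar, qbar) - c
            - 0.5 * alpha * (up - um)
            - 0.5 * beta * (vp - vm))
```

When the one-sided differences agree (smooth u), the viscosity term vanishes and the numerical Hamiltonian reduces to |∇u| − c, which is the consistency requirement.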
The complexity
The complexity of the fast sweeping method (FSM) is O(N).
◮ The constant in O depends on the dimension and the inhomogeneity.
◮ Neither the algorithm nor the complexity depends on the anisotropy of the Hamiltonian.
◮ A locking and dynamic queue strategy (Bak, McLaughlin and Renzi) can be used to reduce unnecessary updates.
Remarks.
1. The complexity of the fast marching method (FMM) is O(N log N).
  ◮ For the isotropic eikonal equation, the constant in O depends on the dimension but not on the inhomogeneity.
  ◮ For the isotropic eikonal equation, both FSM and FMM use a monotone upwind scheme and render exactly the same solution. For the anisotropic eikonal equation, the ordered upwind scheme determines the stencils on the fly.
2. For a general convex Hamilton-Jacobi equation, a Dijkstra-type one-pass algorithm on a fixed mesh may fail.
3. The FSM, as an iterative method, can be more flexible.
Anisotropic eikonal equations
[∇u(x)·M(x)∇u(x)]^(1/2) = 1, x ∈ ℝᵈ, where M(x) is a d × d SPD matrix. In 2D,
M = [ a  −c ; −c  b ],
where a > 0, b > 0 and c² − ab < 0. The anisotropy of M is characterized by
η = [λ_max(M)/λ_min(M)]^(1/2).
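The anisotropy η can be computed directly from the eigenvalues of M; a small sketch (the function name is an assumption). For a = b = 1 this reproduces η = [(1+c)/(1−c)]^(1/2), since the eigenvalues of M are then 1 ± c:

```python
import numpy as np

def anisotropy(a, b, c):
    """Anisotropy eta = sqrt(lambda_max / lambda_min) of M = [[a, -c], [-c, b]]."""
    lam = np.linalg.eigvalsh(np.array([[a, -c], [-c, b]]))  # ascending order
    return np.sqrt(lam[1] / lam[0])
```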
Numerical tests
(Figures: (a) five-ring problem, zoom of the mesh (12116 nodes); (b) a = 1, b = 1, c = 0, η = [(1+c)/(1−c)]^(1/2) = 1, 29 sweeps; (c) a = 1, b = 1, c = 0.7, 28 sweeps; (d) a = 1, b = 1, c = 0.9, 28 sweeps.)
Computing the distance fields and geodesics on manifolds
Convergence

Wrapping [−0.2, 0.2] × [−0.2, 0.2] vs. no wrapping; a = 150.25, b = 50.75, c = 86.16953, η = √200:

                      Wrapping                  No wrapping
Nodes   Elements   L1 error   order  iter   L1 error   order  iter
1473    2816       8.78E-3    –      4      8.87E-3    –      4
5716    11264      4.04E-3    1.12   4      5.38E-3    0.72   4
22785   45056      2.10E-3    0.94   4      3.22E-3    0.74   5
90625   180224     1.04E-3    1.02   4      1.88E-3    0.77   5

a = 1500.25, b = 500.75, c = 865.5924, η = √2000:

                      Wrapping                  No wrapping
Nodes   Elements   L1 error   order  iter   L1 error   order  iter
1473    2816       6.23E-3    –      4      5.72E-3    –      4
5716    11264      2.89E-3    1.11   4      3.33E-3    0.78   4
22785   45056      1.53E-3    0.92   4      1.93E-3    0.79   5
90625   180224     7.66E-4    1.00   4      1.10E-3    0.80   5

Comparison between the 5-point and the 9-point stencils; a = 1, b = 1, c = 0.9:

                Five-point stencil       Nine-point stencil
Mesh          L1 error   order  iter   L1 error   order  iter
40 × 40       1.17E-1    –      4      1.57E-2    –      4
80 × 80       6.35E-2    0.88   4      8.18E-3    0.94   4
160 × 160     3.39E-2    0.90   4      4.18E-3    0.97   4
320 × 320     1.78E-2    0.93   4      2.12E-3    0.98   4
Contraction property for FSM
FSM: monotone upwind scheme + GS iteration + alternating sweeping.
Linear example: a uₓ + b u_y = 0, a > b > 0. The first-order monotone upwind scheme is
u^h_{i,j} = [a/(a+b)] u^h_{i−1,j} + [b/(a+b)] u^h_{i,j−1}.
Define eⁿ_{i,j} = uⁿ_{i,j} − u^h_{i,j}. With the right ordering (i = 1:I, j = 1:J) for the GS iteration, the update at every grid point gives
eⁿ_{i,j} = [a/(a+b)] eⁿ_{i−1,j} + [b/(a+b)] eⁿ_{i,j−1}.
Remark: a similar relation holds for monotone upwind schemes on triangular meshes.
Two convergence scenarios (1)
Case 1: Information propagates directly from the boundary. Convergence in a finite number of iterations, independent of mesh size.
(Figure: upwind stencil at (i, j) with neighbors (i−1, j), (i, j−1) and characteristic direction (a, b).)
For the linear example with constant coefficients, one sweep with the right ordering is needed if the boundary condition is given on the left and bottom boundaries: e_{i−1,j} = e_{i,j−1} = 0 ⇒ e_{i,j} = 0.
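This one-sweep convergence is easy to verify numerically; an illustrative sketch (grid size, boundary data, and variable names are assumptions):

```python
import numpy as np

# Upwind scheme for a u_x + b u_y = 0, a > b > 0, with inflow data on the
# left and bottom boundaries.
a, b, n = 2.0, 1.0, 64
x = np.linspace(0.0, 1.0, n)
u = np.zeros((n, n))                          # u[i, j] ~ u(x_i, y_j)
u[:, 0] = np.sin(2.0 * np.pi * x)             # bottom boundary u(x, 0)
u[0, :] = np.sin(-2.0 * np.pi * (a / b) * x)  # left boundary u(0, y)

def sweep(u):
    # One Gauss-Seidel pass in the "right" ordering i = 1:I, j = 1:J.
    change = 0.0
    for i in range(1, n):
        for j in range(1, n):
            new = (a * u[i - 1, j] + b * u[i, j - 1]) / (a + b)
            change = max(change, abs(new - u[i, j]))
            u[i, j] = new
    return change

first = sweep(u)    # fills the interior: every dependency is already final
second = sweep(u)   # no further change: the discrete system is triangular
```

In the first pass, every value a grid point depends on has already been finalized when that point is visited, so the second pass changes nothing at all.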
Two convergence scenarios (2)
Case 2: There is circular dependence, e.g. with a given boundary condition at the bottom and a periodic boundary condition in x.
◮ With the right ordering, at each grid point the GS update gives an error contraction. For example, on the first grid line above the bottom,
  eⁿ_{i,1} = α(a, b) eⁿ_{i−1,1},  α(a, b) = a/(a+b) < 1.
◮ In each sweep with a right ordering, the error is contracted by a factor α^{1/h} after one iteration.
◮ Very few iterations n are needed to achieve (α^{1/h})ⁿ ∼ ε, for any ε > 0.
◮ The smaller h is, the fewer iterations are needed.
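The contraction in Case 2 can be observed numerically. This sketch (grid sizes, tolerance, and function name are all illustrative) counts sweeps for the periodic linear example and confirms that finer meshes do not need more sweeps:

```python
import numpy as np

def sweeps_to_converge(a, b, n, tol=1e-10):
    """Count Gauss-Seidel sweeps (ordering i = 1:I, j = 1:J) for the linear
    example a u_x + b u_y = 0 with periodic boundary condition in x."""
    h = 1.0 / n
    xs = np.arange(n) * h
    u = np.zeros((n, n + 1))               # i periodic in x, j = 0 is the bottom
    u[:, 0] = np.sin(2.0 * np.pi * xs)     # given bottom boundary data
    count = 0
    while True:
        count += 1
        change = 0.0
        for j in range(1, n + 1):
            for i in range(n):
                # u[i - 1, j] wraps around for i = 0 (numpy negative indexing);
                # the wrapped value is stale: this is the circular dependence.
                new = (a * u[i - 1, j] + b * u[i, j - 1]) / (a + b)
                change = max(change, abs(new - u[i, j]))
                u[i, j] = new
        if change < tol:
            return count
```

Per sweep, the error along a grid line is contracted by roughly α^{1/h} with α = a/(a+b), so the contraction per sweep is stronger on finer meshes.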
Convergence for nonlinear problems
The local solver for convex HJ equations is monotone and upwind, satisfying
0 ≤ ∂u_C/∂u_A, ∂u_C/∂u_B ≤ 1,  ∂u_C/∂u_A + ∂u_C/∂u_B = 1
⇒ contraction property ⇒ the error is reduced at each grid update when the ordering is correct.
FSM: monotone upwind scheme + GS iteration + alternating sweeping ⇒ fast convergence.
Differences between linear and nonlinear problems
Linear problems:
◮ The ordering and contraction rate are known a priori (depending on the coefficients).
◮ A new value always replaces the old one ⇒ the largest error may propagate and influence all points ⇒ the solution may get worse at some points during the G-S iterations.
Nonlinear problems:
◮ The ordering and contraction rate are unknown a priori (depending on the solution).
◮ Nonlinear stability due to causality enforcement: the largest error may not influence any other points, and the solution at each grid point always improves after each update.
Linear example
a uₓ + b u_y = 0, Ω = [0, 1] × [0, 1].
B.C. u(x, 0) = sin(2πx), u(0, y) = sin(−2π(a/b)y). Number of sweeps to converge:

          a=1,b=2  a=2,b=1  a=4,b=1  a=8,b=1
h=1/20       2        2        2        2
h=1/40       2        2        2        2
h=1/80       2        2        2        2
h=1/160      2        2        2        2

B.C. u(x, 0) = sin(2πx), periodic in x:

          a=1,b=2  a=2,b=1  a=4,b=1  a=8,b=1
h=1/20       6       13       23       46
h=1/40       5        9       16       32
h=1/80       4        7       12       23
h=1/160      4        6       10       18
Nonlinear example
[∇u·M_θ∇u]^(1/2) = 1, u(x, 0) = (α + 1) + α sin(2πx), periodic in x, where θ is the rotation angle and M₀ = diag(1/16, 1). Number of sweeps to converge:

α = 0.15, tolerance = 10⁻⁷:
          θ=0  θ=π/6  θ=π/3  θ=π/2  θ=2π/3  θ=5π/6
h=1/20     6    12     20      6     19      10
h=1/40     6    12     16      6     15      10
h=1/80     6     9     16      6     14      10
h=1/160    6     8     12      6     10       6
h=1/320    6     8     12      6     10       6

α = 0.15, tolerance = machine zero:
          θ=0  θ=π/6  θ=π/3  θ=π/2  θ=2π/3  θ=5π/6
h=1/20     6    16     32      6     26      18
h=1/40     6    16     21      6     26      14
h=1/80     6    12     20      6     18      10
h=1/160    6    12     20      6     14      10
Can an iterative method converge in finite iterations?
◮ Elliptic problem: no!
◮ Every point is coupled with all other points ⇒ the discretized system cannot be put into triangular form.
◮ Convergence mechanism: the iteration is a contraction map. ◮ Wish: a contraction rate that is bounded away from 1.
◮ Hyperbolic problem: yes!
◮ Information propagates along characteristics ⇒ if an
appropriate upwind scheme and ordering is used, the discretized system can be (or almost) put into a triangular system.
◮ Convergence mechanism: capture the propagation of information.
◮ Wish: the ordering of the nodes follows the characteristics.
◮ Linear problem: the ordering can be determined a priori.
◮ Nonlinear problem: the ordering depends on the solution.
Monotone upwind scheme + GS iteration + alternating sweeping is a must to cover all characteristics blindly and efficiently.
Interpretation in the framework of iterative methods
◮ Convergence in a finite number of iterations ≡ the nonlinear system can be put into triangular form.
◮ FSM achieves this by enforcing causality in the monotone upwind
scheme and using Gauss-Seidel iterations with alternating sweeping.
◮ Time marching method is a Jacobi type of iteration which uses
information from previous iteration (step). Information propagates with finite speed between iterations (steps).
◮ Ordering does not affect elliptic problems but helps hyperbolic problems and convection-dominated problems.
◮ Potential difficulties for more general hyperbolic problems:
(1) causality, (2) monotone upwind scheme.
A few other developments
◮ Parallel implementation of FSM (Z. 07, Detrixhe, Min & Gibou 13). ◮ Remove point source singularity based on factorized equation
(Fomel, Luo & Z. 09, Luo, Qian & Z. 11).
◮ High order schemes:
  ◮ using WENO (Zhang, Qian & Z. 06),
  ◮ using DG (Li, Shu, Zhang & Z. JCP 10, 11),
  ◮ deferred correction based on a compact scheme (Benamou,