Regularization of ill-posed problems. Uno Hämarik, University of Tartu (PDF document)



SLIDE 1

Regularization of ill-posed problems

Uno Hämarik

University of Tartu, Estonia

Content

  • 1. Ill-posed problems (definition and examples)
  • 2. Regularization of ill-posed problems with noisy data
  • 3. Parameter choice rules for exact noise level
  • 4. Iterative methods
  • 5. Discretization methods
  • 6. Lavrentiev and Tikhonov methods and modifications
  • 7. Parameter choice rules for approximate noise level

Talk is based on joint research with G. Vainikko, T. Raus, R. Palm (all Tartu, Estonia), U. Tautenhahn (Zittau, Germany), R. Plato (Berlin, Germany). Ideas for collaboration with the Finnish Inverse Problems Society and the Finnish Centre of Excellence in Inverse Problems are welcome!

SLIDE 2

1 Ill-posed problems

Problem Au = f (1), A ∈ L(H, F), H, F Hilbert spaces.

1.1 Definition. Problem (1) is well-posed if
1) for all f ∈ F, (1) has a solution u∗ ∈ H;
2) for all f ∈ F, the solution of (1) is unique;
3) the solution u∗ depends continuously on the data.
If one of 1)–3) is not satisfied, (1) is an ill-posed problem. If A is compact and R(A) is nonclosed, then A⁻¹ (if it exists) is unbounded, so 3) is not satisfied.

1.2 Example 1. Differentiation of a function f ∈ C¹[0, 1]. If f_n(t) = f(t) + (1/n) sin n²t, then for n → ∞

‖f_n − f‖_∞ → 0, but ‖f_n′ − f′‖_∞ → ∞.

1.3 Example 2. Integral equation of the first kind:

Au(t) ≡ ∫₀¹ K(t, s) u(s) ds = f(t) (0 ≤ t ≤ 1), K(t, s) smooth.

Ex. 2.1. K(t, s) ≡ 1:
∫₀¹ u(s) ds = f(t).
A solution u∗ exists ⇔ f(t) ≡ const; the set of solutions is very large.

Ex. 2.2.
∫₀ᵗ u(s) ds = f(t).
A solution u∗ ∈ L²[0, 1] exists ⇔ f(0) = 0, f ∈ H¹[0, 1]; then u∗ = f′.

SLIDE 3

1.4 Example 3. System of linear equations with large condition number of the matrix. Example:

x₁ + 10 x₂ = 11 + ε
10 x₁ + 100.1 x₂ = 110.1

ε = 0 ⇒ x₁ = 1, x₂ = 1; ε = 0.1 ⇒ x₁ = 101.1, x₂ = −9.

2 Regularization of ill-posed problems with noisy data

2.1 Noisy data. Instead of f ∈ F, available is f_δ ∈ F with ‖f_δ − f‖ ≤ δ.

Well-posed problems: for all δ > 0 there exists a unique A⁻¹f_δ, and for δ → 0, A⁻¹f_δ → A⁻¹f = u∗.
Ill-posed problems: A⁻¹f_δ may not exist; even if A⁻¹f_δ exists, generally A⁻¹f_δ does not converge to A⁻¹f.

Hence noisy data are no problem for well-posed problems, but a serious problem for ill-posed problems.

Remark. In Section 7 we assume only lim_{δ→0} ‖f_δ − f‖/δ ≤ c, where c is an unknown constant.

2.2 Regularization.
1. Choose a parametric solution method converging in the case of exact data: for the approximate solution u_r it holds that u_r → u∗ as r → ∞ (typically this convergence is monotone; the parameter is r = n ∈ ℕ in iterative and projection methods, r ∈ ℝ in the Tikhonov method).
2. In the case of noisy data, choose the regularization parameter r = r(δ) so that
u_{r(δ)} → u∗ as δ → 0. (2)
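A minimal numerical check of Example 3 (a sketch assuming NumPy; the matrix and right-hand side are taken from the slide above) shows how the perturbation ε = 0.1 of the data moves the solution:

```python
import numpy as np

# The 2x2 system of Example 3: the determinant is only 0.1, so the
# condition number is large and a tiny data change shifts the solution far.
A = np.array([[1.0, 10.0],
              [10.0, 100.1]])

def solve(eps):
    """Solve the system with right-hand side (11 + eps, 110.1)."""
    b = np.array([11.0 + eps, 110.1])
    return np.linalg.solve(A, b)

x_exact = solve(0.0)           # [1, 1]
x_noisy = solve(0.1)           # approximately [101.1, -9]
condition = np.linalg.cond(A)  # on the order of 1e5
```

A data change of size 0.1 thus produces a solution change of size about 100, consistent with the condition number of roughly 10⁵.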

SLIDE 4

Main problem: how to choose r to guarantee (2).

Error decomposition: ‖u∗ − u_r‖ ≤ ‖u∗ − u⁰_r‖ + ‖u⁰_r − u_r‖, where u⁰_r is the approximate solution for f_δ = f.

2.3 Special regularization methods.
1) Case A = A∗ ≥ 0: Lavrentiev method
(A + αI)u_α = f_δ, (3)
α > 0 is the regularization parameter (r = α⁻¹). Since ‖(A + αI)⁻¹‖ ≤ α⁻¹, (3) is well-posed for all α > 0.
2) General case: Tikhonov method. Pass from Au = f_δ to the normal equation A∗Au = A∗f_δ; since A∗A = (A∗A)∗ ≥ 0, the Lavrentiev method applies and gives
(A∗A + αI)u_α = A∗f_δ.
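A minimal sketch of the Tikhonov method (A∗A + αI)u_α = A∗f_δ on a discretized problem, assuming NumPy; the integration operator, the solution sin(πt), and the noise model below are hypothetical illustration choices, not from the slides:

```python
import numpy as np

def tikhonov(A, f_delta, alpha):
    """Tikhonov approximation: solve (A^T A + alpha*I) u = A^T f_delta."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_delta)

# Hypothetical test problem: discretized integration operator (ill-conditioned).
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))       # (Au)(t_i) ~ integral from 0 to t_i of u
t = np.linspace(h, 1.0, n)
u_star = np.sin(np.pi * t)             # assumed exact solution
f = A @ u_star                         # exact data

delta = 1e-2
noise = np.sin(40 * np.pi * t)         # high-frequency perturbation
f_delta = f + delta * noise / np.linalg.norm(noise)   # ||f_delta - f|| = delta

u_naive = np.linalg.solve(A, f_delta)  # no regularization: noise amplified
u_alpha = tikhonov(A, f_delta, 1e-3)   # regularized: error stays moderate
```

Comparing ‖u_naive − u∗‖ with ‖u_α − u∗‖ shows the stabilizing effect of the αI term on the inverted small singular values.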

SLIDE 5

3 Parameter choice rules for exact noise level

3.1 Discrepancy principle. r(δ) = r_D: ‖Au_{r_D} − f_δ‖ ≈ bδ, b = const > 1.

3.2 Monotone error rule (ME-rule). The idea: choose r = r_ME(δ) as the largest r for which we are able to prove that ‖u_r − u∗‖ is monotonically decreasing for r ∈ [0, r_ME] (assuming ‖f_δ − f‖ ≤ δ):
a) in methods with r ∈ ℝ: (d/dr)‖u_r − u∗‖² ≤ 0 for r ∈ (0, r_ME);
b) in methods with r = n ∈ ℕ: ‖u_n − u∗‖ < ‖u_{n−1} − u∗‖ for n = 1, 2, …, n_ME.

In the Tikhonov method, α = α_ME is the solution of the equation
(Au_α − f_δ, Av_α − f_δ) / ‖Av_α − f_δ‖ = δ, v_α = (αI + A∗A)⁻¹(αu_α + A∗f_δ).

The ME-rule is quasioptimal:
‖u_{α_ME} − u∗‖ ≤ const · inf_{α>0} ‖u_α − u∗‖,
and order-optimal: if u∗ ∈ R((A∗A)^{p/2}), then ‖u_{α_ME} − u∗‖ ≤ const · δ^{p/(p+1)} (p ≤ 2). The discrepancy principle is not quasioptimal, but it is order-optimal for p ≤ 1. If u∗ ∈ R(A∗A), then ‖u_{α_ME} − u∗‖ = O(δ^{2/3}), while ‖u_{α_D} − u∗‖ = O(δ^{1/2}).
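A sketch of the discrepancy principle (3.1) for Tikhonov regularization, assuming NumPy; the bisection search and the small integration-operator test problem are illustration choices, not from the slides. Since ‖Au_α − f_δ‖ grows monotonically in α, the parameter α_D with ‖Au_{α_D} − f_δ‖ ≈ bδ can be found by bisection on log α:

```python
import numpy as np

def tikhonov(A, f_delta, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f_delta)

def alpha_discrepancy(A, f_delta, delta, b=1.5, lo=1e-14, hi=1e2, steps=200):
    """Find alpha with ||A u_alpha - f_delta|| ~ b*delta (b > 1) by
    bisection in log(alpha); the discrepancy is increasing in alpha."""
    disc = lambda a: np.linalg.norm(A @ tikhonov(A, f_delta, a) - f_delta)
    for _ in range(steps):
        mid = np.sqrt(lo * hi)             # geometric midpoint
        if disc(mid) > b * delta:
            hi = mid                       # too much regularization
        else:
            lo = mid
    return np.sqrt(lo * hi)

# Hypothetical test problem: discretized integration operator.
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
t = np.linspace(h, 1.0, n)
u_star = np.sin(np.pi * t)
f = A @ u_star
delta = 1e-2
noise = np.sin(40 * np.pi * t)
f_delta = f + delta * noise / np.linalg.norm(noise)

alpha_D = alpha_discrepancy(A, f_delta, delta)
u_D = tikhonov(A, f_delta, alpha_D)
```

The geometric midpoint keeps the search well conditioned over the many decades that α can span.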

SLIDE 6

4 Iterative methods

u_n = u_{n−1} + A∗z_{n−1}, n = 1, 2, … (u₀ = 0) (4)

Here n is the regularization parameter.
Discrepancy principle: the stopping index n_D is the first n with ‖Au_n − f_δ‖ ≤ bδ, b = const > 1.
ME-rule: the stopping index n_ME is the first n with
(A(u_n + u_{n+1})/2 − f_δ, z_n) / ‖z_n‖ ≤ δ.

4.1 Linear methods
a) Landweber method: (4) with z_n = β(f_δ − Au_n), β ∈ (0, 2‖A∗A‖⁻¹);
b) implicit iteration method: βu_n + A∗Au_n = βu_{n−1} + A∗f_δ, n = 1, 2, …; u₀ = 0, β > 0; this is (4) with z_n = β⁻¹(f_δ − Au_{n+1}).
In both methods n_ME = n_D or n_ME = n_D − 1, and both rules are quasioptimal and order-optimal for all p > 0.

4.2 Conjugate gradient (CG) type methods
a) CGLS applies CG to the equation A∗Au = A∗f_δ and gives
u_k = arg min{‖f_δ − Au‖ : u ∈ K_k}, K_k = span{A∗f_δ, A∗A A∗f_δ, …, (A∗A)^{k−1} A∗f_δ}.
Algorithm: take u₀ = 0, r₀ = f_δ, v₋₁ = 0, ‖p₋₁‖ = ∞ and compute for n = 0, 1, 2, …:
p_n = A∗r_n, σ_n = ‖p_n‖²/‖p_{n−1}‖², v_n = r_n + σ_n v_{n−1}, q_n = A∗v_n, s_n = Aq_n, β_n = ‖p_n‖²/‖s_n‖², u_{n+1} = u_n + β_n q_n, r_{n+1} = r_n − β_n s_n.
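A sketch of the Landweber method (4.1a) with discrepancy-principle stopping, assuming NumPy; the discretized integration operator and the noise model are hypothetical illustration choices, not from the slides:

```python
import numpy as np

def landweber(A, f_delta, delta, b=1.5, beta=None, max_iter=50000):
    """u_n = u_{n-1} + beta * A^T (f_delta - A u_{n-1}), u_0 = 0, stopped at
    the first n with ||A u_n - f_delta|| <= b*delta (discrepancy principle)."""
    if beta is None:
        beta = 1.0 / np.linalg.norm(A, 2) ** 2   # safe: beta in (0, 2/||A||^2)
    u = np.zeros(A.shape[1])
    for n in range(max_iter):
        r = f_delta - A @ u
        if np.linalg.norm(r) <= b * delta:
            return u, n
        u = u + beta * (A.T @ r)
    return u, max_iter

# Hypothetical test problem: discretized integration operator.
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
t = np.linspace(h, 1.0, n)
u_star = np.sin(np.pi * t)
delta = 1e-2
noise = np.sin(40 * np.pi * t)
f_delta = A @ u_star + delta * noise / np.linalg.norm(noise)

u_D, n_D = landweber(A, f_delta, delta)
```

Stopping at n_D is essential here: the iteration count plays the role of the regularization parameter, and iterating far past n_D starts fitting the noise.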

SLIDE 7

b) CGME applies CG to the equation AA∗w = f_δ with u = A∗w and, in the case f_δ = f, gives u_k = arg min{‖u∗ − u‖ : u ∈ K_k}.
Algorithm: take u₀ = 0, r₀ = f_δ, v₋₁ = 0, ‖r₋₁‖ = ∞ and compute for n = 0, 1, 2, …:
σ_n = ‖r_n‖²/‖r_{n−1}‖², v_n = r_n + σ_n v_{n−1}, q_n = A∗v_n, β_n = ‖r_n‖²/‖q_n‖², u_{n+1} = u_n + β_n q_n, r_{n+1} = r_n − β_n Aq_n.

In both methods the ME-rule is applicable with z_n = β_n v_n. In CGLS the ordinary discrepancy principle is good, but in CGME one can stop at the first n with
( Σ_{i=0}^{n} ‖f_δ − Au_i‖⁻² )^{−1/2} ≤ bδ, b = const > 1.

5 Discretization methods

5.1 Numerical differentiation. f ∈ C^m[0, 1], m ∈ {1, 2, 3}, ‖f_δ − f‖_{C[0,1]} ≤ δ. Approximate u∗ = f′ by
u_h(t) = (f_δ(t + h) − f_δ(t − h)) / (2h) (for t ∈ [h, 1 − h]).
Here h is the regularization parameter. The error bound
‖u_h − u∗‖_{C[0,1]} ≤ c (h^{m−1} + δ/h)
is minimal for h ≈ δ^{1/m}, giving
‖u_h − u∗‖_{C[0,1]} = O(δ^{(m−1)/m}), m ∈ {2, 3}.
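A sketch of numerical differentiation (5.1) with the step h acting as the regularization parameter, assuming NumPy; the data function cos t and the oscillatory noise model are hypothetical illustration choices:

```python
import numpy as np

def central_diff(f_delta, t, h):
    """u_h(t) = (f_delta(t+h) - f_delta(t-h)) / (2h)."""
    return (f_delta(t + h) - f_delta(t - h)) / (2.0 * h)

delta = 1e-6
f = np.cos                                   # exact data (illustration)
f_prime = lambda t: -np.sin(t)
noise = lambda t: delta * np.sin(t / delta)  # small but rapidly oscillating
f_delta = lambda t: f(t) + noise(t)

t = np.linspace(0.2, 0.8, 61)
h_opt = delta ** (1.0 / 3.0)                 # h ~ delta^(1/m) with m = 3
err_opt = np.max(np.abs(central_diff(f_delta, t, h_opt) - f_prime(t)))
err_bad = np.max(np.abs(central_diff(f_delta, t, 1e-6) - f_prime(t)))
# err_opt stays near the O(delta^(2/3)) level, while err_bad (step too
# small) is dominated by the amplified noise term delta/h and is of order 1.
```

Shrinking h below δ^{1/m} does not help: the truncation term h^{m−1} keeps falling, but the noise term δ/h takes over.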

SLIDE 8

5.2 Projection methods. H_n ⊂ H, F_n ⊂ F, dim H_n = dim F_n < ∞; P_n, Q_n orthoprojectors, P_n: H → H_n, Q_n: F → F_n. Find u_n ∈ H_n such that (Au_n − f_δ, v_n) = 0 for all v_n ∈ F_n. Here n is the regularization parameter.

5.2.1 Least error method: H_n = A∗F_n. If f_δ = f, then u_n = P_n u∗. Let N(A) = {0}, N(A∗) = {0}, ‖v − Q_n v‖ → 0 (n → ∞, for all v ∈ F), F_n ⊂ F_{n+1} (n ≥ 1). ME-rule: find n = n(δ) as the first n ∈ ℕ in u_n = A∗v_n (v_n ∈ F_n) for which
(v_n − v_{n+1}, f_δ) / ‖v_n − v_{n+1}‖ ≤ δ.
Then ‖u_{n_ME} − u∗‖ → 0 as δ → 0.

5.2.2 Least squares method: F_n = AH_n. Let N(A) = {0} and
‖u − P_n u‖ → 0 as n → ∞ (for all u ∈ H). (5)
Let there exist m ∈ ℕ for which
(κ_n + κ_{n+1})^{1/m} ‖(I − P_n)(A∗A)^{1/(2m)}‖ ≤ const (n ≥ 1), (6)
where κ_n ≡ sup_{w_n ∈ H_n} ‖w_n‖/‖Aw_n‖. If n_D = n(δ) is chosen by the discrepancy principle, then
‖u_{n_D} − u∗‖ → 0 as δ → 0. (7)

5.2.3 Galerkin method: F_n = H_n. Let H = F, F_n = H_n, A = A∗ > 0, and let (5), (6) hold. If n_D = n(δ) is chosen by the discrepancy principle with b large enough, then (7) holds.

SLIDE 9

5.2.4 Collocation method

(Au)(t) ≡ ∫₀¹ K(t, s) u(s) ds = f(t) (0 ≤ t ≤ 1),
A: L²(0, 1) → L²(0, 1), N(A) = {0}, f ∈ C[0, 1],
∫₀¹ |K(t, s)|² ds ≤ const (0 ≤ t ≤ 1),
∫₀¹ |K(t′, s) − K(t, s)|² ds → 0 as t′ → t (0 ≤ t, t′ ≤ 1).

Given is a knot set {t_i ∈ [0, 1] : t_i ≠ t_j for i ≠ j; i, j ∈ I}, I an index set. Let {K(t_i, s), i ∈ I} be a linearly independent system. Let the index sets I_n satisfy I_n ⊂ I_{n+1} ⊂ … ⊂ I (n ≥ 1) and
Δ_n = sup_{t∈[0,1]} inf_{i∈I_n} |t − t_i| → 0 as n → ∞.

The approximate solution is u_n = Σ_{i∈I_n} c_i^{(n)} K(t_i, s), where {c_i^{(n)}} is the solution of the system
Σ_{i∈I_n} c_i^{(n)} ∫₀¹ K(t_i, s) K(t_j, s) ds = f(t_j) (j ∈ I_n).

Given are {δ_i}, i ∈ I, with |f_δ(t_i) − f(t_i)| ≤ δ_i.

ME-rule for the choice of the discretization level n_ME = n(δ): n_ME is the first n = 1, 2, … for which
‖u_{n+1}‖² − ‖u_n‖² ≤ Σ_{i∈I_{n+1}\I_n} |c_i^{(n+1)}| δ_i + Σ_{i∈I_n} |c_i^{(n)} − c_i^{(n+1)}| δ_i.

Then ‖u_{n_ME} − u∗‖_{L²(0,1)} → 0 provided lim_{n→∞} Σ_{i∈I_n} δ_i² = 0.

SLIDE 10

6 Modifications of the Lavrentiev and Tikhonov methods

6.1 Modifications of the Lavrentiev method. Let F = H, A = A∗ ≥ 0. Consider two modifications of the Lavrentiev method (A + αI)u_α = f_δ:
a) iterated Lavrentiev method: u_{α,0} = u₀, (A + αI)u_{α,k} = αu_{α,k−1} + f_δ, k = 1, 2, …, m; m ≥ 2;
b) extrapolated Lavrentiev method: if m Lavrentiev approximations u_{α_i} with α_i ≠ α_j (i ≠ j) are given, find the linear combination
u_{α,m} = Σ_{i=1}^{m} d_i u_{α_i}, d_i = Π_{j=1, j≠i}^{m} α_j/(α_j − α_i). (8)

In both methods a), b) the parameter α may be chosen by the discrepancy principle ‖Au_{α,m} − f_δ‖ = bδ, b > 1. Then under the assumption u∗ ∈ R(A^p) the error estimate
‖u_{α,m} − u∗‖ ≤ cδ^{p/(p+1)} (9)
holds with p ≤ m − 1.

6.2 Modifications of the Tikhonov method (A∗A + αI)u_α = A∗f_δ:
a) iterated Tikhonov method: u_{α,0} = u₀, (A∗A + αI)u_{α,k} = αu_{α,k−1} + A∗f_δ, k = 1, 2, …, m; m ≥ 2;
b) extrapolated Tikhonov method: if m Tikhonov approximations u_{α_i} with α_i ≠ α_j (i ≠ j) are given, find the linear combination u_{α,m} from (8).
In both methods a), b) the parameter α may be chosen by the discrepancy principle, and then under the assumption u∗ ∈ R((A∗A)^{p/2}) the error estimate (9) holds with p ≤ 2m − 1.
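A sketch of the extrapolated Tikhonov method (6.2b) with the coefficients d_i from (8), assuming NumPy; the small integration-operator test problem is a hypothetical illustration. For exact data the combination improves the per-mode error filter, so already m = 2 beats a single Tikhonov approximation:

```python
import numpy as np

def tikhonov(A, f, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ f)

def extrapolation_coeffs(alphas):
    """Coefficients d_i = prod_{j != i} alpha_j / (alpha_j - alpha_i) from (8)."""
    d = []
    for i, ai in enumerate(alphas):
        di = 1.0
        for j, aj in enumerate(alphas):
            if j != i:
                di *= aj / (aj - ai)
        d.append(di)
    return d

def extrapolated_tikhonov(A, f, alphas):
    """Linear combination u_{alpha,m} = sum_i d_i u_{alpha_i}."""
    d = extrapolation_coeffs(alphas)
    return sum(di * tikhonov(A, f, ai) for di, ai in zip(d, alphas))

# Hypothetical test problem: discretized integration operator, exact data.
n = 50
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
t = np.linspace(h, 1.0, n)
u_star = np.sin(np.pi * t)
f = A @ u_star

u_single = tikhonov(A, f, 1e-2)
u_extrap = extrapolated_tikhonov(A, f, [1e-2, 2e-2])
```

The d_i are Lagrange basis values at α = 0, so they sum to 1; per singular mode the error filter of the combination becomes the product Π_i α_i/(σ² + α_i), smaller than any single factor α_i/(σ² + α_i).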

SLIDE 11

7 Parameter choice rule for approximate noise level

Assumption: given are f_δ and δ, but instead of ‖f_δ − f‖ ≤ δ it holds only that
lim_{δ→0} ‖f_δ − f‖/δ ≤ c, where c is an unknown constant.

Consider the choice of the parameter α in the iterated Tikhonov approximation u_{α,m}. Define
ϕ(α) := α^{−1/2} ‖A∗(Au_{α,m+1} − f_δ)‖, t(α) := (Au_{α,m} − f_δ, Au_{α,m+1} − f_δ).
Let 0 < s ≤ 1/2 and b₂ ≥ b₁ > (2m + 1)^{m+1/2} (2m + 2)^{−(m+1)}.

Rule R. If ϕ(1) ≤ b₂δ, choose α(δ) = 1. Otherwise find α₂(δ) such that ϕ(α₂(δ)) ≤ b₂δ, but ϕ(α) ≥ b₁δ for each α ∈ [α₂(δ), 1]. Choose α = α(δ) as the global minimizer of the function α^{−s} t(α) on the interval [α₂(δ), 1].

Rule R guarantees the convergence ‖u_{α(δ),m} − u∗‖ → 0 as δ → 0 and the error estimate

‖u_{α(δ),m} − u∗‖ ≤ const · (1/(1 − 2s)) inf_{α≥0} Ψ(α), if ‖f_δ − f‖ ≤ max(δ, δ₀),
‖u_{α(δ),m} − u∗‖ ≤ const · (‖f_δ − f‖/δ₀)^{1/(2s)} inf_{α≥0} Ψ(α), if ‖f_δ − f‖ > max(δ, δ₀),

where Ψ(α) := ‖uᵉ_{α,m} − u∗‖ + 0.5 α^{−1/2} max(δ, ‖f_δ − f‖), δ₀ := t(α(δ))^{1/2}, and uᵉ_{α,m} is the iterated Tikhonov approximation with exact data f instead of f_δ. Similar rules and error estimates are given for iterative methods as well.

SLIDE 12

[Diagram: the total error ‖u_r − u∗‖, the approximation error ‖u∗ − u⁰_r‖, and the error due to noisy data ‖u⁰_r − u_r‖, shown as functions of r.]

SLIDE 13

Literature

Parameter choice: exact noise level
1. U. Tautenhahn, U. Hämarik. The use of monotonicity for choosing the regularization parameter in ill-posed problems. Inverse Problems, 1999, 15, 6, 1487-1505.
2. U. Hämarik, U. Tautenhahn. On the monotone error rule for parameter choice in iterative and continuous regularization methods. BIT Numerical Mathematics, 2001, 41, 5, 1029-1038.

Iterative methods [2-4]
3. U. Hämarik, T. Raus. On the choice of the stopping index in iteration methods for solving problems with noisy data. In: HERCMA 2001. Proceedings of the Fifth Hellenic-European Conference on Computer Mathematics and its Applications, Athens, Sept. 20-22, 2001, ed. E.A. Lipitakis, vol. 2, LEA Publishers, Athens, 2002, 524-529.
4. U. Hämarik, R. Palm. Comparison of stopping rules in conjugate gradient type methods for solving ill-posed problems. In: Mathematical Modelling and Analysis 2005, Proceedings of the 10th International Conference MMA2005, Trakai.

Discretization methods
5. G. Vainikko, U. Hämarik. Projection methods and self-regularization in ill-posed problems. Soviet Mathematics, 1985, 29, 10, 1-20.
6. U. Hämarik, E. Avi, A. Ganina. On the solution of ill-posed problems by projection methods with a posteriori choice of the discretization level. Math. Model. Anal., 2002, 7, 2, 241-252.

Lavrentiev and Tikhonov methods and modifications
7. U. Hämarik. On the parameter choice in the regularized Ritz-Galerkin method. Proc. Estonian Acad. Sci., Phys.-Math., 1993, 42, 2, 133-143.

Parameter choice: approximate noise level [4, 8, 9]
8. U. Hämarik, T. Raus. On the choice of the regularization parameter in the case of the approximately given noise level of data. In: 5th International Conference on Inverse Problems in Engineering: Theory and Practice, Cambridge, UK, 11-15th July, 2005 (ed. D. Lesnic), vol. II, H01, pp. 1-10, Leeds University Press, Leeds, 2005.
9. U. Hämarik, T. Raus. Choice of the regularization parameter in ill-posed problems with rough estimate of the noise level of data. WSEAS Transactions on Mathematics, 2005, 4, 2, 76-81.