SLIDE 1
Statistical Inverse Problems and Instrumental Variables

Thorsten Hohage

Institut für Numerische und Angewandte Mathematik, University of Göttingen

RICAM, Linz, 6.9.2008

SLIDE 2
Outline

◮ inverse problems
    abstract inverse problems examples
    regression problems and statistical inverse problems
    problem formulation
◮ some functional analysis
    spectral theory for compact operators
    spectral theorem for bounded self-adjoint operators
    functional calculus
◮ regularization methods
    Picard criterion, spectral cutoff
    other regularization methods
    definition of regularization methods
◮ convergence analysis
    negative results
    source conditions
    convergence in expectation
◮ choice of regularization parameters
    discrepancy principle

SLIDE 3

Keller’s definition of an inverse problem

“We call two problems inverses of one another if the formulation of each involves all or part of the solution of the other. Often, for historical reasons, one of the two problems has been studied extensively for some time, while the other is newer and not so well understood. In such cases, the former problem is called the direct problem, while the latter is called the inverse problem.”

J.B. Keller. Inverse Problems. Am. Math. Mon., 83:107–118, 1976.

SLIDE 4

Hadamard’s definition of well-posedness

Definition
A problem is called well-posed if
1. there exists a solution to the problem (existence),
2. there is at most one solution to the problem (uniqueness),
3. the solution depends continuously on the data (stability).
Otherwise the problem is called ill-posed.

SLIDE 5

ill-posedness in terms of operator equations

Suppose the inverse problem can be formulated as an operator equation F(a) = u, where a denotes the unknown solution and u the given data. Then the inverse problem is well-posed in the sense of Hadamard if
1. F is surjective (existence),
2. F is injective (uniqueness),
3. F⁻¹ is continuous (stability).
Typically, the third condition is violated for inverse problems!

SLIDE 6

first kind integral equations

Find a function a such that

∫ k(x, y) a(y) dy = u(x)   for all x.
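The instability is easy to observe numerically. The following sketch (my own illustration, not part of the slides) discretizes such an equation with a hypothetical Gaussian kernel by a simple quadrature rule; the smooth kernel yields a compact operator, and the condition number of the discretized system is enormous:

```python
# Minimal sketch (assumed example, not from the slides): discretize a
# first-kind integral equation with a smooth Gaussian kernel and inspect
# the conditioning of the resulting linear system.
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)   # collocation points on [0, 1]
h = 1.0 / n                    # quadrature weight
# k(x, y) = exp(-(x - y)^2 / (2 s^2)): a smooth kernel, hence a compact
# operator with rapidly decaying singular values after discretization.
s = 0.05
K = h * np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * s ** 2))

sv = np.linalg.svd(K, compute_uv=False)
print(f"sigma_max = {sv[0]:.2e}, sigma_min = {sv[-1]:.2e}")
print(f"condition number = {sv[0] / sv[-1]:.2e}")  # huge: the problem is ill-posed
```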

SLIDE 7

identification of parameters in differential equations

Estimate a parameter a in a differential equation given noisy measurements of the solution u! The parameter-to-solution operator F : a ↦ u is defined only implicitly via the differential equation and is typically nonlinear even if the differential equation is linear. The unknown parameter a might be

◮ a coefficient in the differential equation,
◮ a boundary condition or an initial condition,
◮ a parametrization of the shape of a domain.

SLIDE 8

Outline

SLIDE 9

nonparametric regression: random design

Estimate a function u in some smoothness class F given i.i.d. random variables (X_i, Y_i), i = 1, …, n, such that

E(Y | X = x) = u(x),   E((Y − u(x))² | X = x) < ∞.

SLIDE 10

nonparametric regression: deterministic design

Estimate a function u in some smoothness class F given

Y_i = u(x_i^(n)) + σ(x_i^(n)) ε_i,   i = 1, …, n,

where the ε_i are independent random variables satisfying Eε_i = 0, Eε_i² = 1.

SLIDE 11

estimating functions in white noise

Estimate a function u in some smoothness class F given a process

dY_t^(n) = u(t) dt + (σ(t)/√n) dB_t + δ_n(t) dt,

where B_t is Brownian motion and δ_n is a (small) drift term. Under mild assumptions every nonparametric regression problem is asymptotically equivalent to a white noise problem (Brown & Low, Ann. Stat., 1996).

SLIDE 12

Hilbert-space processes

Let Y be a Hilbert space.

◮ A Hilbert-space process is a continuous linear operator ξ : Y → L²(Ω, Σ, P). Every Hilbert-space valued random variable Ξ satisfying E‖Ξ‖² < ∞ can be identified with a Hilbert-space process ϕ ↦ ⟨Ξ, ϕ⟩, ϕ ∈ Y, but not vice versa. Notation: ⟨ξ, ϕ⟩ := ξϕ, ϕ ∈ Y.
◮ The covariance Cov_ξ ∈ L(Y) of ξ is defined implicitly by ⟨Cov_ξ ϕ₁, ϕ₂⟩ = Cov(⟨ξ, ϕ₁⟩, ⟨ξ, ϕ₂⟩) for all ϕ₁, ϕ₂ ∈ Y.

SLIDE 13

statistical inverse problem

Estimate a given

Y = F(a) + σξ + δζ

where
◮ F : D(F) ⊂ X → Y is Fréchet differentiable and one-to-one, X, Y are separable Hilbert spaces, and F⁻¹ is not continuous!
◮ ξ is normalized stochastic noise (a Hilbert-space process in Y)
◮ σ ≥ 0 is the stochastic noise level
◮ ζ ∈ Y is normalized deterministic noise, ‖ζ‖ = 1
◮ δ ≥ 0 is the deterministic noise level

SLIDE 14

Outline

SLIDE 15

spectral theorem for compact self-adjoint operators

Theorem (Spectral theorem for compact self-adjoint operators)
Let A : X → X be a linear, compact, self-adjoint operator. Then:
◮ There exists a complete orthonormal system {a_j : j ∈ N} of X consisting of eigenvectors of A.
◮ If λ_j denote the corresponding eigenvalues, then

Aa = ∑_{j∈N} λ_j ⟨a, a_j⟩ a_j   for all a ∈ X.

◮ The only possible accumulation point of the sequence of eigenvalues λ_j is 0.

SLIDE 16

singular value decomposition

Theorem and Definition (singular value decomposition)
Let T ∈ L(X, Y) be compact with dim R(T) = ∞, and let P ∈ L(X) denote the orthogonal projection onto N(T). Then there exist singular values σ₀ ≥ σ₁ ≥ · · · > 0 and orthonormal systems {a₀, a₁, …} ⊂ X and {u₀, u₁, …} ⊂ Y such that

a = ∑_{n=0}^∞ ⟨a, a_n⟩ a_n + Pa,   Ta = ∑_{n=0}^∞ σ_n ⟨a, a_n⟩ u_n

for all a ∈ X. A system {(σ_n, a_n, u_n)} with these properties is called a singular system of T. The singular values σ_n = σ_n(T) are uniquely determined by T and satisfy σ_n(T) → 0 as n → ∞.
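Numerically, a singular system of a discretized operator is exactly what numpy.linalg.svd returns. The following sketch (an assumed toy example, not from the slides) verifies the expansion Ta = ∑ σ_n ⟨a, a_n⟩ u_n for a matrix:

```python
# Sketch (assumed matrix setting): numpy's SVD returns a singular system
# (sigma_n, a_n, u_n) of a matrix K, with K = U @ diag(sigma) @ Vt.
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((40, 30)) * 0.5 ** np.arange(30)  # decaying singular values
U, sigma, Vt = np.linalg.svd(K, full_matrices=False)

a = rng.standard_normal(30)
# Ta = sum_n sigma_n <a, a_n> u_n, with a_n the rows of Vt, u_n the columns of U
Ta = sum(sigma[n] * (Vt[n] @ a) * U[:, n] for n in range(len(sigma)))
print(np.allclose(Ta, K @ a))  # True
```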

SLIDE 17

spectrum

Definition
Let A : X → X be bounded. Then

ρ(A) := {z ∈ C : A − zI bijective and (A − zI)⁻¹ bounded}

is called the resolvent set of A, and σ(A) := C \ ρ(A) is called the spectrum of A.

It follows from Riesz theory that the spectrum of a compact operator is the union of {0} and the set of eigenvalues.

SLIDE 18

Compact self-adjoint operators are unitarily equivalent to multiplication operators

Let A ∈ L(X) be compact and self-adjoint with spectral decomposition Aϕ = ∑_{j=1}^∞ λ_j ⟨ϕ, ϕ_j⟩ ϕ_j.

◮ We define the operator W : ℓ²(N) → X by W(f) := ∑_{j∈N} f(j) ϕ_j. Here ℓ²(N) is the Hilbert space of all functions f : N → C with the norm ‖f‖² := ∑_{j∈N} |f(j)|². By Parseval’s equality, W is a unitary operator.
◮ Its inverse is given by (W⁻¹ϕ)(j) = ⟨ϕ, ϕ_j⟩ for j ∈ N.
◮ With this notation, the spectral decomposition can be written as W⁻¹AW = M_λ, where the multiplication operator M_λ ∈ L(ℓ²(N)) is defined by (M_λ f)(j) = λ_j f(j), j ∈ N.

SLIDE 19

Convolution operators are unitarily equivalent to multiplication operators

Let k ∈ L¹(R^d) satisfy k(x) = k(−x) for x ∈ R^d. Then the convolution operator (Aϕ)(x) := ∫_{R^d} k(x − y) ϕ(y) dy is self-adjoint in L(L²(R^d)).

◮ Recall that the Fourier transform (Fϕ)(ω) := ∫_{R^d} e^{−2πi⟨ω,x⟩} ϕ(x) dx, ω ∈ R^d, is unitary on L²(R^d).
◮ Due to the symmetry of k, the function λ := Fk is real-valued, and it is bounded since k ∈ L¹(R^d).
◮ Introducing the multiplication operator M_λ ∈ L(L²(R^d)), (M_λ f)(ω) := λ(ω) f(ω), the Convolution Theorem implies that FAF⁻¹ = M_λ.
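A discrete, periodic analogue of this diagonalization (my assumption; the slides treat the continuous case) can be checked with the FFT, which diagonalizes circular convolution:

```python
# Sketch of the discrete, periodic analogue: the DFT diagonalizes circular
# convolution, so A = F^{-1} M_lambda F with lambda = Fourier transform of k.
import numpy as np

n = 128
rng = np.random.default_rng(1)
t = np.linspace(-0.5, 0.5, n, endpoint=False)
k = np.exp(-t ** 2 / 0.01)                  # symmetric (even) kernel
k = np.roll(k, n // 2)                      # place the peak at index 0
lam = np.fft.fft(k)                         # "Fourier multiplier" of the kernel
print(np.max(np.abs(lam.imag)) < 1e-10)     # real-valued, since k is even

phi = rng.standard_normal(n)
conv_fft = np.real(np.fft.ifft(lam * np.fft.fft(phi)))   # F^{-1} M_lambda F phi
conv_direct = np.array([np.sum(k[(j - np.arange(n)) % n] * phi) for j in range(n)])
print(np.allclose(conv_fft, conv_direct))   # True: convolution = multiplication
```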

SLIDE 20

Halmos’ version of the spectral theorem

Theorem (spectral theorem for bounded self-adjoint operators)
Let A ∈ L(X) be self-adjoint. Then there exist a locally compact space Ω, a positive Borel measure µ on Ω, a unitary map W : L²(Ω, dµ) → X, and a real-valued function λ ∈ C(Ω) such that

W⁻¹AW = M_λ,

where M_λ ∈ L(L²(Ω, dµ)) is the multiplication operator defined by (M_λ f)(ω) := λ(ω) f(ω) for f ∈ L²(Ω, dµ) and ω ∈ Ω.

SLIDE 21

functional calculus

Let M(σ(A)) denote the algebra of bounded measurable functions f : σ(A) → R.

Theorem and Definition
For f ∈ M(σ(A)) the operator f(A) ∈ L(X), f(A) := W M_{f∘λ} W⁻¹, is well defined and self-adjoint. The mapping f ↦ f(A) is a norm-decreasing algebra homomorphism from M(σ(A)) to L(X), i.e.
1. (αf + βg)(A) = αf(A) + βg(A),
2. (f · g)(A) = f(A) g(A),
3. ‖f(A)‖ ≤ ‖f‖_∞
for all α, β ∈ R and f, g ∈ M(σ(A)).

SLIDE 22

properties of the functional calculus

◮ For f₀(λ) := 1 and f₁(λ) := λ we have f₀(A) = I and f₁(A) = A.
◮ If p(λ) = ∑_{j=0}^m c_j λ^j is a polynomial, then we get the usual definition p(A) = ∑_{j=0}^m c_j A^j.
◮ Expressions like (µI + A)⁻¹ and exp(A) also agree with the usual definitions.

SLIDE 23

Outline

SLIDE 24

Picard criterion

Theorem (Picard criterion)
Let T ∈ L(X, Y) be a compact and injective operator with dense range, and let {(σ_n, a_n, u_n)} be a singular system of T. Then the equation Ta = u is solvable if and only if the Picard criterion

∑_{n=0}^∞ (1/σ_n²) |⟨u, u_n⟩|² < ∞

is satisfied. In this case the solution is given by

a = ∑_{n=0}^∞ (1/σ_n) ⟨u, u_n⟩ a_n.
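The practical consequence of the Picard criterion is that naive inversion amplifies any noise component violating it. A small experiment in the singular basis (an assumed toy setup, not from the slides):

```python
# Sketch (assumed setup): apply the exact inversion formula
# a = sum_n sigma_n^{-1} <u, u_n> a_n to clean and to slightly noisy data,
# working directly with coefficients in the singular basis.
import numpy as np

rng = np.random.default_rng(2)
n = 50
sigma = 2.0 ** -np.arange(n)                     # rapidly decaying singular values
a_true = rng.standard_normal(n) / (1 + np.arange(n)) ** 2
u = sigma * a_true                               # coefficients <u, u_n> of exact data
u_noisy = u + 1e-6 * rng.standard_normal(n)      # tiny additive noise

print(np.linalg.norm(u / sigma - a_true))        # ~0: Picard criterion holds
print(np.linalg.norm(u_noisy / sigma - a_true))  # huge: noise / sigma_n explodes
```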

SLIDE 25

degree of ill-posedness

The solution formula a = ∑_{n=0}^∞ (1/σ_n) ⟨u, u_n⟩ a_n nicely illustrates the ill-posedness of linear operator equations with a compact operator: since 1/σ_n → ∞, the Fourier modes corresponding to large n are amplified without bound. We say that the equation Ta = u is

◮ mildly ill-posed if the singular values decay to 0 at a polynomial rate, i.e. if there exist constants C, p > 0 such that σ_n ≥ C n^{−p} for all n ∈ N.
◮ Otherwise the problem is called severely ill-posed.
◮ If there exist constants C, p > 0 such that σ_n ≤ C exp(−n^p), we call the problem exponentially ill-posed.

SLIDE 26

spectral cut-off

One possibility to restore stability in the exact reconstruction formula a = ∑_{n=0}^∞ (1/σ_n) ⟨u, u_n⟩ a_n is to truncate the series, i.e. to compute

R_α u := ∑_{n: σ_n ≥ α} (1/σ_n) ⟨u, u_n⟩ a_n

for some regularization parameter α > 0. This is called spectral cut-off or truncated singular value decomposition.

questions:
◮ choice of α?
◮ convergence? In which sense?
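A matrix version of R_α is a few lines of NumPy. This is a minimal sketch under the same toy assumptions as before; spectral_cutoff is a hypothetical helper name:

```python
# Minimal sketch: truncated SVD reconstruction R_alpha u, keeping only the
# modes with singular values sigma_n >= alpha.
import numpy as np

def spectral_cutoff(K, u, alpha):
    """Compute R_alpha u = sum_{sigma_n >= alpha} sigma_n^{-1} <u, u_n> a_n."""
    U, sigma, Vt = np.linalg.svd(K, full_matrices=False)
    keep = sigma >= alpha                     # truncate the unstable modes
    coeff = (U[:, keep].T @ u) / sigma[keep]  # sigma_n^{-1} <u, u_n>
    return Vt[keep].T @ coeff                 # expand in the basis a_n

# usage sketch: larger alpha = more stability, less accuracy
# a_rec = spectral_cutoff(K, u_noisy, alpha=1e-3)
```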

SLIDE 27

disadvantages of spectral cut-off

◮ a singular value decomposition is known explicitly only for a small number of problems
◮ the numerical computation of a singular value decomposition is prohibitively expensive for many problems.

SLIDE 28

Tikhonov regularization

◮ Solving Ta = Y is equivalent to finding the minimum of the functional a ↦ ‖Ta − Y‖² in X. Of course, the solution to this minimization problem again does not depend continuously on the data!
◮ To restore stability we add a penalty term to the functional and minimize

J_α(a) := ½ ‖Ta − Y‖² + (α/2) ‖a − a₀‖².

The parameter α > 0 is called the regularization parameter, and a₀ is an initial guess of a†. If no initial guess is known, we take a₀ = 0.

SLIDE 29

Tikhonov regularization

Theorem
The Tikhonov functional J_α has a unique minimizer a_α in X for all α > 0, Y ∈ Y, and a₀ ∈ X. This minimizer is given by

a_α = (T*T + αI)⁻¹(T*Y + α a₀).

The operator T*T + αI is boundedly invertible, so a_α depends continuously on Y.
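For a discretized (matrix) operator T, the theorem translates directly into code. A minimal sketch, assuming a matrix T and a data vector Y:

```python
# Sketch of the closed-form Tikhonov minimizer from the theorem,
# a_alpha = (T*T + alpha I)^{-1} (T* Y + alpha a0), for a matrix T.
import numpy as np

def tikhonov(T, Y, alpha, a0=None):
    n = T.shape[1]
    if a0 is None:
        a0 = np.zeros(n)      # default initial guess a0 = 0, as on the slide
    # T*T + alpha I is symmetric positive definite, hence invertible for
    # every alpha > 0; np.linalg.solve is sufficient here.
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ Y + alpha * a0)
```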

SLIDE 30

iterated Tikhonov regularization

◮ Once we have computed the Tikhonov solution a_α, we may find a better approximation by applying Tikhonov regularization again, using a_α as initial guess a₀.
◮ This leads to iterated Tikhonov regularization:

a_{α,0} := a₀,   a_{α,n+1} := (T*T + αI)⁻¹(T*Y + α a_{α,n}),   n ≥ 0.

◮ Note that only one operator T*T + αI has to be inverted to compute a_{α,n} for any n ∈ N. If we use, e.g., the LU factorization to apply (T*T + αI)⁻¹, the computation of a_{α,n} for n ≥ 2 is not much more expensive than the computation of a_{α,1}.
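The remark about reusing the factorization can be sketched as follows. Since T*T + αI is symmetric positive definite, a Cholesky factorization (via SciPy, assumed available) serves the same purpose as the LU factorization mentioned above:

```python
# Sketch of iterated Tikhonov with a reused factorization: the matrix
# T*T + alpha I is factored once and the factorization is reused in every
# iteration, so extra iterations cost only triangular solves.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def iterated_tikhonov(T, Y, alpha, n_iter, a0=None):
    m = T.shape[1]
    a = np.zeros(m) if a0 is None else a0.copy()
    TtY = T.T @ Y
    factor = cho_factor(T.T @ T + alpha * np.eye(m))  # factor once ...
    for _ in range(n_iter):
        a = cho_solve(factor, TtY + alpha * a)        # ... reuse every iteration
    return a
```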

SLIDE 31

Landweber iteration

◮ idea: minimize the functional J₀(a) = ‖Ta − Y‖² by the steepest descent method
◮ The direction of steepest descent is −T*(Ta − Y).
◮ Choosing a fixed step-size parameter µ > 0 leads to the recursion formula

a₀ = 0,   a_{k+1} = a_k − µ T*(T a_k − Y),   k ≥ 0.

SLIDE 32

Landweber iteration

◮ An analysis shows that µ should be chosen such that µ‖T*T‖ ≤ 1.
◮ It follows by induction that the kth Landweber iterate is given by

a_k = ∑_{j=0}^{k−1} (I − µT*T)^j µT*Y.
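A minimal matrix sketch of the iteration, with µ chosen from the spectral norm so that µ‖T*T‖ = 1 (assumptions as in the earlier snippets):

```python
# Sketch of Landweber iteration with the step-size restriction
# mu * ||T*T|| <= 1; here mu = 1 / ||T||^2 via the largest singular value.
import numpy as np

def landweber(T, Y, n_iter):
    mu = 1.0 / np.linalg.norm(T, 2) ** 2    # spectral norm => mu ||T*T|| = 1
    a = np.zeros(T.shape[1])
    for _ in range(n_iter):
        a = a - mu * (T.T @ (T @ a - Y))    # steepest descent step
    return a
```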

SLIDE 33

accelerating Landweber iteration: Krylov subspaces

◮ The kth Landweber iterate belongs to the Krylov subspace defined by

K_k(T*T, T*Y) := span{(T*T)^j T*Y : j = 0, …, k − 1}

◮ Since the computation of any element of K_k(T*T, T*Y) requires only (at most) k applications of T*T, one may try to look for better approximations in the Krylov subspace K_k(T*T, T*Y).
◮ The conjugate gradient method applied to the normal equation T*Ta = T*Y is characterized by the optimality condition

‖T a_k − Y‖ = min_{a ∈ K_k(T*T, T*Y)} ‖Ta − Y‖.

SLIDE 34

conjugate gradient method

a₀ = 0; d₀ = Y; p₁ = s₀ = T*d₀
for k = 1, 2, …, unless s_{k−1} = 0:
    q_k = T p_k
    α_k = ‖s_{k−1}‖² / ‖q_k‖²
    a_k = a_{k−1} + α_k p_k
    d_k = d_{k−1} − α_k q_k
    s_k = T* d_k
    β_k = ‖s_k‖² / ‖s_{k−1}‖²
    p_{k+1} = s_k + β_k p_k

Note that a_k depends nonlinearly on Y!
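A runnable transcription of this recursion for a matrix T (a sketch; the stopping tolerance tol is my addition to handle rounding):

```python
# Transcription of the slide's CG recursion on the normal equation
# (CGNE/CGLS); k_max plays the role of the regularization parameter.
import numpy as np

def cgne(T, Y, k_max, tol=1e-12):
    a = np.zeros(T.shape[1])
    d = Y.copy()                  # residual d_k = Y - T a_k
    s = T.T @ d
    p = s.copy()
    s_norm2 = s @ s
    for _ in range(k_max):
        if s_norm2 <= tol:        # stop once s_{k-1} = 0 (up to rounding)
            break
        q = T @ p
        alpha = s_norm2 / (q @ q)
        a += alpha * p
        d -= alpha * q
        s = T.T @ d
        s_norm2_new = s @ s
        p = s + (s_norm2_new / s_norm2) * p
        s_norm2 = s_norm2_new
    return a
```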

SLIDE 35

regularization methods: notation

◮ We consider a family of continuous (not necessarily linear) operators R_α : Y → X, defined for α in some index set A, which approximate the unbounded operator T⁻¹.
◮ Let α : (0, ∞) × Y → A be a parameter choice rule. For given noisy data Y = Ta + δζ with noise level δ > 0 and normalized error ‖ζ‖ ≤ 1, the exact solution is approximated by a ≈ R_{α(δ,Y)} Y.

Examples:
◮ Tikhonov regularization: R_α = (αI + T*T)⁻¹ T*
◮ spectral cut-off: R_α u := ∑_{n: σ_n ≥ α} (1/σ_n) ⟨u, u_n⟩ a_n

SLIDE 36

regularization methods: deterministic definition

Definition
◮ The pair (R, α) is called a convergent regularization method for the problem Ta = u if the worst-case error tends to 0 with the noise level, i.e.

sup{‖R_{α(δ,Y)}(Y) − a‖ : Y = Ta + δζ, ‖ζ‖ ≤ 1} → 0 as δ → 0

for all a ∈ X.
◮ α is called an a-priori parameter choice rule if α(δ, Y) depends only on δ. Otherwise α is called an a-posteriori parameter choice rule.

SLIDE 37

regularization methods: stochastic definition

Definition
The pair (R, α) is called a consistent regularization method for estimating a given data Y = Ta + σξ if the expected error tends to 0 with the noise level, i.e.

E‖R_{α(σ,Ta+σξ)}(Ta + σξ) − a‖² → 0 as σ → 0

for all a ∈ X. Sometimes convergence in expectation is replaced by convergence in probability:

P(‖R_{α(σ,Ta+σξ)}(Ta + σξ) − a‖ > ε) → 0 as σ → 0

for all a ∈ X and all ε > 0.

SLIDE 38

deterministic error decomposition

Let α ∈ A = R and assume that the R_α are linear operators with R_α u → T⁻¹u as α → 0 for all u ∈ R(T). Then the total error can be decomposed by the triangle inequality

‖R_α Y − T⁻¹u‖ ≤ ‖R_α Y − R_α u‖ + ‖R_α u − T⁻¹u‖

into
◮ a propagated data noise error ‖R_α Y − R_α u‖ ≤ δ ‖R_α‖, which explodes as α → 0, and
◮ an approximation error ‖R_α u − T⁻¹u‖, which tends to 0 as α → 0.
We have a trade-off between accuracy (small α) and stability (large α).

SLIDE 39

bias-variance decomposition

Let Y = u + σξ with Eξ = 0. Then

E‖R_α Y − T⁻¹u‖² = E‖R_α Y − R_α u‖² + ‖R_α u − T⁻¹u‖² + 2E⟨σξ, R_α*(R_α u − T⁻¹u)⟩
                 = σ² E‖R_α ξ‖² + ‖R_α u − T⁻¹u‖²,

where the cross term vanishes because Eξ = 0: the mean squared error splits into a variance term and a squared bias (approximation error).
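The decomposition can be checked empirically. The following sketch (an assumed toy model in the singular basis, with a Tikhonov filter) compares the Monte Carlo mean squared error with the sum of the variance and squared-bias terms:

```python
# Sketch (assumed toy model in the singular basis): empirically check
# E||R_alpha Y - a||^2 ≈ sigma^2 E||R_alpha xi||^2 + ||R_alpha u - a||^2.
import numpy as np

rng = np.random.default_rng(3)
n, sigma_noise, alpha = 30, 1e-3, 1e-2
sv = 0.7 ** np.arange(n)                  # singular values of T
a = 1.0 / (1 + np.arange(n)) ** 2         # true solution (coefficients)
u = sv * a                                # exact data coefficients
g = 1.0 / (sv ** 2 + alpha)               # Tikhonov filter g_alpha(sv^2)

bias2 = np.sum(((g * sv ** 2 - 1) * a) ** 2)     # ||R_alpha u - a||^2
var = sigma_noise ** 2 * np.sum((g * sv) ** 2)   # sigma^2 E||R_alpha xi||^2

mse = np.mean([
    np.sum((g * sv * (u + sigma_noise * rng.standard_normal(n)) - a) ** 2)
    for _ in range(2000)
])
print(mse, bias2 + var)   # the two numbers should nearly agree
```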

SLIDE 40

balancing the error components

SLIDE 41

spectral description of regularization methods

◮ All regularization methods discussed so far¹ are of the form

R_α Y := g_α(T*T) T* Y

with some functions g_α ∈ C([0, ‖T*T‖]) depending on a regularization parameter α > 0.
◮ Then the reconstruction error for exact data is given by

a − a_α = (I − g_α(T*T) T*T) a = r_α(T*T) a

with r_α(λ) := 1 − λ g_α(λ), λ ∈ [0, ‖T*T‖].

¹For the CGNE method r_α also depends on Y!

SLIDE 42

spectral description of regularization methods

method                      g_α(λ)                              r_α(λ)
Tikhonov                    1/(λ+α)                             α/(λ+α)
iterated Tikhonov           ((λ+α)^n − α^n)/(λ(λ+α)^n)          (α/(λ+α))^n
spectral cut-off            1/λ for λ ≥ α; 0 for λ < α          0 for λ ≥ α; 1 for λ < α
Landweber (α = 1/(k+1))     ∑_{j=0}^{k−1} (1−λ)^j               (1−λ)^k
Showalter                   λ⁻¹(1 − exp(−λ/α))                  exp(−λ/α)
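The table translates directly into filter functions. A sketch (assumed matrix setting; the function names are mine) implementing several g_α and applying them in the singular basis, where T*T acts as multiplication by λ = σ_n²:

```python
# Sketch: the table's filter functions g_alpha as callables, plugged into
# a_alpha = g_alpha(T*T) T* Y, computed in the singular basis (lam = sigma_n^2).
import numpy as np

def g_tikhonov(lam, alpha):
    return 1.0 / (lam + alpha)

def g_spectral_cutoff(lam, alpha):
    # 1/lam for lam >= alpha, else 0; np.maximum avoids division by zero
    return np.where(lam >= alpha, 1.0 / np.maximum(lam, alpha), 0.0)

def g_landweber(lam, alpha):
    k = int(round(1.0 / alpha)) - 1          # alpha = 1 / (k + 1)
    # partial geometric sum; assumes lam <= 1 (i.e. ||T|| <= 1, step size 1)
    return np.array([sum((1.0 - l) ** j for j in range(k)) for l in np.atleast_1d(lam)])

def g_showalter(lam, alpha):
    return (1.0 - np.exp(-lam / alpha)) / lam   # assumes lam > 0

def reconstruct(K, Y, g, alpha):
    U, sv, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt.T @ (g(sv ** 2, alpha) * sv * (U.T @ Y))
```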

SLIDE 43

Outline

SLIDE 44

error-free parameter choice rules

Theorem (Bakushinskii’s theorem)
Let T : X → Y be one-to-one with dense range. Assume there exists a convergent regularization method (R_α, α) for Ta = u with a parameter choice rule α(δ, Y) which depends only on Y, but not on δ. Then T⁻¹ is continuous.

Remark: The analogous result for white noise is false, since for a white noise process Y = u + σξ with u ∈ Y and Cov_ξ = I the variance σ² can be estimated reliably.

SLIDE 45

negative results

We consider regularization methods (R_α, α) which satisfy the following assumptions:
◮ R_α ∈ L(Y, X) for all α ∈ A ⊂ (0, ∞)
◮ lim_{δ→0} sup{α(δ, u + δξ) : ‖ξ‖ ≤ 1} = 0 for all u ∈ R(T)
◮ R_α converges pointwise to T⁻¹: lim_{α→0} R_α u = T⁻¹u for all u ∈ R(T).

Theorem
Assume that T⁻¹ is unbounded and the assumptions above hold true. Then
◮ the operators R_α cannot be uniformly bounded with respect to α, and
◮ the operators R_α T cannot be norm convergent to I as α → 0.

SLIDE 46

arbitrarily slow convergence

Theorem
Assume that there exist a regularization method (R_α, α) for Ta = u and a continuous function f : [0, ∞) → [0, ∞) with f(0) = 0 such that

sup{‖R_{α(δ,Y)} Y − T⁻¹u‖ : Y ∈ Y, ‖Y − u‖ ≤ δ} ≤ f(δ)

for all u ∈ R(T) with ‖u‖ ≤ 1 and all δ > 0. Then T⁻¹ is continuous.

SLIDE 47

general source conditions

Assumption (SC)
a satisfies a source condition a = Λ(T*T) w, where Λ : [0, ∞) → [0, ∞) is continuous, monotonically increasing, and Λ(0) = 0.

Most important examples:
◮ Hölder-type source conditions, natural for mildly ill-posed problems, i.e. finitely smoothing operators:
Λ(t) = t^µ,   µ > 0
◮ logarithmic source conditions, natural for exponentially ill-posed problems, e.g. integral operators with infinitely smooth kernels or inverse problems in PDEs with partial measurements of the solution:
Λ(t) = (− ln t)^{−p},   p > 0

SLIDE 48

How to use source conditions

Recall that the approximation error is a − a_α = r_α(T*T) a with r_α(t) → 0 for all t and r_α(0) = 1.
◮ By the spectral theorem W⁻¹(T*T)W = M_λ we have ‖r_α(T*T) a‖ = ‖(r_α ∘ λ) · (W⁻¹a)‖_{L²}, which may tend to 0 arbitrarily slowly depending on a.
◮ If a = Λ(T*T) w, we get

‖r_α(T*T) a‖ = ‖r_α(T*T) Λ(T*T) w‖ ≤ sup_{t ∈ σ(T*T)} |r_α(t) Λ(t)| ‖w‖.

◮ Since Λ(0) = 0 we can expect norm convergence, ‖r_α Λ‖_{L∞(σ(T*T))} → 0 as α → 0, and the problem is reduced to the estimation of the univariate function r_α · Λ.

SLIDE 49

assumptions on regularization methods

Assumption (R)
◮ sup_{t ∈ σ(T*T)} |g_α(t)| ≤ C_V/α for all α > 0
◮ There exists a number ν₀ > 0, called the qualification of the method, such that

sup_{t ∈ σ(T*T)} |t^ν r_α(t)| ≤ γ_ν α^ν   for all α and 0 ≤ ν ≤ ν₀.

◮ The smoothness is covered by the regularization:

sup_{t ∈ σ(T*T)} |Λ(t) r_α(t)| ≤ γ_Λ Λ(α)   for all α.   (2)

SLIDE 50

discussion of Assumption (R)

Lemma
If t ↦ t^{ν₀}/Λ(t) is increasing in a neighborhood of 0, then (2) is satisfied.

See Mathé & Pereverzev (2003) and Hohage (2000) for the special case Λ(t) = (− ln t)^{−p}.

Assumption (R) holds true for all commonly used linear regularization methods, in particular
◮ Landweber iteration with any ν₀ ∈ R (but with very large constants for large ν)
◮ ν-methods with ν₀ = ν
◮ Tikhonov regularization with ν₀ = 1
◮ spectral cut-off with any ν₀ ∈ R

SLIDE 51

deterministic error estimate

Theorem
If Assumptions (SC) and (R) hold true and Y = Ta + δζ with ζ ∈ Y, ‖ζ‖ = 1, then

‖a − a_α‖ ≤ γ_Λ Λ(α) ‖w‖ + √(C_V(1 + γ₀)/α) δ.

SLIDE 52

optimality

Under some mild additional assumptions on Λ the following optimality result holds true:

Theorem
With an appropriate a-priori parameter choice rule using smoothness information on the solution, all methods under consideration are of optimal order up to their qualification, in the sense that

inf_{α>0} ‖a − a_α‖ ≤ C inf_{Φ: Y→X} sup_{ã ∈ F_Λ, ‖ζ‖ ≤ 1} ‖Φ(T ã + δζ) − ã‖

for all δ > 0, with F_Λ := {Λ(T*T) w̃ : ‖w̃‖ ≤ 1}.

SLIDE 53

Outline

SLIDE 54

Morozov’s discrepancy principle

For a fixed parameter τ ≥ 1 choose the largest α > 0 for which ‖T a_α − Y‖ ≤ τδ. Here a_α := R_α Y denotes the reconstruction for the regularization parameter α:

α(δ, Y) := sup{α > 0 : ‖T a_α − Y‖ ≤ τδ}

Do not try to fit the noise! For iterative methods such as Landweber iteration, the discrepancy principle consists in stopping the iteration at the first index k for which ‖T a_k^δ − Y‖ ≤ τδ.
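For Landweber iteration the rule is literally a stopping criterion, as in the following sketch (assumptions as in the earlier Landweber snippet; τ = 1.2 is an arbitrary illustrative choice):

```python
# Sketch of the discrepancy principle as a stopping rule for Landweber
# iteration: stop at the first k with ||T a_k - Y|| <= tau * delta.
import numpy as np

def landweber_discrepancy(T, Y, delta, tau=1.2, k_max=10000):
    mu = 1.0 / np.linalg.norm(T, 2) ** 2
    a = np.zeros(T.shape[1])
    for k in range(k_max):
        residual = T @ a - Y
        if np.linalg.norm(residual) <= tau * delta:   # do not fit the noise!
            return a, k
        a = a - mu * (T.T @ residual)
    return a, k_max
```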

SLIDE 55

convergence rate result

If τ > sup_{α,t} |r_α(t)|, the exact solution satisfies a = (T*T)^ν w, and the regularization method has qualification ≥ ν + ½, then the regularization method with the discrepancy principle converges of optimal order:

‖â_α − a‖ ≤ C δ^{2ν/(2ν+1)}.

SLIDE 56

discussion of discrepancy principle

◮ easy to implement, in particular for iterative methods; the most often used parameter choice rule
◮ reduces the qualification by ½, e.g. one only gets optimal rates of convergence for Hölder index ν ≤ ½ instead of ν ≤ 1.
◮ Not well defined for white noise since ‖Y‖ = ∞. For discrete stochastic noise, the discrepancy principle typically chooses α too large, but it works reasonably well for sample sizes ≈ 100.

For a further analysis of linear statistical inverse problems, in particular estimates on the variance term E‖R_α ξ‖², see:

N. Bissantz, T. Hohage, A. Munk, F. Ruymgaart. Convergence rates of general regularization methods for statistical inverse problems and applications. SIAM J. Numer. Anal., 45:2610–2636, 2007.