SLIDE 35 OVERVIEW:
◮ inverse problem setting under random i.i.d. design scheme
◮ “learning setting”: unknown sampling distribution, related discretization error
◮ for source condition: Hölder-type source condition (smoothness r)
◮ for ill-posedness: polynomial decay of eigenvalues of order s
◮ same regularization parameter works for both reconstruction error and prediction error
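For reference, the two assumptions named above can be written out explicitly; the exact notation below is an assumption (standard in the statistical inverse learning literature), with f† the true solution, T the forward operator, and μ_j the eigenvalues of T*T:

```latex
% Hölder-type source condition of smoothness r and radius R (assumed notation):
f^\dagger = (T^* T)^{r} w, \qquad \|w\| \le R, \qquad r > 0,
% polynomial ill-posedness of order s: eigenvalue decay of T^* T
\mu_j \asymp j^{-1/s}, \qquad j \ge 1.
```

With this parametrization, s enters the rate exponent below additively, matching the denominator 2r + 1 + s.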
◮ Minimax rates (incl. correct dependence on R, σ) are attained by
general regularization methods (also Conjugate Gradient)
◮ rates of the form (for θ ∈ [0, 1/2]):
  ‖f̂_λ − f†‖_θ ≤ O( n^(−(r+θ)/(2r+1+s)) )
◮ matches “classical” rates in the white noise model (= sequence model) with σ⁻² ↔ n
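A minimal numerical sketch (a hypothetical helper, not from the slides) of how the rate exponent (r+θ)/(2r+1+s) interpolates between the reconstruction error (θ = 0) and the prediction error (θ = 1/2), under the parametrization assumed above:

```python
# Sketch under assumed parametrization: rate is O(n^(-(r+theta)/(2r+1+s))).
# r: Hölder source-condition smoothness, s: polynomial eigenvalue-decay order,
# theta in [0, 1/2]: interpolation norm (0 = reconstruction, 1/2 = prediction).

def rate_exponent(r: float, theta: float, s: float) -> float:
    """Exponent of n in the convergence rate n^(-rate_exponent)."""
    assert 0.0 <= theta <= 0.5, "theta must lie in [0, 1/2]"
    return (r + theta) / (2 * r + 1 + s)

if __name__ == "__main__":
    r, s = 1.0, 0.5  # illustrative values, not from the slides
    print("reconstruction (theta = 0):  ", rate_exponent(r, 0.0, s))
    print("prediction     (theta = 1/2):", rate_exponent(r, 0.5, s))
```

The prediction-norm exponent is always the larger of the two, since the exponent is increasing in θ for fixed r and s.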
◮ matching upper/lower bounds beyond polynomial spectrum decay
Rates for statistical inverse learning van Dantzig seminar 24/06/2016 35 / 38