Magic Inverse Problems Course - The Radon Transform
Bill Lionheart, March 25, 2013
The line $L_{\Theta,s}$ with normal vector $\Theta = (\cos\theta, \sin\theta)$ and distance $s$ to the origin is the set
$$L_{\Theta,s} = \{x \in \mathbb{R}^2 \mid x \cdot \Theta = s\}.$$
$\Theta$ takes its values on the circle $S^1 = \{(\cos\theta, \sin\theta) \mid 0 \le \theta < 2\pi\}$, so the space of lines in $\mathbb{R}^2$ is $S^1 \times [0,\infty)$, although that counts every line with $s = 0$ twice (and $S^1 \times \mathbb{R}$ counts every line twice, since $L_{\Theta,s} = L_{-\Theta,-s}$). Topologically this is a cylinder. We define the Radon transform of a function $f : \mathbb{R}^2 \to \mathbb{R}$ to be a function $R[f](\Theta, s)$ on the set of lines:
$$R[f](\Theta, s) = \int_{L_{\Theta,s}} f \, dl = \int_{x \cdot \Theta = s} f \, dl.$$
We can parameterize a line using the coordinate $t$ as distance along the line. Define $\Theta^\perp = (-\sin\theta, \cos\theta)$, the unit vector perpendicular to $\Theta$. Then any point $x$ on $L_{\Theta,s}$ has the form $x = s\Theta + t\Theta^\perp$, and in this form
$$\int_{x \cdot \Theta = s} f \, dl = \int_{t=-\infty}^{\infty} f(s\Theta + t\Theta^\perp) \, dt.$$
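As a quick numerical illustration (a sketch of mine, not part of the notes), the parameterized line integral above can be approximated by a Riemann sum in $t$; the function name `radon_line` and the discretization choices are my own:

```python
import numpy as np

def radon_line(f, theta, s, t_max=2.0, n=4000):
    """Approximate R[f](Theta, s) = ∫ f(s·Theta + t·Theta_perp) dt by a Riemann sum."""
    Th = np.array([np.cos(theta), np.sin(theta)])        # normal vector Theta
    Th_perp = np.array([-np.sin(theta), np.cos(theta)])  # direction along the line
    t = np.linspace(-t_max, t_max, n)
    pts = s * Th + t[:, None] * Th_perp                  # points s·Theta + t·Theta_perp
    return np.sum(f(pts[:, 0], pts[:, 1])) * (t[1] - t[0])

# Check against the indicator of the unit disc, whose line integral at signed
# distance s is the chord length 2*sqrt(1 - s^2).
disc = lambda x, y: (x**2 + y**2 <= 1).astype(float)
approx = radon_line(disc, theta=0.3, s=0.5)   # exact value: 2*sqrt(0.75) ≈ 1.732
```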
Fourier Slice Theorem
We can use the Fourier transform to better understand the Radon transform. We need the Fourier transform of a function of 2 variables, or in general $n$ variables:
$$\hat f(w) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(x) e^{-i x \cdot w} \, dx.$$
Some authors omit the constant, but with this definition the inverse Fourier transform is
$$\check f(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(w) e^{i x \cdot w} \, dw,$$
and of course $\check{\hat f} = \hat{\check f} = f$. Also the scaling, translation, differentiation and Parseval properties hold as in the case $n = 1$. Let us define, for a fixed $\Theta$, $R_\Theta[f](s) = R[f](\Theta, s)$ for any $s \in \mathbb{R}$. This is the data from all the lines in one particular direction.
Theorem (Fourier Slice Theorem)
$$\widehat{R_\Theta[f]}(\sigma) = \sqrt{2\pi} \, \hat f(\sigma\Theta).$$
Here $\sigma \in \mathbb{R}$ is the Fourier (frequency) variable dual to $s$.
Proof.
$$\widehat{R_\Theta[f]}(\sigma) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \int_{\Theta \cdot x = s} f(x) e^{-is\sigma} \, dl \, ds = \frac{1}{\sqrt{2\pi}} \int_{s=-\infty}^{\infty} \int_{t=-\infty}^{\infty} f(s\Theta + t\Theta^\perp) e^{-is\sigma} \, dt \, ds.$$
Now
$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} s \\ t \end{pmatrix},$$
which is simply a rotation of the coordinate system $(x, y)$ to $(s, t)$, so the Jacobian is
$$\frac{\partial(x, y)}{\partial(s, t)} = 1.$$
Hence
$$\widehat{R_\Theta[f]}(\sigma) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y) e^{-i\sigma \Theta \cdot x} \, dx \, dy = \sqrt{2\pi} \, \hat f(\sigma\Theta).$$
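The theorem can be checked numerically (a sketch of mine, not from the notes) for the Gaussian $f(x) = e^{-|x|^2/2}$, for which everything is explicit in the convention above: $R_\Theta[f](s) = \sqrt{2\pi}\, e^{-s^2/2}$ for every $\Theta$, and $\hat f(w) = e^{-|w|^2/2}$.

```python
import numpy as np

# Numerical check of the Fourier Slice Theorem for f(x) = exp(-|x|^2/2):
#   R_Theta[f](s) = sqrt(2π) exp(-s^2/2),   f_hat(w) = exp(-|w|^2/2),
# with the symmetric (2π)^{-n/2} Fourier convention used in the notes.
s = np.linspace(-10, 10, 8001)
ds = s[1] - s[0]
Rf = np.sqrt(2*np.pi) * np.exp(-s**2 / 2)

sigma = 1.7   # an arbitrary test frequency
# 1D Fourier transform of R_Theta[f] at sigma, by quadrature
Rf_hat = np.sum(Rf * np.exp(-1j * s * sigma)) * ds / np.sqrt(2*np.pi)
rhs = np.sqrt(2*np.pi) * np.exp(-sigma**2 / 2)   # sqrt(2π) f_hat(σΘ)
# |Rf_hat - rhs| is tiny, confirming the slice theorem for this f
```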
So Fourier transforming the Radon transform data in $s$ gives us the Fourier transform of the image along each direction. At least where the Fourier transforms exist in the classical sense, this shows that the inverse Radon transform has a unique solution.
Corollary
For $f \in \mathcal{S}(\mathbb{R}^2)$, $R[f]$ uniquely determines $f$ (that is, $R$ is injective).
Proof.
From the Fourier Slice Theorem, $R[f]$ determines $\hat f$ on $\mathbb{R}^2$, hence $f$ by the Fourier inversion formula. Note that although $\hat{\cdot} : L^2(\mathbb{R}^2) \to L^2(\mathbb{R}^2)$ is invertible, $R$ isn't necessarily defined on $L^2(\mathbb{R}^2)$, as a line is a set of measure zero: you need more smoothness for line integrals to be defined. It is possible to prove, for example, that $R$ is defined and continuous, with a continuous inverse, between the Sobolev spaces $H^s(\mathbb{R}^2)$ and $H^{s+1/2}(\text{lines})$ (see [1], p. 42). (Not quite honest: I should say defined on a bounded subset of $\mathbb{R}^2$.)
Backprojection
The Fourier Slice Theorem is not as useful as you might first think. The problem is that if we discretize $\Theta$ and $\sigma$ in regular steps, this produces a non-uniform image resolution. To derive other reconstruction algorithms we must consider a kind of adjoint. Recall that for $K : H_1 \to H_2$, $\langle K[f], g \rangle = \langle f, K^*[g] \rangle$ defines the adjoint. For the special case of $\mathbb{R}^n$ and $\mathbb{R}^m$, $K$ could be a matrix and $K^*$ its transpose. The 'formal adjoint' of an operator $K$ defined on functions is the adjoint with respect to the $L^2$ inner product, even if $K$ isn't actually defined on $L^2$. That is, for all $f$ and $g$ for which the integrals make sense,
$$\int K[f] \, g = \int f \, K^*[g],$$
where $K^*$ is the formal adjoint. For example, let $K[f] = f'$ on functions on $\mathbb{R}$ vanishing outside some interval; then
$$\int_{-\infty}^{\infty} K[f] \, g = \int_{-\infty}^{\infty} f' g = [fg]_{-\infty}^{\infty} - \int_{-\infty}^{\infty} f g' = -\int_{-\infty}^{\infty} f K[g].$$
So $K^* = -K$, but it isn't a true $L^2$ adjoint, as $K$ is not defined on $L^2(\mathbb{R})$. The formal adjoint of $R$ is an operator $R^*$ which takes sinogram data, functions on $S^1 \times \mathbb{R}$, and turns it into 'images', functions on $\mathbb{R}^2$.
$$\int_{\mathbb{R}^2} R^*[g](x) f(x) \, dx = \int_{\mathbb{R}} \int_{S^1} R[f](\Theta, s) \, g(\Theta, s) \, d\theta \, ds.$$
So clearly $R^*[g](x)$ is the integral of $g$ over all lines through $x$. It's called 'backprojection' as it throws the data back into the image space. It doesn't reconstruct, however.
In more specific terms, $g(\Theta, s)$ backprojected is
$$R^*[g](x) = \int_{\theta=0}^{2\pi} g(\Theta, \Theta \cdot x) \, d\theta,$$
where $\Theta = (\cos\theta, \sin\theta)$ as usual, and to make the integral run over the $(\Theta, s)$ pairs with $\Theta \cdot x = s$ we have substituted for $s$. Visually, the effect of $R^*R$ on a small blob is to smear the value of a line integral along a particular line through the blob all over that line. This turns a blob into a star, at least when infinitely many angles are used. If one attempted to use 'backprojection' as a reconstruction algorithm one would get a 'starry' version of the image. This would be the first iteration of Landweber's method for solving $R[f] = g$, starting with $f_0 = 0$.
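A minimal discrete backprojection can be sketched as follows (my own illustration; the angular sampling is an arbitrary choice):

```python
import numpy as np

# Discrete backprojection: R*[g](x) = ∫_0^{2π} g(Θ, Θ·x) dθ,
# approximated by a sum over equally spaced angles.
def backproject(g, x, n_theta=720):
    theta = np.linspace(0, 2*np.pi, n_theta, endpoint=False)
    s = x[0] * np.cos(theta) + x[1] * np.sin(theta)   # Θ·x for each angle
    return np.sum(g(theta, s)) * (2*np.pi / n_theta)

# Sanity check with g(Θ, s) = s^2, where the angular integral is known:
# R*[g](x) = ∫_0^{2π} (Θ·x)^2 dθ = π |x|^2.
g = lambda theta, s: s**2
val = backproject(g, np.array([0.6, 0.8]))   # |x| = 1, so the exact value is π
```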
Landweber-like methods have been used in X-ray CT, where they are called SIRT (simultaneous iterative reconstruction technique), as all the data is 'backprojected' at once. By contrast, ART (algebraic reconstruction technique) 'backprojects' one block, e.g. $R_\Theta$, at a time. This is Kaczmarz's method.
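The claim that backprojection is the first Landweber iterate is easy to verify in matrix form (a sketch; the random matrix `A` is just my stand-in for a discretized Radon transform):

```python
import numpy as np

# Landweber: f_{k+1} = f_k + ω Aᵀ(g - A f_k). With f_0 = 0 the first
# iterate is exactly ω Aᵀ g, i.e. (scaled) backprojection of the data.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))       # stand-in for a discretized R
g = rng.standard_normal(40)             # stand-in data
omega = 1.0 / np.linalg.norm(A, 2)**2   # step size small enough for convergence
f = np.zeros(25)
f = f + omega * A.T @ (g - A @ f)       # one Landweber step from f_0 = 0
# f coincides with omega * A.T @ g, i.e. backprojected data
```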
Filtered backprojection
This is done in more detail in [1], p. 18-23. We simplify the argument by taking $n = 2$. We define the 'ramp filter' operator on a function $f : \mathbb{R} \to \mathbb{R}$ by
$$\widehat{H[f]}(w) = |w| \hat f(w)$$
(this is not standard notation; Natterer calls it $I^{-1}$). Note that it is a bit like a differential operator: indeed $\widehat{H \circ H[f]}(w) = |w|^2 \hat f(w)$, so $H \circ H = -\partial^2/\partial\lambda^2$, the second derivative, and we could legitimately say $H = \sqrt{-\partial^2/\partial\lambda^2}$.
Now, by definition of the inverse Fourier transform,
$$H[f](x) = (2\pi)^{-1/2} \int_{\mathbb{R}} e^{ixw} |w| \hat f(w) \, dw$$
(notice here $f$ is a function of one variable). Using polar coordinates $w = \sigma\Theta$ in the $w$ plane and the Fourier inversion formula,
$$f(x) = \frac{1}{2\pi} \int_{\mathbb{R}^2} e^{ix \cdot w} \hat f(w) \, dw = \frac{1}{2\pi} \int_{\theta=0}^{2\pi} \int_{\sigma=0}^{\infty} e^{i\sigma\Theta \cdot x} \hat f(\sigma\Theta) \, \sigma \, d\sigma \, d\theta.$$
The Fourier Slice Theorem says $\sqrt{2\pi} \, \hat f(\sigma\Theta) = \widehat{R_\Theta[f]}(\sigma)$, so
$$f(x) = \frac{1}{(2\pi)^{3/2}} \int_{\theta=0}^{2\pi} \int_{\sigma=0}^{\infty} e^{i\sigma\Theta \cdot x} \widehat{R_\Theta[f]}(\sigma) \, \sigma \, d\sigma \, d\theta = \frac{1}{(2\pi)^{3/2}} \int_{\theta=0}^{2\pi} \int_{\sigma=0}^{\infty} e^{i\sigma\Theta \cdot x} \sigma \, \hat g(\Theta, \sigma) \, d\sigma \, d\theta,$$
where $g = R[f]$ and $\hat g$ is the Fourier transform with respect to the second variable $s$. Remembering the symmetry of $g$ (namely $g(\Theta, s) = g(-\Theta, -s)$),
$$f(x) = \frac{1}{2} \frac{1}{(2\pi)^{3/2}} \int_{\theta=0}^{2\pi} \int_{-\infty}^{\infty} e^{i\sigma\Theta \cdot x} |\sigma| \, \hat g(\Theta, \sigma) \, d\sigma \, d\theta = \frac{1}{2} \frac{1}{2\pi} \int_{\theta=0}^{2\pi} H[g](\Theta, \Theta \cdot x) \, d\theta = \frac{1}{4\pi} R^* H[g].$$
So in principle we can reconstruct from Radon transform data by 'ramp filtering' the data then 'backprojecting'.
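A discrete version of this recipe can be sketched as follows (my own illustration, not code from the notes; the FFT-based ramp filter, grid sizes and nearest-neighbour interpolation in $s$ are arbitrary choices):

```python
import numpy as np

def ramp_filter(sino, ds):
    """Multiply each row's Fourier transform (in s) by |sigma|."""
    n = sino.shape[1]
    sigma = 2*np.pi * np.fft.fftfreq(n, d=ds)   # frequency variable for s
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(sigma), axis=1))

def fbp(sino, thetas, s_grid, xs, ys):
    """f(x) ≈ (1/4π) Σ_θ H[g](θ, Θ·x) Δθ, nearest-neighbour lookup in s."""
    ds = s_grid[1] - s_grid[0]
    filtered = ramp_filter(sino, ds)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    img = np.zeros_like(X)
    for row, th in zip(filtered, thetas):
        s = X * np.cos(th) + Y * np.sin(th)   # Θ·x at each reconstruction point
        idx = np.clip(np.round((s - s_grid[0]) / ds).astype(int), 0, len(s_grid) - 1)
        img += row[idx]
    return img * (thetas[1] - thetas[0]) / (4*np.pi)

# Sinogram of the unit disc: R[f](Θ, s) = 2*sqrt(1-s^2) for every angle.
thetas = np.linspace(0, 2*np.pi, 720, endpoint=False)
s_grid = np.linspace(-4, 4, 1601)
sino = np.tile(2*np.sqrt(np.clip(1 - s_grid**2, 0, None)), (len(thetas), 1))
img = fbp(sino, thetas, s_grid, np.array([0.0, 1.5]), np.array([0.0]))
# img[0, 0] is close to 1 (inside the disc), img[1, 0] close to 0 (outside)
```

Nearest-neighbour interpolation is crude; a real implementation would interpolate linearly in $s$ and apodize the ramp filter to control noise.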
More general reconstruction
In the more general case (still $n = 2$) Natterer defines $I^\alpha$ by
$$\widehat{I^\alpha f}(w) = |w|^{-\alpha} \hat f(w).$$
So $H = I^{-1}$, which is defined on data (on the cylinder) using the Fourier transform with respect to the second variable, and on 'images' $\mathbb{R}^2 \to \mathbb{R}$ using the 2D Fourier transform. He then shows
$$f = \tfrac{1}{2} (2\pi)^{-1} I^{-\alpha} R^* I^{\alpha-1} g, \quad \text{where } g = R[f],$$
which is a reconstruction using a filter on the data and another on the backprojected image. For $\alpha > 0$, the image is smoothed (high frequencies suppressed) and high frequencies in the data are sharpened. This type of technique can result in a practical reconstruction algorithm. (Note that on $\mathcal{S}$, $I^\alpha I^{-\alpha}$ is the identity.) Taking $\alpha = 1$,
$$f = \tfrac{1}{2} (2\pi)^{-1} I^{-1} R^* R[f],$$
so $R^* R[f] = 4\pi I^1[f]$, from which we can see that the Tikhonov formula
$$f_\mu = (R^* R + \mu^2)^{-1} R^* g$$
can be implemented as a filter applied to backprojected data, or conversely
$$f_\mu = R^* (R R^* + \mu^2)^{-1} g,$$
backprojecting filtered data.
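That the two Tikhonov forms agree is the finite-dimensional identity $(A^\mathsf{T}A + \mu^2 I)^{-1}A^\mathsf{T} = A^\mathsf{T}(AA^\mathsf{T} + \mu^2 I)^{-1}$, which is easy to check numerically (a sketch; the random `A` is my stand-in for a discretized $R$):

```python
import numpy as np

# Check (AᵀA + μ²I)⁻¹Aᵀg = Aᵀ(AAᵀ + μ²I)⁻¹g for a random stand-in matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 20))
g = rng.standard_normal(30)
mu = 0.1
f_img_filter = np.linalg.solve(A.T @ A + mu**2 * np.eye(20), A.T @ g)  # filter the backprojection
f_data_filter = A.T @ np.linalg.solve(A @ A.T + mu**2 * np.eye(30), g) # backproject filtered data
# the two vectors agree to machine precision
```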
SVD of Radon Transform
See [2], p. 240-245, and [1], p. 95-101. In some ways Natterer's treatment is clearer, but he does it in $n$ dimensions. We will consider our 'image' $f$ to have support contained in the unit disc $D$ centered on the origin, and $f \in L^2(D)$. The codomain of the Radon transform, functions on the cylinder, will need a certain weighted norm:
$$\|g\|_Y^2 = \int_0^{2\pi} \int_{-1}^{1} \frac{|g(s, \Theta)|^2}{\sqrt{1 - s^2}} \, ds \, d\theta.$$
We will call this weighted $L^2$ space $Y$, and let $X = L^2(D)$, so $R : X \to Y$. (Natterer shows that $R$ and $R^*$ are both bounded on these spaces.) Let's check this. Define $w(s) = \sqrt{1 - s^2}$. For $|s| < 1$,
$$R[f](s, \Theta) = \int_{-w(s)}^{w(s)} f(s\Theta + t\Theta^\perp) \, dt,$$
and by the Cauchy-Schwarz inequality
$$|R[f](s, \Theta)|^2 \le 2 w(s) \int_{-w(s)}^{w(s)} |f(s\Theta + t\Theta^\perp)|^2 \, dt.$$
So
$$\int_{-1}^{1} w(s)^{-1} |R[f](s, \Theta)|^2 \, ds \le 2 \int_{-1}^{1} \int_{-w(s)}^{w(s)} |f(s\Theta + t\Theta^\perp)|^2 \, dt \, ds = 2 \int_D |f(x)|^2 \, dx.$$
Integrating over $\theta$ as well, for $g = R[f]$,
$$\|g\|_Y^2 = \int_0^{2\pi} \int_{-1}^{1} \frac{|g(s, \Theta)|^2}{w(s)} \, ds \, d\theta \le 4\pi \|f\|_X^2.$$
So we have explicitly $\|R[f]\|_Y \le \sqrt{4\pi} \|f\|_X$.
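This bound can be checked numerically for the unit-disc indicator (my illustration): there $R[f](s, \Theta) = 2w(s)$ and $\|f\|_X = \sqrt{\pi}$, and in fact the bound is attained with equality for this particular $f$.

```python
import numpy as np

# Check ||Rf||_Y <= sqrt(4π) ||f||_X for f = indicator of the unit disc,
# where R[f](s,Θ) = 2 w(s) with w(s) = sqrt(1 - s^2).
s = np.linspace(-1 + 1e-9, 1 - 1e-9, 200001)
ds = s[1] - s[0]
w = np.sqrt(1 - s**2)
Rf = 2 * w
norm_Y = np.sqrt(2*np.pi * np.sum(Rf**2 / w) * ds)   # ∫_0^{2π} ∫ |Rf|²/w ds dθ
norm_X = np.sqrt(np.pi)                              # sqrt of the disc's area
# norm_Y ≈ sqrt(4π) * norm_X = 2π: equality, not just the inequality
```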
Our strategy will be to find eigenfunctions and eigenvalues of $RR^*$, the adjoint here taken using the $Y$ inner product, so it's not exactly backprojection:
$$R^*[g](x) = \int_0^{2\pi} \frac{g(\Theta \cdot x, \Theta)}{w(\Theta \cdot x)} \, d\theta.$$
And it's easy to check that $\langle R[f], g \rangle_Y = \langle f, R^*[g] \rangle_X$ for any $f \in X$, $g \in Y$. We will need some 'special functions'. Special functions arise mainly as the eigenfunctions of self-adjoint ordinary differential equations. They are orthogonal bases for weighted $L^2$ spaces on intervals. They are usually associated with 19th century European mathematicians, and their properties are tabulated in great detail in old textbooks and the help system of 'Mathematica'. Natterer uses Gegenbauer polynomials $C_\ell^\lambda$, which are orthogonal on $[-1, 1]$ with weight $(1 - x^2)^{\lambda - 1/2}$. So
$$\int_{-1}^{1} (1 - x^2)^{\lambda - 1/2} C_\ell^\lambda(x) C_k^\lambda(x) \, dx = \begin{cases} 0 & \ell \ne k \\ \text{horrible nonzero formula} & \ell = k. \end{cases}$$
For $\lambda = 0$ these are the 'Chebyshev polynomials of the first kind' $T_\ell = C_\ell^0$, which have a nice simple formula:
$$T_\ell(x) = \cos(\ell \cos^{-1} x).$$
(It is a polynomial!)
And for $\lambda = 1$ we get the Chebyshev polynomials of the second kind,
$$U_\ell(x) = (\ell + 1) C_\ell^1(x) = \frac{\sin((\ell + 1)\cos^{-1} x)}{\sin(\cos^{-1} x)}, \quad \ell = 0, 1, 2, \ldots$$
The orthogonality result is
$$\int_{-1}^{1} w(x) U_m(x) U_{m'}(x) \, dx = \frac{\pi}{2} \delta_{m,m'}.$$
This follows by the change of variables $x = \cos\theta$, $0 < \theta < \pi$, under which $U_m(\cos\theta) = \sin((m+1)\theta)/\sin\theta$ and
$$\int_{-1}^{1} w(x) U_m(x) U_{m'}(x) \, dx = \int_0^\pi \sin((m+1)\theta) \sin((m'+1)\theta) \, d\theta = \frac{\pi}{2} \delta_{m,m'}.$$
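A quick numerical check of this orthogonality, using the same substitution $x = \cos\theta$ (a sketch of mine):

```python
import numpy as np

# ∫_{-1}^{1} sqrt(1-x²) U_m(x) U_{m'}(x) dx
#   = ∫_0^π sin((m+1)θ) sin((m'+1)θ) dθ = (π/2) δ_{m,m'}
theta = np.linspace(0, np.pi, 100001)
dt = theta[1] - theta[0]
inner = lambda m, mp: np.sum(np.sin((m+1)*theta) * np.sin((mp+1)*theta)) * dt
# inner(3, 3) ≈ π/2 and inner(3, 5) ≈ 0
```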
Now if $g(s)$ is in the $w^{-1}$-weighted $L^2$ space,
$$\|g\|^2 = \int_{-1}^{1} \frac{|g(s)|^2}{w(s)} \, ds = \int_0^\pi |g(\cos\theta)|^2 \, d\theta,$$
and $g$ has an expansion in these orthogonal functions,
$$g(\cos\theta) = \sum_{m=0}^{\infty} c_m \sin((m+1)\theta), \quad c_m = \frac{2}{\pi} \int_0^\pi g(\cos\theta) \sin((m+1)\theta) \, d\theta.$$
So we can represent $g(s)$ as a series expansion in $u_m(s) = w(s) U_m(s)$.
Now we do a trick rather like 'separation of variables' for solving Laplace's equation. We think of $g(s, \Theta)$ as having a series in $w(s) U_m(s)$ for each fixed $\Theta$. We are led to consider the subspaces of $Y$ of functions of the form $\sqrt{2/\pi} \, w(s) U_m(s) u(\theta)$ for any $u \in L^2(0, 2\pi)$; call this subspace $Y_m$. We have $Y = Y_0 \oplus Y_1 \oplus Y_2 \oplus \cdots$, and it is clear that each subspace $Y_m$ is orthogonal to the others. Our important fact is that $RR^* Y_m \subseteq Y_m$, so that we can look for eigenfunctions of $RR^*$ in each $Y_m$. In matrix terms, $RR^*$ is 'block diagonal'.
Now we can show with a little calculation (it's not bad; see remark 4.4 on p. 243 of [2]) that any $g_m \in Y_m$, $g_m(s, \Theta) = \sqrt{2/\pi} \, w(s) U_m(s) u(\Theta)$, satisfies
$$\begin{aligned} RR^*[g_m](s, \Theta) &= \sqrt{\frac{2}{\pi}} \int_{-w(s)}^{w(s)} \int_0^{2\pi} U_m\big(\Theta' \cdot (s\Theta + t\Theta^\perp)\big) \, u(\Theta') \, d\theta' \, dt \\ &= \frac{4\pi}{m+1} \sqrt{\frac{2}{\pi}} \, w(s) U_m(s) \, \frac{1}{2\pi} \int_0^{2\pi} \frac{\sin((m+1)(\theta - \theta'))}{\sin(\theta - \theta')} \, u(\theta') \, d\theta', \end{aligned}$$
which shows that $RR^* Y_m \subseteq Y_m$ as claimed. Now we have to understand the operator
$$u \mapsto \frac{1}{2\pi} \int_0^{2\pi} \frac{\sin((m+1)(\theta - \theta'))}{\sin(\theta - \theta')} \, u(\theta') \, d\theta'.$$
You may have met this kind of integral operator before in the context of Fourier series: it takes $u$ to a truncated Fourier series (a Dirichlet-type kernel). Specifically, let $y_\ell(\theta) = \frac{1}{\sqrt{2\pi}} e^{i\ell\theta}$ be a Fourier basis function (orthonormal on $[0, 2\pi]$); then
$$\frac{1}{2\pi} \frac{\sin((m+1)(\theta - \theta'))}{\sin(\theta - \theta')} = \sum_{k=0}^{m} y_{m-2k}(\theta) \, \overline{y_{m-2k}(\theta')}.$$
With this in mind let us define
$$u_{m,k}(s, \Theta) = \sqrt{\frac{2}{\pi}} \, w(s) U_m(s) \, y_{m-2k}(\theta), \quad k = 0, 1, \ldots, m,$$
which are orthonormal in $Y_m$. We now have
$$RR^* u_{m,k} = \sigma_m^2 u_{m,k}, \quad k = 0, 1, \ldots, m, \qquad \sigma_m = \left(\frac{4\pi}{m+1}\right)^{1/2}.$$
We can now find $v_{m,k} = (1/\sigma_m) R^* u_{m,k}$ with a little more 'special function technology'.
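The trigonometric identity behind the kernel expansion above is easy to check numerically (a quick sketch of mine):

```python
import numpy as np

# Check sin((m+1)τ)/sin(τ) = Σ_{k=0}^{m} e^{i(m-2k)τ},
# which is the identity behind the projection onto the y_{m-2k}.
m, tau = 4, 0.73
lhs = np.sin((m+1)*tau) / np.sin(tau)
rhs = sum(np.exp(1j*(m - 2*k)*tau) for k in range(m+1))
# rhs is real up to rounding and equals lhs
```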
It turns out that
$$v_{m,k}(x) = (2m + 2)^{1/2} \, Q_{m,|m-2k|}(|x|) \, y_{m-2k}\!\left(\frac{x}{|x|}\right),$$
where $Q_{m,\ell}(r) = r^\ell P^{(0,\ell)}_{(m-\ell)/2}(2r^2 - 1)$ and $P^{(0,\ell)}$ is a Jacobi polynomial (a what?).
The Fourier Slice Theorem shows us that the $u_{m,k}$ form a complete orthonormal system on $Y$, so we have the SVD of the Radon transform:
$$R[f] = \sum_{m=0}^{\infty} \sum_{k=0}^{m} \sigma_m \, \langle f, v_{m,k} \rangle_X \, u_{m,k}.$$
In particular it tells us the Radon transform is only mildly ill-posed. Each singular value $\sigma_m$ is associated with the $m + 1$ singular functions $u_{m,0}, u_{m,1}, \ldots, u_{m,m}$, so counting multiplicity the singular values are
$$\sigma_0 = \sqrt{4\pi}, \quad \sigma_1 = \sqrt{\tfrac{4\pi}{2}}, \sqrt{\tfrac{4\pi}{2}}, \quad \sigma_2 = \sqrt{\tfrac{4\pi}{3}}, \sqrt{\tfrac{4\pi}{3}}, \sqrt{\tfrac{4\pi}{3}}, \ldots$$
Listed with multiplicity in this way, the singular values decay more slowly than $1/\sqrt{m+1}$.
[1] F. Natterer, The Mathematics of Computerized Tomography, SIAM, 2001.
[2] M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging, IOP Publishing, 1998.