Lecture 4: Capacity of Wireless Channels
I-Hsiang Wang (ihwang@ntu.edu.tw), 3/20, 2014
What we have learned
- So far: looked at specific schemes and techniques
- Lecture 2: point-to-point wireless channel
- Diversity: combat fading by exploiting inherent diversity
- Coding: combat noise, and further exploit degrees of freedom
- Lecture 3: cellular system
- Multiple access: TDMA, CDMA, OFDMA
- Interference management: orthogonalization (partial frequency
reuse), treat-interference-as-noise (interference averaging)
Information Theory
- Is there a framework to …
- Compare all schemes and techniques fairly?
- Establish the fundamental limit on how much rate can be reliably delivered over a wireless channel?
- Information theory!
- Provides a fundamental limit to (coded) performance
- Identifies the impact of channel resources on performance
- Suggests novel techniques to communicate over wireless channels
- Information theory provides the basis for the modern
development of wireless communication
Historical Perspective
- First radio built 100+ years ago
- Great stride in technology
- But design was somewhat ad-hoc
- 1901: G. Marconi, engineering meets science
- 1948: C. Shannon, new points of view arise
- Information theory: every channel has a capacity
- Provides a systematic view of all communication problems
Modern View on Multipath Fading
- Classical view: fading channels are unreliable
- Diversity techniques: average out the variation
- Modern view: exploit fading to gain spectral efficiency
- Thanks to the study of fading channels through the lens of information theory!
(Figure: channel quality fluctuating over time.)
- Use a heuristic argument (geometric) to introduce the
capacity of the AWGN channel
- Discuss the two key resources in the AWGN channel:
- Power
- Bandwidth
- The AWGN channel capacity serves as a building block
towards fading channel capacity:
- Slow fading channel: outage capacity
- Fast fading channel: ergodic capacity
Outline
- AWGN Channel Capacity
- Resources of the AWGN Channel
- Capacity of some LTI Gaussian Channels
- Capacity of Fading Channels
AWGN Channel Capacity
Channel Capacity
- Capacity := the highest data rate that can be delivered reliably over a channel
- Reliably ≡ vanishing error probability
- Before Shannon, it was widely believed that:
- to communicate with error probability → 0
- ⟹ data rate must also → 0
- Repetition coding (with M-level PAM) over N time slots in an AWGN channel:
- Error probability ≈ 2Q(√(6N·SNR/(M² − 1)))
- Data rate = log₂(M)/N bits per symbol time
- As long as M ≤ N⅓, the error probability → 0 as N → ∞
- But the data rate, at most log₂(N)/(3N), still → 0 as N → ∞
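To see both limits concretely, here is a small numerical sketch (my own, not from the lecture) evaluating the repetition-coding error estimate 2Q(√(6N·SNR/(M² − 1))) together with the rate log₂(M)/N, picking M = N^(1/3):

```python
# Sketch: error probability and rate of repetition coding with M-level PAM
# over N AWGN symbol times. Both quantities vanish as N grows, which is the
# point of the slide. All parameter values here are assumed for illustration.
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def repetition_pam(N, snr):
    M = max(2, int(round(N ** (1 / 3))))   # choose M = N^(1/3) levels
    p_e = 2 * Q(math.sqrt(6 * N * snr / (M ** 2 - 1)))
    rate = math.log2(M) / N                # bits per symbol time
    return p_e, rate

for N in (10, 1000, 100000):
    p_e, rate = repetition_pam(N, snr=1.0)
    print(f"N={N:6d}  p_e={p_e:.2e}  rate={rate:.4f}")
# Both the error probability and the rate tend to 0 as N grows.
```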
Channel Coding Theorem
- For every memoryless channel, there is a definite number C
that is computable such that:
- If the data rate R < C, then there exists a coding scheme that can
deliver rate R data over the channel with error probability → 0 as the coding block length N → ∞
- Conversely, if the data rate R > C, then no matter what coding
scheme is used, the error probability → 1 as N → ∞
- We shall focus on the additive white Gaussian noise
(AWGN) channel
- Give a heuristic argument to derive the AWGN channel capacity
AWGN Channel
- We consider the real-valued Gaussian channel
- As mentioned earlier, repetition coding yields zero rate if the error probability is required to vanish as N → ∞
- Because all codewords are spread on a single dimension
in an N-dimensional space
- How to do better?
y[n] = x[n] + z[n], z[n] ∼ N(0, σ²)
Power constraint: ∑_{n=1}^{N} |x[n]|² ≤ NP
Sphere Packing Interpretation

y = x + z ∈ ℝ^N
- By the law of large numbers, as N → ∞, most y will lie inside the N-dimensional sphere of radius √(N(P + σ²))
- Also by the LLN, as N → ∞, y will lie near the surface of the N-dimensional sphere centered at x with radius √(Nσ²)
- How many non-overlapping spheres can be packed into the large sphere?
- Vanishing error probability ⟹ non-overlapping spheres
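A quick Monte-Carlo sketch (parameter values are assumed for illustration) of the two LLN facts behind this picture: ‖y‖² concentrates around N(P + σ²), and ‖y − x‖² = ‖z‖² concentrates around Nσ²:

```python
# Sketch: simulate one transmitted codeword through the real AWGN channel and
# check the concentration of the squared norms used in the sphere-packing
# argument. P, sigma2, and N are assumed values.
import math, random

random.seed(0)
N, P, sigma2 = 10000, 1.0, 0.25

x = [random.gauss(0, math.sqrt(P)) for _ in range(N)]       # a typical codeword
z = [random.gauss(0, math.sqrt(sigma2)) for _ in range(N)]  # AWGN
y = [xi + zi for xi, zi in zip(x, z)]

norm_y2 = sum(v * v for v in y)
norm_z2 = sum(v * v for v in z)
print(norm_y2 / N)  # close to P + sigma^2 = 1.25
print(norm_z2 / N)  # close to sigma^2   = 0.25
```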
Why Repetition Coding is Bad

y = x + z ∈ ℝ^N
- Repetition coding puts all codewords along a single line: it only uses one dimension out of N!
Capacity Upper Bound

y = x + z ∈ ℝ^N
Maximum # of non-overlapping spheres = maximum # of codewords that can be reliably delivered:
2^{NR} ≤ (√(N(P + σ²)))^N / (√(Nσ²))^N
⟹ R ≤ (1/2) log₂(1 + P/σ²)
This is hence an upper bound on the capacity C. How to achieve it?
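A minimal numerical sketch of the sphere-counting bound, for assumed values P = 1 and σ² = 0.25: the big sphere has radius √(N(P + σ²)) and each decoding sphere radius √(Nσ²), so at most ((P + σ²)/σ²)^{N/2} codewords fit.

```python
# Sketch: the per-symbol rate implied by the sphere-packing count equals
# 0.5*log2(1 + P/sigma^2), independently of N. P, sigma2, N are assumed.
import math

def sphere_packing_rate(P, sigma2):
    return 0.5 * math.log2(1 + P / sigma2)

P, sigma2, N = 1.0, 0.25, 100
max_codewords_log2 = (N / 2) * math.log2((P + sigma2) / sigma2)
print(sphere_packing_rate(P, sigma2))   # ~1.161 bits per symbol time
print(max_codewords_log2 / N)           # the same bound, per symbol
```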
Achieving Capacity (1/3)

- (Random) encoding: randomly generate 2^{NR} codewords {x₁, x₂, ...} lying inside the "x-sphere" of radius √(NP)
- Decoding: y → MMSE scaling → αy → nearest neighbor → x̂, with α := P/(P + σ²)
- Performance analysis: WLOG let x₁ be sent
- By the LLN, ‖αy − x₁‖² = ‖αz + (α − 1)x₁‖² ≈ α²Nσ² + (α − 1)²NP = N·Pσ²/(P + σ²)
- As long as αy lies inside the uncertainty sphere centered at x₁ with radius √(N·Pσ²/(P + σ²)), decoding will be correct
- Pairwise error probability (see next slide) = (σ²/(P + σ²))^{N/2}
Achieving Capacity (2/3)

- When does an error occur? When another codeword falls inside the uncertainty sphere of x₁, which has radius √(N·Pσ²/(P + σ²)), while the x-sphere has radius √(NP)
- What is that probability (the pairwise error probability)? The ratio of the volumes of the two spheres:
Pr{x₁ → x₂} = (√(N·Pσ²/(P + σ²)))^N / (√(NP))^N = (σ²/(P + σ²))^{N/2}
- Union bound: total error probability ≤ 2^{NR} · (σ²/(P + σ²))^{N/2}
Achieving Capacity (3/3)

- Total error probability (by union bound):
Pr{E} ≤ 2^{NR} (σ²/(P + σ²))^{N/2} = 2^{−N((1/2) log₂(1 + P/σ²) − R)}
- As long as R < (1/2) log₂(1 + P/σ²), Pr{E} → 0 as N → ∞
- Hence, the capacity is indeed C = (1/2) log₂(1 + P/σ²) bits per symbol time
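A quick sketch (my own numbers) evaluating the union bound 2^{NR}(σ²/(P + σ²))^{N/2} = 2^{−N(C − R)}, showing the exponential decay for any rate R below C:

```python
# Sketch: the union-bound error estimate from the achievability argument,
# evaluated for assumed values P = 1, sigma^2 = 0.25, and R = 0.9 < C.
import math

P, sigma2 = 1.0, 0.25
C = 0.5 * math.log2(1 + P / sigma2)   # ~1.161 bits per symbol time
R = 0.9                               # any rate strictly below C

for N in (10, 100, 1000):
    bound = 2 ** (-N * (C - R))
    print(f"N={N:5d}  union bound <= {bound:.3e}")
# The bound shrinks exponentially in N since C - R > 0.
```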
Resources of the AWGN Channel
Continuous-Time AWGN Channel

- System parameters:
- Power constraint: P watts; bandwidth: W Hz
- Spectral density of the white Gaussian noise: N₀/2
- Equivalent discrete-time baseband channel (complex): y[n] = x[n] + z[n], z[n] ∼ CN(0, N₀W)
- Power constraint: ∑_{n=1}^{N} |x[n]|² ≤ NP
- 1 complex symbol = 2 real symbols
- Capacity:
C_AWGN(P, W) = 2 × (1/2) log₂(1 + (P/2)/(N₀W/2)) bits per symbol time
= W log₂(1 + P/(N₀W)) bits/s
= log₂(1 + SNR) bits/s/Hz, where SNR := P/(N₀W) is the SNR per complex symbol
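A one-function sketch of this formula, with illustrative link numbers that are assumed, not from the lecture:

```python
# Sketch: the continuous-time AWGN capacity C(P, W) = W*log2(1 + P/(N0*W))
# in bits/s. The power, bandwidth, and noise PSD below are assumed examples.
import math

def awgn_capacity(P, W, N0):
    snr = P / (N0 * W)
    return W * math.log2(1 + snr)   # bits/s

P  = 1e-6      # received power: 1 microwatt (assumed)
N0 = 1e-12     # noise PSD: 1 pW/Hz (assumed)
W  = 1e6       # bandwidth: 1 MHz
print(awgn_capacity(P, W, N0))     # ~1 Mbit/s, since SNR = 1 (0 dB)
```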
Complex AWGN Channel Capacity

- The capacity formula provides a high-level way of thinking about how performance fundamentally depends on the basic resources available in the channel
- No need to go into the details of specific coding and modulation schemes
- Basic resources: power P and bandwidth W

C_AWGN(P, W) = W log₂(1 + P/(N₀W)) bits/s = log₂(1 + SNR) bits/s/Hz (spectral efficiency)
Power

Fix W, so that SNR = P/(N₀W) grows with P:
- High SNR: C = log₂(1 + SNR) ≈ log₂ SNR, logarithmic growth with power
- Low SNR: C = log₂(1 + SNR) ≈ SNR · log₂ e, linear growth with power

(Plot: log₂(1 + SNR) vs. SNR.)
Bandwidth

Fix P:
C(W) = W log₂(1 + P/(N₀W)) ≈ W · (P/(N₀W)) · log₂ e = (P/N₀) log₂ e for large W

(Plot: C(W) in Mbps vs. bandwidth W in MHz, growing in the bandwidth-limited region and saturating at the limit (P/N₀) log₂ e in the power-limited region as W → ∞.)
Bandwidth-limited vs. Power-limited

C_AWGN(P, W) = W log₂(1 + P/(N₀W)) bits/s, SNR = P/(N₀W)
- When SNR ≪ 1 (power-limited regime): C_AWGN(P, W) ≈ W · (P/(N₀W)) · log₂ e = (P/N₀) log₂ e; linear in power, insensitive to bandwidth
- When SNR ≫ 1 (bandwidth-limited regime): C_AWGN(P, W) ≈ W log₂(P/(N₀W)); logarithmic in power, approximately linear in bandwidth
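To see the saturation numerically, the sketch below (P and N₀ are assumed values) sweeps the bandwidth while holding the power fixed:

```python
# Sketch: capacity vs. bandwidth at fixed power. As W grows the channel moves
# from the bandwidth-limited to the power-limited regime and C(W) approaches
# (P/N0)*log2(e) bits/s.
import math

def awgn_capacity(P, W, N0):
    return W * math.log2(1 + P / (N0 * W))

P, N0 = 1e-6, 1e-12                       # assumed example values
limit = (P / N0) * math.log2(math.e)      # ~1.4427e6 bits/s

for W in (1e6, 1e7, 1e8, 1e9):
    print(f"W={W:8.0e} Hz  C={awgn_capacity(P, W, N0):.4e} bits/s")
print(f"W->inf limit: {limit:.4e} bits/s")
```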
Capacity of Some LTI Gaussian Channels
SIMO Channel

y = hx + w ∈ ℂ^L, w ∼ CN(0, σ²I_L), power constraint: P
- MRC is a lossless operation: projecting y onto h/‖h‖ gives ỹ = ‖h‖x + w̃, w̃ ∼ CN(0, σ²); the component of y orthogonal to h is pure noise, so ỹ is a sufficient statistic for x
- Hence the SIMO channel capacity equals the capacity of the equivalent AWGN channel:
C_SIMO = log₂(1 + ‖h‖²P/σ²)
- The factor ‖h‖² is the power gain due to Rx beamforming
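A minimal sketch of this formula, with an assumed (real-valued, for simplicity) 3-antenna channel vector:

```python
# Sketch: SIMO capacity after MRC, C = log2(1 + ||h||^2 * P / sigma^2).
# The channel vector, power, and noise variance are assumed example values.
import math

def simo_capacity(h, P, sigma2):
    gain = sum(abs(hi) ** 2 for hi in h)      # ||h||^2, the Rx power gain
    return math.log2(1 + gain * P / sigma2)   # bits/s/Hz

h = [1.0, 0.5, 0.5]                           # assumed 3-antenna channel
print(simo_capacity(h, P=1.0, sigma2=1.0))
# ||h||^2 = 1.5, so C = log2(2.5) ~ 1.32 bits/s/Hz, vs log2(2) = 1 with one antenna.
```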
MISO Channel

y = h*x + w ∈ ℂ, x, h ∈ ℂ^L, h* = [h₁* h₂* ⋯ h_L*], power constraint: ∑_{n=1}^{N} ‖x[n]‖² ≤ NP
- Goal: maximize the received signal power |h*x|²
- The answer is ‖h‖²P! (Check; hint: Cauchy-Schwarz inequality)
- Achieved by Tx beamforming: send a scalar symbol x̃ in the direction of h, i.e., x = x̃·h/‖h‖, so that y = ‖h‖x̃ + w
- Power constraint on x̃: still P
- Capacity: C_MISO = log₂(1 + ‖h‖²P/σ²)
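A numerical check of the Cauchy-Schwarz claim (my own sketch, real-valued vectors for simplicity): among unit-norm transmit directions u, the received power |h*u|²·P is maximized by u = h/‖h‖, giving ‖h‖²P.

```python
# Sketch: compare the beamforming direction against many random unit-norm
# directions; none should beat ||h||^2 * P. The channel vector is assumed.
import math, random

random.seed(1)
h = [1.0, -0.5, 0.25]                        # assumed channel (real-valued)
P = 1.0
norm_h2 = sum(hi * hi for hi in h)           # ||h||^2 = 1.3125

def rx_power(u):
    dot = sum(hi * ui for hi, ui in zip(h, u))
    return (dot ** 2) * P

best_random = 0.0
for _ in range(1000):
    u = [random.gauss(0, 1) for _ in h]
    s = math.sqrt(sum(ui * ui for ui in u))
    u = [ui / s for ui in u]                 # enforce unit norm
    best_random = max(best_random, rx_power(u))

u_bf = [hi / math.sqrt(norm_h2) for hi in h] # Tx beamforming direction
print(rx_power(u_bf))                        # = ||h||^2 * P = 1.3125
print(best_random)                           # never exceeds 1.3125
```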
Frequency-Selective Channel
- Key idea 1: use OFDM to convert the channel with ISI
into a bunch of parallel AWGN channels
- But there is loss/overhead due to cyclic prefix
- Key idea 2: CP overhead → 0 as Nc → ∞
- First focus on finding the capacity of parallel AWGN channels for any finite Nc
- Then take Nc → ∞ to find the capacity of the frequency-
selective channel
y[m] = ∑_{l=0}^{L−1} h_l x[m − l] + w[m]
Recap: OFDM

(Block diagram: data symbols d[0], …, d[N_c − 1] go through an IDFT, a cyclic prefix is prepended, the result passes through the channel, then the receiver removes the prefix and applies a DFT.)

y := y[L : N_c + L − 1], w := w[L : N_c + L − 1], h := [h₀ h₁ ⋯ h_{L−1} 0 ⋯ 0]^T
ỹ_n = h̃_n·d̃_n + w̃_n, n = 0, 1, …, N_c − 1
where ỹ_n := DFT(y)_n, d̃_n := DFT(d)_n, w̃_n := DFT(w)_n, h̃_n := √N_c·DFT(h)_n
⟹ N_c parallel AWGN channels
Parallel AWGN Channels

ỹ_n[m] = h̃_n·d̃_n[m] + w̃_n[m], n ∈ [0 : N_c − 1], m = 1, 2, …, M (M channel uses)
Equivalent vector channel: ỹ = H̃·d̃ + w̃, where H̃ = diag(h̃₀, …, h̃_{N_c−1}), w̃ ∼ CN(0, σ²I)
Power constraint (due to Parseval's theorem for the DFT): ∑_{m=1}^{M} ‖d̃[m]‖² ≤ M·N_c·P
Independent Uses of Parallel Channels
- One way to code over such parallel channels (a special
case of a vector channel): treat each channel separately
- It turns out that coding across parallel channels does not help!
- Power allocation:
- Each of the Nc channels gets a portion of the total power
- Channel n gets power Pn, which must satisfy:
- For a given power allocation {Pn}, the following rate can
be achieved:
Power constraint: ∑_{n=0}^{N_c−1} P_n ≤ N_c·P
Achievable rate: R = ∑_{n=0}^{N_c−1} log₂(1 + |h̃_n|²P_n/σ²)
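A minimal sketch of this achievable rate (channel gains and parameters are assumed example values), using the simplest flat allocation as a baseline:

```python
# Sketch: rate of Nc parallel AWGN channels under a given power allocation,
# R = sum_n log2(1 + |h_n|^2 * P_n / sigma^2). Equal power on every
# subcarrier is the baseline that waterfilling improves upon.
import math

def parallel_rate(h_gains, powers, sigma2):
    return sum(math.log2(1 + g * p / sigma2)
               for g, p in zip(h_gains, powers))

h_gains = [1.0, 0.5, 0.1, 0.01]   # |h_n|^2 per subcarrier (assumed)
Nc, P, sigma2 = 4, 1.0, 0.1
flat = [P] * Nc                    # each channel gets power P
print(parallel_rate(h_gains, flat, sigma2))  # bits per OFDM symbol
```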
Optimal Power Allocation
- Power allocation problem:
- It can be solved explicitly by Lagrangian methods
- Final solution: let (x)+ := max(x , 0)
max_{P₀,…,P_{N_c−1}} ∑_{n=0}^{N_c−1} log₂(1 + |h̃_n|²P_n/σ²), subject to ∑_{n=0}^{N_c−1} P_n = N_c·P, P_n ≥ 0, n = 0, …, N_c − 1

P_n* = (ν − σ²/|h̃_n|²)⁺, where ν satisfies ∑_{n=0}^{N_c−1} (ν − σ²/|h̃_n|²)⁺ = N_c·P
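The slide's closed form leaves the water level ν implicit; a minimal sketch (my own, solving for ν by bisection rather than Lagrangian algebra, with assumed channel gains) looks like this:

```python
# Sketch: waterfilling power allocation P_n = (nu - sigma^2/|h_n|^2)^+, with
# the water level nu found by bisection so the powers add up to Nc*P.
import math

def waterfill(h_gains, P, sigma2, iters=100):
    Nc = len(h_gains)
    inv = [sigma2 / g for g in h_gains]       # per-channel "floor" heights
    lo, hi = 0.0, max(inv) + Nc * P           # nu is bracketed by [lo, hi]
    for _ in range(iters):
        nu = (lo + hi) / 2
        total = sum(max(nu - f, 0.0) for f in inv)
        if total > Nc * P:
            hi = nu
        else:
            lo = nu
    return [max(nu - f, 0.0) for f in inv]

h_gains = [1.0, 0.5, 0.1, 0.01]   # assumed |h_n|^2 values
P, sigma2 = 1.0, 0.1
alloc = waterfill(h_gains, P, sigma2)
print([round(p, 3) for p in alloc])   # stronger subcarriers get more power
print(round(sum(alloc), 6))           # total = Nc * P = 4.0
```

Note the weakest subcarrier (floor σ²/|h̃_n|² = 10, above the water level) gets exactly zero power, matching the (·)⁺ clipping in the formula.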
Waterfilling

(Figure: water level ν over the per-subcarrier floor σ²/|h̃_n|²; subcarriers whose floor exceeds ν receive no power, e.g. P₁* = 0, while the others fill up to ν.)
Note: h̃_n = H_b(nW/N_c), the baseband frequency response at f = nW/N_c
Frequency-Selective Channel Capacity

- Final step: let N_c → ∞
- Replace each h̃_n = H_b(nW/N_c) by H_b(f); the summation over [0 : N_c − 1] becomes an integration from 0 to W
- The power allocation problem becomes:
max_{P(f)} ∫₀^W log₂(1 + |H_b(f)|²P(f)/σ²) df, subject to ∫₀^W P(f) df = P, P(f) ≥ 0, f ∈ [0, W]
- The optimal solution becomes:
P*(f) = (ν − σ²/|H_b(f)|²)⁺, where ν satisfies ∫₀^W (ν − σ²/|H_b(f)|²)⁺ df = P
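The continuous problem can be approximated exactly as the slide suggests, by discretizing [0, W] into many bins. A sketch (my own; the low-pass response H_b and all link numbers are assumed toys):

```python
# Sketch: continuous-frequency waterfilling solved approximately by slicing
# [0, W] into bins and bisecting on the water level nu so that the integral
# of P(f) equals P. sigma2 plays the role of the noise PSD per Hz.
import math

def waterfill_spectrum(Hb2, W, P, sigma2, bins=1000, iters=100):
    df = W / bins
    floors = [sigma2 / Hb2((i + 0.5) * df) for i in range(bins)]
    lo, hi = 0.0, max(floors) + P / df
    for _ in range(iters):
        nu = (lo + hi) / 2
        total = sum(max(nu - f, 0.0) for f in floors) * df
        if total > P:
            hi = nu
        else:
            lo = nu
    # capacity: integral of log2(1 + |H(f)|^2 P(f)/sigma^2) df, where
    # |H|^2 P(f)/sigma^2 = P(f)/floor(f)
    cap = sum(math.log2(1 + max(nu - f, 0.0) / f) for f in floors) * df
    return nu, cap

Hb2 = lambda f: 1.0 / (1.0 + (f / 2e5) ** 2)   # toy |H_b(f)|^2 (assumed)
nu, cap = waterfill_spectrum(Hb2, W=1e6, P=1e-6, sigma2=1e-12)
print(cap)   # bits/s over the 1 MHz band
```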
Waterfilling over the Frequency Spectrum

(Figure: the optimal P*(f) across frequency fills the area between the water level ν and the floor σ²/|H_b(f)|².)