Digital Communication Systems
ECS 452
Channel Capacity
Asst. Prof. Dr. Prapun Suksompong
prapun@siit.tu.ac.th
Office Hours: Rangsit Library: Tuesday 16:20-17:20; BKD3601-7: Thursday 16:00-17:00
Reliable communication means arbitrarily small error probability.
This seems to be an impossible goal.
If the channel introduces errors, how can one correct them all?
Any correction process is also subject to error, ad infinitum.
Operational channel capacity C = the maximum rate at which reliable communication over the channel is possible.
[Diagram: channel with input X, output Y, and transition probabilities Q(y|x)]
Encoding (channel encoding): introduce redundancy so that even if some of the information is corrupted by the channel, the receiver can still recover the message.
The most obvious coding scheme is to repeat information. For example,
to send a 1, we send 11111, and to send a 0, we send 00000.
This scheme uses five symbols to send 1 bit, and therefore has a rate of
1/5 bit per symbol.
If this code is used on a binary symmetric channel, the ML decoding rule (which is optimal when the 0s and 1s are equiprobable) is equivalent to taking the majority vote of each block of five received bits.
If three or more bits are 1, we decode the block as a 1; otherwise, we decode it as 0.
By using longer repetition codes, we can achieve an arbitrarily low
probability of error.
But the rate of the code also goes to zero as the block length grows, so even though the code is “simple,” it is really not a very useful code.
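To see the rate-reliability trade-off numerically, here is a minimal MATLAB sketch (not from the course files; p, n, and the number of bits are assumed values) of the rate-1/5 repetition code on a BSC with majority-vote decoding:

  p = 0.1;                        % BSC crossover probability (assumed value)
  n = 5;                          % repetition factor
  N = 1e5;                        % number of information bits to simulate
  bits = rand(1,N) > 0.5;         % equiprobable information bits
  tx = repmat(bits, n, 1);        % repeat each bit n times (one column per bit)
  rx = xor(tx, rand(n,N) < p);    % BSC flips each coded bit independently
  dec = sum(rx,1) > n/2;          % majority vote on each block of n
  BER = mean(dec ~= bits)         % empirical error probability per info bit
  P5 = sum(arrayfun(@(k) nchoosek(n,k)*p^k*(1-p)^(n-k), 3:n))  % theory: P(3+ flips)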
[Plot: error probability P versus BSC crossover probability p (0.05 to 0.5) for repetition codes with n = 1, 5, 15, 25]
In mathematics, parity refers to the evenness or oddness of an integer. Here, parity refers to the evenness or oddness of the number of 1s within a given set of bits.
It can be calculated via an XOR sum of the bits, yielding 0 for even parity and 1 for odd parity.
Ex.
Even parity: 0110, 011011; Odd parity: 0111, 011010
A parity bit, or check bit, is a bit added to the end of the k information bits, so that n = k + 1. There are two variants of parity bits: the even parity bit and the odd parity bit.
Even parity bit: choose the nth bit so that the number of 1s in the whole block is even.
Ex. k = 5: 100001; 101000; 111111; 010111
$$x_i = \begin{cases} b_i, & i = 1, 2, \ldots, k, \\ b_1 \oplus b_2 \oplus \cdots \oplus b_k, & i = k + 1 = n, \end{cases}$$
where $b_1, b_2, \ldots, b_k$ are the information bits (even parity).
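A one-line MATLAB illustration of the even parity bit (using the first example above):

  b = [1 0 0 0 0];                % k = 5 information bits
  x = [b, mod(sum(b), 2)]         % even-parity codeword: 100001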
Used as the simplest form of error-detecting code. It does not detect an even number of errors, and it does not give any information about how to correct the errors it detects.
Generalization: Parity Check Codes
We can extend the idea of the parity check bit to allow for multiple parity check bits and to allow the parity checks to depend on various subsets of the information bits.
The Hamming code is an example of a parity check code (a sketch follows below).
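As an illustrative sketch of multiple parity checks, here is the (7,4) Hamming code in one common systematic form (this particular generator matrix is an assumption, not necessarily the one used in class):

  G = [1 0 0 0 1 1 0;             % generator matrix [I | P]; each of the
       0 1 0 0 1 0 1;             % three parity bits checks a different
       0 0 1 0 0 1 1;             % subset of the four information bits
       0 0 0 1 1 1 1];
  b = [1 0 1 1];                  % information bits
  x = mod(b*G, 2)                 % 7-bit codeword (mod-2 arithmetic)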
Shannon's noisy channel coding theorem [SHANNON, 1948]:
1. Reliable communication is possible whenever R < C. In particular, for any R < C, there exist codes (encoders and decoders) with sufficiently large n such that
$$P\big(\hat{W} \neq W\big) \leq 2^{-E(R)\, n},$$
where E(R) is a positive function of R for R < C and is completely determined by the channel characteristics.
2. Conversely, reliable communication is impossible when R > C.
The channel capacity expresses the limit to reliable communication and provides a yardstick to measure the performance of communication systems.
A system performing near capacity is a near-optimal system and does not have much room for improvement.
On the other hand, a system operating far from this fundamental bound can be improved (mainly through coding techniques).
Shannon introduced a method of proof called random coding.
Instead of looking for the best possible coding scheme and analyzing its performance, which is a difficult task, all possible coding schemes are considered by generating the code randomly with an appropriate distribution, and the performance of the system is averaged over them.
It is then proved that if R < C, the average error probability tends to zero.
This proves that, as long as R < C, for any arbitrarily small (but still positive) probability of error, there exists at least one code (with sufficiently long block length n) that performs better than the specified probability of error.
If we use the scheme suggested and generate a code at random, the code constructed is likely to be good for long block lengths.
However, such a code has no structure and is therefore very difficult to decode.
In addition to achieving low probabilities of error, useful codes should be “simple,” so that they can be encoded and decoded efficiently.
Hence the theorem does not provide a practical coding scheme.
Since Shannon's paper, a variety of techniques have been used to construct good error-correcting codes.
The entire field of coding theory has been developed during this search.
Turbo codes have come close to achieving capacity for Gaussian channels.
[Figure: signals s_i(t) and s_j(t) with decision region [a_i, b_i] for s_i and decision region [a_j, b_j] for s_j]

For Gaussian noise $N \sim \mathcal{N}(0, \sigma_N^2)$,
$$P(a < N < b) = Q\left(\frac{a}{\sigma_N}\right) - Q\left(\frac{b}{\sigma_N}\right).$$
Hence, when $s_i$ is transmitted, the received value is $R = s_i + N$, and
$$P\big(\hat{W} = i \mid W = i\big) = P\big(a_i \le R \le b_i \mid S = s_i\big) = P\big(a_i - s_i \le N \le b_i - s_i\big) = Q\left(\frac{a_i - s_i}{\sigma_N}\right) - Q\left(\frac{b_i - s_i}{\sigma_N}\right).$$
Similarly, for $j \ne i$,
$$P\big(\hat{W} = j \mid W = i\big) = P\big(a_j \le R \le b_j \mid S = s_i\big) = Q\left(\frac{a_j - s_i}{\sigma_N}\right) - Q\left(\frac{b_j - s_i}{\sigma_N}\right).$$
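A quick numerical check in MATLAB (the signal value, boundaries, and noise level below are made-up; Q is implemented via erfc, so no toolbox is needed):

  Qf = @(x) 0.5*erfc(x/sqrt(2));  % Q-function
  sigma = 1; si = -1;             % noise std and transmitted value (assumed)
  aj = 0; bj = Inf;               % decision region of s_j (assumed)
  Pji = Qf((aj-si)/sigma) - Qf((bj-si)/sigma)   % P(What = j | W = i)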
[Figure: three equally spaced signals s_1(t), s_2(t), s_3(t), each a distance d from its neighbor; the decision boundaries lie at the midpoints, d/2 from each signal]

Applying $P\big(\hat{W} = j \mid W = i\big) = Q\left(\frac{a_j - s_i}{\sigma_N}\right) - Q\left(\frac{b_j - s_i}{\sigma_N}\right)$ to each pair $i \ne j$, every transition probability reduces to a combination of $Q\left(\frac{d}{2\sigma_N}\right)$ and $Q\left(\frac{3d}{2\sigma_N}\right)$.
Transition probabilities: with
$$q_1 = Q\left(\frac{d}{2\sigma_N}\right), \qquad q_3 = Q\left(\frac{3d}{2\sigma_N}\right),$$
the matrix of transition probabilities $\big[P\big(\hat{W} = j \mid W = i\big)\big]$ is
$$\mathbf{Q} = \begin{bmatrix} 1 - q_1 & q_1 - q_3 & q_3 \\ q_1 & 1 - 2q_1 & q_1 \\ q_3 & q_1 - q_3 & 1 - q_1 \end{bmatrix}.$$
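The matrix is easy to tabulate numerically; a sketch with assumed values of d and sigma:

  Qf = @(x) 0.5*erfc(x/sqrt(2));
  d = 2; sigma = 1;               % assumed spacing and noise std
  q1 = Qf(d/(2*sigma)); q3 = Qf(3*d/(2*sigma));
  Qmat = [1-q1,  q1-q3,  q3;
          q1,    1-2*q1, q1;
          q3,    q1-q3,  1-q1]
  sum(Qmat, 2)                    % each row must sum to 1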
[Figure: two-dimensional signal space with signals s_i and s_j and axes 𝜚1, 𝜚2; the decision region for s_j is the rectangle [a, b] × [c, d]]

The two noise components are i.i.d. $\mathcal{N}(0, \sigma^2)$. Writing $s_i = (s_{i,1}, s_{i,2})$ and $R = (R_1, R_2) = s_i + (N_1, N_2)$, the independence of the noise components gives
$$P\big(\hat{W} = j \mid W = i\big) = P\big(a \le R_1 \le b,\ c \le R_2 \le d \mid S = s_i\big) = P\big(a \le s_{i,1} + N_1 \le b\big)\, P\big(c \le s_{i,2} + N_2 \le d\big)$$
$$= \left[Q\left(\frac{a - s_{i,1}}{\sigma}\right) - Q\left(\frac{b - s_{i,1}}{\sigma}\right)\right] \left[Q\left(\frac{c - s_{i,2}}{\sigma}\right) - Q\left(\frac{d - s_{i,2}}{\sigma}\right)\right].$$
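In MATLAB, the product form is a one-liner (all numeric values assumed):

  Qf = @(x) 0.5*erfc(x/sqrt(2));
  sigma = 1; si = [-1 -1];        % transmitted 2-D signal point (assumed)
  a = 0; b = Inf; c = 0; d = Inf; % rectangular region of s_j (assumed)
  Pji = (Qf((a-si(1))/sigma) - Qf((b-si(1))/sigma)) * ...
        (Qf((c-si(2))/sigma) - Qf((d-si(2))/sigma))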
[Figure: four signals s_1(t), s_2(t), s_3(t), s_4(t) at the corners of a square with side d (QPSK-type constellation); the decision regions are the four quadrants]

With $q = Q\left(\frac{d}{2\sigma}\right)$, the independence of the two noise components gives
$$P\big(\hat{W} = 1 \mid W = 1\big) = (1 - q)^2, \qquad P\big(\hat{W} = 3 \mid W = 1\big) = q^2,$$
$$P\big(\hat{W} = 2 \mid W = 1\big) = P\big(\hat{W} = 4 \mid W = 1\big) = q(1 - q),$$
since an error to the diagonal signal requires both coordinates to be in error, while an error to an adjacent signal requires exactly one. By symmetry, the full matrix of transition probabilities is
$$\mathbf{Q} = \begin{bmatrix} (1-q)^2 & q(1-q) & q^2 & q(1-q) \\ q(1-q) & (1-q)^2 & q(1-q) & q^2 \\ q^2 & q(1-q) & (1-q)^2 & q(1-q) \\ q(1-q) & q^2 & q(1-q) & (1-q)^2 \end{bmatrix}.$$
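A Monte Carlo sanity check of the (1-q)^2 entry (the values of d and sigma, and the placement of s_1 in the first quadrant, are assumptions):

  Qf = @(x) 0.5*erfc(x/sqrt(2));
  d = 2; sigma = 1; q = Qf(d/(2*sigma)); N = 1e6;
  s1 = [d/2; d/2];                % s_1 at the first-quadrant corner
  R = s1 + sigma*randn(2, N);     % received vectors given W = 1
  Pcorrect_sim = mean(R(1,:) > 0 & R(2,:) > 0)  % estimate of P(What=1|W=1)
  Pcorrect_theory = (1 - q)^2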
MATLAB demos: Q_Matrix = Q_ML(SS,EbN0dB,n); Capacity_BPSK_Example
[Plot: channel capacity C (0.1 to 1) versus Eb/N0 [dB]; the BPSK (sim) and BPSK (theoretical) curves agree]
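The course script itself is not reproduced here, but a minimal stand-in for the theoretical curve, under the hard-decision assumption that BPSK over AWGN reduces to a BSC with crossover p = Q(sqrt(2 Eb/N0)) and hence C = 1 - H2(p), would be:

  Qf = @(x) 0.5*erfc(x/sqrt(2));
  H2 = @(p) -p.*log2(p) - (1-p).*log2(1-p);   % binary entropy function
  EbN0dB = 0:0.5:10;
  p = Qf(sqrt(2*10.^(EbN0dB/10)));            % BSC crossover at each Eb/N0
  C = 1 - H2(p);                              % bits per channel use
  plot(EbN0dB, C), xlabel('E_b/N_0 [dB]'), ylabel('C')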
Comparison of the discrete and continuous cases:

Discrete X and Y:
$$H(X) = -\sum_x p_X(x) \log_2 p_X(x)$$
$$H(Y \mid X) = -\sum_x \sum_y p_{X,Y}(x, y) \log_2 p_{Y|X}(y \mid x)$$
$$I(X; Y) = H(Y) - H(Y \mid X)$$
$$p_Y(y) = \sum_x p_{Y|X}(y \mid x)\, p_X(x)$$

Continuous X and Y:
$$h(X) = -\int f_X(x) \log_2 f_X(x)\, dx$$
$$h(Y \mid X) = -\iint f_{X,Y}(x, y) \log_2 f_{Y|X}(y \mid x)\, dy\, dx$$
$$I(X; Y) = h(Y) - h(Y \mid X)$$
$$f_Y(y) = \int f_{Y|X}(y \mid x)\, f_X(x)\, dx$$
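A sketch that evaluates the discrete formulas for a BSC (the crossover probability and the uniform input are assumed choices):

  p = 0.1;                        % BSC crossover probability (assumed)
  pX = [0.5 0.5];                 % input pmf (assumed uniform)
  Q  = [1-p, p; p, 1-p];          % transition matrix p_{Y|X}(y|x)
  pY = pX*Q;                      % p_Y(y) = sum_x p_{Y|X}(y|x) p_X(x)
  HY = -sum(pY.*log2(pY));        % H(Y)
  H2 = @(t) -t.*log2(t) - (1-t).*log2(1-t);
  HYX = pX*[H2(p); H2(p)];        % H(Y|X) = sum_x p_X(x) H(Y|X=x)
  I = HY - HYX                    % equals 1 - H2(0.1) ~ 0.531 here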
Suppose Y = X + N, where the noise $N \sim \mathcal{N}(0, \sigma_N^2)$ is independent of the input X (the discrete-time AWGN channel).
Input power constraint: in addition, it is usually assumed that $E[X^2] \le P$.
Capacity:
$$C = \frac{1}{2} \log_2\left(1 + \frac{P}{\sigma_N^2}\right) \quad \text{[bits per transmission, i.e., bits per channel use]}.$$
The input pdf that achieves this capacity is a zero-mean Gaussian (with variance P).
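For concreteness (the values of P and the noise variance are assumed):

  P = 4; sigma2 = 1;              % signal power and noise variance (assumed)
  C = 0.5*log2(1 + P/sigma2)      % ~1.16 bits per channel use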
Assume:
Band-limited channel: the channel has a given bandwidth W, so we can use only the frequencies within that band.
Input power constraint: $E[X^2(t)] \le P$.
AWGN with PSD $N_0/2$.
Capacity:
$$C = W \log_2\left(1 + \frac{P}{N_0 W}\right) \quad \text{[bps]}.$$
This is the celebrated equation for the capacity of a band-limited AWGN channel.
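A one-line numerical example (a telephone-line-like channel with W = 3 kHz and 30 dB SNR, both assumed values):

  W = 3000; SNR = 1000;           % bandwidth [Hz] and P/(N0*W) (assumed)
  C = W*log2(1 + SNR)             % ~29.9 kbps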