Lecture 04 Reliable Communication, I-Hsiang Wang (ihwang@ntu.edu.tw), PowerPoint PPT Presentation



SLIDE 1

Principle of Communications, Fall 2017

Lecture 04 Reliable Communication

I-Hsiang Wang

ihwang@ntu.edu.tw National Taiwan University 2017/10/25,26

SLIDE 2

Previous lectures:

Transmitter: information bits {bi} → ECC Encoder → coded bits {ci} → Symbol Mapper → discrete sequence {um} → Pulse Shaper → baseband waveform xb(t) → Up Converter → passband waveform x(t)

Channel: x(t) → Noisy Channel → y(t)

Receiver: y(t) → Down Converter → yb(t) → Filter + Sampler + Detection → {ûm} → Symbol Demapper → {ĉi} → ECC Decoder → {b̂i}

The Binary Interface at {ci} separates Channel Coding (ECC Encoder/Decoder) from the digital modulation chain.

Focusing on digital modulation, we can ensure that the coded bits {ci} are reconstructed optimally (i.e., with minimum average probability of error) at the receiver.

SLIDE 3

However, this is not good enough …

  • The average symbol probability of error decays exponentially with SNR: Pe ≐ exp(−c·SNR). For each symbol, Pe = 10⁻³ is already pretty good!
  • Consider a file mapped and converted into n = 250 symbols. The file cannot be reconstructed if even one symbol is wrong.
  • The "file" probability of error is 1 − (1 − Pe)^n ≈ n·Pe = 250/1000 = 0.25. Pretty bad!
  • But we cannot do much at this level: noise is inevitable, and modulation only operates at the symbol level, not the file level.
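As a sanity check, the slide's arithmetic can be reproduced directly; this is a minimal sketch using the example's numbers (per-symbol Pe = 10⁻³, file length n = 250):

```python
# File error probability from the slide's example: per-symbol Pe = 1e-3,
# file length n = 250 symbols. The file fails if any symbol is wrong.
Pe = 1e-3
n = 250

exact = 1 - (1 - Pe) ** n   # exact "file" probability of error
approx = n * Pe             # first-order approximation used on the slide

print(f"exact  = {exact:.4f}")   # about 0.2213
print(f"approx = {approx:.4f}")  # 0.2500
```

The first-order approximation n·Pe is accurate here because n·Pe is still well below 1.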

SLIDE 4

This lecture:

Same architecture: {bi} → ECC Encoder → {ci} → Symbol Mapper → {um} → Pulse Shaper → xb(t) → Up Converter → x(t) → Noisy Channel → y(t) → Down Converter → yb(t) → Filter + Sampler + Detection → {ûm} → Symbol Demapper → {ĉi} → ECC Decoder → {b̂i}

The focus now shifts to the Channel Coding blocks (ECC Encoder/Decoder) on the two sides of the Binary Interface.

We introduce error correction coding, which adds redundancy to the original file. With it, we are able to make the overall "file" probability of error arbitrarily small! The prices to pay are data rate and energy. This is reliable communication!

SLIDE 5

Equivalent Discrete-time Complex Baseband Channel

b = [b1 b2 ... bk] → ECC Encoder → c = [c1 c2 ... cn] → Digital Modulator → u = [u1 u2 ... uñ], with ñ = n/ℓ symbols (ℓ coded bits per modulated symbol). The channel gives V = u + Z, from which the receiver produces b̂.

  • Soft decision: jointly consider detection and decoding; work directly on the demodulated symbols (a single Detection + Decoder block).
  • Hard decision: consider decoding only; the ECC Decoder works on the detected bit sequences produced by a separate Demodulation + Detection stage.

We focus on soft decision first!

Rate: R = k/n

SLIDE 6

Outline

  • Prelude: repetition coding
  • Energy-efficient reliable communication: orthogonal code
  • Rate-efficient reliable communication: linear block code
  • Convolutional code


SLIDE 7

Part I. Prelude: Repetition Coding

Repetition code, Rate and Energy efficiency

SLIDE 8

Repetition: a simple way to enhance reliability

  • Idea: repeat each bit N times ⟹ data rate R = 1/N.
  • We focus on the architecture below:


There are many ways to repeat. For example, with original bit sequence b1 b2 b3 b4 b5:

  • coded bit seq. 1: repeat each bit N times in place
  • coded bit seq. 2: repeat the whole block N times, e.g., b1 b2 b3 b4 b5 | b1 b2 b3 b4 b5 | b1 b2 b3 b4 b5 | b1 b2 b3 b4 b5

Architecture: b = [b1 b2 ... bk] → Repetition → c = [c1 c2 ... cn] → Digital Modulator → u = [u1 u2 ... uñ], ñ = n/ℓ, over the equivalent discrete-time complex baseband channel V = u + Z, followed by Detection + Decoder producing b̂.

Here each block of ℓ bits is repeated N times:

  c = [ b1∼bℓ | b1∼bℓ | ... | b1∼bℓ | bℓ+1∼b2ℓ | ... ]   (each block repeated N times)

with ℓ = # of bits in a symbol and n = kN.

SLIDE 9

With repetition, the first N modulated symbols all carry the same block of bits:

  c = [ b1∼bℓ | b1∼bℓ | ... | b1∼bℓ | bℓ+1∼b2ℓ | ... ]   (each block repeated N times)
  u1 = u2 = ... = uN = mod(b1∼bℓ)

Equivalent vector symbol: u = [u1 u2 ... uN] ∈ ℂ^N.

  • Since the noises are i.i.d., it suffices to use the N-dim. demodulated vector V = u + Z to optimally decode b1∼bℓ.

SLIDE 10

BPSK + repetition coding (ℓ = 1)

Equivalent channel model: V = u + Z ∈ ℂ^N, with Z1, ..., ZN i.i.d. ∼ CN(0, N0).
Equivalent constellation set: u ∈ {a0, a1}, where

  a0 = −[d d ... d],  a1 = +[d d ... d]

Performance analysis:

  Pe^(N) = Q( ‖a1 − a0‖ / (2√(N0/2)) ) = Q( √(N·4d² / (2N0)) ) = Q( √(2N d²/N0) ) = Q( √(2N·SNR) ) ≐ exp(−N·SNR)

where SNR = d²/N0 is the average energy per uncoded symbol divided by the total noise variance per symbol.

Repetition effectively increases SNR by N-fold!
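The N-fold SNR gain can be seen numerically; this sketch evaluates Pe(N) = Q(√(2N·SNR)) from the slide, with an illustrative SNR value of our choosing:

```python
import math

# Q-function via the complementary error function: Q(x) = 0.5 * erfc(x / sqrt(2)).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# BPSK + N-fold repetition: Pe(N) = Q(sqrt(2*N*SNR)), per the slide.
# snr = 2.0 (linear, not dB) is our illustrative choice.
snr = 2.0
pe = [Q(math.sqrt(2 * N * snr)) for N in (1, 2, 4, 8)]
print(pe)  # strictly decreasing: repetition trades rate for reliability
```

Doubling N roughly squares the Gaussian tail, matching the exp(−N·SNR) behavior on the slide.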

SLIDE 11

Rate and energy efficiency


  • Rate: R = 1/N → 0 as N → ∞
  • Energy per bit: Eb = N d² → ∞ as N → ∞
  • We achieve arbitrarily small probability of error, but at the price of zero rate and infinite energy per bit.
  • Question: can we resolve the issue with more general constellation sets?

SLIDE 12

General modulation + repetition coding

12

Equivalent channel model: V = u + Z ∈ CN Z1, ...ZN

i.i.d.

∼ CN(0, N0) Equivalent constellation set: Probability of error (take M-ary PAM as an example): u ∈ {a1, ..., aM} M = 2 P(N)

e

= 2(1 − 2−ℓ)Q

  • N

6 4ℓ−1SNR

  • Rate: R = /N

= 2(1 − 2−NR)Q

  • N

4NR−16SNR

  • limN→∞ P(N)

e

= 0 ⇐ ⇒ limN→∞ 4NR−1

N

= 0 Energy per bit:

Eb N0 = N ℓ SNR = SNR R

→ ∞ as N → ∞ → 0 as N → ∞ it is necessary that limN→∞ R = 0

SLIDE 13

Why repetition coding is not very good

  • Repetition coding: high reliability at the price of asymptotically zero rate and infinite energy per bit.
  • Repetition is too naive and does not utilize the available degrees of freedom in the N-dimensional space efficiently.
  • Is it possible to design better coding schemes with the following?
  • Vanishing probability of error
  • Positive rate
  • Finite energy per bit

YES!

SLIDE 14

Part II. Energy-Efficient Reliable Communication

Orthogonal code, Optimal energy efficiency

SLIDE 15

Orthogonal coding

  • With N dimensions (N time slots), use N equal-norm orthogonal vectors to encode log2 N bits.
  • Since the noises are i.i.d. circularly symmetric complex Gaussian, we can WLOG assume that these N vectors are simply scaled standard unit vectors:

    { d·ei | i = 1, ..., N },  where ei(j) = 1{i = j}

Architecture: b = [b1 b2 ... bk] → Encoder + Modulation → u → V = u + Z → Detection + Decoder → b̂ (here we jointly consider coding and modulation).

SLIDE 16

Example: N = 8

  info. bits   symbol vector
  000          [d 0 0 0 0 0 0 0]
  001          [0 d 0 0 0 0 0 0]
  010          [0 0 d 0 0 0 0 0]
  011          [0 0 0 d 0 0 0 0]
  100          [0 0 0 0 d 0 0 0]
  101          [0 0 0 0 0 d 0 0]
  110          [0 0 0 0 0 0 d 0]
  111          [0 0 0 0 0 0 0 d]

This is equivalent to encoding messages using the location of a pulse: Pulse Position Modulation (PPM).
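The PPM mapping in the table can be sketched directly; the function name `ppm_encode` and the amplitude d = 1.0 are our illustrative choices:

```python
# Minimal sketch of the PPM / orthogonal-code mapping: k = log2(N) info bits
# select which of the N positions carries the pulse amplitude d.
def ppm_encode(bits, d=1.0):
    N = 2 ** len(bits)                       # codeword length
    idx = int("".join(map(str, bits)), 2)    # bits read as a binary index
    return [d if j == idx else 0.0 for j in range(N)]

print(ppm_encode([0, 1, 1]))  # [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]
```

With 3 info bits, the output matches the row "011 → [0 0 0 d 0 0 0 0]" of the table.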

SLIDE 17

Performance analysis of orthogonal coding

Equivalent channel model: V = u + Z ∈ ℂ^N, with Z1, ..., ZN i.i.d. ∼ CN(0, N0).
Equivalent constellation set: u ∈ { d·ei | i = 1, ..., N }
Rate: R = log2 N / N → 0 as N → ∞
Energy per bit: Eb = d² / log2 N

Probability of error (union bound, with dmin = √2·d):

  Pe^(N) ≤ (N − 1) Q( dmin / (2√(N0/2)) ) = (N − 1) Q( √(d²/N0) )
         ≤ N Q( √( log2 N · Eb/N0 ) )
         ≤ (1/2) exp( −ln N · ( Eb/((2 ln 2) N0) − 1 ) )
         → 0 as N → ∞, as long as Eb/N0 > 2 ln 2, i.e., Eb > (2 ln 2) N0.

Finite energy per bit suffices!
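The union bound above can be checked numerically; this sketch evaluates N·Q(√(log2 N · Eb/N0)) for growing N, with Eb/N0 = 3.0 as our illustrative choice (it exceeds 2 ln 2 ≈ 1.386, so the bound should shrink):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# Union bound from the slide: Pe(N) <= N * Q(sqrt(log2(N) * Eb/N0)).
# ebn0 = 3.0 (linear ratio) is an arbitrary value above the 2*ln(2) threshold.
ebn0 = 3.0
bound = [N * Q(math.sqrt(math.log2(N) * ebn0)) for N in (4, 16, 256, 4096)]
print(bound)  # decreasing toward 0 because Eb/N0 > 2*ln(2)
```

Choosing ebn0 below 2 ln 2 instead makes the bound grow with N, which is why the slide's condition is needed.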

SLIDE 18

Minimum energy per bit

  • Does orthogonal coding achieve the minimum Eb/N0?
  • Let us use Shannon's capacity formula to derive the minimum Eb/N0 over all possible coding+modulation schemes.
  • For the additive Gaussian noise channel with energy per channel use P, the best achievable rate satisfies (bits per channel use)

      R < C = log2(1 + P/N0)

  • Energy per bit is hence Eb = P/R, so

      R < log2(1 + R·Eb/N0)

  • The minimum energy per bit when the rate is R can be found:

      Eb/N0 > E*b(R)/N0 = (2^R − 1)/R

  • Taking the infimum over all R, we see:

      inf_{R>0} E*b(R)/N0 = lim_{R↓0} (2^R − 1)/R = ln 2

  • In fact, orthogonal codes can achieve any Eb/N0 > ln 2!
  • but the union bound fails; new techniques are required (see Gallager Ch. 8.5.3 for more details)
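The infimum can be checked numerically; this sketch evaluates the slide's curve E*b(R)/N0 = (2^R − 1)/R for shrinking R (the grid of R values is ours):

```python
import math

# Minimum energy per bit at rate R, per the slide: E*b(R)/N0 = (2^R - 1)/R.
def eb_min(R):
    return (2.0 ** R - 1.0) / R

vals = [eb_min(R) for R in (1.0, 0.1, 0.01, 0.001)]
print(vals)          # decreasing toward ln(2)
print(math.log(2))   # 0.6931...
```

The curve is increasing in R, so the infimum is reached as R ↓ 0, where it equals ln 2.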

SLIDE 19

Part III. Rate-Efficient Reliable Communication

Linear block code, Existence of rate-efficient codes with vanishing error probability

SLIDE 20

Linear block code + BPSK modulation

  • Orthogonal code achieves vanishing probability of error with zero rate but finite energy per bit (energy-efficient reliable communication).
  • Is it possible to achieve vanishing probability of error with positive rate and finite energy per bit (rate-efficient reliable communication)?
  • We focus on the following architecture: linear block code + BPSK modulation, i.e., ℓ = 1 and R = k/n.

    b = [b1 b2 ... bk] → Linear Block Code → c = [c1 c2 ... cn] → Binary PAM Modulator → u → V = u + Z → Detection + Decoder → b̂

  • It turns out this simple architecture can achieve rate-efficient reliable communication!
SLIDE 21

Linear block code

Encoding: matrix multiplication (under binary field arithmetic)

  c = b g,  where b = [b1 b2 ... bk] ∈ {0,1}^k is the message, g ∈ {0,1}^(k×n) is the generator matrix, and c = [c1 c2 ... cn] ∈ {0,1}^n is the codeword.

Coded bit ci is the XOR of the info bits selected by the i-th column of g.

Codebook: the collection of all possible codewords,

  Cg = { c = b g | b ∈ {0,1}^k }
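The encoding map c = b·g over GF(2) can be sketched with plain lists; the (3,6) systematic generator matrix below is our own toy example, not one from the lecture:

```python
# Linear block encoding c = b*g over GF(2): ci is the XOR (mod-2 sum) of the
# info bits selected by column i of the generator matrix g.
def encode(b, g):
    n = len(g[0])
    return [sum(b[j] * g[j][i] for j in range(len(b))) % 2 for i in range(n)]

g = [[1, 0, 0, 1, 1, 0],   # systematic form: identity block, then parity block
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1]]

print(encode([1, 0, 1], g))  # [1, 0, 1, 1, 0, 1]
```

Here k = 3 and n = 6, so the rate is R = k/n = 1/2; the codebook Cg has 2^k = 8 codewords.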
SLIDE 22

Receiver: ML decoding

  V = u + Z,  Z ∼ N(0, (N0/2)·In) → ML Decoder → B̂ = φML(V)

  u ∈ A ≜ { a_b | b ∈ {0,1}^k }, a 1-to-1 correspondence between the constellation set A and the codebook Cg:

  (a_b)i = +√Es if (bg)i = 1,  −√Es if (bg)i = 0

i.e., the i-th symbol is the BPSK-modulated outcome of the i-th coded bit.

This is just a 2^k-ary vector detection problem, and we know how to find bounds on the probability of error (union bound):

  Pe(φML; b, g) ≤ Σ_{b̃≠b} P_{b,g}[ E_{b→b̃} ]

Note: the distribution of V depends on the message b and the generator matrix g!
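For the AWGN channel, ML decoding reduces to minimum-distance decoding over all 2^k BPSK-modulated codewords. This exhaustive-search sketch uses our own toy generator matrix and Es = 1 (the `encode` helper is repeated so the block is self-contained):

```python
from itertools import product

# GF(2) encoder, repeated here so this block stands alone.
def encode(b, g):
    return [sum(b[j] * g[j][i] for j in range(len(b))) % 2 for i in range(len(g[0]))]

def ml_decode(v, g, Es=1.0):
    """Exhaustive ML decoding: nearest BPSK-modulated codeword in Euclidean distance."""
    k = len(g)
    best, best_d2 = None, float("inf")
    for b in product([0, 1], repeat=k):           # scan all 2^k messages
        a = [(+1 if ci else -1) * Es ** 0.5 for ci in encode(list(b), g)]
        d2 = sum((vi - ai) ** 2 for vi, ai in zip(v, a))  # squared distance to a_b
        if d2 < best_d2:
            best, best_d2 = list(b), d2
    return best

g = [[1, 0, 1, 1], [0, 1, 0, 1]]      # toy (2,4) code
v = [0.9, -1.2, 1.1, 0.2]             # noisy observation of the codeword for b = [1, 0]
print(ml_decode(v, g))                # [1, 0]
```

The loop over 2^k messages is exactly the exponential complexity the next slide warns about: with fixed rate R, 2^k = 2^(nR).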

SLIDE 23

Existence of a good generator matrix

  • The performance obviously depends on the generator matrix g.
  • The generator matrix g determines the codebook, and hence the decoding algorithm.
  • ML decoding can be realized by exhaustive search:
  • Complexity is exponential in the codeword length n.
  • Reduction of complexity relies on the structure of the codebook.
  • Our goal:
  • NOT to explicitly design a good generator matrix
  • NOT to worry about decoding complexity at this point
  • Instead, we want to show that some generator matrices are good and yield vanishing probability of error.
  • We only need to prove the existence of good codes!


SLIDE 24

Random generator matrix

  G = [Gi,j],  Gi,j i.i.d. ∼ Ber(1/2) for all i = 1, ..., k, j = 1, ..., n
  ⟹ P{G = g} = 1/2^(nk),  ∀ g ∈ {0,1}^(k×n)

Idea for proving existence (fix rate R = k/n, drive n to infinity):

(1) Compute the average-over-random-codebook probability of error P̄e^(n)(R) ≜ E_{G,B}[Pe(φML; B, G)].
(2) Show that P̄e^(n)(R) converges to zero as n → ∞.
(3) Conclude that at least one generator matrix is good!

Why is this easier to upper bound?

SLIDE 25

Upper bounding P̄e^(n)(R) ≜ E_{G,B}[Pe(φML; B, G)]

Pairwise error probability (using ‖u − ũ‖ = 2d·√(# of 1's in c ⊕ c̃)):

  P_{b,g}[ E_{b→b̃} ] = Q( ‖u − ũ‖ / (2√(N0/2)) )
                     = Q( √( 2·SNR · w(c ⊕ c̃) ) )        (w(·): weight of a 0-1 vector)
                     ≤ (1/2) exp( −SNR · w(c ⊕ c̃) )

Union bound: Pe(φML; b, g) ≤ Σ_{b̃≠b} P_{b,g}[ E_{b→b̃} ].

Averaging over the random matrix G and random bits B (with c = bG, c̃ = b̃G):

  P̄e^(n)(R) ≤ Σ_g Σ_b Σ_{b̃≠b} (1/2^(nk)) (1/2^k) (1/2) exp( −SNR · w(c ⊕ c̃) )

SLIDE 26

Swap the order of summation:

  P̄e^(n)(R) ≤ Σ_g Σ_b Σ_{b̃≠b} (1/2^(nk)) (1/2^k) (1/2) exp( −SNR · w(c ⊕ c̃) )
            = Σ_b Σ_{b̃≠b} (1/2^k) (1/2) Σ_g (1/2^(nk)) exp( −SNR · w(c ⊕ c̃) )

For a fixed pair (b, b̃), scanning through all possible matrices g, find the fraction of g such that w((b ⊕ b̃)g) = ℓ, and denote it by fℓ(b ⊕ b̃) ≜ P{ w(xG) = ℓ } with x = b ⊕ b̃.

SLIDE 27

Key observation: for x ≠ 0 and y = xG, the bits y1, ..., yn are i.i.d. ∼ Ber(1/2), so the weight w(y) ∼ Binom(n, 1/2):

  fℓ(x) = P{ w(y) = ℓ } = C(n, ℓ) / 2^n

Hence

  P̄e^(n)(R) ≤ Σ_b Σ_{b̃≠b} (1/2^k) (1/2) Σ_{ℓ=0}^{n} fℓ(b ⊕ b̃) exp( −SNR · ℓ )
            = Σ_b Σ_{b̃≠b} Σ_{ℓ=0}^{n} (1/2^k) (1/2) C(n, ℓ) (1/2^n) exp( −SNR · ℓ )
SLIDE 28

Continuing:

  P̄e^(n)(R) ≤ Σ_b Σ_{b̃≠b} Σ_{ℓ=0}^{n} (1/2^k) (1/2) C(n, ℓ) (1/2^n) exp( −SNR · ℓ )
            = ((2^k − 1)/2) · Σ_{ℓ=0}^{n} C(n, ℓ) (1/2^n) exp( −SNR · ℓ )
            = ((2^k − 1)/2^(n+1)) · (1 + exp(−SNR))^n
            ≤ 2^( n { R − 1 − 1/n + log2(1 + exp(−SNR)) } )

SLIDE 29

Define R* ≜ 1 − log2(1 + exp(−SNR)). If we choose the rate R slightly smaller than R*, say R = R* − δ for some δ > 0, then

  P̄e^(n)(R) ≜ E_{G,B}[Pe(φML; B, G)] ≤ 2^( n { R − 1 − 1/n + log2(1 + exp(−SNR)) } ) = 2^(−nδ − 1) ≤ 2^(−nδ) → 0 as n → ∞.

Hence, when R < R*, there exists at least one sequence of generator matrices with strictly positive rate R and vanishing probability of error! Meanwhile, the energy per bit is finite, too:

  Eb/N0 = n d² / (k N0) = SNR / R
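The achievable-rate threshold R* = 1 − log2(1 + exp(−SNR)) is easy to tabulate; the SNR grid below is our illustrative choice (SNR here is a linear ratio, not dB):

```python
import math

# Rate threshold from the slide: positive rates below R* are achievable with
# vanishing error probability by random linear codes + BPSK.
def r_star(snr):
    return 1.0 - math.log2(1.0 + math.exp(-snr))

for snr in (0.1, 1.0, 4.0):
    print(snr, r_star(snr))
```

As SNR grows, R* approaches 1 bit per channel use, the maximum possible for BPSK; as SNR → 0, R* → 0, consistent with the rate/energy trade-offs earlier in the lecture.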