Towards an Algebraic Network Information Theory
Bobak Nazer Boston University Charles River Information Theory Day April 28, 2014
Network Information Theory

Goal: Roughly speaking, for a given network, determine necessary and sufficient conditions on the rates at which the sources (or some functions thereof) can be communicated to the destinations.

Classical Approach: random i.i.d. code ensembles.
• Techniques: ...block Markov coding, and many more...
• Successes: ...channels, Slepian-Wolf compression, network coding, and many more...
• Recently unified in the textbook of El Gamal and Kim.

Algebraic Approach: random linear and lattice code ensembles.
• Early work on distributed compression and, more recently, many papers on physical-layer network coding, distributed dirty paper coding, and interference alignment.
Road Map
• The Slepian-Wolf problem and random (linear) binning.
• Distributed computation of the modulo-two sum: the Körner-Marton problem.
• Compute-and-forward for Gaussian networks, with an application to interference alignment.
Slepian-Wolf Problem
[Block diagram: encoders E1 and E2 compress s1 and s2 at rates R1 and R2; decoder D outputs (ŝ1, ŝ2).]
• Source sequences are drawn i.i.d. from a joint distribution: $(s_1, s_2) \sim \prod_{i=1}^{n} p_{S_1 S_2}(s_{1i}, s_{2i})$.
• Goal: send s1 and s2 to the decoder with vanishing probability of error, $\mathbb{P}\{(\hat{s}_1, \hat{s}_2) \neq (s_1, s_2)\} \to 0$ as $n \to \infty$.
Random Binning
• Independently assign each sequence s1 a label drawn uniformly from $\{1, 2, \ldots, 2^{nR_1}\}$ and each sequence s2 a label drawn uniformly from $\{1, 2, \ldots, 2^{nR_2}\}$.
• The decoder looks for a unique jointly typical pair (ŝ1, ŝ2) within the received bin pair (ℓ1, ℓ2).
• By the union bound, the probability that some other jointly typical pair (s̃1, s̃2) lands in the same bin pair is at most $2^{n(H(S_1,S_2)+\epsilon)} \cdot 2^{-n(R_1+R_2)}$, which vanishes if $R_1 + R_2 > H(S_1, S_2) + \epsilon$.
Slepian-Wolf Problem: Binning Illustration
[Figure: a grid of bin pairs, indexed 1, 2, 3, 4, ..., $2^{nR_1}$ horizontally and 1, 2, 3, 4, ..., $2^{nR_2}$ vertically; typical source pairs are scattered across the grid.]
Random Linear Binning
• Map the source alphabets to the finite field $\mathbb{F}_p$.
• Draw binning matrices with i.i.d. uniform entries from $\mathbb{F}_p$. Each sequence s1 is binned via matrix multiplication, $w_1 = G_1 s_1$, and each sequence s2 via $w_2 = G_2 s_2$.
• Each label is uniform, and labels of distinct sequences collide with the same probability as under i.i.d. binning (except for $s_\ell = 0$), so the union-bound analysis goes through unchanged (a simulation sketch follows below).
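As a sanity check of the pairwise-collision claim, here is a minimal Python sketch (the block length, label length, field size, and trial count are arbitrary illustrative choices): for any fixed nonzero difference d, the two sequences share a bin exactly when Gd = 0, which happens with probability $p^{-k}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 16, 8, 2            # block length, label length, field size
d = np.zeros(n, dtype=int)
d[0] = 1                      # a fixed nonzero difference between two sequences

trials, collisions = 20000, 0
for _ in range(trials):
    G = rng.integers(0, p, size=(k, n))   # fresh random binning matrix
    # The two sequences land in the same bin iff G d = 0 over F_p.
    if not np.any((G @ d) % p):
        collisions += 1

print(collisions / trials, p ** (-k))     # both approximately 2^-8
```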
Slepian-Wolf Rate Region

Slepian-Wolf Theorem: Reliable compression is possible if and only if
$$R_1 \geq H(S_1|S_2), \qquad R_2 \geq H(S_2|S_1), \qquad R_1 + R_2 \geq H(S_1, S_2).$$
Random linear binning is as good as random i.i.d. binning.
[Figure: the rate region in the $(R_1, R_2)$ plane; for the doubly symmetric binary source below, the per-rate constraints are $h_B(\theta)$ and the sum-rate boundary is $R_1 + R_2 = 1 + h_B(\theta)$.]
Example: Doubly Symmetric Binary Source
$S_1 \sim \mathrm{Bern}(1/2)$, $U \sim \mathrm{Bern}(\theta)$, $S_2 = S_1 \oplus U$.
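To make the region concrete, a small Python helper (θ = 0.1 is an arbitrary example value) evaluates the Slepian-Wolf thresholds for this source:

```python
import numpy as np

def h_binary(theta):
    """Binary entropy function h_B(theta) in bits."""
    t = np.clip(theta, 1e-12, 1 - 1e-12)
    return float(-t * np.log2(t) - (1 - t) * np.log2(1 - t))

theta = 0.1
print("R1 >= H(S1|S2) =", h_binary(theta))          # ~0.469 bits
print("R2 >= H(S2|S1) =", h_binary(theta))
print("R1+R2 >= H(S1,S2) =", 1 + h_binary(theta))   # ~1.469 bits
```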
Körner-Marton Problem
[Block diagram: encoders E1 and E2 compress s1 and s2 at rates R1 and R2; decoder D outputs û, where u = s1 ⊕ s2.]
• The decoder only wants the modulo-two sum $u = s_1 \oplus s_2$ of the doubly symmetric binary source, i.e., the Bernoulli(θ) noise pattern.
• Rate Region: Set of rates (R1, R2) such that there exist encoders and decoders with vanishing probability of error $\mathbb{P}\{\hat{u} \neq u\} \to 0$ as $n \to \infty$.
• Are any rate savings possible over sending s1 and s2 in their entirety?
Random Binning
• If the decoder recovers the pair (s1, s2) from i.i.d. bins and then computes the sum, Slepian-Wolf requires $R_1 + R_2 > 1 + h_B(\theta)$.
• For i.i.d. bins to reveal the sum directly, the labels would have to be consistent across source pairs with the same sum. But this probability goes to zero exponentially fast!
Körner-Marton Problem: Binning Illustration
[Figure: the same grid of bin pairs, indexed 1, ..., $2^{nR_1}$ by 1, ..., $2^{nR_2}$.]
Linear Binning
Use the same linear binning matrix G at both encoders: $w_1 = G s_1$, $w_2 = G s_2$. Then
$$w_1 \oplus w_2 = G s_1 \oplus G s_2 = G(s_1 \oplus s_2) = G u,$$
so the sum of the bin labels is itself a linear bin of the desired sum u (see the sketch below).
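A minimal Python sketch of this identity over $\mathbb{F}_2$ (the matrix dimensions and noise level θ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, theta = 24, 12, 0.1

G = rng.integers(0, 2, size=(k, n))        # same binning matrix at both encoders
s1 = rng.integers(0, 2, size=n)            # S1 ~ Bern(1/2)
u = (rng.random(n) < theta).astype(int)    # U ~ Bern(theta)
s2 = (s1 + u) % 2                          # S2 = S1 xor U

w1, w2 = (G @ s1) % 2, (G @ s2) % 2
# The decoder can form w1 xor w2, which is exactly the linear bin of u:
assert np.array_equal((w1 + w2) % 2, (G @ u) % 2)
```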
Körner-Marton Theorem
Reliable compression of the sum is possible if and only if $R_1 \geq h_B(\theta)$ and $R_2 \geq h_B(\theta)$.
Körner-Marton Rate Region
[Figure: in the $(R_1, R_2)$ plane, the Körner-Marton region $R_1, R_2 \geq h_B(\theta)$ strictly contains the Slepian-Wolf region, since it drops the sum-rate constraint $R_1 + R_2 \geq 1 + h_B(\theta)$.]
Linear codes can improve performance! (for distributed computation of dependent sources)
(Algebraic) Network Source Coding
• Structured coding schemes for distributed Gaussian source coding.
• Extensions to distributed source coding for discrete memoryless sources.
• Compare against the Berger-Tung region (best known performance via i.i.d. ensembles).
Compute-and-Forward

Goal: Convert noisy Gaussian networks into noiseless finite field ones.

[Block diagram: messages $w_1, w_2, \ldots, w_K$ are encoded by E1, ..., EK into $x_1, \ldots, x_K$, scaled by channel gains $h_1, \ldots, h_K$, and summed with noise z to produce y; the decoder D outputs $\hat{w}_1, \ldots, \hat{w}_K$.]

[Equivalent finite-field picture: the messages $w_1, \ldots, w_K$ over $\mathbb{F}_p$ enter a noiseless network that delivers linear combinations $u_1, \ldots, u_K$ to the decoder D, which solves them for $\hat{w}_1, \ldots, \hat{w}_K$.]
Compute-and-Forward: Problem Statement

[Block diagram: L encoders map $w_1, \ldots, w_L$ to $x_1, \ldots, x_L$; the receiver observes y through gains $h_1, \ldots, h_L$ plus noise z and outputs $\hat{u}_1, \ldots, \hat{u}_M$.]
• Desired linear combinations: $u_m = \bigoplus_{\ell=1}^{L} q_{m\ell} w_\ell$ with messages $w_\ell \in \mathbb{F}_p^k$.
• Real-valued channel inputs and output: $x_\ell, y \in \mathbb{R}^n$.
• Power constraint: $\frac{1}{n}\mathbb{E}\|x_\ell\|^2 \leq P$.
• Message rate: $R = \frac{k}{n}\log_2 p$.
• Vanishing probability of error: $\lim_{n\to\infty} \mathbb{P}\{\hat{u}_m \neq u_m \text{ for some } m\} = 0$.
• The receiver matches the linear combination coefficients $q_{m\ell} \in \mathbb{F}_p$ to the channel coefficients $h_\ell \in \mathbb{R}$. Transmitters do not require CSI.
Computation Rate
• An integer coefficient vector $a_m = [a_{m1}\; a_{m2}\; \cdots\; a_{mL}]^T \in \mathbb{Z}^L$ corresponds to the combination $u_m = \bigoplus_{\ell=1}^{L} q_{m\ell} w_\ell$ where $q_{m\ell} = [a_{m\ell}] \bmod p$ (where we assume an implicit mapping between $\mathbb{F}_p$ and $\mathbb{Z}_p$).
• A computation rate $R_{\mathrm{comp}}(h, a)$ is achievable if, for any $\epsilon > 0$ and $n, p$ large enough, a receiver can decode any linear combinations with integer coefficient vectors $a_1, \ldots, a_M \in \mathbb{Z}^L$ for which the message rate $R$ satisfies $R < \min_m R_{\mathrm{comp}}(h, a_m)$.
Compute-and-Forward: Achievable Rates

Theorem (Nazer-Gastpar ’11)
The computation rate region described by
$$R_{\mathrm{comp}}(h, a) = \max_{\alpha \in \mathbb{R}} \frac{1}{2}\log^+\!\left(\frac{P}{\alpha^2 + P\,\|\alpha h - a\|^2}\right) = \frac{1}{2}\log^+\!\left(\Big(a^T \big(P^{-1} I + h h^T\big)^{-1} a\Big)^{-1}\right)$$
is achievable.

[Block diagram: the noisy channel from $w_1, \ldots, w_L$ to y is converted into a noiseless finite-field network delivering $\hat{u}_1, \ldots, \hat{u}_M$.]
The receiver can recover finite-field combinations with coefficient vectors $q_1, \ldots, q_M$ if $R < \min_m R_{\mathrm{comp}}(h, a_m)$ for some $a_m \in \mathbb{Z}^L$ satisfying $[a_m] \bmod p = q_m$.

Special Cases:
• Matched integer channel, $h = a \in \mathbb{Z}^L$: $R_{\mathrm{comp}}(a, a) = \frac{1}{2}\log^+\!\left(\frac{1}{\|a\|^2} + P\right)$.
• Decoding message m while treating the others as noise, $a = [0 \cdots 0\; 1\; 0 \cdots 0]^T$ with $m{-}1$ leading zeros: $R_{\mathrm{comp}} = \frac{1}{2}\log\!\left(1 + \frac{h_m^2 P}{1 + P\sum_{\ell \neq m} h_\ell^2}\right)$.
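A small numerical sketch of the theorem (unit noise variance; P and the test vectors are arbitrary example values), evaluating the closed form and cross-checking it against the max-over-α form on a fine grid:

```python
import numpy as np

def r_comp(h, a, P):
    """Computation rate (bits/channel use) via the closed form above."""
    h, a = np.asarray(h, float), np.asarray(a, float)
    M = np.linalg.inv(np.eye(len(h)) / P + np.outer(h, h))
    return max(0.5 * np.log2(1.0 / (a @ M @ a)), 0.0)   # log^+

h, a, P = np.array([1.4, 2.1]), np.array([2.0, 3.0]), 10.0
print(r_comp(h, a, P))                                  # ~1.16 bits

# Cross-check: maximize P / (alpha^2 + P ||alpha h - a||^2) over a grid of alpha.
alphas = np.linspace(0.0, 3.0, 200001)
denom = alphas**2 + P * np.sum((alphas[:, None] * h - a) ** 2, axis=1)
print(max(0.5 * np.log2(P / denom.min()), 0.0))         # matches the closed form
```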
Compute-and-Forward: Effective Noise
$$y = \sum_{\ell=1}^{L} h_\ell x_\ell + z = \underbrace{\sum_{\ell=1}^{L} a_\ell x_\ell}_{\text{Desired Codeword}} + \underbrace{\sum_{\ell=1}^{L} (h_\ell - a_\ell)\, x_\ell + z}_{\text{Effective Noise}}$$
Goal: decode $\bigoplus_{\ell} q_\ell w_\ell$ from the integer combination.
• Integer combinations of codewords should themselves be codewords ⇒ lattice codebook.
• The effective noise should be independent of the desired codeword ⇒ dithering.
• The codebook must emulate arithmetic over $\mathbb{F}_p$ for prime p while meeting the power constraint ⇒ nested lattice codebook.
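A one-step derivation sketch (assuming unit-variance noise and dithered codewords with per-dimension power exactly P) connecting this decomposition to the α-scaled denominator in the rate formula:

```latex
\alpha y = \sum_{\ell=1}^{L} a_\ell x_\ell
  + \underbrace{\sum_{\ell=1}^{L} (\alpha h_\ell - a_\ell)\, x_\ell + \alpha z}_{\text{effective noise}},
\qquad
\frac{1}{n}\,\mathbb{E}\Big\| \sum_{\ell=1}^{L} (\alpha h_\ell - a_\ell)\, x_\ell + \alpha z \Big\|^2
  = \alpha^2 + P\,\|\alpha h - a\|^2 .
```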
Nested Lattices
• Lattice quantizer: $Q_\Lambda(x) = \arg\min_{\lambda \in \Lambda} \|x - \lambda\|^2$.
• A (coarse) lattice $\Lambda$ is nested in a fine lattice $\Lambda_{\mathrm{FINE}}$ if $\Lambda \subset \Lambda_{\mathrm{FINE}}$.
• Modulo operation: $[x] \bmod \Lambda = x - Q_\Lambda(x)$.
• Distributive Law: $\big[\, a\, [x] \bmod \Lambda \,\big] \bmod \Lambda = [a x] \bmod \Lambda$ for all $a \in \mathbb{Z}$.
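A minimal Python sketch of these operations using the scaled integer lattice $\Lambda = q\,\mathbb{Z}^n$ (a toy stand-in for the coding lattices in the talk; q and the dimension are arbitrary choices):

```python
import numpy as np

q = 4.0  # assumed lattice scale for the toy lattice q * Z^n

def quantize(x):
    """Nearest-neighbor lattice quantizer Q_Lambda(x) for Lambda = q Z^n."""
    return q * np.round(x / q)

def mod_lattice(x):
    """[x] mod Lambda = x - Q_Lambda(x)."""
    return x - quantize(x)

rng = np.random.default_rng(0)
x = rng.normal(size=5)
a = 3  # integer coefficient

# Distributive law: [a ([x] mod Lambda)] mod Lambda == [a x] mod Lambda,
# because a * Q_Lambda(x) is itself a lattice point when a is an integer.
assert np.allclose(mod_lattice(a * mod_lattice(x)), mod_lattice(a * x))
```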
Nested Lattice Codes
• Construct the codebook by taking all elements of $\Lambda_{\mathrm{FINE}}$ that lie in the fundamental Voronoi region of $\Lambda$.
• The fine lattice should be good for coding against (effective) noise.
• The coarse lattice should be good for shaping, so that the codebook satisfies the power constraint (its Voronoi region is approximately the ball $\mathcal{B}(0, \sqrt{nP})$).
• Such nested lattice ensembles exist (Loeliger ’97, Forney-Trott-Chung ’00, Erez-Litsyn-Zamir ’05, Ordentlich-Erez ’12) and can achieve the point-to-point Gaussian capacity.
Compute-and-Forward: Illustration
Running example: channel vector h = [1.4 2.1], integer coefficient vector $a_m$ = [2 3].
• All users employ the same nested lattice code.
• Choose message vectors over the finite field, $w_\ell \in \mathbb{F}_p^k$.
• Map each $w_\ell$ to a lattice point $t_\ell = \phi(w_\ell)$.
• Transmit the lattice points over the channel; the codewords are scaled by the channel coefficients and added together, plus noise.
• Non-integer channel coefficients incur an extra noise penalty: effective noise $1 + P\,\|h - a_m\|^2$.
• Scale the output by α to reduce the non-integer penalty: effective noise $\alpha^2 + P\,\|\alpha h - a_m\|^2$ (compare the two expressions in the sketch below).
• Decode to the closest fine lattice point and recover the integer linear combination mod $\Lambda_C$.
• Map back to the linear combination of the messages, $\bigoplus_{\ell} q_{m\ell} w_\ell$.
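A minimal numeric sketch of the α gain using the illustration's h = [1.4, 2.1] and a = [2, 3] (the power P = 10 is an assumed value; the MMSE-optimal $\alpha = P\,h^T a / (1 + P\,\|h\|^2)$ minimizes the effective noise):

```python
import numpy as np

h = np.array([1.4, 2.1])
a = np.array([2.0, 3.0])
P = 10.0  # assumed transmit power

# Without receiver scaling (alpha = 1):
naive = 1 + P * np.sum((h - a) ** 2)

# MMSE-optimal alpha minimizes alpha^2 + P * ||alpha h - a||^2:
alpha = P * (h @ a) / (1 + P * (h @ h))
scaled = alpha**2 + P * np.sum((alpha * h - a) ** 2)

print(naive, scaled)   # ~12.7 vs ~2.0: scaling sharply reduces the effective noise
```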
Random i.i.d. codes are not good for computation
• Two independent i.i.d. random codebooks with $2^{nR}$ codewords each generate $2^{2nR}$ distinct possible sums of codewords: the sum of two i.i.d. codewords looks like a fresh random vector, so decoding the sum costs as much rate as decoding both messages.
[Figure: two codebooks transmitted over a two-user Gaussian MAC with inputs $x_1, x_2$, noise z, and output y.]
(Algebraic) Network Channel Coding
• Compute-and-forward can emulate and extend classical multi-user coding techniques.
• For the Gaussian multiple-access channel, the sum of the K best computation rates is exactly equal to the multiple-access sum capacity. Algebraic successive cancellation gives this operational meaning.
• Analogous results are emerging for discrete memoryless networks.
• Next: an application to interference alignment.
Interference-Free Capacity
[Diagram: each of the K transmitter-receiver pairs communicates as if the other users were absent.]

Time Division
[Diagram: the K transmitter-receiver pairs take turns using the channel, each active for a 1/K fraction of the time.]
Interference Alignment
[Diagram: K transmitter-receiver pairs; transmissions are designed so that, at each receiver, all interference is confined to a common subspace.]
• Cadambe-Jafar ’08: interference alignment achieves K/2 degrees of freedom for the K-user interference channel.
• Maddah-Ali-Motahari-Khandani ’08: alignment for the MIMO X channel.
• See the Jafar ’11 monograph (or recent e-book) for a richer history.
Symmetric K-User Gaussian Interference Channel
[Block diagram: K encoders E1, ..., EK with inputs $x_1, \ldots, x_K$; each decoder Dk observes $y_k$ with noise $z_k$ and outputs $\hat{w}_k$.]
• Degrees of freedom are known for almost all channel gains: Motahari et al. ’09, Wu-Shamai-Verdu ’11.
• Approximate capacity is known in special cases: two-user Etkin-Tse-Wang ’08, many-to-one and one-to-many Bresler-Parekh-Tse ’10, cyclic Zhou-Yu ’13.
Effective Multiple-Access Channel
From the perspective of receiver k, the symmetric interference channel acts as a two-user multiple-access channel between the desired signal and the sum of the interfering signals:
$$y_k = x_k + g \sum_{\ell \neq k} x_\ell + z_k .$$
• Successive Cancellation Decoding: decode the interference sum $\sum_{\ell \neq k} x_\ell$, then decode $x_k$.
• Joint Decoding: decode $x_k$ and the interference sum jointly.
Example: Two-User Lattice Alignment
• Suppose both users employ the same lattice codebook and the receiver observes a combination with gains 1 and $\sqrt{2}$.
• The receiver can decode the combination if the ratio of the coefficients is irrational: distinct pairs of lattice points then produce distinct combinations, so the combination determines the pair of codewords.
Alignment via Two Equations
• Irrationality-based arguments fail on a set of channel gains of measure zero: there is a loss of degrees-of-freedom for rational coefficients (Etkin-Ordentlich ’09, Motahari et al. ’09, Wu-Shamai-Verdu ’11).
• Instead, decode two linear combinations $a_1 x_k + a_2 \sum_{\ell \neq k} x_\ell$ and $b_1 x_k + b_2 \sum_{\ell \neq k} x_\ell$ using the compute-and-forward framework. If the coefficient vectors are linearly independent, we can solve for the desired message.
• Rational channel gains with denominator $\mathrm{SNR}^{1/4}$ or smaller still cause issues.
Symmetric K-User Gaussian Interference Channel
[Plot: symmetric rate versus cross-channel gain g, comparing the lattice-alignment scheme to the two-user upper bound at 15 dB and 25 dB.]
Approximate Capacity Results: Strong Regime
By matching computation rates to the multiple-access sum capacity, we can approximate the sum capacity of the symmetric K-user Gaussian interference channel in all regimes. The achievable symmetric rate $R_{\mathrm{sym}}$ is driven by $\max_{a \in \mathbb{Z}^2} R_{\mathrm{comp}}$ over the effective two-user channel, which lets us approximate the sum capacity up to an outage set.

Theorem
$$\frac{1}{4}\log^+\!\big(g^2\,\mathrm{SNR}\big) - \frac{c}{2} - 3 \;\leq\; C_{\mathrm{sym}} \;\leq\; \frac{1}{4}\log^+\!\big(g^2\,\mathrm{SNR}\big) + 1$$
for all channel gains except for an outage set whose measure is a fraction $2^{-c}$ of the interval $1 < |g| < \sqrt{\mathrm{SNR}}$, for any $c > 0$. A quick numeric evaluation of these bounds follows below.
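A two-line Python sketch of the theorem's bounds (the values of g, SNR, and c are arbitrary examples; c trades the additive gap against the outage-set measure $2^{-c}$):

```python
import numpy as np

def csym_bounds(g, snr_db, c=2.0):
    """Lower/upper bounds on C_sym from the theorem above (bits/channel use)."""
    snr = 10 ** (snr_db / 10)
    base = 0.25 * max(np.log2(g**2 * snr), 0.0)   # (1/4) log^+(g^2 SNR)
    return base - c / 2 - 3, base + 1

print(csym_bounds(g=3.0, snr_db=45))   # ~(0.53, 5.53)
```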
Generalizations
• The same approach yields interference alignment for any setting where we have “stream-by-stream” alignment.
Algebraic Structure in Network Information Theory
Some topics we did not have a chance to cover:
• Nam-Chung-Lee ’10, ’11, Goseling-Gastpar-Weber ’11, Song-Devroye ’13, Nokleby-Aazhang ’12
• Nazer-Sanderovich-Gastpar-Shamai ’09, Zhan-Nazer-Erez-Gastpar ’12, Hong-Caire ’13, Ordentlich-Erez ’13
• Philosof-Zamir-Erez-Khisti ’11, Wang ’12
• Nazer-Gastpar ’07, ’08, Soundararajan-Vishwanath ’12
• Kashyap-Shashank-Thangaraj ’12
Concluding Remarks
• Algebraic structure can offer gains over purely random i.i.d. ensembles.
• Towards a unified algebraic framework for network source and channel coding.
• Discrete memoryless analogues of these results now seem within reach.
• What is the right notion of structure?
• Papers and slides are available on my website.
• See also Zamir’s book, “Lattice Coding for Signals and Networks.”