Anonymity Networks
Laslo Hunhold
Mathematisches Institut, Universität zu Köln
27th July 2017
In the lecture ‘Information Theory and Statistical Physics’ by Prof. Dr. Johannes Berg
motivation

◮ hide the initiator of a message in a computer network
◮ safe whistleblowing under corporate and state surveillance
◮ ‘deniable communication’
◮ decentralized
idea

[figure: network graph; node = network participant, link = possible message path]

◮ all nodes have equal weight
◮ the message is unmodifiable; only the receiver is known
◮ each node on the path flips a biased coin: forward or deliver
◮ each node on the path could be the initiator or just a forwarder
→ the message initiator gets lost in the crowd
model

[figure: nodes n_1, …, n_8 arranged in a ring]

◮ N nodes n_1, …, n_N with P(n_i is initiator) =: P(X = n_i) =: p_i
◮ n_i is ‘probably innocent’ ↔ p_i ≤ 1/2
◮ forwarding probability λ

    if message received then
        flip a biased coin with P(heads) = λ
        if heads then
            forward to a uniformly chosen node
        else
            deliver to the receiver
        end if
    end if
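The coin-flip forwarding rule above can be sketched in a few lines of Python (a minimal simulation, not the presenter's code; node indices and the choice of node 0 as initiator are illustrative assumptions). Every holder of the message, including the initiator, flips the biased coin: heads with probability λ forwards the message to a uniformly chosen node, tails delivers it.

```python
import random

def walk_path(N, lam, rng=None):
    """Simulate one message path: returns the list of nodes that held
    the message, starting with the initiator (node 0, w.l.o.g.).
    The last node in the list delivers to the receiver."""
    rng = rng or random.Random()
    path = [0]                         # the initiator holds the message first
    while rng.random() < lam:          # biased coin, P(heads) = lam
        path.append(rng.randrange(N))  # heads: forward to a uniform node
    return path                        # tails: current holder delivers
```

Since each holder forwards independently with probability λ, the number of forwarding hops is geometrically distributed with mean λ/(1−λ), so larger λ means longer paths and more mixing.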
degree of anonymity

best case: X := X with ∀ i ∈ {1, …, N}: p_i = 1/N

    H_max := H(X) = −∑_{i=1}^{N} p_i · ln(p_i) = ln(N)

worst case: X := X with ∀ i ∈ {1, …, N} \ {j}: p_i = 0 ∧ p_j = 1

    H_min := H(X) = −∑_{i=1}^{N} p_i · ln(p_i) = −1 · ln(1) = 0

degree of anonymity:

    d(X) := 1 − (H_max − H(X)) / H_max = H(X) / H_max ∈ [0, 1]
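The definition of d(X) translates directly into code (a sketch following the formulas above; it assumes N ≥ 2 so that H_max = ln(N) > 0, and uses natural logarithms as on the slide):

```python
import math

def degree_of_anonymity(p):
    """d(X) = H(X) / H_max for a probability distribution p over the
    N nodes, with H_max = ln(N) attained by the uniform distribution.
    Zero-probability terms contribute nothing to the entropy."""
    N = len(p)
    H = -sum(pi * math.log(pi) for pi in p if pi > 0.0)  # Shannon entropy (nats)
    return H / math.log(N)
```

The uniform best case gives d = 1, the fully unmasked worst case gives d = 0, and every intermediate distribution lands strictly in between.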
corruption

[figure: nodes n_1, …, n_8, some of them corrupt]

◮ 0 ≤ C < N corrupt nodes (a corrupt node knows which node passed it an incoming message)
◮ corrupt nodes behave normally
◮ strategy: wait for a message to be passed to us
◮ then analyze the probability of the passer being the initiator

    P(passer is initiator) > 1/2 → the initiator is unmasked
analysis: events

[figure: message path from the initiator n_I via good nodes n_1, n_2, n_3 to the first corrupt node n_C]

let k > 0:

    H_k := the first corrupt node is at the k-th path position

    H_{k+} := ⋃_{i=k}^{∞} H_i

    I := the first corrupt node immediately succeeds the message initiator

    P(passer is initiator) = P(I | H_{1+})

note: H_1 ⇒ I, but I ⇏ H_1 (the message may return to the initiator later on the path)
analysis: general probability I

claim:

    P(I | H_{1+}) = (N − λ(N − C − 1)) / N

proof:

    P(H_k) = (λ · (N − C)/N)^{k−1} · λ · C/N

    ⇒ P(H_{k+}) = ∑_{i=k}^{∞} P(H_i) = … = (C/(N − C)) · (λ(N − C)/N)^k / (1 − λ(N − C)/N)

    H_1 ⇒ I  ⇒  P(I | H_1) = 1

    P(I | H_{2+}) = 1/(N − C)
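The elided sum in the derivation of P(H_{k+}) is a geometric series; spelled out (a reconstruction from the stated formulas, abbreviating q := λ(N−C)/N < 1):

```latex
\begin{aligned}
P(H_{k+}) &= \sum_{i=k}^{\infty} P(H_i)
           = \frac{\lambda C}{N} \sum_{i=k}^{\infty} q^{\,i-1}
           = \frac{\lambda C}{N} \cdot \frac{q^{\,k-1}}{1 - q} \\
          &= \frac{C}{N - C} \cdot \frac{q^{\,k}}{1 - q},
          \qquad q := \frac{\lambda (N - C)}{N} \in (0, 1).
\end{aligned}
```

In particular, setting k = 1 gives P(H_{1+}) = λC / (N − λ(N − C)), the denominator needed for the conditional probability P(I | H_{1+}).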
analysis: general probability II

by total probability (TP):

    P(I) = P(H_1) · P(I | H_1) + P(H_{2+}) · P(I | H_{2+})
         = … = (λ · C/N) · (1 + λ/(N − λ(N − C)))

by conditional probability (CP), using I ⇒ H_{1+}:

    P(I | H_{1+}) = P(I ∧ H_{1+}) / P(H_{1+}) = P(I) / P(H_{1+})
                  = … = (N − λ(N − C − 1)) / N

good node:

    P(good node i is initiator) = (1 − P(I | H_{1+})) / (N − C − 1) = λ/N < 1/N ≤ 1/2
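The closed form for P(I | H_{1+}) can be checked empirically with a small Monte-Carlo simulation of the model (a sketch under the assumptions above: nodes 0 … N−1, node 0 is the initiator, the last C nodes are corrupt, and every holder including the initiator forwards with probability λ):

```python
import random

def attack_trial(N, C, lam, rng):
    """Run one message through the crowd. If a corrupt node receives it,
    return True when the passer was the initiator (event I), else False;
    return None if the message is delivered without meeting a corrupt node
    (so the trial does not satisfy H_1+)."""
    holder = 0                      # the initiator holds the message first
    while rng.random() < lam:       # biased coin: heads means forward
        nxt = rng.randrange(N)
        if nxt >= N - C:            # first corrupt node reached
            return holder == 0      # was the passer the initiator?
        holder = nxt                # a good node keeps forwarding
    return None                     # delivered; no corrupt node involved

def estimate_p_i_given_h1plus(N, C, lam, trials, seed=1):
    """Monte-Carlo estimate of P(I | H_1+): condition on the trials in
    which some corrupt node saw the message."""
    rng = random.Random(seed)
    outcomes = [attack_trial(N, C, lam, rng) for _ in range(trials)]
    observed = [o for o in outcomes if o is not None]  # condition on H_1+
    return sum(observed) / len(observed)
```

For example, with N = 10, C = 3 and λ = 0.8 the formula predicts (N − λ(N − C − 1))/N = 0.52, comfortably above the ‘probably innocent’ bound of 1/2, and the simulation agrees.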