Congested consumers and sectoral regulation in the broadband age: a European perspective
Jonathan Cave, July 2013
Outline
- The problem with congestion (Say’s law and its
antithesis)
- A brief discussion of broadband congestion from a
customer-centric perspective
- Some examples
- The regulatory and policy perspective
– The role of consumers in managing congestion
– ‘Informational remedies’: transparency, switching and other reactions
– Congestion pricing
– The role of the regulator
– Net neutrality
– EU wrinkles
- Some theoretical excursions
- Conclusion and topics for discussion
The problem with congestion
- Congestion can distort technical, allocational and
dynamic efficiency or the trade-offs between them
- It can show up as slow speeds or QoS problems
(latency, jitter, etc.); different in wired, wireless domains
- It differentially affects different uses and users
- Can slow or narrow broadband adoption and entry
- It arises in different parts of the network – topology
matters (holes and hubs, tiers, IXs, transit/peering arrangements)
The problem, continued
- It is not just a capacity utilisation issue (turbulence,
bunching, local bottlenecks, and behavioural feedbacks)
- It is ‘felt’ in different ways by different parties
- It is not bad for everyone, and may not be bad for anyone
– Market segmentation
– Induced state aids
– Value of spectrum, infrastructure rights
– The power to shape and to ration
– Resets and total download volume; WTP for bypass
- Investing in scarcity
- May not be discernible, predictable or controllable at ISP, network
operator or system level (complexity and self-organisation)
Analogies
- Electricity
– ‘Solutions’
- Congestion pricing (e.g. ToD)
- Smart Meters, CHP and grids
- Interruptible supply contracts
– Analogy inexact
- BB traffic not homogeneous; routing matters more than I²R (resistive) losses
- Storable traffic, network less rigid
- ‘Cogeneration’ in P2P, symmetric DSL form
- Transport
– ‘Predict and provide’ mindset
– Endogeneity of capacity problems (Say’s Law)
Some examples
- 1. Bifurcated response – online gaming
– Fat traffic, thin clients – rendering in server farms
– Thin traffic, fat clients – local rendering
– Implications: hardware; user-generated content; monetisation and sludge-passing (NN)
- 2. Non-linear content sharing (iPlayer, iCloud)
- 3. Cloud computing (thin clients, data centres)
– Correlation of demand, feedbacks
A provocative example
- High-frequency, computerised trading
- Initially, algo-trading (liquidity trades)
- Then, momentum trading
- Arbitrage – speculation – manipulation
- May 2010 – market and network efficiency
collide
– Sequelae: circuit breakers, speed limits and the normalisation of deviance
A few more
- The network epidemiology of congestion
- Multi-use spectrum (the 2.6 GHz auction –
minimise interference, maximise innovation, artificial scarcity)
- Unbundling strategies: the use of virtual lines
to evade competition authorities
The role of consumers in managing congestion
- Voting with their feet?
- Underwriting investment (pledges)
- Shareholder activism
Transparency – about what, and why?
- Proposed as light-touch remedy (mobility?)
- But policy isn’t outcome and ISPs may not be
able to control congestion
- Will the grass be greener when you (and everyone else) get there?
Congestion pricing
- All pay auctions
- To pay for infrastructure
- To change behaviour, rationalise demand
- As a sin-tax (hypothecation for capacity and
management – note unmonetised peering)
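The ‘change behaviour’ rationale can be made concrete with a toy peak-load pricing calculation. The iso-elastic demand model, the function name and all numbers below are illustrative assumptions, not from the deck:

```python
def tod_price(base_price, base_demand, capacity, elasticity):
    """Toy time-of-day congestion price: raise the peak price until
    iso-elastic demand d(p) = base_demand * (p / base_price) ** elasticity
    (elasticity < 0) just fits the available capacity."""
    if base_demand <= capacity:
        return base_price  # no congestion at the base price, no surcharge
    return base_price * (capacity / base_demand) ** (1.0 / elasticity)

# Peak demand of 120 units against a capacity of 100, elasticity -0.5:
p = tod_price(base_price=10.0, base_demand=120.0, capacity=100.0, elasticity=-0.5)
# Demand at the congestion price exactly fills capacity:
d = 120.0 * (p / 10.0) ** -0.5
```

The same mechanics underlie the electricity ToD analogy earlier in the deck; a hypothecated ‘sin-tax’ would simply earmark the surcharge revenue for capacity and management.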
The role of the regulator
- Coasian liability
- Walled Gardens
- Consumer protection, prudential and macro-
prudential approach
- Self- and co-regulatory approaches
Ex ante regulation to ensure net neutrality?
- Preliminary
– What’s so great about NN in a Ramsey world?
– Monitoring and enforcement?
– Attribution and remedies
- Different meanings in different market settings
– Commission has reversed itself – hence ‘natural experiments’ (e.g. NL)
– Good (necessary) vs. bad discrimination
– General belief in ‘level playing field’ or virtue of simplicity (?)
- Just an argument over DPI?
– Privacy, security externalities – KPN-WhatsApp, ACTA
– Many offers have explicit or implicit (Fair Use) caps
– Beneficial uses of P2P – inadequacy of individual controls
– 3rd-party or ISP liability a 2-edged sword
Further regulatory and policy challenges
- Goals
– What kind of efficiency?
– Peak-load, strategic manipulation, innovation and investment?
- Lower price, higher speed, better QoS, alternative
infrastructures, extended value chain efficiency?
- Is congestion a useful stimulus to entry or do entrants just
free-ride while incumbents live off their rents?
- How (if at all) to (co)regulate cloud, HFT?
- Incorporate QoS in Universal Service?
- What is ‘price’ (freemium-type models, third-party revenues)?
- Who is (usefully) responsible for congestion
management?
- Consumer protection-antitrust duality: can either work?
EU Governance aspects
- EU level (EC, CoM, EP)
– Directives, regulations, state aids and policy chapeaux
– Subsidiarity – what gets regulated at which level?
– Competition, competitiveness and innovation
– Telecom Regulatory Framework (under revision)
– Policy challenges: Europe2020, DAE and H2020; reform of major Directives; REFIT; Recommendation for Regulation to boost the Digital Single Market
- MS level
– Converged regulation
– Grands projets – esp. Broadband
Topics for discussion
- Relevance of consumer protection, prudential
rules, macroprudential regulation, etc.
- How can regulators coordinate to measure real or
expected congestion – and which matters more?
- Who can/should be held responsible for
congestion?
- Are EU-like presumptions (pro-infrastructure,
regulation of SMP players, dropping roaming, net neutrality, communication-converged regulation) appropriate here?
- How can shaping, investment, ‘rewiring’ and
demand-shaping be balanced and regulated?
- Do we need new rules for IoT, Cloud, etc.?
A brief discussion of broadband congestion
- Origins
– Traffic growth (slowing?), geography (density, uses)
– Specific hubs (backhaul, Internet Exchanges, etc.) – centrality
– Specific types of computing
- Dimensions/impacts
– Time, location
– Traffic (crowding types, clustering, endogeneity)
– Latency, jitter, bandwidth
– Upload/download, local vs. backbone
- Settings
- Wired, wireless, WLAN, RLAN (diff.
regulations)
The supply side
- ‘Investing in congestion’
- ‘Crowding types’ and market segmentation
- Cream-skimming and buck-passing
- Traffic-shaping, capacity formation, structural change
- Third-party monetisation
- N-sided market considerations
- Discrimination:
– By identity; network management; content; devices; standards
– Using price, QoS, SLAs, security/privacy…
- Behaviour often shared or emergent
– Load-sharing
– Connectivity cycles
The demand side
- Congestion-sensitive uses – distribution, WTP and
clustering
- Specific kinds of demand
– Entertainment
– Work
– Cloud
– Internet of Things
– HFT (interaction of congestion, financial efficiency, stability)
- Limited rationality of switching behaviour
A theoretical excursion
- The epidemiology of congestion
- Routing and connectivity – strategic rewiring
- The SLA game (stable and efficient networks)
- Measurement for regulation
Epidemiology of Congestion
- QoS is High or Congested
- Probability of congestion = m × (share of a node’s links that are congested);
probability of recovery is r; l = m/r is the effective spreading rate
- Every network structure has a critical value of l above which
congestion outbreaks saturate network; close to 0 for scale-free
- For fixed SF networks, can manage problem by targeting hubs
- Suppose routing allows rewiring of w % of congested links;
– Even SF networks become ‘flatter’
– Linking becomes assortative (can’t target interventions)
– New threshold for ‘epidemic’ or ‘endemic’ congestion
– Gradual separation into congested and clear components
- Complex ‘phase diagram’
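The epidemic picture sketched above can be prototyped directly; a minimal SIS-style simulation (stdlib only; the ring-lattice network, parameters and seeds are all illustrative):

```python
import random

def sis_step(adj, state, m, r):
    """One synchronous update: a clear node becomes congested with
    probability m * (fraction of congested neighbours); a congested
    node recovers with probability r."""
    new = state[:]
    for i, nbrs in enumerate(adj):
        if state[i] == 0 and nbrs:
            frac = sum(state[j] for j in nbrs) / len(nbrs)
            if random.random() < m * frac:
                new[i] = 1
        elif state[i] == 1 and random.random() < r:
            new[i] = 0
    return new

def prevalence(adj, m, r, steps=200, seed_frac=0.1, rng_seed=1):
    """Final fraction of congested nodes after `steps` updates."""
    random.seed(rng_seed)
    n = len(adj)
    state = [1 if random.random() < seed_frac else 0 for _ in range(n)]
    for _ in range(steps):
        state = sis_step(adj, state, m, r)
    return sum(state) / n

# Ring lattice: each of 200 nodes linked to its 2 nearest neighbours per side.
n = 200
adj = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
low = prevalence(adj, m=0.1, r=0.8)   # l = m/r = 0.125, well below threshold
high = prevalence(adj, m=0.9, r=0.1)  # l = m/r = 9, well above threshold
```

Swapping the ring lattice for a scale-free adjacency list is the natural next experiment: the deck’s claim is that the critical l there is close to 0.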
Other theoretical excursions
- Routing and connectivity – strategic rewiring
– Passing traffic to congested neighbours spreads the problem; routing to clear ones eases it
– Networks may be too centralised to survive (claim on capacity resources?)
– Strategic connectivity cycles (observed in practice)
- The SLA game (stable and efficient networks)
– Behavioural dynamics (evolution of conventions)
– Network partnering (pairwise stability)
– Combined dynamics more ‘interesting’
- Measurement for regulation
– Alignment as leading indicator of disruption
– Real-time and partnering tracking (structural indicators)
Backup: Network congestion
- Consider the standard SI model (cf Jackson MO (2006) “Diffusion on
Social Networks at: http://economiepublique.revues.org/1721)
- Players are in one of 2 states: uncongested (S) or congested (I)
– The transition probabilities are
- This model can have two steady states (in a clique); there is an
endemic threshold l* above which a positive steady state exists.
- Now add a network;
– P(k) is the proportion of nodes with k links – the average congestion rate for nodes of degree k is r(k) – The expectation of r(k) under P(k) is r*
- This is not the same as the average congestion rate
- The expectation assumes that degree and congestion rates are independent.
- The calculation of r(k) is very complex – it depends on q (the probability that a random link points to a congested node) as well as on k_i
– Pr(i goes S → I) = λ · I_i / N_i + x, where I_i is the number of i’s congested (infected) neighbours, N_i is i’s degree and x is a background congestion rate
– Pr(i goes I → S) = ρ; λ is the spreading rate measured relative to ρ (recovery normalised to 1 below)
Backup: Further definitions
- The actual distribution of congested contacts is too hard to calculate, so we use a degree-
based mean field approximation (each node faces the average odds of being connected to a congested node). We approximate (stochastic) changes in the average congestion rates r(k) by a deterministic continuous-time process
- Under this approximation, q – the probability that a randomly chosen link points to a congested node – satisfies q = Σ_k k·P(k)·r(k) / Σ_k k·P(k)
- The dynamics are dr(k)/dt = (1 − r(k))·(x + λ·k·q) − r(k); setting dr(k)/dt = 0 in steady state gives r(k) = (x + λ·k·q) / (1 + x + λ·k·q)
- Inserting this into the formula for q, we get the fixed-point equation q = (1/⟨k⟩)·Σ_k k·P(k)·(x + λ·k·q) / (1 + x + λ·k·q), where ⟨k⟩ = Σ_k k·P(k)
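The mean-field fixed point q = (1/⟨k⟩)·Σ_k k·P(k)·(x + λ·k·q)/(1 + x + λ·k·q) is easy to solve by direct iteration; a sketch with an illustrative two-point degree distribution (the function and the numbers are assumptions for illustration):

```python
def solve_q(P, lam, x=0.0, iters=500):
    """Iterate the degree-based mean-field fixed point for q, the
    probability that a random link points to a congested node.
    P maps degree k -> fraction of nodes with degree k."""
    mean_k = sum(k * p for k, p in P.items())
    q = 0.5  # any interior starting point
    for _ in range(iters):
        q = sum(k * p * (x + lam * k * q) / (1 + x + lam * k * q)
                for k, p in P.items()) / mean_k
    return q

# Illustrative distribution: half the nodes have degree 2, half degree 6,
# so <k> = 4 and <k^2> = 20, giving an endemic threshold of 0.2.
P = {2: 0.5, 6: 0.5}
q_low = solve_q(P, lam=0.1)   # below threshold: q iterates to 0
q_high = solve_q(P, lam=1.0)  # above threshold: positive steady state
```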
Backup: Characterisation
- If x > 0, the model has a single steady-state
congestion rate with positive average prevalence
- If x = 0, then q = 0 is always a steady-state
‘prevalence’
- There is a threshold value of l above which
there is a second steady state with positive prevalence – this defines the ‘endemic’ threshold.
- If the distribution P(k) is scale-free, this
threshold value is 0
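With x = 0 the slope of the fixed-point map at q = 0 is λ·⟨k²⟩/⟨k⟩, so the endemic threshold is λ* = ⟨k⟩/⟨k²⟩; ⟨k²⟩ diverges for scale-free P(k), which is why the threshold goes to 0. A quick numerical sketch (the degree distributions are chosen purely for illustration):

```python
def endemic_threshold(P):
    """Mean-field endemic threshold l* = <k> / <k^2> for a degree
    distribution P (mapping degree k -> probability P(k))."""
    mean_k = sum(k * p for k, p in P.items())
    mean_k2 = sum(k * k * p for k, p in P.items())
    return mean_k / mean_k2

def truncated_power_law(K, gamma=3.0):
    """P(k) proportional to k ** -gamma for k = 1..K (normalised)."""
    w = {k: k ** -gamma for k in range(1, K + 1)}
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

t_regular = endemic_threshold({4: 1.0})  # degree-4 regular: 4/16 = 0.25
t_small = endemic_threshold(truncated_power_law(50))
t_large = endemic_threshold(truncated_power_law(5000))
# Raising the cutoff inflates <k^2>, so t_large < t_small: the
# threshold drifts toward 0 as the heavy tail extends.
```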
Backup: Dynamic networks: impact on congestion
- As above: an uncongested node with kI(i) congested neighbours becomes congested
with probability bkI(i)/k(i); a congested node recovers with probability r.
- Rewiring: an uncongested node severs each link to a congested neighbour
(independently) with probability w, replacing it with a new link to a randomly chosen uncongested node.
- If w = 0:
– A single congestion incident produces roughly b·⟨k⟩/r secondaries (b the per-link transmission probability, ⟨k⟩ the average degree)
– The threshold for each congestion to lead to one more is thus b·⟨k⟩/r = 1; write this as a threshold ‘congestivity’ b* = r/⟨k⟩
- If w > 0:
– With rewiring, a single congested node is dropped by an average of w of its partners per period (this approximation holds at the onset of congestion), so its degree decays approximately as k(u) = ⟨k⟩·e^(−w·u)
– It remains congested for roughly 1/r periods, so the sustained-congestion threshold becomes b* = w / (⟨k⟩·(1 − e^(−w/r)))
– As w increases, so does the threshold (resistance to congestion)
– But scale-free networks remain vulnerable
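Under the degree-decay approximation (degree ⟨k⟩·e^(−w·u), recovery rate r), the sustained-congestion threshold works out to b*(w) = w / (⟨k⟩·(1 − e^(−w/r))), which reduces to the static value r/⟨k⟩ as w → 0. A quick numerical check (parameters illustrative):

```python
import math

def rewiring_threshold(w, mean_k, r):
    """Sustained-congestion threshold b*(w) = w / (<k> * (1 - exp(-w/r)))
    under rewiring rate w; the w -> 0 limit is the static value r / <k>."""
    if w == 0:
        return r / mean_k
    return w / (mean_k * (1.0 - math.exp(-w / r)))

mean_k, r = 10.0, 0.2
ts = [rewiring_threshold(w, mean_k, r) for w in (0.0, 0.1, 0.2, 0.4)]
# Faster rewiring -> higher threshold -> more resistance to congestion.
```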
Backup: Dynamic networks: impact on structure
- This case is somewhat more complex; only approximate solutions.
1. Even starting from a scale-free network, the degree distribution tends to widen, with many more nodes at all levels of connectivity.
2. Connection becomes assortative: a node of a given degree will have most of its contacts with nodes of similar degree.
3. Thus targeted intervention no longer works to ‘clean’ the network – cleaning highly connected nodes primarily helps other highly connected nodes; hubs remain susceptible to congestion coming from less connected nodes.
Backup: Dynamic networks: impact on structure (continued)
4. The network gradually separates into two almost-separated clusters: one has a high prevalence of congestion, the other is nearly congestion-free.
5. Although uncongested nodes continuously try to remove connections to the congested cluster, connections are restored by new congestion in the uncongested cluster and by recovery in the congested cluster.
6. The number of links of individual nodes tends to vary widely over time, growing linearly while the node is congestion-free but decaying exponentially (as above) while it is congested. This is responsible for the widened degree distribution and assortative connectivity, and it reinforces the strategic basis for connectivity cycles known from the literature.
7. A new threshold appears – lower than the first – above which any invasion of the network persists forever but travels through the network as it rewires. Below this lower threshold congestion incidents die away; above the higher threshold they become endemic; between the two thresholds they persist in epidemic form.
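The rewiring dynamics described above can be prototyped with a small adaptive-network simulation; a minimal sketch (stdlib only; network, parameters and seed are illustrative) in which clear nodes rewire away from congested neighbours while the total number of links is conserved:

```python
import random

def adaptive_sis(n=120, k=4, b=0.6, r=0.1, w=0.3, steps=200, seed=3):
    """SIS congestion with rewiring (sketch). Infection: a clear node with
    k_I congested neighbours becomes congested with probability b*k_I/k_i.
    Rewiring: each clear-congested link is cut by its clear endpoint with
    probability w and replaced by a link to a random clear node."""
    random.seed(seed)
    # Ring lattice of even degree k, stored as undirected edges.
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + d) % n)))
    state = [1 if i < n // 10 else 0 for i in range(n)]  # seed congestion
    for _ in range(steps):
        nbrs = {i: [] for i in range(n)}
        for e in edges:
            a, c = tuple(e)
            nbrs[a].append(c)
            nbrs[c].append(a)
        new = state[:]
        for i in range(n):  # synchronous infection / recovery
            if state[i] == 1:
                if random.random() < r:
                    new[i] = 0
            elif nbrs[i]:
                k_i = sum(state[j] for j in nbrs[i])
                if random.random() < b * k_i / len(nbrs[i]):
                    new[i] = 1
        state = new
        clear = [i for i in range(n) if state[i] == 0]
        if not clear:
            continue
        for e in list(edges):  # rewire clear-congested links
            a, c = tuple(e)
            if state[a] == state[c]:
                continue
            u = a if state[a] == 0 else c  # the clear endpoint rewires
            if random.random() < w:
                v = random.choice(clear)
                ne = frozenset((u, v))
                if u != v and ne not in edges:
                    edges.remove(e)
                    edges.add(ne)  # total link count is conserved
    degs = [0] * n
    for e in edges:
        a, c = tuple(e)
        degs[a] += 1
        degs[c] += 1
    return edges, state, degs

edges, state, degs = adaptive_sis()
```

A real experiment would track the degree distribution and the cluster structure over time; the sketch only preserves the mechanics (link conservation, congestion-avoiding rewiring).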
Phase diagram for dynamic model