
IERG 5330 Network Economics Repeated and Supermodular Games

Instructor: Prof. Jianwei Huang∗ November 27, 2016

1 Repeated Games

In many strategic situations, players interact repeatedly over time, so it is important to understand the effect of repetition on the play of the game. The repeated game model we present in this section is a simple model that captures this ongoing interaction. In particular,

  • The players face the same stage game at all periods.
  • Overall payoff is the sum of discounted payoffs at each stage.

We will see how repeated play of the same strategic game introduces new (desirable) equilibria by allowing players to condition their actions on the way their opponents played in previous periods.

1.1 Illustrative Examples

Example (Cooperation in the Prisoner's Dilemma). The best-known repeated-game argument is that ongoing interaction can explain why people might behave cooperatively when it is against their self-interest in the short run. The classical example is the repeated Prisoner's Dilemma, with stage game:

                 Cooperate    Defect
    Cooperate      1, 1       -1, 2
    Defect         2, -1       0, 0

For this strategic form game, the strategy profile (D, D) is the unique NE. Moreover, D strictly dominates C. Suppose now that players 1 and 2 play the game repeatedly at times t = 0, 1, 2, ..., and the payoff for the entire repeated game is

    u_i({a^0, a^1, ...}) = (1 − δ) Σ_{t=0}^∞ δ^t g_i(a_i^t, a_{−i}^t),

in which δ ∈ [0, 1) is the discount factor.

∗ This note is mainly based on the lecture materials of the MIT course "Game Theory with Engineering Applications" by Prof. Asu Ozdaglar: http://stellar.mit.edu/S/course/6/sp10/6.254/.


Notice: if the game is played a finite number of times T, then backward induction shows that there is a unique subgame perfect equilibrium (SPE), in which both players play D in each period. Now assume that the game is played infinitely often. Is playing (D, D) in every period still an SPE outcome?

Proposition 1. If δ ≥ 1/2, the repeated PD game has an SPE in which (C, C) is played in every period.

Proof. Suppose the players use the following "grim trigger" strategy. For player i:

(I) Play C in every period unless someone plays D, in which case go to stage II.
(II) Play D forever.

We next show that the preceding strategy profile is an SPE if δ ≥ 1/2, using the one-stage-deviation principle. There are two kinds of subgames: (1) subgames following a history in which no player has ever defected (i.e., D has never been played); (2) any other subgame (i.e., D has been played at some point in the past).

Consider first a subgame of form (1), i.e., suppose that up to time t, D has never been played. Player i's continuation payoffs when he sticks with his strategy and when he deviates for one stage and then conforms to his strategy thereafter are, respectively,

    Play C: (1 − δ)[1 + δ + δ^2 + ...] = 1,
    Play D: (1 − δ)[2 + 0 + 0 + ...] = 2(1 − δ).

This shows that for δ ≥ 1/2 the deviation is not profitable.

Consider next a subgame of form (2), i.e., D has been played at some point before t. Since (D, D) is the NE of the static game and choosing C has no effect on subsequent outcomes, it follows that, for any discount factor, no player can profitably deviate by choosing C in one period.
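To see the threshold concretely, here is a minimal Python sketch, not part of the original notes, that tabulates the two continuation payoffs from the proof; the helper name and the truncation horizon are illustrative choices.

    # Hedged numerical check of the one-stage-deviation comparison in Proposition 1.
    def continuation_payoffs(delta, horizon=500):
        """Normalized continuation payoffs at a history with no defection so far.

        cooperate: stick with grim trigger, i.e. (C, C) forever (stage payoff 1).
        deviate:   play D once (stage payoff 2), then (D, D) forever (stage payoff 0).
        The geometric sum is truncated at `horizon`, which is accurate for delta < 1.
        """
        cooperate = (1 - delta) * sum(delta**t for t in range(horizon))   # -> 1
        deviate = (1 - delta) * 2                                         # -> 2(1 - delta)
        return cooperate, deviate

    for delta in (0.25, 0.5, 0.75):
        c, d = continuation_payoffs(delta)
        print(f"delta = {delta}: cooperate = {c:.4f}, one-shot deviation = {d:.4f}")
    # The deviation payoff 2(1 - delta) falls to the cooperation payoff 1 exactly at delta = 1/2.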

Remark.

1. Depending on the size of the discount factor, there may be many more equilibria.

2. If a* is a NE of the stage game, then the strategies "each player i plays a*_i in each stage" form an SPE. (Note that with these strategies, the future play of the opponent is independent of how I play today; therefore, the optimal play is to maximize the current payoff, i.e., to play a static best response.)

3. Sets of equilibria for finite and infinite horizon versions of the "same game" can be quite different.

The following example shows that repeated play can lead to worse outcomes than in the one-shot game:


          A         B         C
    A   2, 2      2, 1      0, 0
    B   1, 2      1, 1     -1, 0
    C   0, 0      0, -1    -1, -1

Example. For the game defined above, the action A strictly dominates B and C for both players; therefore the unique NE is (A, A).

Proposition 2. If δ ≥ 1/2, this game has an SPE in which (B, B) is played in every period.

Proof Sketch: Here, we construct a slightly more complicated strategy than grim trigger:

(I) Play B in every period unless someone deviates, in which case go to II.
(II) Play C. If no one deviates, go back to I. If someone deviates, stay in II.

Exercise: Show that the preceding strategy profile is an SPE of the repeated game for δ ≥ 1/2.

1.2 General Model

  • Let G be a strategic form game with action spaces A_1, ..., A_I and (stage) payoff functions g_i : A → R, where A = A_1 × ... × A_I.
  • Let G^∞(δ) be the infinitely repeated version of G played at t = 0, 1, 2, ..., where players discount payoffs with the factor δ and observe all previous actions.
  • Payoffs for player i:

    u_i(s_i, s_{−i}) = (1 − δ) Σ_{t=0}^∞ δ^t g_i(a_i^t, a_{−i}^t),

    where {a^t} is the action path induced by the strategy profile (s_i, s_{−i}).

We now investigate what average payoffs can result from different equilibria when δ is close to 1. That is, what can happen in equilibrium when players are very patient? Some terminology related to payoffs:

1. Set of feasible payoffs:

    V = Conv{v | there exists a ∈ A such that g(a) = v}.

Example. Consider the following game:

          L (q)      R (1−q)
    U    -2, 2       1, -2
    M     1, -2     -2, 2
    D     0, 1       0, 1


Sketch set V for this example.

2. Minimax payoff of player i: the lowest payoff that player i's opponents can hold him to,

    v̲_i = min_{α_{−i}} [ max_{α_i} g_i(α_i, α_{−i}) ].

Let

    m^i_{−i} = arg min_{α_{−i}} [ max_{α_i} g_i(α_i, α_{−i}) ],

i.e., the minimax strategy profile against player i.

Example. We compute the minimax payoffs for the preceding example. To compute v̲_1, let q denote the probability that player 2 chooses action L. Then player 1's payoffs for playing the different actions are

    U → 1 − 3q,    M → −2 + 3q,    D → 0.

Therefore, we have

    v̲_1 = min_{0≤q≤1} [ max{1 − 3q, −2 + 3q, 0} ] = 0,

and m^1_2 ∈ [1/3, 2/3]. Similarly, one can show that v̲_2 = 0. Verify that the payoff of player 1 at any NE is 0.
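The same minimization can be checked numerically. The short Python sketch below is an added illustration, not from the original notes; it assumes NumPy and simply searches over a grid of mixing probabilities q.

    # Hedged numerical check of the minimax computation above.
    import numpy as np

    q = np.linspace(0.0, 1.0, 10001)        # player 2's probability of playing L
    best_response_payoff = np.maximum.reduce([1 - 3 * q, -2 + 3 * q, np.zeros_like(q)])

    v1 = best_response_payoff.min()
    minimizers = q[np.isclose(best_response_payoff, v1)]
    print(f"v1 = {v1:.4f}")                                               # -> 0.0
    print(f"minimax q in [{minimizers.min():.3f}, {minimizers.max():.3f}]")  # -> [1/3, 2/3]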

Remark.

1. Player i's payoff is at least v̲_i in any static NE, and in any NE of the repeated game.

   For a static equilibrium α̂: v̲_i = min_{α_{−i}} [ max_{α_i} g_i(α_i, α_{−i}) ] ≤ max_{α_i} g_i(α_i, α̂_{−i}) = g_i(α̂). Extend the same idea to the repeated case.

   Check the following: in the Prisoner's Dilemma example, the NE payoff is (0, 0) and the minmax payoff is (0, 0). In the second example introduced in this section, the NE payoff is (2, 2) and the minmax payoff is (0, 0).

1.3 Folk Theorems for Infinitely Repeated Games

Definition 1. A payoff vector v ∈ R^I is strictly individually rational if v_i > v̲_i for all i. Let V* ⊂ R^I denote the set of feasible and strictly individually rational payoffs.

The "folk theorems" for repeated games establish that if the players are patient enough, then any feasible, strictly individually rational payoff can be obtained as an equilibrium payoff. We start with the Nash folk theorem, which obtains feasible, strictly individually rational payoffs as Nash equilibria of the repeated game.

Theorem 1 (Nash folk theorem). For all v = (v_1, ..., v_I) ∈ V*, there exists some δ̲ < 1 such that for all δ ≥ δ̲, there is a Nash equilibrium of the infinitely repeated game G^∞(δ) with payoffs v.

Proof. Assume first that there exists a pure action profile a = (a_1, ..., a_I) such that g_i(a) = v_i (otherwise, we need a public randomization argument; see Fudenberg and Tirole, Theorem 5.1). Consider the following strategy for player i:

(I) Play a_i in period 0 and continue to play a_i so long as either the realized action profile in the previous period was a, or the realized action profile in the previous period differed from a in two or more components. If only one player deviated, go to stage II.
(II) If player j was the deviator, play m^j_i thereafter.

We next check whether player i can gain by deviating from this strategy profile. If i follows the strategy, his payoff is v_i. If i deviates from the strategy in some period t, then, denoting v̄_i = max_a g_i(a), the most that player i could get is

    (1 − δ)[ v_i + δ v_i + ... + δ^{t−1} v_i + δ^t v̄_i + δ^{t+1} v̲_i + δ^{t+2} v̲_i + ... ].

Hence, following the suggested strategy is optimal if

    v_i ≥ (1 − δ^t) v_i + δ^t (1 − δ) v̄_i + δ^{t+1} v̲_i,

which is equivalent to v_i ≥ (1 − δ) v̄_i + δ v̲_i. This holds for any δ ≥ δ̲, where

    δ̲ = max_i ( v̄_i − v_i ) / ( v̄_i − v̲_i ),

thus completing the proof.
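As a quick sanity check, the following Python sketch (not part of the notes) instantiates the threshold δ̲ = max_i (v̄_i − v_i)/(v̄_i − v̲_i) for the Prisoner's Dilemma of Section 1.1 with target payoff v = (1, 1); the variable names are illustrative.

    # Hedged illustration of the Nash folk theorem threshold for the PD example.
    target_v = (1.0, 1.0)    # payoff of (C, C), the profile to be sustained
    v_bar = (2.0, 2.0)       # max stage payoff of each player, max_a g_i(a)
    v_minmax = (0.0, 0.0)    # minmax payoff of each player

    delta_bar = max(
        (vb - v) / (vb - vm) for v, vb, vm in zip(target_v, v_bar, v_minmax)
    )
    print(delta_bar)   # -> 0.5, matching the delta >= 1/2 threshold in Proposition 1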

The Nash folk theorem states that essentially any feasible, strictly individually rational payoff can be obtained as a Nash equilibrium payoff when players are patient enough. However, the corresponding strategies involve unrelenting punishments, which may be very costly for the punisher to carry out (i.e., they represent non-credible threats). Hence, the strategies used are not subgame perfect. The next example illustrates this fact.

Example. Consider the following game:

          L (q)     R (1−q)
    U    6, 6      0, -100
    D    7, 1      0, -100

The unique NE in this game is (D, L). It can also be seen that the minmax payoffs are given by v̲_1 = 0 and v̲_2 = 1, and that the minmax strategy of player 2 against player 1 is to play R. The Nash folk theorem says that (6, 6) is possible as a Nash equilibrium payoff of the repeated game, but the strategies suggested in the proof require player 2 to play R in every period following a deviation by player 1. While this will hurt player 1, it will hurt player 2 a lot, so it seems unreasonable to expect her to carry out the threat. Our next step is to obtain the payoff (6, 6) in the above example, or more generally the whole set of feasible and strictly individually rational payoffs, as subgame perfect equilibrium payoffs of the repeated game.


Perfect Folk Theorems: The first perfect folk theorem shows that any payoff above the static Nash payoffs can be sustained as a subgame perfect equilibrium of the repeated game. This weaker theorem is sometimes referred to as the Nash-threats folk theorem.

Theorem 2 (Friedman). Let α* be a static equilibrium of the stage game with payoffs e. Then for any feasible payoff v with v_i > e_i for all i ∈ I, there exists some δ̲ < 1 such that for all δ ≥ δ̲, there is a subgame perfect equilibrium of G^∞(δ) with payoffs v.

The proof of this theorem involves constructing grim trigger strategies with punishment by the static Nash Equilibrium in phase II, thus satisfying sequential rationality in subgames where some player deviated. This theorem raises the questions:

  • What happens if the static NE payoff vector is strictly greater than the minmax payoff vector (in some components)?

  • Does the requirement of subgame perfection restrict the set of equilibrium payoffs?

The perfect folk theorem of Fudenberg and Maskin shows that this is not the case: for any feasible, strictly individually rational payoff vector, there is a range of discount factors for which that payoff vector can be obtained as a subgame perfect equilibrium payoff.

Theorem 3 (Fudenberg and Maskin). Assume that the dimension of the set V of feasible payoffs is equal to the number of players I. Then, for any v ∈ V with v_i > v̲_i for all i, there exists a discount factor δ̲ < 1 such that for all δ ≥ δ̲, there is a subgame perfect equilibrium of G^∞(δ) with payoffs v. (See Fudenberg and Tirole, Chapter 5, Theorem 5.4.)

Remark.

1. The strategies constructed to prove the above theorem involve punishments and rewards.

2. The assumption on the dimension of V ensures that each player i can be singled out for punishment in the event of a deviation. This assumption can be weakened, but cannot be completely dispensed with.

3. Folk theorem for finitely repeated games:
   • If the stage game has a unique Nash equilibrium, then, by backward induction, the unique subgame perfect equilibrium is to play the static Nash equilibrium in every period.
   • With multiple stage-game Nash equilibria, one can use rewards and punishments to construct other subgame perfect equilibria.

4. Equilibrium concepts do very little to pin down the play by patient players!


2 Supermodular Games

In this lecture, we develop the theory of supermodular games. Supermodular games are those characterized by "strategic complementarities". Informally, this means that the marginal utility of increasing a player's strategy rises with increases in the other players' strategies. The implication is that the best response of a player is a nondecreasing function of the other players' strategies. The machinery needed to study supermodular games is lattice theory and monotonicity results in lattice programming (see the Appendix for more on lattices). The methods used are non-topological, and they exploit order properties. A rough idea for the analysis of the existence problem is as follows: Tarski's fixed point theorem shows the existence of a fixed point for increasing functions (see Fudenberg and Tirole and the Appendix for a statement of Tarski's fixed point theorem). We can use it when the best-response correspondences of players have a monotone increasing selection. This monotonicity property is guaranteed for supermodular games. Below, we present an analysis that does not rely on Tarski's fixed point theorem, but instead uses ideas from the monotonicity results in lattice programming. Supermodular games are interesting for a number of reasons:

  • They arise in many models.
  • We can establish the existence of a pure strategy equilibrium without requiring quasiconcavity of the payoff functions.
  • Many solution concepts yield the same predictions.
  • The equilibrium set has a smallest and a largest element, and there exists a simple algorithm to compute these.
  • They have nice sensitivity (or comparative statics) properties and behave well under a variety of learning rules.

Much of the theory is due to Topkis [12], [13]; Vives [14], [15]; Milgrom and Roberts [8]; and Milgrom and Shannon [9].

2.1 Monotonicity of Optimal Solutions

Let ≥ be a binary relation on a nonempty set S. The pair (S, ≥) is a partially ordered set if ≥ is

  • Reflexive (x ≥ x for all x ∈ S),
  • Transitive (x ≥ y and y ≥ z implies that x ≥ z),
  • Antisymmetric (x ≥ y and y ≥ x implies that x = y)

A partially ordered set (S, ≥) is (completely) ordered if for all x ∈ S and y ∈ S, either x ≥ y or y ≥ x.


We first study the monotonicity properties of optimal solutions of parametric optimization problems. In particular, we consider

    x(t) = arg max_{x∈X} f(x, t),

where f : X × T → R, X ⊂ R, and T is some partially ordered set. We are interested in conditions under which we can establish that x(t) is a nondecreasing function of t. We next define the key property of increasing differences.¹

Definition 2. Let X ⊆ R and T be some partially ordered set. A function f : X × T → R has increasing differences in (x, t) if for all x′ ≥ x and t′ ≥ t, we have

    f(x′, t′) − f(x, t′) ≥ f(x′, t) − f(x, t).

The preceding definition implies that if f has increasing differences in (x, t), then the incremental gain to choosing a higher x (i.e., x′ rather than x) is greater when t is higher. That is, f(x′, t) − f(x, t) is nondecreasing in t. You can check that the property of increasing differences is symmetric: an equivalent statement is that if t′ > t, then f(x, t′) − f(x, t) is nondecreasing in x. Note that f need not be nicely behaved, nor do X and T need to be intervals. For instance, we could have X = {0, 1} and just a few parameter values, e.g., T = {0, 1, 2}. If, however, f is nicely behaved, we can rewrite increasing differences in terms of derivatives.

Lemma 1. Let X ⊆ R and let T ⊂ R^k for some k be a partially ordered set with the usual vector order.² Let f : X × T → R be a twice continuously differentiable function. Then the following statements are equivalent:

(a) The function f has increasing differences in (x, t).
(b) For all t′ ≥ t and all x ∈ X, we have ∂f(x, t′)/∂x ≥ ∂f(x, t)/∂x.
(c) For all x ∈ X, t ∈ T, and all i = 1, ..., k, we have ∂²f(x, t)/∂x∂t_i ≥ 0.

Before presenting our key result about the monotonicity of optimal solutions, we introduce some examples that satisfy the increasing differences property.

¹ The following analysis can be extended to the case where X is a lattice (see the Appendix for a treatment of lattices); see Topkis [12] and Milgrom and Roberts [8] for these extensions.

² For some x, y ∈ T, x ≥ y if and only if x_i ≥ y_i for all i = 1, ..., k.
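Before turning to the examples, here is a small added Python illustration (not from the original notes) of Definition 2 on finite sets such as X = {0, 1} and T = {0, 1, 2}, as mentioned in the text; the helper name is a made-up convenience.

    # Hedged brute-force check of the increasing-differences inequality on finite grids.
    from itertools import product

    def has_increasing_differences(f, X, T):
        """Check f(x', t') - f(x, t') >= f(x', t) - f(x, t) for all x' >= x, t' >= t."""
        return all(
            f(xp, tp) - f(x, tp) >= f(xp, t) - f(x, t)
            for (x, xp), (t, tp) in product(product(X, X), product(T, T))
            if xp >= x and tp >= t
        )

    X, T = [0, 1], [0, 1, 2]
    print(has_increasing_differences(lambda x, t: x * t, X, T))    # True:  complements
    print(has_increasing_differences(lambda x, t: -x * t, X, T))   # False: substitutes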


2.2 Examples: Oligopoly Models

We first consider Bertrand competition: suppose firms 1, ..., I simultaneously choose prices, and the demand function of firm i is given by

    D_i(p_i, p_{−i}) = α_i − b_i p_i + Σ_{j≠i} d_{ij} p_j,

where b_i and d_{ij} are nonnegative constants. Let the strategy space be S_i = [0, ∞) and the payoff function be u_i(p_i, p_{−i}) = (p_i − c_i) D_i(p_i, p_{−i}) (where, as usual, c_i is the cost of producing one unit of the good). Then

    ∂²u_i(p_i, p_{−i}) / ∂p_i ∂p_j = d_{ij} ≥ 0,

showing that u_i(p_i, p_{−i}) has increasing differences in (p_i, p_{−i}).

We next consider Cournot competition in a duopoly: two firms choose the quantities they produce, q_i ∈ [0, ∞). We denote the inverse demand function by P(q_i, q_j), assume that it is a function of Q = q_i + q_j only and twice continuously differentiable in Q, and further assume that P′(Q) + q_i P″(Q) ≤ 0. Let the payoff function of each firm be u_i(q_i, q_j) = q_i P(q_i + q_j) − c q_i. Then it can be seen that the payoff functions of the transformed game defined by s_1 = q_1, s_2 = −q_2 have increasing differences in (s_1, s_2).

There are many more examples of the use of supermodular game theory in wireless communications, e.g., Saraydar, Mandayam, and Goodman [10], Altman and Altman [3], and Huang, Berry, and Honig [4].
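For the Bertrand example, the cross-partial condition of Lemma 1(c) can also be seen with finite differences. The Python sketch below is an added illustration with arbitrary parameter values, not part of the original notes.

    # Hedged finite-difference check of increasing differences for one opponent j.
    alpha_i, b_i, d_ij, c_i = 10.0, 2.0, 1.0, 1.0     # illustrative parameter values

    def u_i(p_i, p_j):
        return (p_i - c_i) * (alpha_i - b_i * p_i + d_ij * p_j)

    # The incremental gain from raising p_i should be nondecreasing in p_j when d_ij >= 0.
    p_i_low, p_i_high = 2.0, 3.0
    for p_j in (0.0, 1.0, 2.0, 3.0):
        gain = u_i(p_i_high, p_j) - u_i(p_i_low, p_j)
        print(f"p_j = {p_j}: gain from raising p_i = {gain:.2f}")
    # The printed gains rise by d_ij * (p_i_high - p_i_low) = 1.0 at each step.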

2.3 Main Result

We next present the key result of our development, which is due to Topkis [12].

Theorem 4. Let X ⊂ R be a compact set and T be some partially ordered set. Assume that the function f : X × T → R is upper semicontinuous in x for all t ∈ T and has increasing differences in (x, t). Define x(t) = arg max_{x∈X} f(x, t). Then, we have:

1. For all t ∈ T, x(t) is nonempty and has a greatest and a least element, denoted by x̄(t) and x̲(t) respectively.
2. For all t′ ≥ t, we have x̄(t′) ≥ x̄(t) and x̲(t′) ≥ x̲(t).

Proof.

1. By the assumptions that for all t ∈ T the function f(·, t) is upper semicontinuous and X is compact, it follows by Weierstrass' Theorem that x(t) is nonempty. For all t ∈ T, x(t) ⊂ X and is therefore bounded. Since X ⊂ R, to establish that x(t) has a greatest and a least element, it suffices to show that x(t) is closed. Let {x_k} be a sequence in x(t). Since X is compact, {x_k} has a limit point x̄. By restricting to a subsequence if necessary, we may assume without loss of generality that x_k converges to x̄. Since x_k ∈ x(t) for all k, we have

    f(x_k, t) ≥ f(x, t),   for all x ∈ X.


Taking the limit as k → ∞ in the preceding relation and using the upper semicontinuity of f(·, t), we obtain

    f(x̄, t) ≥ lim sup_{k→∞} f(x_k, t) ≥ f(x, t),   for all x ∈ X,

thus showing that x̄ belongs to x(t), and proving the desired closedness claim.

2. Let t′ ≥ t. Let x ∈ x(t) and x′ = x̄(t′). By the fact that x maximizes f(·, t), we have f(x, t) − f(min(x, x′), t) ≥ 0. This implies (check the two cases x ≥ x′ and x′ ≥ x) that f(max(x, x′), t) − f(x′, t) ≥ 0. By the increasing differences of f, this yields f(max(x, x′), t′) − f(x′, t′) ≥ 0. Thus max(x, x′) maximizes f(·, t′), i.e., max(x, x′) belongs to x(t′). Since x′ is the greatest element of the set x(t′), we conclude that max(x, x′) ≤ x′, and thus x ≤ x′. Since x is an arbitrary element of x(t), this implies x̄(t) ≤ x̄(t′). A similar argument applies to the least maximizers.

Topkis' Theorem states that if f has increasing differences, then the set of maximizers x(t) is nondecreasing in t, in the sense that both the greatest maximizer x̄(t) and the least maximizer x̲(t) are nondecreasing in t.
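The conclusion of Theorem 4 is easy to visualize numerically. The following Python sketch is an added illustration, not from the notes; it assumes NumPy and uses an arbitrary function with increasing differences.

    # Hedged numerical illustration of Topkis' monotonicity theorem.
    import numpy as np

    X = np.linspace(0.0, 1.0, 1001)          # compact choice set X

    def f(x, t):
        # d^2 f / dx dt = 2 >= 0, so f has increasing differences in (x, t)
        return -(x - t) ** 2

    for t in (-0.5, 0.2, 0.6, 1.5):
        values = f(X, t)
        maximizers = X[np.isclose(values, values.max())]
        print(f"t = {t:+.1f}: greatest maximizer = {maximizers.max():.2f}, "
              f"least maximizer = {maximizers.min():.2f}")
    # Both maximizers are nondecreasing in t, as Theorem 4 asserts (here they
    # coincide because the maximizer happens to be unique).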

2.4 Supermodular Games

We now introduce the class of supermodular games.

Definition 3 (Supermodular Game). The strategic form game ⟨I, (S_i), (u_i)⟩ is a supermodular game if for all i:

1. S_i is a compact subset of R (or, more generally, S_i is a sublattice of R^m),
2. u_i is upper semicontinuous in s_i and continuous in s_{−i},
3. u_i has increasing differences in (s_i, s_{−i}) [or, more generally, u_i is supermodular in (s_i, s_{−i}); see Fudenberg and Tirole for the more general definition of supermodularity, which is an extension of the property of increasing differences to games with multi-dimensional strategy spaces].

Applying Topkis' theorem in this context immediately implies that each player's best response correspondence is increasing in the actions of the other players.


Corollary 1. Assume ⟨I, (S_i), (u_i)⟩ is a supermodular game. Let

    B_i(s_{−i}) = arg max_{s_i∈S_i} u_i(s_i, s_{−i}).

Then:

1. B_i(s_{−i}) has a greatest and a least element, denoted by B̄_i(s_{−i}) and B̲_i(s_{−i}).
2. If s′_{−i} ≥ s_{−i}, then B̄_i(s′_{−i}) ≥ B̄_i(s_{−i}) and B̲_i(s′_{−i}) ≥ B̲_i(s_{−i}).

We now use the properties of supermodular games to show that various solution concepts we have studied before for strategic form games yield the same predictions.

Theorem 5. Let ⟨I, (S_i), (u_i)⟩ be a supermodular game. Then the set of strategies that survive iterated strict dominance (i.e., iterated elimination of strictly dominated strategies) has greatest and least elements s̄ and s̲, which are both pure strategy Nash equilibria.

The preceding theorem immediately yields the following corollary.

Corollary 2. Supermodular games have the following properties:

1. Pure strategy NE exist.
2. The largest and smallest strategies compatible with iterated strict dominance (ISD), rationalizability, correlated equilibrium, and Nash equilibrium are the same.
3. If a supermodular game has a unique NE, it is dominance solvable (and many learning and adjustment rules converge to it, e.g., best-response dynamics).

Proof. We iterate the best response mapping. Let S^0 = S, and let s^0 = (s^0_1, ..., s^0_I) be the largest element of S. Let s^1_i = B̄_i(s^0_{−i}) and S^1_i = {s_i ∈ S^0_i | s_i ≤ s^1_i}. We show that any s_i > s^1_i, i.e., any s_i ∉ S^1_i, is strictly dominated by s^1_i. For all s_{−i} ∈ S_{−i}, we have

    u_i(s_i, s_{−i}) − u_i(s^1_i, s_{−i}) ≤ u_i(s_i, s^0_{−i}) − u_i(s^1_i, s^0_{−i}) < 0,

where the first inequality follows from the increasing differences of u_i(s_i, s_{−i}) in (s_i, s_{−i}), and the strict inequality follows from the fact that s_i is not a best response to s^0_{−i}. Note that s^1_i ≤ s^0_i.

Iterating this argument, we define

    s^k_i = B̄_i(s^{k−1}_{−i}),    S^k_i = {s_i ∈ S^{k−1}_i | s_i ≤ s^k_i}.

Assume s^k ≤ s^{k−1}. Then, by Corollary 1, we have

    s^{k+1}_i = B̄_i(s^k_{−i}) ≤ B̄_i(s^{k−1}_{−i}) = s^k_i.

This shows that the sequence {s^k_i} is a decreasing sequence, which is bounded from below, and hence it has a limit, which we denote by s̄_i. Only the strategies s_i ≤ s̄_i are undominated. Similarly, we can start with s^0 = (s^0_1, ..., s^0_I) the smallest element of S and identify s̲. To complete the proof, we show that s̄ and s̲ are NE. By construction, for all i and s_i ∈ S_i, we have

    u_i(s^{k+1}_i, s^k_{−i}) ≥ u_i(s_i, s^k_{−i}).

Taking the limit as k → ∞ in the preceding relation and using the upper semicontinuity of u_i in s_i and the continuity of u_i in s_{−i}, we obtain u_i(s̄_i, s̄_{−i}) ≥ u_i(s_i, s̄_{−i}), showing the desired claim. The argument for s̲ is symmetric.
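The iteration in this proof is also a practical algorithm for computing the extreme equilibria (Corollary 2). The Python sketch below is an added illustration, not from the notes; it runs the iteration on a symmetric linear Bertrand duopoly of the kind in Section 2.2, with arbitrary parameter values and an arbitrary price cap P_MAX to make the strategy space compact.

    # Hedged best-response iteration for a symmetric Bertrand duopoly with
    # demand D_i = alpha - b*p_i + d*p_j and unit cost c.
    alpha, b, d, c = 10.0, 2.0, 1.0, 1.0
    P_MAX = 20.0                                    # compact strategy space [0, P_MAX]

    def best_response(p_j):
        """Maximizer of (p_i - c) * (alpha - b*p_i + d*p_j) over [0, P_MAX]."""
        return min(max((alpha + d * p_j + b * c) / (2 * b), 0.0), P_MAX)

    def iterate(p_start, rounds=50):
        """Simultaneous best-response iteration started from (p_start, p_start)."""
        p = [p_start, p_start]
        for _ in range(rounds):
            p = [best_response(p[1]), best_response(p[0])]
        return p

    print("from the largest prices: ", [round(x, 4) for x in iterate(P_MAX)])
    print("from the smallest prices:", [round(x, 4) for x in iterate(0.0)])
    # Both runs converge to p* = (alpha + b*c) / (2*b - d) = 4, so the largest and
    # smallest equilibria coincide and the game is dominance solvable (Corollary 2).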

References

[1] Abreu D., Pearce D., and Stacchetti E., "Toward a theory of discounted repeated games with imperfect monitoring," Econometrica, vol. 58, pp. 1041-1064, 1990.

[2] Green E. and Porter R., "Noncooperative collusion under imperfect price information," Econometrica, vol. 52, pp. 87-100, 1984.

[3] Altman E. and Altman Z., "S-modular games and power control in wireless networks," IEEE Transactions on Automatic Control, vol. 48, pp. 839-842, May 2003.

[4] Huang J., Berry R., and Honig M. L., "Distributed interference compensation for wireless networks," IEEE Journal on Selected Areas in Communications.

[5] Kelly, F. P., "Charging and rate control for elastic traffic," European Transactions on Telecommunications, vol. 8, pp. 33-37, 1997.

[6] Kelly, F. P., Maulloo A. K., and Tan D. K., "Rate control for communication networks: shadow prices, proportional fairness, and stability," Journal of the Operational Research Society, vol. 49, pp. 237-252, 1998.

[7] Low, S. and Lapsley, D. E., "Optimization flow control, I: Basic algorithm and convergence," IEEE/ACM Transactions on Networking, vol. 7(6), pp. 861-874, 1999.

[8] Milgrom, P. and Roberts, J., "Rationalizability, learning and equilibrium in games with strategic complementarities," Econometrica, vol. 58, pp. 1255-1278, 1990.

[9] Milgrom, P. and Shannon, C., "Monotone comparative statics," Econometrica, vol. 62, pp. 157-180, 1994.

[10] Saraydar C., Mandayam N. B., and Goodman D. J., "Efficient power control via pricing in wireless data networks," IEEE Transactions on Communications, vol. 50, no. 2, pp. 291-303, February 2002.

[11] Shenker, S., "Fundamental design issues for the future Internet," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1176-1188, 1995.

[12] Topkis, D., "Equilibrium points in nonzero-sum n-person submodular games," SIAM Journal on Control and Optimization, vol. 17, pp. 773-787.


[13] Topkis, D., Supermodularity and Complementarity, Princeton University Press, 1998. [14] Vives, X., “Nash equilibrium with strategic complementarities,” Journal of Mathematical Economics, vol. 19, 305-321. [15] Vives, X., Oligopoly Pricing, MIT press, 2001. 13