14.15J | Spring 2018 | Undergraduate

Networks

Lecture and Recitation Notes

Lecture notes 1–12 are adapted from the 2009 version of this course by Prof. Daron Acemoglu and Prof. Asu Ozdaglar and from the 2017 version of the course as taught by Prof. Shah.

Ses # Topics Lecture Notes Recitation Notes
1 Introduction to Social, Economic, and Technological Networks Lecture 1 Slides (PDF - 2.6MB)  
2–3 Network Representations, Measures, and Metrics

Directed and undirected graphs, adjacency matrix. Paths, cycles, connectivity, components. Trees, rings, stars, bipartite graphs, hypergraphs. Centrality measures (degree, closeness, betweenness), clustering, structural balance, homophily, and assortative mixing.

Applications: Structural properties of Facebook graph.

Lectures 2 & 3 Slides (PDF) Recitation 1 (not available to OCW users)
4–6 Linear Dynamical Systems, Markov Chains, Centralities

Discrete-time, linear time-invariant systems with constant inputs. Eigenvalue decomposition. Convergence to equilibrium. Lyapunov function. Positive linear systems, Markov chains, and Perron-Frobenius. Random walk on a graph. Eigenvector centrality. Katz centrality. PageRank.

Applications: Web search.

Lectures 4, 5, & 6 Slides (PDF) Recitations 2 & 3 (not available to OCW users)
7 Dynamics Over Graph: Spread of Information and Distributed Computation

Algebraic properties of graphs, Cheeger’s inequality, information spread and consensus.

Applications: social agreement, synchronization, distributed optimization.

Lecture 7 Slides (PDF) Recitation 4 (not available to OCW users)
8 Graph Decomposition and Clustering

Decomposing networks into clusters. Modularity. Spectral clustering and connectivity.

Lecture 8 Slides (PDF)  
9–11 Random Graph Models

Erdős–Rényi graphs. Review of branching processes. Degree distribution, phase transition, connectedness, giant component.

Applications: tipping, six degrees of separation, disease transmissions.

Lectures 9 & 10 Slides (PDF)

Lecture 11 Slides (PDF)

Recitations 5 & 6 (not available to OCW users)
12 Generative Graph Models

Preferential attachment: rich get richer phenomena, power laws. Small world models: clustering and path lengths.

Applications: Internet topology, Facebook and Twitter degree distributions, firm size distributions.

Lecture 12 Slides (PDF)  
13–14 Introduction to Game Theory

Games, pure and mixed strategies, payoffs, Nash equilibrium, Bayesian games.

Applications: tragedy of the commons, peer effects, auctions.

Introduction to Second Half of Course (PDF)

Lecture 13 Slides (PDF)

Lecture 14 Slides (PDF)

Recitation 7 Notes
15 Traffic Flow and Congestion Games Lecture 15 Slides (PDF)  
16 Network Effects (I)

Negative externalities, congestion, Braess’ paradox, routing.

Application: pricing traffic.

Lecture 16 Slides (PDF) Recitation 8 Notes
17 Network Effects (II)

Key players and the social multiplier.

Applications: criminal networks, public good provision, oligopoly.

Lecture 17 Slides (PDF) Recitation 9 Notes
18 Networked Markets

Matching markets, markets with intermediaries, platforms.

Applications: clearinghouses, ad exchanges, labor markets.

Lecture 18 Slides (PDF)  
19 Repeated Games, Cooperation, and Strategic Network Formation

Stable networks, Nash networks, efficient networks.

Applications: co-authorship, R&D networks.

Lecture 19 Slides (PDF) Recitation 10 Notes
20–21 Diffusion Models and Contagion

Positive externalities, strategic complements, coordination games, tipping, lock-in, path dependence.

Applications: diffusion of innovation.

Lecture 20 Slides (PDF)

Lecture 21 Slides (PDF)

Recitation 11 Notes
22–24 Games with Incomplete Information and Introduction to Social Learning, Herding, and Informational Cascades

Rule of thumb and Bayesian learning, social influence, benefits of copying, herd behavior, informational cascades.

Applications: consumer behavior, financial markets.

Lecture 22 Slides (PDF - 4.0MB)

Lecture 23 Slides (PDF)

Lecture 24 Slides (PDF)

Recitation 12 Notes

Topics

  • Repeated games
  • Infinitely repeated prisoner’s dilemma
  • Finitely repeated prisoner’s dilemma 

Repeated Games (A Special Case of Dynamic Games)

In the real world, strategic interactions continue over a period of time. Dynamic games incorporate time structure into sets of strategies and sets of payoffs. 

Recall the Prisoner’s Dilemma:

  C  D
C (1, 1) (-1, 2)
D (2, -1) (0, 0)

Both cooperate (1,1): Cooperative outcome (against individual’s incentives?).
Both defect (0,0): Nash equilibrium outcome.

Question: When can “cooperative outcome” be sustained?
Example: Unilever and P&G price fixing in 2011, or the lysine cartel in the 1990s.

Answer: When the game is played repeatedly, “cooperative outcomes” can be sustained as an equilibrium! This means no incentive to deviate!

Infinitely Repeated Prisoner’s Dilemma

Folk Theorem: (C, C) → (C, C) → … can be an equilibrium outcome if players are patient enough.

   C        D               C        D
C (1, 1)  (-1, 2)    →   C (1, 1)  (-1, 2)    →   …
D (2, -1) (0, 0)         D (2, -1) (0, 0)

Proof:

Consider the following strategy for player 1:

  • t = 1: Play C.
  • t ≥ 2: If player 2 has played C in every period before t, play C; otherwise, play D.

Consider the same strategy for player 2. 

Let’s check that this is an equilibrium:

  1. At period t, suppose they have been playing (C, C).
    By sticking to C, one obtains 1 + δ + δ² + … = 1/(1 − δ).
    By deviating to D, one obtains 2 + 0 + 0 + … = 2.
    → If δ ≥ ½, no incentive to deviate.
  2. At period t, suppose either has deviated to D.
    By sticking to D, one obtains 0 + 0 + … = 0.
    By deviating to C, one obtains -1 + 0 + … = -1.
    → No incentive to deviate. 

Thus, this pair of strategies constitutes an equilibrium. Intuition: players can punish an opponent’s deviation in future periods.
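The two deviation checks above can be verified numerically; a minimal sketch (the helper names are mine, payoffs and the δ ≥ ½ threshold are from the game above):

```python
def cooperate_value(delta, periods=200):
    # Discounted payoff from (C, C) forever: 1 + delta + delta^2 + ... ≈ 1/(1 - delta).
    return sum(delta**t for t in range(periods))

def deviate_value(delta, periods=200):
    # Deviating to D yields 2 today, then (D, D) = 0 in every later period.
    return 2 + sum(0 * delta**t for t in range(1, periods))

# Cooperation is sustainable exactly when delta >= 1/2.
print(cooperate_value(0.6) >= deviate_value(0.6))  # True (patient)
print(cooperate_value(0.4) >= deviate_value(0.4))  # False (impatient)
```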

Finitely Repeated Prisoner’s Dilemma

What if players can only play for a fixed number of periods T? Cooperation is not sustainable.

Proof:

  1. Consider similar strategies and suppose they have cooperated through period T − 1. At period T:
    By sticking to C, one obtains 1.
    By deviating to D, one obtains 2. 
    → Incentive to deviate.
  2. Consider similar strategies except that both play D at T. At period T − 1:
    By playing C, one obtains 1 + 0 = 1.
    By deviating to D, one obtains 2 + 0 = 2.
    → Incentive to deviate.

By induction, they have incentives to deviate unless they play (D, D) in all periods.

Topics

  • Strategic substitutes
  • Strategic complements (local network effects, continued)
  • Subgame perfect equilibrium
  • Rubinstein’s bargaining game

Strategic Substitutes

If other players act more aggressively, you have incentive to act less aggressively, and vice versa.

Strategic Complements

If other players act more aggressively, you have incentive to act more aggressively, and vice versa.

Example: Playing Sports with Friends

  • N = {1, 2, 3}
  • Si = ℝ+ = [0, ∞)
  • ui(xi, x−i, δ, G) = xi − ½xi² + δ∑(j≠i) gij xi xj
    • Where δ ≥ 0 is the degree of complementarity.
  • Best response of player i: BRi(x−i) = 1 + δ∑(j≠i) gij xj (from the first-order condition).
  • Collectively, BR(x) = 𝟙 + δGx.
  • The equilibrium (fixed point) is x* = (I − δG)⁻¹𝟙.
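This fixed point can be computed directly; a quick sketch using a hypothetical 3-player line graph and a δ chosen so that the spectral radius of δG stays below 1 (both are my choices for illustration):

```python
import numpy as np

# Hypothetical line graph 1 - 2 - 3 (g12 = g21 = g23 = g32 = 1).
G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
delta = 0.3  # keeps the spectral radius of delta * G below 1

# Equilibrium of the complements game: x* = (I - delta * G)^(-1) 1.
x_star = np.linalg.solve(np.eye(3) - delta * G, np.ones(3))

# Each action equals the best response 1 + delta * sum_j g_ij x_j.
print(np.allclose(x_star, 1 + delta * G @ x_star))  # True
```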

Subgame Perfect Equilibrium

Nash equilibrium only requires mutual optimality:

  • Issue 1: In dynamic games, players cannot change strategies already taken in the past, so threats must be credible.
    • Subgame perfect equilibrium (SPE).
  • Issue 2: Some players may know what other players don’t.
    • (Perfect) Bayesian Nash equilibrium.

Example: Battle of the Sexes

  Shopping  Football
Shopping (3, 2) (0, 0)
Football (0, 0) (2, 3)

  • Where (Shopping, Shopping) and (Football, Football), with payoffs (3, 2) and (2, 3), are the two Nash equilibria.

Suppose the girl wakes up very early. Then, she can wait at Starbucks in the mall.

Then, (3, 2) is the unique SPE. The guy claiming that he’ll definitely go to football no matter what is an “empty threat.”

Rubinstein’s Bargaining Game

  • Seller (player 1) does not value the good: v1 = 0.
  • Buyer (player 2) values the good: v2 = 1.

Question: How is the price determined? 
Answer: Many models, e.g. the take-it-or-leave-it offer. Rubinstein’s is important because it can generate price ≈ ½ without altruism.

Suppose they make alternating offers of prices until either accepts an offer. What is the SPE?

When traded at price p:

  • Player 1 receives payoff p.
  • Player 2 receives payoff q := 1 − p.

Trick: Let p̲ and p̅ be the minimum and maximum payoffs player 1 can receive in his turn. Similarly, define q̲ and q̅.

  • In 1’s turn, since any offer leaving player 2 at least δq̅ is accepted, p̲ ≥ 1 − δq̅.
  • Since any offer leaving player 2 less than δq̲ is rejected, p̅ ≤ 1 − δq̲.
  • Similarly, in 2’s turn, we get q̲ ≥ 1 − δp̅ and q̅ ≤ 1 − δp̲.
  • Combining, we get:
    • p̲ ≥ 1 − δq̅ ≥ 1 − δ(1 − δp̲) ⇒ p̲ ≥ (1 − δ)/(1 − δ²) = 1/(1 + δ).
    • p̅ ≤ 1 − δq̲ ≤ 1 − δ(1 − δp̅) ⇒ p̅ ≤ (1 − δ)/(1 − δ²) = 1/(1 + δ).
  • Thus, p̲ = p̅ = 1/(1 + δ) and q̲ = q̅ = 1/(1 + δ).

Bottom line: If there is any SPE, player 1’s payoff when he proposes (t odd) must be 1/(1 + δ), and player 2’s payoff when he proposes (t even) must be 1/(1 + δ).

Indeed, this equilibrium payoff profile is attainable if and only if they take the following strategies:

  • Player 1: At t odd, offer p = 1/(1 + δ);
    at t even, accept any offer above p = δ/(1 + δ).
  • Player 2: At t odd, accept any offer above q = δ/(1 + δ);
    at t even, offer q = 1/(1 + δ).

On the equilibrium path, 1 offers p = 1/(1 + δ) and 2 immediately accepts it; end of game.
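These shares can be sanity-checked numerically: the two shares exhaust the surplus, and the responder’s acceptance threshold equals his discounted continuation value as next period’s proposer (the δ value is arbitrary):

```python
delta = 0.9  # arbitrary discount factor in (0, 1)

p = 1 / (1 + delta)      # proposer's share in his own turn
q = delta / (1 + delta)  # responder's share

# The two shares exhaust the surplus.
print(abs(p + q - 1) < 1e-12)  # True

# The responder's threshold equals his discounted value as next proposer.
print(abs(q - delta * p) < 1e-12)  # True
```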

Topics

  • Bargaining on Networks
  • Contagion Models
  • Mean-Field Approximation

Recall

  • Take-it-or-leave-it offer:
    • One party makes an offer. 
      Then, the other party accepts or rejects it. 
      Game ends.
    • The proposer takes all the surplus. 
  • Rubinstein’s bargaining:
    • One party makes an offer. 
      The other party accepts or rejects. 
      If rejected, he makes a counter offer. 
      Continue until a party accepts.
    • The first proposer takes 1/(1 + δ). The offeree takes δ/(1 + δ).

Bargaining on Networks

What if there are two sellers?

  • Take-it-or-leave-it offer:
    • If S1 offers p, S2 has incentive to offer p — ε.
    • Even if sellers make (simultaneous) offers, they receive 0. The buyer receives 1.
  • Rubinstein’s bargaining:
    • If S1 offers 1/(1 + δ), S2 has incentive to offer 1/(1 + δ) − ε.
    • Again, the sellers receive 0, the buyer takes it all. 

Contagion Models

Recall games with externalities

  • N = [0, 1]
  • Si = {Adopt, Not Adopt}
  • Payoffs depend on X, the share of adoption.

This has externalities, but network structure doesn’t matter. In reality, you may register for Instagram not only because it is popular, but also because your friends have it. 

Consider an undirected graph (V, E), where V is the set of vertices, E the set of edges, and N(i) ⊂ V the set of neighbors of vertex i ∈ V.

  • N = V
  • Si = {Adopt, Not Adopt} = {1, 0}.

Example

  • Too complicated to solve exactly (a many-body problem).
  • Nor do we care about the particular solution on a particular network. We are often concerned with the degree distribution, but not finer details.
  • Use a mean-field approximation to solve for the stationary equation.

To solve:

  1. Consider a random graph with the given degree distribution and fix some neighbor adoption probability X.
  2. Obtain the best response strategy given X and compute the adoption share.
  3. Derive the fixed point.

Example

Let N = V = [0, 1].

  • Agents are heterogeneous: ci ~ F([0, 1]).
  • Degree distribution di ~ D.

To solve:

  1. Fix some X. Pick agent i at random.
  2. Agent i with ci and di would adopt if ci ≤ V(di, X).
  3. Since ci ~ F([0, 1]), the probability that a random agent with degree di (but unknown ci) adopts is F(V(di, X)). Since di ~ D, the share of adoption is 𝔼D[F(V(di, X))]. From here, we can compute the neighbor adoption share.
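The steps above can be sketched as a fixed-point iteration. Everything concrete here is an assumption for illustration: the value function V(d, X), the uniform cost distribution, and a sampled degree distribution standing in for D (the size-biased neighbor-degree correction is omitted for simplicity):

```python
import random

random.seed(0)

def V(d, X):
    # Hypothetical value of adopting for degree d when a share X of neighbors adopt.
    return 0.4 * d * X

def F(c):
    # ci ~ Uniform[0, 1]: CDF clipped to [0, 1].
    return max(0.0, min(1.0, c))

# Empirical stand-in for the degree distribution D.
degrees = [random.randint(1, 5) for _ in range(10_000)]

def adoption_share(X):
    # Mean-field share of adoption: E_D[F(V(d, X))].
    return sum(F(V(d, X)) for d in degrees) / len(degrees)

# Iterate to the fixed point X* = E_D[F(V(d, X*))].
X = 0.5
for _ in range(100):
    X = adoption_share(X)
print(round(X, 3))
```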

Topics

  • What is a game?
  • Normal form games
  • Equilibria

Games

Why game theory? Games on networks!

Ex. congestion, international trade, Amazon’s new office location, peer effects in school learning, deciding state taxes.       
A game is a representation of strategic interaction. 

Example: Prisoner’s Dilemma

              2: Silent     2: Confess
1: Silent     (-2, -2)      (-20, 0)
1: Confess    (0, -20)      (-10, -10)

Example: Cournot Competition

How many iPhones should Apple produce?

  • Apple produces q1 iPhones at marginal cost $500.
  • Samsung produces q2 Galaxies at marginal cost $500.
  • Price given by inverse demand P = 2000 − Q, where Q = q1 + q2.
  • Apple’s profit: Pq1 − 500q1.
  • Samsung’s profit: Pq2 − 500q2.

Normal Form Games

Formally, a game consists of 3 elements:

  1. The set of players N.
  2. The sets of strategies {Si}i∈N.
  3. The sets of payoffs {ui: S → ℝ }i∈N.

Example: Prisoner’s Dilemma

  • N = {1, 2} 
  • S1 = {silent, confess}, S2 = {silent, confess}
  • u1 : S1 × S2 → ℝ and u2 : S1 × S2 → ℝ are given by the table, where each cell lists (u1, u2).

              2: Silent     2: Confess
1: Silent     (-2, -2)      (-20, 0)
1: Confess    (0, -20)      (-10, -10)

Example: Cournot Competition

  • N = {1, 2}
  • S1 = [0, ∞), S2 = [0, ∞)
    • We ignore the constraint that quantities be integers.
  • u1 : S → ℝ and u2 : S → ℝ given by
    ui(q1, q2) = (P − 500)qi = (2000 − q1 − q2 − 500)qi

In many cases, the sets of strategies have some structure:

  1. Simultaneous games (penalty kicks in soccer).
  2. Repeated games (Libor rate manipulation scandal).
  3. Sequential games (how should the US respond to China’s tariffs?).

What happens when there is a game-like situation?        
There are many variations…

  • Weak prediction: “Dominated strategies are never played.”
  • Strong prediction: “Mutually optimal strategies are played.”

Elimination of strictly dominated strategies

Example: Prisoner’s Dilemma

 

[Figure: 2-by-2 payoff table with three cells crossed out, leaving (Confess, Confess).]

 

Example: Battle of the Sexes

[Figure: 2-by-2 payoff table with the two Nash equilibria circled.]

No elimination needed.

Equilibria

Nash equilibrium - A strategy profile with no incentive for any player to deviate unilaterally, so it can be sustained.

Given the opponents’ strategies, what would you do?       
“Best response correspondence” Bi : S−i ⇉ Si

  • Bgirl(musical) = {musical}
  • Bgirl(soccer) = {soccer}
  • Bboy(musical) = {musical}
  • Bboy(soccer) = {soccer}

⇒ (M, M) and (S, S) are mutually optimal; “Nash equilibria.”

When the best response correspondence only has one element, we may instead use the best response function (Bgirl(musical) = musical).
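The mutual-optimality check can be automated for any 2-by-2 game; a sketch, assuming the musical/soccer payoffs mirror the Shopping/Football table above ((3, 2) and (2, 3) on the diagonal):

```python
from itertools import product

# Payoffs (girl, boy); M = musical, S = soccer, assumed to mirror
# the (3, 2) / (2, 3) Battle of the Sexes table above.
payoffs = {
    ("M", "M"): (3, 2), ("M", "S"): (0, 0),
    ("S", "M"): (0, 0), ("S", "S"): (2, 3),
}

def is_nash(s1, s2):
    # Mutually optimal: neither player gains from a unilateral deviation.
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(a, s2)][0] <= u1 for a in "MS")
            and all(payoffs[(s1, a)][1] <= u2 for a in "MS"))

print([s for s in product("MS", repeat=2) if is_nash(*s)])
# [('M', 'M'), ('S', 'S')]
```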

Example: Cournot Competition

Given Samsung’s production q2, Apple wants to maximize its profit

u1(q1, q2) = (1500 − q1 − q2)q1.

That is, B1(q2) = ½(1500 − q2). Similarly, B2(q1) = ½(1500 − q1).

Nash equilibrium is the fixed point:

q1* = ½(1500 − q2*), q2* = ½(1500 − q1*) ⇒ q1* = q2* = 500.
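The fixed point can also be reached by iterating the best responses; a minimal sketch (the iteration scheme is my choice, the best responses follow from the profit function above):

```python
def B1(q2):
    # Apple's best response: maximize (1500 - q1 - q2) * q1 over q1 >= 0.
    return max(0.0, (1500 - q2) / 2)

def B2(q1):
    # Samsung's best response, by symmetry.
    return max(0.0, (1500 - q1) / 2)

# Iterate the best responses; the contraction converges to the fixed point.
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = B1(q2), B2(q1)

print(round(q1), round(q2))  # 500 500
```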

Topics

  • Review
  • Congestion Games
  • Potential Games

Recap from Last Week

A game consists of 3 elements:

  1. Set of players N
  2. Sets of strategies {Si}i∈N
  3. Sets of payoffs {ui}i∈N

Given a game, what do we do?

  1. Maximize the sum of everyone’s payoff.
    • Socially optimal outcome (what we want to happen).
  2. Maximize individual’s payoff conditional on everyone else’s strategy and find a fixed point.
    • Nash equilibrium (what we think will happen).

1 & 2 are often different because there are strategic interactions and individual incentives are misaligned.

Some games have useful structures, which impose useful restrictions on {Si} and {ui}.

  • Dynamic games
  • Games on networks

Congestion Games

Paths are labeled (edge, traffic, cost/duration):

  • Directed network (J, E) = ({A, B, C}, {1, 2, 3, 4}).
  • Set of paths P = {p1, p2, p3} = {(1), (2,3), (2,4)}.
  • Traffic on paths {xp1, xp2, xp3} = {x1, x3, x4}.
  • Total (social) cost: x1*L1(x1) + x2*L2(x2) + x3*L3(x3) + x4*L4(x4).
    • Minimizing this with respect to x1, … x4 gives socially optimal traffic.

As a game, this can be written:

  • N = [0, 1]
  • Si = {p1, p2, p3}
  • ui(pi, everyone else’s strategies) = ui(pi, x1, …, x4)

However, each individual driver incurs:

  • L1(x1) if he takes p1.
  • L2(x2) + L3(x3) if he takes p2.
  • L2(x2) + L4(x4) if he takes p3.

So, his best response correspondence is to choose a path that gives minimum individual cost.

  • In equilibrium, we must have L1(x1) = L2(x2) + L3(x3) = L2(x2) + L4(x4).

Example:

Paths are labeled (edge/path, traffic, individual cost):

  • Model constraint: x1 + x2 = 1.
  • Total cost: x1·L1(x1) + x2·L2(x2) = x1² + x2 = x1² + (1 − x1) = (x1 − ½)² + ¾.
  • Socially optimal traffic: (x1S, x2S) = (½ , ½).

Each individual chooses the path with the lower cost, so in equilibrium:

  • L1(x1E) = L2(x2E), so (x1E, x2E) = (1, 0).
  • Equilibrium total cost is x1E*L1(x1E) + x2E*L2(x2E) = 1 ≥ ¾.

How to solve this? 

  1. Impose toll C on p1.
  2. Then, ui(p1, x1, x2) = x1 + C.
  3. We want L1(x1S) + C = L2(x2S).
  4. C = ½.
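The whole example can be checked numerically; a sketch with the latency functions L1(x) = x and L2(x) = 1 implied by the total-cost expression above:

```python
# Latency functions implied by the total cost above:
# L1(x) = x (congestible road), L2(x) = 1 (constant road), with x1 + x2 = 1.
def total_cost(x1):
    return x1 * x1 + (1 - x1)

# Social optimum: minimize the total cost over a grid.
x1_opt = min((i / 10_000 for i in range(10_001)), key=total_cost)
print(x1_opt, total_cost(x1_opt))  # 0.5 0.75

# Equilibrium without a toll: drivers enter road 1 until L1(x1) = L2(x2).
print(total_cost(1.0))  # 1.0

# Toll restoring the optimum: L1(1/2) + C = L2(1/2), so C = 1/2.
C = 1 - 0.5
print(C)  # 0.5
```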

Potential Games

In physics, particles move along the unique potential field.

In society, people move along their own incentives.

However, there are cases where people’s incentives are so similar that they move as if there is a unique potential field.

Example: Traffic Congestion

Definition: A game is an exact potential game if there exists Φ (a common potential) such that:

  • ui(Si, S−i) − ui(Si′, S−i) = Φ(Si, S−i) − Φ(Si′, S−i).
  • That is, i’s incentive matches the potential’s gradient.

Topics

  • Games with homogeneous externalities
  • Games with local network effects

Recall

Congestion games had simple sets of payoffs but complicated network structure. There are games that have complicated payoffs with simple network structure. 

Homogeneous Externalities

In classical economics, the demand curve is assumed decreasing and the supply curve increasing. The existence of network effects may create a weird shape of the demand curve. 

  • N = [0, 1] continuum of players
  • Si = {buy, not buy}
  • ui(Si, S−i) = ui(Si, x) =
    vi·x − p   if Si = buy
    0          if Si = not buy
  • vi ~ F([0, 1])
  • p > 0
  • Where x is the share of agents who buy, vi is agent i’s value, and p is the price.

Example: Office suites, SNS, etc.

  • x = 1 − F(v̄), where v̄ is the lowest value among agents who buy.

Claim: If an agent with value vi is better off buying, then any agent with vj ≥ vi is also better off buying.

Corollary: We may assume without loss of generality that the buyers (a mass x of agents) are those with the highest values, both in equilibrium and in the socially optimal outcome.

  1. Socially optimal outcome
    1. Social welfare = ∫_{v̄}^{1} [v(1 − F(v̄)) − p] dF(v).
    2. Maximizing this with respect to v̄ yields the social optimum.
  2. Nash equilibrium
    1. Pooling equilibrium: 
      If no one has the good (x = 0), then no one has an incentive to buy (vi · 0 − p < 0). Therefore, x* = 0 is an equilibrium. 
    2. Separating equilibrium: 
      If someone buys (x > 0), then there exists a lowest type v̄ who buys. His incentive must balance: v̄·x − p = 0.

Note that everyone’s strategy is summarized by x. So consider the aggregate best response function BR(x). With v̄·x − p = 0 and x = 1 − F(v̄), we find:

  • BR(x) = 1 — F(p/x).
  • Its fixed point x* is an equilibrium. 
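A sketch of this fixed-point computation with F uniform on [0, 1] (the specific price p is my choice; any p < ¼ gives two interior fixed points):

```python
import math

p = 0.16  # price; any p < 1/4 yields two interior fixed points

def F(v):
    # vi ~ Uniform[0, 1]
    return max(0.0, min(1.0, v))

def BR(x):
    # Aggregate best response: the marginal buyer satisfies v * x = p.
    return 1 - F(p / x) if x > 0 else 0.0

# Interior fixed points of x = 1 - p/x, i.e. roots of x^2 - x + p = 0.
lo = (1 - math.sqrt(1 - 4 * p)) / 2
hi = (1 + math.sqrt(1 - 4 * p)) / 2
print(round(lo, 2), round(hi, 2))  # 0.2 0.8

# Both are fixed points of BR; x = 0 (pooling) is a third equilibrium.
print(abs(BR(lo) - lo) < 1e-9, abs(BR(hi) - hi) < 1e-9, BR(0.0) == 0.0)
# True True True
```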

Local Network Effects

Some games have both complicated payoff structure and complicated network structure.

 

 

  • N = {1, 2, 3}.
  • Si = ℝ+ = [0, ∞).
  • ui(xi, x−i, δ, G) = xi − ½xi² − δ∑(j≠i) gij xi xj.

i’s best response satisfies

  • BR(x) = max{𝟘, 𝟙 − δGx}.
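The constrained best response can be iterated to a fixed point; a sketch using a hypothetical 3-player line graph (the graph and δ are my choices for illustration):

```python
import numpy as np

# Hypothetical line graph 1 - 2 - 3.
G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
delta = 0.4

# Iterate the constrained best response BR(x) = max{0, 1 - delta * G x}.
x = np.zeros(3)
for _ in range(1000):
    x = np.maximum(0.0, 1.0 - delta * G @ x)

# x is now a fixed point: every action is a (nonnegative) best response.
print(np.allclose(x, np.maximum(0.0, 1.0 - delta * G @ x)))  # True
```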
