Problem Set 3 - Solutions
March 2011

Question 1

Part a.
Show that every action in the support of a (mixed strategy) Nash equilibrium is independent rationalizable.

Let $\sigma = (\sigma_1, \ldots, \sigma_N)$ be a Nash equilibrium. Take $a_i \in \mathrm{supp}(\sigma_i)$. We know:
$$u_i(a_i, \sigma_{-i}) \geq u_i(a, \sigma_{-i}) \quad \forall a \in A_i.$$
(Using Sofia's solution as a guide.) Proof by contradiction: suppose that there exists an $a_i \in \mathrm{supp}(\sigma_i)$ that is not independent rationalizable. Let $k$ denote the smallest natural number such that there is a player $i$ with $a_i \notin IR_i^k$ (where $a_i \in \mathrm{supp}(\sigma_i)$). This implies that $a_i$ is never an independent best response against $\sigma_{-i}$ for any $\sigma_{-i} \in \prod_{j \neq i} \Delta(IR_j^{k-1})$. Yet since no strategy in the support of the (mixed) NE has been eliminated by stage $k-1$ of iterative deletion, we know $\sigma_{-i} \in \prod_{j \neq i} \Delta(IR_j^{k-1})$, and by definition $a_i$ is a BR to $\sigma_{-i}$ -- a contradiction.

Part b.
Show that every independent rationalizable action is (correlated) rationalizable.

(Using Dongkyu's solution as a guide.) Proof by induction. At $m = 0$ we have $IR^0 = R^0$. Assume that $IR^n \subseteq R^n$. If $a_i \in IR_i^{n+1}$, we then have that there exists $\sigma_{-i} \in \prod_{j \neq i} \Delta(IR_j^n)$ such that:
$$u_i(a_i, \sigma_{-i}) \geq u_i(a_i', \sigma_{-i}) \quad \forall a_i'.$$
Since we assume $IR^n \subseteq R^n$, we then have that $\sigma_{-i} \in \Delta\big(\prod_{j \neq i} R_j^n\big)$. By the above condition, it follows that $a_i \in R_i^{n+1}$. Thus $a_i \in IR_i^n$ implies $a_i \in R_i^n$. This holds for all $i$, so $IR^n \subseteq R^n$ for all $n$, which implies that $IR \subseteq R$.

Part c.
Show that every action in the support of a correlated equilibrium is (correlated) rationalizable.

(Using Sabyasachi's solution as a guide.) Proof by induction. Let $a$ be in the support of the CE $\mu$. Since $a \in A$, where $A := \prod_{i=1}^N A_i$, we have $a \in R^0 = A$. Assume $a \in R^k$. Then $\mu(a_{-i} \mid a_i) \in \Delta(R_{-i}^k)$. By the definition of a CE we have:
$$a_i \in \arg\max_{a_i'} \sum_{a_{-i} \in A_{-i}} \mu(a_{-i} \mid a_i)\, u_i(a_i', a_{-i}).$$

Thus $a_i$ is a BR to $\mu(a_{-i} \mid a_i) \in \Delta(R_{-i}^k)$. This implies $a_i \in R_i^{k+1}$ for all $i$, so $a \in R^{k+1}$. This implies $a \in R$.

Part d.
Show (by example) that not every action in the support of a correlated equilibrium is independent rationalizable.

(Example by Sabyasachi.) Let player 1 be the row player, 2 the column player, and 3 the matrix player.

(A)      L          R
U    1, 1, 1    1, 0, 1
D    0, 1, 0    0, 0, 0

(B)      L          R
U    2, 2, .7   0, 0, 0
D    0, 0, 0    2, 2, .7

(C)      L          R
U    1, 1, 0    1, 0, 0
D    0, 1, 1    0, 0, 1

The following is a correlated equilibrium:
$$\mu(U, L, B) = \mu(D, R, B) = \tfrac{1}{2}.$$
Now assume players 1 and 2 randomize independently: player 1 plays U with probability $p$ and player 2 plays L with probability $q$. Player 3's payoffs from playing A, B, or C are then:
$$A:\ p, \qquad B:\ .7\,\big(pq + (1-p)(1-q)\big), \qquad C:\ 1 - p.$$
Note that if $p \geq .5$,
$$.7\,\big(pq + (1-p)(1-q)\big) \leq .7\,p < p,$$
and if $p \leq .5$,
$$.7\,\big(pq + (1-p)(1-q)\big) \leq .7\,(1-p) < 1 - p.$$
Thus B is never a BR to any belief $(p, q)$, so B is not independent rationalizable.
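The claim that B is never a best response can be checked numerically; the following is a sketch of my own (the function names are mine, not from the solutions), scanning a grid of independent beliefs $(p, q)$:

```python
# Numerical check that B is never a best response for player 3 when players
# 1 and 2 mix independently.  p = P(player 1 plays U), q = P(player 2 plays L);
# player 3's payoffs, taken from the text:
#   A: p,   B: 0.7*(p*q + (1-p)*(1-q)),   C: 1 - p.

def payoffs(p: float, q: float) -> tuple[float, float, float]:
    """Player 3's expected payoff from choosing matrix A, B, or C."""
    a = p
    b = 0.7 * (p * q + (1 - p) * (1 - q))
    c = 1 - p
    return a, b, c

def b_is_best_response_somewhere(steps: int = 200) -> bool:
    """Scan a (steps+1) x (steps+1) grid of beliefs (p, q)."""
    for i in range(steps + 1):
        for j in range(steps + 1):
            p, q = i / steps, j / steps
            a, b, c = payoffs(p, q)
            if b >= max(a, c):      # B weakly best against this belief
                return True
    return False

print(b_is_best_response_somewhere())  # → False
```

The scan confirms the inequality argument: since $pq + (1-p)(1-q) \leq \max\{p, 1-p\}$, B's payoff is at most $.7 \max\{A\text{'s}, C\text{'s}\}$, which is strictly below the larger of the two.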

Part e.
Show (by example) that there exist independent rationalizable actions that are not in the support of any correlated equilibrium.

(Example by Sabyasachi.)

        L       M       R
U     3, 3    2, 3    0, 2
M     2, 1    3, 1    3, 3
D     0, 3    3, 0   -5, -5

All actions are independent rationalizable. Yet it must be the case that $P(D) = 0$ in any CE. Suppose not. If $P(D, L) > 0$ or $P(D, R) > 0$, player 1 has a strictly profitable deviation to play M when told to play D. Yet if $P(D, M) > 0$ and $P(D, L) = P(D, R) = 0$, then player 2 can profitably deviate to play L when told to play M.

Question 2

Part a.
Is $\{R^n\}_{n \geq 1}$ an iterative delete-non-best-response sequence? If yes, verify. If no, explain which property it fails to satisfy.

(Answer given by Vitor.) Notice that $\{R^n\}_{n \geq 1}$ is (IDBR). The first requirement is simple, given by definition. The second requirement is also true, with equality: the left side of the requirement is just the definition of $R_i^{n+1}$. The third requirement is also easy to verify: if there is some player $i$ and some action $a_i \in R_i^n$ that is not a best response to $R_{-i}^n$, then (since this is the requirement that defines $R_i^{n+1}$) we know that $a_i \notin R_i^{n+1}$, and then $R_i^{n+1} \subsetneq R_i^n$, so $R^{n+1} \neq R^n$.

Part b.
Is $\{IR^n\}_{n \geq 1}$ an iterative delete-non-best-response sequence? If yes, verify. If no, explain which property it fails to satisfy.

No. Refer to the example given in the solution to problem 1(d). B is not independent rationalizable; in other words, $B \notin IR_3^1$. Yet we see from the correlated equilibrium that B is a BR to the correlated strategy $\mu_{-3}$ where $\mu_{-3}(U, L) = \mu_{-3}(D, R) = \tfrac{1}{2}$. Thus property 2 would require $B \in X_3^1$, so $\{IR^n\}_{n \geq 1}$ fails to satisfy property 2.

Part c.
Do iterative delete-non-best-response sequences converge? If yes, what can you say about their limits?

(Answer given by Vitor.) By the definition of (IDBR), the sequence does not necessarily converge. I will provide one example. Consider the following game:

      L       R
U   1, 1    1, 0
D   0, 1    0, 0

Name the strategies as $A_1 = \{U, D\}$ and $A_2 = \{L, R\}$ (usual meaning). Now define the following sequence of sets of action profiles: for the first 2 periods
$$X_i^0 = A_i \ \text{for all } i; \qquad X_1^1 = \{U\} \quad \text{and} \quad X_2^1 = \{L\},$$
and for periods $n \geq 2$,
$$X_1^n = \begin{cases} \{U\}, & \text{if } n \text{ even}; \\ \{U, D\}, & \text{if } n \text{ odd}, \end{cases} \qquad X_2^n = \{L\}.$$
In the first 2 periods this clearly satisfies the requirements, since the sets are equal to the rationalizability steps. After the second period I have to check the requirements. First notice that the set of best responses for player 1 is always $\{U\}$ and for player 2 it is always $\{L\}$; these are always contained in $X^n$. Also, it is only in odd periods $n$ that there is a strategy (namely, D) that is not a best response to $X_2^n$, and indeed we have that $X_1^{n+1} \neq X_1^n$. (In fact, subsequent sets are always different by construction.) Then the 3 requirements are satisfied and $\{X^n\}$ is (IDBR). Notice now that $\{X_1^n\}_{n \geq 1}$ oscillates each period, and thus is not convergent. Hence the same is true for $\{X^n\}_{n \geq 1}$, even though this is a pretty simple case.

The correction necessary to rule out such a trivial example (which makes the definition of (IDBR) uninteresting) is to require that the sequence of sets $\{X^n\}$ be a decreasing one. This is in better accordance with the word "delete" in the question, since we require that the set be reduced whenever possible (instead of just requiring that it is different).

If we include in the definition that the sequence $\{X^n\}_{n \geq 1}$ has to be a decreasing one, then it is always the case that it converges after a finite number of steps (we can only eliminate something finitely many times). We can also say that, if $\{X^n\}_{n \geq 1}$ is (IDBR), then $R_i^n \subseteq X_i^n$ for all $n \in \mathbb{N}$. Notice that $X_i^0 = R_i^0 = A_i$. From property 2 of (IDBR) we have
$$\left\{ a_i \in X_i^0 : a_i \text{ is a best response to } X_{-i}^0 \right\} = R_i^1 \subseteq X_i^1.$$
Now, for some arbitrary $n$, assume that $R_i^n \subseteq X_i^n$ for all $i$.
Then we have that, for each $i$,
$$R_i^{n+1} = \left\{ a_i \in R_i^n : a_i \text{ is a best response to } R_{-i}^n \right\} \subseteq \left\{ a_i \in X_i^n : a_i \text{ is a best response to } X_{-i}^n \right\} \subseteq X_i^{n+1},$$
where the middle inclusion is justified since in the definition of $R_i^{n+1}$ we are considering player $i$'s action from a smaller set (since $R_i^n \subseteq X_i^n$) and the requirement we are imposing is stronger (it has to be a best response to something also in a smaller set, since $R_{-i}^n \subseteq X_{-i}^n$). We conclude that $R_i^{n+1} \subseteq X_i^{n+1}$.

But notice that for any $i$, if $a_i \in X_i^\infty$, this means that $a_i$ is a best response to some $a_{-i} \in X_{-i}^\infty$. If this were not the case, then for $n$ sufficiently large $X_{-i}^n = X_{-i}^\infty$, and then $a_i$ is not a best response

to $X_{-i}^n$, which means (by property 3) that for $n$ sufficiently large we must always eliminate something from $X_i^n$, which is impossible since this is a finite set. This already implies that $X_i^\infty \subseteq R_i$ (this is an alternative definition of the sets $(R_i)_{i=1}^I$), but I will prove that they are equal below using the sequential definition from class.

Now suppose, by way of contradiction, that $X_i^\infty \neq R_i^\infty$ for some $i$. Then take some action $a_i \in X_i^\infty \setminus R_i^\infty$. Since $a_i \notin R_i^\infty$, we know that there exists some step $n_1$ such that $a_i \notin R_i^{n_1}$; but since $a_i \in X_i^\infty$ (and it is a decreasing sequence of sets) we know that $a_i \in X_i^{n_1}$, which means (from the previous paragraph) that $a_i$ is a best response to some $a_{-i} \in X_{-i}^{n_1 - 1}$, which must not have been in $R_{-i}^{n_1 - 1}$ (otherwise $a_i$ would be a BR). Then there exists at least some other player $j$ and strategy $a_j \in X_j^{n_1 - 1} \setminus R_j^{n_1 - 1}$. Since $a_j \notin R_j^{n_1 - 1}$, it must have been eliminated in some step $n_2 < n_1$; but by the same logic there must be an agent $j' \neq j$ and an action $a_{j'} \in X_{j'}^{n_2 - 1} \setminus R_{j'}^{n_2 - 1}$, and we can find a step $n_3 < n_2$ such that $a_{j'}$ was eliminated. Hence we could find an infinite strictly decreasing sequence of step numbers $(n_k)_{k \geq 1}$ in this way, which is impossible, since eventually $n_k = 0$.

Question 3

(Answer given by Vitor.) In this exercise we consider an alternative formal way to describe the email information system discussed in class. Instead of using the type space structure, we can describe the situation as a probability space, in which an element describes completely the relevant information for this model, and we can describe the differential information of each player through "coarse" partitions of the sample space. This means that no agent knows everything that happens, but each can distinguish over a set of situations.

Consider the sample space $\Omega = \{0, 1, 2, \ldots\}$, in which an element $\omega \in \Omega$ represents how many messages have been received in total (summing up player 1 and player 2).
I should emphasize that we count as a message the fact that player 1 learns that the state is $\theta_g$. So $\omega = 0$ means that we are in state $\theta_b$, and so no one receives any messages. In the situation $\omega = 1$, the state is $\theta_g$, so player 1 receives the initial message and sends his first message to player 2, which does not reach its destination. Notice that an element $\omega$ describes completely all randomness in the model (over the possible states and the possible errors in the messages), so this description does not throw away any relevant dimension of the problem. Using the same probabilities described in class, we have that
$$P(\omega) = (1 - \varepsilon)^{\omega} \varepsilon.$$
Now we describe the information of each agent in this model. No one knows exactly everything that happens. Player 1's information is given by the sets
$$\mathcal{P}_1 = \{\{0\}, \{1, 2\}, \{3, 4\}, \ldots\},$$
since player 1 knows when the state is bad, i.e. $\omega = 0$; and when he has received $n \geq 1$ emails, the other agent might have received $n - 1$ messages (and $\omega = 2n - 1$) or he might have received $n$ (and then $\omega = 2n$). So his information sets are of the form $\{2n - 1, 2n\}$ for $n$ a positive integer. Player 2's information is given by
$$\mathcal{P}_2 = \{\{0, 1\}, \{2, 3\}, \ldots\}.$$
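The two partitions can be encoded directly; the following is a sketch of my own (the function names `P1` and `P2` are mine), mapping a state $\omega$ to the partition cell containing it:

```python
# Information partitions on Omega = {0, 1, 2, ...} from the text:
#   P1 groups {0}, {1,2}, {3,4}, ...   (player 1's cells)
#   P2 groups {0,1}, {2,3}, ...        (player 2's cells)

def P1(w: int) -> frozenset[int]:
    """Player 1's information set containing state w."""
    if w == 0:
        return frozenset({0})          # player 1 knows the state is bad
    n = (w + 1) // 2                   # messages player 1 has received
    return frozenset({2 * n - 1, 2 * n})

def P2(w: int) -> frozenset[int]:
    """Player 2's information set containing state w."""
    n = w // 2                         # messages player 2 has received
    return frozenset({2 * n, 2 * n + 1})

print([sorted(P1(w)) for w in range(5)])  # → [[0], [1, 2], [1, 2], [3, 4], [3, 4]]
print([sorted(P2(w)) for w in range(4)])  # → [[0, 1], [0, 1], [2, 3], [2, 3]]
```

With these cells in hand, the knowledge operator defined below, $K_i(E) = \{\omega : \mathcal{P}_i(\omega) \subseteq E\}$, is a one-line set comprehension over any finite truncation of $\Omega$.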

If player 2 has received 0 messages, it might be the case that player 1 also did not receive any ($\omega = 0$), or it might be the case that he received one, then sent a message to player 2, and it did not arrive ($\omega = 1$). In general, whenever player 2 has received $n$ messages, he considers it possible that player 1 has also received $n$ messages or that player 1 has received $n + 1$ messages. So his information sets are of the form $\{2n, 2n + 1\}$ for $n$ a nonnegative integer.

For each player $i$, denote by $\mathcal{P}_i(\omega)$ the set in the partition $\mathcal{P}_i$ that contains $\omega$. The interpretation is that whenever $\omega$ happens, player $i$ can only know that something in $\mathcal{P}_i(\omega)$ happened, but cannot differentiate between anything inside this set. The equivalent of the type of an agent in the original model, which describes all information available to him, is now the element of the partition that he recognizes as realized. The equivalent of a type profile is then a pair $(P_1, P_2) \in \mathcal{P}_1 \times \mathcal{P}_2$. Remember that, as in the previous case, most of these pairs have zero probability.

Now an event is any set $E \subseteq \Omega$. So the event "the state is good" can be written as $E_g = \{1, 2, \ldots\}$. Similarly to the case described before, we can say that player $i$ knows event $E$ at partition element $P_i$ if $P_i \subseteq E$, and the knowledge operator can be defined as $K_i(E) = \{\omega \in \Omega : \mathcal{P}_i(\omega) \subseteq E\}$. So we have that $K_1(E_g) = E_g$ and $K_2(E_g) = \{2, 3, \ldots\}$. Similarly we can define $K(E) = \bigcap_i K_i(E)$ and $C(E) = \bigcap_{n \geq 0} K^n(E)$, where $K^n(E) = K\big(K^{n-1}(E)\big)$. Notice that it is also the case that $C(E_g) = \emptyset$.

Question 4

Example (OR Ex. 56.3). Find the rationalizable actions of each player in the following game.

Since this is a 2-player game, we know that the set of actions surviving iterated strict dominance is equivalent to the set of rationalizable actions. Note that $\frac{1}{2} b_1 + \frac{1}{2} b_3$ strictly dominates $b_4$. Thus we may truncate the game and eliminate $b_4$. Then, in the truncated game, $a_4$ is strictly dominated by $a_2$.
Since we can check that there are no more dominated actions, we see $R_1 = \{a_1, a_2, a_3\}$ and $R_2 = \{b_1, b_2, b_3\}$.

Question 5

Part a.
By solving the agents' first order conditions, we find that the unique NE is given by $a^* = (a_1^*, \ldots, a_I^*)$ where:
$$a_i^* = \frac{1}{I + 1} \quad \forall i.$$

Case $I = 2$. Note that $A_i = [0, 1]$ and $g_i(a) = a_i (1 - a_1 - a_2)$. We may rewrite $g_i$ as:
$$g_i(a) = a_i (1 - a_i) - a_i a_j.$$
We then see that:
$$g_i\!\left(\tfrac{1}{2}, a_j\right) = \tfrac{1}{4} - \tfrac{1}{2} a_j > g_i(a_i, a_j) \quad \forall a_i \in (.5, 1],\ a_j \in [0, 1],$$

since $\tfrac{1}{4} > a_i (1 - a_i)$ for all $a_i \in (.5, 1]$, and $\tfrac{1}{2} a_j \leq a_i a_j$ for all $a_i \in (.5, 1]$, $a_j \in [0, 1]$. Thus all $a_i > \tfrac{1}{2}$ are strictly dominated by $a_i = \tfrac{1}{2}$, so $R_i^1 = [0, \tfrac{1}{2}]$.

Assume $R_i^n = [\mathrm{low}^n, \mathrm{high}^n]$ (note that by symmetry we need not index this by $i$). We already know it must be the case that $0 \leq \mathrm{low}^n \leq \mathrm{high}^n \leq \tfrac{1}{2}$. We now want to find $R_i^{n+1}$. If agent $i$ believes agent $j$ plays a strategy $f(a_j)$ with support over $R_j^n$, then $i$'s best response is given by:
$$a_i = \frac{1 - E[a_j]}{2}.$$
Since we know $E[a_j] \in [\mathrm{low}^n, \mathrm{high}^n]$, we get that
$$R_i^{n+1} = \left[ \frac{1 - \mathrm{high}^n}{2},\ \frac{1 - \mathrm{low}^n}{2} \right] \subseteq R_i^n,$$
that is,
$$\mathrm{low}^{n+1} = \frac{1 - \mathrm{high}^n}{2} \quad \text{and} \quad \mathrm{high}^{n+1} = \frac{1 - \mathrm{low}^n}{2}.$$
This implies:
$$\mathrm{high}^{n+2} = \frac{1 - \mathrm{low}^{n+1}}{2} = \frac{1}{4} + \frac{\mathrm{high}^n}{4} \quad \text{and} \quad \mathrm{low}^{n+2} = \frac{1}{4} + \frac{\mathrm{low}^n}{4}.$$
Thus, using the starting points $\mathrm{high}^1 = \tfrac{1}{2}$ and $\mathrm{low}^1 = 0$, we see:
$$\lim_{n \to \infty} \mathrm{high}^n = \lim_{n \to \infty} \mathrm{low}^n = \frac{1}{3}.$$
Thus the NE outcome is the only rationalizable, and thus the only independent rationalizable, outcome.

Case $I \geq 3$. Using the same argument as above, we can show that $R_i^1 = [0, \tfrac{1}{2}]$. We can then see $R_i^2 = [0, \tfrac{1}{2}]$. The reason is that any $x \in [0, \tfrac{1}{2}]$ is a best reply to $a_{-i} = (a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_I)$ where $a_j = \frac{1 - 2x}{I - 1}$ for all $j \neq i$. Since $x \in [0, \tfrac{1}{2}]$, we know $a_{-i} \in R_{-i}^1$. This gives us $R_i^2 = R_i^1 = [0, \tfrac{1}{2}]$, which implies $R_i = [0, \tfrac{1}{2}]$ for all $i$. Precisely the same argument can be used to show that $IR_i = [0, \tfrac{1}{2}]$ for all $i$.

Part b.
(Answer provided by Vitor.)

Nash equilibria: let us characterize an arbitrary Nash equilibrium $\bar{a}$. Suppose (WLOG) that firm 1 produces 0; this means that
$$1 - \gamma \sum_{j \geq 2} \bar{a}_j \leq 0.$$
Then we have that $\sum_{j \geq 2} \bar{a}_j \geq \tfrac{1}{\gamma} > 0$, and so at least some firm is producing positively. Take a firm $i$ that has $\bar{a}_i > 0$; then
$$g_i(\bar{a}) = \bar{a}_i \left( 1 - \bar{a}_i - \gamma \sum_{j \neq i} \bar{a}_j \right) \leq \bar{a}_i \left( 1 - \gamma \sum_{j \geq 2} \bar{a}_j - (1 - \gamma)\, \bar{a}_i \right) < 0,$$
but this is impossible, since each firm has to have payoff at least zero! Then all firms produce positively, which means that
$$\bar{a}_i = \frac{1 - \gamma \sum_{j \neq i} \bar{a}_j}{2};$$
then, summing up over $i$ (and using $\bar{Q} = \sum_i \bar{a}_i$),
$$\bar{Q} = \frac{I - \gamma (I - 1)\, \bar{Q}}{2} \quad \Longrightarrow \quad \bar{Q} = \frac{I}{2 + \gamma (I - 1)}.$$
And we also have
$$\bar{a}_i = \frac{1 - \gamma \big( \bar{Q} - \bar{a}_i \big)}{2} \quad \Longrightarrow \quad \bar{a}_i = \frac{1 - \gamma \bar{Q}}{2 - \gamma} = \frac{1}{2 + \gamma (I - 1)}.$$
So this is the only Nash equilibrium of this game.

Rationalizable strategies: we start with $R_i^0 = [0, 1]$. Once again we can look at best responses to only pure strategies of players $-i$, since everyone only cares about the expected joint production of the opponents (and the pure strategy set is convex). Since the best response function is continuous, the set of best responses to $R^k$ is going to also be an interval, with limits
$$\underline{q}^1 = \max\left\{ 0,\ \frac{1 - \gamma (I - 1)}{2} \right\}; \qquad \bar{q}^1 = \frac{1}{2}.$$

So we have $R_i^1 = [\underline{q}^1, \bar{q}^1]$ for all $i$. Notice that $\underline{q}^1 > 0$ if $\gamma (I - 1) < 1$, and $\underline{q}^1 = 0$ if $\gamma (I - 1) \geq 1$. So now we have two possibilities.

1) If $\underline{q}^1 = 0$, then we have that
$$\bar{q}^2 = \frac{1 - \gamma \cdot 0}{2} = \frac{1}{2}, \qquad \underline{q}^2 = \max\left\{ 0,\ \frac{1 - \gamma (I - 1) \tfrac{1}{2}}{2} \right\}.$$
Once again there are 2 possibilities: either $\underline{q}^2 > 0$, which holds if $\frac{\gamma (I - 1)}{2} < 1$; or $\underline{q}^2 = 0$, which holds if $\frac{\gamma (I - 1)}{2} \geq 1$.

So in the case $\frac{\gamma (I - 1)}{2} \geq 1$, we have that $R_i^2 = [\underline{q}^2, \bar{q}^2] = [\underline{q}^1, \bar{q}^1] = R_i^1$, and so we have that $R_i = [0, \tfrac{1}{2}]$.

Now in the case where $\frac{\gamma (I - 1)}{2} < 1$, we have that $\underline{q}^2 > 0$ and $\bar{q}^2 = \tfrac{1}{2}$, so from this period on we have that
$$\underline{q}^{k+1} = \frac{1 - \gamma (I - 1)\, \bar{q}^k}{2}; \qquad \bar{q}^{k+1} = \frac{1 - \gamma (I - 1)\, \underline{q}^k}{2}.$$
This means that best responses will always be strictly positive from then on. Given the assumption on parameters, this mapping is a contraction. Using $q^* = \frac{1}{2 + \gamma (I - 1)}$, which satisfies $q^* = \frac{1 - \gamma (I - 1)\, q^*}{2}$ (the solution of the Nash equilibrium), we have that
$$\left| q^* - \underline{q}^{k+1} \right| = \frac{\gamma (I - 1)}{2} \left| \bar{q}^k - q^* \right|; \qquad \left| \bar{q}^{k+1} - q^* \right| = \frac{\gamma (I - 1)}{2} \left| \underline{q}^k - q^* \right|.$$
And remember that the case we are focusing on is equivalent to $\frac{\gamma (I - 1)}{2} < 1$, so both of these distances converge to zero, and then
$$R_i = \{q^*\} = \left\{ \frac{1}{2 + \gamma (I - 1)} \right\}.$$
And the only rationalizable strategy is the NE one.
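The contraction above is easy to iterate numerically; the following is a sketch of my own (`iterate_q` and its defaults are not from the original text):

```python
# Iterate the interval bounds from the text, with g = gamma*(I-1):
#   q'    = max(0, (1 - g*qbar) / 2),   qbar' = max(0, (1 - g*q) / 2),
# starting from R^0 = [0, 1].  When g < 2 both bounds converge to 1/(2+g);
# when g >= 2 they get stuck at [0, 1/2], matching the two cases above.

def iterate_q(gamma: float, I: int, steps: int = 200) -> tuple[float, float]:
    g = gamma * (I - 1)
    q, qbar = 0.0, 1.0                 # R^0 = [0, 1]
    for _ in range(steps):
        q, qbar = max(0.0, (1 - g * qbar) / 2), max(0.0, (1 - g * q) / 2)
    return q, qbar

print(iterate_q(gamma=0.5, I=3))   # g = 1 < 2: both bounds ≈ 1/3 = 1/(2+g)
print(iterate_q(gamma=1.0, I=2))   # reproduces the 1/3 limit from part a
print(iterate_q(gamma=1.0, I=4))   # g = 3 >= 2: bounds stay at (0.0, 0.5)
```

With $\gamma = 1$ and $I = 2$ the recursion is exactly the $[\mathrm{low}^n, \mathrm{high}^n]$ iteration from part a, so the same code checks both parts.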

Part b) So the requirement needed for the rationalizable-strategy and Nash equilibrium concepts to coincide is that
$$\frac{\gamma (I - 1)}{2} < 1,$$
which means that either the number of firms is not very large (so that for reasonable production levels every firm still considers producing positively) or the substitution between firms is very small (and each firm does not care so much about what the others are producing). In the case
$$\frac{\gamma (I - 1)}{2} \geq 1,$$
we have that $R_i = [0, \tfrac{1}{2}]$, and rationalizability has considerably less predictive power than Nash.

Question 6

(Answers provided by Vitor.)

Part a.
Characterize the set of correlated equilibria.

First we solve for the set of correlated equilibria. I will name the probabilities as
$$\begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}.$$
We have to make sure that, whenever an agent plays some action with positive probability, this action is optimal given the conditional expected play of the other agent. For player 1 we have 2 optimality conditions. The condition for him to play "Hawk" is
$$p_{11} \cdot 0 + p_{12} \cdot 5 \geq p_{11} \cdot 1 + p_{12} \cdot 4 \iff p_{12} \geq p_{11}, \tag{1}$$
and the condition for him to want to play "Dove" is
$$p_{21} \cdot 1 + p_{22} \cdot 4 \geq p_{21} \cdot 0 + p_{22} \cdot 5 \iff p_{21} \geq p_{22}. \tag{2}$$
For player 2, the condition under which he wants to play "Hawk" is
$$p_{11} \cdot 0 + p_{21} \cdot 5 \geq p_{11} \cdot 1 + p_{21} \cdot 4 \iff p_{21} \geq p_{11}, \tag{3}$$
and the condition for him to want to play "Dove" is
$$p_{12} \cdot 1 + p_{22} \cdot 4 \geq p_{12} \cdot 0 + p_{22} \cdot 5 \iff p_{12} \geq p_{22}. \tag{4}$$
Notice that each of these conditions has to hold whenever the agent plays the corresponding action with positive probability. And in the case where an agent does not play an action with positive probability, the condition is satisfied trivially (since both sides are zero). Therefore these have to hold in any correlated equilibrium.
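The four conditions can be bundled into a small membership test; this is a sketch of my own (the function name and tolerance are assumptions, not from the text):

```python
# Test whether (p11, p12, p21, p22) over the outcomes
# (Hawk,Hawk), (Hawk,Dove), (Dove,Hawk), (Dove,Dove) satisfies
# the correlated-equilibrium conditions (1)-(4) derived above.

def is_correlated_eq(p11, p12, p21, p22, tol=1e-9):
    return (p12 >= p11 - tol and    # (1) player 1 willing to play Hawk
            p21 >= p22 - tol and    # (2) player 1 willing to play Dove
            p21 >= p11 - tol and    # (3) player 2 willing to play Hawk
            p12 >= p22 - tol)       # (4) player 2 willing to play Dove

print(is_correlated_eq(0, 1, 0, 0))              # pure NE (Hawk, Dove) → True
print(is_correlated_eq(0.25, 0.25, 0.25, 0.25))  # mixed NE, independent play → True
print(is_correlated_eq(0, 1/3, 1/3, 1/3))        # → True
print(is_correlated_eq(0, 0, 0, 1))              # always (Dove, Dove) → False
```

The last line illustrates the remark below: putting all mass on a diagonal cell violates the incentive constraints.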

So we have a characterization of the set of correlated equilibria:
$$CE = \left\{ (p_{11}, p_{12}, p_{21}, p_{22}) \in \Delta(A) : (1), (2), (3) \text{ and } (4) \text{ hold} \right\}.$$
Basically, the requirement for a distribution to be a CE is that the diagonal entries are not played with large probabilities. This is so because whenever an agent believes that a diagonal play is occurring with large probability, he has an incentive to deviate (players do not want to "hawk" each other, and do not want to "dove" each other).

Part b.
Characterize the set of expected payoff vectors that players can achieve in correlated equilibrium.

See the graph in the answer to (f).

Part c.
Characterize the set of Nash equilibria.

In this game we have 3 NE (2 pure and one mixed):
$$NE = \left\{ (\text{hawk}, \text{dove}),\ (\text{dove}, \text{hawk}),\ \left( \left( \tfrac{1}{2}, \tfrac{1}{2} \right), \left( \tfrac{1}{2}, \tfrac{1}{2} \right) \right) \right\}.$$

Part d.
Characterize the set of expected payoff vectors that players can achieve in Nash equilibrium.

The payoff vectors achieved by these 3 equilibria are, respectively, $(5, 1)$, $(1, 5)$ and $\left( \tfrac{5}{2}, \tfrac{5}{2} \right)$.

Part e.
Find the correlated equilibrium that maximizes the sum of the players' payoffs and the one that minimizes the sum of the players' payoffs.

Since the set of correlated equilibria is defined by a set of linear inequalities, characterizing the optimal distribution is a simple linear programming problem: we want to find
$$p^{\max} \in \arg\max\ 8 p_{22} + 6 (p_{12} + p_{21}) \quad \text{s.t.} \quad p \in CE.$$
Clearly, if we were not constrained by the equilibrium conditions, the optimum would be to just play $p_{22} = 1$ and get payoffs $(4, 4)$ (which have the maximal sum, 8). Clearly this is not in $CE$, since each player would want to deviate to "Hawk". Then the binding constraints will be the ones for each player playing Dove optimally. So the solution (given the linearity of the problem) is basically to reduce $p_{22}$ and increase $p_{12}$ and $p_{21}$ so that the binding constraints hold with equality, i.e.,
$$p_{12} = p_{21} = p_{22}.$$
Using $p_{11} = 0$ is clearly optimal, since this is the worst possible realization in terms of payoffs, and $p_{11} = 0$ makes the other incentive constraints always hold.
So the optimum is
$$p_{12}^* = p_{21}^* = p_{22}^* = \frac{1}{3}, \qquad p_{11}^* = 0.$$

And the payoff vector achieved is $\left( \tfrac{10}{3}, \tfrac{10}{3} \right)$, which has joint payoff $\tfrac{20}{3}$ (notice that this is bigger than the joint payoff of any NE). By the symmetric argument, the sum-minimizing correlated equilibrium is $p_{11} = p_{12} = p_{21} = \tfrac{1}{3}$, $p_{22} = 0$, with payoff vector $(2, 2)$ and joint payoff 4, which is smaller than the joint payoff of any NE.

Part f.
Plot your answers to (b) and (d) on a graph.

[Graph omitted from this version.]
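As a check on part (e), a brute-force grid search over distributions satisfying (1)-(4) recovers the same optimum; this is a sketch of my own (`best_ce` and the grid resolution are assumptions, not part of the solutions):

```python
# Grid search over distributions (p11, p12, p21, p22) with entries k/denom,
# keeping those satisfying CE conditions (1)-(4), maximizing the joint payoff
#   6*(p12 + p21) + 8*p22
# (off-diagonal cells pay 5+1 = 6 jointly; (Dove,Dove) pays 4+4 = 8).

from fractions import Fraction as F

def best_ce(denom: int = 30):
    best_val, best_p = None, None
    for a in range(denom + 1):
        for b in range(denom + 1 - a):
            for c in range(denom + 1 - a - b):
                d = denom - a - b - c
                p11, p12, p21, p22 = (F(x, denom) for x in (a, b, c, d))
                # conditions (1)-(4) from the text
                if p12 >= p11 and p21 >= p22 and p21 >= p11 and p12 >= p22:
                    val = 6 * (p12 + p21) + 8 * p22
                    if best_val is None or val > best_val:
                        best_val, best_p = val, (p11, p12, p21, p22)
    return best_val, best_p

val, p = best_ce()
print(val, p)   # value 20/3, attained at p = (0, 1/3, 1/3, 1/3)
```

The search confirms the linear-programming argument: joint payoff equals $6 - 6 p_{11} + 2 p_{22}$, and $p_{22}$ can be pushed no higher than $\tfrac{1}{3}$ without violating the Dove constraints.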
