\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\reversemarginpar \topmargin -1in
\oddsidemargin .25in \textheight 9.4in \textwidth 6.4in
\begin{document}
\parindent 24pt \parskip 10pt
\setcounter{page}{374}
\begin{LARGE}
\begin{center}
{\bf PART III Applications} \end{center} \end{LARGE}
\noindent 11 November 2005. Eric Rasmusen, Erasmuse@indiana.edu.
http://www.rasmusen.org.
\newpage
\begin{LARGE}
\begin{center}
{ \bf 12 Bargaining}
\end{center}
\end{LARGE}
\bigskip \noindent
{\bf 12.1 The Basic Bargaining Problem: Splitting a Pie}
\noindent
Part III of this book is designed to stretch your muscles by providing more
applications of the techniques from Parts I and II. The next three chapters may
be read in any order. They concern three ways that prices might be determined.
Chapter 12 is about bargaining-- where both sides exercise market power.
Chapter 13 is about auctions-- where the seller has market power, but sells a
limited amount of a good and wants buyers to compete against each other. Chapter
14 is about fixed-price models with a variety of different features such as
differentiated or durable goods. One thing all these chapters have in common
is that they use new theory to answer old questions.
Bargaining theory attacks a kind of price determination ill described by
standard economic theory. In markets with many participants on one side or the
other, standard theory does a good job of explaining prices. In competitive
markets we find the intersection of the supply and demand curves, while in
markets monopolized on one side we find the monopoly or monopsony output. Where
theory is less satisfactory is when there are one or few players on both sides
of the market. Early in one's study of economics, one learns that under
bilateral monopoly (one buyer and one seller), standard economic theory is
inapplicable because the traders must bargain. In the chapters on asymmetric
information we would have come across this repeatedly except for our assumption
that either the principal or the agent faced competition, which we could model
as the other side's ability to make a take-it-or-leave-it offer.
Sections 12.1 and 12.2 introduce the archetypal bargaining problem, Splitting
a Pie, ever more complicated versions of which make up the rest of the chapter.
Section 12.2, where we take the original rules of the game and apply the Nash
bargaining solution, is our one dip into cooperative game theory in this book.
Section 12.3 looks at bargaining as a finitely repeated process of offers and
counteroffers, and Section 12.4 views it as an infinitely repeated process,
leading up to the Rubinstein model. Section 12.5 returns to a finite number of
repetitions (two, in fact), but with incomplete information. Finally, Section
12.6 approaches bargaining from the different angle of the Myerson-Satterthwaite model: how people could try to construct a mechanism for
bargaining, a pre-arranged set of rules that would maximize their expected
surplus.
\begin{center}
{\bf Splitting a Pie }
\end{center} {\bf Players}\\
Smith and Jones.
\noindent {\bf The Order of Play}\\
The players choose shares $\theta_s$ and $\theta_j$ of the pie simultaneously.
\noindent
{\bf Payoffs}\\
If $\theta_s + \theta_j \leq 1$, each player gets the fraction he chose:
\begin{equation}
\left\{
\begin{array}{ll}
\pi_s = & \theta_s \\
\pi_j = & \theta_j \\
\end{array}
\right.
\end{equation}
If $\theta_s + \theta_j > 1$, then $\pi_s=\pi_j = 0.$
Splitting a Pie resembles the game of Chicken except that it has a continuum of
Nash equilibria: any strategy profile ($\theta_s$, $\theta_j$) such that
$\theta_s+\theta_j = 1$ is Nash. The Nash concept is at its worst here, because
the assumption that the equilibrium being played is common knowledge is very
strong when there is a continuum of equilibria. The idea of the focal point
(section 1.5) might help to choose a single Nash equilibrium. The strategy space
of Chicken is discrete and it has no symmetric pure-strategy equilibrium,
but the strategy space of Splitting a Pie is continuous, which permits a
symmetric pure-strategy equilibrium to exist. That equilibrium is the even
split, (0.5, 0.5), which is a focal point.
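The continuum of equilibria is easy to verify by brute force; here is a minimal sketch in Python (the grid and the test point are my choices, not part of the model). For any profile on the line $\theta_s + \theta_j = 1$, no unilateral deviation raises a player's payoff:

```python
# Any profile (theta_s, theta_j) with theta_s + theta_j = 1 is Nash: check
# that no unilateral deviation on a grid raises Smith's payoff (Jones's case
# is symmetric).
def payoff_s(theta_s: float, theta_j: float) -> float:
    return theta_s if theta_s + theta_j <= 1 else 0.0

grid = [i / 100 for i in range(101)]
theta_s, theta_j = 0.3, 0.7          # an arbitrary point on the continuum
best_dev = max(payoff_s(d, theta_j) for d in grid)
print(best_dev == payoff_s(theta_s, theta_j))   # True: no profitable deviation
```

Demanding more than $0.3$ pushes the total over 1 and yields zero, so every division of the whole pie is self-enforcing.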
If the players move in sequence, Splitting a Pie becomes what is known as the
{\bf
Ultimatum Game}, which has a tremendous first-mover advantage. If Jones
moves first, the unique subgame-perfect outcome would be $(0,1)$, though the
equilibrium is only weak, because Smith is indifferent as to his action. (This
is the same open-set problem that was discussed in Section 4.3.) In the unique
subgame-perfect equilibrium, Smith
accepts Jones's offer by choosing
$\theta_s=0$ so that $\theta_s +\theta_j = 1$. Of course, if we add to the
model even a small amount of ill will by Smith against Jones for making such a
selfish offer, Smith
would pick $\theta_s>0$ and reject the
offer. That is quite realistic, so depending on the amount of ill will, the
equilibrium would have Jones making a more generous offer that depends on
Smith's utility tradeoff between getting a share of the pie on the one hand and
seeing Jones suffer on the other.
In many applications, this version of Splitting a Pie is unacceptably
simple, because if the two players find their fractions add to more than 1 they
have a chance to change their minds. In labor negotiations, for example, if
manager Jones makes an offer which union Smith rejects, they do not immediately
forfeit the gains from combining capital and labor. They lose a week's
production and make new offers. We will model just such a sequence of offers,
but before we do that let us see how cooperative game theory deals with the
original game.
\bigskip \noindent
{\bf 12.2 The Nash Bargaining Solution}
\noindent A quite different approach to game theory from the one we have been
using in this book is to describe the players and payoff functions for a game,
decide
upon some characteristics an equilibrium should have based on notions of
fairness or efficiency, mathematicize the characteristics, and maybe add a few
other axioms to make the equilibrium turn out neatly. This is a reduced-form
approach, attractive if the modeller finds it difficult to come up with a
convincing order of play but thinks he can say something about what outcome will
appear. Nash (1950a) did this for
the bargaining problem in what is the best-known application of cooperative
game theory. Nash's objective was to pick axioms that would characterize the
agreement the two players would anticipate making with each other. He used a
game only a little more complicated than Splitting a Pie. In the Nash model, the
two players can have different utilities if they do not come to an agreement,
and the utility functions can be nonlinear in terms of shares of the pie.
Figures 1a and 1b compare the two games.
\includegraphics[width=150mm]{fig12-01.jpg}
\begin{center} {\bf Figure 1: (a) Nash Bargaining Game; (b) Splitting a Pie}
\end{center}
In Figure 1, the shaded region denoted by $X$ is the set of feasible payoffs,
which we will assume to be convex. The pair of disagreement payoffs or {\bf
threat point} is $\bar{U} =
(\bar{U}_s, \bar{U}_j)$. The Nash bargaining solution, $U^* = (U_s^*, U_j^*)$,
is a function of $\bar{U}$ and $X$ that satisfies the following four axioms.
\noindent {\it 1 Invariance.} For any strictly increasing linear function $F$,
\begin{equation} \label{e11.1}
U^*[F(\bar{U}), F(X)] = F[U^*(\bar{U}, X)]. \end{equation}
This says that the solution is independent of the units in which utility is
measured.
\noindent
{\it 2 Efficiency.} The solution is Pareto optimal, so the players cannot
both be made better off by any change. In mathematical terms,
\begin{equation} \label{e11.2}
(U_s, U_j) > U^* \Rightarrow (U_s,U_j) \not\in X. \end{equation}
\noindent
{\it 3 Independence of Irrelevant Alternatives.} If we drop some possible
utility profiles from $X$, leaving the smaller set $Y$, then if $U^*$ was not
one of the dropped points, $U^*$ does not change. \begin{equation} \label{e11.3}
U^*(\bar{U},X) \in Y \subseteq X \Rightarrow U^*(\bar{U},Y) = U^*(\bar{U},X).
\end{equation}
\noindent {\it 4 Anonymity (or Symmetry).} Switching the labels on players Smith
and Jones does not affect the solution.
The axiom of Independence of Irrelevant Alternatives is the most debated of the
four, but if I were to complain, it would be about the axiomatic approach
itself, which depends heavily on the intuition behind the axioms. Everyday
intuition says that the outcome should be efficient and symmetric, so that other
outcomes can be ruled out a priori. But most of the games in the earlier
chapters of this book turn out to have reasonable but inefficient outcomes, and
games like Chicken have reasonable asymmetric outcomes.
Whatever their drawbacks, these axioms fully characterize the Nash solution. It
can be proven that if $U^*$ satisfies the four axioms above, then it is the
unique strategy profile such that
\begin{equation} \label{e11.4}
U^* = \mathop{\rm argmax}_{U \in X,\; U \geq \bar{U}}\; (U_s - \bar{U}_s)(U_j - \bar{U}_j).
\end{equation}
Splitting a Pie is a simple enough game that not all the axioms are
needed to generate a solution. If we put the game in this context, however,
problem (\ref{e11.4}) becomes
\begin{equation} \label{e11.5}
\mathop{\rm Maximize}_{\theta_s,\,\theta_j}\; (\theta_s - 0)(\theta_j - 0),
\end{equation}
subject to $ \theta_s+\theta_j \leq 1$, which generates the first-order
conditions
\begin{equation} \label{e11.6}
\theta_j - \lambda = 0\;\; {\rm and} \;\; \theta_s - \lambda = 0,
\end{equation}
where $\lambda$ is the Lagrange multiplier on the constraint. From
(\ref{e11.6}) and the binding constraint, we obtain $\theta_s = \theta_j = 1/2$, the
even split that we found as a focal point of the noncooperative game.
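As a numerical check on this solution, one can maximize the Nash product directly; a minimal sketch (the grid search is my illustration, not part of the model):

```python
# Maximize (theta_s - 0)(theta_j - 0) subject to theta_s + theta_j <= 1.
# Since the Nash product rises in both shares, the constraint binds, so we
# search over theta_s with theta_j = 1 - theta_s.
def nash_product(theta_s: float) -> float:
    theta_j = 1.0 - theta_s          # constraint binds at the optimum
    return theta_s * theta_j

grid = [i / 10000 for i in range(10001)]
best = max(grid, key=nash_product)
print(best)   # 0.5: the even split found as the focal point
```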
Although Nash's objective was simply to characterize the anticipations of the
players, I perceive a heavier note of morality in cooperative than in
noncooperative game theory. Cooperative outcomes are neat, fair, beautiful, and
efficient. In the next few sections we will look at noncooperative bargaining
models that while plausible, lack every one of those features. Cooperative game
theory may be useful for ethical decisions, but its attractive features are
often inappropriate for economic situations, and the spirit of the axiomatic
approach is very different from the utility maximization of economic theory.
It should be kept in mind, however, that the ethical component of cooperative
game theory can also be realistic, because people are often ethical, or pretend
to be. People very often follow rules that they believe represent
virtuous behavior, even at some monetary cost. In bargaining experiments in
which one player is given the ability to make a take-it-or-leave-it offer (the
Ultimatum Game), it is commonly found that he offers a 50-50 split.
Presumably this is because either he wishes to be fair or he fears a spiteful
response from the other player to a smaller offer. If the subjects are made to
feel that they have ``earned'' the right to be the offering party, they behave
much more like the players in noncooperative game theory (Hoffman \& Spitzer
[1985]). Frank (1988) and Thaler (1992) describe numerous occasions where
simple games fail to describe real-world or experimental results. People's
payoffs include more than their monetary rewards, and sometimes knowing the
cultural disutility of actions is more important than knowing the dollar
rewards. This is one reason why it is helpful to a modeller to keep his games
simple: when he actually applies them to the real world, the model must not be
so unwieldy that it cannot be combined with details of the particular
setting.
\bigskip \noindent {\bf 12.3 Alternating Offers over Finite Time}
\noindent In the games of the next two sections, the actions are the same as in
Splitting a Pie, but with many periods of offers and counteroffers. This means
that strategies are no longer just actions, but rather are rules for choosing
actions based on the actions chosen in earlier periods.
\begin{center} {\bf Alternating Offers } \end{center} {\bf Players}\\ Smith
and Jones.
\noindent {\bf The Order of Play }\\ 1 Smith makes an offer $\theta_{1}$.
\\ 1* Jones accepts or rejects.\\ 2 Jones makes an offer $\theta_{2}$. \\
2* Smith accepts or rejects.\\ $\ldots$\\ T Smith offers $\theta_{T}$.\\ T*
Jones accepts or rejects.
\noindent {\bf Payoffs}\\ The discount factor is $\delta \leq 1.$\\ If Smith's
offer is accepted by Jones in round $m$,
\begin{equation} \nonumber
\begin{array} {ll}
\pi_s &= \delta^m \theta_{m},\\
& \\
\pi_j &= \delta^m (1-\theta_{m}).\\
\end{array}
\end{equation}
\noindent If Jones's offer is accepted, reverse the subscripts. \\
If no offer is ever accepted, both payoffs equal zero.
When a game has many rounds we need to decide whether discounting is
appropriate. If the discount rate is $r$ then the discount factor is $\delta =
1/(1+r)$, so, without discounting, $r=0$ and $\delta = 1$. Whether discounting
is appropriate to the situation being modelled depends on whether delay should
matter to the payoffs because the bargaining occurs over real time or the game
might suddenly end (section 5.2). The game Alternating Offers can be interpreted
in either of two ways, depending on whether it occurs over real time or not. If
the players made all the offers and counteroffers between dawn and dusk of a
single day, discounting would be inconsequential because, essentially, no time
has passed. If each offer consumed a week of time, on the other hand, the delay
before the pie was finally consumed would be important to the players and their
payoffs should be discounted.
Consider first the game without discounting. There is a unique subgame-perfect
outcome --- Smith gets the entire pie --- which is supported by a number
of different equilibria. In each equilibrium, Smith offers $\theta_{s} = 1$ in
each period, but each equilibrium is different in terms of when Jones accepts
the offer. All of them are weak equilibria because Jones is indifferent
between accepting and rejecting, and they differ only in the timing of Jones's
final acceptance.
Smith owes his success to his ability to make the last offer. When Smith
claims the entire pie in the last period, Jones gains nothing by refusing to
accept. What we have here is not really a first-mover advantage, but a
last-mover advantage, a difference not apparent in the one-period model.
In the game with discounting, the total value of the pie is 1 in the first
period, $\delta$ in the second, and so forth. In period $T$, if it is reached,
Smith would offer 0 to Jones, keeping 1 for himself, and Jones would accept
under our assumption on indifferent players. In period $(T-1)$, Jones could
offer Smith $\delta$, keeping $(1-\delta)$ for himself, and Smith would accept;
although Smith could obtain a greater nominal share by refusing, that share
would arrive a period later and be worth less after discounting.
By the same token, in period $(T-2)$, Smith would offer Jones
$\delta(1-\delta)$, keeping $(1-\delta(1-\delta))$ for himself, and Jones would
accept, since with a positive share Jones also prefers the game to end soon.
In period $(T-3)$, Jones would offer Smith $\delta[1 -\delta(1-\delta)]$,
keeping $(1- \delta[1 -\delta(1-\delta)])$ for himself, and Smith would accept,
again to prevent delay. Table 1 shows the progression of Smith's shares when
$\delta =0.9$.
\begin{center} {\bf Table 1: Alternating Offers over Finite Time }
\begin{tabular}{lllll}
{\bf Round} & {\bf Smith's share} & {\bf Jones's share} & {\bf Total value} & {\bf Who offers?} \\ \hline
 & & & & \\
$T-3$ & 0.819 & 0.181 & $0.9^{T-4}$ & Jones \\
 & & & & \\
$T-2$ & 0.91 & 0.09 & $0.9^{T-3}$ & Smith \\
 & & & & \\
$T-1$ & 0.9 & 0.1 & $0.9^{T-2}$ & Jones \\
 & & & & \\
$T$ & 1 & 0 & $0.9^{T-1}$ & Smith \\
 & & & & \\
\hline
\end{tabular} \end{center}
As we work back from the end, Smith always does a little better when he makes
the offer than when Jones does, but if we consider just the class of periods in
which Smith makes the offer, Smith's share falls. If we were to continue to work
back for a large number of periods, Smith's offer in a period in which he makes
the offer would approach $\frac{1}{1+\delta}$, which equals about 0.53 if
$\delta=0.9$. The reasoning behind that precise expression is given in the next
section. In equilibrium, the very first offer would be accepted, since it is
chosen precisely so that the other player can do no better by waiting.
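The entries in Table 1 come from a simple backward recursion, sketched below in Python (an illustration, not part of the text's apparatus); the proposer with one round left keeps everything, and each step back he need only concede $\delta$ times the responder's proposer-share in the following round:

```python
# Backward induction for Alternating Offers with delta = 0.9.  share holds
# the proposer's share of the current pie, working back from the final round.
delta = 0.9
share = 1.0                # round T: Smith proposes and keeps the whole pie
history = [share]
for _ in range(200):       # work backwards from the last round
    share = 1.0 - delta * share
    history.append(share)

# First four entries reproduce the proposer's shares in Table 1:
# 1 (T, Smith), 0.1 (T-1, Jones), 0.91 (T-2, Smith), 0.181 (T-3, Jones).
print([round(s, 3) for s in history[:4]])
print(round(history[-1], 3))   # converges to 1/(1 + delta), about 0.526
```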
\bigskip \noindent
{\bf 12.4 Alternating Offers over Infinite Time }
\noindent
The Folk Theorem of Section 5.2 says that when discounting is low and a game
is repeated an infinite number of times, there are many equilibrium outcomes.
That does not apply to the bargaining game, however, because it is not a
repeated game. It ends when one player accepts an offer, and only the accepted
offer is relevant to the payoffs, not the earlier proposals. In particular,
there are no out-of-equilibrium punishments such as enforce the Folk Theorem's
outcomes.
Let players Smith and Jones have discount factors of $\delta_s$ and $\delta_j$
which are not necessarily equal but are strictly positive and no greater than
one. In the unique subgame-perfect outcome for the infinite-period bargaining
game, Smith's share is \begin{equation} \label{e11.7} \theta_s = \frac{1
-\delta_j}{ 1 - \delta_s\delta_j}, \end{equation} which, if $\delta_s=\delta_j=
\delta$, is equivalent to \begin{equation} \label{e11.8} \theta_s = \frac{1}{1 +
\delta}. \end{equation}
If the discount rate is high, Smith gets most of the pie: a 1,000 percent
discount rate ($r=10$) makes $\delta = 0.091$ and $\theta_s = 0.92$ (rounded),
which makes sense, since under such extreme discounting the second period hardly
matters and we are almost back to the simple game of Section 12.1. At the other
extreme, if $r$ is small, the pie is split almost evenly: if $r=0.01$, then
$\delta \approx 0.99$ and $\theta_s \approx 0.503$.
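The two limiting cases can be checked by evaluating the share formula directly; a minimal sketch (the helper function is mine):

```python
# First-mover share theta_s = (1 - delta_j)/(1 - delta_s*delta_j),
# evaluated at the two discount rates in the text, with delta = 1/(1 + r).
def rubinstein_share(delta_s: float, delta_j: float) -> float:
    return (1 - delta_j) / (1 - delta_s * delta_j)

for r in (10.0, 0.01):
    delta = 1 / (1 + r)
    # extreme discounting (r = 10) gives Smith nearly everything;
    # mild discounting (r = 0.01) gives a near-even split
    print(round(delta, 3), round(rubinstein_share(delta, delta), 3))
```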
It is crucial that the discount rate be strictly greater than 0, even if only
by a little. Otherwise, the game has the same continuum of perfect equilibria
as in Section 12.1. Since nothing changes over time, there is no incentive to
come to an early agreement. When discount rates are equal, the intuition behind
the result is that since a player's cost of delay is proportional to his share
of the pie, if Smith were to offer a grossly unequal split, such as (0.7, 0.3),
Jones, with less to lose by delay, would reject the offer. Only if the split is
close to even would Jones accept, as we will now prove.
\noindent {\bf Proposition 1} (Rubinstein [1982])\footnote{The proof of
Proposition 1 is not from the original Rubinstein (1982),
but is adapted from Shaked \& Sutton (1984). The maximum rather than the
supremum can be used because of the assumption that indifferent players always
accept offers.
} {\it In the discounted
infinite game, the unique perfect equilibrium outcome is} $\theta_s = \frac{1
-\delta_j}{ (1 - \delta_s\delta_j)}$, {\it where Smith is the first mover.}
\noindent {\it Proof}\\
We found that in the {\it T}-period game Smith gets a
larger share in a period in which he makes the offer. Denote by {\it M} the
maximum nondiscounted share, taken over all the perfect equilibria that might
exist, that Smith can obtain in a period in which he makes the offer. Consider
the game starting at $t$. Smith is sure to get no more than {\it M}, as noted
in Table 2. (Jones would thus get 1 - {\it M}, but that is not relevant to the
proof.)
\begin{center} {\bf Table 2: Alternating Offers over Infinite Time}
\begin{tabular}{l|l|l|l}
{\bf Round} & {\bf Smith's share} & {\bf Jones's share} & {\bf Who offers?} \\ \hline
 & & & \\
$t-2$ & $1 - \delta_j(1-\delta_s M)$ & & Smith \\
 & & & \\
$t-1$ & & $1-\delta_s M$ & Jones \\
 & & & \\
$t$ & $M$ & & Smith \\
 & & & \\
\hline
\end{tabular} \end{center}
The trick is to find a way besides $M$ to represent the maximum Smith can
obtain. Consider the offer made by Jones at $(t-1)$. Smith will accept any
offer which gives him more than the discounted value of $M$ received one period
later, so Jones can make an offer of $\delta_sM$ to Smith, retaining $(1 -
\delta_s M)$ for himself. At $(t-2)$, Smith knows that Jones will turn down any
offer less than the discounted value of the minimum Jones can look forward to
receiving at $(t-1)$. Smith, therefore, cannot offer any less than $ \delta_j
\left( 1 - \delta_s M \right)$ at $(t-2)$.
Now we have two expressions for ``the maximum which Smith can receive,''
which we can set equal to each other:
\begin{equation} \label{e11.9}
M = 1 - \delta_j \left( 1 - \delta_s M \right). \end{equation}
Solving equation (\ref{e11.9}) for $M$, we obtain \begin{equation}
\label{e11.10} M = \frac{1 -\delta_j}{ 1 - \delta_s\delta_j}.
\end{equation}
We can repeat the argument using $m$, the minimum of Smith's share. If Smith
can expect at least $m$ at $t$, Jones cannot receive more than $(1 - \delta_s
m)$ at $(t-1)$. At $(t-2)$ Smith knows that if he offers Jones the discounted
value of that amount, Jones will accept, so Smith can guarantee himself $( 1 -
\delta_j \left( 1 - \delta_s m \right))$, which is the same as the expression we
found for $M$. The smallest perfect equilibrium share that Smith can receive is
the same as the largest, so the equilibrium outcome must be unique. Q.E.D.
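The fixed-point equation $M = 1 - \delta_j(1 - \delta_s M)$ can also be illustrated numerically: iterating the map from any starting guess converges to the closed form, since the map is a contraction. A sketch with illustrative, unequal discount factors of my choosing:

```python
# Iterate M = 1 - delta_j*(1 - delta_s*M); the map is a contraction with
# modulus delta_s*delta_j, so it has the unique fixed point
# (1 - delta_j)/(1 - delta_s*delta_j) derived in the proof.
delta_s, delta_j = 0.8, 0.9      # illustrative values
M = 0.0                          # arbitrary starting guess
for _ in range(200):
    M = 1 - delta_j * (1 - delta_s * M)

closed_form = (1 - delta_j) / (1 - delta_s * delta_j)
print(round(M, 6), round(closed_form, 6))   # the two agree
```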
This model from Rubinstein (1982) is widely used because of the way it
explains why two players with the same discount rates tend to split the surplus
equally (the limiting case in the model as the discount rate goes to zero and
$\delta$ goes to one). Unfortunately, as with the Nash bargaining solution,
there is no obvious best way to extend the model to three or more
players--- no best way to specify how they make and accept offers. Haller
(1986) shows that for
at least one specification, the outcome is not similar to the Rubinstein (1982)
outcome, but rather is a return to the indeterminacy of the game without
discounting.
\bigskip
\noindent {\bf No Discounting, but a Fixed Bargaining Cost}
\noindent
There are two ways to model bargaining costs per period: as proportional to
the remaining value of the pie (the way used above), or as fixed costs each
period, which we analyze next (again following Rubinstein [1982]). To
understand the difference, think of labor negotiations during a construction
project. If a strike slows down completion, there are two kinds of losses. One
is the loss from delay in renting or selling the new building, a loss
proportional to its value. The other is the loss from late-completion penalties
in the contract, which often take the form of a fixed penalty each week. The
two kinds of costs have very different effects on the bargaining process.
Here, let us assume that there is no discounting but
whenever a period passes, Smith loses $c_s$ and Jones loses $c_j$. In
every subgame-perfect equilibrium, Smith makes an offer and Jones accepts, but
there are three possible cases.
\noindent {\bf Delay costs are equal}
\begin{center}
$c_s=c_j = c$.
\end{center}
The Nash indeterminacy of Section 12.1 remains almost as bad: any division in
which each player gets at least $c$ is supported by some perfect equilibrium.
\noindent
{\bf Delay hurts Jones more}
\begin{center}
$c_s < c_j$.
\end{center}
Smith gets the entire pie. Jones has more to lose than Smith by delaying, and
delay does not change the situation except by diminishing the wealth of the
players. The game is stationary, because it looks the same to both players no
matter how many periods have already elapsed. If in any period $t$ Jones
offered Smith $x$, in period $(t-1)$ Smith could offer Jones $(1-x-c_j)$,
keeping $(x+c_j)$ for himself. In period $(t-2)$, Jones would offer Smith
$(x+c_j-c_s)$, keeping $(1-x-c_j+c_s)$ for himself, and in periods $(t-4)$ and
$(t-6)$ Jones would offer Smith $(x + 2c_j - 2c_s)$ and $(x + 3c_j - 3c_s)$,
keeping $(1- x - 2c_j + 2c_s)$ and $(1-x - 3c_j + 3c_s)$ for himself. As
we work backwards, Smith's advantage rises to $\gamma(c_j - c_s)$ for an
arbitrarily large integer $\gamma$. Looking ahead from the start of the game,
Jones is willing to give up and accept zero.
\noindent {\bf Delay hurts Smith more}
\begin{center} $c_s > c_j$. \end{center}
Smith gets a share worth $c_j$ and Jones gets $(1-c_j)$. The cost $c_j$ is a
lower bound on the share of Smith, the first mover, because if Smith knows Jones
will offer (0,1) in the second period, Smith can offer $(c_j, 1-c_j)$ in the
first period and Jones will accept.
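The working-backwards arithmetic in the $c_s < c_j$ case can be sketched numerically; the cost values and the hypothetical starting offer $x$ below are illustrative assumptions, not from the text:

```python
# With c_s < c_j (delay hurts Jones more), work backwards from a hypothetical
# offer x to Smith in some distant period t.  Every two periods further back,
# Smith's share when Jones proposes grows by (c_j - c_s), so far enough back
# Jones concedes the whole pie.
c_s, c_j = 0.01, 0.03
x = 0.2                          # hypothetical share offered to Smith at t
shares = [x]
for _ in range(60):              # each step is two periods further back
    x = min(1.0, x + (c_j - c_s))
    shares.append(x)

print(shares[0], shares[-1])     # 0.2 at period t; the whole pie far enough back
```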
\bigskip \noindent
{\bf 12.5 Incomplete Information}
\noindent Instant agreement has characterized even the multiperiod games of
complete information discussed so far. Under incomplete information, knowledge
can change over the course of the game and bargaining can last more than one
period in equilibrium, a result that might be called inefficient but is
certainly realistic. Models with complete information have difficulty explaining
such things as strikes or wars, but if over time an uninformed player can learn
the type of the informed player by observing what offers are made or rejected,
such unfortunate outcomes can arise. The literature on bargaining under
incomplete information is vast. For this section, I have chosen to use a model
based on the first part of Fudenberg \& Tirole (1983), but it is only a
particular example of how one could construct such a model, and not a good
indicator of what results are to be expected from bargaining.
Let us start with a one-period game. We will denote the price by $p_1$ because
we will carry the notation over to a two-period version.
\begin{center}
{\bf One-Period Bargaining with Incomplete Information }
\end{center}
{\bf Players}\\
A seller, and a buyer called Buyer$_{100}$ or Buyer$_{150}$ depending on his
type.
\noindent
{\bf The Order of Play }\\
0 Nature picks the buyer's type, his valuation of the object being sold, which
is $b = 100$ with probability $\gamma$ and $b = 150$ with probability
$(1-\gamma)$. \\
1 The seller offers price $p_1$.\\
2 The buyer accepts or rejects $p_1$.\\
\noindent {\bf Payoffs}\\
The seller's payoff is $p_1$ if the buyer accepts the offer, and otherwise 0.
\\ The buyer's payoff is $ (b - p_1)$ if he accepts the offer, and otherwise
0.
\noindent {\it Equilibrium: }\\
Buyer$_{100}$: accept if $p_1 \leq 100$. \\ Buyer$_{150}$: accept if
$p_1 \leq 150$. \\ Seller: offer $p_1=100$ if $\gamma \geq 1/3$ and $p_1=150$
otherwise. \\
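The seller's cutoff $\gamma = 1/3$ follows from comparing expected revenues at the two candidate prices; a minimal sketch (the function name is mine, the prices and probabilities are the game's):

```python
# Offering p1 = 100 sells to both types, earning 100 for sure; offering
# p1 = 150 sells only to Buyer_150, who occurs with probability (1 - gamma).
# The low price wins exactly when 100 >= 150*(1 - gamma), i.e. gamma >= 1/3.
def best_price(gamma: float) -> int:
    revenue_low = 100.0                  # both types accept
    revenue_high = 150.0 * (1 - gamma)   # only Buyer_150 accepts
    return 100 if revenue_low >= revenue_high else 150

print(best_price(0.5), best_price(0.4), best_price(0.2))   # 100 100 150
```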
Both types of buyers have a dominant strategy for the last move: accept any
offer $p_1 \leq b$. The seller keeps the good if $v_s^s > v_b^s$, and
otherwise the buyer acquires it for a payment to the seller of
seller of
\begin{equation} \label{e300}
p = v_s^s + \frac{v_b^s-v_s^s}{2}.
\end{equation}
If $(v_s^s, v_b^s) \neq (v_s^b, v_b^b)$, the good is destroyed and the buyer
pays $v_b^s$ to the court.
\noindent {\bf Payoffs} \\
If the seller keeps the good, both players have payoffs of $0$. If the buyer
acquires the object, the seller's payoff is $(p -v_s)$ and the buyer's is
$v_b - p $. If the reports disagree, the seller's payoff is $-v_s$ and the
buyer's payoff is $-v_b^s$.
\bigskip
I have normalized the payoffs so that each player's payoff is zero if no
trade occurs. I could instead have normalized to $\pi_s = v_s$ and to $\pi_b =
0$ if no trade occurred, a common alternative.
This is one of Chapter 10's Maskin matching mechanisms. One equilibrium is for
buyer and seller to both tell the truth. That is an equilibrium because if the
buyer acquires the object, the payoffs are $\pi_s = \frac{v_b^s-v_s^s}{2}$ and
$\pi_b = \frac{v_b^s-v_s^s}{2}$, both of which are positive if and only if $v_b>
v_s$.
This will result in the efficient allocation, in the sense that the good
ends up with the player who values it most highly. This is, moreover, an
acceptable mechanism for both players if they expect this equilibrium to be
played out, because they share any gains from trade that may exist.\footnote{As
usual, the efficient equilibrium is not unique. Another equilibrium would be for
both players to always report $(v_s, v_b) = (0.5, 0.4)$, which would yield zero
payoffs and never result in trade. If either player unilaterally deviated, the
punishment would kick in and payoffs would become negative.}
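The mechanism can be sketched in code. Everything below follows the payoffs above, except the tie-breaking rule (the seller keeps the good when the reported values are equal), which is my assumption:

```python
# Reports are pairs (v_s_hat, v_b_hat).  Matching reports with
# v_b_hat > v_s_hat trigger trade at the midpoint price; mismatched reports
# destroy the good, and the buyer pays the seller's reported v_b^s to the court.
def payoffs(v_s, v_b, seller_report, buyer_report):
    if seller_report != buyer_report:
        v_b_s = seller_report[1]        # seller's report of the buyer's value
        return (-v_s, -v_b_s)           # good destroyed; buyer pays the court
    v_s_hat, v_b_hat = seller_report
    if v_s_hat >= v_b_hat:              # seller keeps the good
        return (0.0, 0.0)
    p = v_s_hat + (v_b_hat - v_s_hat) / 2
    return (p - v_s, v_b - p)

# Truthful reporting splits the gains from trade evenly:
v_s, v_b = 0.3, 0.7
print(tuple(round(x, 3) for x in payoffs(v_s, v_b, (v_s, v_b), (v_s, v_b))))
# each side gets (v_b - v_s)/2 = 0.2
```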
I include Bilateral Trading I to introduce the situation and provide a first-
best benchmark, as well as to give another illustration of the Maskin matching
mechanism. Let us next look at a game of incomplete information and a
mechanism which does depend on the players' actions.
\begin{center} \noindent
{\bf Bilateral Trading II: Incomplete Information } \end{center} {\bf
Players}\\
A buyer and a seller.
\noindent {\bf The Order of Play}\\
0 Nature independently chooses the seller to value the good at $v_s$ and the
buyer at $v_b$ using the uniform distribution between 0 and 1. Each player's
value is his own private information. \\
1 The seller reports $p_s$ and the buyer reports $p_b$. \\ 2 The buyer
accepts or rejects the seller's offer. The price at which the trade takes
place, if it does, is $ p_s$.
\noindent {\bf Payoffs}\\
If there is no trade, the seller's payoff is $0$ and the buyer's is $0$.\\ If
there is trade, the seller's payoff is $(p_s-v_s)$ and the buyer's is $(v_b -
p_s)$.\\
This mechanism does not use the buyer's report at all, and so perhaps it is not
surprising that the result is inefficient. It is easy to see, working back from
the end of the game, that the buyer's equilibrium strategy is to accept the
offer if $v_b \geq p_s$ and to reject it otherwise. If the buyer does that,
the seller's expected payoff is
\begin{equation} \label{e0baz} \left[ p_s - v_s \right] \left[ Prob\{
v_b \geq p_s \} \right] +0 \left[ Prob\{ v_b \leq p_s \} \right] =
\left[ p_s - v_s \right] \left[ 1-p_s \right] .
\end{equation} Differentiating this with respect to $p_s$ and setting equal to
zero yields the seller's equilibrium strategy of
\begin{equation} \label{e0bay} p_s = \frac{1+v_s}{2}.
\end{equation}
This is inefficient because if $v_b$ is just a little bigger than $v_s$, trade
will not occur even though gains from trade do exist. In fact, trade will fail
to occur whenever $v_b <\frac{1+v_s}{2}$.
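The size of this inefficiency can be gauged with a quick Monte Carlo sketch (sample size and seed are arbitrary choices of mine):

```python
import random

# Seller posts p_s = (1 + v_s)/2; buyer accepts iff v_b >= p_s.  Count how
# often gains from trade (v_b > v_s) exist but the posted price blocks trade.
random.seed(0)
trials = 200_000
gains, lost = 0, 0
for _ in range(trials):
    v_s, v_b = random.random(), random.random()
    if v_b > v_s:                      # gains from trade exist
        gains += 1
        if v_b < (1 + v_s) / 2:        # ...but the buyer rejects the price
            lost += 1

# Analytically the lost region has area 1/4 and the gains region area 1/2,
# so roughly half of all gainful matches end without trade.
print(round(lost / gains, 2))
```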
Let us try another simple mechanism, which at least uses the reports of
both players, replacing move (2) with:

\noindent (2$'$) The good is allocated to the seller if $p_s > p_b$ and to the
buyer otherwise. The price at which the trade takes place, if it does, is
$p_s$.
\noindent Suppose the buyer truthfully reports $p_b = v_b$. What will the
seller's best response be? The seller's expected payoff for the $p_s$ he
chooses is now \begin{equation} \label{e0ba}
\left[ p_s - v_s \right] \left[ Prob\{ p_b (v_b) \geq p_s \} \right]
+ 0 \left[ Prob\{ p_b (v_b) \leq p_s \} \right] = [p_s - v_s ] [ 1-p_s],
\end{equation}
where the expectation has to be taken over all the possible values of $v_b$,
since $p_b$ will vary with $v_b$.
Maximizing this, the seller's strategy will solve the first-order condition
$1 - 2p_s +v_s = 0$, and so will again be \begin{equation} \label{e0bc} p_s(
v_s) = \frac{1+v_s}{2} = \frac{1 }{2}+ \frac{ v_s}{2}. \end{equation}
Will the buyer's best response to this strategy be $p_b = v_b$? Yes, because
whenever $v_b \geq \frac{1 }{2}+ \frac{ v_s}{2}$ the buyer is willing for
trade to occur, and the size of $p_b$ does not affect the transactions price,
only the occurrence or nonoccurrence of trade. The buyer needs to worry about
causing trade to occur when $v_b < \frac{1 }{2}+ \frac{ v_s}{2}$, but this can
be avoided by using the truthtelling strategy. The buyer also needs to worry
about preventing trade from occurring when $v_b > \frac{1 }{2}+ \frac{
v_s}{2}$, and choosing $p_b = v_b$ guards against this as well.
Thus, it seems that either mechanism (2) or (2$'$) will fail to be efficient.
Often, the seller will value the good less than the buyer, but trade will fail
to occur and the seller will end up with the good anyway -- whenever $v_s <
v_b < \frac{1+v_s}{2}$. Figure 2 shows when trades will be completed based on the
parameter values.
\includegraphics[width=150mm]{fig12-02.jpg}
\begin{center} {\bf Figure 2: Trades in Bilateral Trading II} \end{center}
As you might imagine, one reason this is an inefficient mechanism is that it
fails to make effective use of the buyer's information. The next mechanism will
do better. Its trading rule is called the {\bf double auction mechanism}. The
problem is like that of Chapter 10's Groves Mechanism, because we are trying
to come up with an action rule (allocate the object to the buyer or to the
seller) based on the agents' reports (the prices they suggest), under the
condition that each player has private information (his value).
\begin{center} \noindent
{\bf Bilateral Trading III: The Double Auction Mechanism}
\end{center} {\bf Players}\\
A buyer and a seller.
\noindent {\bf The Order of Play}\\
0 Nature independently chooses the seller to value the good at $v_s$ and the
buyer at $v_b$ using the uniform distribution between 0 and 1. Each player's
value is his own private information. \\
1 The buyer and the seller simultaneously decide whether to try to trade or
not. \\ 2 If both agree to try, the seller reports $p_s$ and the buyer
reports $p_b$ simultaneously. \\
3 The good is allocated to the seller if $p_s \geq p_b$ and to the buyer
otherwise. The price at which the trade takes place, if it does, is $p =
\frac{(p_b + p_s)}{2}$. \\
\noindent {\bf Payoffs}\\
If there is no trade, the seller's payoff is $0$ and the buyer's is zero. If
there is trade, then the seller's payoff is $(p-v_s)$ and the buyer's is $(v_b -
p)$.
The buyer's expected payoff for the $p_b$ he chooses is \begin{equation}
\label{e1aa}
\left[ v_b - \frac{ p_b + E[p_s |p_b \geq p_s ]}{2} \right] \left[
Prob\{ p_b \geq p_s \} \right],
\end{equation}
where the expectation has to be taken over all the possible values of $v_s$,
since $p_s$ will vary with $v_s$.
\noindent
The seller's expected payoff for the $p_s$ he chooses is
\begin{equation} \label{e1b}
\left[ \frac{ p_s + E(p_b |p_b \geq p_s )}{2} - v_s \right] \left[
Prob\{ p_b \geq p_s \} \right],
\end{equation}
where the expectation has to be taken over all the possible values of $v_b$,
since $p_b$ will vary with $v_b$.
The game has lots of Nash equilibria. Let's focus on two of them, a {\bf
one-price equilibrium} and the unique {\bf linear equilibrium}.
In the {\bf one-price equilibrium}, the buyer's strategy is to offer $p_b = x$
if $v_b \geq x$ and $p_b=0$ otherwise, for some value $x \in [0,1]$. The
seller's strategy is to ask $p_s = x$ if $v_s \leq x$ and $p_s=1$ otherwise.
Figure 3 illustrates the one-price equilibrium for a particular value of $x$.
Suppose $x = 0.7$. If the seller were to deviate and ask prices lower than
$0.7$, he would just reduce the price he receives. If the seller were to
deviate and ask prices higher than $0.7$, then $p_s > p_b$ and no trade occurs.
So the seller will not deviate. Similar reasoning applies to the buyer, and to
any value of $x$, including 0 and 1, where trade never occurs.
\includegraphics[width=150mm]{fig12-03.jpg}
\begin{center} {\bf Figure 3: Trade in the One-Price Equilibrium }
\end{center}
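The no-deviation argument can be checked numerically. The sketch below is illustrative only; it treats a tie $p_b = p_s = x$ as resulting in trade, which the one-price equilibrium requires. Against a buyer who bids $0.7$ when $v_b \geq 0.7$ and $0$ otherwise, no ask does better for a seller with $v_s \leq 0.7$ than asking exactly $0.7$:

```python
# One-price equilibrium check at x = 0.7 (a sketch; ties count as trade).

def seller_expected_payoff(ask, v_s, x=0.7):
    """Seller's expected payoff against the one-price buyer strategy."""
    payoff = 0.0
    if x >= ask:                 # v_b >= x (prob. 1 - x): buyer bids x
        payoff += (1 - x) * ((ask + x) / 2 - v_s)
    if 0 >= ask:                 # v_b < x (prob. x): buyer bids 0
        payoff += x * ((ask + 0) / 2 - v_s)
    return payoff

v_s = 0.4
eq_payoff = seller_expected_payoff(0.7, v_s)
for ask in [i / 100 for i in range(101)]:
    assert seller_expected_payoff(ask, v_s) <= eq_payoff + 1e-12
```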
The {\bf linear equilibrium} can be derived very neatly. Suppose the seller
uses a linear strategy, so $p_s(v_s) = \alpha_s + c_sv_s$. From the buyer's
point of view, $p_s$ will be uniformly distributed from $\alpha_s$ to $
\alpha_s + c_s$ with density $1/c_s$, as $v_s$ ranges from 0 to 1. Since
$E_b[p_s |p_b \geq p_s ] = E_b(p_s |p_s \in [\alpha_s,p_b] )= \frac{\alpha_s +
p_b}{2}$, the buyer's expected payoff (\ref{e1aa}) becomes
\begin{equation} \label{e1zz}
{\displaystyle \left[ v_b - \frac{ p_b + \frac{\alpha_s + p_b}{2} }{2}
\right] \left[ \frac{ p_b -\alpha_s }{c_s} \right].}
\end{equation}
Maximizing with respect to $p_b$ yields
\begin{equation} \label{e1c}
{\displaystyle p_b = \left(\frac{2}{3} \right)v_b + \left(\frac{1}{3} \right)
\alpha_s.}
\end{equation}
Thus, if the seller uses a linear strategy, the buyer's best response is a
linear strategy too! We are well on our way to a Nash equilibrium.
If the buyer uses a linear strategy $p_b(v_b) = \alpha_b + c_bv_b$, then from
the seller's point of view $p_b$ is uniformly distributed from $\alpha_b $ to
$\alpha_b + c_b$ with density $1/c_b$ and the seller's payoff function,
expression (\ref{e1b}), becomes, since $ E_s (p_b|p_b \geq p_s ) = E_s
(p_b|p_b \in [p_s,\alpha_b + c_b] ) = \frac{p_s + \alpha_b + c_b}{2} $,
\begin{equation} \label{e1}
\left[ \frac{p_s + \frac{p_s + \alpha_b + c_b}{2} }{2} - v_s \right]
\left[ \frac{\alpha_b + c_b-p_s}{c_b} \right]. \end{equation} Maximizing
with respect to $p_s$ yields
\begin{equation} \label{e1d}
{\displaystyle p_s =\left( \frac{2}{3}\right) v_s + \frac{1}{3} }
\left(\alpha_b+c_b \right). \end{equation}
Solving equations (\ref{e1c}) and (\ref{e1d}) together yields
\begin{equation} \label{e100}
{\displaystyle p_b = \left(\frac{2}{3}\right) v_b + \frac{1}{12} }\end{equation}
and
\begin{equation} \label{e101}
{\displaystyle p_s =\left( \frac{2}{3} \right)v_s + \frac{1}{4}.}
\end{equation}
So we have derived a linear equilibrium. Manipulation of the equilibrium
strategies shows that trade occurs if and only if $v_b \geq v_s + (1/4)$, which
is to say, trade occurs if the valuations differ enough. The linear
equilibrium does not make all efficient trades, because sometimes $v_b > v_s$
and no trade occurs, but it does make all trades with joint surpluses of 1/4 or
more. Figure 4 illustrates this.
\includegraphics[width=150mm]{fig12-04.jpg}
\begin{center} {\bf Figure 4: Trade in the Linear Equilibrium }
\end{center}
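The intercepts can be recovered exactly from the two best-response conditions: $p_b = (2/3)v_b + \alpha_s/3$ forces $\alpha_b = \alpha_s/3$ and $c_b = 2/3$, while $p_s = (2/3)v_s + (\alpha_b + c_b)/3$ forces $\alpha_s = (\alpha_b + c_b)/3$ and $c_s = 2/3$. A short exact-arithmetic sketch:

```python
# Exact solution of the linear-equilibrium conditions using rationals.
from fractions import Fraction

c_b = c_s = Fraction(2, 3)
# Substituting alpha_b = alpha_s/3 into alpha_s = (alpha_b + c_b)/3
# gives alpha_s = alpha_s/9 + 2/9, so alpha_s = (2/9)/(1 - 1/9).
alpha_s = Fraction(2, 9) / (1 - Fraction(1, 9))
alpha_b = alpha_s / 3
assert alpha_s == Fraction(1, 4) and alpha_b == Fraction(1, 12)

# Trade requires p_b >= p_s: (2/3)(v_b - v_s) >= 1/4 - 1/12 = 1/6,
# i.e. v_b >= v_s + 1/4.
assert (alpha_s - alpha_b) / c_b == Fraction(1, 4)
```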
One detail about equation (\ref{e100}) should bother you. The equation seems
to say that if $v_b = 0$, the buyer chooses $p_b = 1/12$. If that happens,
though, the buyer is bidding more than his value! The reason this can be part
of the equilibrium is that it is only a weak Nash equilibrium. Since the
seller never chooses lower than $p_s = 1/4$, the buyer is safe in choosing $p_b
= 1/12$; trade never occurs anyway when he makes that choice. He could just as
well bid 0 instead of 1/12, but then he wouldn't have a linear strategy.
The linear equilibrium is not a truth-telling equilibrium. The seller does not
report his true value $v_s$, but rather reports $p_s= (2/3)v_s + 1/4.$ But we
could replicate the outcome in a truth-telling equilibrium. We could have the
buyer and seller agree that they would make reports $r_b$ and $r_s$ to a
neutral mediator, who would then choose the trading price $p$. He would agree
in advance to choose the trading price $p$ by (a) mapping $r_s$ onto $p_s$
just as in the equilibrium above, (b) mapping $r_b$ onto $p_b$ just as in the
equilibrium above, and (c) using $p_b$ and $p_s$ to set the price just as in
the double auction mechanism. Under this mechanism, both players would tell the
truth to the mediator. Let us compare the original linear mechanism with a
truth-telling mechanism.
\noindent
{\bf The Chatterjee-Samuelson mechanism.} {\it The good is allocated to the
seller if $p_s \geq p_b$ and to the buyer otherwise. The price at which the
trade takes place, if it does, is $p =\frac{ p_b + p_s } {2}$.}
\noindent
{\bf A direct incentive-compatible mechanism.} {\it The good is allocated to
the seller if $ \left(\frac{2}{3} \right) p_s + \frac{1}{4} \geq \left(
\frac{2}{3} \right) p_b + \frac{1}{12} $, which is to say, if $p_s \geq p_b -
1/4$, and to the buyer otherwise. The price at which the trade takes place, if
it does, is }
\begin{equation}
p =\frac{ \left( \left( \frac{2}{3} \right) p_b + \frac{1}{12}\right) +
\left( \left(\frac{2}{3} \right) p_s + \frac{1}{4}\right)}{2} =\frac{p_b + p_s}
{3} + \frac{1}{6}.
\end{equation}
What I have done is substitute the equilibrium strategies of the two
players into the mechanism itself, so now they will have no incentive to set
their reports different from the truth. The mechanism itself looks odd, because
it says that trade cannot occur unless $v_b$ is more than 1/4 greater than
$v_s$, but we cannot use the rule of trading if $v_b > v_s$ because then the
players would start misreporting again. The truth-telling mechanism only works
because it does not penalize players for telling the truth, and in order not to
penalize them, it cannot make full use of the information to achieve
efficiency.
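The algebraic simplification in the direct mechanism's price rule can be verified by exact arithmetic over a grid of reports (illustrative only):

```python
# Exact check that ((2/3)p_b + 1/12 + (2/3)p_s + 1/4)/2 = (p_b + p_s)/3 + 1/6.
from fractions import Fraction

for p_b in [Fraction(k, 10) for k in range(11)]:
    for p_s in [Fraction(k, 10) for k in range(11)]:
        lhs = ((Fraction(2, 3) * p_b + Fraction(1, 12))
               + (Fraction(2, 3) * p_s + Fraction(1, 4))) / 2
        rhs = (p_b + p_s) / 3 + Fraction(1, 6)
        assert lhs == rhs
```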
In this game we have imposed a trading rule on the buyer and seller, rather
than letting them decide for themselves what is the best trading rule. Myerson
\& Satterthwaite (1983) prove that of all the equilibria and all the mechanisms
that are budget balancing, the linear equilibrium of the double auction
mechanism yields the highest expected payoff to the players, the expectation
being taken ex ante, before Nature has chosen the types. The mechanism is
not optimal when viewed after the players have been assigned their types, and
a player might not be happy with the mechanism once he knew his type. He will,
however, at least be willing to participate.
What mechanism would players choose, ex ante, if they knew they would be in
this game? If they had to choose after they were informed of their type, then
their proposals for mechanisms could reveal information about their types, and
we would have a model of bargaining under incomplete information that would
resemble signalling models. But what if they chose a mechanism before they were
informed of their type, and did not have the option to refuse to trade if after
learning their type they did not want to use the mechanism?
In general, mechanisms have the following parts.\\
1 Each agent $i$ simultaneously makes a report $p_i$. \\ 2 A rule $x(p)$
determines the action (such as who gets the good, whether a bridge is built,
etc.) based on the $p$. \\ 3 Each agent $i$ receives an incentive transfer
$a_i$ that in some way depends on his own report.\\
4 Each agent receives a budget-balancing transfer $b_i$ that does not depend
on his own report.
We will denote the agent's total transfer by $t_i $, so $t_i = a_i + b_i$.
In Bilateral Trading III, the mechanism had the following parts.\\ 1 Each
agent $i$ simultaneously made a report $p_i$.\\ 2 If $p_s \geq p_b$, the
good was allocated to the seller, but otherwise to the buyer. \\ 3 If there
was no trade, then $a_s=a_b =0$. If there was trade, then $a_s =\frac{(p_b +
p_s)}{2}$ and $a_b =-(\frac{(p_b + p_s)}{2})$.\\ 4 No further transfer $b_i$
was needed, because the incentive transfers balanced the budget by themselves.
It turns out that if the players in Bilateral Trading can settle their
mechanism and agree to try to trade in advance of learning their types, an
efficient budget-balancing mechanism exists that can be implemented as a Nash
equilibrium. The catch will be that after discovering his type, a player will
sometimes regret having entered into this mechanism.
This would actually be part of a subgame perfect Nash equilibrium of the game
as a whole. The mechanism design literature tends not to look at the entire
game, and asks ``Is there a mechanism which is efficient when played out as the
rules of a game?'' rather than ``Would the players choose a mechanism that is
efficient?''
\begin{center} \noindent {\bf Bilateral Trading IV: The Expected Externality
Mechanism } \end{center} {\bf Players}\\ A buyer and a seller.
\noindent {\bf The Order of Play}\\ -1 Buyer and seller agree on a mechanism
$(x(p), t(p))$ that makes decisions $x$ based on reports $p$ and pays $t$ to the
agents, where $p$ and $t$ are 2-vectors and $x$ allocates the good either to
the buyer or the seller. \\ 0 Nature independently chooses the seller to value
the good at $v_s$ and the buyer at $v_b$ using the uniform distribution between
0 and 1. Each player's value is his own private information. \\
1 The seller
reports $p_s$ and the buyer reports $p_b$ simultaneously. \\ 2 The mechanism
uses $x(p)$ to decide who gets the good, and $t(p)$ to make payments.\\
\noindent {\bf Payoffs}\\ Player $i$'s payoff is $(v_i + t_i)$ if he is
allocated the good, $t_i$ otherwise.
Part ($-1$) of the order of play is vague on how the two parties agree on a
mechanism. The mechanism design literature is also very vague, and focuses on
efficiency rather than payoff-maximization. To be more rigorous, we should have
one player propose the mechanism and the other accept or reject. The proposing
player would add an extra transfer to the mechanism to reduce the other
player's expected payoff to his reservation utility.
Let me use the term {\bf action surplus} to denote the utility an agent gets
from the choice of action.
The {\bf expected externality mechanism} has the following objectives for each
of the parts of the mechanism. \\ 1 Induce the agents to make truthful
reports.\\ 2 Choose the efficient action. \\ 3 Choose the incentive
transfers to make the agents choose truthful reports in equilibrium. \\ 4
Choose the budget-balancing transfers so that the incentive transfers add up to
zero.
First I will show you a mechanism that does this. Then I will show you how
I came up with that mechanism. Consider the following three-part mechanism:
1 The seller announces $p_s$. The buyer announces $p_b$. The good is
allocated to the seller if $p_s \geq p_b$, and to the buyer otherwise.
2 The seller gets transfer $t_s = \frac{(1-p_s^2)}{2} - \frac{(1- p_b^2)}
{2}$.
3 The buyer gets transfer $t_b = \frac{(1-p_b^2)}{2} - \frac{ (1- p_s^2)}
{2} $.
\noindent
This is budget-balancing:
\begin{equation} \label{mech1} \frac{ (1-p_s^2)}{2} - \frac{ (1-p_b^2)}{2} +
\frac{ (1-p_b^2)}{2} - \frac{ (1-p_s^2)}{2} =0.
\end{equation}
The seller's expected payoff as a function of his report $p_s$ is the sum
of his expected action surplus and his expected transfer. We have already
computed his transfer, which is not conditional on the action taken.
The seller's action surplus is 0 if the good is allocated to the buyer, which
happens if $v_b> p_s$, where we use $v_b$ instead of $p_b$ because in
equilibrium $p_b = v_b$. This has probability $(1-p_s)$. The seller's action
surplus is $v_s$ if the good is allocated to the seller, which has probability
$p_s$. Thus, the expected action surplus is $p_s v_s$.
\noindent
The seller's expected payoff is therefore
\begin{equation} \label{mech2}
p_s v_s + \frac{ (1-p_s^2)}{2} - \frac{ (1-p_b^2)}{2}. \end{equation}
Maximizing with respect to his report, $p_s$, the first-order condition is
\begin{equation} \label{mech3}
v_s -p_s=0, \end{equation}
so the mechanism is incentive compatible --- the seller tells the truth.
The buyer's expected action surplus is $v_b$ if his report is higher, that is,
if $p_b > v_s$, and zero otherwise, so his expected payoff is \begin{equation}
\label{mech4} p_b v_b + \frac{ (1-p_b^2)}{2} - \frac{(1-p_s^2)}{2}.
\end{equation}
Maximizing with respect to his report, $p_b$, the first-order condition is
\begin{equation} \label{mech5}
v_b -p_b=0,
\end{equation}
so the mechanism is incentive compatible --- the buyer tells the truth.
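Both first-order conditions can be confirmed numerically. In the sketch below, the part of each player's transfer that depends only on the *other* player's report is a constant and is dropped; the maximizer of what remains should be the player's true value:

```python
# Truth-telling check for the expected externality mechanism (a sketch).

def seller_obj(p_s, v_s):
    return p_s * v_s + (1 - p_s ** 2) / 2   # expected surplus + own term

def buyer_obj(p_b, v_b):
    return p_b * v_b + (1 - p_b ** 2) / 2

grid = [i / 10000 for i in range(10001)]
for v in [0.1, 0.5, 0.9]:
    assert abs(max(grid, key=lambda p: seller_obj(p, v)) - v) < 1e-3
    assert abs(max(grid, key=lambda p: buyer_obj(p, v)) - v) < 1e-3
```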
Now let's see how to come up with the transfers. The expected externality
mechanism relies on two ideas.
The first idea is that to get the incentives right, each agent's incentive
transfer is made equal to the sum of the expected action surpluses of the
other agents, where the expectation is calculated conditionally on (a) the
other agents reporting truthfully, and (b) our agent's report. This makes the
agent internalize the effect of his externalities on the other agents. His
expected payoff comes to equal the expected social surplus. Here, this means,
for example, that the seller's incentive transfer will equal the buyer's
expected action surplus. Thus, denoting the uniform distribution by $F$,
\begin{equation} \label{mech6}
\begin{array}{ll}
a_s &= {\displaystyle \int_0^{p_s} \left( 0 \right) dF(v_b) + \int_{p_s}
^1 v_b dF(v_b)} \\
& \\
& = {\displaystyle 0 +\bigg|_{p_s}^1 \frac{v_b^2}{2}} \\
& \\
& = \frac{1}{2} - \frac{p_s^2}{2}. \\
\end{array}
\end{equation}
The first integral is the expected buyer action surplus when no trade occurs
because the buyer's value $v_b$ is less than the seller's report $p_s$,
so the seller keeps the good and the buyer's action surplus is zero. The
second integral is the surplus if the buyer gets the good, which occurs whenever
the buyer's value, $v_b$ (and hence his report $p_b$), is greater than the
seller's report, $p_s$.
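The integral can also be checked numerically. A midpoint-rule sketch (illustrative only): the buyer's expected action surplus given the seller's report $p_s$ should come out to $1/2 - p_s^2/2$ when $v_b$ is uniform on $[0,1]$:

```python
# Midpoint-rule check of the incentive-transfer integral.

def expected_buyer_surplus(p_s, n=100000):
    total = 0.0
    for i in range(n):
        v_b = (i + 0.5) / n      # midpoint of cell i on [0, 1]
        if v_b > p_s:            # buyer gets the good, surplus v_b
            total += v_b
    return total / n

for p_s in [0.0, 0.4, 0.8]:
    assert abs(expected_buyer_surplus(p_s) - (0.5 - p_s ** 2 / 2)) < 1e-3
```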
\noindent
We can do the same thing for the buyer's incentive, finding the seller's
expected surplus.
\begin{equation} \label{mech7} \begin{array}{ll}
a_b &= {\displaystyle \int_0^{p_b} 0 dF(v_s) + \int_{p_b}^1 v_s dF(v_s)}
\\
& \\
& {\displaystyle= 0 + \bigg|_{p_b}^1 \frac{v_s^2}{2} } \\
&\\
& = \frac{1}{2} - \frac{p_b^2}{2}. \\
\end{array}
\end{equation}
If the seller's value $v_s$ is low, then it is likely that the buyer's report
of $p_b$ is higher than $v_s$, and the seller's action surplus is zero because
the trade will take place. If the seller's value $v_s$ is high, then the seller
will probably have a positive action surplus.
The second idea is that to get budget balancing, each agent's budget-balancing
transfer is chosen to help pay for the other agents' incentive transfers.
Here, we just have two agents, so the seller's budget-balancing transfer has to
pay for the buyer's incentive transfer. That is very simple: just set the
seller's budget-balancing transfer equal to the negative of the buyer's
incentive transfer, $b_s = -a_b$ (and likewise set $b_b = -a_s$), which is
exactly how the transfers in the three-part mechanism above were constructed.
The intuition and mechanism can be extended to $N$ agents. There are now $N$
reports $p_1,...p_N$. Let the action chosen be $x(p)$, where $p$ is the $N$-
vector of reports, and the action surplus of agent $i$ is $W_i( x(p),v_i)$.
To make each agent's incentive transfer equal to the sum of the expected
action surpluses of the other agents, choose it so \begin{equation}
\label{mech8}
{\displaystyle a_i = E \left( \Sigma_{j \neq i} W_j( x(p),v_j) \right). }
\end{equation}
The budget-balancing transfers can be chosen so that each agent's incentive
transfer is paid for by dividing the cost equally among the other $(N-1)$
agents: \begin{equation} \label{mech9}
{\displaystyle b_i = -\left( \frac{1}{N-1} \right) \left( \Sigma_{j \neq i}
E \left( \Sigma_{k \neq j} W_k( x(p),v_k) \right) \right).}
\end{equation}
There are other ways to divide the costs that will still allow the
mechanism to be incentive compatible, but equal division is the simplest.
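The budget-balancing step can be illustrated with arbitrary, made-up incentive transfers: charging each agent an equal $1/(N-1)$ share of every other agent's incentive transfer makes the total transfers sum to zero, whatever the $a_j$ happen to be (a sketch):

```python
# Budget-balance check: b_i = -(1/(N-1)) * sum_{j != i} a_j
# makes total transfers t_i = a_i + b_i sum to zero across agents.
import random

random.seed(0)
N = 5
a = [random.uniform(-1, 1) for _ in range(N)]           # incentive transfers
b = [-sum(a[j] for j in range(N) if j != i) / (N - 1)   # budget balancers
     for i in range(N)]
t = [a_i + b_i for a_i, b_i in zip(a, b)]
assert abs(sum(t)) < 1e-9
```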
The expected externality mechanism does have one problem: the participation
constraint. If the seller knows that $v_s=1$, he will not want to enter into
this mechanism. Reporting truthfully, $p_s = 1$, he always keeps the good, and
his expected transfer is $t_s = 0 - E\left[ \frac{1- v_b^2}{2}\right] =
-\frac{1}{3}$. Thus, his expected payoff from the mechanism is $1 - \frac{1}{3}
= \frac{2}{3}$, whereas he could get a payoff of 1 if he refused to
participate. We say that this mechanism fails to be {\bf interim individually
rational}, because at the point when the agents discover their own types, but
not those of the other agents, the agents might not want to participate in the
mechanism or choose the actions we desire.
Ordinarily economists think of bargaining as being less structured than in the
Bilateral Trading games, but it should be kept in mind that there are two styles
of bargaining: bargaining with loose rules that are ``made up as you go along'',
and bargaining with pre-determined rules to which the players can somehow
commit. This second kind of bargaining is more common in markets where many
bargains are going to be made and in situations where enough is at stake that
the players first negotiate the rules under which the main bargaining will
occur. Once bargaining becomes mechanism design, it becomes closer to the idea
of simply holding an auction. Bulow \& Klemperer (1996) compare the two means
of selling an item, observing that a key feature of auctions is involving more
traders, an important advantage.
\newpage \begin{small}
\noindent {\bf Notes}
\noindent {\bf N12.2} {\bf The Nash Bargaining Solution}
\begin{itemize} \item
See Binmore, Rubinstein, \& Wolinsky (1986) for a comparison of the cooperative
and noncooperative approaches to bargaining. For overviews of cooperative game
theory see Luce \& Raiffa (1957) and Shubik (1982).
\item While the Nash bargaining solution can be generalized to $n$ players (see
Harsanyi [1977], p. 196), the possibility of interaction between coalitions of
players introduces new complexities. Solutions such as the Shapley value
(Shapley [1953b]) try to account for these complexities.
$\;\;\;$ The {\bf Shapley value} satisfies the properties of invariance,
anonymity, efficiency, and linearity in the variables from which it is
calculated. Let $S_i$ denote a {\bf coalition} containing player $i$; that is, a
group of players including $i$ that makes a sharing agreement. Let $v(S_i)$
denote the sum of the utilities of the players in coalition $S_i$, and $v(S_i -
\{i\})$ denote the sum of the utilities in the coalition created by removing $i$
from $S_i$. Finally, let $c(s)$ be the number of coalitions of size $s$
containing player $i$. The Shapley value for player $i$ is then
\begin{equation} \label{e11.14} \phi_i = \frac{1}{n} \sum_{s=1}^n \frac{1}
{c(s)} \sum_{S_i\;} \left[ v(S_i) - v(S_i - \{i\}) \right], \end{equation}
where the $S_i$ are of size $s$. The motivation for the Shapley value is that
player $i$ receives the average of his marginal contributions to different
coalitions that might form. Gul (1989) has provided a noncooperative
interpretation. \end{itemize}
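Equation (\ref{e11.14}) can be cross-checked by brute force against the average-marginal-contribution motivation. The sketch below uses a small three-player characteristic function invented purely for the test:

```python
# Cross-check of the coalition-size Shapley formula against the
# average marginal contribution over all orderings of the players.
from itertools import combinations, permutations
from fractions import Fraction
from math import comb

players = (0, 1, 2)

def v(S):
    """Hypothetical worth: 1 if the coalition has player 0 and 2+ members."""
    return 1 if 0 in S and len(S) >= 2 else 0

def shapley_formula(i, n=3):
    total = Fraction(0)
    for s in range(1, n + 1):
        c_s = comb(n - 1, s - 1)   # coalitions of size s containing i
        inner = sum(v(S) - v(tuple(p for p in S if p != i))
                    for S in combinations(players, s) if i in S)
        total += Fraction(inner, c_s)
    return total / n

def shapley_orderings(i):
    marginals = []
    for order in permutations(players):
        k = order.index(i)
        marginals.append(v(order[:k + 1]) - v(order[:k]))
    return Fraction(sum(marginals), len(marginals))

for i in players:
    assert shapley_formula(i) == shapley_orderings(i)
```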
\bigskip
\noindent
{\bf N12.5} {\bf Incomplete Information.}
\begin{itemize}
\item
Bargaining under asymmetric information has inspired a large literature. In
early articles, Fudenberg \& Tirole (1983) uses a two-period model with two
types of buyers and two types of sellers. Sobel \& Takahashi (1983) builds a
model with either $T$ or infinite periods, a continuum of types of buyers, and
one type of seller. Cramton (1984) uses an infinite number of periods, a
continuum of types of buyers, and a continuum of types of sellers. Rubinstein
(1985a) uses an infinite number of periods, two types of buyers, and one type of
seller, but the types of buyers differ not in their valuations, but in their
discount rates. Rubinstein (1985b) puts emphasis on the choice of
out-of-equilibrium conjectures. Samuelson (1984) looks at the case where one bargainer
knows the size of the pie better than the other bargainer. Perry (1986) uses a
model with fixed bargaining costs and asymmetric information in which each
bargainer makes an offer in turn, rather than one offering and the other
accepting or rejecting. For overviews, see the surveys of Sutton
(1986) and Kennan \& Wilson (1993).
\item
The asymmetric information model in Section 12.5 has {\bf one-sided} asymmetry
in the information: only the buyer's type is private information. Fudenberg \&
Tirole (1983) and others have also built models with {\bf two-sided} asymmetry,
in which buyers' and sellers' types are both private information. In such models
a multiplicity of perfect Bayesian equilibria can be supported for a given set
of parameter values. Out-of-equilibrium beliefs become quite important, and
provided much of the motivation for the exotic refinements mentioned in Section
6.2.
\item
There is no separating equilibrium if, instead of discounting, the asymmetric
information model has fixed-size per-period bargaining costs, unless the
bargaining cost is higher for the high-valuation buyer than for the
low-valuation buyer. If, for example, there is no discounting, but a cost of $c$ is
incurred each period that bargaining continues, no separating equilibrium is
possible. That is the typical signalling result. In a separating equilibrium the
buyer tries to signal a low valuation by holding out, which fails unless it
really is less costly for a low-valuation buyer to hold out. See Perry (1986)
for a model with fixed bargaining costs which ends after one round of
bargaining.
\end{itemize}
\bigskip
\noindent
{\bf N12.6} {\bf Setting Up a Way To Bargain: The Myerson-Satterthwaite Model}
\begin{itemize}
\item The Bilateral Trading model originated in Chatterjee \& Samuelson (1983,
p. 842), who also analyze the more general mechanism with $p = \theta p_s +
(1-\theta) p_b$. I have adapted this description from Gibbons (1992, p.158).
\item Discussions of the general case can be found in Fudenberg \& Tirole
(1991a, p. 273), and Mas-Colell, Whinston \& Green (1994, p. 885). I have
taken the term ``expected externality mechanism'' from MWG. Fudenberg and Tirole
use ``AGV mechanism'' or ``AGV-Arrow mechanism'' for the same thing, because
the idea was first published in D'Aspremont \& Gerard-Varet (1979) and
Arrow (1979). It is also possible to add extra costs that depend on the
action chosen (for example, a transactions tax if the good is sold from buyer
to seller). See Fudenberg and Tirole, p. 274. Myerson (1991) is also worth
looking into.
\end{itemize}
\newpage
\noindent {\bf Problems}
\bigskip
\noindent
\textbf{12.1. A Fixed Cost of Bargaining and Grudges } (medium)\\
Smith
and Jones are trying to split 100 dollars. In bargaining round 1, Smith
makes an offer at cost $0$, proposing to keep $S_1$ for himself and
Jones either accepts (ending the game) or rejects. In round 2, Jones
makes an offer at cost $10$ of $S_2$ for Smith and Smith either accepts
or rejects. In round 3, Smith makes an offer of $S_3$ at cost $c$, and
Jones either accepts or rejects. If no offer is ever accepted, the 100
dollars goes to a third player, Dobbs.
\begin{enumerate}
\item[(a)]
If $c=0$, what is the equilibrium outcome?
\item[(b)]
If $c=80$, what is the equilibrium outcome?
\item[(c)] If $c=10$, what is the equilibrium outcome?
\item[(d)] What happens if $c=0$, but Jones is very emotional and
would spit in Smith's face and throw the 100 dollars to Dobbs if Smith
proposes $ S=100$? Assume that Smith knows Jones's personality
perfectly.
\end{enumerate}
\bigskip
\noindent { \bf 12.2: Selling Cars} (medium) \\
A car dealer must pay \$10,000 to the
manufacturer for each car he adds to his inventory. He faces three buyers. From
the point of view of the dealer, Smith's valuation is uniformly distributed
between \$12,000 and \$21,000, Jones's is between \$9,000 and \$12,000, and
Brown's is between \$4,000 and \$12,000. The dealer's policy is to make a
single take-it-or-leave-it offer to each customer, and he knows these three
buyers will not be able to resell to each other.
Use the notation that the maximum valuation is $ \overline{V}$ and the range of
valuations is $ R $.
\begin{enumerate}
\item[(a)] What will the offers be?
\item[(b)]
Who is most likely to buy a car? How does this compare with the outcome
with perfect price discrimination under full information? How does it compare
with the outcome when the dealer charges \$10,000 to each customer?
\item[(c)] What happens to the equilibrium prices if, with probability 0.25,
each buyer has a valuation of \$0, but the probability distribution remains
otherwise the same?
\end{enumerate}
\bigskip \noindent
{\bf 12.3. The Nash Bargaining Solution} (medium)\\
Smith and Jones,
shipwrecked on a desert island, are trying to split 100 pounds of cornmeal and
100 pints of molasses, their only supplies. Smith's utility function is $U_s = C
+ 0.5M$ and Jones's is $U_j = 3.5C + 3.5 M$. If they cannot agree, they fight
to the death, with $U=0$ for the loser. Jones wins with probability 0.8.
\begin{enumerate}
\item[(a)] What is the threat point?
\item[(b)]
With a 50-50 split of the supplies, what are the utilities if the two players
do not recontract? Is this efficient?
\item[(c)] Draw the threat point and the Pareto frontier in utility space (put
$U_s$ on the horizontal axis).
\item[(d)] According to the Nash bargaining solution, what are the utilities?
How are the goods split?
\item[(e)] Suppose Smith discovers a cookbook full of recipes for a variety of
molasses candies and corn muffins, and his utility function becomes $U_s=
10C + 5M$. Show that the split of goods in part (d) remains the same despite his
improved utility function.
\end{enumerate}
\bigskip
\noindent
{\bf 12.4. Price Discrimination and Bargaining } (easy) \\
A seller with marginal cost constant at $c$ faces a continuum of consumers
represented by the linear demand curve $Q^d = a-bP$, where $a>c.$ Demand is
at a rate of one or zero units per consumer, so if all consumers between
points 1 and 2.5 on the consumer continuum make purchases at a price of 13,
we say that a total of 1.5 units are sold at a price of 13 each.
\begin{enumerate}
\item[(a)]
What is the seller's profit if he chooses one take-it-or-leave-it price?
\item[(b)] What is the seller's profit if he chooses a continuum of
take-it-or-leave-it prices at which to sell, one price for each consumer? (You should
think here of a pricing function, since each consumer is infinitesimal).
\item[(c)]
What is the seller's profit if he bargains separately with each consumer,
resulting in a continuum of prices? You may assume that bargaining costs are
zero and that buyer and seller have equal bargaining power.
\end{enumerate}
\bigskip
\textbf{12.5. A Fixed Cost of Bargaining and Incomplete Information } (medium)\\
Up to part (c), this problem is identical with problem 12.1. Smith
and Jones are trying to split 100 dollars. In bargaining round 1, Smith
makes an offer at cost $0$, proposing to keep $S_1$ for himself and
Jones either accepts (ending the game) or rejects. In round 2, Jones
makes an offer at cost $10$ of $S_2$ for Smith and Smith either accepts
or rejects. In round 3, Smith makes an offer of $S_3$ at cost $c$, and
Jones either accepts or rejects. If no offer is ever accepted, the 100
dollars goes to a third player, Dobbs.
\begin{enumerate}
\item[(a)] If $c=0$, what is the equilibrium outcome?
\item[(b)] If $c=80$, what is the equilibrium outcome?
\item[(c)]
If Jones' priors are that $c=0$ and $c=80$ are equally likely, but
only Smith knows the true value, what are the players' equilibrium
strategies in rounds 2 and 3? (that is: what are $S_2$ and $S_3$, and
what acceptance rules will each player use?)
\item[(d)]
If Jones' priors are that $c=0$ and $c=80$ are equally likely, but only
Smith knows the true value, what are the equilibrium strategies for
round 1? (Hint: the equilibrium uses mixed strategies.)
\end{enumerate}
\bigskip \noindent {\bf 12.6. A Fixed Bargaining Cost, Again } (easy)\\
Apex and
Brydox are entering into a joint venture that will yield 500 million dollars,
but they must negotiate the split first. In bargaining round 1, Apex makes an
offer at cost $0$, proposing to keep $A_1$ for itself. Brydox either accepts
(ending the game) or rejects. In Round 2, Brydox incurs a cost of $10$
million to make an offer that gives $A_2$ to Apex, and Apex either accepts or
rejects. In Round 3, Apex incurs a cost of $c$ to make an offer that gives
itself
$A_3$, and Brydox either accepts or rejects. If
no offer is ever accepted, the joint venture is cancelled.
\begin{enumerate}
\item[(a)] If $c=0$, what is the equilibrium? What is the equilibrium outcome?
\item[(b)] If $c=10$, what is the equilibrium? What is the equilibrium
outcome?
\item[(c)]
If $c=300$, what is the equilibrium? What is the equilibrium outcome?
\end{enumerate}
%---------------------------------------------------------------
\bigskip
\noindent {\bf 12.7. Myerson-Satterthwaite } (medium)\\ The owner of a tract
of land
values his land at $v_s$ and a potential buyer values it at $v_b$. The buyer
and seller do not know each other's valuations, but guess that they are
uniformly distributed between 0 and 1. The seller and buyer suggest $p_s$ and
$p_b$ simultaneously, and they have agreed that the land will be sold to the
buyer at price $p =\frac{(p_b + p_s)}{2}$ if $p_s \leq p_b$.
The actual valuations are $v_s=0.2$ and $v_b=0.8$. What is one equilibrium
outcome given these valuations and this bargaining procedure? Explain why this
can happen.
%---------------------------------------------------------------
\bigskip
\noindent
{\bf 12.8. Negotiation (Rasmusen [2002]) } (hard)\\
Two parties, the Offeror and the Acceptor, are trying to agree to the clauses
in a contract. They have already agreed to a basic contract splitting the
surplus 50-50, so each player receives $Z$. The Offeror can, at cost
$C$, offer an additional clause, which the Acceptor can accept outright, inspect
carefully (at cost $M$), or reject outright. The additional clause is either
``genuine,'' yielding the Offeror $X_g$ and the Acceptor $Y_g$ if accepted,
or ``misleading,'' yielding the Offeror $X_m$ (where $X_m>X_g>0$) and the
Acceptor $-Y_m < 0$ if accepted.
What will happen in equilibrium?
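Before hunting for the equilibrium, it can help to lay out the terminal payoffs. The bookkeeping below reflects one reading of the problem, in which inspection perfectly reveals the clause's type and an inspected misleading clause is then rejected; the numeric parameter values are purely illustrative, since the problem leaves $X_g$, $X_m$, $Y_g$, $Y_m$, $C$, $M$, and $Z$ symbolic.

```python
# Payoff bookkeeping for the Negotiation game, under the assumption
# that inspection reveals the clause's type and a revealed misleading
# clause is rejected. All parameter values below are illustrative only.

Z, C, M = 10, 1, 2    # base surplus, offering cost, inspection cost
X_g, X_m = 3, 5       # Offeror's gain: genuine vs. misleading (X_m > X_g > 0)
Y_g, Y_m = 2, 4       # Acceptor's gain from genuine, loss from misleading


def payoffs(clause, response):
    """Return (offeror, acceptor) payoffs for one clause type and response."""
    offeror, acceptor = Z - C, Z        # offer already made at cost C
    if response == "accept":
        offeror += X_g if clause == "genuine" else X_m
        acceptor += Y_g if clause == "genuine" else -Y_m
    elif response == "inspect":
        acceptor -= M                   # inspection cost is sunk either way
        if clause == "genuine":         # revealed genuine: clause accepted
            offeror += X_g
            acceptor += Y_g
    # "reject": no further change
    return offeror, acceptor


for clause in ("genuine", "misleading"):
    for response in ("accept", "inspect", "reject"):
        print(clause, response, payoffs(clause, response))
```

The table makes the strategic tension visible: because $X_m > X_g$, the Offeror would like to slip in a misleading clause whenever the Acceptor accepts without inspecting, which is why the equilibrium involves mixing.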
%---------------------------------------------------------------
\newpage
\begin{center}
{\bf Labor Bargaining: A Classroom Game for Chapter 12}\footnote{This game
is adapted from a classroom game of Vijay Krishna.}
\end{center}
Currently, an employer is paying members of a labor union \$46,000 per year,
but the union has told its members it thinks \$68,000 would be a fairer amount.
Every \$1,000 increase in salary costs the employer \$30 million per year, and
benefits the workers in aggregate by \$25 million (the missing \$5 million going
to taxes, which are heavier for the workers).
If the workers go on strike, it will cost the workers \$25 million per week in
forgone earnings, and it will cost the employer \$60 million per week in lost
profits.
Interest rates are low enough that they can be ignored in this game.
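The arithmetic above can be collected into per-side payoff functions. The sketch below measures everything in millions of dollars and, as an illustration only, treats the salary figures as a one-year flow (the game itself leaves the horizon implicit) and reads the employer's \$60 million strike loss as accruing per week:

```python
# Stakes in the labor-bargaining classroom game, in millions of dollars.
# Assumptions (illustrative, not stated in the game): salary gains count
# for one year, and the employer's strike loss accrues weekly.

OLD_SALARY = 46  # status-quo salary in thousands of dollars ($46,000)


def employer_cost(salary, strike_weeks):
    """Employer's extra cost versus the status quo."""
    return (salary - OLD_SALARY) * 30 + 60 * strike_weeks


def union_gain(salary, strike_weeks):
    """Workers' net gain versus the status quo."""
    return (salary - OLD_SALARY) * 25 - 25 * strike_weeks


# Example: settling at $68,000 after a four-week strike.
print(employer_cost(68, 4))  # 22*30 + 4*60 = 900
print(union_gain(68, 4))     # 22*25 - 4*25 = 450
```

One useful observation for play: every week of striking destroys \$85 million of joint surplus (\$25 million for the workers plus \$60 million for the employer), while each \$1,000 of salary merely transfers value, losing \$5 million to taxes along the way.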
The rules for bargaining are as follows. The union makes the first offer, on
May 1 (time 0), and the employer accepts or rejects. If the employer accepts the
offer, there is no strike. If the employer rejects it, there is a strike for the
next week, but the employer then can make a counteroffer on May 8 (time 1). If
it is accepted by the union, the strike has lasted one week. If it is rejected,
the union has one week in which to put together its counteroffer for May 15
(time 2).
The workers' morale and bank accounts will run out after 7 weeks of a strike,
at time 7. If no other agreement has been reached, the union must then accept an
offer as low as \$46,000. It will not accept an offer any lower, because the
workers angrily refuse to ratify a lower offer.
Students will be put into groups of three that represent either the employer
or the union. Employer groups and union groups will then pair up to
simultaneously play the game. A group's objective is to maximize its payoff.
The instructor will set up a place on the blackboard for each group to record its
weekly offers. If a group cannot agree on what offer to make and does not write
it up on the board in time, then it forfeits its chance to make an offer that
week. Each offer must be in thousands of dollars of annual salary-- no offers
of \$52,932 are allowed.
\end{small}
\end{document}