\documentclass[12pt,reqno, usenames,dvipsnames]{amsart}
\usepackage{setspace}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{verbatim}
\usepackage{array,multirow}
\hypersetup{breaklinks=true,
pagecolor=white,
colorlinks=true,
linkcolor= blue,
hyperfootnotes= true,
urlcolor=blue
}
\urlstyle{rm}
% \reversemarginpar
%\topmargin -.3in \oddsidemargin -.1in
%\textheight 9in \textwidth 7.5in
\newcommand{\margincomment}[1]
{\mbox{}\marginpar{\tiny\hspace{0pt}#1}}
\newcommand{\comments}[1]{}
\renewcommand{\baselinestretch}{1.2}
\parindent 24pt
\parskip 10pt
% \doublespacing
\begin{document}
\titlepage
%\vspace*{12pt}
\begin{center}
{\large {\bf Back to Bargaining Basics:\\
A Breakdown Model for Splitting a Pie
}}
February 29, 2020
\bigskip
Eric Rasmusen
{\it Abstract}
\end{center}
\begin{small}
Nash (1950) and Rubinstein (1982) give two different justifications for a 50-50 split of surplus in bargaining with two players. Nash's axioms extend to $n$ players, but no non-cooperative game theory model of $n$-person bargaining has become standard. I offer a simple static model that reaches a 50-50 split (or $1/n$) as the unique equilibrium. Each player chooses a ``toughness level'' simultaneously, but greater toughness always generates a risk of breakdown. Introducing asymmetry, a player who is more risk averse gets a smaller share in equilibrium. ``Bargaining strength'' can also be parameterized to yield an asymmetric split. The model can be extended by making breakdown mere delay, in which case it resembles Rubinstein (1982) but with an exact 50-50 split and delay in equilibrium. The model needs only minimal assumptions on breakdown probability and pie division as functions of toughness. Its intuition is simple: whoever has a bigger share loses more from breakdown and hence has less incentive to be tough.
\noindent
Rasmusen:
Professor, Department of Business Economics and Public Policy, Kelley
School
of Business, Indiana University. 1309 E. 10th Street,
Bloomington,
Indiana, 47405-1701. (812) 855-9219.
\href{mailto:erasmuse@indiana.edu}{ erasmuse@indiana.edu}, \url{http://www.rasmusen.org}.
%\margincomment{\hspace{.5in} \includegraphics[width=.7in]
%{EricRasmusen2007.jpg}}
{\small
\noindent This paper:
\url{http://www.rasmusen.org/papers/bargaining50.pdf}. }
{\small
\noindent
Keywords: bargaining, splitting a pie, Rubinstein model, Nash bargaining solution, hawk-dove game, Nash Demand Game, Divide the Dollar }\\
JEL Code: C78.
{\small I would like to thank George Loginov, Anh Nguyen, Benjamin Rasmusen, Michael Rauh, Joel Sobel, and participants in the 2019 IIOC Meetings, the 2019 Midwest Theory Meeting, and the BEPP Brown Bag Lunch for helpful comments, as well as referees from a journal which rejected the paper who nonetheless made careful and useful criticisms. }
\end{small}
\newpage
\noindent
{\sc 1. Introduction}
Bargaining shows up as part of so many models in economics that it's especially useful to have simple models of it with the properties appropriate for the particular context. Often, the modeller wants the simplest model possible, because the outcome doesn't matter to his question of interest, so he assumes one player makes a take-it-or-leave-it offer and the equilibrium is that the other player accepts the offer. Or, if it matters that both players receive some surplus (for example, if the modeller wishes to give both players some incentive to make relationship-specific investments), the modeller chooses to have the surplus split 50-50. This can be done as a ``black box'' reduced form. Or, it can be taken as the unique symmetric equilibrium and the focal point in the ``Splitting a Pie'' game (also called ``Divide the Dollar''), in which both players simultaneously propose a surplus split and if their proposals add up to more than 100\% they both get zero. The caveats ``symmetric'' and ``focal point'' need to be applied because this game, the most natural way to model bargaining, has a continuum of equilibria, including not only 50-50, but 70-30, 80-20, 50.55-49.45, and so forth. Moreover, it is a large infinity of equilibria: as shown in Malueg (2010) and Connell \& Rasmusen (2018), there are also continua of mixed-strategy equilibria such as the Hawk-Dove equilibria (both players mixing between 30 and 70), more complex symmetric discrete mixed-strategy equilibria (both players mixing between 30, 40, 60, and 70), asymmetric discrete mixed-strategy equilibria (one player mixing between 30 and 40, and the other mixing between 60 and 70), and continuous mixed-strategy equilibria (both players mixing over the interval [30, 70]).
Commonly, though, modellers cite Nash (1950) or Rubinstein (1982), which do have unique equilibria. On Google Scholar these two papers had 9,067 and 6,343 cites as of September 6, 2018.
The Nash model is the entire subject of Chapter 1 and the Rubinstein model is the entire subject of Chapter 2 of the best-known books on bargaining, Martin Osborne and Ariel Rubinstein's 1990 {\it Bargaining and Markets} and Abhinay Muthoo's 1999 {\it Bargaining Theory with Applications} (though, to be sure, my own treatment in Chapter 12 of {\it Games and Information} is organized somewhat differently).
Nash (1950) finds a unique 50-50 split using four axioms. {\it Efficiency} says that the solution is Pareto optimal, so the players cannot
both be made better off by any change. {\it Anonymity (or Symmetry) } says that switching the labels on players 1
and 2 does not affect the solution. {\it Invariance } says that the solution is independent of the units in which utility is
measured. {\it Independence of Irrelevant Alternatives } says that if we drop some possible
pie divisions, but not the equilibrium division, the division that we call the equilibrium does not change. For Splitting a Pie, only Efficiency and Symmetry are needed for 50-50 to be the unique equilibrium when the players have the same utility functions. The other two axioms handle situations where the utility frontier is not $u_1=1-u_2$, i.e., the diagonal from (0,1) to (1,0). Essentially, they handle it by relabelling the pie division as running from player 1 getting 100\% of his potential maximum utility and player 2 getting 0\% to the opposite, where player 2 gets 100\%.
Rubinstein (1982) obtains the 50-50 split differently. Nash's equilibrium is in the style of cooperative games, and shows what the equilibrium will be if we think it will be efficient and symmetric.
The
``Nash program'' as described in Binmore (1980, 1985) is to give noncooperative microfoundations for the 50-50 split, to show the tradeoffs players make that reach that result. Rubinstein (1982) is the great success of the Nash program. Each player in turn proposes a split of the pie, with the other player responding with accept or reject. If the response is to reject, the pie's value shrinks according to the discount rates of the players. This is a game of complete information with an infinite number of possible rounds. In the unique subgame perfect equilibrium, the first player proposes a split giving slightly more than 50\% to himself, and the other player accepts, knowing that if he rejects and waits until the second period, when he has the advantage of being the proposer, the pie will have shrunk, so it is not worth waiting. If one player is more impatient, that player's equilibrium share is smaller. Thus, the tradeoff is between accepting the other player's offer now, or incurring a time cost of waiting to make one's own, more advantageous offer--- but knowing that the alternation could continue forever.
The present paper illustrates a different tradeoff, where being tougher has the advantage of giving a bigger share if successful but the disadvantage of being more likely to cause breakdown in negotiations. We will see this in a model that will be noncooperative like Rubinstein's but like Nash's will leave important assumptions unexplained--- here, why being tougher increases one's share of the pie and the probability of breakdown. This will allow for a very simple model, in the hope that the occasional reversal of the usual movement over time from less to more complex models may be welcome to some readers.
The significance of endogenous breakdown is that it imposes a continuous cost on a player who chooses to be tougher. In Rubinstein (1982), the proposer's marginal cost of toughness is zero as he proposes a bigger and bigger share for himself up until the point where the other player would reject his offer---where the marginal cost becomes infinite. In the model here, the marginal cost of toughness will be the increase in the probability of breakdown times the share that is lost, so a player cannot become tougher without positive marginal cost. Since ``the share that is lost'' is part of the marginal cost, that cost will be higher for the player with the bigger share. This implies that if the player with the bigger share is indifferent about being tougher, the other player will have a lower marginal cost of being tougher and will not be indifferent. As a result, a Nash equilibrium will require that both players have the same share.
\bigskip
\noindent
{\it Further Comments on the Literature}
In Rubinstein (1982), the players always reach immediate agreement. That is because he interprets the discount rate as time preference, but another way to interpret it--- if both players have the same discount rate and are risk neutral--- is as an exogenous probability of complete bargaining breakdown. Or, risk aversion can replace discounting in an alternating-offers model, in which case the more risk-averse player will receive a smaller share in equilibrium.
Binmore, Rubinstein \& Wolinsky (1986) and Chapter 4 of Muthoo (1999) explore these possibilities. If there is an exogenous probability that negotiations break down and cannot resume, so the surplus is forever lost, then even if the players are infinitely patient they will want to reach agreement quickly to avoid the possibility of losing the pie entirely. Especially when this assumption is made, the idea in Shaked and Sutton (1984) of looking at the ``outside options'' of the two players becomes important. These models still do not result in delay in equilibrium, but one approach that does is to look at bargaining as a war of attrition, as in Abreu \& Gul (2000). They assume a small possibility that a player is of an exogenously bull-headed type who will not back down, in which case the normal, rational type of player uses a mixed strategy to decide when to concede, and agreement might occur only after delay. This sounds closer to the literature on bargaining under incomplete information, but it stays close to complete information because it assumes only a small probability of a player being of a special type. The equivalent in their model of the current paper's ``toughness'' is the probability of continuing delay rather than conceding to the other player.
There have also been efforts to justify the Nash bargaining solution as the result of the risk of breakdown.
Roth (1979) models bargaining as two single-player decision problems, where each player chooses a share that maximizes his expected utility under a particular belief about the share proposed by the other player. This attains the Nash product solution, but the two players' beliefs are inconsistent. Bastianello \& LiCalzi (2019) use a model in which a mediator chooses shares $x$ and $1-x$ that maximize the probability those shares are accepted by both players, where the probability a player accepts is increasing in his share. Thus, the possibility of breakdown is driving the solution, but the focus is on justifying the product solution and extending the idea to other axiomatic bargaining solutions rather than modelling breakdown as an equilibrium phenomenon.
Another approach to bargaining is to return to the one-shot game but change the structure of the breakdown or payoff functions. Nash (1953), in his second paper on bargaining, adds precommitment to a mixed-strategy threat players use if their initial bids add up to more than one, and finds a 50-50 split as the limit of continuous approximations of the breakdown function. In Binmore (1987) and Carlsson (1991), players have positive probability of making errors in announcing their bids. In Anbarci (2001), the sharing rule says that when the bids add up to more than one, the share of the player with the larger bid is scaled back the most.
\bigskip
\noindent
{\sc 2. The Model}
Players 1 and 2 are splitting a pie of size 1. Each simultaneously chooses a toughness level $x_i$ in $[0, \infty)$. With probability $p(x_1, x_2)$, bargaining fails and each ends up with a payoff of zero. Otherwise, player 1 receives $\pi(x_1, x_2)$ and player 2 receives $1-\pi(x_1, x_2)$. For convenience, we will assume there is an arbitrarily small fixed cost of effort for any toughness greater than zero, so a player will prefer a toughness of zero to a toughness high enough to cause breakdown with probability one. I will omit that infinitesimal from the payoff equations.
\noindent
{\bf Example 1: The Basics}.
Let $p(x_1, x_2) = \min\{ \frac{x_1+x_2}{12}, 1\}$ and $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$ (with $\pi=.5$ if $x_1=x_2=0$). The payoff functions are
\begin{equation} \label{equation1}
\mathit{Payoff}_1 = p\cdot (0) + (1-p)\pi = (1- \frac{x_1+x_2}{12})\frac{x_1 }{x_1+x_2} = \frac{x_1 }{x_1+x_2}- \frac{x_1}{12}
\end{equation}
and
$$% \begin{equation} \label{e0}
\mathit{Payoff}_2 = p\cdot (0) + (1-p)(1-\pi) = (1- \frac{x_1+x_2}{12}) (1-\frac{x_1 }{x_1+x_2}) = \frac{x_2 }{x_1+x_2}- \frac{x_2}{12}
$$%\end{equation}
Maximizing equation (\ref{equation1}) with respect to $x_1$,
Player 1's first order condition is
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1}{\partial x_1} = \frac{1 }{x_1+x_2}- \frac{x_1 }{(x_1+x_2)^2} - 1/12=0
$$% \end{equation}
so $x_1+x_2 - x_1 - \frac{(x_1+x_2)^2}{12}=0 $ and $12x_2 - (x_1+x_2)^2=0$.
Player 1's reaction curve is
$$% \begin{equation} \label{e0}
x_1 = 2 \sqrt{3} \sqrt{x_2} - x_2,
$$%\end{equation}
as shown in Figure 1.
Solving with $ x_1=x_2 =x$ we obtain $x= 3$ in the unique Nash equilibrium, with no need to apply the refinement of subgame perfection. The pie is split equally, and the probability of breakdown is $p = \frac{3+3}{12} = 50\%$.
Note that the infinitesimal fixed cost of effort rules out uninteresting breakdown equilibria such as $(x_1= 13, x_2=13)$ or $(x_1= \infty, x_2=13)$.
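As a numerical sanity check (a Python sketch; the grid size and helper names are my own choices, not part of the model), Example 1's reaction curve and equilibrium can be confirmed by grid search:

```python
import math

def payoff1(x1, x2):
    """Player 1's expected payoff (1 - p) * pi; a breakdown pays zero."""
    p = min((x1 + x2) / 12, 1.0)
    return (1 - p) * x1 / (x1 + x2)

def best_response(x2, grid=100000, hi=12.0):
    """Grid-search player 1's best reply to x2 (step size hi/grid)."""
    return max((i * hi / grid for i in range(1, grid + 1)),
               key=lambda x1: payoff1(x1, x2))

def reaction(x2):
    """Closed-form reaction curve from the first-order condition."""
    return 2 * math.sqrt(3) * math.sqrt(x2) - x2

# The grid search reproduces the closed-form reaction curve.
for x2 in (1.0, 3.0, 5.0):
    assert abs(best_response(x2) - reaction(x2)) < 1e-3

# Symmetric equilibrium: x1 = x2 = 3, breakdown probability (3+3)/12 = 1/2.
assert abs(best_response(3.0) - 3.0) < 1e-3
```

The payoff function is strictly concave in own toughness below the breakdown ceiling, so the grid argmax lands within one step of the true best reply.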
\hspace*{-48pt} \begin{minipage}[c]{ \linewidth}
\begin{center}
{\sc Figure 1:\\
Reaction Curves for Toughnesses $x_1$ and $x_2$ in Example 1 } \label{example-1-reaction-curves.pdf}
\includegraphics[width=3in]{example-1-reaction-curves.pdf}
\end{center}
\end{minipage}
In Example 1, the particular breakdown function $p$ leads to a very high equilibrium probability of breakdown--- 50\%. The model retains its key features, however, even if the equilibrium probability of breakdown is made arbitrarily small by choice of a breakdown function with sufficiently great marginal increases in breakdown as toughness increases. Example 2 shows how that works.
\bigskip
\noindent
{\bf Example 2: A Vanishingly Small Probability of Breakdown. }
Keep $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$ as in Example 1, but let the breakdown probability be $p(x_1, x_2) = \frac{(x_1+x_2)^k}{12k }$ for a parameter $k$ to be chosen. Player 1 maximizes
$$%\begin{equation} \label{e0}
\begin{array}{lll}
\mathit{Payoff}_1& = & p(0) + (1-p)\pi \\
&&\\
&= & (1- \frac{(x_1+x_2)^k}{12k})\frac{x_1 }{x_1+x_2} \\
&&\\
&= &\frac{x_1 }{x_1+x_2}- \frac{x_1(x_1+x_2)^{k-1}}{12k}\\
\end{array}
$$%\end{equation}
The first order condition is
$$% \begin{equation} \label{e0}
\frac{1 }{x_1+x_2}- \frac{x_1 }{(x_1+x_2)^2} - x_1(k-1) (x_1+x_2)^{k-2}/12k - \frac{ (x_1+x_2)^{k-1}}{12k}=0
$$% \end{equation}
so
$ 12k(x_1+x_2)- 12k x_1 - x_1(k-1) (x_1+x_2)^{k } - (x_1+x_2)^{k+1 } =0$ and
$ 12k x_2 - x_1(k-1) (x_1+x_2)^{k } - (x_1+x_2)^{k+1 } =0$. Player 2's payoff function is
$$%\begin{equation} \label{e0}
\mathit{Payoff}_2 = (1- \frac{(x_1+x_2)^k}{12k})(1-\frac{x_1 }{x_1+x_2}) = \frac{x_2 }{x_1+x_2}- \frac{x_2(x_1+x_2)^{k-1}}{12k}
$$%\end{equation}
The equilibrium is symmetric, so we can solve
$ 12 kx - x (k-1) (2x)^{k } - (2x)^{k+1 } =0$ to get
$x=( \frac{ 12k (2^{-k})}{k+1})^{1/k}$, and
$$%\begin{equation} \label{e0}
x^*= .5( \frac{ 12k }{k+1})^{1/k}
$$%\end{equation}
If $k=1$ then $ x^* =( \frac{ 12 (2^{-1})}{2})^{1} =3$, and $p = \frac{6}{12} = .5$, as in Example 1.
If $k=2$ then $ x^* \approx 1.4$ and $p = 1/3$.
If $k=5$ then $ x^* \approx .79$ and $p \approx .17$.
As $k$ becomes large, $x^*$ converges to $.5$. Since the probability of breakdown is $p(x_1, x_2) = \frac{(x_1+x_2)^k}{12k}$, the equilibrium probability of breakdown is $p= \frac{1}{k+1}$, which approaches 0 as $k$ increases.
Thus, it is possible to construct a variant of the model in which the probability of breakdown approaches zero, but we retain the other features, including the unique 50-50 split of the surplus. Note that it is also possible to construct a variant with the equilibrium probability of breakdown approaching one, by using a breakdown probability function with a very low marginal probability of breakdown as toughness increases.
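The comparative statics in $k$ can be tabulated directly (a Python sketch; the helper names are my own):

```python
# Equilibrium toughness and breakdown probability in Example 2 as k varies:
# x* = 0.5 * (12k/(k+1))^(1/k), and p = (2x*)^k / (12k) = 1/(k+1).

def x_star(k):
    return 0.5 * (12 * k / (k + 1)) ** (1 / k)

def breakdown(k):
    return (2 * x_star(k)) ** k / (12 * k)

for k in (1, 2, 5, 50, 500):
    print(k, round(x_star(k), 3), round(breakdown(k), 3))

assert abs(x_star(1) - 3.0) < 1e-9 and abs(breakdown(1) - 0.5) < 1e-9
assert abs(breakdown(2) - 1/3) < 1e-9
assert abs(breakdown(5) - 1/6) < 1e-9     # roughly .17, as in the text
# As k grows, toughness tends to 1/2 and breakdown probability to zero.
assert abs(x_star(500) - 0.5) < 0.01 and breakdown(500) < 0.01
```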
\bigskip
Having seen equal shares in two examples, let us now build a more general model.
Let the probability of bargaining breakdown be $p(x_1, x_2)$ and player 1's share of the pie be $\pi(x_1, x_2)$. Let us add an effort cost $c(x_i)$ for player $i$, with $c \geq 0$, $ \frac{d c}{d x_i}\geq 0$, and $\frac{d^2c}{d x_i^2} \geq 0$. Assume that player $i$ prefers a lower value of $x_i$ to a higher one if the payoffs are equal, even if $c=0$. The players have identical (for now) quasilinear utility functions and are possibly risk averse: $ U_1=u(\pi) - c(x_1)$ and $U_2= u (1-\pi) - c(x_2)$ with $u'> 0$ and $ u'' \leq 0$, normalized so that $u(0) \equiv 0$.
We will assume that the breakdown probability, $p(x_1, x_2)$, has $\frac{\partial p}{\partial x_1}>0$, $\frac{\partial p}{\partial x_2}>0$, $\frac{\partial^2 p}{\partial x_1^2} \geq 0$, $\frac{\partial^2 p}{\partial x_2^2} \geq 0$, and $\frac{\partial^2 p}{\partial x_1 \partial x_2} \geq 0$ for all values of $x_1, x_2$ such that $p<1$. The probability of breakdown rises with each player's toughness, at a weakly increasing rate, until it reaches 1. Also, assume that $p(a,b) = p(b,a)$, which is to say that the breakdown probability does not depend on the identity of the players, just the combination of toughnesses they choose.
We will assume that player 1's share of the pie, $\pi(x_1, x_2) \in [0,1]$, has $\frac{\partial \pi}{\partial x_1}>0$, $\frac{\partial \pi}{\partial x_2}<0$, $\frac{\partial^2 \pi}{\partial x_1^2} \leq 0$, $\frac{\partial^2 \pi}{\partial x_2^2} \geq 0$. We will also assume that $\frac{\partial^2 \pi}{\partial x_1 \partial x_2} \leq 0$ if $ x_1 \leq x_2$ and $\frac{\partial^2 \pi}{\partial x_1 \partial x_2} \geq 0$ if $ x_1 \geq x_2$. A player's share ($\pi$ for player 1, $(1-\pi)$ for player 2) rises with his toughness, with diminishing returns; and the other player's toughness reduces his marginal return if he is the less tough of the two. Also, $\pi(a, b) = 1- \pi(b, a)$, which is to say that if one player chooses $a$ and the other chooses $b$, the share of the player choosing $a$ does not depend on whether he is player 1 or player 2. The function $\pi= \frac{x_1}{x_1+x_2}$ used in Example 1 satisfies these assumptions.
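For a concrete check (a sketch; the finite-difference step and test points are my own choices), one can verify numerically that Example 1's sharing function satisfies these assumptions:

```python
# Finite-difference check that pi(x1, x2) = x1/(x1+x2) satisfies the
# assumptions on shares: positive own effect, weakly diminishing returns,
# the sign pattern of the cross partial, and symmetry.
h = 1e-4

def pi(x1, x2):
    return x1 / (x1 + x2)

def d1(f, x1, x2):      # partial with respect to x1
    return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)

def d11(f, x1, x2):     # second partial with respect to x1
    return (f(x1 + h, x2) - 2 * f(x1, x2) + f(x1 - h, x2)) / h**2

def d12(f, x1, x2):     # cross partial
    return (d1(f, x1, x2 + h) - d1(f, x1, x2 - h)) / (2 * h)

for x1, x2 in [(1, 2), (2, 1), (3, 3), (0.5, 4)]:
    assert d1(pi, x1, x2) > 0            # share rises in own toughness
    assert d11(pi, x1, x2) <= 1e-6       # with weakly diminishing returns
    if x1 < x2:
        assert d12(pi, x1, x2) <= 1e-6   # cross partial <= 0 when x1 <= x2
    elif x1 > x2:
        assert d12(pi, x1, x2) >= -1e-6  # cross partial >= 0 when x1 >= x2
    assert abs(pi(x1, x2) + pi(x2, x1) - 1) < 1e-12   # pi(a,b) = 1 - pi(b,a)
```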
The assumptions on $\pi$ imply that $\lim_{x_1 \rightarrow \infty} \frac{\partial \pi}{\partial x_1} = 0$, since $\frac{\partial \pi}{\partial x_1}>0$, $\frac{\partial^2 \pi}{\partial x_1^2} \leq 0$, and $\pi \leq 1$: as $x_1$ grows, if its marginal effect on $\pi$ were constant then $\pi$ would eventually hit the upper bound $\pi=1$, and for higher $x_1$ we would have $\frac{\partial \pi}{\partial x_1}=0$; but if the marginal effect on $\pi$ diminishes, it must diminish to zero (and similarly for $x_2$'s effect).
\noindent
{\bf
Proposition 1.} {\it The general model has a unique Nash equilibrium, and that equilibrium is in pure strategies with a 50-50 split of the surplus: $x_1^*=x_2^*$ and $\pi(x_1^*,x_2^*)=.5 $. }
\noindent
{\bf Proof.} The expected payoffs are
$$%\begin{equation} \label{e0}
\mathit{Payoff}_1 = p(x_1, x_2)(0) + (1-p(x_1, x_2)) u (\pi(x_1, x_2)) - c(x_1)
$$% \end{equation}
and
$$%\begin{equation} \label{e0}
\mathit{Payoff}_2 = p(x_1, x_2)(0) + (1-p(x_1, x_2)) u (1-\pi(x_1, x_2)) - c(x_2).
$$%\end{equation}
The first order conditions are
\begin{equation} \label{FOC-general}
\frac{\partial \mathit{Payoff}_1}{\partial x_1}= \left(\frac{d u (\pi) }{d \pi}\cdot \frac{\partial \pi}{\partial x_1} - p \frac{d u (\pi) }{d \pi}\frac{\partial \pi}{\partial x_1}\right) - \frac{\partial p}{\partial x_1} u(\pi) - \frac{d c}{dx_1} =0
\end{equation}
and
$$%{equation} \label{e0}
\frac{\partial \mathit{Payoff}_2}{\partial x_2}= -\left(\frac{d u (1-\pi) }{d \pi} \cdot \frac{\partial \pi}{\partial x_2} - p \frac{d u (1-\pi) }{d \pi}\frac{\partial \pi}{\partial x_2}\right) - \frac{\partial p}{\partial x_2} u(1-\pi) -\frac{d c}{dx_2} =0,
$$%\end{equation}
where the first two terms in parentheses are the marginal benefit of increasing one's toughness and the second two terms are the marginal cost. The marginal benefit is an increased share of the pie, adjusted for diminishing marginal utility of consumption. The marginal cost is the loss from more breakdown plus the marginal cost of toughness.
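As a spot check (a sketch, using Example 1's risk-neutral specification, where $u(\pi)=\pi$ and $c=0$), the first-order condition \eqref{FOC-general} holds at the equilibrium $x_1=x_2=3$:

```python
# Verify (1 - p) u'(pi) dpi/dx1 - (dp/dx1) u(pi) = 0 at x1 = x2 = 3
# for Example 1: p = (x1+x2)/12, pi = x1/(x1+x2), u(pi) = pi, c = 0.
x1 = x2 = 3.0
s = x1 + x2
p = s / 12              # breakdown probability = 0.5
pi = x1 / s             # player 1's share = 0.5
dpi_dx1 = x2 / s**2     # partial of the share wrt own toughness
dp_dx1 = 1 / 12         # partial of breakdown wrt own toughness

foc = (1 - p) * dpi_dx1 - dp_dx1 * pi   # u'(pi) = 1 and dc/dx1 = 0 here
assert abs(foc) < 1e-12
```

The marginal benefit $(1-p)\,\partial\pi/\partial x_1$ and the marginal cost $(\partial p/\partial x_1)\,\pi$ both equal $1/24$ at the equilibrium.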
First, note that if there is a corner solution at $x_1=x_2=0$, it is a unique solution with a 50-50 split of the surplus. That occurs if
$\frac{\partial \mathit{Payoff}_1 (0,0)}{\partial x_1}<0$, since the weak convexity assumptions tell us that higher levels of toughness would also have marginal cost greater than marginal benefit. That is why we did not need to make a limit assumption such as $\lim_{x_1 \rightarrow 0} \frac{\partial \pi}{\partial x_1} = \infty$ and $c'' <\infty$ for the theorem to be valid, though of course the model is trivial if the toughness levels of both players are zero.
There is not a corner solution with large $x_1$. Risk-neutral utility with zero direct toughness costs makes risking breakdown by choosing large $x_1$ most attractive, so it is sufficient to rule it out for that case. Set $u_1(\pi) = \pi$ and $c(x_1)=0$, so $ \frac{\partial \mathit{Payoff}_1}{\partial x_1}= (1-p)\frac{\partial \pi}{\partial x_1} - \frac{\partial p}{\partial x_1} \pi$. The function $p$ is linear or convex, so it equals 1 for some finite $x_1 \equiv \overline{x}$ (for given $x_2$). $\frac{\partial p}{\partial x_1}>0$, by assumption, and does not fall below $\frac{\partial p}{\partial x_1}(0, x_2)$ by the assumption of $\frac{\partial^2 p}{\partial x_1^2} \geq 0$. Hence, at $x_1=\overline{x}$, $ (1-p(\overline{x}, x_2 )) \frac{\partial \pi}{\partial x_1} - \frac{\partial p}{\partial x_1}(\overline{x}, x_2 ) \pi = 0 - \frac{\partial p}{\partial x_1}(\overline{x}, x_2 ) \pi <0$ and the solution to player 1's maximization problem must be $x_1 < \overline{x}$.
We will now look at interior solutions and establish uniqueness and the 50-50 split.
We will first establish that the marginal return to toughness is strictly decreasing, so that the second-order condition is satisfied: $ \frac{\partial^2 \mathit{Payoff}_1}{\partial x_1^2} <0$.
The derivative of the first two terms in \eqref{FOC-general} with respect to $x_1$ is
\begin{equation} \label{first-two-terms}
\begin{array}{l}
[ \frac{d^2 u_1}{d \pi^2} (\frac{\partial \pi}{\partial x_1})^2 + \frac{d u_1}{d \pi} \frac{\partial^2 \pi}{\partial x_1^2}] +[ - \frac{\partial p}{\partial x_1} \frac{d u_1}{d \pi}\frac{\partial \pi}{\partial x_1} - p \frac{d^2 u_1}{d \pi^2} (\frac{\partial \pi}{\partial x_1})^2 - p \frac{d u_1}{d \pi}\frac{\partial^2 \pi}{\partial x_1^2}] \\
\\
= (1-p) \frac{d^2 u_1}{d \pi^2} (\frac{\partial \pi}{\partial x_1})^2 + (1-p) \frac{d u_1}{d \pi} \frac{\partial^2 \pi}{\partial x_1^2} - \frac{\partial p}{\partial x_1} \frac{d u_1}{d \pi}\frac{\partial \pi}{\partial x_1} \\
\end{array}
\end{equation}
The first term of \eqref{first-two-terms}, the marginal benefit, is zero or negative because $ (1-p)>0$ and $\frac{d^2 u_1}{d \pi^2} \leq 0$. The second term is zero or negative because $ (1-p)>0$, $\frac{d u_1}{d \pi}>0$ and $ \frac{\partial^2 \pi}{\partial x_1^2} \leq 0$. The third term--- the key one--- is strictly negative because $\frac{\partial p}{\partial x_1}>0$, $\frac{d u_1}{d \pi}>0$, and $\frac{\partial \pi}{\partial x_1} >0$.
The derivative of the third and fourth terms of \eqref{FOC-general} with respect to $x_1$, the marginal cost, is
\begin{equation} \label{second-two-terms}
- \frac{\partial^2 p}{\partial x_1^2} u - \frac{\partial p}{\partial x_1} \frac{d u }{d \pi}\frac{\partial \pi}{\partial x_1} - \frac{d^2 c}{d x_1^2}
\end{equation}
The first term of \eqref{second-two-terms} is zero or negative because $\frac{\partial^2 p}{\partial x_1^2} \geq 0$ and $u >0$. The second term--- another key one--- is strictly negative because $\frac{\partial p}{\partial x_1}>0$, $\frac{d u }{d \pi} > 0$, and $\frac{\partial \pi}{\partial x_1}>0$. The third term is zero or negative because $\frac{d^2 c}{d x_1^2} \geq 0$. Thus, the marginal return to toughness is strictly decreasing.
The derivative of \eqref{FOC-general} with respect to $x_2$, the other player's toughness, is
\begin{equation} \label{d2}
\begin{array}{l}
\frac{\partial^2 \mathit{Payoff}_1 }{\partial x_1 \partial x_2}= (1-p)\frac{d u }{d \pi} \frac{\partial^2 \pi}{\partial x_1 \partial x_2} - \frac{\partial p}{\partial x_2} \frac{d u }{d \pi}\frac{\partial \pi}{\partial x_1} - \frac{\partial^2 p}{\partial x_1 \partial x_2}u - \frac{\partial p}{\partial x_1} \frac{d u }{d \pi}\frac{\partial \pi}{\partial x_2} - 0.
\end{array}
\end{equation}
The first term of \eqref{d2} is weakly negative if $x_1 \leq x_2$ and weakly positive if $x_1 \geq x_2$, and the third term is weakly negative. The second term is strictly negative and the fourth term is strictly positive; if $x_1 < x_2$ their sum is negative, if $x_1 > x_2$ their sum is positive, and if $x_1=x_2$ their sum is zero. Using the implicit function theorem, we can conclude that if $x_1 < x_2$, player 1's reaction curve is negatively sloped; if $x_1 > x_2$, we cannot determine the sign of $\frac{dx_1}{dx_2}$ without narrowing the model. In the same way, it can be shown that player 2's reaction curve is negatively sloped when $x_2 < x_1$. Thus, even if player 1's reaction curve slopes upward where he is the tougher player (where $x_1>x_2$), by the time that $x_1$ reaches $x_1=x_2$ the reaction curve will have negative slope, and the reaction curves will never cross again. (See Figure 1 for illustration, noting that the apparent $x_1=x_2=0$ intersection is not actually on the reaction curves because the two first derivatives are both positive there.) So the equilibrium must be unique, and with $x_1=x_2$. The assumption that the pie-splitting function is symmetric then ensures that $\pi =.5$.
There are no mixed-strategy equilibria, because unless $x_1=x_2$, one player's marginal return to toughness will be greater than the other's, so they cannot both be zero, and existence of a mixed-strategy equilibrium requires that two pure strategies have the same payoffs given the other player's strategy. $\blacksquare$
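The slope claims in the proof can be checked against Example 1 (a sketch; the sample points are my own choices): player 1's reaction curve $x_1 = 2 \sqrt{3} \sqrt{x_2} - x_2$ slopes downward exactly where $x_1 < x_2$, so the curves cross only on the 45-degree line.

```python
import math

def reaction(x2):
    """Player 1's reaction curve from Example 1."""
    return 2 * math.sqrt(3) * math.sqrt(x2) - x2

def slope(x2, h=1e-6):
    """Numerical derivative dx1/dx2 of the reaction curve."""
    return (reaction(x2 + h) - reaction(x2 - h)) / (2 * h)

for x2 in (0.5, 1, 2, 2.9):     # here x1 > x2: slope is positive
    assert reaction(x2) > x2 and slope(x2) > 0
for x2 in (3.1, 5, 9, 11.9):    # here x1 < x2: slope is negative
    assert reaction(x2) < x2 and slope(x2) < 0

# The reaction curves cross only at the symmetric point x1 = x2 = 3.
assert abs(reaction(3) - 3) < 1e-9
```

Analytically the slope is $\sqrt{3}/\sqrt{x_2} - 1$, which changes sign at $x_2=3$, the same point at which the curve crosses the 45-degree line.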
\bigskip
Many of the assumptions behind Proposition 1 are stated with weak inequalities. This is because the basic intuition is about linear relations. We can add convexity to strengthen the result and ensure interior solutions, but convexity is not driving the result as it usually does in economics. Rather, the intuition is that if one player is tougher than the other, he gets a bigger share and so has more to lose from breakdown, which means he has less incentive to be tough. Even if his marginal benefit of toughness--- the rate of increase of his share--- were the same as the other player's (a linear relationship between $\pi$ and $x_i$), his marginal cost--- the increase in breakdown probability times his initial share--- would be bigger, and that is true even if his toughness's marginal effect on breakdown probability is the same as the other player's (again, a linear relationship, between $p$ and $x_i$). That is why we get a 50-50 equilibrium in Example 1 even though $p$ is linear, $u$ is linear (the payoff was simply $\pi$), and $c=0$. Proposition 1 tells us that if we add the natural convexity assumptions about $p$, $u$, and $c$, the 50-50 split continues to be the unique equilibrium, {\it a fortiori}. Very likely we could even dispense with differentiability and continuity to some extent.\footnote{Note that even the weak convexity assumptions could be weakened (and the infinitesimal fixed cost assumption dropped) if we are willing to impose a maximum toughness level at which breakdown has probability less than 1. Suppose $x_i \in [0, \overline{x}]$ with $p (\overline{x}, \overline{x}) <1$, and we do not require $\frac{\partial^2 p}{\partial x_i^2} \geq 0$ but still require $\frac{\partial^2 p}{\partial x_i \partial x_j} \geq 0$.
In that case, the marginal cost of toughness might fall in one's own toughness, and the equilibrium might either be interior with equal toughness or an upper corner solution at $x_1=x_2=\overline{x}$, a 50-50 split in either case. (We still need $\frac{\partial^2 p}{\partial x_i \partial x_j} \geq 0$ because otherwise one player's marginal cost of toughness could fall as the other player's toughness rises, in which case the equilibrium might be asymmetric.)}
\bigskip
\bigskip
\noindent
{\sc 3. Three or More Bargainers }
Nash's axiomatic theory of bilateral bargaining extends unchanged to $n$ players. Symmetry and efficiency require $s_i =s_j$ and $\sum_{i=1}^n s_i =1$, so $s_i= 1/n$. Finding a model that attains this equilibrium less directly has proven elusive. Many attempts have been made to extend the Rubinstein model to $n$ players, but none has attained the success of the 2-player version. (See section 3.13 of Osborne \& Rubinstein (1990) for the most natural way to extend the model.)
Kultti \& Vartiainen (2010) say ``Vastness of equilibria is a well known problem of multiplayer bargaining games,'' though with restrictions on the stationarity of equilibrium, uniqueness can be obtained in a dynamic game such as Rubinstein's. Shaked (in unpublished work),
Binmore (1985), and Herrero (1985) take this approach, and it is nicely presented in Sutton (1986). Later articles in the literature include Chae \& Yang
(1988, 1994), Krishna \& Serrano (1996), Chatterjee (2000), Huang (2002),
Suh \& Wen (2006), and Kultti \& Vartiainen (2010). Baron \& Ferejohn (1989) also look at multi-player bargaining, but in the legislative context, where the multiple players are legislators who vote on proposals and agenda-setting--- the issue of who proposes and how--- is central, adding structure special to government. Their contribution to two-player bargaining is the idea of a model in which each player has an equal probability of making the offer in each period, which makes the ex ante payoffs not just close to equal, as in Rubinstein (1982), but exactly equal.
The breakdown model can be adapted to $N$ bargainers. We can use Example 1's specification of the breakdown and sharing functions, but with $p(x_1, x_2, \ldots, x_N) = \frac{\sum_{i=1}^N x_i}{12}$ and $\pi_i(x_1, x_2, \ldots, x_N)=\frac{x_i }{\sum_{j=1}^N x_j} $. This specification is restrictive, but it avoids the need to add extra assumptions to Proposition 1 limiting the cross-partials between the effects of $x_i$ and $x_j$ on $\pi$ and $p$ so that indirect effects do not exceed direct effects, assumptions I doubt anyone would find useful or interesting, even as a mathematical topic. So let
Player $i$'s payoff function be
$$%\begin{equation} \label{e0}
\mathit{Payoff}_i= (1- \frac{\sum_{j=1}^N x_j}{12}) \frac{x_i }{\sum_{j=1}^N x_j}
$$%\end{equation}
with first order condition
$$% \begin{equation} \label{e0}
\frac{1 }{\sum_{j=1}^N x_j} - \frac{x_i }{\left(\sum_{j=1}^N x_j\right)^2} - \frac{1}{12} = 0
$$%\end{equation}
All $N$ players have this same first order condition, so $x_i=x$ and
$$% \begin{equation} \label{e0}
\frac{1 }{Nx} - \frac{x }{(Nx)^2} - \frac{1}{12} = 0,
$$%\end{equation}
yielding
$$% \begin{equation} \label{e0}
x = \frac{12(N-1)}{N^2}.
$$% \end{equation}
The equilibrium probability of breakdown is
$$% \begin{equation} \label{e0}
p(x, \ldots,x) = \frac{ N \frac{12(N-1)}{N^2}}{12} = \frac{(N-1)}{N }
$$%\end{equation}
As $N$ increases, the probability of breakdown approaches one but does not reach it:
if $N=2$, then $x=12\cdot 1/4 = 3$ and the probability of breakdown is 50\%; if $N= 3$, then $x = 12\cdot 2/9 \approx 2.67 $ and the probability of breakdown rises to about 67\%; if $N = 10$, then $x = 12\cdot 9/100 = 1.08$ and the probability rises to 90\%. Increasing toughness has a negative externality that grows with the number of players: each player's equilibrium share falls as $N$ rises, so by being tougher he is mostly risking the destruction of the other players' payoffs.
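These closed forms are easy to check numerically. The sketch below (plain Python, written for this note rather than taken from the paper) confirms that $x = 12(N-1)/N^2$ is a best response against the other players' equilibrium toughness, and that breakdown occurs with probability $(N-1)/N$:

```python
# Check of the N-player equilibrium: breakdown probability is sum(x_i)/12
# and player i's share of the pie is x_i / sum(x_i).

def x_star(N):
    # closed-form equilibrium toughness from the first-order condition
    return 12 * (N - 1) / N**2

def payoff(xi, N):
    # player i's expected payoff when the other N-1 players play x_star(N)
    S = xi + (N - 1) * x_star(N)
    return (1 - S / 12) * xi / S

for N in (2, 3, 10):
    x = x_star(N)
    # equilibrium breakdown probability equals (N-1)/N
    assert abs(N * x / 12 - (N - 1) / N) < 1e-12
    # nearby deviations in toughness cannot pay more
    for dx in (-0.1, -0.01, 0.01, 0.1):
        assert payoff(x + dx, N) <= payoff(x, N)
```

For $N=2$ this reproduces Example 1's toughness of 3 and breakdown probability of one half.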
\bigskip
\noindent
{\sc 4. Unequal Bargaining Power}
In ordinary conversation, ``bargaining power'' is used loosely to refer to whether someone can get a good outcome in bargaining, so, for example, someone trying to sell a product known to be worthless has little bargaining power. Economists now use the term to refer to the percentage of bargaining surplus a player can obtain. Thus, we would say that someone with a product that costs \$50 and is worth \$150 to the buyer has bargaining power of 80\% if he can get a price of \$130 out of the negotiation process. If John Doe was obviously negligent when he hit Richard Roe with his car and caused him \$100,000 in damage, and it would cost each of them \$10,000 in lawyer fees to go to court, common parlance would be that John Doe has little bargaining power, but if the outcome is a settlement payment of \$90,000 instead of \$100,000, the economist would say John Doe has overwhelming bargaining power--- even though he is in a bad ``bargaining position''.
So far, the players in the present paper's bargaining model have had equal bargaining power. Often in applications we want to give one player more bargaining power than the other. This can be done by specifying that one player make a take-it-or-leave-it offer, or that one player will get $\theta$ percent of the surplus without specifying a model, or by using Rubinstein (1982) with two players who have different discount rates. The present model can be set up to introduce asymmetry via different discount rates also, or via differing degrees of risk aversion, as we will see later. Here, however, I will introduce a functional form that allows us to assign the players ``bargaining power'' parameters of $\theta$ and $(1-\theta)$, which will give them equilibrium shares of exactly those sizes based on how skilled they are at being tough without inducing breakdown.
\noindent
{\bf Example 3: Unequal Bargaining Power}.
Let the probability of breakdown be $p(x_1, x_2) =\min\{ e^{(1-\theta) \beta x_1 + \theta \beta x_2} -1, 1\}$, where $\theta \in [0,1]$ is player 1's bargaining power and $\beta>0$ is a parameter for breakdown risk. Let player 1's share of the pie be $\pi(x_1, x_2)=.5 + (x_1-x_2)$. The payoff functions are then
\begin{equation} \label{payoff1}
\mathit{Payoff}_1 = p (0) + (1-p)\pi = (2- e^{(1-\theta)\beta x_1 + \theta \beta x_2})[.5 + (x_1-x_2)]
\end{equation}
and
$$%\begin{equation} \label{e0}
\mathit{Payoff}_2 = p(0) + (1-p)(1-\pi) = (2-e^{(1-\theta)\beta x_1 + \theta \beta x_2}) [1 - (.5 + (x_1-x_2))]
$$% end{equation}
Maximizing equation (\ref{payoff1}) with respect to $x_1$, Player
1's first order condition is
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1 } {\partial x_1}= (2- e^{(1-\theta)\beta x_1 + \theta \beta x_2}) - (1-\theta)\beta e^{(1-\theta)\beta x_1 + \theta \beta x_2} [.5 + (x_1-x_2)]=0
$$% \end{equation}
and for player 2,
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_2} {\partial x_2}= (2- e^{(1-\theta)\beta x_1 + \theta \beta x_2}) - \theta \beta e^{(1-\theta)\beta x_1 + \theta \beta x_2} [.5 - (x_1-x_2)]=0
$$% \end{equation}
Define $Z \equiv e^{(1-\theta) \beta x_1 + \theta \beta x_2}$. Then we can equate these derivatives to get
$$% \begin{equation} \label{e0}
( 2-Z) -(1-\theta)\beta Z[.5 + (x_1-x_2)] = ( 2-Z) - \theta\beta Z[.5 - (x_1-x_2)]
$$% \end{equation}
It follows that\footnote{Then
$ (1-\theta) \beta Z[.5 + (x_1-x_2)] = \theta \beta Z[.5 - (x_1-x_2)]$, which implies
$ (1-\theta) [.5 + (x_1-x_2)] = \theta [.5 - (x_1-x_2)]$. Then
$.5 + x_1- x_2 -.5 \theta - \theta x_1 + \theta x_2 = .5\theta - \theta x_1+\theta x_2 $, so
$.5 + x_1- x_2 = \theta $ and
$ x_1 = x_2 + \theta -.5 $. It then follows that
$\pi= .5 + ( x_2 + \theta -.5 ) - x_2 = \theta . $}
$$% \begin{equation} \label{e0}
x_1 = x_2 + \theta -.5
$$% \end{equation}
and
$$% \begin{equation} \label{e0}
\pi = \theta.
$$%\end{equation}
In this specification the marginal benefit of toughness--- the increase in a player's share of the pie--- is equal for both players and independent of how tough they are, allowing considerable simplification. At the same time, use of the exponential function for the probability of breakdown, $p$, means that the marginal cost of toughness--- the increase in the probability of breakdown--- is proportional to $(1-\theta)$ for player 1 and to $\theta$ for player 2: the player with more bargaining power raises the breakdown probability less when he toughens his stance. That is why the equilibrium shares work out so neatly.
It remains to find $x_1$ and $x_2$. These are
$$% \begin{equation} \label{e0}
x_1 = \left( \frac{1}{\beta} \right) \log \left(
\frac{ 2 e^{ .5 \beta (2 \theta^2 - 3 \theta + 1) }
}
{ 1 + \beta \theta - \beta \theta^2 } \right) + \theta - .5
$$% \end{equation}
and
$$% \begin{equation} \label{e0}
x_2 = \left( \frac{1}{\beta} \right) \log \left(
\frac{ 2 e^{ .5 \beta (2 \theta^2 - 3 \theta + 1) }
}
{ 1 + \beta \theta - \beta \theta^2 } \right)
$$% \end{equation}
Let $\beta =1$. Then if $\theta=.5$, $x_1=x_2\approx .47$, $p\approx .60$, and player 1's expected payoff is about
$.20$. If player 1's bargaining power rises to $\theta = .8$, then $x_1\approx .78, x_2\approx .48$, $p\approx .72 $, and player 1's expected payoff is about
$.22$. (Note that although player 1's bargaining power has led him to increase his toughness considerably, it has also led player 2 to be tougher, so the probability of breakdown rises enough to almost cancel out player 1's gain from getting a bigger share of the pie.) If we let $\beta =3$, then if $\theta=.5$, $x_1=x_2\approx .05$, $p\approx .14$, and player 1's expected payoff is about
$.43$. As one would expect, more convex costs (a more convex breakdown probability) lead to less toughness in negotiating and higher expected payoffs for both players.
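The numbers just quoted follow from the closed forms above; a short numerical sketch (plain Python, written for this note) recomputes them and checks player 1's first-order condition:

```python
import math

def example3(theta, beta):
    # At the equilibrium, Z = exp((1-theta)*beta*x1 + theta*beta*x2)
    # satisfies 2 = Z * (1 + beta*theta*(1-theta)).
    Z = 2 / (1 + beta * theta * (1 - theta))
    x2 = math.log(Z) / beta + 0.5 * (2 * theta**2 - 3 * theta + 1)
    x1 = x2 + theta - 0.5
    p = Z - 1                      # breakdown probability
    payoff1 = (1 - p) * theta      # player 1's share is pi = theta
    return x1, x2, p, payoff1

def foc1(x1, x2, theta, beta):
    # player 1's first-order condition from the text
    Z = math.exp((1 - theta) * beta * x1 + theta * beta * x2)
    return (2 - Z) - (1 - theta) * beta * Z * (0.5 + (x1 - x2))

x1, x2, p, v1 = example3(theta=0.8, beta=1.0)
assert abs(foc1(x1, x2, 0.8, 1.0)) < 1e-9
assert abs(x1 - 0.78) < 0.01 and abs(x2 - 0.48) < 0.01
assert abs(p - 0.72) < 0.01 and abs(v1 - 0.22) < 0.01
```

Running the same check with $\theta=.5$ and $\beta=1$ or $\beta=3$ reproduces the other figures quoted above.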
This specification is one in which one of the players can increase his toughness and get a bigger share but not increase the breakdown probability as much as the other player. He is somehow better at being tough without wrecking the deal. He is better at both parts of ``The Art of the Deal'': increasing his own share, and increasing the expected pie size.
\bigskip
\noindent
{\sc 5. Risk Aversion}
Proposition 1 allowed for risk aversion, but it required the players to have identical utility functions. The model can be applied even when the players have different utility functions. Proposition 2 confirms what one would expect: a player who is more risk averse will end up with a smaller share.
\noindent
{\bf Proposition 2:} {\it If player 1 is more risk averse than player 2, his share is smaller in equilibrium. }
\noindent
{\bf Proof.}
$$%\begin{equation} \label{e0}
\mathit{Payoff}_1 = p u(0; \alpha_1) + (1 - p) u (\pi; \alpha_1)
$$%\end{equation}
which has the first-order condition
$$%\begin{equation} \label{e0}
\frac{\partial p}{\partial x_1} u(0; \alpha_1) - \frac{\partial p}{\partial x_1} u(\pi; \alpha_1) + (1-p) u'(\pi; \alpha_1) \frac{\partial \pi}{\partial x_1} =0
$$%\end{equation}
We are free to rescale each player's utility function, so let's
normalize so that $u(0; \alpha_1) \equiv u(0; \alpha_2) \equiv 0$ and $u'(0; \alpha_1) \equiv u'(0; \alpha_2)$. Then,
$$%\begin{equation} \label{e0}
\frac{\partial p}{\partial x_1} u(\pi; \alpha_1) = (1-p) u'(\pi; \alpha_1) \frac{\partial \pi}{\partial x_1},
$$%\end{equation}
so
$$%\begin{equation} \label{e0}
\frac{ \frac{\partial p}{\partial x_1}} {1-p} = \frac{ u'(\pi; \alpha_1) \frac{\partial \pi}{\partial x_1}}{ u(\pi; \alpha_1) }.
$$%\end{equation}
Similarly, for player 2's choice of $x_2$,
$$% \begin{equation} \label{e0}
\frac{ \frac{\partial p}{\partial x_2}} {1-p} = \frac{ u'(1-\pi; \alpha_2) \frac{\partial \pi}{\partial x_2}}{ u(1-\pi; \alpha_2) }.
$$%\end{equation}
If player 1 is less risk averse, then player 2's utility function is a concave increasing transformation of player 1's (Crawford [1991]).
This means that for a given $y$, player 1's marginal utility $u'(y; \alpha_1)$ is bigger than player 2's, and that player 2's average utility $u(y)/y$ lies further above his marginal utility, because $u''<0$ and $u'(0)$ is the same for both. In that case, however, $\frac{u'(y)}{u(y)/y}$ is bigger for player 1, so $\frac{u'(y)}{u(y) }$ is also bigger for player 1. If $y = \pi = 1-\pi = .5$, the two first-order conditions would then require $\frac{\partial p}{\partial x_1}>\frac{\partial p}{\partial x_2}$ (unless both equalled zero) and $\frac{\partial \pi}{\partial x_1}< \frac{\partial \pi}{\partial x_2}$, which would require $x_1 \neq x_2$, contradicting $\pi=.5$. The only way both conditions can hold is if $x_1>x_2$, so that $\frac{\partial p}{\partial x_1} \geq \frac{\partial p}{\partial x_2}$ and $\frac{\partial \pi}{\partial x_1} < \frac{\partial \pi}{\partial x_2}$. $\blacksquare$
Example 4 illustrates Proposition 2.
\noindent
{\bf Example 4: Risk Aversion. }
As in Example 1, let the breakdown function be $p(x_1, x_2) = \frac{x_1+x_2}{12}$ and the sharing function be $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$. Now let the players have the constant absolute risk aversion (CARA) utility functions $u ( y_i;\alpha_i) =- e^{-\alpha_i y_i}$. This means that the player whose value of $\alpha$ is bigger is the more risk averse at every share value $y$.
$$% \begin{equation} \label{e0}
\mathit{Payoff}_1 = p u(0) + (1 - p) u (\pi) = \frac{x_1+x_2}{12} (-1) + (1- \frac{x_1+x_2}{12}) u_1(\frac{x_1 }{x_1+x_2}) ,
$$% \end{equation}
which has the first-order condition
$$% \begin{equation} \label{e0}
-\frac{1}{12} - \frac{1}{12} u +
(1- \frac{x_1+x_2}{12}) u' \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] =0.
$$% \end{equation}
With CARA utility, if $\alpha_1 \neq 0$ then $u' = - \alpha_1 u$, so
$$% \begin{equation} \label{e0}
-\frac{1}{12} - \frac{1}{12} u_1-
(1- \frac{x_1+x_2}{12}) \alpha_1 u_1 \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] =0
$$% \end{equation}
and
$$% \begin{equation} \label{e0}
-\frac{1}{12} + e^{-\alpha_1 \frac{x_1 }{x_1+x_2}} \left( \frac{1}{12}+
(1- \frac{x_1+x_2}{12}) \alpha_1 \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] \right) =0.
$$% \end{equation}
We cannot solve this equation analytically for $x_1$ and $x_2$, but Mathematica's FindRoot function yields the numerical solutions shown in Table 1.
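Any root-finder will do here. A minimal sketch in Python (bisection on each player's first-order condition plus best-response iteration, written for this note rather than the paper's Mathematica code) reproduces representative entries of Table 1:

```python
import math

def foc(x_own, x_other, alpha):
    # first-order condition above, with u(y) = -exp(-alpha * y)
    S = x_own + x_other
    share_term = 1 / S - x_own / S**2
    return (-1 / 12 + math.exp(-alpha * x_own / S)
            * (1 / 12 + (1 - S / 12) * alpha * share_term))

def best_response(x_other, alpha):
    # the condition is positive as x_own -> 0 and negative as S -> 12,
    # so bisection brackets the root
    lo, hi = 1e-9, 12 - x_other - 1e-9
    for _ in range(100):
        mid = (lo + hi) / 2
        if foc(mid, x_other, alpha) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def equilibrium(alpha1, alpha2):
    x1 = x2 = 3.0
    for _ in range(100):     # best-response iteration converges quickly here
        x1 = best_response(x2, alpha1)
        x2 = best_response(x1, alpha2)
    return x1, x2

x1, x2 = equilibrium(1.0, 1.0)
assert abs(x1 - 2.61) < 0.01 and abs(x2 - 2.61) < 0.01   # Table 1 diagonal
x1, x2 = equilibrium(5.0, 0.01)
assert abs(x1 - 1.64) < 0.05 and abs(x2 - 2.79) < 0.05   # most asymmetric cell
```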
\hspace*{-24pt} \begin{minipage}[c]{ \linewidth}
\begin{center}
\begin{footnotesize}
{\sc Table 1: \\
Toughness, ($ x_1/x_2$) and Player 1's Share \boldmath$\pi$ As Risk Aversion ($\alpha_1, \alpha_2)$ Changes (Rounded) }
\vspace*{ 12pt}
\begin{tabular}{ l r| ccc cc }
%\hline
\multicolumn{2}{ c } { \;} & \multicolumn{5}{c } {\boldmath$\alpha_2$}\\
% \multicolumn{2}{| c } { \;} & \multicolumn{5}{c |} { }\\
\multicolumn{2}{ c } { \;} &.01 &.50 &1.00 &2.00&5.00 \\
\cline{3-7 }
\multicolumn{2}{ c| } { \;} & \multicolumn{5}{c } { }\\
&.01 & 3.00/3.00 & & && \\
& & {\bf 50} & & \multicolumn{3}{c } { }\\
& & \multicolumn{5}{c } { }\\
&.50 & 2.82/2.99 &2.81/2.81 & & & \\
& &{\bf 49} & {\bf 50}& \multicolumn{3}{c } { }\\
& & \multicolumn{5}{c } { }\\
\boldmath$\alpha_1$ &1.00& 2.64/2.98 &2.63/2.79 & 2.61/2.61 & & \\
\multicolumn{2}{ c| } { \;} & {\bf 47} & {\bf 49 } & {\bf 50} & & \\
\multicolumn{2}{ c| } { \;} & & & & & \\
&2.00 & 2.33/2.95 & 2.31/2.75 & 2.28/2.56 & 2.21/2.21 & \\
\multicolumn{2}{ c| } { \;} & {\bf 44 } & {\bf 45 } & {\bf 47 } &{\bf 50 } & \\
\multicolumn{2}{ c| } { \;} & & & & & \\
&5.00 & 1.64/2.79 &1.60/2.57 & 1.55/2.35 & 1.43/1.95 &1.10/1.10 \\
\multicolumn{2}{ c| } { \;} & {\bf 37 } &{\bf 38 } & {\bf 40 } &{\bf 42 } & {\bf 50 } \\
% \hline
\end{tabular}
\end{footnotesize}
\end{center}
\end{minipage}
\vspace*{ 12pt}
This makes sense. The more risk averse a player is relative to his rival, the lower his share of the pie. He doesn't want to be tough and risk breakdown, and both his direct choice to be less tough and the reaction of the other player to choose to be tougher in response reduce his share.
This is a different effect of risk aversion than has appeared in the earlier literature. In a cooperative game theory model such as Nash (1950), risk aversion seems to play a role, but there is no risk in those games. Nash's Efficiency axiom means that there is no breakdown and no delay. Since we conventionally model risk aversion as concave utility, risk seems to enter when it is really just the shape of the utility function that does the work; the more ``risk averse'' player is the one with sharper diminishing returns as his share of the pie increases. Alvin Roth discusses this in his 1977 and 1985 {\it Econometrica} papers, distinguishing between this ``strategic'' risk and the ``ordinary'' or ``probabilistic'' risk that arises from uncertainty. Osborne (1985) does look at risk aversion in a model with uncertainty, but the uncertainty is the result of the equilibrium being in mixed strategies. One might also look at risk aversion this way in the mixed-strategy equilibria of Splitting a Pie examined in Malueg (2010) and Connell \& Rasmusen (2018). In the breakdown model, however, the uncertainty comes from the probability of breakdown, not from randomized strategies.
\bigskip
\noindent
{\sc 6. Breakdown Causing Delay, Not Permanent Breakdown--- A Model in the Style of Rubinstein (1982)}
In Rubinstein (1982),
breakdown--- meaning, the rejection of an offer--- causes delay, not permanent loss of the bargaining surplus. The players have positive discount rates, though, so each period of delay does cause some loss. Crucially, that loss is proportional to the player's eventual share of the pie, so in the end the Rubinstein model might be said to have the same driver as the present model. As in the static model, the probability of breakdown is zero or one rather than rising continuously with bargaining toughness. The dynamics are driven by the asymmetry between offeror and receiver, the offeror having a slight advantage because of the delay cost to both players from his offer being rejected.
Even temporary breakdown never occurs in equilibrium in the Rubinstein model, because the game has no uncertainty and no asymmetric information. The players move sequentially, taking turns making the offer. The present model adapts very naturally to the setting of infinite periods. Breakdown simply means that the game is repeated in the next period, with new choices of toughness. Of course, the players must now have positive discount rates, or no equilibrium will exist, because being tougher in a given period and causing breakdown would have no cost.
One interpretation of discounting is that breakdown occurs exogenously with some probability each period, an interpretation of the Rubinstein model that makes it look something like the present paper's model: the players are apprehensive that if they are tougher and delay agreement, they risk losing the entire surplus. The Rubinstein model, however, relies crucially on multiple rounds, because it is the looking forward to future rounds that determines what offer a player makes currently and what offers the other player would accept. Also, the probability of breakdown is constant and exogenous, rather than directly depending on the toughness of the bargainers. It depends on the bargainers only in creating a threat that if the offeror is tough past a known point, the receiver will reject his offer and exogenous permanent breakdown may occur.
Let's look at the effect of repetition and discounting in the present model. Let the two players simultaneously choose their toughness levels, but if breakdown occurs, the game is simply repeated, as many times as necessary until agreement is reached.
Apply Proposition 1's assumptions for the breakdown $p$ and sharing $\pi$ functions, but omit the cost $c$ and the general utility function $u$, instead having the players be risk neutral with no direct cost of toughness, and with discount rates $r_1$ and $r_2$, both strictly positive.
Now that the game has multiple periods, we will also require the equilibrium to be subgame perfect. We will also require it to be Markov: a stationary equilibrium in which a player's strategy does not depend on previous play of the game. There exist other equilibria, but let us postpone discussing the justification for excluding them.
Let's denote the equilibrium expected payoff of player 1 by $V_1$, which will equal
\begin{equation} \label{V1}
\mathit{Payoff}_1= V_1 =p \frac{ V_1}{1+r_1} + (1-p) \pi
\end{equation}
Player 1's choice of $x_1$ this period will not affect $V_1$ next period (because we are looking for a stationary subgame perfect equilibrium), so the first-order condition is
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1}{\partial x_1}= \frac{\partial p}{\partial x_1} \frac{ V_1}{1+r_1} + (1-p) \frac{\partial \pi}{\partial x_1} -\frac{\partial p}{\partial x_1} \pi =0
$$% \end{equation}
We can rewrite the payoff equation \eqref{V1} as $V_1 (1 - \frac{p}{1+r_1}) = (1-p)\pi$ and $V_1 \frac{1 + r_1 - p}{1+r_1} = (1-p)\pi$ and
\begin{equation} \label{V1sub}
V_1 = \frac{ (1+r_1) (1-p) \pi}{(1+r_1-p) }
\end{equation}
Substituting in the first-order condition using \eqref{V1sub} gives
\begin{equation} \label{focv1}
\frac{\partial \mathit{Payoff}_1}{\partial x_1} = \frac{\partial p}{\partial x_1} \frac{\frac{ (1+r_1) (1-p) \pi}{(1+r_1-p) }}{1+r_1} + (1-p) \frac{\partial \pi}{\partial x_1} -\frac{\partial p}{\partial x_1} \pi =0
\end{equation}
so
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1}{\partial x_1} = \frac{\partial p}{\partial x_1} \frac{ (1-p) \pi} {(1+r_1-p) } + (1-p) \frac{\partial \pi}{\partial x_1} -\frac{\partial p}{\partial x_1} \pi =0
$$% \end{equation}
which simplifies to
\begin{equation} \label{foc-rubinstein}
\frac{\partial \mathit{Payoff}_1}{\partial x_1} = (1-p) \frac{\partial \pi}{\partial x_1} -\frac{\partial p}{\partial x_1} \frac{ r_1 \pi} {(1+r_1-p) } =0
\end{equation}
Note that the marginal benefit of toughness, the first term, is the same as in the one-period game--- a larger share of the pie if bargaining does not break down--- but the marginal cost, the second term, is now an increased probability of a delayed payoff, which is increasing in the discount rate $r_1$.
\noindent
{\bf Proposition 3. } {\it In the unique stationary equilibrium of the multiperiod bargaining game, a player's toughness and equilibrium share fall in his discount rate. }
\noindent
{\bf Proof. } Differentiating the payoff again, the second-order condition is
\begin{equation} \label{e0}
\begin{array}{lll}
\frac{\partial^2 \mathit{Payoff}_1}{\partial x_1^2} &= & (1-p) \frac{\partial^2 \pi}{\partial x_1^2} -
\frac{\partial p}{\partial x_1}\frac{\partial \pi}{\partial x_1}
-\frac{\partial^2 p}{\partial x_1^2} \frac{ r_1 \pi} {(1+r_1-p) } \\
&& -\frac{\partial p}{\partial x_1} \frac{ r_1 } {(1+r_1-p) } \frac{\partial \pi}{\partial x_1} - (\frac{\partial p}{\partial x_1})^2 \frac{ r_1 \pi} {(1+r_1-p)^2 } <0
\end{array}
\end{equation}
This expression is negative because we have assumed that $\frac{\partial^2 \pi}{\partial x_1^2}<0$, $\frac{\partial^2 p}{\partial x_1^2}\geq 0$, $\frac{\partial p}{\partial x_1}>0$, and $\frac{\partial \pi}{\partial x_1}>0$.
Note that
\begin{equation} \label{crossrx}
\frac{\partial^2 \mathit{Payoff}_1}{\partial x_1 \partial r_1} = \frac{\partial p}{\partial x_1} \frac{ - (1-p) \pi} {(1+r_1-p)^2 },
\end{equation}
which is negative. As a result, since the second-order condition for choice of $x_1$ is also negative, the implicit function theorem tells us that the optimal choice of $x_1$ falls with $r_1$, for a given level of $x_2$. Player 2 chooses $x_2$ by maximizing his own payoff function,
$$% \begin{equation} \label{e0}
\mathit{Payoff}_2 = V_2 =p \frac{ V_2}{1+r_2} + (1-p) (1-\pi ).
$$% \end{equation}
Player 2's first order condition can be derived in the same way as player 1's:
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_2}{\partial x_2} = - (1-p) \frac{\partial \pi}{\partial x_2} -
\frac{\partial p}{\partial x_2} \frac{ r_2(1-\pi)}{(1+r_2-p) } =0
$$% \end{equation}
Since player 2's first-order condition does not contain $r_1$, his reaction function does not shift when $r_1$ changes infinitesimally. Thus, the effect of an increase in $r_1$ is to reduce $x_1$, and hence player 1's equilibrium share. The argument for why an increase in $r_2$ reduces $x_2$ is parallel.
$ \blacksquare$
We will explore this more in Example 5.
\noindent
{\bf Example 5: Multiple Rounds of Bargaining }
This example adds unlimited rounds of bargaining to Example 1, which tells us that $\frac{\partial p}{\partial x_1} = 1/12$ and $\frac{\partial \pi}{\partial x_1} = \frac{1}{x_1+x_2} - \frac{x_1}{(x_1+x_2)^2}$. Solving \eqref{foc-rubinstein} yields
\begin{equation} \label{x1discounting}
x_1 = \frac{ -12 r_1 x_2 + x_2^2 - 12 x_2 + 12 \sqrt{ r_1 x_2 (12 r_1 - x_2 + 12 ) }}{12 r_1 - x_2}
\end{equation}
Player 1's toughness is a function of his own discount rate and of player 2's toughness but depends only indirectly on player 2's discount rate. Let's look at the limiting cases of $r_1=0$ and $r_1=\infty$.
\begin{equation} \label{limit0}
\begin{array}{lll}
\lim\limits_{r_1 \to 0} x_1 & = & \frac{ -12x_2 (0) +x_2^2 -12x_2 + 12 \sqrt{0}}{0 - x_2} \\
&&\\
& = & 12-x_2 \\
\end{array}
\end{equation}
and
\begin{equation} \label{limitinfty}
\begin{array}{lll}
\lim\limits_{r_1 \to\infty} x_1 & = & \lim\limits_{r_1 \to\infty} \left[ \frac{ -12 r_1 x_2}{ 12 r_1 - x_2} + \frac{ 12 \sqrt{ r_1 x_2 (12 r_1 - x_2 + 12) } } {12 r_1 - x_2}+ \frac{ x_2^2 - 12 x_2 }{12 r_1 - x_2} \right] \\
& & \\
& = & -x_2 + \sqrt{ 12x_2 } \\
\end{array}
\end{equation}
If the two players have the same discount rates, their first order conditions are parallel and $x_1=x_2 $ in equilibrium. As the discount rate approaches zero, equation (\ref{limit0}) tells us that toughness approaches 6 for each player, not 3 as in the one-shot Example 1, and the probability of breakdown in any given period approaches 1. That is because breakdown is relatively harmless, so players find it worthwhile to be extremely tough in order to increase their share of the pie.\footnote{The extreme case of $r_1=r_2=0$ would yield $x_1=x_2=6$ and $p=1$ if the players followed the strategy of equation (\ref{limit0}). That is paradoxical because each player would have a payoff of zero, and either of them could get a positive payoff by deviating to be less tough. No Nash equilibrium would exist, even in mixed strategies. } As the discount rate approaches infinity, on the other hand, each player's toughness approaches $x = -x + \sqrt{ 12x }$, so $2x = \sqrt{ 12x }$, $4x^2= 12x$, and $x_1=x_2=3$. This is Example 1's result, which, indeed, is equivalent to the present game when the pie is worthless if the players have to wait to consume it till the second period.
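These limits, and the symmetric equilibrium toughness for intermediate discount rates, can be checked numerically from equation (\ref{foc-rubinstein}) with Example 1's breakdown and sharing functions. A minimal Python sketch (bisection, written for this note):

```python
def sym_foc(x, r):
    # symmetric version of the first-order condition: when x1 = x2 = x,
    # p = 2x/12, dp/dx1 = 1/12, pi = 1/2, and dpi/dx1 = 1/(4x)
    p = x / 6
    return (1 - p) / (4 * x) - (1 / 12) * r * 0.5 / (1 + r - p)

def sym_toughness(r):
    # the condition is positive as x -> 0 and negative as x -> 6,
    # so bisection brackets the root
    lo, hi = 1e-6, 6 - 1e-9
    for _ in range(100):
        mid = (lo + hi) / 2
        if sym_foc(mid, r) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

assert abs(sym_toughness(0.05) - 4.9) < 0.05   # Table 2 diagonal entries
assert abs(sym_toughness(0.5) - 3.8) < 0.05
assert abs(sym_toughness(2.0) - 3.3) < 0.05
assert sym_toughness(1e-9) > 5.99              # x -> 6 as r -> 0
assert abs(sym_toughness(1e9) - 3.0) < 0.01    # x -> 3 as r -> infinity
```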
The diagonal values with the boldfaced 50\% split in Table 2 show how the equilibrium toughnesses fall with the discount rate in the symmetric game.
\hspace*{-48pt} \begin{minipage}[c]{ \linewidth}
\begin{center}
{\sc Table 2: \\
Toughness ($x_1/x_2$) and Player 1's Share (\boldmath$\pi$) As Impatience ($r_1, r_2)$ Increases (rounded) }\\
\medskip
\begin{tabular}{ c r| ccc ccc }
& & \multicolumn{6}{c } {\boldmath$r_2$}\\
% & & \multicolumn{6}{|c } { }\\
& &.001 &.010 &.050 & .100&.500&2.000\\
\hline
\multicolumn{2}{ c| } { \;} & & & & & & \\
&.001 & 5.8/5.8 && & & &\\
\multicolumn{2}{ c| } { \;} & {\bf 50} & & & & & \\
\multicolumn{2}{ c| } { \;} & & & & & & \\
&.010 & 2.9/7.4 &5.5/5.5 & & & &\\
\multicolumn{2}{ c| } { \;} & {\bf 28} & {\bf 50 } & & & & \\
\multicolumn{2}{ c| } { \;} & & & & & & \\
{\;\;\;} \boldmath$r_1$ &.050 & 2.5/7.6 & 3.5/7.0 &4.9/4.9 & &&\\
\multicolumn{2}{ c| } { \;} & {\bf 25} & {\bf 33 } & {\bf 50} & & & \\
\multicolumn{2}{ c| } { \;} & & & & & & \\
&.100& 1.4/9.6 & 2.9/7.4 & 4.2/5.4 & 4.6/4.6 &&\\
\multicolumn{2}{ c| } { \;} & {\bf 13} & {\bf 25 } & {\bf 44} &{\bf 50 } & & \\
\multicolumn{2}{ c| } { \;} & & & & & & \\
&.500 & 1.1/9.8 & 2.1/7.8 & 3.0/6.0 & 3.3/5.2 & 3.8/3.8 &\\
\multicolumn{2}{ c| } { \;} & {\bf 10} & {\bf 21 } & {\bf 33} &{\bf 39 } &{\bf 50 } & \\
\multicolumn{2}{ c| } { \;} & & & & & & \\
&2.000 & 1.0/9.9 & 1.9/7.9 & 2.6/6.2 & 2.8/5.4 & 3.2/3.9 &3.3/3.3 \\
\multicolumn{2}{ c| } { \;} &{\bf \;\;9} & {\bf 19 } & {\bf 30} &{\bf 34} &{\bf 45 } & {\bf 50} \\
\end{tabular}
\end{center}
\end{minipage}
% \vspace{32pt}
Table 2 shows the equilibrium shares, but it does not show the expected payoffs, which depend not just on the shares but on the breakdown probability and the expected time delay before agreement.
Denote the expected payoff when $r_1=r_2=r$ by $V(r)$. The expected payoff equals, for interior solutions where $x_1+x_2 <12$, \footnote{$V = (1+r ) (6- x) / (12+12r -2x)= (1+r ) (6- [6 r - 6 \sqrt{r^2+r} + 6]) / (12+12r -2[6 r - 6 \sqrt{r^2+r} + 6]) = (1+r ) ( - r+ \sqrt{r^2+r} ) / (2+2r -2[ r - \sqrt{r^2+r} +1]) =(1+r ) ( - r + \sqrt{r^2+r} ) / (2 \sqrt{r^2+r}) =\frac{(1+r ) ( \sqrt{r^2+r}-r) }{ 2 \sqrt{r^2+r}}
= \frac{(1/r) (r^2+r)}{2\sqrt{r^2+r}} ( \sqrt{r^2+r}-r) =\frac{ \sqrt{r^2+r}}{2r}( \sqrt{r^2+r}-r) = \frac{r^2+r}{2r} - \frac{r\sqrt{r^2+r}}{2r} = \frac{r}{2}+ .5 - \frac{\sqrt{r^2+r}}{2} $. }
$$% \begin{equation} \label{e0}
\begin{array}{lll}
V(r)& =& (1+r ) (1-p)(.5) / (1+r -p) \\
&&\\
& =& (1+r ) (1- \frac{2x}{12})(.5) / (1+r -\frac{2x}{12})\\
&&\\
&=& \frac{r}{2}+ .5 - \frac{\sqrt{r^2+r}}{2} \\
\end{array}
$$% \end{equation}
The expected payoff falls with the discount rate,\footnote{$dV/dr = .5 - \frac{2r+1}{4\sqrt{r^2+r}} $, which has the same sign, multiplying by $4 \sqrt{r^2+r}$, as $ 2\sqrt{r^2+r} - (2r +1)$. Square the first, positive, term and we get $4r^2+4r$. Square the second, negative, term and we get the larger amount $4r^2+1 + 4r$. Thus, the derivative is negative. } with an upper bound of .5 and a lower bound of .25.\footnote{ As $r \rightarrow 0$, $V(r) \rightarrow .5$.
As $r \rightarrow \infty$, we know $x \rightarrow 3 $, so $V(r) \rightarrow\frac{ (1+r ) (1- \frac{6}{12}) (.5)}{ (1+r -\frac{6}{12}) }= \frac{.25(1+r)}{.5+r}$. As $r \rightarrow \infty$, this last expression approaches $\frac{.25 r}{r} = .25$.
} Recall from Example 1 that if the surplus falls to 0 after breakdown, the equilibrium probability of breakdown is .5. The expected payoff when players are more patient is higher because although agreement takes longer, the per-period cost of delay is lower by enough to outweigh the longer wait.
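The simplified form of $V(r)$ derived in the footnote can be cross-checked against the definition $V = (1+r)(1-p)(.5)/(1+r-p)$, using the symmetric equilibrium toughness $x = 6r - 6\sqrt{r^2+r} + 6$ from the same footnote. A quick Python sketch:

```python
import math

def V_closed(r):
    # simplified closed form of the expected payoff
    return r / 2 + 0.5 - math.sqrt(r**2 + r) / 2

def V_direct(r):
    # V = (1+r)(1-p)(.5)/(1+r-p) at the symmetric equilibrium toughness
    x = 6 * r - 6 * math.sqrt(r**2 + r) + 6
    p = 2 * x / 12
    return (1 + r) * (1 - p) * 0.5 / (1 + r - p)

for r in (0.01, 0.1, 0.5, 2.0, 10.0):
    assert abs(V_closed(r) - V_direct(r)) < 1e-9   # the two forms agree
    assert 0.25 < V_closed(r) < 0.5                # the bounds claimed above

# the expected payoff falls with the discount rate
assert V_closed(0.01) > V_closed(0.1) > V_closed(1.0) > V_closed(10.0)
```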
In the Rubinstein model, the split approaches 50-50 as the discount rate approaches zero. Here, the split is exactly 50-50 for any discount rate, and it is the per-period probability of breakdown that approaches one as the discount rate approaches zero.
The present game does not have Rubinstein's first-mover advantage, because both players choose toughness simultaneously. Also, agreement may well take more than one round of bargaining, unlike in the Rubinstein game's equilibrium.
\hspace*{-48pt} \begin{minipage}[c]{ \linewidth}
\begin{center}
{\sc Figure 2:\\
Reaction Curves for Toughness $x_1$ and $x_2$ \\
(a) $r_1=r_2=.05$ \hspace{48pt} (b) $r_1=.25$, $r_2=.05$ }
\label{reaction-curves.pdf}
\includegraphics[width=2in]{example-5-r05reaction-curves.pdf} \hspace{ 32pt} \includegraphics[width=2in]{reaction-curves.pdf}
\end{center}
\end{minipage}
Particular reaction functions show what is going on. We have already seen that $\frac{\partial x_i}{\partial r_i}<0$. The reaction curves are plotted in $(x_1,x_2)$ space in Figure 2. In the relevant range, near where they cross, they are downward sloping. Not only does this make the equilibrium unique, it also tells us that the indirect effect of an increase in $r_1$ goes in the same direction as the direct effect. If $r_1$ rises, that reduces $x_1$, which increases $x_2$, which has the indirect effect of reducing $x_1$ further, and so the indirect effects continue ad infinitum.
Recall that I said the game has multiple equilibria. This is not because of the Folk Theorem, because this is not a repeated game in the sense of having per-period payoffs. It is, however, a game that allows punishment strategies in a subgame-perfect equilibrium if it is infinitely repeated. The Markov equilibrium can function as a punishment strategy. If discount rates are low enough, other equilibria will exist in which players have payoffs as great or greater than in the Markov equilibrium--- and possibly asymmetric payoffs, so splits that are not 50-50. The Markov equilibrium has $x_1^*= x_2^*$, as we have seen. There will be other equilibria in which $x_1< x_2 < x_2^*$, so player 2 gets a bigger share of the pie. This can happen because another part of the equilibrium strategy is that if player 1 deviates and plays a bigger $x_1$, both players revert to the $x_1^*= x_2^*$ equilibrium--- which has greater expected delay and is worse for both of them--- for $T$ periods, where $T$ is big enough to deter deviation.
If the game has a finite number of periods, however, the multiplicity of equilibria disappears and we are left with a unique subgame perfect equilibrium that has a 50-50 split and resembles the Markov equilibrium. Suppose there are $T$ periods. The last period is identical to the one-shot game and will have the same choices of $x_1$ and $x_2$. The previous period will have somewhat higher $x_1$ and $x_2$, but the two-period game also has a unique equilibrium. The earlier the period, the higher the $x_1$ and $x_2$, but the limiting case is the Markov infinite-period equilibrium.
This is not the case in the Rubinstein game. There, even the infinite-period game has a unique equilibrium. The intuition is that the Rubinstein game reaches immediate agreement in equilibrium, so there is no opportunity for Pareto improvement by having equilibria with less delay--- there is no delay to begin with.
\bigskip
\noindent
{\sc 7. Outside Options}
We have been assuming that the ``threat point'', the result of breakdown, is a payoff of zero for each player. Shaked \& Sutton (1984) show that the idea of the threat point is more complicated than it first seems. Suppose, for example, that the two players are bargaining over a surplus equal to 1, but player 1 has an ``outside option'' which gives him a payoff of .3 if he chooses to take it instead of continuing to bargain. If we incorporate this outside option into the basic static model in which both players propose shares and breakdown occurs, no outcome in which player 1 receives less than .3 can be an equilibrium, but any share for him between .3 and 1 continues to be an equilibrium. If we try to choose an equilibrium by thinking of what equilibrium is a focal point, or corresponds to social custom, (.5, .5) remains attractive, but so does a split of the social gains from bargaining of .7, which would give a share of .3 + .5(.7) = .65 to player 1 and .35 to player 2. The alternating-offer game of Rubinstein (1982) puts more structure on the situation. Shaked \& Sutton (1984) show that if player 1 has the possibility of taking his outside option of .3 at any point in the game, it makes absolutely no difference. Assuming that his equilibrium share is something close to .5 (a little more if it is his turn to make the offer, a little less if he is the receiver and can only accept or reject), player 1 would never take his outside option, so it does not affect his behavior, or player 2's. Moreover, if player 1's outside option were greater than what would otherwise be his equilibrium share--- .8, say--- then his equilibrium share would rise just to the outside option of .8, no higher. Player 2 would offer .8 in the first period, and player 1 would accept, knowing that if he rejected, he could do no better with a counteroffer because player 2 would always retreat to a new offer of .8.
Shaked \& Sutton's result is counterintuitive because our natural thought is that an outside option of .3 would improve player 1's bargaining position and result in him getting a bigger equilibrium share. Their insight is that a small outside option is irrelevant, because player 1's threat to take it is not credible; player 2 can safely be tough because even his tough offer of .5 is still better than the outside option.
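Shaked \& Sutton's outside-option principle can be made concrete with a small numeric sketch (this is an illustration, not part of the paper's model): in the alternating-offers game with a common per-round discount factor $\delta$, the responder's ordinary share is $\delta/(1+\delta)$, and the outside option matters only when it exceeds that share.

```python
def responder_share(delta, z):
    """Outside-option principle (Shaked & Sutton) in the Rubinstein game:
    the responder gets his ordinary share delta/(1+delta), unless the
    outside option z exceeds it, in which case he gets exactly z."""
    rubinstein = delta / (1 + delta)
    return max(rubinstein, z)

# With patient players (delta near 1) the ordinary share is close to .5:
print(responder_share(0.99, 0.3))  # an option of .3 is irrelevant
print(responder_share(0.99, 0.8))  # an option of .8 binds: share rises just to .8
```

The share jumps to $z$ exactly when $z$ binds, rather than shifting the whole bargaining range, which is the counterintuitive feature discussed above.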
In the breakdown model, an outside option has a different impact, lying somewhere between irrelevance and a full improvement of the threat point. The reason is that a threat's lack of credibility is not a factor here, but a player's choice of toughness does depend on what happens to him in case of breakdown. We will see this in Example 6.
\noindent
{\bf Example 6: Player 1 Has an Outside Option of $z$}. As in Example 1, let the breakdown probability be
$p(x_1, x_2) = \frac{x_1+x_2}{12}$ and player 1's share be $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$. Player 1 has an outside option of $z$, a payoff he receives if bargaining breaks down. The payoff functions are
\begin{equation} \label{payoff1}
\mathit{Payoff}_1 = pz + (1-p)\pi = \frac{x_1+x_2}{12} z+ (1- \frac{x_1+x_2}{12})\frac{x_1 }{x_1+x_2} = \frac{z(x_1+x_2)}{12} + \frac{x_1 }{x_1+x_2}- \frac{x_1}{12}
\end{equation}
and
$$% \begin{equation} \label{e0}
\mathit{Payoff}_2 = p(0) + (1-p)(1-\pi) = (1- \frac{x_1+x_2}{12}) (1-\frac{x_1 }{x_1+x_2}) = \frac{x_2 }{x_1+x_2}- \frac{x_2}{12}
$$% \end{equation}
Maximizing equation (\ref{payoff1}) with respect to $x_1$,
Player 1's first order condition is
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1}{\partial x_1} = \frac{z}{12} + \frac{1 }{x_1+x_2}- \frac{x_1 }{(x_1+x_2)^2} - 1/12=0
$$% \end{equation}
so $x_1+x_2 - x_1 - \frac{(1-z)(x_1+x_2)^2}{12}=0 $ and $\frac{12}{1-z} x_2 - (x_1+x_2)^2=0$.
Player 1's reaction curve is
$$% \begin{equation} \label{e0}
x_1 = \sqrt{\frac{12}{1-z} } \sqrt{x_2} - x_2,
$$% \end{equation}
Player 2's reaction curve is
$$% \begin{equation} \label{e0}
x_2 = \sqrt{12 } \sqrt{x_1} - x_1,
$$% \end{equation}
Since both reaction curves imply an expression for $x_1 + x_2$, we have $ \sqrt{\frac{12}{1-z} } \sqrt{x_2}= x_1 + x_2 = \sqrt{12 } \sqrt{x_1} $, so $ \frac{x_2}{1-z} = x_1$ and player 1's share is $\frac{x_1}{x_1+x_2} = \frac{1}{2-z}$.
Why does the outside option not operate the same way as a shift in the threat point? If $z=.2$, player 1's share would be .6 if the social surplus above the threat point were split evenly, but as an outside option, $z$ yields him less: $\frac{1}{2-z}=5/9 \approx .56$. The outside option does help player 1 if it is bigger than .5: if $z=.8$, his share is $\frac{1}{2-z} \approx .83$--- but if the threat point were .8, player 1's share would be .9. Player 1's outside option improves his bargaining position, but not as much as if he started with .8 and bargaining occurred over the difference between .8 and 1.
The reason is that the outside option is not a base level, but a replacement, and the toughness necessary for player 1 just to obtain the equivalent of his outside option can itself induce breakdown. Let us return to the general model with risk neutrality ($u(a)=a $) and no direct cost of threats ($c=0$), but giving player 1 an outside option of $z$.
\noindent
{\bf Proposition 4. } {\it If player 1's outside option is $z > 0$, his equilibrium bargaining share will be strictly greater than .5 and no greater than $.5 + .5z$, attaining the upper bound only if $p$ and $\pi$ are both linear. }
\noindent
{\bf Proof. } The expected payoffs are:
$$% \begin{equation} \label{e0}
\mathit{Payoff}_1= p(x_1, x_2) z + (1-p(x_1, x_2)) \pi(x_1, x_2)
$$% \end{equation}
and
$$% \begin{equation} \label{e0}
\mathit{Payoff}_2 = p(x_1, x_2)(0) + (1-p(x_1, x_2)) (1-\pi(x_1, x_2))
$$% \end{equation}
The first-order conditions are
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_1}{\partial x_1}= \frac{\partial p }{\partial x_1}z - \frac{\partial p}{\partial x_1} \pi(x_1, x_2) + (1-p(x_1, x_2)) \frac{\partial \pi}{\partial x_1} =0
$$% \end{equation}
and
$$% \begin{equation} \label{e0}
\frac{\partial \mathit{Payoff}_2}{\partial x_2}= - \frac{\partial p}{\partial x_2} (1-\pi(x_1, x_2)) + (1-p(x_1, x_2)) \frac{\partial \pi}{\partial x_2} =0
$$% \end{equation}
and since in equilibrium both derivatives equal zero, the two expressions must be equal:
\begin{equation} \label{outside}
- \frac{\partial p }{\partial x_1}(\pi-z) + (1-p ) \frac{\partial \pi}{\partial x_1} = - \frac{\partial p}{\partial x_2} (1-\pi ) - (1-p ) \frac{\partial \pi}{\partial x_2}
\end{equation}
We can see that $x_1 = x_2$ cannot be an equilibrium, since then $\frac{\partial p }{\partial x_1} = \frac{\partial p }{\partial x_2}$ and $\frac{\partial \pi }{\partial x_1} = -\frac{\partial \pi }{\partial x_2}$, reducing the previous equation to $ \pi-z = 1-\pi$, which does not permit $\pi= 1/2$. Instead, we need $x_1 >x_2$ so that $\pi > 1/2$ and $\frac{\partial \pi }{\partial x_1} < -\frac{\partial \pi }{\partial x_2}$.
Rearranging the last equation, we have
\begin{equation} \label{outside2}
- \frac{\partial p }{\partial x_1}(\pi-z) + \frac{\partial p}{\partial x_2} (1-\pi ) = (1-p ) (- \frac{\partial \pi}{\partial x_2} - \frac{\partial \pi}{\partial x_1})
\end{equation}
Player 1's share cannot exceed
$\pi = z + .5(1-z) = .5 + .5z$, however. Suppose $\pi = .5 + .5z$, so that $\pi-z = 1- \pi$. The left side of the previous equation is then negative or zero if $\frac{\partial p }{\partial x_1} \geq
\frac{\partial p }{\partial x_2}$, which must be the case since when $\pi>.5$, as here, it must be that $x_1>x_2$ and the second derivatives of $p$ are negative or zero.
At the same time, when $x_1>x_2$ then $ \frac{\partial \pi}{\partial x_1} \leq - \frac{\partial \pi}{\partial x_2}$, so the right side of the equation must be zero or positive. The left side can equal the right side only if both are zero, which happens only if both $p$ and $\pi$ are linear.
Hence, we can conclude that $.5 < \pi \leq .5+ .5z$.
$ \blacksquare$
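The knife-edge case of Proposition 4 can be illustrated numerically (a sketch; the linear division rule $\pi = .5 + (x_1-x_2)/12$ is an assumption made for this illustration, not a functional form used elsewhere in the paper). With both $p$ and $\pi$ linear, the candidate $(x_1, x_2) = (3+6z,\, 3)$ satisfies both first-order conditions and player 1's share attains the upper bound $.5 + .5z$ exactly:

```python
def payoff1(x1, x2, z):
    # Linear breakdown probability and linear division rule (illustrative)
    p = (x1 + x2) / 12
    pi = 0.5 + (x1 - x2) / 12
    return p * z + (1 - p) * pi

def payoff2(x1, x2, z):
    p = (x1 + x2) / 12
    pi = 0.5 + (x1 - x2) / 12
    return (1 - p) * (1 - pi)

z = 0.4
x1, x2 = 3 + 6 * z, 3.0   # candidate equilibrium under the linear rules
h = 1e-6
# Central finite differences should be (numerically) zero at an optimum:
d1 = (payoff1(x1 + h, x2, z) - payoff1(x1 - h, x2, z)) / (2 * h)
d2 = (payoff2(x1, x2 + h, z) - payoff2(x1, x2 - h, z)) / (2 * h)
share = 0.5 + (x1 - x2) / 12
print(d1, d2, share)  # derivatives ~0; share hits the bound .5 + .5z
```

With the nonlinear $\pi$ of Example 6, by contrast, the share $1/(2-z)$ stays strictly below $.5+.5z$, as the proposition requires.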
Think of the left-hand side of (\ref{outside}) as player 1's marginal cost and benefit of toughness and the right-hand side as player 2's. Suppose we start with player 2's first-order condition satisfied and with $x_1=x_2$. This means that $\pi = 1/2$ and the marginal benefits of increasing share via increasing toughness are the same for both players. Player 1's marginal cost, however, is less than player 2's, because it consists of the marginal increase in the probability of breakdown, which is the same for both players, times what is lost, which is just $\pi-z$ for player 1 but $1-\pi$ for player 2. Hence, for player 1 to satisfy his first order condition, $x_1$ must increase relative to $x_2$, which has the effect of reducing player 1's marginal benefit (because $\frac{\partial \pi }{\partial x_1}$ falls in $x_1$ ) and increasing his marginal cost (both because $\pi-z$ rises and because $\frac{\partial p }{\partial x_1}$ rises).
\bigskip
\noindent
{\sc 8. Concluding Remarks}
The purpose of this model is to show how a simple and intuitive force--- the fear of inducing bargaining breakdown by being too tough--- leads to a 50-50 split as the unique equilibrium outcome in bargaining. The model also implies that the more risk-averse player gets a smaller share of the pie, and it is easily adapted to $n$ players. The bargainers fear that being too tough will induce breakdown and cause them to lose their equilibrium share; but that means that if one player's equilibrium share were greater, his potential loss would be greater too, and he would scale back his toughness to the other player's level. If one player is better at being tough without inducing breakdown, however, that player is willing to push harder and does indeed receive a bigger share. Thus, bargaining power in this model is skill at being tough without pushing the other player too far. All of this operates under complete information and without any need for multiple periods of bargaining. The game can be extended to multiple periods, in which case it becomes similar to Rubinstein (1982) but without the asymmetry of one player being privileged to make the first offer, and with the possibility that bargaining lasts more than one round.
\newpage
\noindent
{\bf References}
\noindent
{\bf Abreu}, Dilip \& Faruk {\bf Gul} (2000) \href{https://onlinelibrary.wiley.com/doi/pdf/10.1111/1468-0262.00094}{``Bargaining and Reputation,''}
{\it Econometrica}, 68: 85--117.
\noindent
{\bf Ambrus}, Attila \& Shih En {\bf Lu} (2015)
\href{https://www.jstor.org/stable/24467041}{``A Continuous-Time Model of Multilateral Bargaining,''}
{\it American Economic Journal: Microeconomics}, 7: 208--249.
\noindent
{\bf Anbarci}, Nejat (2001) \href{https://link.springer.com/content/pdf/10.1023/A:1010363409312.pdf}{``Divide-the-Dollar Game Revisited,''} {\it Theory and Decision}, 50: 295--304.
\noindent
{\bf Baron}, David P. \& John A. {\bf Ferejohn} (1989) \href{https://www.uibk.ac.at/economics/bbl/lit_se/papieress08/baron_ferejohn(1989).pdf } { ``Bargaining in Legislatures,''}
{\it The American Political Science Review}, 83: 1181--1206.
\noindent
{\bf Bastianello}, Lorenzo \& Marco {\bf LiCalzi} (2019) \href{https://doi.org/10.3982/ECTA13673}{``The Probability to Reach an Agreement as a Foundation for Axiomatic Bargaining,''} {\it Econometrica}, 87: 837--865.
\noindent
{\bf Binmore}, Ken G. (1980) ``Nash Bargaining Theory II,'' ICERD Discussion Paper 80/14, London School of Economics.
\noindent
{\bf Binmore}, Ken G. (1985) ``Bargaining and Coalitions,'' in {\it Game-Theoretic Models of Bargaining,} Alvin Roth, ed., Cambridge: Cambridge University Press.
\noindent
{\bf Binmore}, Ken G. (1987) ``Nash Bargaining Theory II,'' in Ken G. Binmore \& Partha Dasgupta, eds., {\it The Economics of Bargaining,} chap. 4, Oxford: Blackwell.
\noindent
{\bf Binmore}, Ken G., Ariel {\bf Rubinstein} \& Asher {\bf Wolinsky} (1986) \href{https://www.jstor.org/stable/2555382 } {
``The Nash Bargaining Solution in Economic Modelling,''}
{\it The RAND Journal of Economics}, 17: 176--188.
\noindent
{\bf Carlsson}, Hans (1991) \href{https://www.jstor.org/stable/2938376}{``A Bargaining Model Where Parties Make Errors,''} {\it Econometrica}, 59: 1487--1496.
\noindent
{\bf Chae}, Suchan \& Jeong-Ae {\bf Yang} (1988) \href{https://doi.org/10.1016/0165-1765(88)90118-8}{``The Unique Perfect Equilibrium of an N-Person Bargaining Game,''} {\it Economics Letters}, 28: 221--223.
\noindent
{\bf Chae}, Suchan \& Jeong-Ae {\bf Yang} (1994) \href{https://doi.org/10.1006/jeth.1994.1005}{``An N-Person Pure Bargaining Game,''} {\it Journal of Economic Theory}, 62: 86--102.
\noindent
{\bf Chatterjee}, K. \& H. {\bf Sabourian} (2000) \href{https://onlinelibrary.wiley.com/doi/pdf/10.1111/1468-0262.00169}{``Multiperson Bargaining and Strategic Complexity,''} {\it Econometrica}, 68: 1491--1509.
\noindent
{\bf Connell,} Christopher \& Eric {\bf Rasmusen} (2019) \href{http://www.rasmusen.org/papers/mixedpie.pdf } { ``Splitting a Pie: Mixed Strategies in Bargaining under Complete Information,''} Indiana University Dept of Business Economics and Public Policy working paper.
\noindent
{\bf Crawford}, Vincent (1982) \href{https://www.jstor.org/stable/1912604}{``A Theory of Disagreement in Bargaining,''} {\it Econometrica}, 50: 607--637.

\noindent
{\bf Crawford}, Vincent (1991) \href{https://econweb.ucsd.edu/~vcrawfor/ArrowPrattTyped.pdf}{``Arrow-Pratt Characterization of Comparative Risk,''} Econ 200C notes.
\noindent
{\bf Ellingsen}, Tore \& Topi {\bf Miettinen} (2008) \href{https://www.jstor.org/stable/29730138}{``Commitment and Conflict in Bilateral Bargaining,''} {\it The American Economic Review}, 98: 1629--1635.
\noindent
{\bf Herrero}, M. (1985) ``Strategic Theory of Market Institutions,'' unpublished Ph.D. dissertation, London School of Economics.
\noindent
{\bf Huang}, Chen-Ying (2002) \href{https://link.springer.com/content/pdf/10.1007%2Fs001990100192.pdf}{``Multilateral Bargaining: Conditional and Unconditional Offers,''} {\it Economic Theory}, 20: 401--412.
\noindent
{\bf Krishna}, Vijay \& Roberto {\bf Serrano} (1996) \href{https://doi.org/10.2307/2298115}{``Multilateral Bargaining,''} {\it Review of Economic Studies}, 63: 61--80.
\noindent
{\bf Kultti}, Klaus \& Hannu {\bf Vartiainen} (2010) \href{https://link.springer.com/article/10.1007/s00182-009-0212-3}{``Multilateral Non-Cooperative Bargaining in a General Utility Space,''} {\it International Journal of Game Theory}, 39: 677--689.
\noindent
{\bf Malueg}, David A. (2010) \href{https://link.springer.com/article/10.1007/s00199-009-0478-5}{``Mixed-Strategy Equilibria in the Nash Demand Game,''} {\it Economic Theory}, 44: 243--270.
\noindent
{\bf Muthoo}, Abhinay (1992) ``Revocable Commitment and Sequential Bargaining,'' {\it Economic Journal}, 102: 378--387.
\noindent
{\bf Nash}, John F. (1950) \href{ https://www.jstor.org/stable/1907266} { ``The Bargaining Problem,''}
{\it Econometrica}, 18(2): 155--162.
\noindent
{\bf Nash}, John F. (1953) \href{https://www.jstor.org/stable/1906951 } { ``Two-Person Cooperative Games,''}
{\it
Econometrica},
21(1): 128--140.
\noindent
{\bf Osborne}, Martin (1985) \href{https://books.google.com/books?hl=en&lr=&id=zY0Ljv6AeV8C&oi=fnd&pg=PA181&dq=Osborne%7D,++Martin+(1985)++%5Chref%7B+%7D+%7B+%60%60The+Role+of+Risk+Aversion+in+a+Simple+Bargaining+Model,&ots=RfUmVZHale&sig=djo2DIJgMewbaYtTCjiRvDKTFow#v=onepage&q&f=false } { ``The Role of Risk Aversion in a Simple Bargaining Model,"} in {\it Game-Theoretic Models of Bargaining,} Alvin Roth, ed., Cambridge: Cambridge
University Press.
\noindent
{\bf Osborne}, Martin \& Ariel {\bf Rubinstein} (1990) {\it Bargaining and Markets,} Bingley: Emerald Group Publishing.
\noindent
{\bf Rasmusen}, Eric (1989/2007) \href{ http://www.rasmusen.org/GI/index.html} {
{\it Games and Information: An Introduction to Game Theory}}, Oxford: Blackwell Publishing (1st ed. 1989; 4th ed. 2007).
\noindent
{\bf Roth}, Alvin E. (1977) \href{http://web.stanford.edu/~alroth/papers/1977_E_Shapley_Value_as.pdf}{``The Shapley Value as a von Neumann-Morgenstern Utility Function,''} {\it Econometrica}, 45: 657--664.
\noindent
{\bf Roth}, Alvin E. (1979) {\it Axiomatic Models of Bargaining,} Berlin: Springer-Verlag.
\noindent
{\bf Roth}, Alvin E. (1985) \href{https://www.jstor.org/stable/1911733}{``A Note on Risk Aversion in a Perfect Equilibrium Model of Bargaining,''} {\it Econometrica}, 53: 207--212.
\noindent
{\bf Rubinstein,} Ariel (1982) \href{ https://www.jstor.org/stable/1912531 } { ``Perfect Equilibrium in a Bargaining Model,''}
{\it Econometrica}, 50: 97--109.
\noindent
{\bf Shaked}, Avner \& John {\bf Sutton} (1984) \href{https://www.jstor.org/stable/1913509 } { ``Involuntary Unemployment as a Perfect Equilibrium in a Bargaining Model,''}
{\it Econometrica}, 52: 1351--1364.
\noindent
{\bf Suh}, S.-C. \& Q. {\bf Wen} (2006) ``Multi-Agent Bilateral Bargaining and the Nash Bargaining Solution,'' {\it Journal of Mathematical Economics}, 42: 61--73.
\noindent
{\bf Sutton}, John (1986) ``Non-Cooperative Bargaining Theory: An Introduction,'' {\it Review of Economic Studies}, 53: 709--724.
\end{document}