\documentclass[12pt,reqno,twoside,usenames,dvipsnames]{amsart}
\usepackage{setspace}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{verbatim}
\hypersetup{breaklinks=true,
pagecolor=white,
colorlinks=true,
linkcolor= blue,
hyperfootnotes= true,
urlcolor=blue
}
\urlstyle{rm}
% \reversemarginpar
%\topmargin -.3in \oddsidemargin -.1in
%\textheight 9in \textwidth 7.5in
\newcommand{\margincomment}[1]
{\mbox{}\marginpar{\tiny\hspace{0pt}#1}}
\newcommand{\comments}[1]{}
\renewcommand{\baselinestretch}{1.2}
\parindent 24pt
\parskip 10pt
% \doublespacing
\begin{document}
\titlepage
%\vspace*{12pt}
\begin{center}
{\large {\bf Back to Bargaining Basics
}}
September 26, 2018
\bigskip
Eric Rasmusen
{\it Abstract}
\end{center}
\begin{small}
Nash (1950) and Rubinstein (1982) give two different justifications for a 50-50 split of surplus to be the outcome of bargaining between two players. I offer a simple static theory that reaches a 50-50 split as the unique equilibrium of a game in which each player chooses a ``toughness level'' simultaneously, but greater toughness always generates a risk of breakdown. If constant absolute risk aversions $\alpha_i$ are added to the model, the more risk-averse player's share is smaller. If breakdown is merely delay, then the players' discount rates affect their toughness and their shares, as in Rubinstein. The model is easily extended to three or more players, unlike earlier models, and requires minimal assumptions on the functions which determine breakdown probability and surplus share as functions of toughness.
\noindent
Rasmusen: Dan R. and Catherine M.
Dalton
Professor, Department of Business Economics and Public Policy, Kelley
School
of Business, Indiana University. 1309 E. 10th Street,
Bloomington,
Indiana, 47405-1701. (812) 855-9219.
\href{mailto:erasmuse@indiana.edu}{erasmuse@indiana.edu}, \url{http://www.rasmusen.org}. Twitter: @erasmuse.
{\small
\noindent This paper:
\url{http://www.rasmusen.org/papers/bargaining.pdf}. }
{\small
\noindent
Keywords: bargaining, splitting a pie, Rubinstein model, Nash bargaining solution, hawk-dove game, Nash Demand Game, Divide the Dollar }\\
%JEL Codes: xxx.
{\small I would like to thank Benjamin Rasmusen, Michael Rauh, and participants in the BEPP Brown Bag for helpful comments. }
\end{small}
\newpage
\noindent
{\sc 1. Introduction}
Bargaining shows up as part of so many models in economics that it's especially useful to have simple models of it with the properties appropriate for the particular context. Often, the modeller wants the simplest model possible, because the bargaining outcome doesn't matter to his question of interest, so he assumes one player makes a take-it-or-leave-it offer and the equilibrium is that the other player accepts the offer. Or, if it matters that both players receive some surplus (for example, if the modeller wishes to give both players some incentive to make relationship-specific investments), the modeller chooses to have the surplus split 50-50. This can be done as a ``black box'' reduced form. Or, it can be taken as the unique symmetric equilibrium and the focal point in the ``Splitting a Pie'' game (also called ``Divide the Dollar''), in which both players simultaneously propose a surplus split and if their proposals add up to more than 100\% they both get zero. The caveats ``symmetric'' and ``focal point'' need to be applied because this game, the most natural way to model bargaining, has a continuum of equilibria, including not only 50-50, but 70-30, 80-20, 50.55-49.45, and so forth. Moreover, it is a large infinity of equilibria: as shown in Malueg (2010) and Connell \& Rasmusen (2018), there are also continua of mixed-strategy equilibria such as the Hawk-Dove equilibria (both players mixing between 30 and 70), more complex symmetric discrete mixed-strategy equilibria (both players mixing between 30, 40, 60, and 70), asymmetric discrete mixed-strategy equilibria (one player mixing between 30 and 40, and the other mixing between 60 and 70), and continuous mixed-strategy equilibria (both players mixing over the interval [30, 70]).
Commonly, though, modellers cite Nash (1950) or Rubinstein (1982), which have unique equilibria. On Google Scholar these two papers had 9,067 and 6,343 citations, respectively, as of September 6, 2018.
It is significant that the Nash model is the entire subject of Chapter 1 and the Rubinstein model is the entire subject of Chapter 2 of the best-known books on the theory of bargaining, Martin Osborne and Ariel Rubinstein's 1990 {\it Bargaining and Markets} and Abhinay Muthoo's 1999 {\it Bargaining Theory with Applications} (though, to be sure, William Spaniel's 2014 {\it Game Theory 101: Bargaining}, and my own treatment in Chapter 12 of {\it Games and Information}, are organized somewhat differently).
Nash (1950) finds his unique 50-50 split using four axioms. {\it Invariance} says that the solution is independent of the units in which utility is measured. {\it Efficiency} says that the solution is Pareto optimal, so the players cannot both be made better off by any change. {\it Independence of Irrelevant Alternatives} says that if we drop some possible pie divisions, then if the equilibrium division is not one of those dropped, the equilibrium division does not change. {\it Anonymity (or Symmetry)} says that switching the labels on players 1 and 2 does not affect the solution.
Rubinstein (1982) obtains the 50-50 split quite differently. Nash's equilibrium is in the style of cooperative games, a reduced form without rational behavior. The idea is that somehow the players will reach a split, and while we cannot characterize the process, we can characterize implications of any reasonable process.
The ``Nash program'' as described in Binmore (1980, 1985) is to give noncooperative microfoundations for the 50-50 split. Rubinstein (1982) is the great success of the Nash program. In Rubinstein's model, each player in turn proposes a split of the pie, with the other player responding with Accept or Reject. If the response is Reject, the pie's value shrinks according to the discount rates of the players. This is a stationary game of complete information with an infinite number of possible rounds. In the unique subgame perfect equilibrium, the first player proposes a split giving slightly more than 50\% to himself, and the other player Accepts, knowing that if he Rejects and waits to the second period so he has the advantage of being the proposer, the pie will have shrunk, so it is not worth waiting. If one player is more impatient, that player's equilibrium share is smaller.
The split in Rubinstein (1982) is not exactly 50-50, because the first proposer has a slight advantage. As the time periods become shorter, though, the asymmetry approaches zero. Also, it is not unreasonable to assume that each player has a 50\% chance of being the one who gets to make the first offer, an idea used in the Baron \& Ferejohn (1989) model of legislative bargaining. In that case, the split will not be exactly 50-50, but the ex ante expected payoffs are 50-50, which is what is desired in many applications of bargaining as a submodel.
Note that this literature is distinct from the mechanism design approach to bargaining of Myerson (1981). The goal in mechanism design is to discover what bargaining procedure the players would like to be required to follow, with special attention to situations with incomplete information about each other's preferences. In the perfect-information Nash bargaining context, an optimal mechanism can be very simple: the players must accept a 50-50 split of the surplus. The question is how they could impose that mechanism on themselves. Mechanism design intentionally does not address the question of how the players can be induced to agree on a mechanism, because that is itself a bargaining problem.
In Rubinstein (1982), the players always reach immediate agreement. That is because he interprets the discount rate as time preference, but another way to interpret it--- if both players have the same discount rate--- is as an exogenous probability of complete bargaining breakdown, as in
Binmore, Rubinstein \& Wolinsky (1986) and the fourth chapter of Muthoo (1999). If there is an exogenous probability that negotiations break down and cannot resume, so the surplus is forever lost, then even if the players are infinitely patient they will want to reach agreement quickly to avoid the risk of losing the pie entirely. Especially when this assumption is made, the idea in Shaked and Sutton (1984) of looking at the ``outside options'' of the two players becomes important.
The model below will depend crucially on a probability of breakdown. Here, however, the probability of breakdown will not be exogenous. Rather, the two players will each choose how tough to be, and both their shares of the pie and the probability of breakdown will increase with their toughnesses. This is different from Nash (1950) because his axiom of Efficiency rules out breakdown by assumption. This is different from Rubinstein (1982), because in his model there is not even temporary breakdown unless a player chooses to reject an offer, and in equilibrium no player will make an offer he knows will be rejected. This is different from Binmore, Rubinstein \& Wolinsky (1986), because the probability of breakdown in that model is constant, independent of the actions of the players, except for breakdown caused by rejection of offers, which as in Rubinstein's model will not occur in equilibrium.
The significance of endogenous breakdown is that it imposes a continuous cost on a player who chooses to be tougher. In Rubinstein (1982), the proposer's marginal cost of toughness is zero as he proposes a bigger and bigger share for himself up until the point where the other player would reject his offer---where the marginal cost becomes infinite. In the model you will see below, the marginal cost of toughness is the increase in the probability of breakdown times the share that is lost, so a player cannot become tougher without positive marginal cost. Moreover, since ``the share that is lost'' is part of the marginal cost, that cost will be higher for the player with the bigger share. This implies that if the player with the bigger share is indifferent about being tougher, the other player will have a lower marginal cost of being tougher and will not be indifferent. As a result, a Nash equilibrium will require that both players have the same share. This is subject to caveats about symmetric preferences and convexity of the payoff and breakdown functions, but as you will see, these caveats will be quite weak. Moreover, the model is even simpler than Rubinstein (1982), because it is a static model, so stationarity and subgame perfectness do not come into play. It nonetheless can be interpreted as a multi-period model, with breakdown being temporary, in which case its behavior is much like Rubinstein's model.
\newpage
\bigskip
\noindent
{\sc 2. The Model}
Players 1 and 2 are splitting a pie of size 1. Each simultaneously chooses a toughness level $x_i$ in $[0, \infty)$. With probability $p(x_1, x_2)$, bargaining fails and each ends up with a payoff of zero. Otherwise, player 1 receives $\pi(x_1, x_2)$ and Player 2 receives $1-\pi(x_1, x_2)$.
\noindent
{\bf Example 1: The Basics}.
Let $p(x_1, x_2) = \frac{x_1+x_2}{12}$ and $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$. In equilibrium, each player maximizes his own payoff. Player 1's payoff is
\begin{equation} \label{e0}
Payoff_1 = p (0) + (1-p)\pi = (1- \frac{x_1+x_2}{12})\frac{x_1 }{x_1+x_2} = \frac{x_1 }{x_1+x_2}- \frac{x_1}{12}
\end{equation}
The first order condition is
\begin{equation} \label{e0}
\frac{\partial Payoff_1}{\partial x_1} = \frac{1 }{x_1+x_2}- \frac{x_1 }{(x_1+x_2)^2} - 1/12=0
\end{equation}
so $x_1+x_2 - x_1 - \frac{(x_1+x_2)^2}{12}=0 $ and $12x_2 - (x_1+x_2)^2=0$. For Player 2,
\begin{equation} \label{e0}
Payoff_2 = p(0) + (1-p)(1-\pi) = (1- \frac{x_1+x_2}{12}) (1-\frac{x_1 }{x_1+x_2}) = \frac{x_2 }{x_1+x_2}- \frac{x_2}{12}
\end{equation}
The equilibrium is symmetric, since the payoff functions are. Player 1's reaction curve is
\begin{equation} \label{e0}
x_1 = 2 \sqrt{3} \sqrt{x_2} - x_2,
\end{equation}
as shown in Figure 1.
Solving with $ x_1=x_2 =x$ we obtain $x= 3$ in the unique Nash equilibrium, with no need to apply the refinement of subgame perfectness. The pie is split equally, and the probability of breakdown is $p = \frac{3+3}{12} = 50\%$.
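As a quick numerical check of Example 1 (an editorial illustration, not part of the formal argument; the function names are mine), a grid search confirms that $x_1=x_2=3$ is a mutual best response, with payoff $0.25$ each and breakdown probability $0.5$:

```python
import math

# Example 1: p = (x1+x2)/12, pi = x1/(x1+x2). Check that x1 = x2 = 3 is
# a mutual best response and matches the closed-form reaction curve.

def payoff1(x1, x2):
    """Player 1's expected payoff: (1 - p) * pi."""
    s = x1 + x2
    return (1 - s / 12) * (x1 / s)

def best_response(x2):
    """Player 1's best response to x2, by grid search over (0, 12)."""
    grid = [i / 1000 for i in range(10, 12000)]
    return max(grid, key=lambda x1: payoff1(x1, x2))

print(best_response(3.0))                 # 3.0: x1 = x2 = 3 is an equilibrium
print(payoff1(3.0, 3.0))                  # 0.25: each player's expected payoff
print(2 * math.sqrt(3 * 3.0) - 3.0)       # 3.0: the reaction curve at x2 = 3
```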
\newpage
\begin{center}
{\sc Figure 1:\\
Reaction Curves for Toughnesses $x_1$ and $x_2$ in Example 1 } \label{example-1-reaction-curves.pdf}
\includegraphics[width=3in]{example-1-reaction-curves.pdf}
\end{center}
\bigskip
\noindent
{\bf The General Model}
Let us now generalize.
As in Example 1, let the probability of bargaining breakdown be $p(x_1, x_2)$ and let Player 1's share of the pie be $\pi(x_1, x_2)$. Let us add an effort cost $c(x_i)$ for player $i$, with $c \geq 0$, $c' \geq 0, c'' \geq 0$. We will also allow the players to be risk averse, with possibly differing degrees of risk aversion: quasilinear utility $u_1(\pi) - c(x_1)$ and $u_2(1-\pi) - c(x_2)$ with $u_1'>0, u_1'' \leq 0$ and $u_2'>0, u_2'' \leq 0$.
We will assume for the breakdown probability $p$ that $p_1>0$, $p_2>0$, $p_{11} \geq 0$, $p_{22} \geq 0$, and $p_{12} \geq 0$ for all values of $x_1, x_2$ such that $p<1$, and that $p_1= p_2=0$ for greater values. The probability of breakdown rises with each player's toughness, and it rises weakly convexly up until it reaches 1, after which $p_1=p_2=0$. Also, let us assume that $p(a,b) = p(b,a)$, which is to say that the breakdown probability does not depend on the identity of the players, just the combination (not the permutation) of toughnesses they choose.
We will assume for Player 1's share of the pie $\pi \in [0,1]$ that $\pi_1>0$, $\pi_{11} \leq 0$, and $\pi_{12} \geq 0$. A player's share rises with toughness, and rises weakly concavely. Also, $\pi(a, b) = 1- \pi(b, a)$, which is to say that if one player chooses $a$ and the other chooses $b$, the share of the player choosing $a$ does not depend on whether he is player 1 or player 2.
These assumptions on $\pi$ imply that $\lim_{x_1 \rightarrow \infty} \pi_1 = 0$, since $\pi_1>0$, $\pi_{11} \leq 0$, and $\pi \leq 1$; as $x_1$ grows, if its marginal effect on $\pi$ were constant then $\pi$ would eventually hit the upper bound $\pi=1$, and for higher $x_1$ we would have $\pi_1=0$; if instead the marginal effect on $\pi$ diminishes, it must diminish to zero.
\noindent
{\bf Theorem.} {\it The general model has a unique Nash equilibrium, and that equilibrium is in pure strategies with a 50-50 split of the surplus.}
\noindent
{\bf Proof.} The expected payoffs are
\begin{equation} \label{e0}
Payoff(1)= p(x_1, x_2)(0) + (1-p(x_1, x_2)) u_1(\pi(x_1, x_2)) - c(x_1)
\end{equation}
and
\begin{equation} \label{e0}
Payoff(2) = p(x_1, x_2)(0) + (1-p(x_1, x_2)) u_2(1-\pi(x_1, x_2)) - c(x_2).
\end{equation}
The first order conditions are
\begin{equation} \label{FOC-general}
\frac{\partial Payoff(1)}{\partial x_1}= (u_1' \cdot \pi_1 - p u_1'\pi_1) - (p_1 u_1 + c'(x_1)) =0
\end{equation}
and
\begin{equation} \label{e0}
\frac{\partial Payoff(2)}{\partial x_2}= (-u_2' \cdot \pi_2 + p u_2'\pi_2) - (p_2 u_2 + c'(x_2)) =0,
\end{equation}
where the first two terms in parentheses are the marginal benefit of increasing one's toughness and the second two terms are the marginal cost. The marginal benefit is an increased share of the pie, adjusted for diminishing marginal utility of consumption. The marginal cost is the loss from more breakdown plus the marginal cost of toughness.
First, note that if there is a corner solution at $x_1=x_2=0$, that is a unique solution with a 50-50 split of the surplus. That occurs if
$\frac{\partial Payoff(1)}{\partial x_1}(0,0)<0$, since the weak convexity assumptions tell us that higher levels of toughness would also have marginal cost greater than marginal benefit. That is why we did not need to make a limit assumption such as $\lim_{x_1 \rightarrow 0} \pi_1 = \infty$ and $c'' <\infty$ for the theorem to be valid, though of course the model is trivial if the toughness levels of both players are zero.
There is not a corner solution with large $x_1$. Risk-neutral utility with zero direct toughness cost makes risking breakdown by choosing large $x_1$ most attractive, so it is sufficient to rule out a corner solution for that case. Set $u_1(\pi) = \pi$ and $c(x_1)=0$, so $ \frac{\partial Payoff(1)}{\partial x_1}= (1-p)\pi_1 - p_1 \pi$. The function $p$ is linear or convex, so it equals 1 for some finite $x_1 \equiv \overline{x}$ (for given $x_2$). We have $p_1>0$ by assumption, and $p_1$ does not fall below $p_1(0, x_2)$ because $p_{11} \geq 0$. Hence, at $x_1=\overline{x}$, $ (1-p(\overline{x}, x_2 )) \pi_1 - p_1(\overline{x}, x_2 ) \pi = 0 - p_1(\overline{x}, x_2 ) \pi <0$ and the solution to player 1's maximization problem must be $x_1 < \overline{x}$.
We can now look at interior solutions.
It will be useful to establish that the marginal return to toughness is strictly decreasing, which we can do by showing that $ \frac{\partial^2 Payoff(1)}{\partial x_1^2} <0$.
The derivative of the first two terms in \eqref{FOC-general} with respect to $x_1$ is
\begin{equation} \label{first-two-terms}
\begin{array}{l}
[ u_1'' \pi_1^2 + u_1' \pi_{11}] +[ - p_1 u_1'\pi_1 - p u_1''\pi_1^2 - p u_1'\pi_{11}] \\
\\
= (1-p) u_1'' \pi_1^2 + (1-p) u_1' \pi_{11} - p_1 u_1'\pi_1 \\
\end{array}
\end{equation}
The first term of \eqref{first-two-terms} is zero or negative because $(1-p)>0$ and $u_1'' \leq 0$. The second term is zero or negative because $(1-p)>0$, $u_1'>0$, and $ \pi_{11} \leq 0$. The third term--- the key one--- is strictly negative because $p_1>0$, $u_1'>0$, and $\pi_1 >0$. So the expression in \eqref{first-two-terms} is strictly negative: the marginal benefit of toughness is strictly decreasing.
The derivative of the third and fourth terms of \eqref{FOC-general} with respect to $x_1$, the marginal cost, is
\begin{equation} \label{second-two-terms}
- p_{11} u_1 - p_1 u_1' \pi_1 - c''(x_1)
\end{equation}
The first term of \eqref{second-two-terms} is zero or negative because $p_{11} \geq 0$ and $u_1>0$. The second term--- another key one--- is strictly negative because $p_1>0$, $u_{1}' > 0$, and $\pi_1>0$. The third term is zero or negative because $c''(x_1) \geq 0$. Thus, the marginal return to toughness is strictly decreasing.
The derivative of \eqref{FOC-general} with respect to $x_2$, the other player's toughness, is
\begin{equation} \label{d2}
\begin{array}{l}
\frac{\partial^2 Payoff(1)}{\partial x_1 \partial x_2}= (1-p)u_{1}'' \pi_{12} - p_2 u_1'\pi_1 - p_{12}u_1 - p_1 u_1'\pi_2 - 0.
\end{array}
\end{equation}
The first term is zero or negative because $u_1'' \leq 0$ and $\pi_{12} \geq 0$ by assumption. The third term is zero or negative because $p_{12} \geq 0$ by assumption. The second and fourth terms sum to $-u_1'(p_2\pi_1 + p_1 \pi_2)$. The sign of this sum depends on how $x_1$ compares with $x_2$: if $x_1 < x_2$ the sum is negative, if $x_1 > x_2$ it is positive, and if $x_1=x_2$ it is zero. Using the implicit function theorem, we can conclude that if $x_1 < x_2$ then $\frac{dx_1}{dx_2}<0$, since every term is then zero or negative, but for $x_1 > x_2$ we cannot determine the sign of $\frac{dx_1}{dx_2}$ without narrowing the model. Figure 1 illustrates this using Example 1. (See, too, Figure 2 below, though that is for the infinite-period model.)
Note that this means the reaction curve can start out rising, but as soon as $x_1=x_2$ it will start falling, and the reaction curves will never cross again (see Figures 1 and 2 for illustration, noting that the apparent $x_1=x_2=0$ intersection is not actually on the reaction curves because the two first derivatives are both positive there). So the equilibrium must be unique, with $x_1=x_2=a$ for some $a$. The assumption that the pie-splitting function is symmetric ensures that $\pi(a, a)=.5$.
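The shape of the reaction curve can be checked numerically on Example 1 (an editorial illustration; the function name is mine). Example 1's closed-form reaction curve $x_1 = 2\sqrt{3x_2} - x_2$ rises while $x_1>x_2$, peaks at $x_1=x_2=3$, and falls thereafter:

```python
import math

# Example 1's reaction curve x1(x2) = 2*sqrt(3*x2) - x2: it rises up to
# the symmetric point x1 = x2 = 3 and falls afterward, so the two
# players' reaction curves cross only once.

def reaction(x2):
    return 2 * math.sqrt(3 * x2) - x2

for x2 in [1.0, 2.0, 3.0, 4.0, 5.0]:
    print(x2, round(reaction(x2), 3))
# x2 = 1.0, 2.0, 3.0, 4.0, 5.0 gives x1 = 2.464, 2.899, 3.0, 2.928, 2.746
```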
There are no mixed-strategy equilibria, because unless $x_1=x_2$, one player's marginal return to toughness will be greater than the other's, so they cannot both be zero, and existence of a mixed-strategy equilibrium requires that two pure strategies have the same payoffs given the other player's strategy. $\blacksquare$
\bigskip
Many of the assumptions behind the Theorem are stated in terms of weak inequalities. The basic intuition works even with linear relations; we can add convexity to strengthen the result and ensure interior solutions, but convexity of functions is not driving the result as it usually does in economics. Rather, the basic intuition is that if one player is tougher than the other, he gets a bigger share and so has more to lose from breakdown, which means he has less incentive to be tough. Even if his marginal benefit of toughness--- an increase in his share--- were the same as the other player's (a linear relationship between $\pi$ and $x_i$), his marginal cost--- the increase in breakdown probability times his initial share--- is bigger, and that is true even if his marginal effect on breakdown probability is the same as the other player's (again, a linear relationship, between $p$ and $x_i$). That is why we get a 50-50 equilibrium in Example 1 even though $p$ is linear, $u$ is linear (so the notation $u$ does not even have to appear), and $c=0$. The Theorem tells us that if we make the natural convexity assumptions about $p$, $u$, and $c$, the 50-50 split continues to be the unique equilibrium, {\it a fortiori}.
This model relies on a positive probability of breakdown in equilibrium. In Example 1, the particular breakdown function $p$ leads to a very high equilibrium probability of breakdown--- 50\%. The model retains its key features, however, even if the equilibrium probability of breakdown is made arbitrarily small by choice of a breakdown function with sufficiently great marginal increases in breakdown as toughness increases. Example 2 shows how that works.
\bigskip
\noindent
{\bf Example 2: A Vanishingly Small Probability of Breakdown. }
Keep $\pi(x_1, x_2)=\frac{x_1 }{x_1+x_2}$ as in Example 1, but let the breakdown probability be $p(x_1, x_2) = \frac{(x_1+x_2)^k}{12k}$ for $k$ to be chosen. Player 1 maximizes
\begin{equation} \label{e0}
Payoff (1) = (1- \frac{(x_1+x_2)^k}{12k})\frac{x_1 }{x_1+x_2} = \frac{x_1 }{x_1+x_2}- \frac{x_1(x_1+x_2)^{k-1}}{12k}
\end{equation}
The first order condition is
\begin{equation} \label{e0}
\frac{1 }{x_1+x_2}- \frac{x_1 }{(x_1+x_2)^2} - x_1(k-1) (x_1+x_2)^{k-2}/12k - \frac{ (x_1+x_2)^{k-1}}{12k}=0
\end{equation}
so
$ 12k(x_1+x_2)- 12k x_1 - x_1(k-1) (x_1+x_2)^{k } - (x_1+x_2)^{k+1 } =0$ and
$ 12k x_2 - x_1(k-1) (x_1+x_2)^{k } - (x_1+x_2)^{k+1 } =0$. Player 2's payoff function is
\begin{equation} \label{e0}
Payoff (2) = (1- \frac{(x_1+x_2)^k}{12k})(1-\frac{x_1 }{x_1+x_2}) = \frac{x_2 }{x_1+x_2}- \frac{x_2(x_1+x_2)^{k-1}}{12k}
\end{equation}
The equilibrium is symmetric since the payoff functions are. We solve
$ 12 kx - x (k-1) (2x)^{k } - (2x)^{k+1 } =0$, so
$x=( \frac{ 12k (2^{-k})}{k+1})^{1/k}$, and
\begin{equation} \label{e0}
x= .5( \frac{ 12k }{k+1})^{1/k}
\end{equation}
If $k=1$ then $ x =( \frac{ 12 (2^{-1})}{2})^{1} =3$, and $p = \frac{6}{12} = .5$, as in Example 1.
If $k=2$ then $ x \approx 1.4$ and $p = 1/3$.
If $k=5$ then $ x \approx .79$ and $p \approx .17$.
As $k$ increases, $x$ converges to $.5$. Since the probability of breakdown is $p(x_1, x_2) = \frac{(x_1+x_2)^k}{12k}$ and in equilibrium $(2x)^k = \frac{12k}{k+1}$, the equilibrium probability of breakdown is $p= \frac{1}{k+1}$, which approaches 0 as $k$ increases.
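The convergence is easy to tabulate (an editorial illustration; the function name is mine), using the closed forms $x = .5(12k/(k+1))^{1/k}$ and $p = (2x)^k/(12k)$:

```python
# Example 2: equilibrium toughness x and breakdown probability p as k grows.
# In equilibrium (2x)^k = 12k/(k+1), so p = 1/(k+1) exactly.

def equilibrium(k):
    x = 0.5 * (12 * k / (k + 1)) ** (1 / k)
    p = (2 * x) ** k / (12 * k)
    return x, p

for k in [1, 2, 5, 20, 100]:
    x, p = equilibrium(k)
    print(k, round(x, 3), round(p, 3))
# k = 1 reproduces Example 1 (x = 3, p = 0.5); as k grows, x -> 0.5 and p -> 0
```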
Thus, it is possible to construct a variant of the model in which the probability of breakdown approaches zero, but we retain the other features, including the unique 50-50 split of the surplus. Note that it is also possible to construct a variant with the equilibrium probability of breakdown approaching one, by using a breakdown probability function with a very low marginal probability of breakdown as toughness increases.
\bigskip
\noindent
{\sc 3. $N>2$ Players and Risk Aversion }
Let's next modify Example 1 by generalizing to $N$ bargainers, and then by adding risk aversion.
\bigskip
\noindent
{\bf Example 3: N Players. }
Now return to the risk neutrality of Example 1, but with $N$ players instead of 2. Player $i$'s payoff function will be
\begin{equation} \label{e0}
Payoff (i) = (1- \frac{\sum_{j=1}^N x_j}{12}) \frac{x_i }{\sum_{j=1}^N x_j}
\end{equation}
with first order condition
\begin{equation} \label{e0}
\frac{1 }{\sum_{j=1}^N x_j} - \frac{x_i }{(\sum_{j=1}^N x_j)^2} - \frac{1}{12} = 0
\end{equation}
All $N$ players have this same first order condition, so $x_i=x$ and
\begin{equation} \label{e0}
\frac{1 }{Nx} - \frac{x }{(Nx)^2} - \frac{1}{12} = 0
\end{equation}
so $ 12Nx - 12 x - N^2x^2 = 0 $ and $12(N-1) x - N^2x^2 = 0$ and $12(N-1) - N^2x = 0$ and
\begin{equation} \label{e0}
x = \frac{12(N-1)}{N^2},
\end{equation}
so the probability of breakdown is
\begin{equation} \label{e0}
p(x, \dots, x) = \frac{ N \frac{12(N-1)}{N^2}}{12} = \frac{N-1}{N}
\end{equation}
Thus, as $N$ increases, the probability of breakdown approaches but does not equal one.
If $N=2$, $x=12\cdot 1/4 = 3$ and the probability of breakdown is 50\%. If $N= 3$, $x = 12\cdot 2/9 \approx 2.67 $ so the probability of breakdown rises to about $3 \cdot 2.67 /12$, about 67\%. If $N = 10$, $x = 12\cdot 9/100 = 1.08$ and the probability rises further, to $10 \cdot 1.08/12$, which is 90\%. There is a negative externality from increasing toughness, and the effect of this externality increases with the number of players because each player's equilibrium share becomes smaller, so by being tougher he is mostly risking the destruction of the other players' payoffs.
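These comparative statics follow directly from the closed forms, as a short numerical check confirms (an editorial illustration; the function name is mine):

```python
# Example 3: with N risk-neutral players, x = 12(N-1)/N^2 and the
# breakdown probability N*x/12 = (N-1)/N rises toward one as N grows.

def n_player_equilibrium(N):
    x = 12 * (N - 1) / N ** 2    # each player's equilibrium toughness
    p = N * x / 12               # breakdown probability, equal to (N-1)/N
    return x, p

for N in [2, 3, 10, 100]:
    x, p = n_player_equilibrium(N)
    print(N, round(x, 2), round(p, 2))
# N = 2: x = 3.0, p = 0.5;  N = 3: x = 2.67, p = 0.67;  N = 10: x = 1.08, p = 0.9
```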
\bigskip
\noindent
{\bf Example 4: Risk Aversion. } Now add risk aversion to Example 1. Let the players have the constant absolute risk aversion (CARA) utility functions $u ( y_i;\alpha_i) =- e^{-\alpha_i y_i}$. Before finding the equilibrium, though, let's prove a general proposition:
\noindent
{\bf Proposition 2:} {\it If player 1 is more risk averse than player 2 for every value of $\pi$, player 2 gets a bigger share of the pie in equilibrium. }
\noindent
{\bf Proof.}
\begin{equation} \label{e0}
Payoff (1) = p u(0; \alpha_1) + (1 - p) u (\pi; \alpha_1)
\end{equation}
which has the first-order condition
\begin{equation} \label{e0}
p_1 u(0; \alpha_1) - p_1 u(\pi; \alpha_1) + (1-p) u'(\pi; \alpha_1) \pi_1 =0
\end{equation}
We can rescale the units of the two players' utility functions, so let us normalize so that $u(0; \alpha_1) \equiv u(0; \alpha_2) \equiv 0$ and $u'(0; \alpha_1) \equiv u'(0; \alpha_2)$. Then,
\begin{equation} \label{e0}
p_1 u(\pi; \alpha_1) = (1-p) u'(\pi; \alpha_1) \pi_1,
\end{equation}
so
\begin{equation} \label{e0}
\frac{ p_1} {1-p} = \frac{ u'(\pi; \alpha_1) \pi_1}{ u(\pi; \alpha_1) }
\end{equation}
Similarly, for player 2's choice of $x_2$,
\begin{equation} \label{e0}
\frac{ p_2} {1-p} = \frac{ u'(1-\pi; \alpha_2) \pi_2}{ u(1-\pi; \alpha_2) }
\end{equation}
If player 1 is less risk averse, player 2's utility function is a concave increasing transformation of player 1's (see the Arrow-Pratt characterization, e.g., \href{http://econweb.ucsd.edu/~vcrawfor/ArrowPrattTyped.pdf}{http://econweb.ucsd.edu/~vcrawfor/ArrowPrattTyped.pdf}).
This means that for a given $y$, player 1's marginal utility $u'(y)$ is bigger than player 2's, which also means that the average utility $u(y)/y$ is further from the marginal utility, because $u''<0$ and $u'(0)$ is the same for both. In that case, however, $\frac{u'(y)}{u(y)/y}$ is bigger for player 1, so $\frac{u'(y)}{u(y) }$ is also bigger. If $\pi = 1-\pi = .5$, we would need $p_1>p_2$ (unless both equalled zero) and $\pi_1< \pi_2$, which would require $x_1 \neq x_2$, contradicting $\pi=.5$. The only way both first order conditions can hold is if $x_1>x_2$, so that $p_1 \geq p_2$ and $\pi_1 < \pi_2$. $\blacksquare$
Now we can return to Example 4.
\begin{equation} \label{e0}
Payoff (1) = p u(0) + (1 - p) u (\pi) = \frac{x_1+x_2}{12} (-1) + (1- \frac{x_1+x_2}{12}) u_1(\frac{x_1 }{x_1+x_2}) ,
\end{equation}
which has the first-order condition
\begin{equation} \label{e0}
-\frac{1}{12} - \frac{1}{12} u +
(1- \frac{x_1+x_2}{12}) u' \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] =0.
\end{equation}
With CARA utility, if $\alpha_1 \neq 0$ then $u' = - \alpha_1 u$, so
\begin{equation} \label{e0}
-\frac{1}{12} - \frac{1}{12} u_1-
(1- \frac{x_1+x_2}{12}) \alpha_1 u_1 \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] =0
\end{equation}
and
\begin{equation} \label{e0}
-\frac{1}{12} + e^{-\alpha_1 \frac{x_1 }{x_1+x_2}} \left( \frac{1}{12} +
(1- \frac{x_1+x_2}{12}) \alpha_1 \cdot [\frac{1 }{x_1+x_2} - \frac{x_1 }{(x_1+x_2)^2}] \right) =0.
\end{equation}
From this point, I need to give numerical solutions, depending on the $\alpha$ parameters, with a table. UNFINISHED.
\begin{center}
{\sc Table 1: \\
Toughnesses, ($ x_1/x_2$), and Player 1's Share, {\bf $\pi$}, as Risk Aversion, ($\alpha_1, \alpha_2)$, Increases } NOT FINISHED--- WRONG NUMBERS
\begin{tabular}{l r| ccc ccc }
& & \multicolumn{6}{c} {$\alpha_1$}\\
& &.01 &.50 &1.00 &2.00&5.00 &10.00 \\
\hline
&2.000 & 9.9/1.0 {\bf 91} & 7.9/1.9 {\bf 81} & 6.2/2.6 {\bf 70} & 5.4/2.8 {\bf 66}& 3.9/3.2 {\bf 55} &3.3/3.3 {\bf 50}\\
&.500 & 9.8/1.1 {\bf 90} & 7.8/2.1 {\bf 79} &6.0/3.0 {\bf 67}& 5.2/3.3 {\bf 61} & 3.8/3.8 {\bf 50}&\\
&.100& 9.6/1.4 {\bf 87} &7.4/2.9 {\bf 75} & 5.4/4.2 {\bf 56} & 4.6/4.6 {\bf 50} &&\\
$\alpha_2$ &.050 &7.6/2.5 {\bf 75} & 7.0/3.5 {\bf 67} &4.9/4.9 {\bf 50} & &&\\
&.010 & 7.4/2.9 {\bf 72} &5.5/5.5 {\bf 50} & & & &\\
&.001 & 5.5/5.5 {\bf 50} && & & &\\
\end{tabular}
\end{center}
This makes sense. The more risk averse a player is relative to his rival, the lower his share of the pie. He doesn't want to be tough and risk breakdown, and both his direct choice to be less tough and the reaction of the other player to choose to be tougher in response reduce his share.
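This comparative static can be illustrated numerically (an editorial sketch, not the paper's own table, which is unfinished; the function names and the best-response-iteration method are mine). The sketch iterates grid-search best responses on the CARA payoffs $Payoff_i = p\cdot(-1) + (1-p)(-e^{-\alpha_i \cdot share_i})$ with $p = (x_1+x_2)/12$:

```python
import math

# Example 4 numerically: iterate best responses (grid search) on the
# CARA payoffs. Equal risk aversion gives a 50-50 split; making player 1
# more risk averse than player 2 pushes his share below 0.5.

def payoff(xi, xj, alpha):
    s = xi + xj
    p = min(s / 12, 1.0)                       # breakdown probability
    return -p - (1 - p) * math.exp(-alpha * xi / s)

def solve(alpha1, alpha2, iters=60):
    grid = [i / 200 for i in range(1, 2400)]   # toughness levels in (0, 12)
    x1, x2 = 3.0, 3.0
    for _ in range(iters):
        x1 = max(grid, key=lambda x: payoff(x, x2, alpha1))
        x2 = max(grid, key=lambda x: payoff(x, x1, alpha2))
    return x1, x2, x1 / (x1 + x2)

print(round(solve(1.0, 1.0)[2], 2))   # equal risk aversion: share near 0.5
print(solve(2.0, 0.5)[2])             # player 1 more risk averse: share below 0.5
```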
Note that this is a different effect of risk aversion than has appeared in the earlier literature. In a cooperative game theory model such as Nash (1950), risk aversion seems to play a role, but there is no risk in those games. Nash's Efficiency axiom means that there is no breakdown and no delay. Since we in economics conventionally model risk aversion as concave utility, however, risk seems to enter in when it is really just the shape of the utility function that does all the work; the more ``risk averse'' player is the one with sharper diminishing returns as his share of the pie increases. Alvin Roth discusses this in 1977 and 1985 papers in {\it Econometrica}, distinguishing between this ``strategic'' risk and ``ordinary'' or ``probabilistic'' risk that arises from uncertainty. On the other hand, Osborne (1985) looks at risk aversion in a model that does have uncertainty, but the uncertainty is the result of the equilibrium being in mixed strategies. One might also look at risk aversion this way in the mixed-strategy equilibria of Splitting a Pie examined in Malueg (2010) and Connell \& Rasmusen (2018). In the breakdown model, however, the uncertainty comes from the probability of breakdown, not from randomized strategies.
\bigskip
\noindent
{\sc 4. Breakdown Causing Delay, Not Permanent Breakdown--- A Model in the Style of Rubinstein (1982)}
In Rubinstein (1982), breakdown just causes delay, not permanent loss of the bargaining surplus. The players have positive discount rates, though, so each period of delay does cause some loss, a loss which, crucially, is proportional to a player's eventual share of the pie. Note, too, that the probability of breakdown is zero or one, rather than rising continuously with bargaining toughness.
In Rubinstein (1982), breakdown never occurs in equilibrium. That is because the game has no uncertainty and no asymmetric information. The players move sequentially, taking turns making the offer. The present model adapts very naturally to the setting of infinite periods. Breakdown simply means that the game is repeated in the next period, with new choices of toughness. Of course, the players must now have positive discount rates, or no equilibrium will exist because being tougher in a given period and causing breakdown would have no cost. (Paradoxically, if this happened every period, there would be a cost--- eternal disagreement--- but it is enough for Nash equilibrium to fail that no pairs of finite toughness in a given period can be best responses to each other.)
We will look at the effect of repetition and discounting in Example 5.
\noindent
{\bf Example 5: Possibly Infinite Rounds of Bargaining. }
Let us return to Example 1, with two risk-neutral players, but say that if bargaining breaks down, it resumes in a second round and continues until eventual agreement. In addition, the players have discount rates $r_1$ and $r_2$, both positive, and we will require that the equilibrium be subgame perfect, not just Nash. Denote the equilibrium expected payoff of player 1 by $V_1$, which will equal
\begin{equation} \label{V1}
Payoff(1) = V_1 =p \frac{ V_1}{1+r_1} + (1-p) \pi
\end{equation}
Player 1's choice of $x_1$ this period will not affect $V_1$ next period (because we require subgame perfectness, which makes the game stationary), so the first order condition is
\begin{equation} \label{foc1}
p_1 \frac{ V_1}{1+r_1} + (1-p) \pi_1 -p_1 \pi =0
\end{equation}
We can rewrite the payoff equation \eqref{V1} as $V_1 (1 - \frac{p}{1+r_1}) = (1-p)\pi$, so that $V_1 \frac{1 + r_1 - p}{1+r_1} = (1-p)\pi$ and
\begin{equation} \label{V1sub}
V_1 = \frac{ (1+r_1) (1-p) \pi}{(1+r_1-p) }
\end{equation}
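As a quick numeric sketch (not part of the model's exposition), the closed form \eqref{V1sub} can be checked against the recursion \eqref{V1}; the values of $p$, $\pi$, and $r_1$ below are arbitrary test values.

```python
from math import isclose

def V1_closed(p, pi, r1):
    """Closed form (V1sub): V1 = (1+r1)(1-p)pi/(1+r1-p)."""
    return (1 + r1) * (1 - p) * pi / (1 + r1 - p)

def recursion_rhs(p, pi, r1):
    """Right-hand side of (V1): p*V1/(1+r1) + (1-p)*pi."""
    V1 = V1_closed(p, pi, r1)
    return p * V1 / (1 + r1) + (1 - p) * pi

# The two sides agree for arbitrary test values.
for p, pi, r1 in [(0.5, 0.5, 0.1), (0.8, 0.3, 0.05), (0.2, 0.7, 1.0)]:
    print(isclose(V1_closed(p, pi, r1), recursion_rhs(p, pi, r1)))  # True
```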
Substituting \eqref{V1sub} into the first order condition gives
\begin{equation} \label{foc1sub}
p_1 \frac{\frac{ (1+r_1) (1-p) \pi}{(1+r_1-p) }}{1+r_1} + (1-p) \pi_1 -p_1 \pi =0
\end{equation}
so
\begin{equation} \label{solve-it}
p_1 \frac{ (1-p) \pi} {(1+r_1-p) } + (1-p) \pi_1 -p_1 \pi =0
\end{equation}
Let us now use the particular functional forms of the examples, $p = \frac{x_1+x_2}{12}$ and $\pi = \frac{x_1}{x_1+x_2}$, which tell us that $p_1 = 1/12$ and $\pi_1 = \frac{1}{x_1+x_2} - \frac{x_1}{(x_1+x_2)^2}$. Then solving \eqref{solve-it} yields
\begin{equation} \label{x1discounting}
x_1 = \frac{ -12 r_1 x_2 + x_2^2 - 12 x_2 + 12 \sqrt{ r_1 x_2 (12 r_1 - x_2 + 12 ) }}{12 r_1 - x_2}
\end{equation}
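Equation \eqref{x1discounting} can be checked numerically. The sketch below (using the Example 1 forms $p = \frac{x_1+x_2}{12}$ and $\pi = \frac{x_1}{x_1+x_2}$, so $p_1 = 1/12$) confirms that the toughness it prescribes drives the left-hand side of \eqref{solve-it} to zero.

```python
from math import sqrt

def foc(x1, x2, r1):
    """Left-hand side of the first order condition (solve-it)."""
    p = (x1 + x2) / 12          # breakdown probability, Example 1 form
    pi = x1 / (x1 + x2)         # player 1's share, Example 1 form
    p1, pi1 = 1 / 12, x2 / (x1 + x2) ** 2   # derivatives with respect to x1
    return p1 * (1 - p) * pi / (1 + r1 - p) + (1 - p) * pi1 - p1 * pi

def reaction(x2, r1):
    """Player 1's best-response toughness, equation (x1discounting)."""
    return (-12 * r1 * x2 + x2 ** 2 - 12 * x2
            + 12 * sqrt(r1 * x2 * (12 * r1 - x2 + 12))) / (12 * r1 - x2)

print(round(reaction(4.61, 0.1), 2))                     # 4.61
print(abs(foc(reaction(4.61, 0.1), 4.61, 0.1)) < 1e-9)   # True
```

Here $x_2 = 4.61$ with $r_1 = .1$ is the symmetric case of Table 2, so the best response reproduces $x_1 = 4.61$.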
If the discount rates are the same, the first order conditions are the same for both players and we get a symmetric equilibrium with $x_1=x_2=x$. Thus,
\begin{equation} \label{x-symmetric}
x = \frac{ -12 r x + x^2 - 12 x + 12 \sqrt{ r x (12 r - x + 12 ) }}{12 r - x},
\end{equation}
which solves to
\begin{equation} \label{x-repeated}
x= 6 r - 6 \sqrt{r^2+r} + 6
\end{equation}
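One can verify \eqref{x-repeated} numerically: at $x = 6r - 6\sqrt{r^2+r} + 6$ the symmetric first order condition holds to machine precision. A sketch, again using the Example 1 functional forms:

```python
from math import sqrt

def x_sym(r):
    """Symmetric equilibrium toughness, equation (x-repeated)."""
    return 6 * r - 6 * sqrt(r ** 2 + r) + 6

def foc(x1, x2, r):
    """First order condition (solve-it) with the Example 1 forms,
    with p1 = 1/12 factored in."""
    p = (x1 + x2) / 12
    pi = x1 / (x1 + x2)
    return ((1 - p) * pi / (12 * (1 + r - p))
            + (1 - p) * x2 / (x1 + x2) ** 2 - pi / 12)

for r in (0.01, 0.05, 0.1, 0.5, 2.0):
    x = x_sym(r)
    print(r, round(x, 1), abs(foc(x, x, r)) < 1e-9)  # toughness, FOC holds
```

The rounded toughnesses match the diagonal entries of Table 2.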
Equation \eqref{x-repeated} has the derivative
\begin{equation} \label{dfsdf}
\frac{d x}{d r} = 6 - \frac{6(2r+1)}{2\sqrt{r^2+r}} = 6 - 3\frac{2r +1}{\sqrt{r^2+r}}.
\end{equation}
Square the term $\frac{2r +1}{\sqrt{r^2+r}}$ and we get $\frac{4r^2 +4r+1}{r^2+r} = 4 + \frac{1 }{r^2+r} $, the square root of which is greater than 2. Thus, the derivative is less than $6 - 3\cdot 2$ and so is negative: as the discount rate rises, toughness falls. The bounds of $x$ are $x=6$ as $r \rightarrow 0$ and $x=3$ as $r \rightarrow \infty$.\footnote{ $ \sqrt{r^2+r} - r = \frac{r}{\sqrt{r^2+r} +r} = \frac{r}{ r \sqrt{1+ \frac{1}{r}} +r}= \frac{1}{ \sqrt{1+ \frac{1}{r}} +1},$ which clearly has the limit of 1/2 as $r \rightarrow \infty$, so $x = 6 - 6(\sqrt{r^2+r}-r) \rightarrow 3$.} Note that we found $x=3$ as the equilibrium in Example 1. That happened because in Example 1 breakdown reduces the players' payoffs to 0 immediately, the same as if the game were repeated but the players had infinitely high discount rates. If the players have very small discount rates, on the other hand, they have little reason to fear the delay from breakdown, so $x$ approaches 6. The diagonal values with the boldfaced 50\% split in Table 2 show the equilibrium toughnesses for various discount rates.
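The claimed monotonicity and bounds of $x(r)$ are easy to confirm numerically; a quick sketch:

```python
from math import sqrt

def x_sym(r):
    """Symmetric equilibrium toughness x(r) = 6r - 6*sqrt(r^2+r) + 6."""
    return 6 * r - 6 * sqrt(r ** 2 + r) + 6

rates = [0.001, 0.01, 0.1, 1.0, 10.0, 1000.0]
values = [x_sym(r) for r in rates]
print(all(a > b for a, b in zip(values, values[1:])))  # True: decreasing in r
print(round(x_sym(1e-9), 3), round(x_sym(1e9), 3))     # near the bounds 6 and 3
```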
\begin{center}
{\sc Table 2: \\
Toughnesses, ($x_1/ x_2$), and Player 1's Share, {\bf $\pi$}, as Impatience, ($r_1, r_2)$, Increases }
\begin{tabular}{l r| ccc ccc }
& & \multicolumn{6}{c} {$r_1$}\\
& &.001 &.010 &.050 & .100&.500&2.000\\
\hline
&2.000 & 9.9/1.0 {\bf 91} & 7.9/1.9 {\bf 81} & 6.2/2.6 {\bf 70} & 5.4/2.8 {\bf 66}& 3.9/3.2 {\bf 55} &3.3/3.3 {\bf 50}\\
&.500 & 9.8/1.1 {\bf 90} & 7.8/2.1 {\bf 79} &6.0/3.0 {\bf 67}& 5.2/3.3 {\bf 61} & 3.8/3.8 {\bf 50}&\\
&.100& 9.6/1.4 {\bf 87} &7.4/2.9 {\bf 75} & 5.4/4.2 {\bf 56} & 4.6/4.6 {\bf 50} &&\\
$r_2$ &.050 &7.6/2.5 {\bf 75} & 7.0/3.5 {\bf 67} &4.9/4.9 {\bf 50} & &&\\
&.010 & 7.4/2.9 {\bf 72} &5.5/5.5 {\bf 50} & & & &\\
&.001 & 5.8/5.8 {\bf 50} && & & &\\
\end{tabular}
\end{center}
The expected payoff is $V$ when $r_1=r_2=r$. It equals\footnote{$V = \frac{(1+r ) (6- x)}{12+12r -2x}$. Substituting $x= 6 r - 6 \sqrt{r^2+r} + 6$ makes the numerator $6(1+r)( \sqrt{r^2+r}-r)$ and the denominator $12 \sqrt{r^2+r}$, so $V =\frac{(1+r ) ( \sqrt{r^2+r}-r) }{ 2 \sqrt{r^2+r}}$. Since $1+r = \frac{r^2+r}{r}$, this equals $\frac{ \sqrt{r^2+r}}{2r}( \sqrt{r^2+r}-r) = \frac{r^2+r}{2r} - \frac{\sqrt{r^2+r}}{2} = \frac{r}{2}+ .5 - \frac{\sqrt{r^2+r}}{2} $. }
\begin{equation} \label{V-closed}
\begin{array}{lll}
V& =& (1+r ) (1-p)(.5) / (1+r -p) \\
&&\\
& =& (1+r ) (1- \frac{2x}{12})(.5) / (1+r -\frac{2x}{12})\\
&&\\
&=& \frac{r}{2}+ .5 - \frac{\sqrt{r^2+r}}{2} \\
\end{array}
\end{equation}
As $r \rightarrow 0$, $V \rightarrow .5$.
As $r \rightarrow \infty$, we know $x \rightarrow 3 $, so $V \rightarrow\frac{ (1+r ) (1- \frac{6}{12}) (.5)}{ (1+r -\frac{6}{12}) }= \frac{.25(1+r)}{.5+r}$. As $r \rightarrow \infty$, this last expression approaches $\frac{.25 r}{r} = .25$.
The derivative is negative,\footnote{$dV/dr = .5 - \frac{2r+1}{4\sqrt{r^2+r}} $, which has the same sign, after multiplying by $4 \sqrt{r^2+r}$, as $ 2\sqrt{r^2+r} -2r -1 $. Squaring the first term gives $4r^2+4r$; squaring the second gives $4r^2+4r+1$. Thus, the derivative is negative. } and $V(r)$ has a lower bound of $.25$. Recall from Example 1 that if the surplus falls to 0 after breakdown, the equilibrium probability of breakdown is .5. If the players are patient, agreement takes longer, but the per-period cost of delay is low enough to outweigh the extra waiting.
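The closed form for $V$ and its limits of $.5$ and $.25$ can be confirmed numerically; a sketch:

```python
from math import sqrt, isclose

def x_sym(r):
    """Symmetric equilibrium toughness x(r) = 6r - 6*sqrt(r^2+r) + 6."""
    return 6 * r - 6 * sqrt(r ** 2 + r) + 6

def V_closed(r):
    """Closed form: V = r/2 + 1/2 - sqrt(r^2+r)/2."""
    return r / 2 + 0.5 - sqrt(r ** 2 + r) / 2

def V_defined(r):
    """V = (1+r)(1-p)(1/2)/(1+r-p) with p = 2x/12 at equilibrium."""
    p = 2 * x_sym(r) / 12
    return (1 + r) * (1 - p) * 0.5 / (1 + r - p)

for r in (0.01, 0.1, 1.0, 10.0):
    print(r, isclose(V_closed(r), V_defined(r)))   # True: the two forms agree
print(round(V_closed(1e-9), 4), round(V_closed(1e9), 4))  # limits .5 and .25
```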
In the Rubinstein model, the split approaches 50-50 as the discount rate approaches zero. Here the split is 50-50 for any common discount rate, but the per-period probability of breakdown rises toward one as the discount rate falls toward zero, since delay becomes nearly costless. Note, however, that for a fixed discount rate, we could generate a near-zero breakdown rate by using a more convex breakdown function, as in Example 2.
We do not have Rubinstein's first-mover advantage, because the present model does not have one player at a time making an offer. Nor do we necessarily have agreement in the first round, as in his equilibrium, though agreement becomes very likely if we use the convex breakdown function of Example 2. Another of his major results, though--- that having a lower discount rate gives a player a bigger share of the pie--- is present in the breakdown model, as we will next explore.
\begin{center}
{\sc Figure 2:\\
Reaction Curves for Toughnesses $x_1$ and $x_2$ \\
(a) $r_1=r_2=.05$ \hspace{48pt} (b) $r_1=.25$, $r_2=.05$ }
\label{reaction-curves.pdf}
\includegraphics[width=2in]{example-5-r05reaction-curves.pdf} \includegraphics[width=2in]{reaction-curves.pdf}
\end{center}
What if Player 1 has a lower discount rate than Player 2? No such neat functional form as in Rubinstein (1982) can be derived for the quartic functions $x_1$ (see equation \eqref{x1discounting}) and $x_2$ in terms of $r_1$ and $r_2$, but particular reaction functions show us what is going on. We have already seen that $\frac{\partial x_i}{\partial r_i}<0$. The reaction curves are plotted in $(x_1,x_2)$ space in Figure 2. In the relevant range, near where they cross, they are downward sloping. Not only does this make the equilibrium unique, it also tells us that the indirect effect of an increase in $r_1$ goes in the same direction as the direct effect. If $r_1$ rises, that reduces $x_1$ directly; the lower $x_1$ increases $x_2$, which in turn reduces $x_1$ further, and so the indirect effects continue ad infinitum. Table 2 above shows the equilibrium toughnesses and pie split for various combinations of discount rates.
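An asymmetric equilibrium can be found numerically by iterating the two reaction functions until they converge; for these parameters the iteration appears to converge quickly. The sketch below uses the Example 1 forms with the illustrative pair $r_1=.1$, $r_2=.5$ and reproduces the corresponding Table 2 entry of roughly $5.2/3.3$ and a 61\% share for Player 1.

```python
from math import sqrt

def reaction(x_other, r):
    """Best-response toughness, equation (x1discounting)."""
    return (-12 * r * x_other + x_other ** 2 - 12 * x_other
            + 12 * sqrt(r * x_other * (12 * r - x_other + 12))) / (12 * r - x_other)

def equilibrium(r1, r2, iters=100):
    """Iterate best responses from a symmetric starting point to a
    fixed point of the two reaction functions."""
    x1 = x2 = 4.0
    for _ in range(iters):
        x1 = reaction(x2, r1)
        x2 = reaction(x1, r2)
    return x1, x2

x1, x2 = equilibrium(0.1, 0.5)
print(round(x1, 1), round(x2, 1), round(x1 / (x1 + x2), 2))  # 5.2 3.3 0.61
```

The more patient Player 1 chooses greater toughness and takes the larger share, as in Rubinstein.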
\noindent
{\sc 5. Concluding Remarks}
The purpose of this model is to show how a simple and intuitive force--- the fear of inducing bargaining breakdown by being too tough--- leads to a 50-50 split of the pie being the unique equilibrium outcome. Such a model also implies that the more risk-averse player gets a smaller share of the pie, and it can be easily adapted to $n$ players. All this has been in the context of complete information. I hope to write a companion paper on how incomplete information can be incorporated into the model.
\bigskip
\noindent
{\bf References}
\noindent
{\bf Baron}, David P. \& John A. {\bf Ferejohn} (1989) ``Bargaining in Legislatures,''
{\it The American Political Science Review} 83(4): 1181-1206 (December 1989).
\noindent
{\bf Binmore}, Ken G. (1980) ``Nash Bargaining Theory II,'' ICERD, London School of Economics, D.P. 80/14 (1980).
\noindent
{\bf Binmore}, Ken G. (1985) ``Bargaining and Coalitions,'' in {\it Game-Theoretic Models of Bargaining,} Alvin Roth, ed., Cambridge: Cambridge
University Press (1985).
\noindent
{\bf Binmore}, Ken G., Ariel {\bf Rubinstein} \& Asher {\bf Wolinsky} (1986)
``The Nash Bargaining Solution in Economic Modelling,''
{\it The RAND Journal of Economics} 17(2): 176-188 (Summer 1986).

\noindent
{\bf Connell,} Christopher \& Eric {\bf Rasmusen} (2018) ``Divide the Dollar: Mixed Strategies in Bargaining under Complete Information,''
(September 2018). [As of September 14 we do not have a working paper version, because we just came across Malueg's paper and need to revise heavily to note his contributions.]
\noindent
{\bf Malueg}, David A. (2010) ``Mixed-Strategy Equilibria in the Nash Demand Game,'' {\it Economic Theory} 44: 243–270 (2010).
\noindent
{\bf Nash}, John F. (1950) ``The Bargaining Problem,''
{\it Econometrica} 18(2): 155-162 (April 1950).
\noindent
{\bf Osborne}, Martin (1985) ``The Role of Risk Aversion in a Simple Bargaining Model,'' in {\it Game-Theoretic Models of Bargaining,} Alvin Roth, ed., Cambridge: Cambridge
University Press (1985).
\noindent
{\bf Osborne}, Martin \& Ariel {\bf Rubinstein} (1990) {\it Bargaining and Markets,} Bingley: Emerald Group Publishing (1990).
\noindent
{\bf Rasmusen}, Eric (1989/2007)
{\it Games and Information: An Introduction to Game Theory}, Oxford: Blackwell Publishing (1st ed. 1989; 4th ed. 2007).
\noindent
{\bf Roth}, Alvin E. (1977) ``The Shapley Value as a von Neumann-Morgenstern Utility Function,''
{\it Econometrica } 45: 657-664 (1977).
\noindent
{\bf Roth}, Alvin E. (1985) ``A Note on Risk Aversion in a Perfect Equilibrium Model of Bargaining,''
{\it Econometrica } 53(1): 207-212 (January 1985).
\noindent
{\bf Rubinstein,} Ariel (1982) ``Perfect Equilibrium in a Bargaining Model,''
{\it Econometrica} 50(1): 97-109 (January 1982).
\noindent
{\bf Shaked}, Avner \& John {\bf Sutton} (1984) ``Involuntary Unemployment as a Perfect Equilibrium in a Bargaining Model,''
{\it Econometrica} 52 (6): 1351-1364 (November 1984).
\noindent
{\bf Spaniel}, William (2014)
{\it Game Theory 101: Bargaining} (2014).
\end{document}