\documentclass[12pt,reqno,twoside,usenames,dvipsnames]{amsart}
\usepackage{amscd}
%\usepackage{fullpage}
\usepackage{marvosym}
\usepackage{amsmath}
\usepackage{amsgen}
\usepackage{amsbsy}
\usepackage{amsopn}
\usepackage{amsthm}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{mathrsfs}
\usepackage{hyperref}
\usepackage{ifsym}
\usepackage[usenames,dvipsnames]{color}
\newcommand{\red}[1]{{\color{red}{#1}}}
\newcommand{\blue}[1]{{\color{blue}{#1}}}
\newcommand{\green}[1]{{\color{green}{#1}}}
\usepackage{todonotes}
\usepackage[marginratio=1:1,height=8.5in,width=6.5in,tmargin=1.25in]{geometry}
\usepackage[normalem]{ulem}
\usepackage{enumerate,enumitem}
%\usepackage{tikz-cd}
%\usepackage[pdftex]{hyperref}
%\usepackage[usenames]{color}
\usepackage{graphicx}
%\usepackage{mathbbol}
\usepackage{comment}
%\pdfoutput=1
\usepackage{latexsym}
\usepackage{MnSymbol}
%\DeclareFontFamily{U}{MnSymbolC}{}
%\DeclareSymbolFont{MnSyC}{U}{MnSymbolC}{m}{n}
%\DeclareFontShape{U}{MnSymbolC}{m}{n}{
% <-6> MnSymbolC5
% <6-7> MnSymbolC6
% <7-8> MnSymbolC7
% <8-9> MnSymbolC8
% <9-10> MnSymbolC9
% <10-12> MnSymbolC10
% <12-> MnSymbolC12}{}
%\DeclareMathSymbol{\intprod}{\mathbin}{MnSyC}{'270}
%\DeclareMathSymbol{\intprod1}{\mathbin}{MnSyC}{'271}
%\DeclareMathSymbol{\intprod2}{\mathbin}{MnSyC}{'269}
\newcommand{\marg}[1]
{\mbox{}\marginpar{\tiny\hspace{0pt}#1}}
\newcommand{\mc}{\mathcal}
\newcommand{\mbf}{\mathbf}
\newcommand{\mbb}{\mathbb}
\newcommand{\bb}{\mathbb}
\newcommand{\mf}{\mathfrak}
\newcommand{\msf}{\mathsf}
\newcommand{\scr}{\mathscr}
\newtheorem{Theorem}{Theorem}
\newtheorem{Lemma}[Theorem]{Lemma}
\newtheorem{lemma}[Theorem]{Lemma}
\newtheorem{Corollary}[Theorem]{Corollary}
\newtheorem{Proposition}[Theorem]{Proposition}
\newtheorem{Conjecture}[Theorem]{Conjecture}
\newtheorem{Definition}{Definition}
\newtheorem{Example}[Theorem]{Example}
\newtheorem{Remark}{Remark}
\newcommand{\Vol}{\operatorname{Vol}}
\DeclareMathOperator{\vol}{vol}
\DeclareMathOperator{\m}{\mf{m}}
\newcommand{\pa}{\partial}
\newcommand{\del}{\partial}
\newcommand{\op}{\operatorname}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\inner}[1]{\left\langle #1 \right\rangle}
\newcommand{\comp}{\mbox{\tiny{o}}}
\newcommand{\QED}{{\hfill$\Box$\medskip}}
\newcommand{\set}[1]{\left\{ #1 \right\} }
\newcommand{\tensor}{\otimes}
\newcommand{\grad}{\nabla}
\newcommand{\til}{\widetilde}
\newcommand{\of}{\circ}
\newcommand{\ol}{\overline}
\newcommand{\wh}{\widehat}
\newcommand{\h}{{hyp}}
\DeclareMathOperator{\Aut}{Aut}
\DeclareMathOperator{\Aff}{Aff}
\DeclareMathOperator{\Minvol}{Minvol}
\DeclareMathOperator{\Jac}{Jac}
%\DeclareMathOperator{\RCD}{RCD}
\DeclareMathOperator{\aint}{\strokedint}
\DeclareMathOperator{\argmin}{argmin}
\newcommand{\restr}{\mathbin{\raisebox{\depth}{\scalebox{1.5}{$\llcorner$}}}}
\DeclareMathOperator{\tr}{tr}
\DeclareMathOperator{\supp}{supp}
\newcommand{\cout}[1]{}
\newcommand{\co}{\colon\thinspace}
\newcommand{\R}{{\bf R}}
\newcommand{\Z}{{\bf Z}}
\newcommand{\N}{{\bf N}}
\newcommand{\C}{{\bf C}}
\newcommand{\HH}{{\bf H}}
\newcommand{\HK}{{\bf H}_{\bf{K}}}
\newcommand{\OO}{{\bf O}}
\newcommand{\eps}{\epsilon}
\newcommand{\Ga}{\Gamma}
\newcommand{\ga}{\gamma}
\newcommand{\la}{\lambda}
\newcommand{\La}{\Lambda}
\begin{document}
\titlepage
\vspace*{12pt}
\begin{center}
{\large {\bf Divide the Dollar: Mixed Strategies in Bargaining under Complete Information
}}
September 8, 2018
\bigskip
{\it Abstract}
\end{center}
We find the unique mixed-strategy equilibrium with continuous support for the classic bargaining game in which each player simultaneously bids for a share of the pie and receives a share proportional to his bid unless the two bids add to more than 100\%, in which case both players receive nothing. The equilibrium is unique for given $a \in (0,.5)$ and consists of an atom of probability at $a$ and a convex increasing density $f(v)$ on $[a, 1-a]$. The equilibrium generates a continuum of possible bargaining outcomes, with positive probability of either disagreement or a 50-50 split. \\
\begin{small}
\noindent
Connell: Indiana University, Department of
Mathematics, Rawles Hall,
RH 351, Bloomington,
Indiana, 47405. (812) 855-1883. Fax:(812) 855-0046.
\href{mailto:connell@indiana.edu}{connell@indiana.edu}.\\
Rasmusen: Dan R. and Catherine M.
Dalton
Professor, Department of Business Economics and Public Policy, Kelley
School
of Business, Indiana University. 1309 E. 10th Street,
Bloomington,
Indiana, 47405-1701. (812) 855-9219.
\href{mailto:erasmuse@indiana.edu}{ erasmuse@indiana.edu}.
{\small
\noindent This paper:
\url{http://www.rasmusen.org/mixedpie.pdf}. }
{\small
\noindent
Keywords: bargaining, splitting a pie, Rubinstein model, Nash bargaining solution, hawk-dove game, Nash Demand Game, Divide the Dollar}
\end{small}
\newpage
\noindent
{\bf 1. Introduction}
A fundamental problem in game theory is how to model bargaining between two players who must agree on shares of a surplus or receive zero payoffs. In the classic bargaining problem, ``Splitting a Pie'' (also called the Nash Demand Game or Divide the Dollar), the two players simultaneously bid for shares, and there exists a continuum of equilibria in which the shares add up to one. This is what one might call a folk model, so simple that no one ever published it as original, and its origins are lost in time. A multitude of other models of bargaining exist, of which the two best known are the Nash bargaining solution (1950, 8,514 cites on Google Scholar) and the Rubinstein model (1982, 6,017 cites). Nash takes the approach of cooperative game theory and finds axioms which guarantee a 50-50 split of surplus. Rubinstein takes the approach of an infinite-horizon alternating-offers game with discounting and finds close to a 50-50 split, with a small advantage to whichever player makes the first offer. In both models, the players always agree in equilibrium. Other economists have incorporated incomplete information into their models, in which case failure to agree can occur in equilibrium when a player refuses to back down because he thinks, wrongly, that the other player's payoff function will lead him to accept the proposal.
In this paper, we return to the classic bargaining problem and look at mixed strategy equilibria in it. Some of these equilibria are ``folk equilibria''; interesting, but easily derived by anyone with modest experience in game theory. We will talk about those equilibria,
but we will focus on the possibility of mixing over a continuum of proposals. Such equilibria are known in what we call the ``easy game,'' in which, when proposals add up to less than one, each player takes his proposal as his share and the remaining bargaining surplus is discarded. We focus on the more natural game in which each player receives a share proportional to his proposal. We think this better represents the idea of players choosing how aggressively to bargain given the risk of pushing the other player to disagreement. In
equilibrium this game's most probable outcome is either a 50\% split or disagreement, but with a continuum of other possible outcomes sharing the pie between the players depending on how aggressively they bid. We obtain this result without the assumption of incomplete information or a continuum of types of players. The equilibrium consists of an atom of probability at some bid $a$ less than 50\% and an increasing mixing density for bids between $a$ and $1-a$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\sc Pure Strategies or Hawk-Dove Mixing over Two Actions }
In the classic bargaining game, sometimes called ``Splitting a Pie,'' two players simultaneously choose shares of a pie in the interval [0,1]. If their bids for shares add up to more than 100\%, both get zero and we say the pie ``explodes''. Otherwise, when their bids are $p_1+p_2\leq 1$, player 1 gets share $p_1/ (p_1+p_2)$ and player 2 gets share $p_2/ (p_1+p_2)$.
This game has a continuum of pure-strategy Nash equilibria: every pair $(p_1, p_2)$ such that $p_1+p_2 = 1$, and in these equilibria the pie never explodes.
What about mixed strategies? These do generate bargaining breakdown. One set of mixed-strategy equilibria is the Hawk-Dove set, which we so term because these equilibria are mathematically the same as the well-known biological model of creatures deciding whether to pursue aggressive or pacific strategies. These are symmetric equilibria in which each player chooses $a$ with probability $\theta$ and $b$ with probability $1-\theta$, for $a \leq .5$ and $a+b =1$.
Suppose the two bids did not add up to one. Then it would be a profitable deviation to raise the lower bid, since it would increase the player's share without increasing the probability of exploding the pie. The mixing probability must make the expected payoff of each action the same in equilibrium, so
\begin{equation}
\pi(a) = \theta (.5) +(1-\theta)a = \pi(b) = \theta b +(1-\theta) (0),
\end{equation}
which solves to
\begin{equation}
\theta = 2a \qquad \qquad \pi = 2a - 2a^2.
\end{equation}
The players share the pie equally in equilibrium with probability $4a^2$, and the pie explodes with probability $(1-2a)^2$. Note that there is a continuum of equilibria and that they can be Pareto-ranked, with higher payoffs the closer $a$ is to .5. In the limit as $a$ goes to zero, both players choose $b=1$ with probability 1 and the expected payoff is zero.
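These Hawk-Dove quantities are easy to check numerically. The following sketch (our illustration; the value $a=.3$ is an arbitrary choice) verifies the indifference condition and the outcome probabilities:

```python
# Hawk-Dove mixing in Splitting a Pie (two players): each player bids a
# with probability theta and b = 1 - a with probability 1 - theta.
def hawk_dove(a):
    theta = 2 * a                         # equilibrium mixing probability
    b = 1 - a
    pi_a = theta * 0.5 + (1 - theta) * a  # expected payoff from bidding a
    pi_b = theta * b                      # expected payoff from bidding b
    return theta, pi_a, pi_b, theta**2, (1 - theta)**2

theta, pi_a, pi_b, p_split, p_explode = hawk_dove(0.3)
# pi_a == pi_b == 2a - 2a^2; equal split w.p. 4a^2; explosion w.p. (1-2a)^2
```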
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\sc Mixed-Strategy Continuous-Support Bargaining in the Easy Game }
We will start with a version of Splitting a Pie that is easier to solve. This ``Easy Game'' seems to be widely known, though we do not know whether it has ever been published; we have not found it anywhere, but we know game theorists who are aware that it has been solved. The easy game differs from Splitting a Pie in what happens if the shares the players choose add up to less than one. It uses the following assumption:
\noindent
{\bf The Easy Game's Assumption. } If $p+v<1$, the player who bids $p$ gets $p$ and the player who bids $v$ gets $v$. The remainder of the pie, amount $1-p-v$, is discarded.
This rule is somewhat strange since it says that even though the players have come to agreement, what they agree to is to throw away valuable surplus. The more natural way to model bargaining, used in Splitting a Pie, is that the two players split the pie in proportion to their bids.
We will solve for the equilibrium of the easy game, however, before we go on to the equilibrium for the first game.
First, there is still the usual continuum of pure-strategy equilibria: every pair $(p_1, p_2)$ such that $p_1+p_2 = 1$. There is never bargaining breakdown in these equilibria.
There is also a continuum of easy-game Hawk-Dove equilibria. They are not the same as those we found before. Again, let each player choose $a$ with probability $\theta$ and $b$ with probability $1-\theta$, for $a \leq .5$ and $a +b =1$. The mixing probability must make the expected payoff of each action the same in equilibrium, so
\begin{equation}
\pi(a) = a = \pi(b) = \theta b +(1-\theta) (0),
\end{equation>
which solves to
\begin{equation}
\theta = \frac{a}{1-a} \qquad \qquad \pi = a.
\end{equation}
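A quick numerical check of this mixing probability (our illustration; the values of $a$ are arbitrary): a bid of $a$ always earns $a$, while a bid of $b=1-a$ earns $b$ only when the rival bids $a$.

```python
# Easy-game Hawk-Dove mixing: a bid of a always earns a (the two bids
# never sum to more than one), while a bid of b = 1 - a earns b only
# when the rival bids a, which happens with probability theta.
def easy_hawk_dove(a):
    b = 1 - a
    theta = a / (1 - a)    # equilibrium mixing probability
    pi_a = a               # bidding a earns a for sure
    pi_b = theta * b       # bidding b earns b against a rival bid of a
    return theta, pi_a, pi_b

theta, pi_a, pi_b = easy_hawk_dove(0.25)
# indifference: pi_a == pi_b == 0.25
```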
Let us now consider a mixed-strategy equilibrium, not necessarily symmetric, in which the players use the probability measures $d\mu_1(v)$ and $d\mu_2(v)$ on $[a,b]$. We can write $\mu=\nu + f(v)dv$, where $f(v)dv$ is the absolutely continuous part (with respect to Lebesgue measure on $[a,b]$) and $\nu$ is the possibly nontrivial singular part. We will see that $\nu$ consists of a single atom $Q_a\delta_a$. We will write the cumulative probability distribution as $M(x)=\mathbb{P}(v\leq x)=\mu([a,x])$. Under the assumption that $\nu=Q_a\delta_a$, if one player mixes according to $d\mu(v)$, the other player's payoff from bidding $p$ in $[a,b]$ is
\begin{equation}
\pi (p) = Q_ap+\int_a^{1-p} p f(v) dv = k ,
\end{equation}
which equals
\begin{equation}
\pi (p) = Q_a p + \left[ F(v)\, p \right]_a^{1-p} = Q_a p + F(1-p) \cdot p - F(a) \cdot p = k,
\end{equation}
where $F$ is the antiderivative of $f$ with $F(a)=0$ and $Q_a$ is the size of the atom at $a$.
The same reasoning applies as in the original game, so $a>0$, $b= 1-a$, and $Q_a >0$.
A bid of $a$ yields a payoff of $a$ with probability 1, so $\pi(a) = a$. A bid of $b$ yields a payoff of $b$ with probability $Q_a$ for when the other player bids exactly $a$ and a payoff of 0 otherwise. Thus, in equilibrium $\pi(b) = Q_a b =\pi(a)= a$, and we can conclude that $Q_a = a/b$.
Thus, we have that
\begin{equation} \label{e-payoff}
\pi (p) = M(1-p) \cdot p=\left(\frac{a}{b}+F(1-p)\right) \cdot p = a.
\end{equation}
so, letting $v \equiv 1-p$,
\begin{equation}
\pi(v) = \left(\frac{a}{b}+F(v)\right) \cdot (1-v) = a.
\end{equation}
and
\begin{equation}
F(v) = \frac{a}{1-v } - \frac{a}{b}.
\end{equation}
% Note that this does not apply to $(b)$, because there, \eqref{e-payoff} does not apply because it would double-count the $Q_a$ term.
If we differentiate the absolutely continuous part of the cumulative distribution we get the mixing density shown in Figure \ref{fig-easydensity}, which is
\begin{equation}
f(v) = \frac{a}{(1-v)^2 },
\end{equation}
and this combined with the atom of $a/b$ at $v=a$ is the equilibrium strategy.
% \begin{minipage}{1.0\textwidth}
\begin{figure}[ht!]
\centering
\includegraphics[width=2in]{fig-easydensity}
\caption{\sc The Mixing Function, $ f(v)$, for the Easy Problem, $a = .3$ } \label{fig-easydensity}
\end{figure}
%\end{minipage}
Note that the mixing density is always strictly positive and increasing, from $f(a)= \frac{a}{(1-a)^2 }>0$ to $f(b)=\frac{a}{(1-b)^2 }=\frac{1}{a}$. Any value of $a$ between 0 and .5 can be chosen, so there is a continuum of these equilibria.
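The constancy of the payoff and the total probability mass can be verified directly. The sketch below (our illustration; $a=.3$ is an arbitrary choice) checks that the atom $a/b$ plus the density $f(v)=a/(1-v)^2$ gives every bid in $[a,b]$ the payoff $a$, and that the distribution has total mass one:

```python
a = 0.3
b = 1 - a
Qa = a / b                          # atom at v = a
F = lambda v: a / (1 - v) - a / b   # antiderivative of f with F(a) = 0
f = lambda v: a / (1 - v)**2        # mixing density on [a, b]

# payoff from bidding p: receive p whenever the rival bids at most 1 - p
payoffs = [Qa * p + F(1 - p) * p
           for p in [a + i * (b - a) / 10 for i in range(11)]]
mass = Qa + F(b)                    # atom plus integral of f over [a, b]
# every payoff equals a, and mass equals 1
```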
\bigskip
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\noindent
{\sc Mixed-Strategy Continuous-Support Bargaining in the Original Game }
Now let us return to our original bargaining game and its assumption that shares are proportional to aggressiveness, so if player 1 bids $p$ and player 2 bids $v$ with $p+ v \leq 1$ then player 1's payoff is
$p/(p+v)$.
Let us begin with some qualitative features the equilibrium must have. Let us again think of a symmetric equilibrium that has player 1 bidding pure strategy $p$ in response to player 2's use of the probability measure $d\mu(v)$ on $[a,b]$ to choose $v$, with continuous part $f(v)>0$ and possibly with atoms of probability.
\noindent
{\it (1) The lower bound is strictly greater than zero: $a>0$.}\\
If pure-strategy player 1 plays $p=0$, his expected payoff is zero unless the other player bids zero with a positive atom of probability $Q_0$, in which case it is $.5Q_0$, because the two players split the pie evenly. If $Q_0>0$, however, then the other player's payoff from bidding any price above zero is bounded below by $Q_0 (1)$, because he gets the entire pie with probability $Q_0$. Since $.5Q_0 < Q_0$, bidding zero would not be a best response for him, contradicting $Q_0>0$. Thus there can be no atom at zero, a bid of zero yields a payoff of zero, and any small positive bid yields a strictly positive expected payoff, so zero cannot be in the support: $a>0$.

\noindent
{\it (2) The bounds of the support satisfy $a+b=1$.}\\
Suppose the mixing player mixes over $[a,b]$ with $a+b>1$. Then if the pure player plays $p=b$, it always turns out that $p+v>1$ and his payoff is zero, but if he plays a smaller $p$ he can get a strictly positive payoff. Thus, $a+b \leq 1$.

But suppose the mixing player mixes over $[a, b]$ with $a+b<1$. Then if the pure player plays $p=a$, it always turns out that $p+v<1$. But if the pure player had played $p = 1-b>a$ instead, there would still be zero probability of breakdown, and he would get a bigger share of the pie. Thus, $a+b \geq 1$, which combined with our previous result means that $a+b=1$.
\noindent
{\it (3) If $f(v) >0$ on $(a,b]$, there cannot be a probability atom at any $v$ in $(a,b]$; an atom is possible only at $v=a$.} \\
Suppose there were such an atom, so that $v$ has positive probability. Playing the pure strategy $p=1-v$ against it has positive payoff, since there will be no breakdown. The pure strategy $p= 1-v +\epsilon$ for small enough $\epsilon$, however, is discontinuously worse than $p=1-v$, because with $p= 1-v +\epsilon$ there is a positive probability of breakdown and only a tiny increase in share when there is no breakdown. Hence, bids just above $1-v$ cannot be in the support, which is impossible if $f(v)>0$ on $(a,b]$, since then the support is all of $[a,b]$. Note that this does not apply to $v=a$, however, because then $1-v=b$ and the slightly higher bid would be $p = b+\epsilon$, which is not in the support of the mixing distribution and so does not have to yield the same payoff as bids within it.
\noindent
{\it (4) There is an atom $Q_a>0$ of probability at $v=a$.} \\
Suppose the probability of $v=a$ were zero. Then the pure strategy of $p=b$ would have zero expected payoff, since its payoff is positive only when $v=a$ and otherwise there is breakdown. But we know that the pure strategy $p=a$ has strictly positive payoff. Hence, if $v=a$ has zero probability, $b$ cannot be part of the mixing support, which contradicts our earlier conclusions. If, however, there is an atom $Q_a>0$ of probability at $v=a$, then the pure strategy of $p=b$ has strictly positive payoff, because there is strictly positive probability of the outcome ($p=b$, $v=a$, no breakdown). A slightly smaller $p$, just below $b$, will have only slightly less chance of breakdown, and a slightly lower share of the pie when there is no breakdown.
\noindent
{ \it (5) In equilibrium, the payoff from deviating to the pure strategy $b$ is $\pi(b) = Q_a (1-a)$. } \\
We have seen that $b= 1-a$ in equilibrium. Thus, if a player chooses $b$, the pie explodes and his payoff is zero unless his rival chooses $a$, which has probability $Q_a$. If that happens, our deviating player gets share $b=1-a$, so his expected payoff is $Q_a(1-a)$.
\noindent
{\it (6) In equilibrium the two players must use the same support.}\\ Suppose player 1 uses support $[a, 1-a]$. For player 2, bidding less than $a$ would generate a lower share than bidding $a$, while bidding $a$ never explodes the pie, so player 2's lower bound must also be $a$. For player 2 ever to bid more than $1-a$ would also be unprofitable, because the pie would always explode and the payoff would be zero.
What is left to show is that player 2 would not choose an upper bound $b$ less than $1-a$. Suppose it were. Then player 1 would deviate from choosing $a$ as his lower bound, because choosing $1-b$, which is greater than $a$, would yield a higher share and would not explode the pie. Thus, player 2 will also use $1-a$ as his upper bound in equilibrium.
\bigskip
If player 2 is using the probability measure $d\mu(v)$ to choose $v$, with continuous part $f(v)$ and atom $Q_a$ at $a$, then player 1's expected payoff from the pure strategy of bidding $p$ is
\begin{equation} \label{payoff-p}
\pi(p) = \int_a^{1-p} \frac{p}{p+v} d\mu(v) =Q_a \frac{p}{p+a}+\int_a^{1-p} \frac{p}{p+v}f(v) dv
\end{equation}
for $p\in [a,b]$. All bids $p$ must have the same pure-strategy payoff when the other player uses his equilibrium mixing distribution. Using our finding that $\pi(b) = Q_a (1-a)$, we thus know that $ \pi(p)= Q_a (1-a)$ for all $p$ in $[a,b]$ so we also know that
\begin{equation} \label{payoff}
Q_a \frac{p}{p+a}+\int_a^{1-p}\frac{p}{p+v}f(v) dv = Q_a (1-a)\;\;\; ({\it the\; crucial \;equation})
\end{equation}
We will use equation \eqref{payoff} to find a solution for $f(v)$ in the next section of the paper. First, though, let us use the derivative of the payoff equation to help us with the intuition of what the solution will look like.
When the payoff from the pure-strategy best response of $p$ is constant, the derivative of the payoff in \eqref{payoff-p} must equal zero. That derivative is (after using the change of variable $x=1-p$, reversing that change, and collecting terms),
\begin{equation} \label{payoffderivative}
\frac{d \pi(p)}{dp} = \frac{a Q_a }{ (p+a)^2} + \int_a^{1-p} \frac{v}{(p+v)^2} f(v) dv - p f(1-p) =0
\end{equation}
This expression has important intuition. The first two terms are the advantages of using a higher bid $p$. The first term represents the extra payoff from the bargainer increasing his share when the other player plays the atom bid of $a$, in which case the pie never explodes. The second term represents the extra payoff from increasing his share when the other player plays between $a$ and $1-p$, also where the pie never explodes but a limited range because if the other player plays higher than $1-p$ the pie does explode. The third term is the disadvantage to the bargainer of bidding higher. It represents the increase in the probability that the pie explodes and he loses the fraction he would otherwise have gotten. Thus, his tradeoff in choosing $p$ is between the first two terms and the third term.
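The differentiation step behind \eqref{payoffderivative} can also be checked numerically. The sketch below (our illustration; the test density $f \equiv 1$, the atom $Q_a = .5$, and the point $p = .4$ are arbitrary choices, not an equilibrium) compares a central finite difference of the payoff \eqref{payoff-p} with the three terms of the derivative formula:

```python
def simpson(g, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3

a, Qa, p = 0.3, 0.5, 0.4
f = lambda v: 1.0            # arbitrary test density (not the equilibrium f)

def payoff(p):               # expected payoff from bidding p
    return Qa * p / (p + a) + simpson(lambda v: p / (p + v) * f(v), a, 1 - p)

h = 1e-5
numeric = (payoff(p + h) - payoff(p - h)) / (2 * h)
formula = (a * Qa / (p + a)**2
           + simpson(lambda v: v / (p + v)**2 * f(v), a, 1 - p)
           - p * f(1 - p))
# numeric and formula agree to high precision
```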
We can rewrite the last equation using the change of variables $x \equiv 1-p$ as
\begin{equation} \label{payoffderivative1}
f(x) = \frac{a Q_a }{ (1-x)(1-x+a)^2} + \int_a^{x} \frac{v}{(1-x)(1-x+v)^2} f(v) dv
\end{equation}
Note that $f(x)$ is strictly positive, with $f(a ) = \frac{a Q_a }{ 1-a } $.
We can differentiate this to get
\begin{equation}
\begin{array}{lll}
f'(x)& =& \frac{2a Q_a }{ (1-x)(1-x+a)^3} +\frac{a Q_a }{ (1-x)^2(1-x+a)^2 }\\
&& +\frac{x f(x) }{ (1-x) } + \int_a^{x} \frac{2v}{(1-x)(1-x+v)^3} f(v) dv + \int_a^{x} \frac{ v}{(1-x)^2(1-x+v)^2} f(v) dv \\
\end{array}
\end{equation}
Note that $f'>0$, since $x <1$.
Differentiating again,
\begin{equation} \label{payoffderivative2}
\begin{array}{lll}
f''(x)& =& \frac{6a Q_a }{ (1-x)(1-x+a)^4} +\frac{4a Q_a }{ (1-x)^2(1-x+a)^3 }+\frac{2a Q_a }{ (1-x)^3(1-x+a)^2 }\\
&&\\
&& +\frac{ f(x) }{ (1-x) } +\frac{ 2xf(x) }{ (1-x)^2 }+\frac{ 2xf(x) }{ 1-x } +\frac{ xf'(x) }{ 1-x } \\
&&\\
&&+ \int_a^{x} \frac{6v}{(1-x)(1-x+v)^4} f(v) dv + \int_a^{x} \frac{ 4v}{(1-x)^2(1-x+v)^3} f(v) dv \\
&&\\
&& + \int_a^{x} \frac{ 2v}{(1-x)^3(1-x+v)^2} f(v) dv \\
\end{array}
\end{equation}
Note that the second derivative of $f(x)$ is positive, and, indeed, every derivative will be positive. Taking the derivative always leaves the fractions positive: they start positive and have the terms $(1-x)^m (1-x+a)^n$ in the denominator for positive integers $m$ and $n$, and the numerator is either a positive constant, a factor of $v$ inside an integral, or a positive multiple of $x$, of $f(x)$, or of a lower-order derivative of $f$. Differentiating the integrals always generates new integrals with the same bounds $a$ and $x$, which thus remain positive.
Proposition 1 collects the most interesting results we have discovered.
\noindent
{\bf Proposition 1}.
Any continuous mixing equilibrium for the classic bargaining game would consist of an atom of probability $Q_a$ at $p=a$ and a convex density $f(p)$ on $[a, 1-a]$. The density $f(p)$ would begin with $f(a) = \frac{a Q_a }{ 1-a }$, increase in $p$, and have equilibrium payoff $\pi = Q_a (1-a)$. Every derivative of $ f(p)$ would be positive.
We have not yet shown that a continuous mixing equilibrium exists, or that it is unique for a given support $[a,1-a]$, or found a solution for $f(p)$.

In the next section we will do that, using a power series approach to solve our crucial equation (\ref{payoff}) for $f(v)$ and $Q_a$ for given bid support $[a, 1-a]$. Note that equation (\ref{payoff}) characterizes not just symmetric equilibria, but all equilibria mixing over $[a, 1-a]$, since it describes the $f(v)$ that, when used by one player, makes the other player willing to mix at all. Thus, if we find a unique solution, we will have found that the equilibrium is symmetric.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\sc A Continuous Distribution Solution}

Our next task is to find the $f(v)$ and $Q_a$ that give the players a constant payoff for all pure strategies that they mix over. For a general signed probability measure $\mu$ supported on $[a,1-a]$ the equation corresponding to the crucial constant-payoff equation (\ref{payoff}) would be, after setting $t=1-p$,
\begin{equation}\label{eq:fund}
\int_a^t \frac{1-t}{1-t+v} d\mu(v)=k
\end{equation}
for some constant $k \in [a,1-a]$ and all $t\in [a,1-a]$.
We know that our solution must be a probability measure and hence positive, but it will be useful for our solution technique to note that even if we did not impose that requirement and allowed any signed measure, the solution would still be a positive measure:
\noindent
{\it Any solution to \eqref{eq:fund} over signed measures must be a positive measure.} The function $\frac{1-t}{1-t+v}=\frac{1}{1+\frac{v}{1-t}}$ is positive and decreasing in both $t$ and $v$ over $[a,1-a]$. So if \eqref{eq:fund} is satisfied for $t\in [a,T]$ and $\mu$ is nonnegative on $[a,T]$ then
\begin{align*}
\mu([T,T+\eps]) &\geq \int_T^{T+\eps} \frac{1-T-\eps}{1-T-\eps+v} d\mu(v)=k-\int_a^T \frac{1-T-\eps}{1-T-\eps+v} d\mu(v) \\
&=\int_a^T \left(\frac{1-T}{1-T+v}-\frac{1-T-\eps}{1-T-\eps+v}\right) d\mu(v)\geq 0
\end{align*}
Consequently, $\mu$ is nonnegative on all of $[a,1-a]$, since the hypothesis holds at the initial point $T=a$. This will help us prove existence, since it is easier to solve in the affine space of signed measures and then conclude that the solution lies in the positive cone of nonnegative probability measures.
Thus, let us now assume that the mixing distribution $\mu$ is a measure on $[a,1-a]$ consisting of a single Dirac mass at $a$, as is necessary, together with a part in the Lebesgue class,
\[
\mu(v)=\frac{k}{1-a} \delta_a(v) +m(v)dv.
\]
Here $m$ is any function in $L^1([a,1-a])$.
Rewriting the fundamental constant-payoff equation \eqref{eq:fund} with this $\mu$ we obtain,
\begin{equation}
\int_a^t \frac{1-t}{1-t+v} m(v)dv + \frac{(1-t)k}{(1-a)(1-t+a)} = k.
\end{equation}
Moving the contribution from the point mass to the right hand side yields
\begin{equation}\label{eq:main}
\int_a^t \frac{1-t}{1-t+v} m(v)dv=k\left(1-\frac{1-t}{(1-a)(1-t+a)}\right)=\frac{k a (t-a)}{(1-a)(1+a-t)}
\end{equation}
for all $t\in [a,1-a]$.
Differentiating both sides of \eqref{eq:main} by $t$ yields
\begin{align}\label{eq:recurse}
(1-t)m(t)-\int_a^t \frac{v}{(1-t+v)^2} m(v)\, dv=\frac{k a}{(1-a) (a-t+1)^2}
\end{align}
We obtain, for example by setting $t=a$, that $m(a)=\frac{k a}{(1-a)^2}$.
Taking the first derivative of both sides of equation \eqref{eq:recurse} we obtain
\[
(1-t)m'(t)-(1+t)m(t)-2\int_a^t \frac{v}{(1-t+v)^{3}} m(v)\, dv=\frac{2 k a }{(1-a) (1-t+a)^{3}}
\]
Recursively define $p_i(t)$ by
\begin{align}\label{eq:p-rec}
p_0(t)=(1-t)m(t)\quad\text{and}\quad p_{i}(t)=p_{i-1}'(t)-i!\, t\, m(t) \quad\text{for}\quad i>0.
\end{align}
Taking the $n$-th derivative of equation \eqref{eq:recurse} in $t$ we obtain,
\begin{align}\label{eq:p-integ}
p_n(t)-(n+1)!\int_a^t \frac{v}{(1-t+v)^{n+2}} m(v)\, dv=\frac{k a (n+1)!}{(1-a) (1-t+a)^{n+2}}.
\end{align}
The recursion relation \eqref{eq:p-rec} can be solved for the $p_n(t)$ in \eqref{eq:p-integ} to get
\begin{align}\label{eq:p-exp1}
p_n(t)=(1-t)m^{(n)}(t)-\sum_{i=0}^{n-1} \left( (i+1)(n-1-i)! +(n-i)!\, t \right) m^{(i)}(t).
\end{align}
Evaluating \eqref{eq:p-integ} at $t=a$ gives $p_n(a)=\frac{k a (n+1)!}{(1-a)}$, since the integral vanishes. We can substitute this into \eqref{eq:p-exp1} evaluated at $t=a$ to get
\begin{equation}\label{eq:lin}
\frac{k a (n+1)!}{(1-a)} = (1-a)m^{(n)}(a)-\sum_{i=0}^{n-1} ((i+1)(n-1-i)! +(n-i)! a)m^{(i)}(a),
\end{equation}
which with rearranging terms becomes
\begin{equation}\label{eq:lin2}
m^{(n)}(a)=\frac{k a (n+1)!}{(1-a)^2}+\sum_{i=0}^{n-1} \frac{((i+1)(n-1-i)! +(n-i)! a)}{1-a} m^{(i)}(a).
\end{equation}
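The recursion \eqref{eq:lin2} is straightforward to implement. The sketch below (our illustration; $a=.3$, $k=1$, the truncation order, and the checkpoint $t=a+.01$ are arbitrary numerical choices) generates the derivatives $m^{(n)}(a)$, forms a truncated Taylor polynomial for $m$ near $v=a$, and checks it against the constant-payoff condition \eqref{eq:main} just above $t=a$:

```python
from math import factorial

def m_derivs(a, k, N):
    # derivatives m^(n)(a), n = 0..N, generated by the recursion
    d = []
    for n in range(N + 1):
        s = k * a * factorial(n + 1) / (1 - a)**2
        for i in range(n):
            s += ((i + 1) * factorial(n - 1 - i)
                  + factorial(n - i) * a) / (1 - a) * d[i]
        d.append(s)
    return d

a, k = 0.3, 1.0
d = m_derivs(a, k, 12)
m = lambda v: sum(d[i] * (v - a)**i / factorial(i) for i in range(len(d)))

t = a + 0.01                      # check the payoff condition near t = a
steps = 200
h = (t - a) / steps
lhs = sum((1 - t) / (1 - t + a + (j + 0.5) * h) * m(a + (j + 0.5) * h)
          for j in range(steps)) * h              # midpoint-rule integral
rhs = k * a * (t - a) / ((1 - a) * (1 + a - t))
# m(a) = k a/(1-a)^2, and lhs is close to rhs
```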
Equation \eqref{eq:lin2} is a recursive formula yielding the derivative $m^{(n)}(a)$ in terms of the lower derivatives $m^{(i)}(a)$ for $i<n$, starting from $m(a)=\frac{ka}{(1-a)^2}$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\sc Three or More Players}

With three players, the same support arguments as in the two-player game apply: a bid above $1-2a$ explodes the pie even against two bids of $a$, while an upper bound below $1-2a$ would leave a profitable upward deviation. Thus, the interval of mixing is $[a, 1-2a]$. Note that this requires $a< 1-2a$, so there will be an equilibrium only for $a \in (0, 1/3)$.
Player 1's expected payoff from the pure strategy of bidding $p$, where we use $v$ for player 2's bid and $w$ for player 3's, is something like this (not checked):
\begin{equation} \label{payoff3}
\begin{array}{ll}
\pi(p)& = \int_a^{1-p-v} ( \int_a^{1-p} \frac{p}{p+w+v} d\mu(v) ) d\mu(w) \\
&=2Q_a \int_a^{1-a-p }f(v) \frac{p}{p+v+a} dv + \int_a^{1-p-v} ( \int_a^{1-p} \frac{p}{p+w+v} f(v)dv ) f(w) dw \\
&= Q_a^2 (1-2a) \\
\end{array}
\end{equation}
for $p\in [a, 1-2a]$, where the last equality holds because all payoffs in the support must be equal. That is probably a dead end for us, but maybe we can use the Laplace transform there.
In the easy game, if there are three players, $\pi(a) =a$ still, and $\pi(b) = Q_a^2 b$. Thus, $a= Q_a^2 b$, so $Q_a = (\frac{a}{b})^{1/2}$. A bid of $p$ gives you $Q_a p$ plus $F(1-p)^2 p$. If we let $v \equiv 1-p$, we have $\pi(v) =[ (\frac{a}{b})^{1/2} + F(v)^2] (1-v)$, which equals $a$ since it has to have the same payoff as bidding $a$. Then
\begin{equation}
F(v) = \left( \frac{a}{1-v} - \left(\frac{a}{b}\right)^{1/2} \right)^{1/2}.
\end{equation}
We could differentiate that to find $f(v)$.
In the easy game with $n$ players, $\pi(a) =a$ still, and $\pi(b) = Q_a^{n-1} b$. Thus, $a= Q_a^{n-1} b$, so $Q_a = (\frac{a}{b})^{1/(n-1)}$. A bid of $p$ gives $\pi(p) = Q_a p+F(1-p)^{n-1} p$. If we let $v \equiv 1-p$, we have $\pi(v) =[ (\frac{a}{b})^{1/(n-1)} + F(v)^{n-1}] (1-v)$, which equals $a$ since it has to have the same payoff as bidding $a$. Then
\begin{equation}
\left[ \left(\frac{a}{b}\right)^{1/(n-1)} + F(v)^{n-1}\right] (1-v) = a
\end{equation}
and
\begin{equation}
\left(\frac{a}{b}\right)^{1/(n-1)} + F(v)^{n-1} = \frac{a }{1-v}
\end{equation}
and
\begin{equation}
F(v)^{n-1} = \frac{a }{1-v} - \left(\frac{a}{b}\right)^{1/(n-1)}
\end{equation}
and
\begin{equation}
F(v) = \left( \frac{a }{1-v} - \left(\frac{a}{b}\right)^{1/(n-1)} \right)^{1/(n-1)}.
\end{equation}
We could differentiate that to find $f(v)$.
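The constant-payoff property of these expressions can be verified directly. A minimal sketch (our illustration; the values $a=.2$, $n=3$, the bids checked, and the choice $b=1-2a$ are our assumptions, since the text leaves $b$ implicit here):

```python
a, n = 0.2, 3
b = 1 - (n - 1) * a              # assumed top bid (the text leaves b implicit)
Qa = (a / b) ** (1 / (n - 1))    # atom at the low bid a
# per the formulas above, F(1-p)^(n-1) = a/p - Qa,
# so pi(p) = Qa*p + (a/p - Qa)*p, which collapses to a for every bid p
pis = [Qa * p + (a / p - Qa) * p for p in (0.20, 0.25, 0.30)]
# each entry equals a = 0.2
```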
In Hawk-Dove bargaining, it should be straightforward to have 3 players. Let $\theta$ be the probability of bid $a$ and $(1-\theta)$ of bid $1-2a$, with $a < 1/3$.
\begin{equation}
\pi(a) = \theta^2 (1/3) + 2(1-\theta) \theta (a)+ (1-\theta)^2 (0)
\end{equation}
\begin{equation}
\pi(1-2a) = \theta^2 (1-2a) + 2(1-\theta) \theta (0)+ (1-\theta)^2 (0)
\end{equation}
Equating these we get
\begin{equation}
\theta^2 (1/3) + 2(1-\theta) \theta (a) = \theta^2 (1-2a),
\end{equation}
so $ \theta (1/3) + 2(1-\theta)a = \theta (1-2a)$ and
$ 2a = \theta (-1/3 +2a + 1 - 2a) $ and $\theta = 3a$.
Now do Hawk-Dove bargaining with $N$ players. Let $\theta$ be the probability of bid $a$ and $(1-\theta)$ of bid $1-(N-1)a$. Note that this means $a$ will have to be very small--- less than $1/N$.
\begin{equation}
\pi(a) = \theta^{N-1} (1/N) + (N-1)(1-\theta) \theta^{N-2} (a)+ 0
\end{equation}
\begin{equation}
\pi(1-(N-1)a) = \theta^{N-1} (1-(N-1)a) +0
\end{equation}
Equating these we get
\begin{equation}
\theta^{N-1} (1/N) + (N-1)(1-\theta) \theta^{N-2} (a)= \theta^{N-1} (1-(N-1)a)
\end{equation}
so, dividing by $\theta^{N-2}$
\begin{equation}
\theta (1/N) + (N-1)(1-\theta) a= \theta (1-(N-1)a)
\end{equation}
Then, $\theta [(1/N)- (N-1)a -1 + (N-1)a] + (N-1) a = 0$, and $ (N-1) a = \theta (1-1/N) $ and $ (N-1) a = \theta ( N-1)/N $
\begin{equation}
\theta = Na.
\end{equation}
Thus, with probability $(Na)^N$ we get an equal split. Suppose we set $a = 1/(N+1)$. Then the probability of an equal split falls with $N$ but asymptotes near $1/e \approx .37$. That choice makes the bids of Hawk and Dove, $2/(N+1)$ and $1/(N+1)$, nearly equal for large $N$. The probability of the pie exploding is $1 - \theta^N - N \theta^{N-1} (1-\theta)$. This rises with $N$ and approaches $1-2/e \approx .26$.
If $a = .1$, then the probability of an equal split falls with $N$ to about .025 at around $N=4$ and then rises slightly.
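These $N$-player Hawk-Dove quantities can be checked numerically. The sketch below (our illustration; the parameter values are arbitrary) verifies the indifference condition at $\theta = Na$ and computes the equal-split and explosion probabilities:

```python
def hawk_dove_N(a, N):
    # each of N players bids a w.p. theta and 1-(N-1)a w.p. 1-theta;
    # requires N*a < 1 so that theta = N*a is a probability
    theta = N * a                       # equilibrium mixing probability
    b = 1 - (N - 1) * a                 # the high ("Hawk") bid
    pi_low = theta**(N - 1) / N + (N - 1) * (1 - theta) * theta**(N - 2) * a
    pi_high = theta**(N - 1) * b
    split = theta**N                    # all N bid a and split evenly
    explode = 1 - theta**N - N * theta**(N - 1) * (1 - theta)
    return pi_low, pi_high, split, explode

pi_low, pi_high, split, explode = hawk_dove_N(0.1, 4)
# pi_low == pi_high (indifference) at theta = N*a
```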
\bigskip
\noindent
{\sc Discussion}
We see that Splitting a Pie does have a mixed-strategy equilibrium with a continuous support. We can find an expression depicting it by using Laplace transforms, but that expression is too complicated to tell us much. We can prove the equilibrium exists using a fixed-point theorem--or so we hope. And we can calculate an approximation to the equilibrium strategy by numerical methods.
The most common split is 50-50, which is the result when both players have chosen the lower bound of the support, $a$. It can also happen that no bargain is reached and the pie is lost. [How often does that happen? We will have to use numerical approximations.] And there can be a wide range of unequal shares, the most unequal possible ratio of shares being $\frac{a}{1-a}$. Bids just above $a$ are the least common, and the frequency rises all the way up to the upper bound of $1-a$.
We can tell a story based on this pattern. The players have a choice between bargaining hard or bargaining soft. The most common outcome is the 50-50 split that results when both play soft. Since they sometimes play hard, however, there will also be disagreement sometimes. Since there are many degrees of playing hard, there will be many possible splits. No player will push too hard, though, because he knows that even the soft player's bid is positive.
This story also fits the Hawk-Dove equilibrium, which is so much simpler that it is probably the preferred option. How about Hawk-Dove for three or more players?
We can give this a population interpretation too---each player uses a deterministic strategy, but the population of strategies follows the equilibrium distribution.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
\noindent
{\sc Concluding Remarks}
\newpage
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newpage
FEB 4. Dear Eric,
Along the lines of your approximation, say we approximate $f(v)$ by characteristic functions on
$n$ equal-width intervals on $(a,1-p)$. Like so:
$$
f(v)\approx \sum_{i=1}^n x_i \operatorname{Char}\!\left(a+\tfrac{(1-p-a)(i-1)}{n},\, a+\tfrac{(1-p-a)i}{n}\right)(v)
$$
Integrating one characteristic function $\operatorname{Char}\!\left(a+\tfrac{(1-p-a)(i-1)}{n},\, a+\tfrac{(1-p-a)i}{n}\right)(v)$ against the kernel $p/(p+v)$ gives:
$$
2 p \operatorname{ArcTanh}\!\left[\frac{1-a-p}{2n-(1-a-p)(1+2(n-i))}\right]
$$
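As a sanity check, this closed form can be compared against the logarithmic antiderivative of $p/(p+v)$; the particular values of $a$, $p$, $n$, $i$ below are arbitrary:

```python
# Verify: the integral of p/(p+v) over the i-th subinterval equals
# 2p * ArcTanh[(1-a-p) / (2n - (1-a-p)(1+2(n-i)))].
import math

a, p, n, i = 0.2, 0.3, 10, 4
lo = a + (1 - p - a) * (i - 1) / n   # left endpoint of the subinterval
hi = a + (1 - p - a) * i / n         # right endpoint

direct = p * math.log((p + hi) / (p + lo))   # antiderivative of p/(p+v)
closed = 2 * p * math.atanh((1 - a - p) / (2*n - (1 - a - p) * (1 + 2*(n - i))))
print(direct, closed)  # the two agree
```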
So we want to minimize (over the $x_i$'s) the difference
$$
\left| 2p \sum_{i=1}^n x_i \operatorname{ArcTanh}\!\left[\frac{1-a-p}{2n-(1-a-p)(1+2(n-i))}\right] - k \right|
$$
where $k$ is the average value of the sum (integrated over $p$). (Or, easier, do a least-squares fit to the mean value.)
Or differentiate and minimize the quantity
$$
\left| \sum_{i=1}^n x_i \left( \frac{np}{\bigl((1-a-p)(n+1-i)-n\bigr)\bigl(n-(n-i)(1-a-p)\bigr)}
+2 \operatorname{ArcTanh}\!\left[\frac{1-a-p}{2n-(1-a-p)(1+2(n-i))}\right] \right) \right|
$$
Or sum the squared terms; either way, over the interval $(a,1-p)$. That is, we can integrate
and minimize the resulting numbers for any given value of $n$.
So minimizing the $L^1$ norm is equivalent to finding $x_i >0$ such that
the sum of the $x_i$ is $n/(b-a)$ and
$$
\sum_{i=1}^n |x_i| w_i
$$
is as close to 0 as possible, where the weights $w_i$ (all positive) are, explicitly,
$$
w_i= b \operatorname{ArcTanh}\!\left[\frac{1-a-b}{2n-(1-a-b)(1+2(n-i))}\right]
-a \operatorname{ArcTanh}\!\left[\frac{1-2a}{2n-(1-2a)(1+2(n-i))}\right]
$$
This Mathematica can do.
I've added such an example (with just $n=10$ partitions) to the file to show how to do this.
Best,
Chris
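For reference, the weights are easy to compute outside Mathematica as well. A sketch in Python (the values $a=.2$, $b=.6$, $n=10$ are arbitrary illustrations), which also cross-checks each $w_i$ against the equivalent $p\ln$ form of the integral:

```python
# Compute the weights w_i and cross-check them against the p*log form of the
# integral, evaluated at p = b and p = a.
import math

def arg(p, i, n, a):
    return (1 - a - p) / (2*n - (1 - a - p) * (1 + 2*(n - i)))

def w(i, n, a, b):
    return b * math.atanh(arg(b, i, n, a)) - a * math.atanh(arg(a, i, n, a))

def w_log(i, n, a, b):
    # p * atanh(arg) = (p/2) * ln((p+hi)/(p+lo)) on the i-th subinterval.
    def term(p):
        lo = a + (1 - p - a) * (i - 1) / n
        hi = a + (1 - p - a) * i / n
        return p * math.log((p + hi) / (p + lo)) / 2
    return term(b) - term(a)

n, a, b = 10, 0.2, 0.6
print([round(w(i, n, a, b), 5) for i in range(1, n + 1)])
```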
FEB 5-1. Dear Eric,
I went ahead and minimized for $n=100$ (normalized to be a probability measure), and
here is the resulting $f(v)$:
Notice that this minimizer is not very continuous (there may be a more continuous minimizer,
as the minimization here is numerical).
However, the mean square error (to the 0 function) is still .16, much like the $n=10$ case,
and therefore the payoff does not get much flatter:
So there may not be a solution for that choice of $a_0$ and $b_0$ (we may need to play with $a$ and $b$)...
the minimization could have failed. Hard to know, but there were no signs that
it failed (and it is a simple weighted quadratic objective function in 100 variables).
FEB 5.
Dear Eric,
I'll think about what you wrote; however, here is one cautionary point about an atom at $a>0$:
If you have an atom at $ a>0$ of size $c>0$ then
$$
\int_a^b f(v)\,\frac{p}{p+v}\,dv = c\,\frac{p}{p+a} + \int_a^b m_0(v)\,\frac{p}{p+v}\,dv
$$
where $m_0$ is the remaining part of $f(v)$, say atomless. So in order for the payoff to be constant,
we need $\int_a^b m_0(v)\,\frac{p}{p+v}\,dv = k - c\,\frac{p}{p+a}$. So $m_0$ cannot be 0 and cannot be
a single atom (or, I think, even a finite number of atoms).
In particular, the Hawk-Dove strategy of (f) does not work: say you have just two
atoms, one of size $c$ at $a$ and one of size $d$ at $b$; then you get
$$
c\,\frac{p}{p+a} + d\,\frac{p}{p+b} = \frac{cp(p+b)+dp(p+a)}{(p+a)(p+b)} = \frac{p\bigl((c+d)p+(cb+da)\bigr)}{p^2+(a+b)p+ab},
$$
which cannot be constant for all $p$ in any nontrivial interval for any positive choices of $a,b,c,d$.
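A quick numerical illustration of the non-constancy; the atom positions and sizes below are arbitrary positive values:

```python
# Show that c*p/(p+a) + d*p/(p+b) varies with p, so it cannot be the constant
# payoff an equilibrium requires.
def two_atom_payoff(p, a=0.2, b=0.7, c=0.5, d=0.5):
    return c * p / (p + a) + d * p / (p + b)

vals = [two_atom_payoff(p) for p in (0.2, 0.3, 0.4, 0.5)]
print(vals)  # strictly increasing in p, hence not constant on any interval
```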
So how are you thinking of (f) as a solution?
On the other hand, if the position of one of the atoms were allowed to depend on $p$,
it might work (e.g., if $b=1-p$ or some such relationship).
d)--e) seem plausible (both) to me; I'd like to see how you rule out e).
FEB 8.
Dear Eric,
Thanks. I investigated the function $q(v)$ a bit more. If $\alpha \approx 0.37250741078136 $
is the sole root of $H(s)$, then it is the sole pole of $1/H(s)$, and
I found that $ 1/H(s)= \frac{0.25666}{s-\alpha} + G(s)$, where $G(s)$ is smooth (except for a
vertical tangent at 0). It is not a totally monotone function, not even positive, but it has no poles.
So $q(v)= 0.25666\, e^{\alpha v} + g(v)$.
It turns out the Pad\'e approximant approach seems to work OK, since the approximant is a rational function of $s$, and rational functions can be effectively inverted back from the Laplace domain. I'll try
this to get a handle on $q(v)$, and therefore $f(v)$.
Ideas from Aaron Kolb on bargaining:
1. He has worked out that if we say 0-100 is NOT agreement, then there are only mixed-strategy equilibria; no pure-strategy ones exist.
2. What is the probability of breakdown in the easy continuum game? NOT 16\% for 60-40---that is the lower bound, because 60+35 is OK too. I need to work it out.
3. Another possible game is the threshold game, where whoever bids higher has a MUCH higher payoff.
4. How about risk aversion? It doesn't matter for pure strategies, but it does for mixed strategies.
5. Maybe have Aaron join as a co-author.
\noindent
{\bf Thoughts of August 23}
(1) I found a good way to do a numerical example: use polynomial approximations. Mathematica will minimize a loss function, consisting of the spread in payoffs, by choice of polynomial coefficients. This works well for $a=.2$ and $a=.3$. For $a=.1$ we get some negative densities. What I did there was to use the solution as a start to generate some $(v, f(v))$ pairs, make up some other good-looking pairs, use Mathematica to fit polynomials to them, and eyeball the plot of the payoffs to get it fairly level.
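A rough sketch of this idea in Python rather than Mathematica. Everything here is an illustrative assumption, not the actual procedure used: the payoff kernel $p/(p+v)$, the support $(a,1-a)$, and the crude coordinate-descent minimizer.

```python
# Flatten the payoff Pi(p) = integral of f(v) * p/(p+v) dv by choosing
# polynomial coefficients for the density f, penalizing negative densities.
A = 0.2                                              # lower bound of the support
GRID = [A + (1 - 2*A) * k / 60 for k in range(61)]   # points of (a, 1-a)
DV = GRID[1] - GRID[0]

def f(v, coef):
    return sum(c * v**j for j, c in enumerate(coef))

def payoff(p, coef):
    # Riemann-sum approximation of the integral of f(v) * p/(p+v).
    return sum(f(v, coef) * p / (p + v) * DV for v in GRID)

def loss(coef):
    pays = [payoff(p, coef) for p in GRID]
    spread = max(pays) - min(pays)                   # how non-flat the payoff is
    neg = sum(max(0.0, -f(v, coef)) for v in GRID)   # penalty for f < 0
    return spread + neg

coef = [1.0, 0.0, 0.0]                               # start from a uniform density
best, step = loss(coef), 0.5
for _ in range(40):                                  # crude coordinate descent
    improved = False
    for j in range(len(coef)):
        for d in (step, -step):
            trial = coef.copy()
            trial[j] += d
            l = loss(trial)
            if l < best:
                coef, best, improved = trial, l, True
    if not improved:
        step /= 2

print(coef, best)   # best is no worse than the uniform-density loss
```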
(2) In our numerical examples, we should show what happens as $A$ changes. Or maybe we should do this as theory. It seems $Q_a$ is bigger if $A$ is bigger, moving more towards .5. There is good theory behind that: the payoff to playing $A$, for constant $Q_a$, rises with $A$, since $B$ becomes smaller and hence playing $A$ gets a bigger share whatever the other player bids.
Hence, the proportion of the time that playing other strategies does not explode the pie must increase, which means more probability is put on $A$.
(3) Maybe we should add risk aversion too, as another section.
(6) Does there exist an equilibrium for $a=0$? No. Then a bid of any $v>0$ would be better, because it gives at least $Q_a$ as a payoff, getting the entire pie with that probability, while bidding $a=0$ gives only $Q_a/2$.
(8) Is there a way to prove that $Q_a$ gets smaller as $a$ gets smaller?
We should present this to our children.
\end{document}