\documentclass[12pt,reqno,twoside,usenames,dvipsnames]{amsart}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{hyperref}
\usepackage{verbatim}
\hypersetup{breaklinks=true,
pagecolor=white,
colorlinks=true,
linkcolor= blue,
hyperfootnotes= true,
urlcolor=blue
}
\urlstyle{rm}
\newcommand{\margincomment}[1]
{\mbox{}\marginpar{\tiny\hspace{0pt}#1}}
\newcommand{\comments}[1]{}
\renewcommand{\baselinestretch}{1.2}
\parindent 12pt
\parskip 10pt
\begin{document}
\titlepage
%\vspace*{12pt}
\begin{center}
{\large {\bf Odd, Enter; Even, Out: \\
Sequential Contests with Entry Costs }
August 12, 2017}
\bigskip
Eric Rasmusen
{\it Abstract}
\end{center}
\begin{small}
What happens when
an incumbent contest winner faces possible entry and challenge from later rivals? In deciding whether to pay a cost to enter immediately, later, or never, each rival must look ahead to future rivals. If the prize is relatively small, the first rival's expected payoff from entry is negative if he faces the prospect of having to defeat a second rival, but positive if the second rival himself fears to enter because of later rivals. As a result, in equilibrium no
rivals enter when the number of rivals is even, and exactly one enters when the number is odd. If the prize is larger, the equilibria become complex but still include equilibria with no entry. In the setting of challenges to a political leader, it can happen that the incumbent survives even though he is weaker than his rivals, that he may wish to end the advantage of incumbency, and that a player may benefit by weakening his own ability to win a given contest.
\noindent
Rasmusen: Dan R. and Catherine M.
Dalton
Professor, Department of Business Economics and Public Policy, Kelley
School
of Business, Indiana University. 1309 E. 10th Street,
Bloomington,
Indiana, 47405-1701. (812) 855-9219.
\href{mailto:erasmuse@indiana.edu}{ erasmuse@indiana.edu}, \url{http://www.rasmusen.org}.
{\small
\noindent This paper:
\url{http://www.rasmusen.org/papers/oddeven-rasmusen.pdf}. }
{\small
\noindent
Keywords: Contests, tournaments, entry, extortion, deterrence. \\
JEL Codes: C72, C73, L12}
{\small I would like to thank Michael Baye, Jeffrey Prince, and the BEPP Brown Bag Seminar for helpful comments. }
\end{small}
\newpage
\begin{small}
\begin{quotation}
Canst thou, O partial sleep, give thy repose \\
To the wet sea-boy in an hour so rude; \\
And in the calmest and most stillest night, \\
With all appliances and means to boot, \\
Deny it to a king? Then, happy low, lie down! \\ Uneasy lies the head that wears a crown.\footnote{ William Shakespeare, {\it Henry IV, Pt.II:} III,1. }\\
\end{quotation}
\end{small}
\noindent
{\sc 1. Introduction}
Winning a power struggle does not mean you live happily ever after. Harry Bolingbroke, speaker of the lines above, defeated his cousin Richard II to become King Henry IV of England, but Shakespeare devotes two more plays to the failed attempts to overthrow him. Not until five plays later can King Henry VII finally rest secure. Winning once is not enough. This paper will explore what happens when the winner of the first round of a contest must be concerned about the entry of new rivals, who provoke new contests for the prize. The first entrant must take into account that even if he defeats the incumbent he must still defeat or intimidate later entrants to obtain the prize. That prospect can deter entry altogether.
We will look at a situation that combines a struggle for survival, as in the literature on contests and the war of attrition, with entry costs, as in the literature on oligopoly. An incumbent leader will face one or more rivals. Rivals must pay a one-time entry cost to be able to contend for the leadership. Each rival must decide whether to enter immediately or wait, and whether to challenge the incumbent immediately or wait, and his decision will depend on what he thinks later rivals will do. This is like a pre-emption game in that sometimes early entry deters later entry. It is like a war of attrition in that the goal is to reach a position secure from future entry. It is like a tournament in that a rival who enters early must take into account later contests even if he defeats the incumbent in the first contest.
The situation to which the model is applied will be a
leadership contest in a parliamentary democracy. The party in power has chosen one of its members as prime minister. Other members are allies of the prime minister against the opposition party but rivals for the leadership. To challenge the prime minister, a rival must first reveal himself as a potential challenger, laying the groundwork with the public and the party. This is costly in terms of effort, and perhaps also results in ejection from the cabinet, since prime ministers ordinarily co-opt rivals by appointing them to subordinate ministries. Eventually there is a formal leadership election and the prime minister or a rival wins. The winner, however, is not secure. He, too, must fear that rivals will arise and challenge him. Each rival must worry about how many rivals he will face if he knocks down the current leader. When Senator Eugene McCarthy challenged President Johnson's re-election in 1968, he did scare Johnson into declining to run again, but when Vice-President Humphrey and Senator Robert Kennedy jumped into the race for the Democratic nomination, McCarthy lost. When Michael Heseltine ran against Prime Minister Margaret Thatcher for the leadership of the Conservative Party in 1990, his strong performance on the first ballot led to her resignation, but John Major entered the race and won on the third ballot.\footnote{This is not the same idea as the ``stalking horse'', a weak candidate covertly pushed into a contest by the real rival in order to assess the prime minister's strength and make him reveal his strategy. }
What happens depends very much on the number of rivals, the prize for ending up as leader, and the cost of entry. The most surprising result will be that if the cost of entry is high enough to make a three-way contest unattractive, then in the unique equilibrium there will be one entrant if the number of rivals is odd, and no entry if the number of rivals is even.
Many of the forces at work can be conveyed by the case where the entry cost is high relative to the value of the prize, as when entry means risking the position of finance minister, or as in a natural monopoly with high capital expenditures. Section 2 will start with an example of the sequential appearance of one to four rivals and generalize to $N$ rivals. Section 3 will use the same model but with simultaneous entry decisions. Section 4 will address what happens with sequential entry but imperfect information on the number of rivals. Section 5 will return to the sequential full-information model, but with the prize big relative to the entry cost.
\noindent
{\sc Related Literature}
A contest is a struggle for a prize where the size of the prize is independent of the rivals' efforts. Contests occupied center stage in game theory's earliest days as a setting for zero-sum games. One variety of this, the duel, was introduced in Blackwell (1949). In a two-person duel, the players approach each other and must decide when to fire, based on the remaining distance and the abilities of each player to shoot at various distances. This can be extended to three persons, in which case the players must also decide which other player to target first; for a survey see Laraki \& Solan (2005) or the {\it Handbook of Game Theory} entry by Radzik \& Raghavan (1994). Pre-emption games are games in which it is advantageous to move first, and this is a pre-emption game in the sense that it is advantageous to be able to shoot first. Pre-emption games also commonly arise in investment and entry deterrence. A recent example is Argenziano \&
Schmidt-Dengler (2014), which shows how clustering of investments can occur among early investors when firms make decisions about investment in continuous time and the pre-emption game is intense among later investors.
The war of attrition is another model of conflict. In a war of attrition, the players who enter a contest suffer utility costs from continuing to fight, but whoever continues to fight the longest wins. If there is a series of such contests, it is preferable to avoid the early ones and enter late, so as to avoid the cost of eliminating early rivals. The game originated in the biology context of Maynard Smith (1974) where two animals fight over a food source and the winner is the animal which fights longest, but fighting is costly and so avoiding conflict might be better than being the winner. In economics, classic examples are two firms competing in a natural monopoly market with room for one but only one to make positive profits, and two bidders in an all-pay auction such as firms lobbying for a political favor. Hendricks, Weiss \& Wilson (1988) model the continuous-time game with complete information and Bulow \& Klemperer (1999) model it with incomplete information.
Park \& Smith (2008) title their paper ``Caller Number Five and Related Timing Games'' in reference to radio contests in which the fifth caller receives a prize but the first four receive nothing, a game where contestants wish to avoid entering both too early and too late. They generalize both pre-emption games and wars of attrition to games of timing in which players make discrete decisions in continuous time under complete information, and where the game may move from a phase of pre-emption to a phase of war of attrition.
Another way to look at contests is in the context of a tournament, a series of contests in which players must decide how much of their effort to expend in each round. Sports tournaments are good illustrations: should you use your best players in the first round, or save them for the last round to avoid the possibility of injury in a game against easier rivals? This literature dates back to Rosen (1986) on elimination tournaments. Fullerton \& McAfee (1999) looks at how to limit the number of players in a tournament, with the aim of incentivizing maximum effort. Harbaugh \& Klumpp (2005) shows that low-ability contestants will exert their effort early in the tournament, high-ability contestants later. Szymanski (2003) reviews the literature. The present paper models a tournament, but one without a fixed number of contests, or, indeed, perhaps without any entry at all.
Typically, a contest is like an all-pay auction, where the participants choose their levels of effort and often the value of the prize is dissipated by this rent seeking. Corcoran (1984), Corcoran \& Karels (1985), and Morgan, Orzen \& Sefton (2012) all find zero-profit conditions, and Baye, Kovenock \& de Vries (1999) show how the structure of the contest affects the amount of surplus after more limited dissipation.
In the analysis of the present paper, contest effort is assumed to be zero for simplicity once the entry cost is paid, but what matters is that some value remain after contest effort--- enough value that contestants can be willing to pay the fixed entry cost. The fixed entry cost is a key feature, however, since it will determine how many players could rationally contest the prize if conditions were ideal for them.
There are also large literatures on entry games and technological rivalry, which often feature sequential entry and entrants having to consider the profit-reducing effect of future entry. Bernheim (1984) models a sequence of potential entrants, each deciding whether to enter an oligopoly. The amount of industry profits falls with the number of entrants, and each firm which has entered decides on a continuous level of effort into an entry deterrence technology. The focus is on which industry sizes are stable, which depends on how much effort is put into the deterrence technology at the different industry sizes. Bernheim's main finding is that the stable industry sizes are spread out--- that if $n$ is an equilibrium industry size, then $n-1$ and $n+1$ are not. The reasons are that if the next closest equilibrium size is only slightly larger, then (a) incumbent firms do not lose much from allowing entry, and so will not expend the effort necessary to deter it, and (b) entrants would not have to worry that their entry's profitability would be reduced by future entrants. The present paper has no analog of the entry deterrence technology. Though it will be true that entrants worry about future entrants, that will not prevent different equilibria from having similar numbers of entrants. Most important, in Bernheim's context all entrants continue actively in the game once they have entered, whereas in the present paper a crucial feature is that a contest eliminates the losers, so that later entrants may face a very different number of contestants than early entrants. This means that while early entrants have the advantage of pre-emption, as in Bernheim's model, they have the disadvantage of having to fight in more contests to survive. Other papers in this literature also have the feature that all entrants survive and that they make pricing or investment choices that affect the profitability of entry.
Eaton \& Ware (1987), for example, have firms choosing capacities, and in their model this results in the possibility that later entrants have higher profits than earlier entrants, though all have positive profits. Waldman (1991) synthesizes the literature, trying to show why there is a free-rider problem for entry deterrence effort in some models and an excess-investment problem in others. My own Rasmusen (1988) has incumbent and entrant choosing capacity, though in that model the incumbent buys out entrants, coming closer to the present paper's model because of the move to monopoly. That paper shows that it can be easier to deter entry with two potential entrants than with one, but it does not explore what happens with larger numbers.
\noindent
{\bf 2. The Model with Sequential Appearance of Rivals}
Let an incumbent hold the position of leader, which is worth $V$ to its ultimate holder. In each of $N$ periods a new rival ``appears'', though the game can continue for an indefinite number of periods after that. Each rival who has appeared may choose to ``enter'' at cost $c$ or to wait, these decisions being made in order of seniority of appearance. The players who have entered and not yet been defeated decide whether to challenge and ``fight'' one or more of the other entrants and the incumbent. In a given contest each contestant has an equal chance of winning. The prize is received by whichever player has won out as leader after all rivals have appeared and none has positive probability of entering in a future period in the particular equilibrium being played out.
A rival who appears never disappears unless he has entered and been defeated, and an entrant continues as a potential challenger even if he does not fight in the period he enters.
Anyone challenged to a contest must fight, and the losers disappear forever. If the incumbent is part of a contest, the winner will be the leader next period. There is no cost to fighting once the entry cost $c$ is paid.\footnote{Zero cost of fighting is a simplifying assumption, not a substantive one. What matters is that the contest has a prize strictly greater than the cost of fighting, to compensate for the entry cost.} ``Equilibrium'' will mean subgame-perfect equilibrium. We will assume zero discounting, but when equilibria are payoff-equivalent we will restrict attention to the equilibrium in which entry and fighting occurs earliest.
For most of the paper, we will assume that the prize is large enough for one and only one rival to enter with probability one and fight the incumbent with a positive expected payoff: that $ \frac{V}{2} -c >0 $ and
$ \frac{V}{3} - c <0$. An example with
$V= 100$ and $c=40$ will show the forces at work.
We will consider several values of $N$, the number of rivals.
\noindent
{\bf N=1.}
In equilibrium, the lone rival will appear, enter immediately, challenge the incumbent immediately, and win with probability 1/2, for an expected payoff of
\begin{equation} \label{e1}
\pi = (1/2) (100) -40 = 10,
\end{equation}
compared with a payoff of zero from not entering. Note that because there is no discounting, the rival earns the same payoff if he delays entry or fighting to a later period; hence, our restriction of attention to the equilibrium with the earliest entry and fighting.
\noindent
{\bf N=2. }
In equilibrium, neither of the two rivals will enter. It cannot be an equilibrium for one rival to enter first, fight, and then for the other rival to enter and fight the winner, because although the second entrant's payoff would be positive, the first entrant would have only a (1/2)(1/2) chance of ultimate victory and his payoff would be
\begin{equation} \label{e4}
\pi= (1/2) (1/2) (100) - 40 =-15.
\end{equation}
Nor can there be an equilibrium in which both rivals enter but wait to fight in a three-way contest with a 1/3 probability of victory. That would generate a payoff for each of them of
\begin{equation} \label{e2}
\pi = (1/3) (100) - 40 = -6 \frac{2}{3}
\end{equation}
\hspace{12pt} No mixed-strategy equilibrium exists either. Each rival would have a strictly higher payoff from waiting and entering after the other rival, and mixing over entry requires indifference between waiting and entering.
Thus, each rival stays out because he fears competition from the other rival even if he defeats the incumbent.
\noindent
{\bf N=3.}
In equilibrium, the first rival will enter and challenge the incumbent, but the other two rivals will stay out. The entrant's expected payoff is
$
-40 + (1/2) (100) =10.
$
Once the fight is over, the game is equivalent to the game with $N=2$, because there is an incumbent (the original leader or the victorious rival 1) and two potential entrants. We have seen that with $N=2$ there is just one equilibrium, with no entry. Thus, rivals 2 and 3 will not enter.
\noindent
{\bf N=4.}
In equilibrium, no rival will enter. If one did, no other rival would follow him until he had fought the incumbent, since the payoff with a 1/3 chance of victory is too small to justify the entry cost. As in the game with $N=2$, the first entrant would have a negative payoff.
The example shows that the leader would prefer to face two or four rivals instead of one, because each of the rivals would abstain from entry for fear of later entry by the others. This points to the idea that when $N$ is odd, one rival enters, but when $N$ is even, none do. Proposition 1 states this generally.
\noindent
{\bf Proposition 1:} {\it In the $N$-rival sequential game, one rival will enter in equilibrium if $N$ is odd and none if $N$ is even. }
\noindent
{\bf Proof. } Since by assumption $c< \frac{V}{2}$, if one rival enters and challenges the incumbent but no others enter at any time, his expected payoff would be positive. If a second rival were to enter, his payoff would be at most $-c + V/3$ from a three-way contest even if there were no further entry, so since by assumption $c> \frac{V}{3} $ he will not enter, regardless of future entry.
It remains to show the conditions under which none but the first rival will enter. If $N=1$, the claim holds trivially. If $N=2$, then if one rival enters, the second rival will not enter before the first rival and the incumbent fight, because his payoff of $-c + V/3$ would be negative. He will enter, however, if the first rival and the incumbent fight, because the continuation game would have the positive payoff $-c + V/2$. As a result, the first entrant's payoff from entering and fighting immediately would be $-c + (1/2) (1/2) V$. Because $-c + V/3 $ is negative, so is $-c + V/4$. The first entrant's alternative is never to challenge the incumbent, which has payoff $-c$. Thus, the first rival will never enter. The same reasoning means that the second rival will not enter, so if $N=2$ there is no entry.
We can now use induction.
Suppose there is no entry when $N=z$ for some $z$. Then if $N=z+1$, the first rival can enter, challenge the leader immediately, and the continuation game is identical to the full game with $N=z$, so he is safe from entry. If $N=z+2$, then if the first rival entered and challenged the leader, the continuation game is identical to the full game with $N=z+1$, so the second rival would enter. The first rival's expected payoff would be $-c+ V/4<0$. If the first rival entered and did not challenge the incumbent, then even if the second rival never entered the first rival's payoff would be $-c$. Thus, if $N=z+2$ there would be no entry. The game would switch between one entrant and no entrants as $N$ increased. We have shown that when $N=1$ there is entry and when $N=2$, there is no entry, so we can use $z=2$ and conclude that there is no entry when $N$ is even and one entrant when $N$ is odd. $\blacksquare$
Proposition 1's logic has an interesting application if we alter the model so that different players have different strengths--- different probabilities of success in a contest.
\noindent
{\bf Observation 1:} In the right circumstances, an incumbent can survive without challenge even if he is weaker than any of his rivals.
\noindent
Explanation: Suppose the entrant's probability of winning a two-way contest against the incumbent is $\theta>1/2$ and each entrant's probability of winning a three-way fight is $\gamma>1/3$. The maximum possible payoff for a rival if he alone enters is then $ \theta V-c$ and from two rivals entering is $ \gamma V -c$. If $ V/c \in [1/\theta, 1/\gamma]$, entry by one and only one entrant is potentially profitable, and by the reasoning of Proposition 1's proof, no entrant will challenge the incumbent if $N$ is even.
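To illustrate Observation 1 with hypothetical numbers (the strength parameters $\theta$ and $\gamma$ here are chosen for this illustration only): let $V=100$ and $c=40$ as in the running example, and suppose each rival wins a two-way contest against the incumbent with probability $\theta = .6$ and a three-way contest with probability $\gamma = .35$. Then
\begin{equation*}
\theta V - c = 60 - 40 = 20 > 0 \;\;\;\;\; \mbox{and} \;\;\;\;\; \gamma V - c = 35 - 40 = -5 < 0,
\end{equation*}
so $V/c = 2.5$ lies in $[1/\theta, 1/\gamma] \approx [1.67, 2.86]$. A lone challenger would expect to defeat the incumbent, yet if $N$ is even no rival enters, and the incumbent survives even though he wins any single two-way contest with probability only $.4$.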
\bigskip
\noindent
{\bf 3. Simultaneous Appearance of Rivals}
Let us now consider what happens if the $N$ rivals appear simultaneously at the start of the first period and make their entry decisions simultaneously in each period.
In the sequential game we saw there were no contests between entrants in equilibrium. Only one rival entered at a time, and he immediately challenged the incumbent and was either defeated or became the new leader. That will not be the case in the simultaneous model, so
the following lemma is useful.
\noindent
{\bf Lemma 1: }{\it If the incumbent faces $n$ rivals who have entered but not fought yet, any fight that occurs will be between all $n+1$ players. }
\noindent
{\bf Proof. } Suppose one of the players challenged just $m < n$ of the other entrants. [$\ldots$]

We can solve equation (\ref{e7}) as
\begin{equation} \label{e9}
Payoff (wait) = \frac{1}{ 1- (1-\theta)^2} \theta^2 (V/2-c) >0
\end{equation}
$Payoff(wait)$ is strictly positive for $\theta>0$, proving the last part of the lemma if the equilibrium exists.
The closed-form solution for $\theta$ which equates the two payoffs has too many terms to be helpful, but we can see that a solution between 0 and 1 does exist. If $\theta=0$, the payoff from entering as a pure strategy is $V/2-c$ and the payoff from waiting is 0. If $\theta=1$, the entry payoff is $V/4-c$, which is negative, and the waiting payoff is $V/2-c$, which is positive. For $\theta \in [0,1]$, the difference between the entry and waiting payoffs is continuous, so since it starts positive and ends negative it must be zero for some $\theta \in (0,1)$, which is the equilibrium value. $\blacksquare$
It is perhaps surprising that the payoff in the mixed-strategy equilibrium is strictly positive. The reason is that the strategy of waiting has a positive payoff if there is any chance that both of the other players will enter together.
If both of the other rivals end up entering together when they mix, our waiting rival can enter after they both fight the incumbent, for a positive expected payoff. If only one enters, the waiting player still has the option of staying out permanently, for a payoff of zero. In expectation, his payoff at the start of the game is therefore strictly positive.
Note that there also exists an equilibrium of the $N=3$ game in which two players mix with probability $\gamma$ and the third player follows the pure strategy of entering only if the other two players have both entered and fought each other. In this equilibrium, if just one player enters, he is safe from further entry because what remains is an $N=2$ subgame, so his payoff in that subgame will be $V/2-c$. If he follows the pure strategy of entering, given that the other mixing rival is entering with probability $\gamma$, then with probability $(1-\gamma)$ his subgame payoff is $V/2-c$. With probability $\gamma$ the other rival enters too, in which case the first two rivals and the incumbent will fight in a first contest, followed by entry of the waiting rival and a second contest, with a subgame payoff of $V/6 -c$, which is negative.
\begin{equation} \label{e10}
Payoff (enter) = (1-\gamma) (V/2-c) + \gamma (V/6-c)
\end{equation}
The payoff of a mixing player who follows the pure strategy of not entering is zero, however. When he follows that strategy, the other mixing player will enter alone, so our zero-entry mixing player and the waiting player will never enter. Since the mixing probability $\gamma$ is set so that the pure strategies of entering and staying out have equal payoffs, this pins down $\gamma$, which solves
\begin{equation} \label{e11}
(1-\gamma) (V/2-c) + \gamma (V/6-c) =0
\end{equation}
The waiting player will have a positive payoff, because he enters only after the other two rivals have entered and fought and he faces only a two-way contest. His payoff is
\begin{equation} \label{e12}
Payoff (waiting) = \gamma^2(V/2-c)
\end{equation}
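To make this concrete, the indifference condition $(1-\gamma) (V/2-c) + \gamma (V/6-c) = 0$ can be solved in closed form (a routine rearrangement, evaluated at the running example $V=100$, $c=40$):
\begin{equation*}
\gamma = \frac{V/2-c}{(V/2-c)-(V/6-c)} = \frac{V/2-c}{V/3} = \frac{3(V-2c)}{2V} = \frac{3(100-80)}{200} = .3,
\end{equation*}
so the waiting player's payoff is $\gamma^2 (V/2-c) = (.3)^2 (10) = .9$.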
Although the waiting player has a positive payoff, a mixing player cannot achieve that payoff by deviating to not entering. Such a deviation would only result in the other mixing rival eventually entering, fighting, and leaving them in the $N=2$ subgame.
Now let us generalize to $N$ rivals.
\noindent
{\bf Proposition 2:} {\it Let there be $N$ rivals in the simultaneous-entry game. If the number of rivals is even, none enter. If it is odd, there exist $N$ asymmetric equilibria, in each of which one rival $i \in \{1,\ldots, N\}$ enters and the other rivals stay out, and a symmetric mixed-strategy equilibrium in which all $N$ rivals have strictly positive payoffs and entry probabilities. }
\noindent
{\bf Proof. }
If $N$ is odd, the sequential game has a pure-strategy equilibrium in which one rival enters and the others stay out, by Proposition 1. If the game is simultaneous, and one rival uses the strategy of entering with probability one, the other $N-1$ rivals would face the same negative payoffs from entering as the last $N-1$ rivals in the sequential game, and so will not enter. The identity of the entering rival in the simultaneous-move game is arbitrary, so there are $N$ pure-strategy asymmetric equilibria.
The remainder of the proposition is proved by induction.
Start by considering the game with an even number
$ n_s \geq 4$ of rivals under the hypothesis that all games with $N < n_s$ rivals have equilibria as described in the proposition. [$\ldots$]

Waiting has a higher expected payoff than entering first if and only if
\begin{equation}
\begin{array}{lll}
p_{even} & > & p_1 = \frac{ 2 V/c -4 } {3 V/c -4 } \\
\end{array}
\end{equation}
The expected payoff from entering first is negative if and only if
\begin{equation} \label{e13}
\begin{array}{lll}
p_{even} & > & p_2 = 2 - \frac{4}{V/c} \\
\end{array}
\end{equation}
We can confirm that $p_1 \leq p_2$ because
\begin{equation} \label{e14}
\begin{array}{lll}
& & p_1 = \frac{ 2 V/c -4 } {3 V/c -4 } = \bigg( 2 - \frac{4}{V/c} \bigg) \bigg( \frac{ V/c } {3 V/c -4 } \bigg) = p_2 \bigg( \frac{ V/c } {3 V/c -4 } \bigg) \\
\end{array}
\end{equation}
and since we have assumed $V/c \geq 2$, it follows that $\frac{ V/c } {3 V/c -4 } \leq 1$.
Thus, we have obtained the two probability bounds of the proposition. The first rival to appear adopts the more profitable of the two strategies, and the second will adopt the other if it yields a non-negative expected payoff.
$\blacksquare$
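As a numerical check on these bounds (using the running example's ratio $V/c = 100/40 = 2.5$):
\begin{equation*}
p_1 = \frac{2(2.5)-4}{3(2.5)-4} = \frac{1}{3.5} \approx .29 \;\;\;\;\; \mbox{and} \;\;\;\;\; p_2 = 2 - \frac{4}{2.5} = .4,
\end{equation*}
so $p_1 < p_2$, as the proof requires: below $p_1$, entering first has the higher expected payoff; between $p_1$ and $p_2$, waiting is better but pre-emptive entry remains profitable; and above $p_2$, only the waiting strategy is used.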
\begin{figure}[ht!]
\centering
\includegraphics[width=5in]{oddeven-figure1}
\caption{\sc Uncertainty over the Number of Rivals} \label{quasi1}
\end{figure}
Figure 1 shows how the equilibrium in Proposition 3 depends on the ratio of prize to entry cost and the probability that the number of rivals is even. If $p_{even}$ is very small, the number of entrants will likely be odd. The first rival would like to wait and enter only after that number is pinned down, but if he does, the second rival will forestall him, so he enters. If $p_{even}$ is somewhat higher, the payoff from entering first falls and the payoff from waiting and hoping for an even number after the other rival has entered correspondingly rises. If $p_{even}$ is even higher, the payoff from entering first falls below zero. Even for $V/c =3$, if $p_{even}> 2/3$, no rival will enter.
It may seem strange that two rivals could both enter with positive expected payoff. After all, a three-way contest would yield a negative expected payoff, and if two rivals enter sequentially the first entrant has an even lower payoff than in the three-way contest. The reason is that in most realizations of the equilibrium, only one rival enters. With high probability, his expected payoff is the $V/2-c$ from being the only entrant. Only with low probability is his expected payoff the negative amount $V/4-c$ from sequential entry.
It is also curious that despite the low probability of there being a second entrant, the expected payoff from entering second can be higher than from entering first. The reason is that the second entrant does not risk having to fight twice for the prize. If the prize is sufficiently valuable relative to the entry cost, it is worth giving up some probability of winning any prize at all in order to obtain a bigger chance at the full prize.
What if information about $N$'s parity is disclosed gradually? There are different ways this might be modelled. It might be that each potential rival appears in sequence with probability $\theta$, or that the value of $\theta$ itself is gradually revealed as potential rivals enter or not, or that the parity becomes known with probability one at some random point $N^*$ in the sequence of potential rivals, or that only the existence of the last rival is uncertain, etc.
Whatever the specification may be, the rivals who have appeared but not entered will focus on the probability that at the end of the day, after the appearance or nonappearance of the last potential rival, the realized value of $N$ is odd or even. In essence, the outcome is the same as in Proposition 3: one of two strategies may have the highest expected payoff, and both might have positive payoffs: the waiting strategy of delaying until the number $N$ of actual rivals is known before deciding whether to enter, or the
pre-emptive strategy of entering earlier in the hope that the eventual number of rivals turns out to be odd.
If the first rival to appear uses the waiting strategy, his expected payoff will always be positive. Even if other rivals enter first, he can credibly wait until they fight the incumbent, and then enter if the number of unentered rivals who have appeared turns out to be odd, for a realized payoff of $V/2-c$.
The pre-emptive strategy may also be profitable. As information about the number of rivals to appear is gradually disclosed, there may come a period in which the probability that the number of rivals is even is low enough for the expected payoff from entry to be positive. If this probability is $p_{even}$, the payoff from this strategy is $ (1-p_{even}) (V/2-c) + p_{even}(V/4-c)$.
As in Proposition 3, whether the waiting or the pre-emptive strategy has the higher expected payoff depends on $p_{even}$ and the ratio of prize to entry cost, $V/c$. The crucial value above which waiting is better is the same as from Proposition 3: $p_{1} = \frac{ 2 V/c -4 } {3 V/c -4 }$. The crucial value above which the pre-emptive strategy is unprofitable is also the same: $p_{2} = 2 - \frac{4}{V/c}$. The first rival to appear will adopt whichever of the strategies is the most profitable, and the second will adopt the other, if it is profitable. Rivals who appear later in the sequence will not enter. What is different from Proposition 3 is that whether the pre-emptive strategy has a higher payoff, and, indeed, whether it has a positive expected payoff, can change repeatedly as information is revealed. This means that the player using the pre-emptive strategy would prefer to wait for the estimate of parity to become more accurate but may have to enter immediately on the expected payoff becoming positive lest he be pre-empted by the third rival to appear.
One way to model the uncertainty is to suppose that the number of rivals follows a binomial distribution in which $M$ rivals each have probability $p$ of appearing in sequence. Then the probability that $N$, the number actually appearing, is even is
\begin{equation}\label{e19}
\frac{1}{2} + \frac{1}{2} (1-2p)^M.
\end{equation}
Given $M$ potential rivals, if $p=.5$ then the probability that $N$ is even is also .5. In addition, for any $p$, not just .5, as $M$ increases the probability that the number of rivals is even approaches .5. Parity is harder to predict when there are many potential rivals. Uncertainty does not necessarily discourage entry, though. The effect of even $N$'s probability being close to .5 depends on $V/c$. Recall that pre-emptive entry is profitable if $p_{even} < p_2 = 2 - \frac{4}{V/c}$. If $p_{even}=.5$, this requires
$V/c> 2 \frac{2}{3}$. Thus, when the prize is large relative to the entry cost, a larger number $M$ of remaining potential entrants makes entry more likely, not less, by bringing the probability of even parity close to .5.
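The closed form in equation (\ref{e19}) and its limiting behavior can be verified directly. Below is a small check (my own sketch, not part of the model) comparing the formula with an explicit sum over the even outcomes of the binomial distribution:

```python
from math import comb

def p_even_formula(M, p):
    """Closed form from equation (19): P(N even) for N ~ Binomial(M, p)."""
    return 0.5 + 0.5 * (1 - 2 * p) ** M

def p_even_sum(M, p):
    """Direct sum of binomial probabilities over even N, as a check."""
    return sum(comb(M, n) * p**n * (1 - p) ** (M - n)
               for n in range(0, M + 1, 2))

print(p_even_formula(5, 0.3))   # agrees with the direct sum below
print(p_even_sum(5, 0.3))
print(p_even_formula(40, 0.3))  # approaches 0.5 as M grows
```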
Thus, the effect of uncertainty on entry is ambiguous. The rivals still focus on the parity of their number, but it is now the probability of each parity that they use for their decisions. If the game starts with that probability close to .5 and $V/c$ is between 2 and $2 \frac{2}{3}$, uncertainty discourages entry compared with the certainty game in which the number of rivals is known to be odd. If
$V/c$ is between $2 \frac{2}{3}$ and 3, uncertainty encourages entry compared with the certainty game in which the number is known to be even.
An effect not present in the sequential certainty game is
that two rivals might enter.
In the certainty game, only one rival enters in equilibrium. He fights the incumbent, and if he wins there is no further entry. In the uncertainty games, two rivals might enter. One of them, using the pre-emptive strategy, hopes the parity of the game will be odd, in which case he fights the incumbent at the end and there is no further entry. The other one, using the waiting strategy, hopes that the parity will be even, in which case he will enter after the first entrant and the incumbent have fought and he will fight the winner in a second contest. Two rivals could not both enter with certainty or their expected payoffs would be negative, but because the rival using the waiting strategy enters only if the parity turns out to be even--- that is, with probability less than one--- both can enter with positive expected payoffs.
\bigskip
\noindent
{\bf 5. Entry When the Prize Is Big Relative to the Entry Cost: $N^*>1$ }
So far, we have assumed $V/c \in (2, 3)$, so only one rival at a time could enter profitably. Let us now return to the original game of Proposition 1 and see what happens when the prize is bigger and the incentive for entry greater. Define $N^*$ as
\begin{equation} \label{e26}
N^*= Integer(\frac{V}{c} -1 ),
\end{equation}
where ``$Integer(x)$'' equals $x$ rounded down to the nearest integer. $N^*$ is the maximum number of rivals who could enter and take part with non-negative expected payoff in a single group contest. In the opening example, $V/c=100/40 =2.5$, so $N^*=1$. If the entry cost had been 30, we would have $V/c=100/30 =3 \frac{1}{3}$ and $N^*=2$---two rivals could profitably enter, each with a 1/3 chance of victory in the contest against the incumbent.
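The definition in equation (\ref{e26}) can be computed directly. The following sketch (my own illustration; the function name is invented) reproduces the two examples just given:

```python
from math import floor

def n_star(V, c):
    """Maximum number of rivals who can share one contest with the
    incumbent and still expect a non-negative payoff: Integer(V/c - 1)."""
    return floor(V / c - 1)

print(n_star(100, 40))  # 1: the opening example, V/c = 2.5
print(n_star(100, 30))  # 2: V/c = 10/3, two rivals can enter profitably
```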
We will go through the cases of $N^*=2, 3$, and $4$ individually.
Suppose that $N^*=2$. We will start with some general properties of an equilibrium that apply to games of any size $N$. If two rivals enter and fight simultaneously with the incumbent, they have an expected payoff of $V/3-c$, which is positive by definition of $N^*=2$. If they fight in sequence, the first rival has to fight two contests to win ultimate victory, so his expected payoff is $V/4-c$, which is negative for $N^*=2$. If three or more rivals enter, at least some will have negative payoffs since at least some must have an expectation of ultimate victory of 1/4 or less. Any equilibrium will therefore have either two rivals entering and then fighting simultaneously (by Lemma 1), or just one entering, or none entering.
If $N=1$, one rival enters. If $N=2$, the unique equilibrium has both enter and fight simultaneously, for payoffs of $V/3-c$, since if just one entered, the other would enter after he fought.
If $N=3$, the unique equilibrium has zero entry. If two rivals entered and fought simultaneously, that would leave the third, who would enter in the $N=1$ subgame, making the payoffs of the first two negative. If just one entered, that would leave the other two to enter together in the $N=2$ subgame, making the first entrant's payoff negative.
If $N=4$, the unique equilibrium has entry by one rival, resulting in the $N=3$ subgame and no further entry. Entry of two rivals would result in the $N=2$ subgame and entry of two more, reducing the payoffs of the first two below zero.
If $N=5$, the unique equilibrium has entry of the first two rivals, whose contest with the incumbent leaves the $N=3$ subgame and no further entry. Entry of just one rival would result in the $N=4$ subgame after he fought the incumbent, so it would be followed by entry of a second rival and the first entrant's payoff would be negative.
This pattern repeats when $N$ is greater than 5, as stated in Proposition 4.
\noindent
{\bf Proposition 4.} {\it If $N^*=2$ in the sequential game, then for $m=0, 1, 2,\ldots$ the unique equilibrium has entry of two rivals who fight in the same contest if the number of rivals is $N=2+3m$, entry of no rivals if $N = 3 + 3m$, and entry of one rival if $N = 4 + 3m$. }
\noindent
Proof.
The proof uses induction. Hypothesize that (a) the game with $N=2+ 3m$ has a unique equilibrium with two entrants, (b) the game with $N= 3+3m$ has a unique equilibrium with no entrants, and (c) the game with $N= 4+3m$ has a unique equilibrium with one entrant. Then (a$'$) $N=2+ 3 m+ 1$ has a unique equilibrium with no entrants, (b$'$) $N= 3+3m +1$ has a unique equilibrium with one entrant, and (c$'$) $N= 4+3m+1$ has a unique equilibrium with two entrants. To see why this is true, note that (a$'$) merely repeats hypothesis (b) and (b$'$) merely repeats hypothesis (c). In game (c$'$), entry of two rivals leaves a subgame with $N'= 3m+3$, which by (b) has no further entry and is thus an equilibrium. Entry of one rival leaves a subgame with $N'= 3m+4$, which by (c) would result in entry by one more rival, making the first entrant's payoff negative, so this cannot be an equilibrium. Entry of no rivals cannot be an equilibrium because two rivals would profit by deviating and entering.
We have established earlier that (a), (b), and (c) are true for $m=0$ ($N=2,3,4$), so by induction the proposition is true. $\blacksquare$
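The pattern in Proposition 4 can be generated by a short backward-induction sketch (my own illustration; the helper \texttt{entrants} is hypothetical, not the paper's notation). Two rivals enter together only if the subgame they leave has no entry; one enters alone only if the resulting subgame has no entry:

```python
def entrants(N):
    """Equilibrium number of entrants when N* = 2, by backward induction.
    A single entrant leaves the N-1 subgame after his fight; two entrants
    fighting at once leave the N-2 subgame."""
    if N == 0:
        return 0
    if N == 1:
        return 1
    # Two enter together only if no further entry would follow their fight;
    # one enters alone only if no further entry would follow his fight.
    if entrants(N - 2) == 0:
        return 2
    if entrants(N - 1) == 0:
        return 1
    return 0

print([entrants(N) for N in range(1, 11)])
# The pattern 2, 0, 1 repeats from N = 2 on, as in Proposition 4.
```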
The lessons of the $N^*=1$ case thus extend to $N^*=2$. The incumbent can prefer that there be more potential entrants rather than fewer. An entrant can prefer that the number of other potential entrants be increased, so long as that increase is common knowledge and he has the option to enter before them. The amount of entry continues to depend strangely on the number of rivals, though it is no longer a pattern of odd versus even. And--- a lesson that will not be true for higher values of $N^*$--- the equilibrium for given $N$ is unique.
There is something else new with $N^*=2$.
Observation 1 noted that an incumbent who performs worse than rivals in contests can nonetheless survive without challenge in the right circumstances, i.e., if there is an even number of rivals. It did not say that it would actually be helpful, as opposed to neutral, for the incumbent to make himself weaker. It becomes possible that the incumbent might at least wish to reduce the power of incumbency in general, both for himself and for later incumbents.
\noindent
{\bf Observation 2:} {\it The incumbent may wish to forego an incumbency advantage, to discourage entry.}
To show this, we need to add incumbency advantage to the model. A natural way is to define a real number $I \geq 1 $ as the incumbency advantage and then let the probability of each of $M$ entrants winning a contest be $\frac{1}{M +I}$ and the probability of the incumbent winning be $\frac{ I}{M +I}$. Observation 2 can be demonstrated by example. Let
$N=3$, $V=100$, $c=30$, and $I=1.5$. With this incumbency advantage at most one rival can profitably enter and fight, so in effect $N^*=1$, and the unique equilibrium has exactly one entrant. The entrant's payoff will be
\begin{equation} \label{e37}
\pi = \frac{V}{M+ I} - c = \frac{100}{1+1.5 } - 30= 10
\end{equation}
If after he fought the other two rivals deviated and entered, each would have a payoff of
\begin{equation} \label{e38}
\pi = \frac{V}{M+ I} - c = \frac{100}{2+1.5} - 30 \approx -1.4
\end{equation}
If after the entrant fought, one of the other two rivals entered, the remaining rival would wait for the second entrant to fight and then enter himself, making the second entrant's payoff negative.
Compare this with the game in which $I=1$, so there is no incumbency advantage. The game now has room for two rivals to enter profitably, because the payoff from a three-way fight between the incumbent and two entrants would be $100/3-30$ and $N^*=2$. We have just analyzed this game for Proposition 4, however, and if $N=3$ and $N^*=2$, the only equilibrium has zero entry. If one rival enters and defeats the incumbent, the expected value of the prize is big enough that he will have to face entry by the two remaining rivals. The incumbent thus can deter entry altogether by eliminating the advantage of incumbency by some means such as
legislating campaign funding for challengers. Once the incumbency has no advantage, the rivals know that whoever enters first will not only have to defeat the incumbent in a contest (which does become easier), but also defeat future entrants because entry is more attractive than before. We have here another example of the idea running through this paper that a feature which would seem to discourage entry can end up encouraging it by making the first entrant safe from competition.
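The payoffs behind Observation 2 can be reproduced with a short sketch (my own check, using the winning-probability function defined above):

```python
def entrant_payoff(V, c, M, I):
    """Expected payoff of one of M entrants challenging an incumbent with
    incumbency advantage I in a single contest: V/(M + I) - c."""
    return V / (M + I) - c

# With advantage I = 1.5 (V = 100, c = 30): one entrant profits, two cannot.
print(entrant_payoff(100, 30, 1, 1.5))  # 10.0
print(entrant_payoff(100, 30, 2, 1.5))  # about -1.4
# Without the advantage (I = 1), two entrants fit, and deterrence flips.
print(entrant_payoff(100, 30, 2, 1.0))  # about 3.3
```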
\bigskip
\noindent
{\bf $N^*=3$: Room for Three Entrants}
Now suppose that $N^*=3$, so that entry in the expectation of a four-way contest between three rivals and the incumbent is profitable. Note, too, that sequential contests become possible in equilibrium, since $V/4>c$.
\noindent
{\bf N=1.} \\
The rival will enter for a payoff of $V/2-c$.
\noindent
{\bf N=2.} \\
Both rivals will enter, but there are three equilibria, which differ in the order of entry.
In equilibrium 2a, both rivals fight simultaneously, with a payoff of $V/3 - c>0$.
Off the equilibrium path, if one entrant refrains from entry, the equilibrium specifies that he enter in the next period after his deviation. Since both the incumbent and the first entrant have a higher expected payoff from a 3-way contest than a 2-way contest, they refrain from fighting until the deviating rival enters, so he has no incentive to delay.
In equilibrium 2b, the first rival enters and challenges the incumbent immediately after he appears. Once he has fought the incumbent--- but only then--- the second rival enters and fights the winner. The first rival's payoff is $V/4-c$ and the second's is $V/2-c$.
Off the equilibrium path, equilibrium 2b specifies that if the first rival delays entry and fighting, so will the second rival. The first rival's strategy is to enter and challenge in the next period after his deviation. These off-equilibrium strategies are best responses to each other.
Equilibrium 2c is like equilibrium 2b except that it is the second rival who enters and challenges the incumbent first, when he appears in period 2. The first rival refrains from entry in periods 1 and 2 and enters in the period after the second rival and the incumbent fight. The payoffs and strategies off the equilibrium path are analogous to those of equilibrium 2b.
\noindent
{\bf N=3.} \\
In equilibrium 3a all 3 rivals enter and fight, for an expected payoff of $V/4-c$.
In equilibrium 3b, none enter. If one deviated and did enter, the other two would wait until he had fought the incumbent before they entered, and then play out equilibrium 2a, 2b, or 2c in the subgame with $N=2$. In all three of those $N=2$ equilibria, both of the remaining rivals enter. Thus, a deviator in equilibrium 3b would either face one two-way contest and then one three-way contest, for a payoff of $V/6-c$, or three two-way contests, for a payoff of $V/8-c$, either alternative yielding a negative payoff.
When $N=3$ there cannot be equilibria in which just 1 or just 2 rivals enter. After they fought the incumbent, there would remain a subgame with $N=2$ or $N=1$. We know that in such subgames all equilibria have the remaining rivals entering, so entry cannot stop at just 1 or 2. Entry is by all rivals or by none--- the equilibrium upon which Observation 3 is based. We will see in Proposition 5 that no matter how big the value of the prize is, there will still exist an equilibrium with zero entry because of the first entrant's fear that he will have to compete with future entrants.
\noindent
{\bf N=4.} \\
There are two equilibria. In equilibrium 4a, the first rival enters and fights, and the other rivals stay out, the subgame being the game with $N=3$ and playing out Equilibrium 3b, which has zero entry.
In equilibrium 4b, none enter. If one deviated and did enter, the other players would not enter until he had fought the incumbent, and then they would all enter simultaneously, playing out Equilibrium 3a in the $N=3$ subgame. The first entrant's expected payoff from this two-way contest followed by a four-way contest would be $V/8-c$ , which is negative.
There cannot be an equilibrium with all 4 entering, because when $N^*=3$ their payoff would be negative if they all fought at once and Lemma 1 says that if they fought in more than one contest their payoff would be even lower. There cannot be an equilibrium with 3 entering, because then we would have an $ N=1$ subgame and the fourth would enter. There cannot be an equilibrium with 2 entering, because that would leave an $N=2$ subgame and the other two would enter.
\noindent
{\bf N=5.} \\
In equilibrium 5a, rival 1 enters and fights, and the other rivals stay out, the subgame being the game with $N=4$ and playing out equilibrium 4b, which has no entry.
In equilibrium 5b, rivals 1 and 2 enter and fight in the same contest, as in equilibrium 2a. The other rivals stay out, the subgame being the game with $N=3$ and playing out equilibrium 3b, which has no entry.
In equilibria 5c and 5d, rivals 1 and 2 enter and fight sequentially, with one or the other entering first, as in equilibria 2b and 2c. The other rivals stay out, the subgame being the game with $N=3$ and equilibrium 3b.
There cannot be an equilibrium in which 3 rivals enter, because that would leave an $N=2$ subgame and the remaining two would enter, which would reduce their payoff to at most $V/6-c$. There cannot be an equilibrium in which 4 or 5 rivals enter, because since $N^*=3$ at most three rivals can enter and earn positive profit.
Finally, there cannot be an equilibrium in which 0 rivals enter. In such an equilibrium, suppose one rival deviated by entering and fighting. This would leave the $N=4$ subgame, which has two equilibria. In equilibrium 4a, one other rival enters, which would leave the first entrant's payoff positive at $V/4-c$, and so his deviation will have been profitable. In equilibrium 4b, no other rival enters, which makes his deviation even more profitable.
At this point, note that for particular values of $N$, there exist equilibria with zero entry for both $N^*=2$ and $N^*=3$. It was perhaps surprising that when $N^*=1$ entry was deterred despite being profitable for an isolated entrant, but it is more surprising that this remains true with a larger prize. Thus we have Observation 3.
\noindent
{\bf Observation 3}: {\it Even if the value of the prize is more than four times the entry cost, there can exist equilibria with no entry.}
\bigskip
\noindent
{\bf $N^* >3$: Room for Four or More Entrants}
The multiple equilibria for $N^* =3$ show
how the number of equilibria expands with $N^*>2$. The increased number of subgames in the bigger games generates an increased number of alternatives for behavior off the equilibrium path, which in turn increases the variety of equilibrium behavior. Proposition 5 tells us that for games with a prize big enough that $N^* \geq 4$ and enough rivals, there exist equilibria with any number of entrants from 0 to $N^*$.
\noindent
{\bf Proposition 5: } {\it If $N^* \geq 4$ is the maximal number of rivals who could profitably enter and fight in one contest, then for any number $N$ of rivals, if $N >N^*$
there exists an equilibrium with zero entry and if $N \geq N^*+ Integer(N^*/2)+2 $ there exist equilibria with any number from zero to $N^* $ of entrants.}
\noindent
{\bf Proof: }
The proof will proceed by showing: (1) for $N = m N^*$ and any positive integer $m$ there is a zero-entry equilibrium; (2) for $N \in [mN^*+ Int(N^*/2) +2, (m+1) N^*] $ there is a zero-entry equilibrium; (3) for $N>N^*$ there is a zero-entry equilibrium; and (4) for $N \geq N^*+ Int(N^*/2) +2$ there exist equilibria with any number from zero to $N^* $ of entrants.
\noindent
(1) For $N = m N^*$ for any integer $m \geq 1$, there is a zero-entry equilibrium. Suppose $N =mN^*$. An equilibrium with zero entry can specify that if one rival does enter, $N^*-1$ followers enter too, but only after he fights, and that they then fight together in one contest. If $m=1$ then under this strategy all rivals have entered by that point, and the followers' payoffs are positive by definition of $N^*$. As for the deviator, he would have a .5 chance of defeating the incumbent and then would have the same expected payoff as each follower, so his expected payoff is $.5(\frac{V}{N^*}) -c$. This is negative if
\begin{equation} \label{e33}
.5 \frac{V/c}{N^*} -1 <0,
\end{equation}
which, since $V/c < N^*+2$ by the definition of $N^*$, is true if $ \frac{N^*+2}{2N^*}-1<0$, which holds because the proposition postulates $N^* \geq 4 > 2$.
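As a numerical sanity check on this step (my own sketch, not part of the proof), the deviator's payoff $.5(V/N^*)-c$ is indeed negative for ratios $V/c$ in the relevant range:

```python
from math import floor

def deviator_payoff(V, c):
    """Deviator against the zero-entry equilibrium of part (1): he wins the
    opening two-way contest with probability .5 and then shares a contest
    among N* contestants, so his expected payoff is .5(V/N*) - c."""
    n_star = floor(V / c - 1)
    return 0.5 * (V / n_star) - c

# Negative throughout the range N* >= 4, for example:
print(deviator_payoff(5.0, 1.0))  # N* = 4: 0.5*1.25 - 1 = -0.375
print(deviator_payoff(5.9, 1.0))  # N* = 4: 0.5*1.475 - 1 = -0.2625
```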
If $m=2$ there would be $N^*$ rivals left to enter after the initial deviating entrant and the $N^*-1$ followers, but we have shown that there is an equilibrium for the $m=1$ game in which there is no entry, and we can use that for the $m=1$ subgame of the $m=2$ game. The equilibrium strategies support the outcome of zero entry in the game with $m'N^*$ rivals if the game with $(m'-1)N^*$ rivals has zero entry, so by induction any game with $N=mN^*$ has a zero-entry equilibrium.
\noindent
(2) For $N \in [mN^*+ Int(N^*/2) +2, (m+1) N^*] $ there is a zero-entry equilibrium.
Suppose that $N= mN^*+ Int(N^*/2) +2$. For the zero-entry equilibrium, specify that if a deviator enters, exactly $ Int(N^*/2)+1 $ other rivals enter after he fights, and that they wait to fight each other until the entire group has entered.
Once they have all entered and fought, the subgame has $mN^*$ rivals, and we can specify that the equilibrium follows the zero-entry equilibrium for that subgame, which part (1) of the proof showed does exist.
Each follower has a payoff of $\frac{V}{ Int(N^*/2) +2 }-c$, since the group contest includes the winner of the deviator's fight, and this is positive because the denominator is no greater than $N^*+1$ and by definition of $N^*$, $\frac{V}{N^* +1} -c\geq 0$. The deviator has a payoff of $.5 \frac{V}{ Int(N^*/2) +2 }-c$ because he must fight the incumbent too. This is negative because it is no greater than $\frac{V}{N^*+3} -c $, which is negative since $V/c < N^*+2$ by definition of $N^*$.
\noindent
(3) For $N>N^*$ there is a zero-entry equilibrium.
For $N = m N^* +1$, this is easy to see: If a deviator enters, specify that after he fights he is followed by $N^*$ rivals who enter and fight together. They each have payoff $V/N^*-c$, which is positive by definition of $N^*$, and he has payoff $.5V/ N^* -c$, which is negative.
The crucial step is to show that
for $N = mN^*+2$ there is a zero-entry equilibrium. If a deviator enters, specify that he is followed by $k$ rivals who enter only after he fights, and that they fight in sequence, with each rival fighting before the next rival enters. Choose $k$ to be the highest integer such that $ .5^{k-1}V -c \geq 0$, so that the first rival entering after the deviator has a non-negative payoff, and such that subtracting $k$ from $N = mN^*+2$ does not leave fewer than $(m-1)N^*+ Int(N^*/2) +2$ rivals in the subgame, so that part (2) of the proof assures us that the subgame has a zero-entry equilibrium. The question is whether the deviator's payoff is negative when $k$ takes its upper bound. If the binding constraint is the follower's payoff, then $k$ is the highest level at which the payoff to the first rival entering after the deviator is non-negative, and the payoff to the deviator, with only half that rival's expected prize value, will be negative.
If we consider higher values of $N$, with $mN^*+3, mN^*+4,... mN^*+ Int(N^*/2) +1$, it becomes easier to have enough rivals follow the deviator to both make his payoff negative and end up with a zero-entry subgame. Thus, if we can show that deviation is unprofitable when $N = mN^*+2$, we can show it for higher values of $N$.
We will start by treating $N^*=4, 5$ and 6 with $N = N^*+2$ as special cases (we will omit the $mN^* $ prefix, since the analysis is exactly parallel for any value of $m$). First, consider $N^*=4$ with $N = 6$. Attaining a subgame of size $Int(N^*/2) +2 $ is the same as attaining a subgame of size $N=4$, which would only allow for one rival to follow the deviator, so we cannot use $k$ based on part (1) of the proof to find a number of entrants that would take us to a zero-entry subgame. Such a subgame nonetheless exists.
For a zero-entry equilibrium, let a deviator's entry be followed by 2 entrants, taking us to the $N= 3$ subgame. Each of the 2 entrants would have a payoff of $V/3-c$ and the deviator's payoff would be $V/6-c$, which is negative for $N^*=4$. Then, we could specify that entry in the $N=3$ subgame is followed by the other two rivals waiting for the entrant to fight and then both entering themselves and fighting. They would have payoffs of $V/3-c$, and the deviator would have the payoff of $V/6-c<0$.
Second, consider $N^*=5$ with $N=7$. The subgame with $ Int(N^*/2) +2 $ rivals has 4 rivals. For a zero-entry equilibrium, let a deviator's entry be followed by 2 other entrants who fight in separate contests in sequence after the deviator fights. The second entrant's payoff will be $V/4-c>0$ but the deviator's payoff will be $V/8-c<0$.
Third, consider $N^*=6$ with $N=8$. The subgame with $ Int(N^*/2) +2 $ rivals has 5 rivals. For a zero-entry equilibrium, let a deviator's entry be followed by 2 other entrants who fight in separate contests in sequence after the deviator fights. The second entrant's payoff will be $V/4-c>0$ but the deviator's payoff will be $V/8-c<0$.
The maximum number of following entrants $k$ that can be subtracted from $mN^*+2$ to give us a subgame with $(m-1)N^*+ Int(N^*/2) +2$ is the same for every odd value of $N^*$ and the even value that succeeds it, since the $Int()$ operator rounds its argument down to the nearest integer. When $N^*=7$ or 8 we subtract $k\leq 3$ from $N^*+2$ to reach $N = Int(N^*/2) +2$; when $N^*=9$ or 10, we subtract $k \leq 4$; and so forth.
To deter entry when $k$ other rivals follow the deviator's entry, we need his payoff to be negative. When $N^*$ increases by two, the bound on $k$ rises by one, so the deviator's payoff falls by $.5^{k }V- .5^{k+1 }V$ if $k$ reaches its bound. This rate of decline in the deviator's payoff is increasing in $k$ and thus in $N^*$. Going the other way, to make deterring entry more difficult, when $N^*$ increases, the necessary probability of winning a contest that generates zero expected payoff for the deviator also falls. That probability falls by $ V/N^*- V/(N^*+2)$ when $N^*$ increases by two, which is decreasing in $N^*$. Since we have established for $N^*=5$ and $N^*=6$ that the need to reach a zero-entry post-entry subgame allows enough rivals to enter to drive the deviator's payoff negative, we can conclude that for larger $N^*$ it is even easier to reach a zero-entry subgame and drive the deviator's payoff negative.
\noindent
(4) For $N \geq N^*+ Int(N^*/2)+2$ there exist equilibria with any number from zero to $N^* $ of entrants. Suppose $N \geq mN^*+ Int(N^*/2)+2$ for $m \geq 1$ and we wish to construct an equilibrium with $q$ entrants, $q \in \{0,1,... N^*\}$. In equilibrium, let the first $q$ rivals enter, and challenge the incumbent immediately once all have entered. Each will have a payoff of $V/(q+1)-c$, which is at least $V/(N^*+1)-c$ and thus non-negative by definition of $N^*$. This will leave at least $mN^*+Int(N^*/2)+2-q$ rivals, which is at least $(m-1)N^*+ Int(N^*/2)+2$ because $q \leq N^*$. In step (2) we proved that there exists a zero-entry equilibrium for subgames of this size. Thus, we can support the equilibrium with $q$ entrants by the out-of-equilibrium strategy that supports the zero-entry equilibrium in the remaining subgame.
$\blacksquare$
The case of $N^* \geq 4$ is different from $N^*=3$. With $N^*=3$, we do not get zero-entry equilibria for $N=N^*+1$, for example, whereas we do for $N^* \geq 4$. The reason is the increasing multiplicity of equilibria as $N^*$ increases, which allows for more and more equilibria in the subgames that support particular equilibrium outcomes. Strange conclusions are perhaps not surprising with so many equilibria, but I will make one final observation, since it has an intuition that extends beyond a contrived example.
\noindent
{\bf Observation 4:} {\it A rival can be better off if his probability of winning a contest is reduced. }
Observation 4 is stronger than
Observation 2, which said that the incumbent could sometimes benefit by reducing the incumbency advantage for himself and future incumbents, and its reasoning is different. Again, let us use an example, this time with
$N=3$, $V=100$, and $c= 20$, so $N^*=3$. As with incumbency advantage, we need to specify how a contestant's power enters his winning probability, and we will use the same function but with each player having his own individual power. Let the probability that contestant $j$ wins a four-way contest be $\frac{p_j}{p_{inc}+ p_1+p_2+p_3 }$.
Let $p_{inc}=p_1=p_2=p_3 =1 $ initially, corresponding to our original game with all contestants equal. There will be two equilibria. In one, all three rivals enter, for payoffs of $100/4 - 20=5$. In the other, none enter. If one did, the other two would wait for him to fight, and then enter themselves, making the first entrant's payoff $.5 (100/3) -20 = -3 \frac{1}{3}$.
Suppose, however, that
rival 1 is weaker than the others, with $p_1=.5$, so in a 4-way fight his probability of winning is $\frac{.5}{3.5}$ and each of the other three has probability $\frac{1}{3.5}$. There is no longer an equilibrium in which all three entrants fight at once, since rival 1's payoff would be $ (1/7) 100 -20 \approx -6. $ There is also no equilibrium in which rival 1 enters before the other two rivals enter and fight, since his payoff would be $ \frac{.5}{1.5} \frac{.5}{2.5} (100) - 20 = -13 \frac{1}{3}$. And there is no equilibrium in which rival 1 enters and fights together with another rival after which the remaining rival enters. Rival 1's payoff would again be $ -13 \frac{1}{3}$ since he would again be fighting both a two-way and a three-way contest to win.
There is, however, an equilibrium in which rivals 2 and 3 enter and fight the incumbent, after which rival 1 also enters and fights the winner. The second fight is between a player with power 1 and rival 1 with his power of .5, so rival 1 has probability $\frac{.5}{1.5}$ of winning and the other player has probability $\frac{1}{1.5}$. Rivals 2 and 3 each have probability 1/3 of winning the first contest, so each has a payoff $ (2/3)(1/3) (100) -20 \approx 2$. Rival 1 has payoff $ (1/3)(100) -20 = 13 \frac{1}{3}$. This is higher than either of the payoffs he would receive in equilibria of the original game--- 0 or 5. Rival 1's payoff is higher now that he is weaker.
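The payoffs in this example can be checked with a short sketch (my own illustration; the variable names are invented, and the power parameters follow the example above):

```python
def win_prob(powers, j):
    """Probability that contestant j wins a contest among the listed powers."""
    return powers[j] / sum(powers)

V, c = 100, 20
power_inc, power_2, power_3, power_1 = 1.0, 1.0, 1.0, 0.5  # rival 1 weakened

# First contest: rivals 2 and 3 versus the incumbent.
pr_rival2_wins = win_prob([power_inc, power_2, power_3], 1)   # 1/3

# Second contest: rival 1 versus the first-round winner (power 1).
pr_rival1_wins_second = win_prob([1.0, power_1], 1)           # 1/3

payoff_rival2 = pr_rival2_wins * (1 - pr_rival1_wins_second) * V - c
payoff_rival1 = pr_rival1_wins_second * V - c

print(round(payoff_rival2, 2))  # 2.22
print(round(payoff_rival1, 2))  # 13.33: higher than 0 or 5 in the original game
```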
The intuition behind Observation 4 is that weakness is a way for a rival to credibly commit not to enter for the first, big, contest. Having to fight only in the second round is enough of a benefit to be worth the cost of having lower probability of winning that contest.
This has some similarity to the Three Stooges Duel, a version of the game in Shubik's 1954 paper, ``Does the Fittest Necessarily Survive?'' Players Moe, Curly, and Larry are fighting a three-way duel. In each round, players shoot once simultaneously at one other player. Moe hits his target with probability .9, Curly with probability .8, and Larry with probability .7. In equilibrium, in the first round (and succeeding ones until someone is hit), Moe and Curly shoot at each other and Larry shoots at Moe. When either Moe or Curly is hit, the game becomes a two-way duel between the survivor and Larry. Larry has the highest expected payoff, despite being the weakest.
The difference is that in the present game player 1 avoids being involved at all in the first contest. The other players know that it is useless to wait for him to enter before they fight, since his payoff would be too low, so they go ahead without him, knowing that the winner will have to face him. They would prefer to have him participate in the first contest, but they cannot force him, whereas Larry is safe but dangerous in the first round of the Three Stooges Duel. What the two games have in common, though, is the idea that weakness can be an advantage by turning the other player's attention to each other for the first part of the game.
\bigskip
\noindent
{\bf 6. Concluding Remarks}
The most curious result of this paper is that a rival enters if the total number of rivals is odd but stays out if it is even. Is this too contrived, a result special to the model, something that would make a good exercise but yields no real insight? The robustness of the result to simultaneous entry and uncertainty just makes it more outrageous. How can such a peculiar (not to say odd) result be robust?
Whether it is robust to situations of large $N$ or not, I do think the model helps us understand a useful idea, or, rather, a combination of two ideas. The first is that a rival will not challenge an incumbent if he knows that even in the event that he wins he will run a high risk of being toppled by someone else. The second is that a rival will indeed challenge the incumbent if he thinks that later challengers will fear to imitate him because they are thinking about the precariousness of winning.
A potential Democratic candidate for President will not challenge the party leader if he thinks he is too weak to hold front-runner status even if he wins, unless his motivation is something other than becoming President, such as Eugene McCarthy's 1968 Vietnam War challenge to President Johnson. A general will not rebel against the emperor if he thinks that even if he wins another general will rebel against him. A gang deputy will not try to kill the head gangster if he thinks the others will not accept his leadership without challenge. The best defense for a weak leader may be to have strong enough alternative leaders that none of them is confident he can withstand the others. Even a strong leader is well-advised to use this kind of deterrence. Adolf Hitler was known for carefully maintaining several different power groups in Germany--- the army, the SS, the Nazi Party, the Luftwaffe, and others. If one group overthrew Hitler, it would have had to deal with the others.
I have used a political setting, but another setting for the model is natural monopoly, with the decision being whether to incur the fixed cost of entry rather than how long to remain in the market. The prospect of future entry would make this a pre-emption game when the number of rivals was odd and would block entry entirely when the number was even.
Still another setting is extortion (see Choi \& Thum (2004) and Konrad \& Skaperdas (1998)). A blackmailer incurs risk in approaching his victim---the entry cost. When he negotiates his payment, that is analogous to the fight, a bargaining game that would split the surplus rather than a two-way contest. The bargaining game depends on how many blackmailers will approach the victim in the future. Shleifer \& Vishny (1993) noted that the cost of corruption is inefficiently high even for the officials being bribed if many bribes must be paid instead of one big bribe. We see the opposite in the present setting: if the victim's funds must be split among extortioners, each bribe might become too small to be worth the risk. The essential elements are that several players must consider incurring a fixed cost of entry into a contest or some other splitting of a prize and that they must think about successive entrants as well as the direct outcome for themselves.
\bigskip
\noindent
{\bf References}
Argenziano, Rossella \& Philipp Schmidt-Dengler (2014) ``Clustering in N-Player Preemption Games,''
{\it Journal of the European Economic Association},
12: 368-396.
Baye, Michael R., Dan Kovenock \& Casper G. de Vries (1999) ``The Incidence of Overdissipation in Rent-Seeking Contests,'' {\it Public Choice}, 99: 439-454.
Bernheim, B. Douglas (1984)
``Strategic Deterrence of Sequential Entry into an Industry,''
{\it The RAND Journal of Economics}, 15: 1-11.
Blackwell, David (1949) ``The Noisy Duel, One Bullet Each, Arbitrary Nonmonotone Accuracy,''
Rand Publication RM-131, Rand Corp., Santa Monica, CA.

Brunnermeier, Markus K. \& John Morgan (2010) ``Clock Games: Theory and Experiments,''
{\it Games and Economic Behavior}, 68: 532-550.

Bulow, Jeremy \& Paul Klemperer (1999)
``The Generalized War of Attrition,''
{\it American Economic Review}, 89: 175-189.

Choi, Jay Pil \& Marcel Thum (2004)
``The Economics of Repeated Extortion,''
{\it The RAND Journal of Economics}, 35: 203-223.

Eaton, B. Curtis \& Roger Ware (1987) ``A Theory of Market Structure with Sequential Entry,''
{\it The RAND Journal of Economics}, 18: 1-16.
Fullerton, Richard L. \& R. Preston McAfee (1999)
``Auctioning Entry into Tournaments,''
{\it Journal of Political Economy}, 107: 573-605.
Gul, Faruk \& Russell Lundholm (1995) ``Endogenous Timing and the Clustering of Agents' Decisions,''
{\it Journal of Political Economy},
103: 1039-1066.
Harbaugh, Rick \& Tilman Klumpp (2005) ``Early Round Upsets and Championship Blowouts,''
{\it Economic Inquiry}, 43: 316-329.
Hendricks, Ken, Andrew Weiss \& Charles Wilson (1988) ``The War of Attrition in Continuous Time with Complete Information,'' {\it International Economic Review}, 29: 663-680.
Konrad, Kai A. \& Stergios Skaperdas (1998)
``Extortion,''
{\it Economica}, New Series, 65: 461-477.
Laraki, Rida \& Eilon Solan (2005) ``The Value of Zero-Sum Stopping Games in Continuous Time,''
{\it SIAM Journal on Control and Optimization}, 43: 1913-1922.
Maynard Smith, John (1974) ``The Theory of Games and the Evolution of Animal Conflicts,''
{\it Journal of Theoretical Biology}, 47: 209-221.
Morgan, John, Henrik Orzen \& Martin Sefton (2012) ``Endogenous Entry in Contests,''
{\it Economic Theory}, 51: 435-463.
Nalebuff, Barry \& Joseph E. Stiglitz (1983)
``Prizes and Incentives: Towards a General Theory of Compensation and Competition,''
{\it Bell Journal of Economics}, 14: 21-43.
Park, Andreas \& Lones Smith (2008)
``Caller Number Five and Related Timing Games,''
{\it Theoretical Economics}, 3: 231-256.
Radzik, T. \& T. E. S. Raghavan (1994) ``Appendix: Duels,'' {\it Handbook of Game Theory with Economic Applications},
Vol. 2, {\it Handbooks in Economics}, 11, Robert Aumann \& Sergiu Hart, eds., Amsterdam: North-Holland, pp. 761-768.
Rasmusen, Eric (1988) ``Entry for Buyout,''
{\it The Journal of Industrial Economics}, 36: 281-299.
Rosen, Sherwin (1986) ``Prizes and Incentives in Elimination Tournaments,'' {\it American Economic Review},
76: 701-715.
Shleifer, Andrei \& Robert W. Vishny (1993) ``Corruption,'' {\it Quarterly Journal of Economics}, 108: 599-617.
Shubik, Martin (1954)
``Does the Fittest Necessarily Survive?'' in
Martin Shubik (ed.), {\it Readings in Game Theory and Political Behavior}, Garden City, NY: Doubleday, pp. 43-46.
Szymanski, Stefan (2003) ``The Economic Design of Sporting Contests,'' {\it Journal of Economic Literature},
41: 1137-1187.
Waldman, Michael (1991) ``The Role of Multiple Potential Entrants/Sequential Entry in Noncooperative Entry
Deterrence,''
{\it The RAND Journal of Economics}, 22: 446-453.
Yildirim, Huseyin (2005) ``Contests with Multiple Rounds,'' {\it Games and Economic Behavior}, 51: 213-227.
\end{document}