\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\reversemarginpar
\topmargin -1in
\oddsidemargin .25in \textheight 9.4in \textwidth 6.4in
\begin{document}
\parindent 24pt
\parskip 10pt
\setcounter{page}{226}
\noindent
21 November 2005. Eric Rasmusen, Erasmuse@indiana.edu. http://www.rasmusen.org.
\begin{LARGE}
\begin{center}
{\bf 8 Further Topics in Moral Hazard }
\end{center}
\end{LARGE}
Moral hazard deserves two chapters. As we will see, adverse selection will
sneak two in also, since signalling is really just an elaboration of the
adverse selection model, but moral hazard is perhaps even more important,
since it is the study of incentives, one of the central concepts of economics. In
this chapter we will go through a hodge-podge of special situations in
which Chapter 7's paradigm of providing the right incentives for effort by
satisfying a participation constraint and an incentive compatibility constraint
does not apply so straightforwardly.
The chapter begins with efficiency wages-- high wages provided when incentive
compatibility is so important that the principal is willing to abandon a tight
participation constraint. Section 8.2 will be about tournaments-- situations
where competition between two agents can be used to simplify the optimal
contract. After an excursion into various institutions, we will go to a big
problem for incentive contracts: how does the principal restrain himself from
being too merciful to a wayward agent when mercy is not only kind but
profitable? Section 8.3 looks at some of the institutions that solve agency
problems, and Section 8.4 shows that one institution, contractual punishment,
often fails because both parties are willing to renegotiate the contract.
Section 8.5 abandons the algebraic paradigm altogether to pursue a
diagrammatic approach to the classic problem of moral hazard in insurance, and
Section 8.6 looks at another special case: the teams problem, in which the
unobservable efforts of many agents produce one observable output. Section 8.7
concludes with the multitask agency problem, in which the agent allocates his
effort among more than one task.
\vspace{1in}
\noindent
{ \bf 8.1 Efficiency Wages }
Is the aim of an incentive contract to punish the agent if he chooses the
wrong action? Not exactly. Rather, it is to create a difference between the
agent's expected payoff from right and wrong actions, something which can be
done either with the stick of punishment or the carrot of reward.
It is important to keep this in mind, because sometimes punishments are simply
not available. Consider the following game.
\begin{center}
{\bf The Lucky Executive Game}
\end{center}
{\bf Players}\\
A corporation and an executive.
\noindent
{\bf The Order of Play}\\
1 The corporation offers the executive a contract which pays $w(q) \geq 0$
depending on profit, $q$. \\
2 The executive accepts the contract, or rejects it and receives his
reservation utility of $\overline{U} = 5$. \\
3 The executive exerts effort $e$ of either 0 or 10. \\
4 Nature chooses profit according to Table 1. \\
\noindent
{\bf Payoffs}\\
Both players are risk neutral. The corporation's payoff is $q-w$. The
executive's payoff is $(w-e)$ if he accepts the contract.
\begin{center}
{\bf Table 1: Output in the Lucky Executive Game}
\begin{tabular}{ l| cc|c }
& \multicolumn{2}{c|}{\bf Probability of Outputs} & \\
{\bf Effort} & 0 & 400 & Total \\
& & & \\
\hline
& & & \\
$Low$ ($e=0$) & 0.5 & 0.5 & 1\\
& & & \\
$High$ ($e=10$) &0.1 & 0.9 & 1\\
& & & \\
\hline \end{tabular} \end{center}
Since both players are risk neutral, you might think that the first-best
can be achieved by selling the store, putting the entire risk on the agent. The
participation constraint if the executive exerts high effort is
\begin{equation} \label{e1}
0.1[w(0) -10] +0.9[w(400)-10] \geq 5,
\end{equation}
so his expected wage must be at least 15. The incentive compatibility constraint is
\begin{equation} \label{e87}
0.5 w(0) + 0.5 w(400) \leq 0.1 w(0) + 0.9w(400)-10,
\end{equation}
which can be rewritten as $0.4[w(400) - w(0)] \geq 10$, or $w(400) - w(0) \geq
25$, so the gap between the executive's wages for high and low output must be
at least 25.
A contract that satisfies both constraints is $\{w(0)= -345, w(400)= 55\}$.
But this contract is not feasible, because the game requires $w(q) \geq 0$. This
is an example of the common and realistic {\bf bankruptcy constraint}; the
principal cannot punish the agent by taking away more than the agent owns in
the first place-- zero in The Lucky Executive Game. (If the executive's
initial wealth were positive that would help a little, and perhaps that is a
reason why a company should prefer hiring rich people to poor people.) The
worst the principal can do is fire the agent. So what can be done?
What can be done is to use the carrot instead of the stick and abandon
satisfying the participation constraint as an equality. All that is needed
for constraint (\ref{e87}) is a gap of 25 between the high wage and the low
wage. Setting the low wage as low as is feasible, the corporation can use
the contract $\{w(0)= 0, w(400)= 25\}$ and induce high effort. The
executive's expected utility, however, will be $0.1(0)+ 0.9(25) - 10 =
12.5$, more than double his reservation utility of 5. He is very happy in this
equilibrium-- but the corporation is reasonably happy, too. The corporation's
payoff is $337.5\; (= 0.1 (0-0) + 0.9 (400-25))$, compared with the $195\; (= 0.5
(0-5) + 0.5 (400-5))$ it would get from paying a flat wage of 5 and inducing low effort. Since
high enough punishments are infeasible, the corporation has to use higher
rewards.
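A few lines of Python can verify these numbers (a sketch; the function names are illustrative, not part of the text):

```python
# Check the two constraints and the payoffs in the Lucky Executive Game.
# Probabilities of output 400: 0.5 under low effort, 0.9 under high effort;
# high effort costs 10, the reservation utility is 5.

def expected_wage(w0, w400, p_high=0.9):
    """Expected wage under high effort."""
    return (1 - p_high) * w0 + p_high * w400

def check_contract(w0, w400):
    """Return (incentive compatible, participation, feasible under w >= 0)."""
    ic = 0.1 * w0 + 0.9 * w400 - 10 >= 0.5 * w0 + 0.5 * w400
    participation = expected_wage(w0, w400) - 10 >= 5
    feasible = w0 >= 0 and w400 >= 0
    return ic, participation, feasible

print(check_contract(-345, 55))  # (True, True, False): works, but not feasible
print(check_contract(0, 25))     # (True, True, True): the carrot contract
print(expected_wage(0, 25) - 10)           # executive's utility: 12.5
print(0.1 * (0 - 0) + 0.9 * (400 - 25))    # corporation's payoff: 337.5
```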
Executives, of course, will be lining up to work for this corporation,
since they can get an expected utility of 12.5 there and only 5 elsewhere. If
there were some chance of the current executive dying and his job opening up,
potential successors would be willing to pass up alternative jobs in order to
be in position to get this unusually attractive job. Thus, the model generates
unemployment. These are the two parts of the idea of the {\bf efficiency wage}:
the employer pays a wage higher than that needed to attract workers, and workers
are willing to be unemployed in order to get a chance at the efficiency-wage
job.
Shapiro \& Stiglitz (1984) show in more detail how involuntary
unemployment can be explained by a principal-agent model. When all workers are
employed at the market wage, a worker who is caught shirking and fired can
immediately find another job just as good. Firing is ineffective, and effective
penalties like boiling-in-oil are excluded from the strategy spaces of legal
businesses. Becker \& Stigler (1974) suggest that workers post performance
bonds, and Gaver \& Zimmerman (1977) describe how a performance bond of 100
percent was required for contractors building the BART subway system in San
Francisco. ``Surety companies'' generally bond a contractor for five to 20 times
his net worth, at a charge of 0.6 percent of the bond per year, and absorption
of their bonding capacity is a serious concern for contractors in accepting
jobs. If workers are poor, though, bonds are impractical, and without bonds or
boiling-in-oil, the worker chooses low effort and receives a low wage.
To induce a worker not to shirk, the firm can offer to pay a premium over
the market-clearing wage, which he loses if he is caught shirking and fired. If
one firm finds it profitable to raise the wage, however, so do all firms. One
might think that after the wages equalized, the incentive not to shirk would
disappear. But when a firm raises its wage, its demand for labor falls, and
when all firms raise their wages, the market demand for labor falls, creating
unemployment. Even if all firms pay the same wage, a worker has an incentive
not to shirk, because if he were fired he would stay unemployed, and even if
there is a random chance of leaving the unemployment pool, the unemployment rate
rises sufficiently high that workers choose not to risk being caught shirking.
The equilibrium is not first-best efficient, because even though the marginal
revenue of labor equals the wage, it exceeds the marginal disutility of effort,
but it is efficient in the second-best sense. By deterring shirking, the hungry
workers hanging around the factory gates are performing a socially valuable
function (but they mustn't be paid for it!).
While the efficiency wage model does explain involuntary unemployment,
it does not explain cyclical changes in unemployment. There is no reason for
the unemployment needed to control moral hazard to fluctuate widely and create a
business cycle.
The idea of paying high wages to increase the threat of dismissal is old, and
can even be found in {\it The Wealth of Nations} (Smith [1776] p. 207). What is
new in Shapiro \& Stiglitz (1984) is the observation that unemployment is
generated by these ``efficiency wages.'' The firms behave paradoxically. They
pay workers more than necessary to attract them, and outsiders who offer to work
for less are turned away. Can this explain why ``overqualified'' jobseekers are
unsuccessful and mediocre managers are retained? Employers are unwilling to hire
someone talented, because he could find another job after being fired for
shirking, and trustworthiness matters more than talent in some jobs. The idea
also explains the paradoxical phenomenon of slaveowners paying wages to their
slaves, as happened sometimes in the American South. The slaveowner had no legal
obligation to pay his slave anything, but if he wanted careful effort from the
slave-- something important not so much for picking cotton as for such things as
carpentry-- the slaveowner had to provide positive incentives.
This discussion should remind you of Section 5.4's Product Quality Game.
There too, purchasers paid more than the reservation price in order to give
the seller an incentive to behave properly, because a seller who misbehaved
could be punished by termination of the relationship. The key characteristics
of such models are a constraint on the amount of contractual punishment for
misbehavior and a participation constraint that is not binding in equilibrium.
In addition, although The Lucky Executive Game works even with just one
period, many versions, including The Product Quality Game, rely on there being
a repeated game (infinitely repeated, or otherwise avoiding the Chainstore
Paradox). Repetition allows for a situation in which the agent could
considerably increase his payoff in one period by misbehavior such as stealing
or low quality but refrains because he would lose his position and lose all
the future efficiency wage payments.
\vspace{1in}
\noindent
{\bf 8.2 Tournaments}
\noindent Games in which relative performance is important are called {\bf
tournaments}. Tournaments are similar to auctions, the difference being that the
actions of the losers have real rather than just pecuniary effects; even the
effort of an agent who loses a tournament benefits the principal, but the losing
bids in an auction don't enrich the seller. (As we will see in Chapter 13,
though, the ``all-pay'' auction is one way to model a tournament.) Like
auctions, tournaments are especially useful when the principal wants to elicit
information from the agents. A principal-designed tournament is sometimes called
a {\bf yardstick competition} because the agents provide the measure for their
wages.
Farrell (2001) uses a tournament to explain how ``slack'' might be the major
source of welfare loss from monopoly, an old idea usually prompted by faulty
reasoning. The usual claim is that monopolists are inefficient because unlike
competitive firms, they do not have to maximize profits to survive. This
relies on the dubious assumption that firms care about survival, not profits.
Farrell makes a subtler point: although the shareholders of a monopoly maximize
profit, the managers maximize their own utility, and moral hazard is severe
without the benchmark of other firms' performances.
Let firm Apex have two possible production techniques, $Fast$ and
$Careful$. Independently for each technique, Nature chooses production cost $c=
1$ with probability $\theta$ and $c=2$ with probability $(1-\theta)$. The
manager can either choose a technique at random or investigate the costs of both
techniques at a utility cost to himself of $\alpha$. The shareholders can
observe the resulting production cost, but not whether the manager investigates.
If they see the manager pick $Fast$ and a cost of $c=2$, they do not know
whether he chose it without investigating, or investigated both techniques and
found they were both costly. The wage contract is based on what the
shareholders can observe, so it takes the form $(w_1,w_2)$, where $w_1$ is the
wage if $c=1$ and $w_2$ if $c=2$. The manager's utility is $\log (w)$ if he does
not investigate and $\log (w) -\alpha$ if he does, or the reservation utility
of $\log ( \bar{w})$ if he quits.
If the shareholders want the manager to investigate, the contract must
satisfy the self-selection constraint
\begin{equation} \label{e4}
U({\rm not \; investigate }) \leq U ({\rm investigate }).
\end{equation}
If the manager investigates, he still fails to find a low-cost technique with
probability $(1-\theta)^2$, so inequality (\ref{e4}) is equivalent to
\begin{equation} \label{e5}
\theta {\rm log}\;(w_1) + (1-\theta) {\rm log}\; (w_2) \leq [1- (1-\theta) ^2]
{\rm log} \;(w_1) + (1-\theta)^2 {\rm log} \;(w_2) - \alpha.
\end{equation}
The self-selection constraint is binding, since the shareholders want to keep
the manager's compensation to a minimum. Turning inequality (\ref{e5}) into an
equality and simplifying yields
\begin{equation} \label{e6}
{\displaystyle \theta (1-\theta) {\rm log}\; \left( \frac{w_1}{w_2} \right) =
\alpha.}
\end{equation}
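To see the simplification, turn (\ref{e5}) into an equality and collect the
log terms: the coefficient on ${\rm log}\;(w_1)$ is $[1-(1-\theta)^2] - \theta
= \theta(1-\theta)$ and the coefficient on ${\rm log}\;(w_2)$ is $(1-\theta)^2
- (1-\theta) = -\theta(1-\theta)$, so
\begin{equation*}
\theta(1-\theta)\, {\rm log}\;(w_1) - \theta(1-\theta)\, {\rm log}\;(w_2) =
\alpha,
\end{equation*}
which is equation (\ref{e6}).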
The participation constraint, which is also binding, is $ U(\bar{w}) = U ({\rm
investigate})$, or
\begin{equation} \label{e7}
{\rm log}\; (\bar{w}) =[1- (1-\theta)^2] {\rm log}\; (w_1) + (1-\theta)^2 {\rm
log}\; (w_2) - \alpha.
\end{equation}
Solving equations (\ref{e6}) and (\ref{e7}) together for $w_1$ and $w_2$
yields
\begin{equation} \label{e8}
\begin{array}{rl}
w_1 = & \bar{w}e^{\alpha/\theta},\\
w_2 = & \bar{w}e^{-\alpha/(1-\theta)},\\
\end{array}
\end{equation}
where $e$ does not here denote effort, but the base of the natural logarithm,
$e \approx 2.72$. The expected cost to the firm is
\begin{equation} \label{e9}
[1- (1-\theta)^2] \bar{w}e^{\alpha/\theta} + (1-\theta)^2 \bar{w}
e^{-\alpha/(1-\theta)}.
\end{equation}
If the parameters are $\theta = 0.1$, $\alpha = 1$, and $\bar{w} = 1$, the
rounded values are $w_1 =22,026$ and $w_2 = 0.33$, and the expected cost is
$4,185$. Quite possibly, the shareholders decide it is not worth making the
manager investigate.
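These rounded values can be reproduced with a short Python sketch (variable names are mine):

```python
import math

# Solo-contract wages for theta = 0.1, alpha = 1, wbar = 1.
theta, alpha, wbar = 0.1, 1.0, 1.0

w1 = wbar * math.exp(alpha / theta)           # wage when c = 1
w2 = wbar * math.exp(-alpha / (1 - theta))    # wage when c = 2

# The binding self-selection constraint holds at these wages:
assert abs(theta * (1 - theta) * math.log(w1 / w2) - alpha) < 1e-9

expected_cost = (1 - (1 - theta)**2) * w1 + (1 - theta)**2 * w2
print(round(w1), round(w2, 2), round(expected_cost))  # 22026 0.33 4185
```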
But suppose that Apex has a competitor, Brydox, in the same situation. The
shareholders of Apex can threaten to boil their manager in oil if Brydox adopts
a low-cost technology and Apex does not. If Brydox does the same, the two
managers are in a prisoner's dilemma, both wishing not to investigate, but each
investigating from fear of the other. Apex's forcing contract specifies $w_1=
w_2$ to fully insure the manager, and boiling-in-oil if Brydox has lower costs
than Apex. The contract need satisfy only the participation constraint that
$\log (w) - \alpha = \log (\bar{w})$, so $w = \bar{w}e^{\alpha} \approx 2.72$
and Apex's cost of extracting
the manager's information is only about $2.72$, not $4,185$. Competition raises
efficiency, not through the threat of firms going bankrupt but through the
threat of managers being fired.
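The comparison between the two regimes can be checked numerically (a sketch; variable names are illustrative):

```python
import math

# Expected cost of inducing investigation without and with a yardstick
# competitor, for theta = 0.1, alpha = 1, wbar = 1.
theta, alpha, wbar = 0.1, 1.0, 1.0

solo_cost = ((1 - (1 - theta)**2) * wbar * math.exp(alpha / theta)
             + (1 - theta)**2 * wbar * math.exp(-alpha / (1 - theta)))

# With a competitor, a flat wage w satisfying log(w) - alpha = log(wbar),
# backed by boiling-in-oil off the equilibrium path, suffices.
tournament_wage = wbar * math.exp(alpha)

print(round(solo_cost))           # 4185
print(round(tournament_wage, 2))  # 2.72
```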
\vspace{1in}
\noindent
{ \bf *8.3 Institutions and Agency Problems}
\noindent
{\bf Ways to Alleviate Agency Problems}
\noindent Usually when agents are risk averse, the first-best cannot be
achieved, because some tradeoff must be made between providing the agent with
incentives and keeping his compensation from varying too much for reasons not
under his control such as between states of the world, or because it is not
possible to punish him sufficiently. We have looked at a number of different
ways to solve the problem, and at this point a listing might be useful. Each
method is illustrated by application to the particular problem of executive
compensation, which is empirically important, and interesting both because
explicit incentive contracts are used and because they are not used more often
(see Baker, Jensen \& Murphy [1988]).
\noindent
1 {\bf Reputation} (sections 5.3, 5.4, 6.4, 6.6). \\ Managers are promoted
on the basis of past effort or truthfulness.
\noindent
2 {\bf Risk-sharing contracts} (sections 7.2, 7.3, 7.4 ). \\ The executive
receives not only a salary, but call options on the firm's stock. If he reduces
the stock value, his options fall in value.
\noindent 3 {\bf Boiling in oil} (section 7.4). \\ If the firm would
become unable to pay dividends only if the executive shirked and was unlucky, the
threat of firing him when the firm skips a dividend will keep him working hard.
\noindent
4 {\bf Selling the store} (section 7.4). \\ The managers buy the firm in
a leveraged buyout.
\noindent
5 {\bf Efficiency wages} (section 8.1).\\ To make him fear losing his job, the
executive is paid a higher salary than his ability warrants (cf. Rasmusen
[1988b] on mutual banks).
\noindent
6 {\bf Tournaments} (section 8.2). \\ Several vice presidents compete and
the winner succeeds the president.
\noindent
7 {\bf Monitoring} (section 3.4). \\ The directors hire a consultant to
evaluate the executive's performance.
\noindent
8 {\bf Repetition}. \\ Managers are paid less than their marginal
products for most of their career, but are rewarded later with higher salaries
or generous pensions if their career record has been good.
\noindent
9 {\bf Changing the type of the agent}. \\ Older executives encourage the younger
by praising ambition and hard work.
We have talked about all but the last two solutions. Repetition enables
the contract to come closer to the first-best if the discount rate is low
(Radner [1985]). Production Game V failed to attain the first-best in
section 7.2 because output depended on both the agent's effort and random noise.
If the game were repeated 50 times with independent drawings of the noise, the
randomness would average out and the principal could form an accurate estimate
of the agent's effort. This is, in a sense, begging the question, by saying
that in the long run effort can be deduced after all.
Changing the agent's type by increasing the direct utility of desirable
or reducing the utility of undesirable behavior is a solution that has
received little attention from economists, who have focussed on changing the
utility by changing monetary rewards. Akerlof (1983), one of the few papers on
the subject of changing type, points out that the moral education of children,
not just their intellectual education, affects their productivity and success.
The attitude of economics, however, has been that while virtuous agents exist,
the rules of an organization need to be designed with the unvirtuous agents in
mind. As the Chinese thinker Han Fei Tzu said some two thousand years ago,
\begin{quotation} \begin{small}
Hardly ten men of true integrity and good faith can be found today, and yet
the offices of the state number in the hundreds. If they must be filled by men
of integrity and good faith, then there will never be enough men to go around;
and if the offices are left unfilled, then those whose business it is to govern
will dwindle in numbers while disorderly men increase. Therefore the way of the
enlightened ruler is to unify the laws instead of seeking for wise men, to lay
down firm policies instead of longing for men of good faith. (Han Fei Tzu
[1964], p. 109 from his chapter, ``The Five Vermin'')
\end{small} \end{quotation}
The number of men of true integrity has probably not increased as fast as the
size of government, so Han Fei Tzu's observation remains valid, but it should
be kept in mind that honest men do exist and honesty can enter into rational
models. There are tradeoffs between spending to foster honesty and spending for
other purposes, and there may be tradeoffs between using the second-best
contracts designed for agents indifferent about the truth and using the simpler
contracts appropriate for honest agents.
\bigskip
\noindent
{\bf Government Institutions and Agency Problems}
The field of law is well suited to analysis by principal-agent models. Even
in the nineteenth century, Holmes (1881, p. 31) conjectured in {\it The Common
Law} that the reason why sailors at one time received no wages if their ship was
wrecked was to discourage them from taking to the lifeboats too early instead of
trying to save it. As is typical, incentive compatibility and insurance work in
opposite directions here. If sailors are more risk averse than ship owners,
and pecuniary advantage would not add much to their effort during storms, then
the owner ought to provide insurance to the sailors by guaranteeing them wages
whether the voyage succeeds or not. If not, the old rule could be efficient.
Another legal question is who should bear the cost of an accident, the victim
(for example, a pedestrian hit by a car) or the person who caused it (the
driver). The economist's answer is that it depends on who has the most severe
moral hazard. If the pedestrian could have prevented the accident at the lowest
cost, he should pay; otherwise, the driver. Indeed, as Coase (1960) points out,
the term ``cause'' can be misleading: both driver and victim cause the accident,
in the sense that either of them could prevent it, though at different costs.
This {\bf least-cost avoider principle} is extremely useful in the economic
analysis of law and a major theme of Posner's classic treatise (Posner
[1992]).
Criminal law is also concerned with tradeoffs between incentives and
insurance. Holmes (1881, p. 40) notes approvingly that Macaulay's draft of
the Indian Penal Code made breach of contract for the carriage of passengers a
criminal offense. Palanquin-bearers were too poor to pay damages for abandoning
their passengers in desolate regions, so the power of the State was needed to
provide for heavier punishments than bankruptcy. In general, however, the legal
rules actually used seem to diverge more from optimality in criminal law than
civil law. If, for example, there is no chance that an innocent man can be
convicted of embezzlement, boiling embezzlers in oil might be good policy, but
most countries would not allow this. Taking the example a step further, if the
evidence for murder is usually less convincing than for embezzling, our analysis
could easily indicate that the penalty for murder should be less, but such
reasoning offends the common notion that the severity of punishment should be
matched with harm from the crime.
\bigskip
\noindent
{\bf Private Institutions and Agency Problems}
While agency theory can be used to explain and perhaps improve government
policy, it also helps explain many curious private institutions. Agency
problems are an important hindrance to economic development, and may explain a
number of apparently irrational practices. Popkin (1979, pp. 66, 73, 157) notes
a variety of these. In Vietnam, for example, absentee landlords were more
lenient than local landlords, but improved the land less, as one would expect of
principals who suffer from informational disadvantages {\it vis-\`{a}-vis} their
agents. Along the pathways in the fields, farmers would plant early-harvesting
rice that the farmer's family could harvest by itself in advance of the regular
crop, so that hired labor could not grab handfuls as they travelled. In
thirteenth century England, beans were seldom grown, despite their nutritional
advantages, because they were too easy to steal. Some villages tried to solve
the problem by prohibiting anyone from entering the beanfields except during
certain hours marked by the priest's ringing the church bell, so everyone could
tend and watch their beans at the same official time.
In less exotic settings, moral hazard provides another reason besides tax
advantages for why employees take some of their compensation in fringe benefits.
Professors are granted some of their wages in university computer time because
this induces them to do more research. Having a zero marginal cost of computer
time is a way around the moral hazard of slacking on research, despite being a
source of moral hazard in wasting computer time. A less typical example is
the bank in Minnesota which, concerned about its image, gave each employee \$100
in credit at certain clothing stores to upgrade their style of dress. By
compromising between paying cash and issuing uniforms the bank could hope to
raise both profits and employee happiness. (``The \$100 Sounds Good, but What
Do They Wear on the Second Day?'' {\it The Wall Street Journal}, October 16,
1987, p. 17.)
Longterm contracts are an important occasion for moral hazard, since so many
variables are unforeseen, and hence noncontractible. The term {\bf opportunism}
has been used to describe the behavior of agents who take advantage of
noncontractibility to increase their payoff at the expense of the principal (see
Williamson [1975] and Tirole [1986]). Smith may be able to extract a greater
payment from Jones than was agreed upon in their contract because Smith can
threaten to harm Jones by failing to perform his contractual duties unless the
contract is renegotiated. This is called {\bf hold-up potential} (Klein,
Crawford, \& Alchian [1978]), often modelled in the style of Hart and Moore's
seminal 1990 article on who should own assets. Hold-up potential can even make
an agent introduce competing agents into the game, if competition is not so
extreme as to drive rents to zero. Michael Granfield tells me that Fairchild
once developed a new patent on a component of electronic fuel injection systems
that it sought to sell to another firm, TRW. TRW offered a much higher price if
Fairchild would license its patent to other producers, fearing the hold-up
potential of buying from just one supplier. TRW could have tried writing a
contract to prevent hold-up, but knew that it would be difficult to prespecify
all the ways that Fairchild could cause harm, including not only slow delivery,
poor service, and low quality, but also sins of omission like failing to
sufficiently guard the plant from shutdown due to accidents and strikes.
It should be clear from the variety of these examples that moral hazard is a
common problem. Now that the first flurry of research on the principal-agent
problem has finished, researchers are beginning to use the new theory to study
specific institutions and practices like these that were formerly relegated to
descriptive ``soft'' scholarly work in management, law, and anthropology.
\vspace{1in}
\noindent
{\bf *8.4 Renegotiation: The Repossession Game}
\noindent
Renegotiation comes up in two very different contexts in game theory. Chapter 4
looked at when players can coordinate on pareto-superior subgame equilibria
that might be pareto inferior for the entire game, an idea linked to the
problem of selecting among multiple equilibria. This section looks at a
different context, in which the players have signed a binding contract, but in
a subsequent subgame both might agree to scrap the old contract and write a
new one, using the old contract as the starting point in their negotiations.
Here, the questions are not about equilibrium selection but about which
strategies should be part of the game. The issue frequently arises in
principal-agent models, especially in the hidden knowledge literature starting
from Dewatripont (1989) and Hart \& Moore (1990). Here we will use a model of
hidden actions to illustrate renegotiation, a model in which a bank that wants
to lend money to a consumer to buy a car must worry about whether he will work
hard enough to repay the loan.
\begin{center}
{\bf The Repossession Game}
\end{center}
{\bf Players}\\
A bank and a consumer.
\noindent
{\bf The Order of Play}\\
1 The bank can do nothing or it can offer the consumer an auto loan which
allows him to buy a car that costs 11 but requires him to pay back $L$ or lose
possession of the car to the bank. \\
2 The consumer accepts or rejects the loan.\\
3 The consumer chooses to $Work$, for an income of 15, or $Play$, for an
income of 8. The disutility of work is 5. \\
4 The consumer repays the loan or defaults. \\
4a In one version of the game, the bank offers to settle for an amount $S$
and leave possession of the car to the consumer.\\
4b The consumer accepts or rejects the settlement $S$. \\
5 If the bank has not been paid $L$ or $S$, it repossesses the car.
\noindent
{\bf Payoffs}\\
If the bank does not make any loan or the consumer rejects it, the bank's
payoff is zero and the consumer's payoff is $W-D$ from his choice at move (3).
The value of the car is 12 to the consumer and 7 to the bank,
so the bank's payoff if the loan is made is
$$
\pi_{bank}= \left\{ \begin{tabular}{ll} $L-11$ & if the original loan is
repaid\\ $S-11$ & if a settlement is made\\ $7-11$ & if the car is repossessed \\
\end{tabular}\right.$$
If the consumer chooses $Work$, his income is $W =15$ and his disutility of
effort is $D=5$. If he chooses $Play$, then $W=8$ and $D=0$. His payoff
is
$$
\pi_{consumer}= \left\{
\begin{tabular}{ll}
$W+12-L-D$ & if the original loan
is repaid\\
$ W+12-S-D$ & if a settlement is made\\
$ W-D$ & if the car is
repossessed \\ \end{tabular}\right. $$
We will consider two versions of the game which differ in whether they allow
the renegotiation moves (4a) and (4b). As we will see, the outcome is pareto
superior if renegotiation is not possible.
\bigskip
\noindent
{\bf Repossession Game I}
The first version of the game does not allow renegotiation, so moves (4a) and
(4b) are omitted. In equilibrium, the bank will make the loan at a rate of $L=
12$, and the consumer will choose $Work$ and repay the loan. Working back from
the end of the game in accordance with sequential rationality, the consumer is
willing to repay because by repaying 12 he receives a car worth 12.\footnote{As
usual, we could change the model slightly to make the consumer strongly desire
to repay the loan, by substituting a bargaining subgame that splits the gains
from trade between bank and consumer rather than specifying that the bank make
a take-it-or-leave-it offer. See Section 4.3. We could also change the game to
give all the bargaining power to the agent, and the outcomes of the two versions
of the game would remain similar. } He will choose $Work$ because he can
then repay the loan and his payoff will be 10 $(= 15 + 12-12 -5 )$, but if he
chooses $Play$ he will not be able to repay and the bank will repossess the
car, reducing his payoff to 8 $(= 8 -0 )$. The bank will offer a loan at
$L=12$ because the consumer will repay it and that is the maximum repayment
to which the consumer will agree. The bank's equilibrium payoff is 1 $(=12-11)$.
This outcome is efficient because the consumer does buy the car, which
he values at more than its cost to the car dealer. The bank ends up with the
surplus, however, because of our assumption that the bank has all the
bargaining power.
\bigskip
\noindent
{\bf Repossession Game II}
The second version of the game does allow renegotiation, so moves (4a) and
(4b) are included in the game. Renegotiation turns out to be harmful, because
it results in an equilibrium in which the bank refuses to make the loan,
reducing the payoffs of bank and consumer to (0,10) instead of (1,10). The
gains from trade vanish.
The equilibrium in Repossession Game I breaks down in Repossession Game II
because the consumer would deviate by choosing $Play$. In Repossession Game I
the consumer would not do that because the bank would repossess the car.
In Repossession Game II, the bank still has the right to repossess it, for a
payoff of $-4\; (=7-11) $. On the other hand, the bank now has the alternative
of renegotiating and offering $S=8$, an offer the consumer will accept
since in exchange he gets to keep a car worth 12. The payoffs of bank and
consumer would be $- 3\; (=8-11)$ and 12 $(= 8 + 12-8 )$. Since the bank
prefers $-3$ to $-4$ it will renegotiate and the consumer will have increased
his payoff from 10 to 12 by choosing $Play$. Looking ahead to this from move
(1), however, the bank will see that it can do better by refusing to make the
loan, resulting in the payoffs (0,10). One might think the bank could adjust by
raising the loan rate $L$, but that is no help. Even if $L=30$, for
instance, the consumer will still happily accept, knowing that when he chooses
$Play$ and defaults the ultimate amount he will pay will be just $S=8$. Since
both parties know there will be renegotiation later, the contract amount $L$ is
meaningless; it is merely the first, empty offer in a bargaining game.
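The unraveling can be traced by backward induction in a short computation (a Python sketch; as before, the repossessed car's resale value of 7 is taken from the bank's payoff of $-4\;(=7-11)$):

```python
# Repossession Game II: backward induction with renegotiation.
# Numbers from the text: car value 12 to the consumer, cost 11 to the bank,
# resale value 7, renegotiated repayment S = 8, incomes 15/8, effort cost 5.
CAR_VALUE, CAR_COST, RESALE, S = 12, 11, 7, 8

def after_default():
    """Bank's choice after Play: repossess, or accept the renegotiated S."""
    repossess = RESALE - CAR_COST          # -4
    renegotiate = S - CAR_COST             # -3
    if renegotiate > repossess:
        return ("renegotiate", renegotiate)
    return ("repossess", repossess)

def consumer_choice(L):
    work = 15 + CAR_VALUE - L - 5
    bank_move, _ = after_default()
    play = 8 + CAR_VALUE - S if bank_move == "renegotiate" else 8
    return ("Work", work) if work > play else ("Play", play)

def bank_lends(L):
    action, _ = consumer_choice(L)
    profit = L - CAR_COST if action == "Work" else after_default()[1]
    return profit > 0                      # lend only if profitable

print(consumer_choice(12), bank_lends(12), bank_lends(30))
# ('Play', 12) False False -- raising L to 30 does not help the bank
```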
Renegotiation is paradoxical. In the subgame starting with consumer default it
increases efficiency, by allowing the players to make a Pareto improvement over
an inefficient punishment. In the game as a whole, however, it reduces
efficiency by preventing players from using punishments to deter inefficient
actions. This is true of any situation in which punishment imposes a deadweight
loss instead of being simply a transfer from punished to punisher. This
may be why American judges are less willing than the general public to impose
punishments on criminals. By the time a criminal reaches the courtroom, extra
years in jail have no deterrent effect for that crime and impose real costs on
both criminal and society. For every particular case, viewed in isolation, the
sentence is inefficient, retribution aside.
The renegotiation problem also comes up in principal-agent models because of
risk bearing by a risk-averse agent when the principal is risk neutral.
Optimal contracts impose risk on risk-averse agents to provide incentives for
high effort or self selection. If at some point in the game it is common
knowledge that the agent has chosen his action, but Nature has not yet moved,
the agent bears needless risk. The principal knows the agent has already moved,
so the two of them are willing to recontract to shift the risk from Nature's
move back onto the principal. But the expected future recontracting makes a joke
of the original contract and reduces the agent's incentives for effort or
truthfulness.
The Repossession Game illustrates other ideas too. It is a game of perfect
information, but it has the feel of a game of moral hazard with hidden
actions. This is because it has an implicit bankruptcy constraint, so the
contract cannot sufficiently punish the consumer for an inefficient choice of
effort. Restricting the strategy space has the same effect as restricting the
information available to a player. It is another example of the distinction
between observability and contractibility--- the consumer's effort is
observable, but it is not really contractible, because the bankruptcy
constraint prevents him from being punished for his low effort.
This game also illustrates the difficulty of deciding what ``bargaining power''
means. This is a term that is very important to how many people think about
law and public policy but which they define hazily. The natural way to think
of bargaining power is to treat it as the ability to get a bigger share of the
surplus from a bargaining interaction. Here, there is a surplus of 1 from the
consumer's purchase of a car that costs 11 and yields 12 in utility. Both
versions of the Repossession Game give all the bargaining power to the bank
in the sense that where there is a surplus to be split, the bank gets 100
percent of it. But this does not help the bank in Repossession Game II,
because the consumer can put himself in a position where the bank ends up a
loser from the transaction despite its bargaining power.
Both
models allow commitment in the sense of legally binding agreements over
transfers of money and wealth but do not allow the consumer to commit directly
to $Work$. If the consumer does not repay the loan, the bank has the legal right
to repossess the car, but the bank cannot have the consumer thrown into prison
for breaking a promise to choose $Work$. Where the two versions of the game
differ is in whether the bank can commit not to renegotiate after a default.
\vspace{1in}
\noindent
{\bf *8.5 State-Space Diagrams: Insurance Games I and II }
\noindent
An approach to principal-agent problems that is especially useful when the strategy
space is continuous is to use diagrams. The term ``moral hazard'' comes from
the insurance industry, where it refers to the idea that if a person is insured
he will be less careful and the danger from accidents will rise. Suppose
Smith (the agent) is considering buying theft insurance for a car with a value
of 12. Figure 1, which illustrates his situation, is an example of a {\bf
state-space diagram}, a diagram whose axes measure the values of one variable in
two different states of the world. Before Smith buys insurance, his dollar
wealth is 0 if there is a theft and 12 otherwise, depicted as his endowment,
$\omega= (12,0)$. The point (12,0) indicates a wealth of 12 in one state and 0
in the other, while the point (6,6) indicates a wealth of 6 in each state.
One cannot tell the probabilities of each state just by looking at the
state-space diagram. Let us specify that if Smith is careful where he parks, the
state {\it Theft} occurs with probability 0.5, but if he is careless the
probability rises to 0.75. He is risk averse, and, other things equal, he has a
mild preference to be careless, a preference worth only some small amount
$\epsilon$ to him. Other things are not equal, however, and he would choose to
be careful were he uninsured, because of the high correlation of carelessness
with carlessness.
The insurance company (the principal) is risk neutral, perhaps because it is
owned by diversified shareholders. We assume that no transaction costs are
incurred in providing insurance and that the market is competitive, a switch
from Production Game V, where the principal collected all the gains from trade.
If the insurance company can require Smith to park carefully, it offers him
insurance at a premium of 6, with a payout of 12 if theft occurs, leaving him
with an allocation of $C_1 = (6,6).$ This satisfies the competition constraint
because it is the most attractive contract any company can offer without making
losses. Smith, whose allocation is 6 no matter what happens, is {\bf fully
insured}. In state-space diagrams, allocations that, like $C_1$, fully insure
one player lie on the 45$^\circ$ line through the origin, the line along which
his allocations in the two states are equal.
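As a small numerical sketch, the zero-expected-profit premium for full insurance and the resulting allocation can be computed directly:

```python
# Smith's state-space allocation under contract (x, y): wealth 12 - x if no
# theft, y - x if the car (value 12) is stolen.
def allocation(x, y):
    return (12 - x, y - x)

# Competition drives the insurer's expected profit x - p*y to zero,
# so the fair full-insurance premium is p times the payout.
def fair_full_insurance_premium(p_theft, payout=12):
    return p_theft * payout

x = fair_full_insurance_premium(0.5)
print(x, allocation(x, 12))          # premium 6 yields C1 = (6, 6)
```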
\includegraphics[width=150mm]{fig08-01.jpg}
\begin{center} {\bf Figure 1: Insurance Game I }
\end{center}
The game is described below in a specification that includes two insurance
companies to simulate a competitive market. For Smith, who is risk averse, we
must distinguish between dollar {\it allocations} such as (12,0) and utility
{\it payoffs} such as $0.5 U(12) + 0.5U(0)$. The curves in Figure 1 are
labelled in units of utility for Smith and dollars for the insurance company.
\begin{center}
{\bf Insurance Game I: Observable Care}
\end{center}
{\bf Players}\\
Smith and two insurance companies.
\noindent
{\bf The Order of Play}\\
1 Smith chooses to be either {\it Careful} or {\it Careless}, observed by the
insurance company. \\
2 Insurance company 1 offers a contract $(x,y)$, in which Smith pays premium
$x$ and receives compensation $y$ if there is a theft.\\
3 Insurance company 2 also offers a contract of the form $(x,y)$. \\
4 Smith picks a contract. \\
5 Nature chooses whether there is a theft, with probability 0.5 if Smith is
$Careful$ or 0.75 if Smith is $Careless$.
\noindent
{\bf Payoffs}\\
Smith is risk averse and the insurance companies are risk neutral. The insurance
company not picked by Smith has a payoff of zero.\\
Smith's utility function $U$ is such that $U' >0$ and $U''< 0.$ If Smith
picks contract $(x,y)$, the payoffs are:
\begin{tabular}{ll}
If Smith chooses {\it Careful}, & \\
& $\pi_{Smith}= 0.5 U(12-x) + 0.5U(0 + y - x) $\\
&$ \pi_{company} = 0.5x + 0.5(x-y)$, for his insurer.\\
If Smith chooses {\it Careless}, & \\
& $\pi_{Smith}= 0.25 U(12- x) + 0.75U(0 + y-x) + \epsilon$\\
& $\pi_{company} = 0.25x + 0.75(x-y)$, for his insurer. \end{tabular}
\bigskip
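A quick check of the insurer's side of these payoffs, using the full-insurance contract $(x,y)=(6,12)$ (a Python sketch):

```python
# Insurer's expected profit from contract (x, y): it collects premium x in
# both states and pays out y when there is a theft.
def company_profit(x, y, p_theft):
    return (1 - p_theft) * x + p_theft * (x - y)

# Contract C1 = premium 6, payout 12 from the text:
print(company_profit(6, 12, 0.5))    # Careful: the company breaks even
print(company_profit(6, 12, 0.75))   # Careless: the company loses money
```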
In equilibrium, Smith chooses to be $Careful$ because he foresees that
otherwise his insurance will be more expensive. Figure 1 is the corner of an
Edgeworth box which shows the indifference curves of Smith and his insurance
company given that Smith's care keeps the probability of a theft down to 0.5.
The company is risk neutral, so its indifference curve, $\pi_i = 0$, is a
straight line with slope $-1$ (the probability ratio $-0.5/0.5$). Its payoffs are higher on indifference curves
such as $\pi_i = 6$ that are closer to the origin and thus have smaller expected
payouts to Smith. The insurance company is indifferent between points $\omega$
and $C_1$, at both of which its profits are zero. Smith is risk averse, so if
he is $Careful$ his indifference curves are closest to the origin on the
45$^\circ$ line, where his wealth in the two states is equal. Picking the
numbers 66 and 83 for concreteness, I have labelled his original indifference
curve $\pi_s=66$ and drawn the preferred indifference curve $\pi_s=83$ through
the equilibrium contract $C_1$. The equilibrium contract is $C_1$, which
satisfies the competition constraint by generating the highest expected utility
for Smith that allows nonnegative profits to the company.
Insurance Game I is a game of symmetric information. Insurance Game II
changes that. Suppose that
\begin{enumerate}
\item
The company cannot observe Smith's action (care is {\it unobservable}); or
\item
The state insurance commission does not allow contracts to require Smith to be
careful (care is {\it noncontractible}); or
\item
A contract requiring Smith to be careful is impossible to enforce because of
the cost of proving carelessness (care is {\it nonverifiable} in a court of
law).
\end{enumerate}
In each case Smith's action is a noncontractible variable, so we model all
three the same way, by putting Smith's move second. The new game is like
Production Game V, with uncertainty, unobservability, and two levels of output,
{\it Theft} and {\it No Theft}. The insurance company may not be able to
directly observe Smith's action, but his dominant strategy is to be $Careless$,
so the company knows the probability of a theft is 0.75. Insurance Game II is
the same as Insurance Game I except for the following.
\begin{center}
{\bf Insurance Game II: Unobservable Care} \\
\end{center}
\noindent
{\bf The Order of Play}
\begin{enumerate}
\item
Insurance company 1 offers a contract of form $(x,y)$, under which Smith pays
premium $x$ and receives compensation $y$ if there is a theft.
\item Insurance company 2 offers a contract of form $(x,y)$
\item Smith picks a contract.
\item Smith chooses either {\it Careful} or {\it Careless}.
\item Nature chooses whether there is a theft, with probability 0.5 if Smith
is $Careful$ or 0.75 if Smith is $Careless$.
\end{enumerate}
\bigskip
Smith's dominant strategy is $Careless $ in Insurance Game II, so in contrast
to Insurance Game I the insurance company must offer a contract with a
premium of 9 and a payout of 12 to prevent losses, which leaves Smith with an
allocation $C_2 = (3,3)$. Making thefts more probable reduces the slopes of both
players' indifference curves, because it decreases the utility of points to the
southeast of the 45$^\circ$ line and increases utility to the northwest. In
Figure 2, the insurance company's isoprofit curve swivels from the solid line $
{\pi_i} = 0$ to the dotted line $\tilde{\pi_i} = 0$. It swivels around $\omega$
because that is the point at which the company's profit is independent of how
probable it is that Smith's car will be stolen (at point $\omega$ the company is
not insuring him at all). Smith's indifference curve also swivels, from the
solid curve $ {\pi_s}= 66$ to the dotted curve $\tilde{\pi_s}= 66+ \epsilon$.
It swivels around the intersection of the $ {\pi_s}= 66$ curve with the
45$^\circ$ line, because on that line the probability of theft does not affect
his payoff. The $\epsilon$ difference appears because Smith gets to choose the
action $Careless$, which he slightly prefers.
\includegraphics[width=150mm]{fig08-02.jpg}
\begin{center}
{\bf Figure 2: Insurance Game II with Full and Partial Insurance }
\end{center}
Figure 2 shows that no full-insurance contract will be offered. The contract
$C_1$ is acceptable to Smith, but not to the insurance company, because it earns
negative profits, and the contract $C_2$ is acceptable to the insurance company,
but not to Smith, who prefers $\omega$. Smith would like to commit himself to
being careful, but he cannot make his commitment credible. If the means existed
to prove his carefulness, he would use them even if they were costly. He might, for
example, agree to buy off-street parking even though locking his car would be
the cheaper precaution, were locking verifiable.
Although no full-insurance contract such as $C_1$ or $C_2$ is mutually
agreeable, other contracts can be used. Consider the partial-insurance contract
$C_3$ in Figure 2, which has a premium of 6 and a payout of 8. Smith would
prefer $C_3$ to his endowment of $\omega =(12,0)$ whether he chooses $Careless$
or $Careful$. We can think of $C_3$ in two ways:
\begin{enumerate}
\item
Full insurance except for a {\bf deductible} of four. The insurance company pays
for all losses in excess of four.
\item Insurance with a {\bf coinsurance} rate of one-third. The insurance
company pays two-thirds of all losses.
\end{enumerate}
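The two descriptions are arithmetically equivalent, as a one-line check confirms:

```python
from fractions import Fraction

LOSS = 12                # full value of the car
PREMIUM, PAYOUT = 6, 8   # contract C3 from the text

deductible = LOSS - PAYOUT                        # loss Smith still bears
coinsurance_rate = Fraction(LOSS - PAYOUT, LOSS)  # share of the loss Smith pays
print(deductible, coinsurance_rate)               # 4 and 1/3
```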
The outlook is bright because Smith chooses {\it Careful} if he only has
partial insurance, as with $C_3$. The moral hazard is ``small'' in the sense
that Smith barely prefers $Careless$. With even a small deductible, Smith would
choose $Careful$ and the probability of theft would fall to 0.5, allowing the
company to provide much more generous insurance. The solution of full insurance
is ``almost'' reached. In reality, we rarely observe truly full insurance,
because insurance contracts repay only the price of the car and not the bother
of replacing it, a bother big enough to deter owners from leaving their cars
unlocked.
Figure 3 illustrates effort choice under partial insurance. Smith has a
choice between dashed indifference curves ($Careless$) and solid ones
($Careful$). To the southeast of the 45$^\circ$ line, the dashed indifference
curve for a particular utility level is always above that utility's solid
indifference curve. Offered contract $C_4$, Smith chooses $Careful$, remaining
on the solid indifference curve, so $C_4$ yields zero profit to the insurance
company. In fact, the competing insurance companies will offer contract $C_5$ in
equilibrium, which is almost, but not quite, full insurance, so that Smith will
choose $Careful$ to avoid the small amount of risk he still bears.
\includegraphics[width=150mm]{fig08-03.jpg}
\begin{center}
{\bf Figure 3: More on Partial Insurance in Insurance Game II } \end{center}
Thus, as in the principal-agent model there is a tradeoff between efficient
effort and efficient risk allocation. Even when the ideal of full insurance and
efficient effort cannot be reached, there exists some best choice like $C_5$ in
the set of feasible contracts, a second-best insurance contract that recognizes
the constraints of informational asymmetry.
The idea of ``care'' as effort in a moral hazard model comes up in many
contexts. Two of the most important are in tort law and in the law and the
economics of renting and leasing. The great problem of tort law is how to create
the correct incentives for care by the various people who can act to prevent an
accident by the way the law allocates liability when an accident occurs, or by
regulation to directly require care. One of the biggest problems in rentals,
whether of cars, apartments, or tuxedos, is that renters lack efficient
incentives to take care. As P.J. O'Rourke says (as attributed by McAfee [2002,
p. 189] and others),
\begin{quotation}
\begin{small}
``There's a lot of debate on this subject - about what kind of car handles
best. Some say a front-engined car, some say a rear-engined car. I say a
rented car. Nothing handles better than a rented car. You can go faster, turn
corners sharper, and put the transmission into reverse while going forward at a
higher rate of speed in a rented car than in any other kind.''
\end{small}
\end{quotation}
\vspace{1in}
\noindent
{\bf *8.6 Joint Production by Many Agents: The Holmstrom Teams Model}
\noindent To conclude this chapter, let us switch our focus from the individual
agent to a group of agents. We have already looked at tournaments, which
involve more than one agent, but a tournament still takes place in a situation
where each agent's output is distinct. The tournament is a solution to the
standard problem, and the principal could always fall back on other solutions
such as individual risk-sharing contracts. In this section, the existence of
a group of agents destroys the effectiveness of individual risk-sharing
contracts, because observed output is a joint function of the
unobserved effort of many agents. Even though there is a group, a tournament is
impossible, because only one output is observed. The situation has much of the
flavor of the Civic Duty Game of chapter 3: the actions of a group of players
produce a joint output, and each player wishes that the others would carry out
the costly actions. A teams model is defined as follows.
\noindent
{\it A {\bf team} is a group of agents who independently choose effort levels
that result in a single output for the entire group.}
We will look at teams using the following game.
\begin{center}
{\bf Teams}\\
(Holmstrom [1982])
\end{center}
{\bf Players}\\
A principal and $n$ agents.
\noindent
{\bf The order of play}\\
1 The principal offers a contract to each agent $i$ of the form $w_i(q) $,
where $q$ is total output.\\
2 The agents decide whether or not to accept the contract.\\
3 The agents simultaneously pick effort levels $e_i$, ($i = 1,\dots,n$).\\
4 Output is $q(e_1,\ldots,e_n)$.
\noindent
{\bf Payoffs}\\
If any agent rejects the contract, all payoffs equal zero. Otherwise,
\begin{center}
\begin{tabular}{ll} $ \pi_{principal}$ & $ = q - \sum_{i=1}^n w_i; $\\ & \\
$\pi_{i}$ & $= w_i - v_i(e_i)$, where $v'_i> 0$ and $v''_i > 0$.\\
\end{tabular}
\end{center}
Despite the risk neutrality of the agents, ``selling the store'' fails to work
here, because the team of agents still has the same problem as the employer had.
The team's problem is cooperation between agents, and the principal is
peripheral.
\includegraphics[width=150mm]{fig08-04.jpg}
\begin{center}
{\bf Figure 4: Contracts in the Holmstrom Teams Model }
\end{center}
\noindent
Denote the efficient vector of actions by $e^*$. An efficient contract,
illustrated in Figure 4(a), is
\begin{equation} \label{e10}
w_i(q) = \left\{
\begin{array}{ll} b_i& {\rm if}\; q \geq q(e^*)\\
& \\
0 & {\rm if} \;q < q(e^*)\\
\end{array} \right.
\end{equation}
where $\sum_{i=1}^n b_i = q(e^*)$ and $b_i > v_i(e^*_i)$.
Contract (\ref{e10}) gives agent $i$ the wage $b_i$ if all agents pick the
efficient effort, and nothing if any of them shirks (in which case the
principal keeps the output). The teams model gives one reason to have a
principal: he is the residual claimant who keeps the forfeited output. Without
him, it is questionable whether the agents would carry out the threat to discard
all the output if, say, output were 99 instead of the efficient 100. There is a
problem of dynamic consistency or renegotiation similar to the problem in the
Repossession Game earlier in this chapter. The agents would like to commit in
advance to throw away output, but only because they never have to do so in
equilibrium. If the modeller wishes to disallow discarding output, he imposes
the {\bf budget-balancing constraint} that the sum of the wages exactly equal
the output, no more and no less. But budget balancing creates a problem for the
team that is summarized in Proposition 1. (The proof is simplest for
differentiable contracts such as that in Figure 4(b) but the intuition applies
to nondifferentiable contracts too.)
\noindent
{\bf Proposition 1.} { \it If there is a budget-balancing constraint, no
differentiable wage contract $w_i(q)$ generates an efficient Nash equilibrium.}
\noindent
Agent $i$'s problem is
\begin{equation}\label{e11}
\stackrel{Maximize}{e_i} \;\;\; w_i(q(e)) - v_i(e_i).
\end{equation}
His first-order condition is
\begin{equation}\label{e12}
\left( \frac{dw_i}{dq} \right) \left( \frac{dq}{de_i} \right) - \frac{dv_i}
{de_i} = 0.
\end{equation}
With budget balancing and a linear utility function, the Pareto optimum
maximizes the sum of utilities (something not generally true), so the optimum
solves
\begin{equation} \label{e13}
\begin{array}{cl} Maximize & {\displaystyle q(e) - \sum_{i=1}^n v_i(e_i)}\\
e_1,\ldots, e_n & \\
\end{array}
\end{equation}
The first-order condition is that the marginal dollar contribution to output
equal the marginal disutility of effort:
\begin{equation} \label{e14}
\frac{dq}{d{e_i}} - \frac{d{v_i}}{d{e_i}} = 0.
\end{equation}
Equation (\ref{e14}) contradicts equation (\ref{e12}), the agent's first-order
condition, because $\frac{dw_i}{dq}$ is not equal to one. If it were, agent
$i$ would be the residual claimant and receive the entire marginal increase in
output--- but under budget balancing, not every agent can do that. Because each
agent bears the entire burden of his marginal effort and only part of the
benefit, the contract does not achieve the first-best. Without budget
balancing, on the other hand, if the agent shirked a little he would gain the
entire leisure benefit from shirking, but he would lose his entire wage under
the optimal contract in equation (\ref{e10}).
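Proposition 1 can be illustrated with a concrete parameterization. The functional forms below are illustrative assumptions, not from the model: $q(e)=\sum_i e_i$, $v_i(e_i)=e_i^2/2$, and the budget-balancing equal-sharing rule $w_i = q/n$:

```python
# Teams example (illustrative functional forms): q(e) = sum(e) and
# v_i(e) = e**2/2. The social first-order condition dq/de_i = v'_i gives
# e_i = 1, while under equal sharing w_i = q/n the agent's first-order
# condition (dw_i/dq)(dq/de_i) = v'_i gives only e_i = 1/n.
def surplus(efforts):
    q = sum(efforts)
    return q - sum(e**2 / 2 for e in efforts)

n = 4
first_best = [1.0] * n        # each agent is a full residual claimant
nash = [1.0 / n] * n          # each agent keeps only 1/n of marginal output
print(surplus(first_best), surplus(nash))   # 2.0 versus 0.875
```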
\bigskip
\noindent
{\bf Discontinuities in Public Good Payoffs}
\noindent Ordinarily, there is a free rider problem if several players each
pick a level of effort which increases the level of some public good whose
benefits they share. Noncooperatively, they choose effort levels lower than if
they could make binding promises. Mathematically, let identical risk-neutral
players indexed by $i$ choose effort levels $e_i$ to produce amount
$q(e_1,\ldots,e_n)$ of the public good, where $q$ is a continuous function.
Player $i$'s problem is
\begin{equation} \label{e15}
\stackrel{Maximize}{e_i} q(e_1,\ldots,e_n) - e_i,
\end{equation}
which has first-order condition
\begin{equation} \label{e16}
\frac{\partial q}{\partial e_i} - 1 = 0,
\end{equation}
whereas the greater, first-best effort $n$-vector $e^*$ is characterized by
\begin{equation} \label{e17}
n \frac{\partial q}{\partial e_i} - 1 = 0,
\end{equation}
since each player's effort raises the payoff of all $n$ players.
If the function $q$ were discontinuous at $e^*$ (for example, if $q= 0$ if
$e_i < e^*_i$ for any $i$), the strategy profile $e^*$ could be a Nash
equilibrium. In the game of Teams, the same effect is at work. Although
output is not discontinuous, contract (\ref{e10}) is constructed as if it were
(as if $q=0$ if $e_i \neq {e_i}^*$ for any $i$), in order to obtain the same
incentives.
The first-best can be achieved because the discontinuity at $e^*$ makes
every player the marginal, decisive player. If he shirks a little, output falls
drastically and with certainty. Either of the following two modifications
restores the free rider problem and induces shirking:
\noindent
1 Let $q$ be a function not only of effort but of random noise--- Nature moves
after the players. Uncertainty makes the {\it expected} output a continuous
function of effort.
\noindent
2 Let players have incomplete information about the critical value---Nature
moves before the players and chooses $e^*$. Incomplete information makes the
estimated output a continuous function of effort.
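A minimal numerical sketch of the discontinuity argument, with illustrative functional forms ($q=\sum_i e_i$ above the threshold and $q=0$ below it):

```python
# Public good with a discontinuity (illustrative): q(e) = sum(e) if every
# e_i >= 1, else q = 0. Player i's payoff is q - e_i, so the profile
# e* = (1,...,1) is a Nash equilibrium: shirking even slightly collapses q.
def payoff(i, efforts):
    q = sum(efforts) if all(e >= 1 for e in efforts) else 0.0
    return q - efforts[i]

n = 3
star = [1.0] * n
equilibrium = payoff(0, star)            # 3 - 1 = 2
shirk = payoff(0, [0.9] + star[1:])      # q collapses to 0
print(equilibrium, shirk)                # 2.0 -0.9
```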
The discontinuity phenomenon is common. Examples include:
\noindent
1 Effort in teams (Holmstrom [1982], Rasmusen [1987]) \\
2 Entry deterrence by an oligopoly (Bernheim [1984b], Waldman [1987]) \\
3 Output in oligopolies with trigger strategies (Porter [1983a]) \\
4 Patent races \\
5 Tendering shares in a takeover (Grossman \& Hart [1980]) \\
6 Preferences for levels of a public good
\vspace{1in}
\noindent
{ \bf *8.7 The Multitask Agency Problem} (New in the 4th edition)
Holmstrom \& Milgrom (1991) point out an omission in the standard
principal-agent model: often the principal wants the agent to split his time among
several tasks, each with a separate output, rather than just working on one of
them. If the principal uses one of the incentive contracts we have described in
these chapters to incentivize just one of the tasks, this ``high-powered
incentive'' can result in the agent completely neglecting his other tasks and
leave the principal worse off than under a flat wage. We will see this in the
next two models, in which the principal can observe the output from one of the
agent's tasks ($q_1$) but not from the other ($q_2$).
\begin{center}
{\bf Multitasking I: Two Tasks, No Leisure}
\end{center}
{\bf Players}\\
A principal and an agent.
\noindent
{\bf The Order of Play}\\
1 The principal offers the agent either an incentive contract of the form
$w(q_1)$ or a monitoring contract under which he pays the agent a
base wage of $\overline{m}$ plus $ m_1$ if he observes him working on Task 1
and $ m_2$ if he observes him working on Task 2 (the $\overline{m}$ base is
superfluous notation in Multitasking I, but is used in Multitasking II). \\
2 The agent decides whether or not to accept the contract.\\
3 The agent picks efforts $e_1$ and $e_2$ for the two tasks such that $e_1+e_2
=1$, where 1 denotes the total time available.\\
4 Outputs are $q_1(e_1)$ and $q_2(e_2)$, where $\frac{dq_1}{de_1}>0$ and
$\frac{dq_2}{de_2}>0$ but we do not require decreasing returns to effort.
\noindent
{\bf Payoffs}\\
If the agent rejects the contract, all payoffs equal zero. Otherwise,
\begin{equation} \label{e17a}
\begin{array}{ll}
\pi_{principal} & = q_1 + \beta q_2 - m - w - C ;\\
& \\
\pi_{agent} & = m + w - e_1^2 -e_2^2, \\
\end{array}
\end{equation}
where $C$, the cost of monitoring, is $\overline{C}$ if a monitoring
contract is used and zero otherwise.
\bigskip
Let's start with the first best. This can be found by choosing $e_1$ and
$e_2$ (subject to $e_1+e_2 = 1$) and $C$ to maximize the sum of the payoffs,
\begin{equation} \label{e100}
\pi_{principal}+ \pi_{agent} = q_1(e_1) + \beta q_2(e_2) - C
- e_1^2 -e_2^2.
\end{equation}
In the first-best, $C=0$ of course-- no costly monitoring is needed.
Substituting $e_2= 1-e_1$ and using the first-order condition for $e_1$
yields
\begin{equation} \label{e101}
C^*=0 \;\;\;\;\; e_1^* = \frac{1}{2} + \left( \frac{\frac{dq_1}{de_1} -
\beta \left( \frac{dq_2}{de_2} \right)}{4} \right) \;\;\;\;\; e_2^* = \frac{1}
{2} - \left( \frac{\frac{dq_1}{de_1} - \beta \left( \frac{dq_2}{de_2} \right)
}{4} \right).
\end{equation}
Thus, which effort should be bigger depends on $\beta$ (a measure of the
relative value of Task 2) and the marginal products of effort in the two tasks.
If, for example, $\beta >1$ so Task 2's output is more valuable and the
functions $q_1(e_1)$ and $q_2(e_2)$ produce the same output for the same effort,
then from (\ref{e101}) we can see that $e_1^*< e_2^*$, as one would expect.
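Equation (\ref{e101}) can be checked numerically. The sketch below assumes illustrative linear technologies $q_1 = \alpha_1 e_1$ and $q_2 = \alpha_2 e_2$, so the marginal products are constants; these functional forms are not part of the model:

```python
# First-best efforts with q1 = alpha1*e1, q2 = alpha2*e2: maximize
# q1 + beta*q2 - e1**2 - e2**2 subject to e1 + e2 = 1.
def welfare(e1, alpha1, alpha2, beta):
    e2 = 1 - e1
    return alpha1 * e1 + beta * alpha2 * e2 - e1**2 - e2**2

def first_best_e1(alpha1, alpha2, beta):
    return 0.5 + (alpha1 - beta * alpha2) / 4   # the closed form above

alpha1, alpha2, beta = 1.0, 1.0, 1.4
grid = [i / 10000 for i in range(10001)]
numeric = max(grid, key=lambda e1: welfare(e1, alpha1, alpha2, beta))
print(numeric, first_best_e1(alpha1, alpha2, beta))  # both about 0.4
# beta > 1 with identical technologies gives e1* < e2*, as the text notes
```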
Can an incentive contract achieve the first best?
Let's define $q_1^*, q_2^*, e_1^*$ and $e_2^*$ as the first-best levels of those
variables and define the minimum wage payment that would induce the agent to
accept a contract requiring the first-best effort as
\begin{equation} \label{e101a}
w^* \equiv (e_1^*)^2 + (e_2^*)^2
\end{equation}
Next, let's think about what happens with the profit-maximizing flat-wage
contract, which could be written either as the incentive contract $w(q_1) = w^* $
or as the monitoring contract $\{w^*, w^* \}$. The agent would then
split his effort equally between the two tasks, so $e_1=e_2=0.5$. To satisfy
the participation constraint it would be necessary that $\pi_{agent} = w^*
- e_1^2 -e_2^2 \geq 0,$ so $\pi_{agent} = w^* - 0.25 -0.25=0$ and $w^* =
0.5$. The principal would prefer to use an ``incentive contract,'' rather than a
monitoring contract, of course, if the wage is going to be the same regardless
of what costly monitoring would discover.
What about a sharing-rule incentive contract, in which the wage rises with
output (that is, $ \frac{dw}{dq_1}>0$)? The problem is not quite the
principal's accustomed difficulty of inducing high effort as cheaply as possible
without paying for effort whose marginal disutility exceeds the value of the
extra output. That difficulty is still present, but in addition the principal
must worry about an externality of sorts: the greater the agent's effort on Task
1, the less will be his effort on Task 2. Even if extra $e_1$ could be achieved
for free, the principal might not want it-- and, in fact, might be willing to
pay to stop it.
Consider the simplest sharing-rule contract, the linear one with $ \frac{dw}
{dq_1}=b,$ so $w(q_1) = a + bq_1$. The agent will pick $e_1$ and $e_2$ to
maximize
\begin{equation} \label{e103}
\pi_{agent} = a + bq_1(e_1)- e_1^2 -e_2^2,
\end{equation}
subject to $e_1+e_2 =1$ (which allows us to rewrite the maximand in terms
of just $e_1$, since $e_2= 1-e_1$). The first-order condition is
\begin{equation} \label{e104}
\frac{d\pi_{agent}}{de_1} = b \left( \frac{dq_1}{de_1} \right) - 2e_1^* -
2(1-e_1^*) (-1)=0,
\end{equation}
so
\begin{equation} \label{e105}
e_1^* = \frac{1}{2} + \left( \frac{b}{4} \right) \left( \frac{dq_1}
{de_1} \right).
\end{equation}
If $e_1^* \geq 0.5$, the linear contract will work just fine. The contract
parameters $a$ and $b$ can be chosen so that the linear-contract effort in
equation (\ref{e105}) is the same as the first-best effort in equation
(\ref{e101}), with $a$ taking a value to extract all the surplus so the
participation constraint is barely satisfied.
If $e_1^* <0.5$, though, the linear contract cannot achieve the first best
with a positive value for $b$. Even under a flat wage ($b=0$), the agent will
choose $e_1=0.5$, which is too high. If the principal rewards the agent for more
of the observable output $q_1$, the principal will get too little of the
unobservable output $q_2$. Instead, the contract must actually punish the agent
for high output! It must have at least a slightly negative value for $b$, so as
to defeat the agent's preferred allocation of effort evenly across the tasks.
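Equation (\ref{e105}) can be verified numerically. The sketch assumes an illustrative linear technology $q_1 = \alpha_1 e_1$, so that $dq_1/de_1 = \alpha_1$ is a constant (not a requirement of the model):

```python
# Agent's choice under the linear contract w = a + b*q1, with the
# illustrative technology q1 = alpha1*e1 and the constraint e2 = 1 - e1.
def agent_payoff(e1, a, b, alpha1):
    return a + b * alpha1 * e1 - e1**2 - (1 - e1)**2

def chosen_e1(b, alpha1):
    return 0.5 + (b / 4) * alpha1     # the closed form above

alpha1 = 1.0
grid = [i / 10000 for i in range(10001)]
for b in (0.8, 0.0, -0.4):            # reward, flat wage, punishment
    numeric = max(grid, key=lambda e1: agent_payoff(e1, 0.0, b, alpha1))
    print(b, numeric, chosen_e1(b, alpha1))
# b = 0 gives e1 = 0.5; only a negative b pushes e1 below 0.5
```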
One context for this is sales jobs, where it is easy for the firm to
measure the orders a salesman takes down from customers, but hard to observe how
much good feeling he leaves behind, how much he helps other salesmen, or how
much care he takes to obey laws against bribery and fraud. If the firm rewards
orders alone, the salesman will maximize orders. If the salesman actually likes
spending his time on the other efforts enough, this does not necessarily lead to
inefficiency, but it could also happen that the unrewarded efforts are slighted
more than they should be.
Chapter 7 compared three contracts: linear, threshold, and forcing
contracts. The threshold contract will work as well as or better than the linear
contract in Multitasking I. It at least provides no incentive to go above
the threshold, which is positively bad in this model. The forcing contract is
even better, because the principal positively dislikes having $e_1$ be too
great. In Chapter 7, the forcing contract's low wage for an output that was
too high seemed unrealistic, and forcing contracts were used for simplicity
rather than realism; here, it makes intuitive sense. Perhaps this is one way to
look at the common fear that winning high ratings from students will hurt a
professor's tenure chances because it indicates that he is not spending enough
time on his research.
Thus, in equilibrium the principal chooses some contract that elicits the
first-best effort $e^*$, such as the forcing contract,
\begin{equation} \label{e105a}
\begin{array}{l}
w(q_1=q_1^*) = w^*, \\
\\
w(q_1 \neq q_1^*) =0.\\
\end{array}
\end{equation}
A monitoring contract, which would incur monitoring cost $\overline{C}$, is
suboptimal, since an incentive contract can achieve the first-best anyway, but
let's see how the optimal monitoring contract would work. Let us set
$\overline{m}=0$ in Multitasking I, since we can add the constant part of the
wage to $m_1$ and $m_2$ anyway. The agent will choose his effort to maximize
\begin{equation} \label{e106}
\begin{array}{ll}
\pi_{agent}& = e_1 m_1 + e_2 m_2 - e_1^2 -e_2^2 \\
& \\
& = e_1 m_1 + (1-e_1) m_2 - e_1^2 - (1-e_1)^2, \\
\end{array}
\end{equation}
since with probability $e_1$ the monitoring finds him working on Task 1 and
with probability $e_2$ it finds him on Task 2. Maximizing by choice of $e_1$
yields
\begin{equation} \label{e107}
\frac{d \pi_{agent}}{de_1} = m_1 -m_2 - 2e_1 - 2(-1)(1-e_1) =0,
\end{equation}
so if the principal wants the agent to pick the particular effort $e_1=e_1^*$
that we found in equation (\ref{e101}) he should choose $m_1^*$ and $m_2^*$ so
that
\begin{equation} \label{e108}
m_1^* = 4e_1^* + m_2^* - 2.
\end{equation}
Note that if $e_1^*> e_2^*$, which means that $e_1^*>0.5$, equation
(\ref{e108}) tells us that $m_1^* > m_2^*$, just as we would expect.
We have one equation for the two unknowns of $m_1^*$ and $m_2^*$ in
(\ref{e108}), so we need to add some information. Let us use the fact that if
the participation constraint is satisfied exactly then we can set the agent's
payoff from (\ref{e106}) equal to zero,
which is a second equation for our two unknowns. After going through the
algebra to solve (\ref{e108}) together with the binding participation
constraint, we get
\begin{equation} \label{e109}
m_1^* = 4e_1^* - 2(e_1^*)^2 -1
\end{equation}
from which we can find, using (\ref{e108}),
\begin{equation} \label{e110}
\begin{array}{ll}
m_2^* & = [4e_1^* - 2(e_1^*)^2 -1] +2 - 4e_1^*\\
& \\
& = 1- 2(e_1^*)^2\\
\end{array}
\end{equation}
These have the expected property that $\frac{d m_1^*}{de_1^*} = -4e_1^* +4 >0$
and $\frac{d m_2^*}{de_1^*} = -4e_1^* <0$.
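As a quick numerical check of this algebra, the sketch below picks an arbitrary illustrative target $e_1^* = 0.7$ (the value and the code itself are not part of the model), computes $m_1^*$ and $m_2^*$ from the formulas above, and confirms that the agent's first-order condition holds, that the participation constraint binds, and that no other effort choice does better:

```python
# Check of the Multitasking I monitoring contract: given a target e1*,
# the piece rates m1* = 4e1* - 2(e1*)**2 - 1 and m2* = 1 - 2(e1*)**2
# make the agent choose e1* and leave him exactly at his reservation payoff.
def piece_rates(e1_star):
    m1 = 4*e1_star - 2*e1_star**2 - 1
    m2 = 1 - 2*e1_star**2
    return m1, m2

def agent_payoff(e1, m1, m2):
    e2 = 1 - e1                        # effort budget constraint e1 + e2 = 1
    return e1*m1 + e2*m2 - e1**2 - e2**2

e1_star = 0.7                          # arbitrary illustrative target
m1, m2 = piece_rates(e1_star)
assert m1 > m2                                       # e1* > 0.5 implies m1* > m2*
assert abs(m1 - m2 - 4*e1_star + 2) < 1e-12          # agent's first-order condition
assert abs(agent_payoff(e1_star, m1, m2)) < 1e-12    # participation binds: payoff 0
# No effort on a fine grid beats e1*, so e1* is the agent's global optimum:
assert all(agent_payoff(i/1000, m1, m2) <= 1e-12 for i in range(1001))
```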
In this risk-neutral model, occasional errorless monitoring is just as
good as the principal being able to observe effort 100\% of the time. Indeed,
if we changed the model description to say that the principal could at a cost of
$\overline{C}$ observe the levels of $e_1$ and $e_2$ rather than just a
snapshot of what the agent is doing at some random time, the mathematics could
stay exactly the same. Risk aversion of the principal or agent would complicate
things, of course, since then the randomness of the principal's monitoring would
create risk that was costly. What if the agent's time were split not just
between the two tasks, but among the two tasks and shirking? That is our next
model.
\bigskip
\begin{center}
{\bf Multitasking II: Two Tasks Plus Leisure}
\end{center}
This game is the same as Multitasking I, except that now the agent's effort
budget constraint is not $e_1+e_2 =1$, but $e_1+e_2 \leq 1$.
The amount $(1-e_1-e_2)$ represents leisure, whose value we set equal to zero
in the agent's utility function (compared to effort, which enters in negatively
and with increasing marginal disutility). This is not the typical use of
leisure in an economic model: here leisure represents not time off the job, but
time on the job (a total timespan of 1) spent shirking rather than working.
Again let us begin with the first best. This can be found by choosing $e_1$
and $e_2$ and $C$ to maximize the sum of the payoffs:
\begin{equation} \label{e111}
q_1(e_1) + \beta q_2(e_2) - C - e_1^2 -e_2^2,
\end{equation}
subject to $e_1+e_2 \leq 1$, the only change in the optimization problem
from Multitasking I.
We now cannot use the trick of substituting for $e_2$ using the constraint
$e_2=1-e_1$, since it might happen that the effort budget constraint is not
binding at the optimum.
Solving the two first-order conditions for $e_1$ and $e_2$ for those two
unknowns is straightforward but messy, so we will not do it here. We will just
represent the solutions by $e_1^*, e_2^*, q_1^*$, and $ q_2^*$ and the payment
necessary to give the agent a payoff of zero by $w^*$, just as in Multitasking
I, with the understanding that these solution values equal those of
Multitasking I if it is efficient for the agent to take zero leisure, and are
smaller otherwise.
It might happen that $e_1^*+ e_2^*=1$, as in Multitasking I, so that the
first-best effort levels are the same as in that game.
Positive leisure for the agent in the first-best, i.e., the effort budget
constraint being non-binding, is a realistic case. It means that paying the
agent enough to work on the two tasks every minute of the day is inefficient.
Instead, it might be profit-maximizing to give the agent a coffee break, or
time off for lunch, or permission to talk to his wife if she telephones.
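To see when positive leisure arises in the first best, here is a small grid-search sketch under assumed linear outputs $q_1(e_1)=e_1$ and $q_2(e_2)=e_2$; the text leaves $q_1$ and $q_2$ general, so the functional forms and the value of $\beta$ are purely illustrative:

```python
# Grid search for the Multitasking II first best under assumed linear
# outputs q1 = e1 and q2 = e2: maximize e1 + beta*e2 - e1**2 - e2**2
# subject to the effort budget constraint e1 + e2 <= 1.
def first_best(beta, grid=500):
    best = None
    for i in range(grid + 1):
        for j in range(grid + 1 - i):          # leisure = 1 - e1 - e2 >= 0
            e1, e2 = i / grid, j / grid
            payoff = e1 + beta*e2 - e1**2 - e2**2
            if best is None or payoff > best[0]:
                best = (payoff, e1, e2)
    return best[1], best[2]

# With beta = 0.6 the optimum is interior: (e1, e2) = (0.5, 0.3), leisure 0.2.
interior = first_best(0.6)
# With beta = 2 the budget binds: (e1, e2) = (0.25, 0.75), zero leisure.
corner = first_best(2)
```

When Task 2's output is valuable enough ($\beta = 2$ here), the effort budget binds as in Multitasking I; otherwise the efficient contract leaves the agent some leisure.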
Next, let's think about a flat-wage contract. In Multitasking I, a flat wage
led to $e_1=e_2=0.5$. In Multitasking II, it would lead to $e_1=e_2=0$, quite a
different result. Now the agent has the option of leisure, which he prefers to
either task. Even if the first-best effort levels are identical in
Multitasking I and II, with zero leisure, we cannot expect the second-best
contracts to be the same. A low-powered incentive contract is disastrous,
because pulling the agent away from high effort on Task 1 does not leave him
working harder on Task 2.
A high-powered sharing-rule incentive contract in which the wage rises with
output performs much better, even though we cannot reach the first best as we
did in Multitasking I. Since the flat wage leads to $e_2=0$ anyway, adding
incentives for the agent to increase $e_1$ cannot do any harm. Effort on Task 2
will remain zero-- so the first-best is unreachable-- but a suitable sharing
rule can lead to $e_1=e_1^*$.
The combination $(e_1=e_1^*, e_2=0)$ is the second-best incentive-contract
solution in Multitasking II, since at $e_1^*$ the marginal disutility of
effort equals the
marginal utility of the marginal product of effort. That conclusion might be
misleading, though. We have assumed that the disutility of effort on Task 1 is
separable from the disutility of effort on Task 2. That separability is why,
even though the agent devotes no effort to Task 2, he should work no harder on Task 1.
More realistically, the disutility of effort would be some nonseparable function
$f(e_1, e_2)$ such that the efforts are ``substitute bads'' and
$\frac{\partial^2 f}{\partial e_1 \partial e_2} >0$. In that case, in the second-best the principal, unable to
induce $e_2$ to be positive, would push $e_1$ above the first-best level, since
the agent's marginal disutility of $e_1$ would be less at $(e_1^*,0)$ than at
$(e_1^*, e_2^*)$.
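The effect of nonseparable disutility can be checked numerically. The sketch below assumes illustrative functional forms that are not in the text: linear outputs $q_1 = e_1$ and $\beta q_2 = \beta e_2$, and disutility $f(e_1,e_2)= e_1^2+e_2^2+\gamma e_1 e_2$ with $\gamma > 0$, so the efforts are substitute bads:

```python
# Numeric check of the nonseparable-disutility claim, under assumed
# (illustrative) forms: surplus = e1 + beta*e2 - (e1**2 + e2**2 + g*e1*e2),
# where g > 0 makes the two efforts "substitute bads".
def surplus(e1, e2, beta=0.6, g=0.5):
    return e1 + beta*e2 - (e1**2 + e2**2 + g*e1*e2)

def argmax_e1(e2, grid=10_000):
    # e1 maximizing surplus for a fixed e2, by grid search
    return max(range(grid + 1), key=lambda i: surplus(i/grid, e2)) / grid

beta, g = 0.6, 0.5
# First-best efforts, from the two first-order conditions
# 1 - 2*e1 - g*e2 = 0 and beta - 2*e2 - g*e1 = 0:
e1_fb = (2 - g*beta) / (4 - g*g)       # about 0.4533
e2_fb = (2*beta - g) / (4 - g*g)       # about 0.1867
# At e2 = e2_fb the grid search recovers e1_fb, but when e2 is stuck at
# zero, the surplus-maximizing e1 rises above the first-best level:
assert abs(argmax_e1(e2_fb) - e1_fb) < 1e-3
assert argmax_e1(0.0) > e1_fb
```

With $e_2 = 0$ the marginal disutility of $e_1$ is lower, so the second-best pushes $e_1$ above $e_1^*$, just as the paragraph above argues.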
Thus, one lesson of Multitasking II is that if
an agent has a strong temptation to spend his time on tasks which have no
benefit for the principal, the situation is much closer to the conventional
agency models than to Multitasking I. The agent does not substitute between
the task with easy-to-measure output and the task with hard-to-measure output,
but between each task and leisure. The best the principal can do may be to
ignore the multitasking feature of the problem and just get the incentives right
for the task whose output he can measure.
Things are not quite so bleak, though. The
first-best effort levels {\it can} be attained, but it requires a
monitoring contract instead of an incentive contract. Monitoring is costly, so
this is not quite the first best, and it might not even be superior to the
second-best incentive contract if the monitoring cost $\overline{C}$ were too
big, but monitoring can induce any level of $e_2$ the principal desires.
The agent will choose his effort to maximize
\begin{equation} \label{e112}
\pi_{agent} = \overline{m} + e_1 m_1 + e_2 m_2 - e_1^2 -e_2^2,
\end{equation}
subject to $e_1+e_2 \leq 1$. Unlike in Multitasking I, the base wage
$\overline{m}$ matters, since it may happen that the principal monitors the
agent and finds him working on neither Task 1 nor Task 2. The base wage may even
be negative, which can be interpreted as a bond for good effort posted by the
agent or as a fee he pays for the privilege of filling the job and possibly
earning $m_1$ or $m_2$.
The principal will pick $m_1$ and $m_2$ to induce the agent to choose
$e_1^*$ and $e_2^*$, so he will pick them to solve the first-order conditions
of the agent's problem for $e_1^*$ and $e_2^*$:
\begin{equation} \label{e113}
\begin{array}{ll}
\frac{\partial \pi_{agent}}{\partial e_1} &= m_1 - 2e_1=0\\
& \\
\frac{\partial \pi_{agent}}{\partial e_2} &= m_2 - 2e_2=0\\
& \\
\end{array}
\end{equation}
These can be solved to yield $ m_1 = 2e_1^*$ and $m_2 =
2e_2^*$. We still need to determine the base wage, $ \overline{m} $.
Substituting into the participation constraint, which will be binding, and
recalling that we defined the agent's reservation expected wage as $w^* =
e_1^2 +e_2^2$,
\begin{equation} \label{e113a}
\begin{array}{ll}
\pi_{agent} &= \overline{m} + e_1 m_1 + e_2 m_2 - e_1^2 -e_2^2 =0 \\
& \\
& \displaystyle{= \overline{m} + e_1^* \left( 2e_1^* \right) +
e_2^*\left( 2 e_2^* \right) - w^* =0} \\
& \\
&= \overline{m} + 2w^* - w^* =0 \\
\end{array}
\end{equation}
so $ \overline{m} = -w^*$.
The base wage is thus negative: the agent in effect posts a bond of $w^*$ when
he takes the job, and earns it back through the piece rates $m_1$ and $m_2$.
If the principal monitors and finds the agent shirking, the agent is left with
only his negative base wage, so shirking leaves him worse off than not taking
the job at all.
Note that the base wage is important only for inducing the agent to take the
job and has no influence whatsoever on the agent's choice of effort. Changing
the base wage does not make the agent more or less likely to take leisure,
because he gets the base wage regardless of how much time he spends on each
activity. If $e_1^* +e_2^* =1$, the agent chooses zero leisure not because of
the base wage, but because the incentive of $m_1$ and $m_2$ is great enough
that he does not want to waste any opportunity to get that incentive pay.
Thus, we end our two chapters on moral hazard on a happy note: an agent whose
incentive pay is designed so well that he willingly works just as the
principal wants him to.
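The agent's behavior under this monitoring contract can also be checked numerically. In the sketch below (the piece-rate values are illustrative), a grid search over feasible $(e_1, e_2)$ confirms the first-order conditions $m_i - 2e_i = 0$, so the agent chooses $e_i = m_i/2$, and confirms that the base wage has no effect on his effort choice:

```python
# Grid search for the agent's best response in the Multitasking II
# monitoring contract: maximize m_bar + e1*m1 + e2*m2 - e1**2 - e2**2
# subject to e1 + e2 <= 1 (piece-rate values below are illustrative).
def best_response(m1, m2, m_bar=0.0, grid=400):
    best = None
    for i in range(grid + 1):
        for j in range(grid + 1 - i):          # enforce e1 + e2 <= 1
            e1, e2 = i / grid, j / grid
            payoff = m_bar + e1*m1 + e2*m2 - e1**2 - e2**2
            if best is None or payoff > best[0]:
                best = (payoff, e1, e2)
    return best[1], best[2]

# An interior optimum satisfies m_i - 2*e_i = 0, so e_i = m_i/2:
e1, e2 = best_response(m1=0.8, m2=0.4)
assert abs(e1 - 0.4) < 1e-9 and abs(e2 - 0.2) < 1e-9
# The base wage shifts the payoff but not the effort choice:
assert best_response(0.8, 0.4, m_bar=-1.0) == (e1, e2)
```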
\newpage
\begin{small}
\bigskip
\noindent
{\bf Notes}
\bigskip
\noindent
{\bf N8.1 Efficiency Wages}
\begin{itemize}
\item
Which is the better, the carrot or the stick? I will mention two other
considerations besides the bankruptcy constraint. First, if the agent is
risk averse, equal dollar punishments and rewards lead to the punishment
disutility being greater than the reward utility. Second, regression to the
mean can easily lead a principal to think sticks work better than carrots in
practice. Suppose a teacher assigns equal utility rewards and punishments to
a student depending on his performance on tests, and that the student's effort
is, in fact, constant. If the student is lucky on a test, he will do well
and be rewarded, but will probably do worse on the next test. If the student is
unlucky, he will be punished, and will do better on the next test. The naive
teacher will think that rewards hurt performance and punishments help it. See
Robyn Dawes's 1988 book, {\it Rational Choice in an Uncertain World}
(especially pages 84-87) for a good exposition of this and other pitfalls of
reasoning. Kahneman, Slovic \& Tversky (1982) covers similar material.
\item
For surveys of the efficiency wage literature, see the article by L. Katz
(1986), the book of articles edited by Akerlof \& Yellen (1986), and the
book-length survey by Weiss (1990).
\item
The efficiency wage idea is essentially the same idea as in the Klein \&
Leffler (1981) model of product quality formalized in section 5.3. If no
punishment is available for a player who is tempted to misbehave, a punishment
can be created by first giving him something that can be taken away. This
something can be a high-paying job or a loyal customer. It is also similar to
the idea of {\bf co-opting} opponents, familiar in politics and university
administration. To tame
the radical student association, give them an office of their own which
can be taken away if they seize the dean's office. Rasmusen (1988b) shows yet
another context: when depositors do not know which investments are risky and
which are safe, mutual bank managers can be highly paid to deter them from
making risky investments that might cost them their jobs.
\item
Adverse selection can also drive an efficiency wage model. We will see in
Chapter 9 that a customer might be willing to pay a high price to attract
sellers of high-quality cars when he cannot detect quality directly.
\end{itemize}
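The regression-to-the-mean pitfall in the first note above can be demonstrated with a short simulation (all numbers are illustrative): the student's true ability never changes, yet lucky, rewarded students decline on the next test and unlucky, punished students improve.

```python
import random

# Regression to the mean: a test score is constant true ability plus
# i.i.d. luck, so conditioning on an extreme first score predicts a
# more ordinary second score even though effort never changes.
random.seed(0)
ability = 70.0
first = [ability + random.gauss(0, 10) for _ in range(100_000)]
second = [ability + random.gauss(0, 10) for _ in range(100_000)]

rewarded = [(f, s) for f, s in zip(first, second) if f > 80]  # "lucky" students
punished = [(f, s) for f, s in zip(first, second) if f < 60]  # "unlucky" students

avg = lambda pairs, k: sum(p[k] for p in pairs) / len(pairs)
assert avg(rewarded, 1) < avg(rewarded, 0)   # the rewarded do worse next time
assert avg(punished, 1) > avg(punished, 0)   # the punished do better next time
# Both groups' second scores revert toward true ability:
assert abs(avg(rewarded, 1) - ability) < 1
assert abs(avg(punished, 1) - ability) < 1
```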
\bigskip \noindent
{\bf N8.2 Tournaments}
\begin{itemize}
\item
An article which stimulated much interest in tournaments is Lazear \& Rosen
(1981), which discusses in detail the importance of risk aversion and adverse
selection.
Antle \& Smith (1986) is an empirical study of tournaments in managers'
compensation. Rosen (1986) is a theoretical model of a labor tournament in
which the prize is promotion.
\item
One example of a tournament is the two-year, three-man contest for the new
chairman of Citicorp. The company named three candidates as vice-chairmen: the
head of consumer banking, the head of corporate banking, and the legal counsel.
Earnings reports were even split into three components, two of which were the
corporate and consumer banking (the third was the ``investment'' bank,
irrelevant to the tournament). See ``What Made Reed Wriston's Choice at
Citicorp,'' {\it Business Week}, July 2, 1984, p. 25.
\item General Motors has tried a tournament among its production workers. During
a depressed year, management credibly threatened to close down the auto plant
with the lowest productivity. Reportedly, this did raise productivity. Such a
tournament is interesting because it helps explain why a firm's supply curve
could be upward sloping even if all its plants are identical, and why it might
hold excess capacity. Should information on a plant's current performance have
been released to other plants? See ``Unions Say Auto Firms Use Interplant
Rivalry to Raise Work Quotas,'' {\it Wall Street Journal}, November 8, 1983, p.
1.
\item
Under adverse selection, tournaments must be used differently than under moral
hazard, because the problem is the agents' ability, which they cannot control,
not their effort. Tournaments can then be used as a screen: a low-ability agent
will refuse a contract in which he must compete for a prize with other agents
of higher ability.
\item
Interfirm management tournaments run into difficulties when shareholders want
managers to cooperate in some arenas. If managers collude in setting prices,
for example, they can also collude to make life easier for each other.
\item
Suppose a firm conducts a tournament in which the best-performing of its
vice-presidents becomes the next president. Should the firm fire the most
talented vice-president before it starts the tournament? The answer is not
obvious. Maybe not: in the tournament's equilibrium, Mr. Talent may work less
hard because of his initial advantage, so that all of the vice-presidents
retain the incentive to work hard.
\item A tournament can reward the winner, or shoot the loser. Which is better?
Nalebuff \& Stiglitz (1983) say to shoot the loser, and Rasmusen (1987) finds a
similar result for teams, but for a different reason. Nalebuff \& Stiglitz's
result depends on uncertainty and a large number of agents in the tournament,
while Rasmusen's depends on risk aversion. If a utility function is concave
because the agent is risk averse, the agent is hurt more by losing a given sum
than he would benefit by gaining it. Hence, for incentive purposes the carrot is
inferior to the stick, a result unfortunate for efficiency since penalties are
often bounded by bankruptcy or legal constraints.
\item
Using a tournament, the equilibrium effort might be greater in a second-best
contract than in the first-best, even though the second-best is contrived to get
around the problem of inducing sufficient effort. An agent-by-agent contract
might lead to zero effort, for example, but a tournament, while better, might
overshoot and lead to inefficiently high effort. Also, a pure tournament, in
which the prizes are distributed solely according to the ordinal ranking of
output by the agents, is often inferior to a tournament in which an agent must
achieve a significant margin of superiority over his fellows in order to win
(Nalebuff \& Stiglitz [1983]). Companies using sales tournaments sometimes have
prizes for record yearly sales besides ordinary prizes, and some long distance
athletic races have nonordinal prizes to avoid dull events in which the best
racers run ``tactical races.''
Ehrenberg \& Bognanno (1990) find that professional golfers' scores are
worse when the prize money is lower, especially in later rounds of a
tournament, when the golfers are tired. Duggan \& Levitt (2002) find evidence
that a Japanese sumo wrestler purposely loses if he has already reached the
threshold number of victories needed to maintain his status while his opponent
badly needs a victory.
\item
Organizational slack of the kind described in the Farrell model has important
practical implications. In dealing with bureaucrats, one must keep in mind that
they are usually less concerned with the organization's prosperity than with
their own. In complaining about bureaucratic ineptitude, it may be much more
useful to name particular bureaucrats and send them copies of the complaint
than to stick to the abstract issues at hand. Private firms, at least, are well
aware that customers help monitor agents.
\item
The idea that an uninformed player can design a scheme to extract
information from two or more informed players by having them make
independent reports and punishing them for discrepancies has other
applications. The tournament in the Farrell model is similar to the ``Maskin
Scheme'' that will be discussed in Chapter 10 on mechanism design, where an
uninformed court punishes two players if they give different reports of the same
private information.
\end{itemize}
\bigskip
\noindent
{\bf N8.3 Institutions and Agency Problems}
\begin{itemize}
\item
Even if a product's quality need not meet government standards, the seller may
wish to bind himself to them voluntarily. Stroh's {\it Erlanger} beer proudly
announced on every bottle that although it is American, ``Erlanger is a special
beer brewed to meet the stringent requirements of Reinheitsgebot, a German
brewing purity law established in 1516.'' Inspection of household electrical
appliances by an independent lab to get the ``UL'' listing is a similarly
voluntary adherence to standards.
\item The stock price is a way of using outside analysts to monitor an
executive's performance. When General Motors bought EDS, they created a special
class of stock, GM-E, which varied with EDS performance and could be used to
monitor it. \end{itemize}
\bigskip
\noindent
{\bf *N8.6 Joint Production by Many Agents: The Holmstrom Teams Model }
\begin{itemize} \item {\bf Team theory}, as developed by Marschak \& Radner
(1972) is an older mathematical approach to organization. In the old usage of
``team'' (different from the current, Holmstrom [1982] usage), several agents
who have different information but cannot communicate it must pick decision
rules. The payoff is the same for each agent, and their problem is coordination,
not motivation.
\item The efficient contract (\ref{e10}) supports the efficient Nash
equilibrium, but it also supports a continuum of inefficient Nash equilibria.
Suppose that in the efficient equilibrium all workers work equally hard.
Another Nash equilibrium is for one worker to do no work and the others to work
inefficiently hard to make up for him.
\item
{\bf A teams contract with hidden knowledge.} In the 1920s, National City Co.
assigned 20 percent of profits to compensate management as a group. A management
committee decided how to share it, after each officer submitted an unsigned
ballot suggesting the share of the fund that Chairman Mitchell should have, and
a signed ballot giving his estimate of the worth of each of the other eligible
officers, himself excluded. (Galbraith [1954] p. 157)
\item {\bf A first-best, budget-balancing contract when agents are risk
averse}. Proposition 8.1 can be shown to hold for any contract, not just for
differentiable sharing rules, but it does depend on risk neutrality and
separability of the utility function. Consider the following contract from
Rasmusen (1987):\\
\begin{equation}
w_i = \left\{ \begin{array}{ll}
b_i & {\rm if}\; q \geq q(e^*) \\
\left\{ \begin{array}{ll}
0 & {\rm with\; probability}\; (n-1)/n \\
q & {\rm with\; probability}\; 1/n \\
\end{array} \right. & {\rm if}\; q < q(e^*) \\
\end{array} \right.
\end{equation}
If the worker shirks, he enters a lottery. If his risk aversion is strong
enough, he prefers the riskless return $b_i$, so he does not shirk. If agents'
wealth is unlimited, then for any positive risk aversion we could construct such
a contract, by making the losers in the lottery accept negative pay.
\item
A teams contract such as (\ref{e10}) is not a tournament. Only absolute
performance matters, even though the level of absolute performance depends on
what all the players do.
\item
{\bf The budget-balancing constraint.} The legal doctrine of ``consideration''
makes it difficult to make binding, Pareto-suboptimal promises. An agreement is
not a legal contract unless it is more than a promise: both parties have to
receive something valuable for the courts to enforce the agreement.
\item
Adverse selection can be incorporated into a teams model. A team of workers who
may differ in ability produce a joint output, and the principal tries to ensure
that only high-ability workers join the team. (See Rasmusen \& Zenger [1990]).
\end{itemize}
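The lottery contract in the risk-aversion note above can be checked numerically. This sketch uses illustrative values ($n=4$, $q=100$, square-root utility): a sufficiently risk-averse agent prefers the sure share $b_i$ to the $n$-way lottery over the whole output, even when the two have the same expected value.

```python
import math

# Sketch: a risk-averse agent prefers the sure share b_i = q/n to the
# lottery paying 0 with probability (n-1)/n and q with probability 1/n.
n, q = 4, 100.0
b_i = q / n                    # budget-balancing sure share, same expected value
u = math.sqrt                  # concave utility, i.e., risk aversion
lottery_utility = (n - 1)/n * u(0.0) + 1/n * u(q)
assert u(b_i) > lottery_utility    # the sure share beats the lottery
```

With risk-neutral (linear) utility the two would be equal; the gap is the risk premium that turns the lottery into a punishment for shirking.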
\newpage
\noindent {\bf Problems}
\bigskip
\noindent
{\bf 8.1. Monitoring with error} (easy) \\
An agent has a utility function $U= \sqrt {w} - \alpha e$, where $ \alpha =
1$ and $e$ is either 0 or 5. His reservation utility level is $\overline{U} =
9$, and his output is 100 with low effort and 250 with high effort. Principals
are risk neutral and scarce, and agents compete to work for them. The principal
cannot condition the wage on effort or output, but he can, if he wishes, spend
five minutes of his time, worth 10 dollars, to drop in and watch the agent. If
he does that, he observes the agent $Daydreaming$ or $Working$, with
probabilities that differ depending on the agent's effort. He can condition the
wage on those two things, so the contract will be $\{\underline{w},\overline{w}
\}$. The probabilities are given by Table 1.
\begin{center} {\bf Table 1:
Monitoring with Error}
\begin{tabular}{ l|cc }
& \multicolumn{2}{c}{{\bf Probability of }}\\ {\bf Effort} & {\bf
$Daydreaming$} &{\bf $Working$}\\ \hline $ Low $($e=0$) & 0.6 &0.4\\ \hline $
High $($e=5$) & 0.1 & 0.9\\
\end{tabular}
\end{center}
\begin{enumerate}
\item[(a)] What are profits in the absence of monitoring, if the agent is
paid enough to make him willing to work for the principal?
\item[(b)]
Show that high effort is efficient under full information.
\item[(c)]
If $\alpha = 1.2$, is high effort still efficient under full information?
\item[(d)]
Under asymmetric information, with $\alpha = 1$, what are the participation
and incentive compatibility constraints?
\item[(e)]
Under asymmetric information, with $\alpha =1$, what is the optimal contract?
\end{enumerate}
%---------------------------------------------------------------
\bigskip
\noindent
{\bf 8.2. Monitoring with Error: Second Offenses }(medium) (see Rubinstein
[1979]) \\
Individuals who are risk-neutral must decide whether to commit zero, one, or
two robberies. The cost to society of robbery is 10, and the benefit to the
robber is 5. No robber is ever convicted and jailed, but the police beat up any
suspected robber they find. They beat up innocent people mistakenly sometimes,
as shown by Table 2, which shows the probabilities of zero or more beatings for
someone who commits zero, one, or two robberies. \begin{center} {\bf Table 2:
Crime}
\begin{tabular}{ l|ccc }
& \multicolumn{3}{c } {\bf Beatings}\\
\hline {\bf Robberies} & 0 & 1 & 2\\
\hline 0 & 0.81 & 0.18 & 0.01 \\
1 & 0.60 & 0.34 & 0.06 \\ 2 & 0.49 & 0.42 & 0.09\\
\end{tabular}
\end{center}
\begin{enumerate}
\item[(a)]
How big should $p^*$, the disutility of a beating, be made to deter crime
completely while inflicting a minimum of punishment on the innocent?
\item[(b)]
In equilibrium, what percentage of beatings are of innocent people? What is
the payoff of an innocent man?
\item[(c)]
Now consider a more flexible policy, which inflicts heavier beatings on repeat
offenders. If such flexibility is possible, what are the optimal severities for
first- and second-time offenders? (call these $p_1$ and $p_2$). What is the
expected utility of an innocent person under this policy?
\item[(d)]
Suppose that the probabilities are as given in Table 3. What is an optimal
policy for first and second offenders? \\
\begin{center}
{\bf Table 3: More Crime}
\begin{tabular}{ l|ccc }
& \multicolumn{3}{c } {\bf Beatings}\\
{\bf Robberies} & 0 & 1 & 2\\
\hline 0 & 0.9 & 0.1 & 0 \\
1 & 0.6 & 0.3 & 0.1 \\
2 & 0.5 & 0.3 & 0.2\\
\end{tabular} \end{center}
\end{enumerate}
\bigskip
\noindent
\textbf{8.3. Bankruptcy Constraints } (hard) \\
A risk-neutral principal hires an agent with utility function $U= w-e$ and
reservation utility $ \overline{U} = 5$. Effort is either 0 or 10. There is a
bankruptcy constraint: $w \geq 0$. Output is given by Table 4.
\begin{center} \textbf{Table 4: Bankruptcy}
\begin{tabular}{l|cc|c} & \multicolumn{2}{c}
{\textbf{Probability of Output of}}
& \\ \textbf{Effort} & 0 & 400 & Total \\
\hline $Low$ ($e=0$) & 0.5 & 0.5 & 1
\\ $High$ ($e=10$) & 0.2 & 0.8 & 1 \end{tabular} \end{center}
\begin{enumerate}
\item[(a)] What would be the agent's effort choice and utility if he owned the
firm?
\item[(b)] If agents are scarce and principals compete for them what will be the
agent's contract under full information? His utility?
\item[(c)] If principals are scarce and agents compete to work for them, what
will the contract be under full information? What will the agent's utility be?
\item[(d)] If principals are scarce and agents compete to work for them, what
will the contract be when the principal cannot observe effort? What will the
payoffs be for each player?\\
\item[(e)] Suppose there is no bankruptcy constraint. If principals are the
scarce factor and agents compete to work for them, what will the contract be
when the principal cannot observe effort? What will the payoffs be for principal
and agent?
\end{enumerate}
\bigskip
\noindent
{\bf 8.4. Teams} (medium)\\
A team of two workers produces and sells widgets for the
principal. Each worker chooses high or low effort. An agent's utility is $U= w -
20$ if his effort is high, and $U=w$ if it is low, with a reservation utility of
$\overline{U}=0$. Nature chooses business conditions to be excellent, good, or
bad, with probabilities $\theta_1$, $\theta_2$, and $\theta_3$. The principal
observes output but not business conditions, as shown in Table 5.
\begin{center}
{\bf Table 5: Team output}
\begin{tabular}{ l|c|c|c } &${\bf Excellent}$ &${\bf Good}$&${\bf Bad}$ \\
& ${\bf(\theta_1)}$& ${\bf(\theta_2)}$ & ${\bf(\theta_3)}$\\ \hline $High,
High$ & 100 & 100 & 60\\ \hline $ High, Low$ & 100 & 50 & 20\\ \hline $ Low,
Low$ & 50 & 20 & 0\\ \end{tabular}
\end{center}
\begin{enumerate}
\item[(a)] Suppose $\theta_1=\theta_2=\theta_3$. Why is $\{(w(100)=30,
w(not\; 100) =0),(High, High)\}$ not an equilibrium?
\item[(b)]
Suppose $\theta_1=\theta_2=\theta_3$. Is it optimal to induce high effort?
What is an optimal contract with nonnegative wages?
\item[(c)]
Suppose $\theta_1=0.5$, $\theta_2=0.5$, and $\theta_3=0$. Is it optimal to
induce high effort? What is an optimal contract (possibly with negative wages)?
\item[(d)]
Should the principal stop the agents from talking to each other?
\end{enumerate}
\bigskip
\noindent
{\bf 8.5. Efficiency Wages and Risk Aversion } (medium) (see Rasmusen
[1992c]) \\
In each of two periods of work, a worker decides whether to steal amount
$v$, and is detected with probability $\alpha$ and suffers legal penalty $p$ if
he, in fact, did steal. A worker who is caught stealing can also be fired,
after which he earns the reservation wage $w_0$. If the worker does not steal,
his utility in the period is $U(w)$; if he steals, it is $U(w+v) - \alpha p$,
where $U(w_0 +v) - \alpha p > U(w_0)$. The worker's marginal utility of income
is diminishing: $U' >0$, $U''<0$, and $\lim_{x \rightarrow \infty} U'(x) = 0$.
There is no discounting. The firm definitely wants to deter stealing in each
period, if at all possible.
\begin{enumerate}
\item[(a)]
Show that the firm can indeed deter theft, even in the second period, and, in
fact, do so with a second-period wage $w_2^*$ that is higher than the
reservation wage $w_0$.
\item[(b)]
Show that the equilibrium second-period wage $w_2^*$ is higher than the
first-period wage $w_1^*$.
\end{enumerate}
\bigskip
\noindent
{\bf 8.6. The Game Wizard } (medium) \\
A high-tech firm is trying to develop the game Wizard 1.0. It will have
revenues of \$200,000 if it succeeds, and \$0 if it fails. Success depends on
the programmer. If he exerts high effort, the probability of success is 0.8. If
he exerts low effort, it is 0.6. The programmer requires wages of at least
\$50,000 if he can exert low effort, but \$70,000 if he must exert high effort.
(Let's just use payoffs in thousands of dollars, so 70,000 dollars will be
written as 70.)
\begin{enumerate}
\item[(a)] Prove that high effort is first-best efficient.
\item[(b)] Explain why high effort would be inefficient if the probability of
success when effort is low were 0.75.
\item[(c)]
Let the probability of success with low effort go back to 0.6 for the
remainder
of the problem. If you cannot monitor the programmer and cannot pay him a wage
contingent on success, what should you do?
\item[(d)] Now suppose you can make the wage contingent on success. Let the wage
be $S$ if Wizard is successful, and $F$ if it fails. $S$ and $F$ will have to
satisfy two conditions: a participation constraint and an incentive
compatibility constraint. What are they?
\item[(e)]
What is a contract that will achieve the first best?
\item[(f)]
What is the optimal contract if you cannot pay a programmer a negative wage?
\end{enumerate}
\bigskip
\noindent
{\bf 8.7 Machinery } (medium) \\
Mr. Smith is thinking of buying a custom-designed machine from either Mr.
Jones or Mr. Brown. This machine costs 5000 dollars to build, and it is
useless to anyone but Smith. It is common knowledge that with 90 percent
probability the machine will be worth 10,000 dollars to Smith at the time of
delivery, one year from today, and with 10 percent probability it will only be
worth 2,000 dollars. Smith owns assets of 1,000 dollars. At the time of
contracting, Jones and Brown believe there is a 20 percent chance
that Smith is actually acting as an ``undisclosed agent'' for Anderson, who has
assets of 50,000 dollars.
Find the price under the following two legal regimes: (a) An undisclosed
principal is not responsible for the debts of his agent; and (b) even an
undisclosed principal is responsible for the debts of his agent. Also, explain
(as part [c]) which rule a moral hazard model like this would tend to support.
%---------------------------------------------------------------
\newpage
\begin{center}
{\bf Lobbying Teams: A Classroom Game for Chapter 8}
\end{center}
Some of you will be Manufacturing firms and some Agricultural firms. Each firm
will consist of two people.
The President of the United States is deciding between two policies. Free Trade
will yield \$10 million in benefit to each agricultural firm. Protectionism
will yield \$10 million in benefit to each manufacturing firm.
In each year, each firm will write down its favored policy and its lobbying
expenditure, amount $X$, on a notecard and hand it in.
Whichever policy gets the most lobbying expenditure wins. If your favored
policy wins, your payoff is $10-X$. If it loses, your payoff is $-X$.
Each round the rules will variously allow communication and agreements of
different kinds.
\end{small}
\end{document}