\documentclass[12pt]{article}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amssymb}
\reversemarginpar
\topmargin -1in
\oddsidemargin -.9in \textheight 10in \textwidth 7.4in
\begin{document}
\baselineskip 16pt
\parindent 24pt
\parskip 10pt
22 November 2005. 31 August 2006. Eric Rasmusen, Erasmuse@indiana.edu.
http://www.rasmusen.org/.
\begin{LARGE} \begin{center}
{ \bf 2 Information}
\end{center} \end{LARGE}
\begin{Huge}
\begin{center} {\bf Table 1: Ranked Coordination }

\begin{tabular}{lllccc}
& & & \multicolumn{3}{c}{\bf Jones}\\
& & & $Large$ & & $Small$ \\
& & $Large$ & {\bf 2,2} & $\leftarrow$ & $-1,-1$ \\
& {\bf Smith} & & $\uparrow$ & & $\downarrow$ \\
& & $Small$ & $-1,-1$ & $\rightarrow$ & {\bf 1,1} \\
& & & & & \\
\end{tabular} \end{center}
\vspace{-24pt}
{\it Payoffs to: (Smith, Jones). Arrows show how a player can increase his
payoff. }
\newpage
The term {\bf bayesian equilibrium} refers to a Nash equilibrium in
which players update their beliefs according to Bayes's Rule. Since Bayes's Rule
is the natural and standard way to handle imperfect information, the adjective
``bayesian'' is really optional. But the two-step procedure for checking a Nash
equilibrium has now become a three-step procedure:

1. Propose a strategy profile.

2. See what beliefs the strategy profile generates when players update their
beliefs in response to each other's moves.

3. Check that, given those beliefs together with the strategies of the other
players, each player is choosing a best
\newpage
The rules of the game specify each player's initial beliefs, and Bayes's Rule
is the rational way to update beliefs. Suppose, for example, that Jones starts
with a particular prior belief, $Prob(Nature \; chose \; (A))$.
In Follow-the-Leader III, this equals 0.7.
He then observes Smith's move --- $Large$,
perhaps. Seeing $Large$ should make Jones update to the {\bf posterior}
belief, $Prob(Nature \; chose \; (A) \; | \; Smith \; chose \; Large)$, where the
symbol ``$|$'' denotes ``conditional upon'' or ``given that.''
\newpage
Bayes's Rule shows how to revise the prior belief in the light of new
information such as Smith's move. It uses two pieces of information, the
likelihood of seeing Smith choose $Large$ given that Nature chose state of
the world (A), $Prob ( Large| (A) )$, and the likelihood of seeing Smith
choose $Large$ given that Nature did not choose state (A), $Prob (
Large| (B)\; or\; (C) )$.
\newpage
From these numbers, Jones can calculate
$Prob
(Smith \; chooses\; Large )$,
the {\bf marginal likelihood} of seeing $Large$ as
the result of one or another of the possible states of the world that Nature
might choose.
\begin{Large}
\begin{equation}\label{e2.1} \begin{array}{ll}
Prob (Smith \;
chooses\; Large ) &= Prob ( Large| A ) Prob(A) + Prob ( Large| B ) Prob(B)
\\ & + Prob ( Large| C ) Prob(C).\\ \end{array} \end{equation}
\end{Large}
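As a quick numerical check of equation (\ref{e2.1}), here is a minimal Python sketch. The dictionary names are mine, and the numbers are the Follow-the-Leader III values used later in the text: priors $Prob(A)=0.7$, $Prob(B)=0.1$, $Prob(C)=0.2$, with the candidate-equilibrium likelihoods $Prob(Large|A)=1$, $Prob(Large|B)=1$, $Prob(Large|C)=0$.

```python
# Marginal likelihood of Large, equation (2.1): law of total probability.
# Priors and likelihoods are the Follow-the-Leader III numbers from the text.
prior = {"A": 0.7, "B": 0.1, "C": 0.2}
likelihood_large = {"A": 1.0, "B": 1.0, "C": 0.0}

# Sum Prob(Large|state) * Prob(state) over the three states of the world.
prob_large = sum(likelihood_large[s] * prior[s] for s in prior)
print(round(prob_large, 3))  # 0.8
```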
Bayes's Rule is not purely mechanical. It is the only way to rationally
update beliefs. The derivation is worth understanding, because Bayes's Rule is
hard to memorize but easy to rederive.
\newpage
\begin{Large}
\begin{equation}\tag{\ref{e2.1}} \begin{array}{ll}
Prob (Smith \;
chooses\; Large ) &= Prob ( Large| A ) Prob(A) + Prob ( Large| B ) Prob(B)
\\ & + Prob ( Large| C ) Prob(C).\\ \end{array} \end{equation}
\end{Large}
To find his posterior,
$Prob(Nature \; chose \; (A) \; | \; Smith \; chose \; Large)$,
Jones uses the likelihood and his priors. The joint probability of
both seeing Smith choose $Large$ and Nature having chosen (A) is
\begin{Large}
\begin{equation}\label{e2.2} Prob(Large,A) = Prob(A|Large)Prob(Large) =
Prob(Large|A)Prob(A). \end{equation}
\end{Large}
Since what Jones is trying to calculate is $Prob(A|Large)$, rewrite the last
part of (\ref{e2.2}) as follows:
\begin{equation}\label{e2.3}
Prob(A|Large) = \frac{ Prob(Large|A)Prob(A)} {Prob (Large)}.
\end{equation}
Jones needs to calculate his new belief --- his posterior --- using $Prob(Large)
$, which he calculates from his original knowledge using (\ref{e2.1}).
Substituting the expression for $Prob(Large)$ from (\ref{e2.1}) into equation
(\ref{e2.3}) gives the final result, a version of Bayes's Rule.
\begin{normalsize} \begin{equation}\label{e2.4} Prob(A|Large) = \frac{Prob(Large|A)
Prob(A)} {Prob(Large|A)Prob(A) + Prob(Large|B )Prob(B)+ Prob(Large|C)Prob(C)}.
\end{equation}
\end{normalsize}
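Bayes's Rule in equation (\ref{e2.4}) can be sketched as a small Python function. The function name \texttt{posterior} and the dictionary layout are my own illustration, not notation from the text; the numbers exercising it are the Follow-the-Leader III values.

```python
from typing import Dict

def posterior(prior: Dict[str, float], likelihood: Dict[str, float],
              state: str) -> float:
    """Bayes's Rule, equation (2.4):
    Prob(state|data) = Prob(data|state)Prob(state) / sum_s Prob(data|s)Prob(s)."""
    marginal = sum(likelihood[s] * prior[s] for s in prior)
    return likelihood[state] * prior[state] / marginal

# Follow-the-Leader III numbers: priors and candidate-equilibrium likelihoods.
prior = {"A": 0.7, "B": 0.1, "C": 0.2}
like = {"A": 1.0, "B": 1.0, "C": 0.0}
print(round(posterior(prior, like, "A"), 3))  # 0.875
```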
\newpage
\begin{Large}
\begin{equation}\tag{\ref{e2.1}} \begin{array}{ll}
Prob (Smith \;
chooses\; Large ) &= Prob ( Large| A ) Prob(A) + Prob ( Large| B ) Prob(B)
\\ & + Prob ( Large| C ) Prob(C).\\ \end{array} \end{equation}
\end{Large}
Let us now return to the numbers in Follow-the-Leader III to use
the belief-updating rule that was just derived.
Jones has a prior belief that
the probability of event ``Nature picks state (A)'' is 0.7 and he needs to
update that belief on seeing the data ``Smith picks $Large$''. His prior is
$Prob(A) = 0.7$, and we wish to calculate $Prob (A|Large)$.
To use Bayes's Rule from equation (\ref{e2.4}), we need the values of $Prob
(Large|A)$, $Prob (Large|B)$, and $Prob (Large|C)$.
These values depend on what
Smith does in equilibrium, so Jones's beliefs cannot be calculated
independently of the equilibrium. This is the reason for the three-step
procedure suggested above.
A candidate for equilibrium is

Smith: $(L|A, \; L|B, \; S|C)$

Jones: $(L|L, \; S|S)$.
\newpage
Smith: $(L|A, \; L|B, \; S|C)$

Jones: $(L|L, \; S|S)$.
Let us test that this is an
equilibrium, starting with the calculation of $Prob(A|Large)$.
If Jones observes $Large,$ he can rule out state (C), but he does not know
whether the state is (A) or (B).
Bayes's Rule tells him that the posterior
probability of state (A) is
\begin{equation} \label{e2.7}
\begin{array}{ll}
Prob(A|Large) & = \frac{(1)(0.7)}{(1)(0.7) + (1)(0.1) + (0)(0.2)}\\ & \\ & =
0.875. \end{array} \end{equation}
The posterior probability of state (B) must
then be $1-0.875 = 0.125$, which could also be calculated from Bayes's Rule, as
follows: \begin{equation} \label{e2.8} \begin{array}{ll}
Prob(B|Large) &=
\frac{(1)(0.1)}{(1)(0.7) + (1)(0.1) + (0)(0.2)}\\ & \\ & = 0.125.
\end{array}
\end{equation}
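The posteriors in equations (\ref{e2.7}) and (\ref{e2.8}) can be checked together in a short Python sketch (variable names are mine); note that the posteriors over the three states sum to one, as they must.

```python
# Posteriors over all three states after Jones observes Large,
# equations (2.7) and (2.8): priors 0.7/0.1/0.2, likelihoods 1/1/0.
prior = {"A": 0.7, "B": 0.1, "C": 0.2}
like_large = {"A": 1.0, "B": 1.0, "C": 0.0}

marginal = sum(like_large[s] * prior[s] for s in prior)
post = {s: like_large[s] * prior[s] / marginal for s in prior}
print({s: round(p, 3) for s, p in post.items()})  # {'A': 0.875, 'B': 0.125, 'C': 0.0}
print(round(sum(post.values()), 3))               # 1.0 -- posteriors sum to one
```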
\newpage
\includegraphics[width=7in]{fig02-08.jpg}
\begin{center}
{\bf Figure 8: Bayes's Rule} \end{center}
\newpage
Smith: $(L|A, \; L|B, \; S|C)$

Jones: $(L|L, \; S|S)$.
Jones must use Smith's strategy in the proposed equilibrium to find numbers
for
$Prob(Large|A)$, $Prob(Large|B)$, and $Prob(Large|C)$.
Given that Jones believes the state is (A) with probability 0.875 and
(B) with probability 0.125, his best response is $Large$, even though he
knows that if the state were actually (B) the better response would be $Small$:

$E\pi(Small|Large) = 0.875(-1) + 0.125(2) = -0.625$, while

$E\pi(Large|Large) = 0.875(2) + 0.125(1) = 1.875$.
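Jones's expected-payoff comparison can be verified in a few lines of Python (the dictionary layout is mine; the payoff numbers are those used in the calculation above: in state (A) Jones gets 2 from $Large$ and $-1$ from $Small$, in state (B) he gets 1 from $Large$ and 2 from $Small$).

```python
# Jones's expected payoffs after seeing Large, using posteriors 0.875 / 0.125
# and the state-contingent payoffs from the text.
belief = {"A": 0.875, "B": 0.125}
payoff = {"Large": {"A": 2, "B": 1}, "Small": {"A": -1, "B": 2}}

expected = {a: sum(belief[s] * payoff[a][s] for s in belief) for a in payoff}
print(expected)  # {'Large': 1.875, 'Small': -0.625}
best = max(expected, key=expected.get)
print(best)      # Large
```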
A similar calculation can be done for $Prob (A|Small)$.
\begin{equation} \label{e2.9} Prob (A|Small) =
\frac{(0)(0.7)}{(0)(0.7) + (0)(0.1) + (1)(0.2)} = 0.
\end{equation}
Given that
he believes the state is (C), Jones's best response to $Small$ is $Small$.
Given that Jones will imitate his
action, Smith does best by following his equilibrium strategy of ($L| A , L|B,
S| C$).
\newpage
The calculations are relatively simple because Smith uses a nonrandom strategy
in equilibrium, so, for instance, $Prob(Small|A) =0$ in equation (\ref{e2.9}).
Consider what happens if Smith uses a random strategy of picking $Large$ with
probability 0.2 in state (A), 0.6 in state (B), and 0.3 in state (C) (we will
analyze such ``mixed'' strategies in Chapter 3).
\begin{equation} \tag{\ref{e2.7}}
\begin{array}{ll}
Prob(A|Large) & = \frac{(1)(0.7)}{(1)(0.7) + (1)(0.1) + (0)(0.2)}\\ & \\ & =
0.875. \end{array} \end{equation}
The equivalent of equation
(\ref{e2.7}) is
\begin{equation} \label{e2.10}
Prob(A|Large) = \frac{(0.2)(0.7)}
{(0.2)(0.7) + (0.6)(0.1) + (0.3)(0.2)} = 0.54 \;\;\mbox{(rounded)}. \end{equation}
If he sees $Large$, Jones's best guess is still that Nature chose state (A),
even though in state (A) Smith has the smallest probability of choosing
$Large$. Jones's subjective posterior probability, $Prob(A|Large)$, has
nonetheless fallen to 0.54 from his prior of $Prob(A) = 0.7$.
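The mixed-strategy posterior in equation (\ref{e2.10}) follows from the same Bayes computation, now with the non-degenerate likelihoods; a quick Python check (names are mine):

```python
# Equation (2.10): Smith mixes, choosing Large with probability 0.2 in (A),
# 0.6 in (B), and 0.3 in (C); priors are unchanged at 0.7 / 0.1 / 0.2.
prior = {"A": 0.7, "B": 0.1, "C": 0.2}
like = {"A": 0.2, "B": 0.6, "C": 0.3}

marginal = sum(like[s] * prior[s] for s in prior)  # 0.14 + 0.06 + 0.06 = 0.26
post_A = like["A"] * prior["A"] / marginal
print(round(post_A, 2))  # 0.54 -- down from the prior of 0.7
```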
The last two lines of Figure 8 illustrate this case.
\end{Huge}
\end{document}