February 01, 2005

Al Qaeda Rule 18 on Torture; Noninformative Actions and Bayesian Updating

Via some blogger whom I forgot to note, see The Telegraph on Rule 18 of the Al Qaeda handbook -- claim you were tortured. The U.S. government has posted the rulebook.

Feroz Abbasi, Martin Mubanga, Moazzam Begg and Richard Belmar finally arrived back in Britain last week after their three-year imprisonment in Guantanamo, to near-universal acclaim and sympathy. Their lawyers insist that they are totally innocent of any involvement in terrorism. The men themselves say that they have been tortured,...

...the al-Qa'eda training manual discovered during a raid in Manchester a couple of years ago. Lesson 18 of that manual, whose authenticity has not been questioned, emphatically states, under the heading "Prison and Detention Centres", that, when arrested, members of al-Qa'eda "must insist on proving that torture was inflicted on them by state security investigators. [They must] complain to the court of mistreatment while in prison".

This is a good example of a principle I frequently apply: if people will do X in either state A or state B, then when you see them do X, don't change your beliefs as to which is more likely, A or B. You have not gotten any information from observing them do X.
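In Bayes-rule terms: when the likelihood of X is the same under A and B, the posterior equals the prior. A two-line check in Python (my illustration; the numbers are made up):

```python
# Bayes' rule: posterior for state A after observing action X.
# If X is equally likely under A and B, the belief does not move.
prior_A, prior_B = 0.5, 0.5
p_X_given_A, p_X_given_B = 1.0, 1.0  # detainees claim torture in either state
posterior_A = prior_A * p_X_given_A / (prior_A * p_X_given_A + prior_B * p_X_given_B)
print(posterior_A)  # 0.5 -- unchanged from the prior
```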

The Al Qaeda book looks like it might have other interesting cloak-and-dagger stuff too-- how to do surveillance, and so forth.

Permalink: 02:36 PM | Comments (0) | TrackBack

December 22, 2004

A Bayes Rule Classroom Game: Killers in the Bar

Your instructor has wandered into a dangerous bar in Jersey City. There are six people in there. Based on past experience, he estimates that three are cold-blooded killers and three are cowardly bullies. He also knows that 2/3 of killers are aggressive and 1/3 reasonable; but 1/3 of cowards are aggressive and 2/3 are reasonable. Unfortunately, your instructor then spills his drink on a mean-looking rascal, who responds with an aggressive remark.

In crafting his response in the two seconds he has to think, your instructor would like to know the probability he has offended a killer. Give him your estimate.

After the estimates are written and discussed, the story continues. A friend of the wet rascal comes in the door and discovers what has happened. He, too, turns aggressive. We know that the friend is just like the first rascal-- a killer if the first one was a killer, a coward otherwise. Does this extra trouble change your estimate that the two of them are killers?
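For readers who want to check their estimates against Bayes' rule, here is a quick computation in Python (my sketch; it assumes the two aggressive remarks are independent draws given the pair's common type):

```python
# Prior: 3 of the 6 patrons are killers.
prior_killer = 3 / 6
p_aggressive = {"killer": 2 / 3, "coward": 1 / 3}

# First update: one aggressive remark.
num = prior_killer * p_aggressive["killer"]
den = num + (1 - prior_killer) * p_aggressive["coward"]
print(num / den)  # 2/3

# Second update: the friend, who shares the rascal's type, is also aggressive.
num2 = prior_killer * p_aggressive["killer"] ** 2
den2 = num2 + (1 - prior_killer) * p_aggressive["coward"] ** 2
print(num2 / den2)  # 4/5
```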

This game is a descendant of the games in Holt, Charles A., & Lisa R. Anderson, "Classroom Games: Understanding Bayes' Rule," Journal of Economic Perspectives, 10: 179-187 (Spring 1996), but I use a different heuristic for the rule, and a barroom story instead of urns. Psychologists have found that people can solve logical puzzles better if the puzzles are associated with a story involving people's identities. (See Dawes, Machiavellian intelligence theory.)

I have the instructors' notes, which explain the answers in detail, at http://www.rasmusen.org/GI/probs/2bayesgame.pdf

Permalink: 03:30 PM | Comments (0) | TrackBack

November 18, 2004

A Lobbying Game-- All-Pay Auction with Free Riding

Yesterday I had my students play this lobbying game in class:
For this game, some of you will be Manufacturing firms and some Agricultural firms. The President is deciding between two policies. Free Trade will yield $10 million in benefit to each agricultural firm. Protectionism will yield $10 million in benefit to each manufacturing firm. In each year, each firm will write down its favored policy and its lobbying expenditure, amount X, on a notecard and hand it in. Whichever policy has the most lobbying wins. If your favored policy wins, your payoff is 10-X. If it loses, your payoff is -X.
In my class, which lasted 50 minutes, I had 11 Ag firms and 10 Manufacturing firms. I imposed a limit of X=10 for a firm's annual lobbying. The pattern of lobbying went like this:
Year  Protection  (Manufacturing)   Free Trade (Agriculture)

1                    49                       48
2                    48                       61
3                     0                       30
4                    52                       43
5                    21                       25
6                    56                       42
It's interesting that the total expenditure was often near the total value of the policy prize over all the firms--100 or 110. Note, too, that the lobbying swung from high to low a couple of times.

Within each industry, the amount of lobbying varied tremendously, with lots of zeroes. The student who had the highest payoff over all rounds had a payoff of 30, because his policy won three times and he never did any lobbying. We'd expect that-- the free rider always does best, even though if everyone free-rides, the industry does badly because it always loses the policy battle.

This game is a variant of the "all-pay auction", with the twist that the prize is a public good, going to every firm in the industry rather than just the one that bids highest. Thus, it adds the free-riding element. The theoretical equilibrium is in mixed strategies-- carefully chosen randomizations each year.
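The equilibrium randomizations are messy, but the mechanics of the game are easy to simulate. A minimal sketch in Python (lobbying drawn uniformly at random rather than from the true equilibrium mixture; class sizes as above):

```python
import random

def play_round(n_ag=11, n_mfg=10, cap=10):
    """One round: each firm draws a lobbying level X in [0, cap]; the side
    with the larger total wins. Payoff: 10 - X if your policy wins, -X if not."""
    ag = [random.uniform(0, cap) for _ in range(n_ag)]
    mfg = [random.uniform(0, cap) for _ in range(n_mfg)]
    ag_wins = sum(ag) > sum(mfg)
    spending_and_payoffs = [(x, (10 if ag_wins else 0) - x) for x in ag]
    return ag_wins, spending_and_payoffs

random.seed(1)
ag_wins, results = play_round()
print("Free Trade wins:", ag_wins)
for x, p in sorted(results):
    print(f"lobbied {x:5.2f}  payoff {p:6.2f}")
# Within a side, the lowest lobbyer always gets the highest payoff --
# the free-riding incentive the class kept running into.
```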

My lobbying game scoresheet and overheads of lessons and caveats are up on the G202 course website. This is a good game for classroom use, not just for teaching about free riding but also because it is administratively easier than a lot of classroom games. The payoff structure is very simple. After explaining the game, I made each student a separate firm, and gave them each a notecard on which to write their lobbying expenditure. Each student brought his notecard up to me, except in the last round, when I allowed an industry rep to collect them all (which allows for enforcement of deals they might make to all lobby high). For the first few rounds, I did not allow talking, and then I did allow it. The students were eager to talk with each other, and seemed to enjoy the game.

Permalink: 10:15 AM | Comments (0) | TrackBack

November 12, 2004

Voting Cycles: A Game Theory Problem

I've just been inspired, on reading a draft chapter of Burt Monroe's Electoral Systems in Theory and Practice, to write up a long game theory problem for the next edition of Games and Information. It gets very technical, but I'll post it in case anybody might be interested.

Uno, Duo, and Tres are three people voting on whether the budget devoted to a project should be Increased, kept the Same, or Reduced. Their payoffs from the different outcomes, given below, are not monotonic in budget size. Uno thinks the project could be very profitable if its budget were increased, but will fail otherwise. Duo mildly wants a smaller budget. Tres likes the budget as it is now.


            Uno   Duo   Tres

Increase    100     2      4
Same          3     6      9
Reduce        9     8      1

Each of the three voters writes down his first choice. If a policy gets a majority of the votes, it wins. Otherwise, Same is the chosen policy.
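Before working through the parts below, it may help to see the game in code. A brute-force sketch in Python that enumerates all 27 voting profiles and flags the pure-strategy Nash equilibria under the majority-else-Same rule (my own check, not part of the problem):

```python
from itertools import product

payoff = {  # policy -> (Uno, Duo, Tres)
    "Increase": (100, 2, 4),
    "Same": (3, 6, 9),
    "Reduce": (9, 8, 1),
}
policies = list(payoff)

def outcome(votes):
    """Majority wins; with no majority, Same is chosen."""
    for p in policies:
        if votes.count(p) >= 2:
            return p
    return "Same"

def is_nash(profile):
    """No player can strictly gain by changing his own vote."""
    for i in range(3):
        current = payoff[outcome(profile)][i]
        for deviation in policies:
            alt = list(profile)
            alt[i] = deviation
            if payoff[outcome(alt)][i] > current:
                return False
    return True

for profile in product(policies, repeat=3):
    if is_nash(profile):
        print(profile)
# (Same, Same, Same), (Increase, Same, Same), and (Reduce, Reduce, Same) all
# appear, along with other weakly-held equilibria such as
# (Increase, Increase, Increase).
```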


(a) Show that (Same, Same, Same) is a Nash equilibrium. Why does this equilibrium seem unreasonable to us?

... I continue to have severe Movable Type problems. I can't do Extended entries, so for more, go to Problem 4.7 on this page. When I've got time, I'll think about whether to switch weblog software.

November 14: Now maybe this will work:

ANSWER. The policy outcome is Same regardless of any one player's deviation. Thus, all three players are indifferent about their vote. This seems strange, though, because Uno is voting for his least-preferred alternative. Parts (c) and (d) formalize why this is implausible.

(b) Show that (Increase, Same, Same) is a Nash equilibrium.

ANSWER. The policy outcome is Same, but now only by a bare majority. If Uno deviates, his payoff remains 3, since he is not decisive. If Duo deviates to Increase, Increase wins and he reduces his payoff from 6 to 2; if Duo deviates to Reduce, each policy gets one vote and Same wins because of the tie, so his payoff remains 6. If Tres deviates to Increase, Increase wins and he reduces his payoff from 9 to 4; if Tres deviates to Reduce, each policy gets one vote and Same wins because of the tie, so his payoff remains 9.

(c) Show that if each player has an independent small probability epsilon of ``trembling'' and choosing each possible wrong action by mistake, (Same, Same, Same) and (Increase, Same, Same) are no longer equilibria.

ANSWER. Now there is positive probability that each player's vote is decisive. As a result, Uno deviates to Increase. Suppose Uno himself does not tremble. With probability epsilon (1-epsilon) Duo mistakenly chooses Increase while Tres chooses Same, in which case Uno's choice of Increase is decisive for Increase winning and will raise his payoff from 3 to 100. With the same probability, epsilon (1-epsilon), Tres mistakenly chooses Increase while Duo chooses Same. Again, Uno's choice of Increase is decisive for Increase winning. Thus, (Same, Same, Same) is no longer an equilibrium.

(With probability epsilon*epsilon, both Duo and Tres tremble and choose Increase by mistake. In that case, Uno's vote is not decisive; Increase wins even without his vote.)

How about (Increase, Same, Same)? First, note that a player cannot benefit by deviating to his least-preferred policy.

Could Uno benefit by deviating to Reduce, his second-preferred policy? No, because the probability of trembles that would make his vote for Reduce decisive is 2*epsilon (1-epsilon), as in the previous paragraph, and he would rather be decisive for Increase than for Reduce.

Could Duo benefit by deviating to Reduce, his most-preferred policy? If no other player trembles, that deviation would leave his payoff unchanged. If, however, one of the two other players trembles to Reduce and the other does not, which has probability 2*epsilon (1-epsilon), then Duo's voting for Reduce would be decisive and Reduce would win, raising Duo's payoff from 6 to 8. Thus, (Increase, Same, Same) is no longer an equilibrium.

Just for completeness, think about Tres's possible deviations. He has no reason to deviate from Same, since that is his most preferred policy. Reduce is his least-preferred policy, and if he deviates to Increase, Increase will win, in the absence of a tremble, and his payoff will fall from 9 to 4-- and since trembles have low probability, this reduction dominates any possibilities resulting from trembles.
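The tremble arithmetic can be checked numerically. A sketch in Python (my own verification, reusing the payoff table above; each player trembles to each of his two unintended actions with probability EPS):

```python
from itertools import product

EPS = 0.01
payoff = {"Increase": (100, 2, 4), "Same": (3, 6, 9), "Reduce": (9, 8, 1)}
policies = list(payoff)

def outcome(votes):
    """Majority wins; with no majority, Same is chosen."""
    for p in policies:
        if votes.count(p) >= 2:
            return p
    return "Same"

def tremble_probs(intended):
    """Intended action with prob 1-2*EPS; each wrong action with prob EPS."""
    return {a: (1 - 2 * EPS) if a == intended else EPS for a in policies}

def expected(i, my_vote, profile):
    """Player i's expected payoff from my_vote against the others' trembles."""
    others = [j for j in range(3) if j != i]
    total = 0.0
    for actual in product(policies, repeat=2):
        votes = list(profile)
        votes[i] = my_vote
        prob = 1.0
        for j, a in zip(others, actual):
            prob *= tremble_probs(profile[j])[a]
            votes[j] = a
        total += prob * payoff[outcome(votes)][i]
    return total

# At (Same, Same, Same), Uno strictly gains by voting Increase:
prof = ("Same", "Same", "Same")
print(expected(0, "Same", prof), expected(0, "Increase", prof))

# At (Increase, Same, Same), Duo does slightly better voting Reduce:
prof2 = ("Increase", "Same", "Same")
print(expected(1, "Same", prof2), expected(1, "Reduce", prof2))
```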

(d) Show that (Reduce, Reduce, Same) is a Nash equilibrium that survives when each player has an independent small probability epsilon of ``trembling'' and choosing each possible wrong action by mistake.


ANSWER. If Uno deviates to Increase or Same, the outcome will be Same and his payoff will fall from 9 to 3. If Duo deviates to Increase or Same, the outcome will be Same and his payoff will fall from 8 to 6. Tres's vote is not decisive, so his payoff will not change if he deviates. Thus, (Reduce, Reduce, Same) is a Nash equilibrium.

How about trembles? The votes of both Uno and Duo are decisive in equilibrium, so if there are no trembles, they lose by deviating, and the probability of trembles is too small to make up for that. Tres is not decisive unless there is a tremble. With probability 2*epsilon (1-epsilon) just one of the other players trembles and chooses Same, in which case Tres's vote for Same would be decisive; with probability 2*epsilon (1-epsilon) just one of the other players trembles and chooses Increase, in which case Tres's vote for Increase would be decisive. Since Tres's payoff from Same is bigger than his payoff from Increase, he will choose Same in the hopes of a tremble.

(e) Part (d) showed that if Uno and Duo are expected to choose Reduce, then Tres would choose Same if he could hope they might tremble-- not Increase. Suppose, instead, that Tres votes first, and publicly. Construct a subgame perfect equilibrium in which Tres chooses Increase. You need not worry about trembles now.

ANSWER. Tres's strategy is just an action, but Uno and Duo's strategies are actions conditional upon Tres's observed choice.

Tres: Increase
Uno: Increase|Increase; Reduce|Same; Reduce|Reduce
Duo: Reduce|Increase; Reduce|Same; Reduce|Reduce

Uno's equilibrium payoff is 100. If he deviated to Same|Increase and Tres chose Increase, his payoff would fall to 3; if he deviated to Reduce|Increase and Tres chose Increase, his payoff would fall to 9. Out of equilibrium, if Tres chose Same, Uno's payoff if he responds with Reduce is 9, but if he responds with Same it is 3. Out of equilibrium, if Tres chose Reduce, Uno's payoff is 9 regardless of his vote.

Duo's equilibrium payoff is 2. If Tres chooses Increase, Uno will choose Increase too and Duo's vote does not affect the outcome. If Tres chooses anything else, Uno will choose Reduce and Duo can achieve his most preferred outcome by choosing Reduce.

(f) Consider the following voting procedure. First, the three voters vote between Increase and Same. In the second round, they vote between the winning policy and Reduce. If, at that point, Increase is not the winning policy, the third vote is between Increase and whatever policy won in the second round.

What will happen? (watch out for the trick in this question!)

ANSWER. If the players are myopic, not looking ahead to future rounds, this is an illustration of the Condorcet paradox. In the first round, Same will beat Increase. In the second round, Reduce will beat Same. In the third round, Increase will beat Reduce. The paradox is that the votes have cycled, and if we kept on holding votes, the process would never end.
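The cycle is easy to verify from the payoff table with a quick pairwise-majority check (my snippet):

```python
payoff = {"Increase": (100, 2, 4), "Same": (3, 6, 9), "Reduce": (9, 8, 1)}

def majority_winner(a, b):
    """Pairwise vote between policies a and b under sincere voting."""
    votes_for_a = sum(payoff[a][i] > payoff[b][i] for i in range(3))
    return a if votes_for_a >= 2 else b

print(majority_winner("Increase", "Same"))    # Same
print(majority_winner("Same", "Reduce"))      # Reduce
print(majority_winner("Reduce", "Increase"))  # Increase -- the cycle closes
```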

The trick is that this procedure does not keep on going-- it only lasts three rounds. If the players look ahead, they will see that Increase will win if they behave myopically. That is fine with Uno, but Duo and Tres will look for a way out. They would both prefer Same to win. If the last round puts Same to a vote against Increase, Same will win. Thus, both Duo and Tres want Same to win the second round. In particular, Duo will *not* vote for Reduce in the second round, because he knows it would lose in the third round.

Rather, in the first round Duo and Tres will vote for Same against Increase; in the second round they will vote for Same against Reduce; and in the third round they will vote for Same against Increase again.

This is an example of how particular procedures make voting deterministic even if voting would cycle endlessly otherwise. It is a little bit like the T-period repeated game versus the infinitely repeated one; having a last round pins things down and lets the players find their optimal strategies by backwards induction.

Arrow's Impossibility Theorem says that social choice functions cannot be found that always reflect individual preferences and satisfy various other axioms. The axiom that fails in this example is that the procedure treat all policies symmetrically-- our voting procedure here prescribes a particular order for voting, and the outcome would be different under other orderings.


(g) Speculate about what would happen if the payoffs are in terms of dollar willingness to pay by each player and the players could make binding agreements to buy and sell votes. What, if anything, can you say about which policy would win, and what votes would be bought at what price?

ANSWER. Uno is willing to pay a lot more than the other two players to achieve his preferred outcome. He would be willing to deviate from any equilibrium in which Increase would lose by offering to pay 20 for Duo's vote. Thus, we know Increase will win.

But Uno will not have to pay that much to get the vote. We have just shown that Increase will win. The only question is whether it is Duo or Tres that has his payoff increased by a vote payment from Uno. Duo and Tres are thus in a bidding war to sell their vote. Competition will drive the price down to zero! See Ramseyer & Rasmusen (1994).

This voting procedure, with vote purchases, also violates one of Arrow's Impossibility axioms-- his ``Independence of Irrelevant Alternatives'' rules out procedures that, like this one, rely on intensity of preferences.

Permalink: 03:06 AM | Comments (0) | TrackBack

November 07, 2004

Unifying Ideas in Game Theory: Symmetric-Player Games vs. Principal-Agent Games

I'm trying to work on a 4th edition of http://www.rasmusen.org/GI/index.html, and thinking about big ideas.

There is one large class of games in which one player moves first to try to get another to do something-- the principal-agent games, broadly construed. These include games of boss and worker, voter and politician, customer and seller. The players in this first class of games are in asymmetric positions-- they choose different sorts of actions. In some of these games-- the "moral hazard" ones-- the problem is that the agent's action is unobserved. In others-- the "adverse selection" games-- the problem is that the agent has some information that the principal lacks.

In a second large class of games-- shall I call them "symmetric player games"?-- the players are all in the same sort of position-- two countries at war, or five firms setting prices, or two politicians choosing campaign spending. The idea of strategic substitutes and complements applies to these games, and is a unifying idea I'd like to use more. The idea is that in some games, when the other player does more of his strategy, I want to do more of mine. If my competitor raises his price, I want to raise mine. If my rival for elected office spends more on advertising in Wisconsin, I want to spend more too. We call this a situation of "strategic complements". In other situations, when my rival does more of his strategy, I do *less* of mine. If the rival firm increases capacity, I reduce my capacity. If the other firm spends more on research, I give up on research altogether. This is a situation of "strategic substitutes"....
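The distinction shows up in the standard textbook best-response functions-- linear Cournot for substitutes, differentiated-goods Bertrand for complements. A sketch (the functional forms are standard; the parameter values are mine, for illustration only):

```python
# Strategic substitutes: linear Cournot duopoly with demand P = a - Q and
# marginal cost c. The best response slopes downward: more rival output,
# less of mine.
def cournot_br(q_rival, a=100.0, c=10.0):
    return max(0.0, (a - c - q_rival) / 2)

# Strategic complements: differentiated-goods Bertrand with demand
# q_i = a - p_i + b*p_j. The best response slopes upward: a higher rival
# price, a higher price of mine.
def bertrand_br(p_rival, a=100.0, b=0.5, c=10.0):
    return (a + b * p_rival + c) / 2

for rival in (10, 20, 30):
    print(rival, cournot_br(rival), bertrand_br(rival))
# cournot_br falls as the rival's quantity rises; bertrand_br rises with
# the rival's price.
```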

... I realize that my chapters on Bargaining and Auctions can be roughly differentiated in this way. The usual bargaining game is one of strategic substitutes. If my rival is tougher, I will be softer, lest the bargain fall through. The usual auction game is one of strategic complements. If my rival bids higher, I will bid higher too. This is true even though the auction game is a mixed principal-agent/symmetric-player game, the principal being the seller and the agents being the bidders.

This makes me wonder where I should put my Pricing chapter. Already, I've decided to carve up the Entry chapter and move its pieces to other chapters or delete them. Pricing is the lone remaining application-centered chapter. Maybe it should be carved up too.

Another dichotomy is between games in which players take the rules as given, and "mechanism design" games, or "contracting games", in which they start by trying to bind themselves to the rules that will incentivize their behavior later in the game. I am not sure how to incorporate that dichotomy. Contracting games obviously arise most in principal-agent games, with the boss designing a contract for a worker (which, usually but not always, must satisfy a "participation constraint" that the worker be willing to accept it instead of quitting the job), or voters designing a constitution for politicians. In other principal-agent games, however, there is no contracting. Signalling games are the most prominent of these: workers choose credentials to signal their ability, without any formal contract offer beforehand by firms.

But contracting arises in symmetric player games too. Classic mechanism design problems include a seller setting the rules for an auction for lots of symmetric bidders, or a boss setting the rules for promotion for workers in a tournament with each other. Those two examples are mixed principal-agent/symmetric-player games, but mechanism design can even arise in pure symmetric-player games: a cartel chooses rules for punishing members who cut prices, a team of workers agrees to a sharing rule for output, or a group of citizens agrees to a rule for choosing how much each person pays for a new streetlight and whether it is built, based on announced preferences.

Permalink: 03:11 AM | Comments (0) | TrackBack

October 29, 2004

Sender-Receiver Games: Truthful Announcement, Cheap Talk, and Signalling

After a chat with Professor Harbaugh, I thought I'd collect my thoughts on communication games, thinking about revisions to my Games and Information. These notes won't mean much to non-economists, I'm afraid.

There are a variety of games in which one player, the Sender, tries to communicate something-- which we can call "his type"-- to another, the Receiver. The Sender is the informed player, so he is often an Agent; the Receiver is uninformed, and so is often a Principal.

I wonder if the games can usefully be divided into Truthful Announcement, Cheap Talk, and Signalling....

...In Truthful Announcement games, the Sender may be silent or send a message, but the message must be truthful if it is sent. There is no cost to sending the message, but it may induce the Receiver to take actions that affect the Sender. If the Receiver ignores the message, the Sender's payoff is unaffected by the message. The Sender's type usually varies from bad to good in these models.

An example of a Truthful Announcement game is when the Sender's ability A is uniformly distributed on [0,1], and the Sender can send a message Y such as "A>.5" or "A=.2".

In Cheap Talk games, the Sender's message is costless, but need not be truthful. If the Receiver ignores the message, the Sender's payoff is unaffected by the message. If the Receiver acts, though, that might affect the Sender. Usually, these are coordination games, where the Sender's preferred Receiver-action, given the true state of the world that he knows, is positively correlated with the Receiver's preferred Receiver-action.

An example of a Cheap Talk game is when the Sender and Receiver want to go to the same restaurant, either A or B, but only the Sender knows which restaurant is better. The Sender sends a message-- "A" or "B"-- and if the Receiver ignores it, there is no cost to the Sender.

In Signalling games, the Sender's message is costly-- or at least a false message is-- but need not be truthful. The Sender's payoff is affected even if the Receiver ignores his message. The Sender's type usually varies from bad to good in these models. The "single-crossing property" is crucial-- that if the Sender's type is better, it is cheaper for him to send a message that his type is good.

An example is credentials. The Sender is dull or bright. If he is bright, it is easier for him to acquire credentials, which is his message to a Receiver employer.

In writing this up, some awkwardnesses strike me.

1. Zero-Cost Signals. A signalling game doesn't change its essential properties if sending the message of high quality is costless for the truly high quality type. It could even have negative cost for him-- that he gets a reward for truthfully declaring his type. What matters is that the same signal be too costly for a low quality type to think worth sending.

2. Lying Being What Is Costly. In the usual models, if a high signal is sent, that is more expensive than a low signal, especially for the low type of Sender. But I think the model would work out very much the same if what is expensive is not a high signal, but a false signal. The difference is that in the usual models, it is cheap for the High type to falsely signal that he is low, but in a truth-based model, it would be expensive for him to be modest.

3. Expensive-Talk Games. Imagine a cheap-talk game in which the signal is costly-- but the cost is the same for everyone, regardless of type. The usual sort of signalling won't work, because signalling high quality is no more expensive for the Low type than for the High type. But truthful communication might still work, for reasons more akin to those of the Cheap-Talk Game, if the High type Sender has a greater desire than the Low type for the Receiver to adopt a High response.

Thus, imagine that the Low Sender could make $100 as a salesman for himself and $100 for the Receiver if the Receiver hires him, and the High Sender could make $900 for himself and $900 for the Receiver. If messages are costless, both Senders would send the message "I am a High type" (not, I guess, "Hire me--I'm high"), and the message would be uninformative. If the message costs $200, only the truly High Sender would send the message. There is now an equilibrium in which the message is informative (there is also a pooling equilibrium, perhaps implausible, in which messages are still ignored).
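The arithmetic in a few lines (assuming, as in my example, that a Sender who is not hired gets zero):

```python
COST = 200  # cost of sending the message, identical for both types
value = {"High": 900, "Low": 100}  # a Sender's payoff if the Receiver hires him

for t in ("High", "Low"):
    send = value[t] - COST  # send "I am a High type" and get hired
    silent = 0              # stay silent and go unhired
    print(t, "sends" if send > silent else "stays silent")
# High sends (900 - 200 = 700 > 0); Low stays silent (100 - 200 < 0). The
# message separates the types even though its cost is type-independent.
```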

People might think of this as a signalling game, applying the single-crossing property to the ultimate payoffs, but it is really more akin to the cheap-talk game, I think. It is like the PhD Admissions Game in Chapter 6 of my book. Perhaps it is like Mechanism Design games too, which might be thought of as a form of cheap-talk games, since they have Senders and Receivers and costless messages, though in Mechanism Design games there is commitment to the mechanism.

Permalink: 09:10 PM | Comments (0) | TrackBack

October 09, 2004

A Mechanism for Eliciting One Buyer's Reserve Price

How do you figure out how much consumers might pay for a new product? I came across a good idea yesterday in a paper by George Geis, though it is not new with him. The problem is that if you simply ask people for the greatest price, P, they will pay, they will not think hard enough, and you will get an inaccurate estimate of their maximum value, V. Or, if you offer to sell it to them for some price P and they accept, all you know is that V>P, but not V exactly.

So here is another idea. Tell the person to give you a price P that equals their value, V, and tell them what will happen next. What will happen next is that you randomly pick a price, R, for the product. If P>R, they may buy the product at price R. If P<R, they may not buy it at all.

This mechanism is truthtelling-- the person's best strategy is to choose P=V. If they choose lower, they might miss their chance to buy the product at a price they'd like-- maybe R>P but R<V.

I think you could also run this with slightly different rules, saying that they MUST buy at R if R<P. That might be a better idea, since my original rules (which might be different from what Geis had--I forget) would make a very high P an easy strategy that would keep all the consumer's options open.
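The random-price scheme is essentially the Becker-DeGroot-Marschak (BDM) mechanism. A quick Monte Carlo check of truthtelling under the must-buy variant (my numbers: V = 50, R drawn uniformly from [0, 100]):

```python
import random

V = 50.0  # the consumer's true value (my choice for illustration)

def expected_payoff(P, trials=200_000, seed=0):
    """Average consumer surplus when reporting P under the must-buy rule:
    a random price R ~ U[0, 100] is drawn, and the consumer buys iff R < P."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        R = rng.uniform(0, 100)
        if R < P:
            total += V - R
    return total / trials

for P in (30, 40, 50, 60, 70):
    print(P, round(expected_payoff(P), 2))
# The average payoff peaks at P = V = 50: reporting low forgoes profitable
# purchases (R between P and V), and reporting high forces unprofitable
# ones (R between V and P).
```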

Permalink: 08:55 AM | Comments (0) | TrackBack

September 21, 2004

Does not Calling Indicate not Liking? Ted and Sheila

Via Alex Tabarrok at MR, I discover Glen Whitman at Agoraphilia

Say Ted would like to talk on the phone every two days, whereas Sheila would like to talk every day. You might think Sheila would call Ted about two-thirds of the time-- but in fact, she will call him every time. If they talk on Monday, Ted plans to call on Wednesday; but then Sheila calls him Tuesday. His clock reset, Ted plans to call on Thursday. And then Sheila calls on Wednesday. Eventually, Sheila decides Ted doesn't care about her, because he never calls....

Sheila's conclusion is not "rational" in the economic sense, because she ought to have figured this out. If her prior belief is that there are equal probabilities that Ted would want to call her every half-day, every two days (which is in fact the truth), and never, then after the experience described above, she should revise it to put 50-50 probability on Two Days and Never.
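The update in code (a sketch; the key step is that "Ted never calls first" has probability one under both Two Days and Never, once Sheila calls every day):

```python
# Hypotheses about how often Ted wants to call, with Sheila's equal priors.
priors = {"HalfDay": 1 / 3, "TwoDays": 1 / 3, "Never": 1 / 3}

# Likelihood of "Ted never calls first, given that Sheila calls every day":
# a half-day Ted would have called in the gaps, but a two-day Ted's clock
# is always reset before it runs out.
likelihood = {"HalfDay": 0.0, "TwoDays": 1.0, "Never": 1.0}

norm = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / norm for h in priors}
print(posterior)  # HalfDay: 0, TwoDays: 0.5, Never: 0.5
```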

Furthermore, if she cares enough, she can experiment and learn. She can purposely refrain from calling. If Ted does not call after Two Days, she can conclude that the truth is he Never wants to call.

...This is a truly useful idea.

I can carry it a little further than Whitman did. Suppose Sheila is rational. She therefore continues to call every day, knowing that there is a 50% probability Ted does like talking, if not as much as she does. In fact, though, let us change the story so Ted's true preference is Never. If Ted thinks that never calling Sheila is going to give her the message that he doesn't like her, he is mistaken. She will continue to put the probability that he doesn't like her at just 50%.

This last story sounds more realistic if we make Sheila's priors on Two Days and Never at 95-5. Remember, these are subjective beliefs of Sheila, so it would not be surprising if she put a high probability on Ted liking her. When Ted never calls, she will continue to hold the 95-5 beliefs. With beliefs that skewed, she will also find no point in experimenting by incurring the cost of not calling and letting two days go by. Ted therefore must bite the bullet and tell her he doesn't like her, or else endure those phone calls indefinitely.

Permalink: 10:14 AM | Comments (0) | TrackBack

September 20, 2004

Bribes, Airport Security, and Helpful Entrapment

One reason our precautions against hijackers are silly is that a simple bribe can get around any of them. Via Tyler Cowen, the September 17 Washington Post tells us

A thousand rubles, or about $34, was enough to bribe an airline agent to put a Chechen woman on board a flight just before takeoff, according to Russian investigators. The agent took the cash, and on a ticket the Chechen held for another flight simply scrawled, "Admit on board Flight 1047."

The woman was admitted onto the flight, while a companion boarded another plane leaving Moscow's Domodedovo Airport the same evening. Hours later, both planes exploded in midair almost simultaneously, killing all 90 people aboard.

It would take more than $34 in America, but I think $2,000,000 would do it, not a large sum for a group that is willing to use up its own members' lives.

I do have a solution, though I don't think we're using it: entrapment. We need to immediately send out FBI agents to offer numerous two million dollar bribes to airport personnel, and we must publicize the firing (and perhaps the criminal prosecution, even if conviction fails) of those who succumb to temptation. Lots of people would give up their honor for two million dollars, but if there is only a 1 in 100 probability that the briber will pay rather than turn you in, the expected payment falls to $20,000, with a 99% chance of losing your job.

Permalink: 02:28 PM | Comments (0) | TrackBack

August 28, 2004

The Cry Bar in Nanking; Norm Entrepreneurs

From the July 31 WORLD magazine:

One of the hottest bars in the Chinese city of Nanjing sports only a sofa, a few tables, and tissue paper - a lot of tissue paper. The AFP news service reports that the city's first "cry bar," where customers can sit and cry for $6 per hour, is growing in popularity. Owner Luo Jun says he opened the bar when clients of his last business said they often wanted to cry but didn't know when or where it would be appropriate to do so.

I was just talking with my grad students yesterday about social norms and multiple equilibria. This is a great example of Norm Entrepreneurship. Mr. Luo saw an opportunity, and took it.

Permalink: 04:25 PM | Comments (0) | TrackBack

August 21, 2004

No-Trade Theorems; L. Samuelson (2004)

I was reading Larry Samuelson's survey, "Modeling Knowledge in Economic Analysis," in the June 2004 Journal of Economic Literature. Much of it is about No-Trade Theorems. I'll modify one of his first examples to illustrate.

Basic Model. Alice owns 1 share of a company. That share is worth $300 if the company's new product will be a success, and $200 if it is a failure. Each of these has equal probability, so the market price is $250.

Alice can make one take-it-or-leave-it offer to sell the stock to Bob. Clearly, so far Alice would offer P=250 and Bob would accept, but both players would be indifferent. We don't really have a model of trade yet, since a small transaction cost would block all trade.

We will think about adding two additional assumptions.

Assumption 1: Alice is better informed. Alice finds out whether the product will be a success, Bob knows she has found out, she knows Bob knows, and so forth (her finding out is "common knowledge", though whether she has found success or found failure is unknown to Bob).

Under Assumption 1, if Alice finds out FAILURE, what happens? She will believe (correctly) that the value is $200, and Bob will believe it is $250. But now if she offers to sell to him at P=250, he will change his belief to value V=200 and refuse to buy. Indeed, the only equilibrium with trade is if Alice offers P=200, and, again, both players are then indifferent about trade.

This is a No-Trade result. Our intuition that difference of opinion will result in the low-valuer Alice selling to the high-valuer Bob fails, because the very act of Alice trying to sell converts Bob to being a low-valuer.

Most of Samuelson's survey looks at papers that generalize the result to more complicated situations than this little example and to fancy ways to try to generate trade, many of them fiddling with the standard Bayesian assumption of common priors (that both players know the probability of failure is .5, that both of them know Alice has found the information, etc.). But I wonder whether the paradox can be resolved even within this little example.

Suppose that instead of Assumption 1, we use Assumption 2.

Assumption 2. With probability .1, Alice gets into a fight with the president of the company, and holds a grudge which means her share of stock is worth $95 less to her than to anyone else in the world. Bob does not know whether she really had the fight, but he knows the probability is .1 and the consequence is a $95 difference.

Under Assumption 2, if Alice has the fight, then she will offer P=250 to Bob and Bob will accept. Unlike in the basic game, Alice now has a strong incentive to sell-- she is not indifferent. Bob is still indifferent, but that is an example of the purely technical "open-set" problem-- Alice would be willing to offer P=249 if she had to, and Bob would then be strongly desirous of accepting.

Assumption 2 is an example of a non-informational reason for trade, a reason that requires trade to attain efficient allocation of resources. This is, of course, the second reason we intuit for why trade occurs. It is by far the main reason for trade in goods, and it is also important for trade in securities, though Samuelson and others argue that efficiency reasons can't explain the volume of trade in securities.

Now let's use both Assumption 1 and Assumption 2. Note that this means that with 100% probability Alice has an informational reason for trade, but with 10% probability she also has an efficiency reason. What will happen?

First, suppose Alice hears that the product will be a failure. She will offer to sell to Bob at some price P*. She will tell Bob that she is selling because she had a fight with the president, but Bob won't believe that. He knows that with high probability she is selling because the company's value is only 200. What is the highest value of P* that Bob will accept?

If Alice had no fight and heard that V=300, she would offer P=300 (or make no offer) and Bob would deduce what happened. This has probability .9(.5) = .45.

With probability .9(.5) = .45, Alice had no fight but heard that V=200 and is selling for that reason.

With probability .1(.5) = .05, Alice had a fight and heard V=200, and so has two reasons to sell.

With probability .1(.5) = .05, Alice had a fight and heard V=300, and so will sell if P*>205.

That means that if P*>205, a sale will occur with probability .45 + .05 + .05 = .55.

Bob's expected payoff from accepting P* is zero if

[(.1)/(.55)][.5(300) + .5(200) - P*] + [(.45)/(.55)][200 - P*] = 0.

This reduces to

(2/11)(250 - P*) + (9/11)(200 - P*) = 0
500 - 2P* + 1800 - 9P* = 0
2300 = 11P*
P* = 2300/11 = 209 (approximately).

If P* =209, then Alice is willing to sell even if she heard good news, if she really had a fight with the president, and Bob is willing to accept her offer, because he can at least break even (and if Alice offered 208, Bob would be strongly willing to accept).

Thus, a small probability of an efficiency reason for trade has generated a high probability of trade. Trade will occur 55% of the time, but 45/55 of the time that trade occurs, its direct motivation will be Alice's superior information, not her possible efficiency motivation. So if you think that most securities trading is not motivated by efficiency, this model explains what is going on. The market (Bob) knows that most people are selling because they have private information, but the market is only willing to trade with them because it knows that some people are selling for efficiency reasons.
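A quick check of the numbers (my transcription of the example into code):

```python
from fractions import Fraction

half = Fraction(1, 2)
p_fight = Fraction(1, 10)

# Scenarios in which Alice offers to sell (for P* a bit above 205):
#   no fight, heard failure (V=200): prob .45
#   fight,    heard failure (V=200): prob .05
#   fight,    heard success (V=300): prob .05
p_nofight_fail = (1 - p_fight) * half
p_fight_fail = p_fight * half
p_fight_success = p_fight * half
p_trade = p_nofight_fail + p_fight_fail + p_fight_success
print(p_trade)  # 11/20 = .55

# Bob's break-even price: the share's expected value conditional on an offer.
p_star = (p_nofight_fail * 200 + p_fight_fail * 200 + p_fight_success * 300) / p_trade
print(p_star, float(p_star))  # 2300/11, about 209

# Share of trades whose direct motivation is purely informational:
print(p_nofight_fail / p_trade)  # 9/11, i.e. 45/55
```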

And I've shown what is going on with a much simpler and more conventional model than what's in the literature. To be sure, it's just a numerical example, but it's got 90% of what we need for an explanation of the real-world factoid.

Still, it might be worth expanding a bit, if this is not already in the literature. It would be interesting to see what would happen if Alice's probability of being informed is not 100%, but X%, and compare the effect of X with the effect of the probability of having an efficiency reason (here, 10%).

Permalink: 11:49 PM | Comments (0) | TrackBack

August 20, 2004

Kakutani's Death; Fixed Point Theorems

Shizuo Kakutani, author of the Kakutani Fixed Point Theorem, has died at age 92. I started auditing his Real Analysis class one fall while I was an undergrad. I didn't realize that he was the author of a theorem important for economics, or that real analysis was one of the most useful math courses I could take. Rather, I knew the course was a base course for math majors, and very hard, and I was feeling very self-confident. I didn't last too long. Staying up till the wee hours doing problem sets for a course I was just auditing was too much for me. Still, I remember vividly how Professor Kakutani would clearly exposit series and sums, filling up blackboard after blackboard in neat handwriting. And I remember his joke about the lady who was surprised that after so many years in America he still spelled "if" as "iff" (for nonmathematical readers: "iff" means "if and only if" in math).

Alex Tabarrok has a good discussion of fixed point theorems at Marginal Revolution.

One morning, exactly at sunrise, a Buddhist monk began to climb a tall mountain. The narrow path, no more than a foot or two wide, spiraled around the mountain to a glittering temple at the summit. The monk ascended the path at varying rates of speed, stopping many times along the way to rest and to eat the dried fruit he carried with him. He reached the temple shortly before sunset. After several days of fasting and meditation he began his journey back along the same path, starting at sunrise and again walking at variable speeds with many pauses along the way. His average speed descending was, of course, greater than his average climbing speed.

Prove that there is a spot along the path that the monk will occupy on both trips at precisely the same time of day.

...

Take two pieces of 8*11 paper and lay them on top of one another so that every point on the top paper corresponds with a point on the bottom paper. Now crumple the top piece of paper in any way that you wish and place it back on top. Brouwer's theorem tells us that there must be a point which has not moved, i.e. which lies exactly above the same point that it did initially.

...

Consider a cupful of coffee. Each point is somewhere in 3-dimensional space. Stir. At least one point ends up in the same place as it began.

Permalink: 12:12 AM | Comments (0) | TrackBack

August 13, 2004

Bait Cars in Vancouver; Auditing Games

A mall in Vancouver has many signs like the one I show here. Isn't it a good idea? Best of all would be to actually plant some bait cars too, but that isn't even necessary, if budgets are tight. Criminals will rightly be skeptical that the bait cars exist, but there is not much to be done about that unless some kind of certification or reputation becomes possible. Newspaper reports of successful baiting *might* work.
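The underlying strategic structure is the standard inspection game, which has only a mixed-strategy equilibrium. A stylized sketch (the payoff numbers are made up, not from the post or the articles below):

```python
# Standard inspection game: a guard decides whether to watch (cost c); a
# thief decides whether to steal (gain g if unwatched, fine f if caught;
# an unwatched theft costs the lot owner L). Stylized numbers.
g, f, c, L = 10.0, 50.0, 2.0, 20.0

# Mixed equilibrium: each side randomizes to make the other indifferent.
p_watch = g / (g + f)   # thief indifferent: (1-p)*g - p*f = 0
q_steal = c / L         # guard indifferent: -c = -q*L
print(p_watch, q_steal) # watch 1/6 of the time; theft tried 1/10 of the time
```

The bait-car sign tries to raise thieves' perceived probability of being watched above its equilibrium level without paying the watching cost-- which is exactly why credibility is the binding problem.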

This is the same situation as in the following two of my articles:

``Lobbying When the Decisionmaker Can Acquire Independent Information,'' Public Choice (1993) 77: 899-913. Politicians trade off the cost of acquiring and processing information against the benefit of being re-elected. Lobbyists may possess private information upon which politicians would like to rely without the effort of verification. If the politician does not try to verify, however, the lobbyist has no incentive to be truthful. This is modelled as a game in which the lobbyist lobbies to show his conviction that the electorate is on his side. In equilibrium, sometimes the politician investigates, and sometimes the information is false. The lobbyists and the electorate benefit from the possibility of lobbying when the politician would otherwise vote in ignorance, but not when he would otherwise acquire his own information. The politician benefits in either case. Lobbying is most socially useful when the politician's investigation costs are high, when he is more certain of the electorate's views, and when the issue is less important. In Ascii-Latex (43K) or pdf (204K, http://Pacioli.bus.indiana.edu/erasmuse/published/Rasmusen_93PUBCHO.lobbying.pdf).

"Explaining Incomplete Contracts as the Result of Contract- Reading Costs," in the BE Press journal, Advances in Economic Analysis and Policy. Vol. 1: No. 1, Article 2 (2001). http://www.bepress.com/bejeap/advances/vol1/iss1/art2. Much real-world contracting involves adding finding new clauses to add to a basic agreement, clauses which may or may not increase the welfare of both parties. The parties must decide which complications to propose, how closely to examine the other side's proposals, and whether to accept them. This suggests a reason why contracts are incomplete in the sense of lacking Pareto-improving clauses: contract-reading costs matter as much as contract- writing costs. Fine print that is cheap to write can be expensive to read carefully enough to understand the value to the reader, and especially to verify the absence of clauses artfully written to benefit the writer at the reader's expense. As a result, complicated clauses may be rejected outright even if they really do benefit both parties, and this will deter proposing such clauses in the first place. In ascii-latex and pdf (http: //Pacioli.bus.indiana.edu/erasmuse/published/Rasmusen_01.negot.pdf).

It reminds me of the old joke about the farmer who, having noticed that watermelons were disappearing from his garden, posted a sign saying,

"One of the watermelons in this garden is poisoned."

The next day at dawn he looked out and saw that no more watermelons had been taken, but the "One" on the sign had been crossed out. Now the sign said,

"TWO of the watermelons in this garden is poisoned."

Note, however, that the last part of the joke does not carry over to parking lots in Vancouver.

Permalink: 02:02 PM | Comments (0) | TrackBack

July 25, 2004

James Miller's Game Theory at Work (McGraw Hill 2003)

This economist at Smith College was in tenure trouble because of his conservatism. His game theory book, one of the many competitors of my own book, looks pretty good, though so close in style to Dixit and Nalebuff, Dixit and Skeath, and McMillan, good books all, that I wonder about the need for it-- especially when Dixit, Nalebuff, and McMillan are such big names. I was only bold enough to write the 1st edition of Games and Information as an assistant professor because in 1989 nobody else had written a book on game theory in the post-1975 style and everybody wanted to read such a book. I knew I'd have the best book simply because it would be the only book-- though it wasn't for long, it turned out.

Permalink: 11:13 PM | Comments (0) | TrackBack