November 12, 2004

Voting Cycles: A Game Theory Problem

I've just been inspired, on reading a draft chapter of Burt Monroe's Electoral Systems in Theory and Practice, to write up a long game theory problem for the next edition of Games and Information. It gets very technical, but I'll post it in case anybody might be interested.

Uno, Duo, and Tres are three people voting on whether the budget devoted to a project should be Increased, kept the Same, or Reduced. Their payoffs from the different outcomes, given below, are not monotonic in budget size. Uno thinks the project could be very profitable if its budget were increased, but will fail otherwise. Duo mildly wants a smaller budget. Tres likes the budget as it is now.

           Uno   Duo   Tres
Increase   100     2      4
Same         3     6      9
Reduce       9     8      1

Each of the three voters writes down his first choice. If a policy gets a majority of the votes, it wins. Otherwise, Same is the chosen policy.

(a) Show that (Same, Same, Same) is a Nash equilibrium. Why does this equilibrium seem unreasonable to us?
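A quick brute-force check of part (a), using the payoff table and voting rule above (the policy labels and numbers are straight from the problem):

```python
# Verify that (Same, Same, Same) is a Nash equilibrium of the voting game.
payoffs = {  # outcome -> (Uno, Duo, Tres)
    "Increase": (100, 2, 4),
    "Same":     (3, 6, 9),
    "Reduce":   (9, 8, 1),
}
POLICIES = list(payoffs)

def outcome(votes):
    """A policy with a majority of the three votes wins; otherwise Same."""
    for p in POLICIES:
        if votes.count(p) >= 2:
            return p
    return "Same"

def is_nash(profile):
    """No single voter can gain by unilaterally changing his vote."""
    for i in range(3):
        current = payoffs[outcome(profile)][i]
        for deviation in POLICIES:
            alt = list(profile)
            alt[i] = deviation
            if payoffs[outcome(alt)][i] > current:
                return False
    return True

print(is_nash(["Same", "Same", "Same"]))  # True
```

The check also shows why the equilibrium feels unreasonable: when the other two vote Same, no single deviation changes the outcome, so each voter is only weakly content with his vote.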

... I continue to have severe Movable Type problems. I can't do Extended entries, so for more, go to Problem 4.7 on this page. When I've got time, I'll think about whether to switch weblog software.

Posted by erasmuse at 03:06 AM | Comments (0) | TrackBack

November 07, 2004

Unifying Ideas in Game Theory: Symmetric-Player Games vs. Principal-Agent Games

I'm trying to work on a 4th edition of Games and Information, and thinking about big ideas.

There is one large class of games in which one player moves first to try to get another to do something-- the principal-agent games, broadly construed. These include games of boss and worker, voter and politician, customer and seller. The players in this first class of games are in asymmetric positions-- they choose different sorts of actions. In some of these games-- the "moral hazard" ones-- the problem is that the agent's action is unobserved. In others-- the "adverse selection" games-- the problem is that the agent has some information the principal does not.

In a second large class of games-- shall I call them "symmetric player games"?-- the players are all in the same sort of position-- two countries at war, or five firms setting prices, or two politicians choosing campaign spending. The idea of strategic substitutes and complements applies to these games, and is a unifying idea I'd like to use more. The idea is that in some games, when the other player does more of his strategy, I want to do more of mine. If my competitor raises his price, I want to raise mine. If my rival for elected office spends more on advertising in Wisconsin, I want to spend more too. We call this a situation of "strategic complements". In other situations, when my rival does more of his strategy, I do *less* of mine. If the rival firm increases capacity, I reduce my capacity. If the other firm spends more on research, I give up on research altogether. This is a situation of "strategic substitutes"....
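A minimal sketch of the two cases, with made-up linear demand numbers of my own (not from any particular model in the book): best responses slope up under strategic complements and down under strategic substitutes.

```python
# Strategic complements: differentiated-goods price competition.
# Profit (p - c)(a - p + b * rival_price) gives best reply rising in rival_price.
def price_best_reply(rival_price, a=10.0, b=0.5, c=2.0):
    return (a + c + b * rival_price) / 2

# Strategic substitutes: Cournot quantity competition.
# Profit (a - q - rival_qty - c) * q gives best reply falling in rival_qty.
def quantity_best_reply(rival_qty, a=10.0, c=2.0):
    return max(0.0, (a - c - rival_qty) / 2)

print(price_best_reply(4) > price_best_reply(2))     # True: complements
print(quantity_best_reply(4) < quantity_best_reply(2))  # True: substitutes
```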

... I realize that my chapters on Bargaining and Auctions can be roughly differentiated in this way. The usual bargaining game is one of strategic substitutes. If my rival is tougher, I will be softer, lest the bargain fall through. The usual auction game is one of strategic complements. If my rival bids higher, I will bid higher too. This is true even though the auction game is a mixed principal-agent/symmetric-player game, the principal being the seller and the agents being the bidders.

This makes me wonder where I should put my Pricing chapter. Already, I've decided to carve up the Entry chapter and move its pieces to other chapters or delete them. Pricing is the lone remaining application-centered chapter. Maybe it should be carved up too.

Another dichotomy is between games in which players take the rules as given, and "mechanism design" games, or "contracting games", in which they start by trying to bind themselves to the rules that will incentivize their behavior later in the game. I am not sure how to incorporate that dichotomy. Contracting games obviously arise most in principal-agent games, with the boss designing a contract for a worker (which, usually but not always, must satisfy a "participation constraint" that the worker be willing to accept it instead of quitting the job), or voters designing a constitution for politicians. In other principal-agent games, however, there is no contracting. Signalling games are the most prominent of these: workers choose credentials to signal their ability, without any formal contract offer beforehand by firms.

But contracting arises in symmetric player games too. Classic mechanism design problems include a seller setting the rules for an auction for lots of symmetric bidders, or a boss setting the rules for promotion for workers in a tournament with each other. Those two examples are mixed principal-agent/symmetric-player games, but mechanism design can even arise in pure symmetric-player games: a cartel chooses rules for punishing members who cut prices, a team of workers agrees to a sharing rule for output, or a group of citizens agrees to a rule for choosing how much each person pays for a new streetlight and whether it is built, based on announced preferences.

Posted by erasmuse at 03:11 AM | Comments (0) | TrackBack

October 29, 2004

Sender-Receiver Games: Truthful Announcement, Cheap Talk, and Signalling

After a chat with Professor Harbaugh, I thought I'd collect my thoughts on communication games, thinking about revisions to my Games and Information. These notes won't mean much to non-economists, I'm afraid.

There are a variety of games in which one player, the Sender, tries to communicate something-- which we can call "his type"-- to another, the Receiver. The Sender is the informed player, so he is often an Agent; the Receiver is uninformed, and so is often a Principal.

I wonder if the games can usefully be divided into Truthful Announcement, Cheap Talk, and Signalling....

...In Truthful Announcement games, the Sender may be silent or send a message, but the message must be truthful if it is sent. There is no cost to sending the message, but it may induce the Receiver to take actions that affect the Sender. If the Receiver ignores the message, the Sender's payoff is unaffected by the message. In these models, the Sender's type usually varies from bad to good.

An example of a Truthful Announcement game is when the Sender's ability A is uniformly distributed on [0,1], and the Sender can send a message Y such as "A>.5" or "A=.2".
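The standard result for this kind of game is unraveling toward full disclosure. Here is a rough numerical sketch, under an assumption of my own that is not in the example above: the Receiver pays the Sender his expected ability, so higher perceived ability is better for the Sender.

```python
# Unraveling sketch: ability A uniform on [0,1]; the Sender may stay silent
# or truthfully report A. Silence is pooled, and the Receiver infers the
# mean ability of the silent pool [0, threshold].
silence_threshold = 1.0  # start by supposing every type stays silent
for _ in range(200):
    pool_mean = silence_threshold / 2  # mean of the silent pool
    # Every type above the pool mean gains by truthfully announcing A,
    # so the silent pool shrinks to [0, pool_mean].
    silence_threshold = pool_mean

print(silence_threshold < 1e-6)  # True: the pool unravels toward A = 0
```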

In Cheap Talk games, the Sender's message is costless, but need not be truthful. If the Receiver ignores the message, the Sender's payoff is unaffected by the message. If the Receiver acts, though, that might affect the Sender. Usually, these are coordination games, where the Sender's preferred Receiver-action, given the true state of the world that he knows, is positively correlated with the Receiver's preferred Receiver-action.

An example of a Cheap Talk game is when the Sender and Receiver want to go to the same restaurant, either A or B, but only the Sender knows which restaurant is better. The Sender sends a message-- "A" or "B"-- and if the Receiver ignores it, there is no cost to the Sender.

In Signalling games, the Sender's message is costly-- or at least a false message is-- but need not be truthful. The Sender's payoff is affected even if the Receiver ignores his message. In these models, the Sender's type usually varies from bad to good. The "single-crossing property" is crucial-- that if the Sender's type is better, it is cheaper for him to send a message that his type is good.

An example is credentials. The Sender is dull or bright. If he is bright, it is easier for him to acquire credentials, which is his message to a Receiver employer.

In writing this up, some awkwardnesses strike me.

1. Zero-Cost Signals. A signalling game doesn't change its essential properties if sending the message of high quality is costless for the truly high quality type. It could even have negative cost for him-- that he gets a reward for truthfully declaring his type. What matters is that the same signal be too costly for a low quality type to think worth sending.

2. Lying Being what Is Costly. In the usual models, if a high signal is sent, that is more expensive than a low signal, especially for the low type of Sender. But I think the model would work out very much the same if what is expensive is not a high signal, but a false signal. The difference is that in the usual models, it is cheap for the High type to falsely signal that he is low, but in a truth-based model, it would be expensive for him to be modest.

3. Expensive-Talk Games. Imagine a cheap-talk game in which the signal is costly-- but the cost is the same for everyone, regardless of type. The usual sort of signalling won't work, because signalling high quality is no more expensive for the Low type than for the High type. But truthful communication might still work, for reasons more akin to those of the Cheap-Talk Game, if the High type Sender has a greater desire than the Low type for the Receiver to adopt a High response.

Thus, imagine that the Low Sender could make $100 as a salesman for himself and $100 for the Receiver if the Receiver hires him, and the High Sender could make $900 for himself and $900 for the Receiver. If messages are costless, both Senders would send the message "I am a High type" (not, I guess, "Hire me--I'm high"), and the message would be uninformative. If the message costs $200, only the truly High Sender would send the message. There is now an equilibrium in which the message is informative (there is also a pooling equilibrium, perhaps implausible, in which messages are still ignored).
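The separating logic of that numerical example can be checked directly. One assumption of mine to close the model: the Receiver hires the Sender if and only if the message is sent.

```python
# Expensive-talk example: the message costs the same $200 for both types,
# but only the High type's gain from being hired exceeds the cost.
GAIN_IF_HIRED = {"Low": 100, "High": 900}
MESSAGE_COST = 200  # identical across types, unlike in a signalling game

def sends_message(sender_type):
    # Send iff the payoff from being hired beats the cost of the message.
    return GAIN_IF_HIRED[sender_type] - MESSAGE_COST > 0

print(sends_message("High"))  # True
print(sends_message("Low"))   # False: so the message separates the types
```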

People might think of this as a signalling game, applying the single-crossing property to the ultimate payoffs, but it is really more akin to the cheap-talk game, I think. It is like the PhD Admissions Game in Chapter 6 of my book. Perhaps it is like Mechanism Design games too, which might be thought of as a form of cheap-talk games, since they have Senders and Receivers and costless messages, though in Mechanism Design games there is commitment to the mechanism.

Posted by erasmuse at 09:10 PM | Comments (0) | TrackBack

October 09, 2004

A Mechanism for Eliciting One Buyer's Reserve Price

How do you figure out how much consumers might pay for a new product? I came across a good idea yesterday in a paper by George Geis, though it is not new with him. The problem is that if you simply ask people for the greatest price, P, they will pay, they will not think hard enough, and you will get an inaccurate estimate of their maximum value, V. Or, if you offer to sell it to them for some price P and they accept, all you know is that V>P, but not V exactly.

So here is another idea. Tell the person to give you a price P that equals their value, V, and tell them what will happen next. What will happen next is that you randomly pick a price, R, for the product. If P>R, they may buy the product at price R. If P<R, they may not buy it at all.

This mechanism is truthtelling-- the person's best strategy is to choose P=V. If they choose lower, they might miss their chance to buy the product at a price they'd like-- maybe R>P but R<V, so they would have been happy to buy at R but lost the chance.

I think you could also run this with slightly different rules, saying that they MUST buy at R if R<P, even if R>V. That might be a better idea, since my original rules (which might be different from what Geis had--I forget) would make a very high P an easy strategy that would keep all the consumer's options open.
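Here is a simulation sketch of the stricter rules, where the buyer MUST buy whenever R falls below his report. The uniform range for R, and the value V=60, are assumptions of mine for illustration; this is the Becker-DeGroot-Marschak logic.

```python
import random

def expected_surplus(P, V, trials=50_000, seed=0):
    """Expected surplus from reporting P when the true value is V,
    with R drawn uniformly from [0, 100] and a forced purchase if R < P."""
    rng = random.Random(seed)  # same draws for every P, for a fair comparison
    total = 0.0
    for _ in range(trials):
        R = rng.uniform(0, 100)
        if R < P:          # forced purchase at price R
            total += V - R
    return total / trials

V = 60
best_P = max(range(0, 101, 5), key=lambda P: expected_surplus(P, V))
print(best_P)  # 60: reporting P = V is optimal
```

Reporting above V forces some purchases at prices above value; reporting below V forgoes some purchases at prices below value. Either way the buyer loses relative to P=V.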

Posted by erasmuse at 08:55 AM | Comments (0) | TrackBack

September 21, 2004

Does not Calling Indicate not Liking? Ted and Sheila

Via Alex Tabarrok at MR, I discover Glen Whitman at Agoraphilia

Say Ted would like to talk on the phone every two days, whereas Sheila would like to talk every day. You might think Sheila would call Ted about two-thirds of the time -- but in fact, she will call him every time. If they talk on Monday, Ted plans to call on Wednesday; but then Sheila calls him Tuesday. His clock reset, Ted plans to call on Thursday. And then Sheila calls on Wednesday. Eventually, Sheila decides Ted doesn't care about her, because he never calls....

Sheila's conclusion is not "rational" in the economic sense, because she ought to have figured this out. If her prior belief is that there are equal probabilities that Ted would want to call her every half-day, every two days (which is in fact the truth) and never, then after the experience described above, she should revise it to put 50-50 probability on Two Days and Never.
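The revision Sheila ought to make is just Bayes' Rule, using the priors from the paragraph above. The evidence is that Ted never calls while Sheila calls daily, which a Half-Day type would not do.

```python
# Sheila's rational updating over Ted's type (how often he wants to call).
priors = {"HalfDay": 1/3, "TwoDays": 1/3, "Never": 1/3}
# Probability of "Ted never calls" given each type, when Sheila calls daily:
likelihood = {"HalfDay": 0.0, "TwoDays": 1.0, "Never": 1.0}

evidence = sum(priors[t] * likelihood[t] for t in priors)
posterior = {t: priors[t] * likelihood[t] / evidence for t in priors}
print(posterior["TwoDays"], posterior["Never"])  # 0.5 0.5
```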

Furthermore, if she cares enough, she can experiment and learn. She can purposely refrain from calling. If Ted does not call after Two Days, she can conclude that the truth is he Never wants to call.

...This is a truly useful idea.

I can carry it a little further than Whitman did. Suppose Sheila is rational. She therefore continues to call every day, knowing that there is a 50% probability Ted does like talking, if not as much as she does. In fact, though, let us change the story so Ted's true preference is Never. If Ted thinks that never calling Sheila is going to give her the message that he doesn't like her, he is mistaken. She will continue to put the probability that he doesn't like her at just 50%.

This last story sounds more realistic if we make Sheila's priors on Two Days and Never at 95-5. Remember, these are subjective beliefs of Sheila, so it would not be surprising if she put a high probability on Ted liking her. When Ted never calls, she will continue to hold the 95-5 beliefs. With beliefs that skewed, she will also find no point in experimenting by incurring the cost of not calling and letting two days go by. Ted therefore must bite the bullet and tell her he doesn't like her, or else endure those phone calls indefinitely.

Posted by erasmuse at 10:14 AM | Comments (0) | TrackBack

September 20, 2004

Bribes, Airport Security, and Helpful Entrapment

One reason our precautions against hijackers are silly is that a simple bribe can get around any of them. Via Tyler Cowen, the September 17 Washington Post tells us

A thousand rubles, or about $34, was enough to bribe an airline agent to put a Chechen woman on board a flight just before takeoff, according to Russian investigators. The agent took the cash, and on a ticket the Chechen held for another flight simply scrawled, "Admit on board Flight 1047."

The woman was admitted onto the flight, while a companion boarded another plane leaving Moscow's Domodedovo Airport the same evening. Hours later, both planes exploded in midair almost simultaneously, killing all 90 people aboard.

It would take more than $34 in America, but I think $2,000,000 would do it, not a large sum for a group that is willing to use up its own members' lives.

I do have a solution, though I don't think we're using it: entrapment. We need to immediately send out FBI agents to offer numerous two million dollar bribes to airport personnel, and we must publicize the firing (and perhaps the criminal prosecution, even if conviction fails) of those who succumb to temptation. Lots of people would give up their honor for two million dollars, but if there is only a 1 in 100 probability that the briber will pay rather than turn you in, the expected payment falls to $20,000, with a 99% chance of losing your job.
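The arithmetic in that last sentence, spelled out (the 1-in-100 figure is the post's hypothetical, not an estimate):

```python
# Expected payment from accepting a bribe when most offers are stings.
bribe = 2_000_000
p_genuine = 0.01   # 1 in 100 bribers actually pays rather than turns you in
expected_payment = p_genuine * bribe
print(expected_payment)  # 20000.0, against a 99% chance of losing your job
```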

Posted by erasmuse at 02:28 PM | Comments (0) | TrackBack

August 28, 2004

The Cry Bar in Nanking; Norm Entrepreneurs

From the
July 31 WORLD magazine:

One of the hottest bars in the Chinese city of Nanjing sports only a sofa, a few tables, and tissue paper - a lot of tissue paper. The AFP news service reports that the city's first "cry bar," where customers can sit and cry for $6 per hour, is growing in popularity. Owner Luo Jun says he opened the bar when clients of his last business said they often wanted to cry but didn't know when or where it would be appropriate to do so.

I was just talking with my grad students yesterday about social norms and multiple equilibria. This is a great example of Norm Entrepreneurship. Mr. Luo saw an opportunity, and took it.

Posted by erasmuse at 04:25 PM | Comments (0) | TrackBack

August 21, 2004

No-Trade Theorems; L. Samuelson(2004)

I was reading Larry Samuelson's survey, "Modeling Knowledge in Economic Analysis," in the June 2004 Journal of Economic Literature. Much of it is about No-Trade Theorems. I'll modify one of his first examples to illustrate.

Basic Model. Alice owns 1 share of a company. That share is worth $300 if the company's new product will be a success, and $200 if it is a failure. Each of these has equal probability, so the market price is $250.

Alice can make one take-it-or-leave-it offer to sell the stock to Bob. Clearly, so far Alice would offer P=250 and he would accept, but both players would be indifferent. We don't really have a model of trade yet, since a small transaction cost would block all trade.

We will think about adding two additional assumptions.

Assumption 1: Alice is better informed. Alice finds out whether the product will be a success, Bob knows she has found out, she knows Bob knows, and so forth (her finding out is "common knowledge", though whether she has found success or found failure is unknown to Bob).

Under Assumption 1, if Alice finds out FAILURE, what happens? She will believe (correctly) that the value is $200, and Bob will believe it is $250. But now if she offers to sell to him at P=250, he will change his belief to value V=200 and refuse to buy. Indeed, the only equilibrium with trade is if Alice offers P=200, and, again, both players are then indifferent about trade.

This is a No-Trade result. Our intuition that difference of opinion will result in the low-valuer Alice selling to the high-valuer Bob fails, because the very act of Alice trying to sell converts Bob to being a low-valuer.
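A stylized sketch of that logic (my own simplification: any offer below $300 reveals that Alice learned FAILURE, since a success-informed Alice would never sell cheaper):

```python
# No-trade logic under Assumption 1: Bob updates on the offer itself.
V_FAIL, V_SUCCESS = 200, 300

def bob_accepts(P):
    # An offer of P < 300 can only come from an Alice who learned FAILURE,
    # so Bob's updated value is 200; otherwise he keeps the prior mean 250.
    bob_value = V_FAIL if P < V_SUCCESS else 0.5 * V_FAIL + 0.5 * V_SUCCESS
    return bob_value >= P

print(bob_accepts(250))  # False: the offer itself reveals the bad news
print(bob_accepts(200))  # True, but both sides are then indifferent
```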

Most of Samuelson's survey looks at papers that generalize the result to more complicated situations than this little example and to fancy ways to try to generate trade, many of them fiddling with the standard Bayesian assumption of common priors (that both players know the probability of failure is .5, that both of them know Alice has found the information, etc.) But I wonder whether the paradox can be resolved even within this little example.

Suppose if instead of Assumption 1, we used Assumption 2.

Assumption 2. With probability .1, Alice gets into a fight with the president of the company, and holds a grudge which means her share of stock is worth $95 less to her than to anyone else in the world. Bob does not know whether she really had the fight, but he knows the probability is .1 and the consequence is a $95 difference.

Under Assumption 2, if Alice has the fight, then she will offer P=250 to Bob and Bob will accept. Unlike in the basic game, Alice now has a strong incentive to sell-- she is not indifferent. Bob is still indifferent, but that is an example of the purely technical "open-set" problem-- Alice would be willing to offer P=249 if she had to, and Bob would then be strongly desirous of accepting.

Assumption 2 is an example of a non-informational reason for trade, a reason that requires trade to attain efficient allocation of resources. This is, of course, the second reason we intuit for why trade occurs. It is by far the main reason for trade in goods, and it is also important for trade in securities, though Samuelson and others argue that efficiency reasons can't explain the volume of trade in securities.

Now let's use both Assumption 1 and Assumption 2. Note that this means that with 100% probability Alice has an informational reason for trade, but with 10% probability she also has an efficiency reason. What will happen?

First, suppose Alice hears that the product will be a failure. She will offer to sell to Bob at some price P*. She will tell Bob that she is selling because she had a fight with the president, but Bob won't believe that. He knows that with high probability she is selling because the company's value is only 200. What is the highest value of P* that Bob will accept?

If Alice had no fight and heard that V=300, she would offer P=300 (or make no offer) and Bob would deduce what happened. This has probability .9(.5) = .45.

With probability .9(.5), Alice had no fight but heard that V=200 and is selling for that reason.

With probability .1(.5), Alice had a fight and heard V=200, and so has two reasons to sell.

With probability .1(.5), Alice had a fight and heard V=300, and so will sell if P*>205.

That means that if P*>205, a sale will occur with probability .45 + .05 + .05 = .55.

Bob's expected payoff from accepting P* is zero if

[(.1)/(.55)] [.5(300) + .5(200) - P*] + [(.45)/(.55)] [200 - P*] = 0.

This reduces to

(2/11) (250-P*) + (9/11) (200-P*)=0,

500 -2P* +1800 - 9P* =0

2300 = 11P*

P* = 2300/11= 209 (approximately)

If P* =209, then Alice is willing to sell even if she heard good news, if she really had a fight with the president, and Bob is willing to accept her offer, because he can at least break even (and if Alice offered 208, Bob would be strongly willing to accept).
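The break-even calculation above can be redone in exact arithmetic. Conditional on a sale, weight .1/.55 = 2/11 goes to "Alice had the fight" (expected value 250, since both V's are possible) and .45/.55 = 9/11 to "no fight, heard V=200" (value 200).

```python
from fractions import Fraction

# Bob's zero-profit price: w_fight*(250 - P) + w_info*(200 - P) = 0.
w_fight = Fraction(1, 10) / Fraction(11, 20)   # .1/.55  = 2/11
w_info = Fraction(9, 20) / Fraction(11, 20)    # .45/.55 = 9/11

P_star = w_fight * 250 + w_info * 200
print(P_star, float(P_star))  # 2300/11, about 209.09
```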

Thus, a small probability of an efficiency reason for trade has generated a high probability of trade. Trade will occur 55% of the time, but 45/55 of the time trade occurs, its direct motivation will be Alice's superior information, not her possible efficiency motivation. So if you think that most securities trading is not motivated by efficiency, this model explains what is going on. The market (Bob) knows that most people are selling because they have private information, but the market is only willing to trade with them because it knows that some people are selling for efficiency reasons.

And I've shown what is going on with a much simpler and more conventional model than what's in the literature. To be sure, it's just a numerical example, but it's got 90% of what we need for an explanation of the real-world factoid.

Still, it might be worth expanding a bit, if this is not already in the literature. It would be interesting to see what would happen if Alice's probability of being informed is not 100%, but X%, and compare the effect of X with the effect of the probability of having an efficiency reason (here, 10%).

Posted by erasmuse at 11:49 PM | Comments (0) | TrackBack

August 20, 2004

Kakutani's Death; Fixed Point Theorems

Shizuo Kakutani, author of the Kakutani Fixed Point Theorem, has died at age 92. I started auditing his Real Analysis class one fall while I was an undergrad. I didn't realize that he was the author of a theorem important for economics, or that real analysis was one of the most useful math courses I could take. Rather, I knew the course was a basic one for math majors, and very hard, and I was feeling very self-confident. I didn't last too long. Staying up till the wee hours doing problem sets for a course I was just auditing was too much for me. Still, I remember vividly how Professor Kakutani would clearly exposit series and sums, filling up blackboard after blackboard in neat handwriting. And I remember his joke about the lady who was surprised that after so many years in America he still spelled "if" as "iff" (for nonmathematical readers: "iff" means "if and only if" in math).

Alex Tabarrok has a good discussion of fixed point theorems at Marginal Revolution.

One morning, exactly at sunrise, a Buddhist monk began to climb a tall mountain. The narrow path, no more than a foot or two wide, spiraled around the mountain to a glittering temple at the summit. The monk ascended the path at varying rates of speed, stopping many times along the way to rest and to eat the dried fruit he carried with him. He reached the temple shortly before sunset. After several days of fasting and meditation he began his journey back along the same path, starting at sunrise and again walking at variable speeds with many pauses along the way. His average speed descending was, of course, greater than his average climbing speed.

Prove that there is a spot along the path that the monk will occupy on both trips at precisely the same time of day.


Take two pieces of 8*11 paper and lay them on top of one another so that every point on the top paper corresponds with a point on the bottom paper. Now crumple the top piece of paper in any way that you wish and place it back on top. Brouwer's theorem tells us that there must be a point which has not moved, i.e. which lies exactly above the same point that it did initially.


Consider a cupful of coffee. Each point is somewhere in 3-dimensional space. Stir. At least one point ends up in the same place as it began.
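The monk puzzle is really the intermediate value theorem in disguise: superimpose the two days, and the gap between the ascending and descending positions changes sign over the day. A numerical check, with made-up monotone schedules of my own:

```python
def up(t):
    """Position on the ascending day (0 = base, 1 = summit), t in [0, 1]."""
    return t ** 2

def down(t):
    """Position on the descending day: starts at the summit, ends at the base."""
    return (1 - t) ** 0.5

def crossing(lo=0.0, hi=1.0, tol=1e-10):
    """Bisection on g(t) = up(t) - down(t), which goes from -1 to +1."""
    g = lambda t: up(t) - down(t)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo

t_star = crossing()
print(abs(up(t_star) - down(t_star)) < 1e-4)  # True: the shared spot exists
```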

Posted by erasmuse at 12:12 AM | Comments (0) | TrackBack

August 13, 2004

Bait Cars in Vancouver; Auditing Games

A mall in Vancouver has many signs like the one I show here. Isn't it a good idea? Best of all would be to actually plant some bait cars too, but that isn't even necessary, if budgets are tight. Criminals will rightly be skeptical that the bait cars exist, but there is not much to be done about that unless some kind of certification or reputation becomes possible. Newspaper reports of successful baiting *might* work.

This is the same situation as in the following two of my articles:

``Lobbying When the Decisionmaker Can Acquire Independent Information,'' Public Choice (1993) 77: 899-913. Politicians trade off the cost of acquiring and processing information against the benefit of being re-elected. Lobbyists may possess private information upon which politicians would like to rely without the effort of verification. If the politician does not try to verify, however, the lobbyist has no incentive to be truthful. This is modelled as a game in which the lobbyist lobbies to show his conviction that the electorate is on his side. In equilibrium, sometimes the politician investigates, and sometimes the information is false. The lobbyists and the electorate benefit from the possibility of lobbying when the politician would otherwise vote in ignorance, but not when he would otherwise acquire his own information. The politician benefits in either case. Lobbying is most socially useful when the politician's investigation costs are high, when he is more certain of the electorate's views, and when the issue is less important. In Ascii-Latex (43K) or pdf (204K).

"Explaining Incomplete Contracts as the Result of Contract-Reading Costs," in the BE Press journal, Advances in Economic Analysis and Policy. Vol. 1: No. 1, Article 2 (2001). Much real-world contracting involves finding new clauses to add to a basic agreement, clauses which may or may not increase the welfare of both parties. The parties must decide which complications to propose, how closely to examine the other side's proposals, and whether to accept them. This suggests a reason why contracts are incomplete in the sense of lacking Pareto-improving clauses: contract-reading costs matter as much as contract-writing costs. Fine print that is cheap to write can be expensive to read carefully enough to understand the value to the reader, and especially to verify the absence of clauses artfully written to benefit the writer at the reader's expense. As a result, complicated clauses may be rejected outright even if they really do benefit both parties, and this will deter proposing such clauses in the first place. In ascii-latex and pdf.

It reminds me of the old joke about the farmer who, having noticed that watermelons were disappearing from his garden, posted a sign saying,

"One of the watermelons in this garden is poisoned."

The next day at dawn he looked out and saw that no more watermelons had been taken, but the "One" on the sign had been crossed out. Now the sign said,

"TWO of the watermelons in this garden is poisoned."
Note, however, that the last part of the joke does not carry over to parking lots in Vancouver.

Posted by erasmuse at 02:02 PM | Comments (0) | TrackBack

July 25, 2004

James Miller's Game Theory at Work (McGraw Hill 2003)

This economist at Smith College was in tenure trouble because of his conservatism. His game theory book, one of the many competitors of my own book, looks pretty good, though so close in style to Dixit and Nalebuff, Dixit and Skeath, and McMillan, good books all, that I wonder about the need for it-- especially when Dixit, Nalebuff, and McMillan are such big names. I was only bold enough to write the 1st edition of Games and Information as an assistant professor because in 1989 nobody else had written a book on game theory in the post-1975 style and everybody wanted to read such a book. I knew I'd have the best book simply because it would be the only book-- though it wasn't for long, it turned out.

Posted by erasmuse at 11:13 PM | Comments (0) | TrackBack