The Traveler’s Dilemma: Irrational Choices, Altruism, or Implicit Collusion

One of the things I love about travel is that I tend to get a chance to catch up on back issues of Scientific American.  This trip is no exception.

Over lunch, I read an article in the June 2007 issue called “The Traveler’s Dilemma”, by Kaushik Basu.  In it he explains research on why people give what seem to be irrational responses to the game called, “The Traveler’s Dilemma”.  I’m going to use the forum of this blog post to propose an alternative answer, one not suggested in his article.

First, it will help to define what “The Traveler’s Dilemma” is.  Many people are familiar with “The Prisoner’s Dilemma”, made famous most recently by the movie “A Beautiful Mind”, about John Nash, one of the original theorists behind Game Theory.  The Traveler’s Dilemma is defined well in the article, so I’ll quote it here:

Lucy and Pete, returning from a remote Pacific island, find that the airline has damaged the identical antiques that each had purchased. An airline manager says that he is happy to compensate them but is handicapped by being clueless about the value of these strange objects. Simply asking the travelers for the price is hopeless, he figures, for they will inflate it.

Instead he devises a more complicated scheme. He asks each of them to write down the price of the antique as any dollar integer between 2 and 100 without conferring together. If both write the same number, he will take that to be the true price, and he will pay each of them that amount. But if they write different numbers, he will assume that the lower one is the actual price and that the person writing the higher number is cheating. In that case, he will pay both of them the lower number along with a bonus and a penalty–the person who wrote the lower number will get $2 more as a reward for honesty and the one who wrote the higher number will get $2 less as a punishment. For instance, if Lucy writes 46 and Pete writes 100, Lucy will get $48 and Pete will get $44.

What numbers will Lucy and Pete write? What number would you write?
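
To make the manager’s scheme concrete, here is a minimal sketch of the payoff rule in Python (my own illustration, not from the article):

```python
# Payoff rule for the Traveler's Dilemma as described above:
# matching claims are paid at face value; otherwise both are paid
# the lower claim, with a $2 reward for the lower claimer and a
# $2 penalty for the higher claimer.

def payoffs(lucy: int, pete: int, bonus: int = 2) -> tuple[int, int]:
    if lucy == pete:
        return lucy, pete
    low = min(lucy, pete)
    if lucy < pete:
        return low + bonus, low - bonus
    return low - bonus, low + bonus

print(payoffs(46, 100))  # (48, 44), matching the article's example
```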

Before I give away the answer… think about what number you would guess.  For whatever reason, I didn’t recognize this game, and I immediately jumped straight to the answer 98.

Wrong.

Mathematically, there is only one rational guess.  It’s 2.

The article also gives a great explanation of why 2 is the right answer:

To see why 2 is the logical choice, consider a plausible line of thought that Lucy might pursue: her first idea is that she should write the largest possible number, 100, which will earn her $100 if Pete is similarly greedy. (If the antique actually cost her much less than $100, she would now be happily thinking about the foolishness of the airline manager’s scheme.)

Soon, however, it strikes her that if she wrote 99 instead, she would make a little more money, because in that case she would get $101. But surely this insight will also occur to Pete, and if both wrote 99, Lucy would get $99. If Pete wrote 99, then she could do better by writing 98, in which case she would get $100. Yet the same logic would lead Pete to choose 98 as well. In that case, she could deviate to 97 and earn $99. And so on. Continuing with this line of reasoning would take the travelers spiraling down to the smallest permissible number, namely, 2. It may seem highly implausible that Lucy would really go all the way down to 2 in this fashion. That does not matter (and is, in fact, the whole point)–this is where the logic leads us.
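
That undercutting spiral is easy to reproduce in code. Here is a small sketch of mine (not from the article) that repeatedly computes the best response to the opponent’s current claim, starting from 100:

```python
# Starting from a claim of 100, repeatedly switch to the best response
# against the opponent's current claim; the sequence walks down to 2,
# the only claim that is a best response to itself.

def payoff(mine: int, theirs: int, bonus: int = 2) -> int:
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + bonus if mine < theirs else low - bonus

def best_response(theirs: int) -> int:
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

claim, path = 100, [100]
while best_response(claim) != claim:
    claim = best_response(claim)
    path.append(claim)

print(path)  # [100, 99, 98, ..., 3, 2]
```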

The rest of the article dives into detail about the different experiments that were run to try to understand why people reliably do not choose the rational answer.  I won’t repeat it all here, but it was a very impressive set of empirical studies.

The one that impressed me the most was the experiment in 2002 by Tilman Becker, Michael Carter, and Jorg Naeve at the University of Hohenheim in Germany.  They actually played this game, for real money, with 51 members of the Game Theory Society – all of whom were professional game theorists!

But with real money at stake, 45 of the 51 chose a single number to play every round, and of those, only 3 chose the Nash equilibrium value (2).  10 chose 100, and 23 chose numbers between 95 and 99 (phew, I’m not completely off base).

Now, this is where I think I have some value to add.

The rest of the article theorizes that the unexplained choice of strategies of either 100 or the high 90s is based on an evaluation of altruism, an intrinsic human trait that may be hard-wired into our brains.

The author gets close to what I believe the right answer is here, in the last paragraph:

If I were to play this game, I would say to myself: “Forget game-theoretic logic. I will play a large number (perhaps 95), and I know my opponent will play something similar and both of us will ignore the rational argument that the next smaller number would be better than whatever number we choose.” What is interesting is that this rejection of formal rationality and logic has a kind of meta-rationality attached to it. If both players follow this meta-rational course, both will do well. The idea of behavior generated by rationally rejecting rational behavior is a hard one to formalize. But in it lies the step that will have to be taken in the future to solve the paradoxes of rationality that plague game theory and are codified in Traveler’s Dilemma.

So close… but let me put my own words around the concept:

Implicit Collusion

What if we, as humans, are hard-wired to “collaborate”?  Collaboration, cooperation… these are nice words.  Collusion is the variant where two parties actually pool their efforts to control the outcome of a situation to their personal advantage.

My guess is that we are wired, either genetically or socially, to infer collusion opportunities when they present themselves.

The rational choice might be 2, but even without talking to the other person, I might trust that they also realize that if we both just guess 100, we will both win.  Collusion without communication.  The price for being wrong is just “2 dollars”, a relatively low price to pay versus the gain of “98 dollars” if I’m right about the implicit collusion opportunity.  Even with loss aversion of 3:1, I’m going to guess 100.  The guesses in the high 90s are likely a slight nod to squeezing out a couple of extra dollars of upside, while still preserving most of the $90+ gain of the collusive opportunity.

In fact, I believe that the combination of loss aversion and each player’s estimate of the probability of silent collusion explains the observed ranges of guesses.
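
To make that concrete, here is a toy version of the calculation. Every specific in it is my own assumption, chosen purely for illustration: the opponent either colludes silently at 100 (with probability p) or plays the Nash claim of 2, and losses relative to the safe benchmark of claiming 2 are weighted three times as heavily as gains:

```python
# Toy model: the opponent claims 100 with probability p ("implicit
# collusion") and 2 otherwise; losses relative to the claim-2
# benchmark are weighted lam times as heavily as gains.

def payoff(mine: int, theirs: int, bonus: int = 2) -> int:
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + bonus if mine < theirs else low - bonus

def weighted_value(mine: int, p: float, lam: float = 3.0) -> float:
    benchmark = p * payoff(2, 100) + (1 - p) * payoff(2, 2)
    total = 0.0
    for theirs, prob in ((100, p), (2, 1 - p)):
        delta = payoff(mine, theirs) - benchmark
        total += prob * (delta if delta >= 0 else lam * delta)
    return total

for p in (0.05, 0.10, 0.50, 0.90):
    best = max(range(2, 101), key=lambda m: weighted_value(m, p))
    print(f"p = {p:.2f} -> best claim: {best}")
```

Under these made-up numbers, once the estimated chance of silent collusion rises past a few percent, the loss-averse best claim jumps from 2 into the high 90s, which is roughly the range the game theorists actually played.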

Loss aversion is well known, but I’ve never considered the idea of implicit collusion before.  It seems like a powerful idea to explain human behavior in games where communication between parties is prevented.

10 thoughts on “The Traveler’s Dilemma: Irrational Choices, Altruism, or Implicit Collusion”

  1. The argument for 2 being the logical choice is flawed. It presumes that both people will think similarly, which is just not realistic. Unless you’ve been watching The Princess Bride too many times.

    On the contrary, the most logical answer to the real-life example is to use the actual value of the item. If you are both honest, you both get what is deserved. If you are honest and the other person lies with a higher amount, you benefit by $2. The only scenario where you’d lose is if the other person lies with a lower amount, which no one in their right mind would do.

    If you remove the real-life example, and instead change the objective from getting a fair value for a damaged item to maximizing your take, then it always makes sense to guess 100. If the other person is like-minded, you both win big. If the other person is less greedy, you make $4 less than them, but still maximize your chances of getting at the very least a reasonable amount. The floor is raised to $2 less than the least ambitious guess. So it’s a race to the top rather than the race to the bottom the article theorizes.

    It is only if you again change the objective, this time to making more of a profit than your opponent, that the race-to-the-bottom logic comes into play. And even then, as I mentioned earlier, the race to the bottom is NOT logical, because it assumes the other person thinks like you do. In this scenario, it is no longer an issue of pure math, but a 50/50 mix of math and psychology. It’s a poker game. Is your opponent greedy? Conservative? Does he think you are one or the other? How can you carry yourself to make him think you are one or the other, to your advantage?

    In any case, those are three very different “games”, and the rules and theory that apply to them should be handled separately.

    When I was in college, I had a Social Psych teacher who played a similar game with the class. Instead of money, there were points at stake, which applied directly to your grade. This made the stakes very, very real.

    First, he broke us into five groups of five (or thereabouts). Within each group, everyone had to choose either a red or green card, without consulting with anyone else, and everyone flipped their card at the same time. If everyone chose green, everyone gained 10 points. If everyone chose red, everyone lost 10 points. If it was mixed, everyone who chose green lost 20 points and everyone who chose red gained 20 points. (The scoring rule is sketched in code at the end of this comment.)

    We played the game several times this way, and none of the groups were all-green. Most were mixed, and a few ended up with all red. Each successive time, people would change their answers based on what they thought the rest of the group was thinking, but the overall mix stayed more or less the same. Without communication, people mostly tried to screw one another, and ended up screwing themselves just as much. This is the kind of blind “I’ll base my guess on what I think he’s thinking” logic that makes the whole stock market seem like such a pit of chaos to me.

    Next, he changed it so that everyone had an opportunity to talk within their group, and make a group plan before choosing their colors. But when it came time to flip, you could still choose whatever you wanted to. In this scenario, the majority of groups agreed to all flip green, so they would all win, and most actually did flip green. But invariably, there was at least one group in which someone would flip red so he could gain more while everyone else lost. I think this reflects how most of the world works. We have generally agreed upon rules, and most people follow them, but there’s always a few ass-hats out there screwing it up for the rest.

    The final phase was group-to-group. Each group would talk within themselves to decide which color the group would choose, without any of the other groups hearing. Then all the groups would reveal their choices at the same time, and the same point rules would apply at the group level.

    In this scenario, things got chaotic again. Instead of being able to judge individual members in the group fairly well by their body language and such, you then had to judge an entire group based on what you could see of each of them, what you thought of them as a whole, and who within the group you thought was most likely to be influencing the group’s decisions. It ended up being much more in line with nation-to-nation politics.

    What I learned:
    – Lack of communication leads to chaos and mistrust.
    – Tribe-level collaboration leads to the most beneficial situation, but you still have to watch out for the few bastards.
    – Group-to-group political collaboration went back to chaos.

    But again, at the individual level, it is fairly easy to predict someone’s behavior in the game based on what they think the objective is. If their objective is just to earn a decent amount of points, then they’ll tend toward the all-green option. If it’s to earn as many points as possible, or to get more points than the other people, they’re more likely to be the one flipping red when everyone else flips green. The real drama comes in guessing what other people think the objective is, and convincing them to behave how you want them to.
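
    For reference, here is my reconstruction of that scoring rule in code (a sketch based only on the description above):

    ```python
    # Red/green card game: all green -> +10 each; all red -> -10 each;
    # in a mixed round each green loses 20 and each red gains 20.

    def scores(cards: list[str]) -> list[int]:
        if all(c == "green" for c in cards):
            return [10] * len(cards)
        if all(c == "red" for c in cards):
            return [-10] * len(cards)
        return [20 if c == "red" else -20 for c in cards]

    print(scores(["green", "green", "red", "green", "green"]))
    # [-20, -20, 20, -20, -20] -- one defector profits, the rest pay
    ```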

  2. I think there is a rule against comments that are longer than the original post 🙂

    You make a number of good points, but fundamentally, the rational answer is 2, if people are trying to maximize their payout, according to existing game theory.

    Some of your points echo mine, however: depending on the probability you assign to the other person picking a high number, the rational answer can shift to 100 or the high 90s.

    Part of the problem with behavioral economics and game theory is that “defining the objective”, as you put it, is becoming difficult, as people seem to have consistent, non-rational ways of defining objectives.

    That’s why my blog is called Psychohistory, after all. 🙂

    Thanks for reading and posting. You don’t need a blog – you can just aggregate your comments here.

    Adam

  3. Adam,
    An interesting and insightful post, as always.
    It made me remember an interesting book called:
    “The Evolution of Cooperation” by Robert Axelrod.
    http://en.wikipedia.org/wiki/The_Evolution_of_Cooperation
    http://www.amazon.com/Evolution-Cooperation-Robert-Axelrod/dp/0465021212

    In it the author describes how soldiers on opposing sides of a trench battle developed “implicit collusion opportunities”:

    Here’s the specific passage from page 60:
    “A fascinating case of the development of cooperation based on continuing interaction occurred in the trench warfare of WWI. In the middle of this very brutal war there developed between the men facing each other what came to be called the “live-and-let-live system.” The troops would attack each other when ordered to do so, but between large battles each side would deliberately avoid doing much harm to the other side — provided that the other side reciprocated.”

    http://books.google.com/books?id=d7B5WloK_tIC&dq=evolution+of+cooperation&pg=PP1&ots=_JXcX-2jCr&sig=WeD54HhfgtRJF31xr9PJckOOUtA&prev=http://www.google.com/search%3Fclient%3Dsafari%26rls%3Den-us%26q%3Devolution%2Bof%2Bcooperation%26ie%3DUTF-8%26oe%3DUTF-8&sa=X&oi=print&ct=title&cad=one-book-with-thumbnail#PPA60,M1

  4. Adam,

    You say, “The rational answer is 2…”

    But, I disagree. The *theoretically* optimal answer is 2, but you’re asking about playing this game in real life, and in real life, not taking into account the likely actions of real people is completely *irrational*.

    I would prefer to view this as an expected value problem. And in the case where you choose $2 as your answer, your payout is necessarily between $2 and $4, no matter what the other person does.

    While I won’t go into a lot of math, I think it would be relatively easy to prove that in real life, picking a number larger than $2 (perhaps even $100) gives you an expected value greater than $4 (see the quick simulation sketched after this comment).

    In fact, I’m guessing that if you and some random person who you didn’t know ended up in this situation, you would *not* pick $2. If you did, I’d be highly disappointed in you… 🙂
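
    Here is the kind of quick check I have in mind (a sketch; the opponent model, a uniform random claim between 2 and 100, is purely an assumption, and real players cluster much higher):

    ```python
    # Average payout of a fixed claim against a uniformly random
    # opponent claim in [2, 100].
    import random

    def payoff(mine: int, theirs: int, bonus: int = 2) -> int:
        if mine == theirs:
            return mine
        low = min(mine, theirs)
        return low + bonus if mine < theirs else low - bonus

    random.seed(0)
    opponents = [random.randint(2, 100) for _ in range(100_000)]
    for mine in (2, 50, 97, 100):
        ev = sum(payoff(mine, t) for t in opponents) / len(opponents)
        print(f"claim {mine:>3}: average payout ~ ${ev:.2f}")
    # Claiming 2 tops out near $4; every high claim does far better here.
    ```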

  5. As Mark Twain once said, “I didn’t have time to write a short letter, so I wrote a long one instead.” 🙂

    I guess this particular bit of game theory just poked at two of my pet peeves:

    1) Improper use of metaphor. The objective in the “Traveler’s Dilemma” metaphor is not the same as the objective in the experiments based upon it.

    2) Theories based on inconsistent assumptions. The scenarios for each objective are very different from one another, but the theorizing treats them as if they were the same.

    The interesting offshoot of this, I’d think, would be comparing the real-world results of instances where the different objectives are applied. For instance, comparing companies that cling to the idea that they must each have their own proprietary solution for the same problem, versus those who cooperate with their competition on mutually needed resources, to raise the tide for all involved.

    Who cares what the optimal numbers are within each objective framework. It’s more useful to know which framework works best under what circumstances.

    PS: And I do have my own blog, actually. Sadly, it’s mostly media stuff and funnies lately, since I haven’t had time for much thought provoking goodness.

  6. The original article, in the magazine, has the payoff matrix. I thought it was in the online version, but I can’t find it.

    Maybe I’ll reproduce it here – I should stop saying the “rational choice” and start saying the “Nash Equilibrium” of the game is at “2” for each player.

    I agree fundamentally that the Nash Equilibrium does not adequately explain the actual results here, which is why I found this topic so interesting. And I still like my term, “implicit collusion”.

    Adam

  7. Adam: Your concept of implicit collusion is what the Nobel Prize winner Thomas Schelling called a “focal point”. Sometimes people refer to them as “Schelling points”: areas in the payoff space that just seem like the right place to be, agreed upon without explicit communication.

    Ray: The game you are talking about, the get-as-much-as-you-can game, is a 4-person dilemma game in which the size of the permitted free-riding group is 0. Again, Thomas Schelling modelled these types of interactions: see Micromotives.

Comments are closed.