In the 1500s, Dirk Willems was arrested for practicing a faith that differed from the Catholicism and Protestantism practiced in the Netherlands. Willems was locked up in a residence converted into a prison, but escaped by climbing down a rope made of knotted rags to the frozen moat below. Due to his light weight from malnourishment, Willems did not break the ice. But a guard who saw Willems and gave chase fell through.
Instead of escaping, Willems returned to the water and saved the life of his pursuer. The guard then returned Willems to prison, where he remained until he was burned at the stake.
After his death, Willems became a celebrated martyr both for adhering to his faith and for the humanity of his actions. For researchers of evolution, however, his actions represent a puzzle. Why did Willems, a product of millennia of natural selection, act so clearly against his self-interest? For that matter, why would a soldier sacrifice himself or herself? In a Darwinian world of self-interest, why do cooperation, generosity, and altruism exist at all?
An Evolutionary Basis for Cooperation
To understand how researchers study the tension between evolution and behaviors like cooperation and altruism, we spoke with Alexander Stewart, a postdoc working on mathematical biology at the University of Pennsylvania.
Societies honor brave soldiers with medals of honor and celebrate individuals like Dirk Willems as martyrs because their actions are extraordinary – a triumph of ideals over base instincts. But as a biologist, Stewart can’t be satisfied by viewing altruism this way. Biology views populations as collections of nearly identical individuals. When mutations lead to differences, those differences either flourish or disappear.
Stewart, who recently published a paper with Professor Joshua Plotkin of UPenn on the subject of cooperation and evolution, described a variety of approaches taken to explain how generosity, altruism, and cooperation could lead to evolutionary success.
One explanation is kin selection, which proposes that evolution will favor altruism but not uniformly. In a nutshell, people will act against their self-interest to help someone as long as he or she is closely related. Kin selection can’t explain Willems saving a stranger, but it can explain someone diving into a raging river to save their son, cousin, or maybe even second cousin. Doing so protects their own genes.
But people do not survive and thrive by themselves. The related approach of group selection explains altruism, generosity, and cooperation by asking whether a behavior helps the group as a whole, especially in circumstances of populations fighting or competing over scarce resources.
Indirect reciprocity addresses the more complicated case of cooperating with a stranger. It does so by introducing reputation. Individuals develop reputations as either trustworthy or not. We help people with good reputations and avoid people who take advantage of generosity.
Stewart approaches the problem from an angle popular with economists: game theory. Often used to model the interaction between two superpowers or several competing firms, game theory looks at interactions between rational decision makers who aim to maximize their utility. In other words, it models interactions like a series of turns in a game in which each player is trying to get the most points.
Priceonomics enjoys good economic analysis, but this struck us as perhaps the exact wrong approach. When we help our mothers or a friend, we don’t do it to maximize utility. Nobel Prize winner Elinor Ostrom made a career studying how economists’ models of “rational actors” fail to reflect interactions in the real world. As one Forbes writer described her work in an acknowledged oversimplification, “people who don’t know each other, tend to screw each other. But people who do know each other, tend to help each other.” Why look at game theory when the definition of altruism is acting against one’s self-interest?
For Stewart, however, that is the point. “Yes, you don’t behave in a blind, utility-maximiz[ing] way,” he told us. However, when we act differently toward a friend, we consider reputation à la the indirect reciprocity approach. If our friend were a jerk, we wouldn’t help him. The value of game theory is that it asks whether people will cooperate before you even incorporate ideas like reputation. Stewart notes:
When you think about the [game theory] setup, that’s the most bleak circumstance you can imagine. So will you see cooperation here? It’s the most basic question. If you don’t, you move onto other approaches. But [if] you don’t even need those other elements to get cooperation – it’s actually a positive, sunny perspective.
The Evolution of Prisoner’s Dilemmas
Economists, mathematicians, and biologists have a long history of showing why cooperation will arise even under the pure self-interest framework of game theory. The model of choice for showing this is the prisoner’s dilemma.
The prisoner’s dilemma describes the situation facing two arrested criminals being worked by the police. Wikipedia does a dang good job describing it:
Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don’t have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. Here’s how it goes:
1) If A and B both confess the crime, each of them serves 2 years in prison
2) If A confesses but B denies the crime, A will be set free whereas B will serve 3 years in prison (and vice versa)
3) If A and B both deny the crime, both of them will only serve 1 year in prison
As anyone who has watched a crime show understands, the prisoners should stay loyal and remain silent to limit their prison time. But each prisoner can reduce his own sentence by confessing (“defecting,” in game theory terms), no matter what the other chooses. Under the selfish assumptions of game theory, each prisoner will confess.
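The logic can be checked mechanically. A short Python sketch of the payoffs above shows that, whatever B does, A always serves less time by confessing:

```python
# Years in prison for prisoner A, indexed by (A's choice, B's choice),
# using the numbers from the scenario above. Lower is better for A.
years_for_A = {
    ("confess", "confess"): 2,
    ("confess", "deny"):    0,
    ("deny",    "confess"): 3,
    ("deny",    "deny"):    1,
}

# Whatever B chooses, confessing leaves A with a shorter sentence.
for b_choice in ("confess", "deny"):
    assert years_for_A[("confess", b_choice)] < years_for_A[("deny", b_choice)]

print("Confessing is a dominant strategy for A.")
```

By symmetry the same holds for B, which is why both rational prisoners confess even though mutual silence would serve them better.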
Academics use game theory to understand much more than police interrogations. They generalize the dilemma as any situation in which two “players” can benefit from cooperation, yet have an incentive to betray each other and must make the decision blindly. The prisoner’s dilemma could apply, for example, to two large companies in a duopoly trying to cooperate in fixing their prices. But it’s not necessarily a one-off event. Researchers also model the “iterated prisoner’s dilemma,” a repeated game in which each round is one play of the prisoner’s dilemma.
Simulating the prisoner’s dilemma is a staple of introductory economics courses. This author can still remember the sting of choosing to cooperate when the other player defected. But when you play multiple rounds, staying silent begins to look like a better strategy.
One of the most successful strategies in iterated games is “tit for tat.” Under the tit for tat strategy, Player A starts by staying silent. For the rest of the game, Player A copies the previous move of the other player. If Player B cooperates, then Player A will cooperate in the next round. But if Player B defects, then Player A will defect in the next round.
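Tit for tat is simple enough to fit in a few lines. Here is a minimal sketch (the strategy interface and helper names are our own), playing it against an unconditional defector:

```python
def tit_for_tat(my_history, opp_history):
    """Cooperate first, then copy the opponent's previous move."""
    if not opp_history:
        return "C"          # be nice on the first round
    return opp_history[-1]  # then mirror whatever they did last

def always_defect(my_history, opp_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game and return both players' move histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

hist_a, hist_b = play(tit_for_tat, always_defect, 5)
print(hist_a)  # ['C', 'D', 'D', 'D', 'D'] -- one olive branch, then retaliation
```

Against a cooperator, the same code produces unbroken mutual cooperation, which is the whole appeal of the strategy.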
Whether we’re hunting and gathering, competing over the best grazing areas, or building similar iPhone apps, cooperation is often the best collective strategy even though selfishness can pay on an individual level. For this reason, academics have modelled the evolutionary basis for cooperation with prisoner’s dilemmas.
The most famous examples are the computer tournaments run by Robert Axelrod. Axelrod solicited strategies for the game, which he then pitted against each other in a round robin of 200 rounds of the prisoner’s dilemma. A second tournament was held to see if the winner of the first tournament could be bested. Tit for tat won both times.
In his analysis, Axelrod noted that the best performing strategies were all “nice”: they were never the first to defect. They also did not win by “beating” their opponents. Instead, tit for tat and other “nice” strategies increased the total share of spoils by pushing everyone toward the larger collective gains of cooperation.
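Axelrod’s actual entries were far more varied, but a toy round robin with a handful of textbook strategies (a sketch, not his real contestants) reproduces the pattern he observed: “nice” retaliating strategies finish on top, and unconditional defection finishes last. We use the standard point payoffs of 3 for mutual cooperation, 1 for mutual defection, and 5/0 when one player defects on a cooperator:

```python
# Payoffs: (my points, their points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return "C" if not theirs else theirs[-1]

def always_cooperate(mine, theirs):
    return "C"

def always_defect(mine, theirs):
    return "D"

def grudger(mine, theirs):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in theirs else "C"

def tit_for_two_tats(mine, theirs):
    # Defect only after two consecutive defections by the opponent.
    return "D" if theirs[-2:] == ["D", "D"] else "C"

def match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = [tit_for_tat, always_cooperate, always_defect,
              grudger, tit_for_two_tats]
totals = {s.__name__: 0 for s in strategies}
for i, s1 in enumerate(strategies):
    for s2 in strategies[i + 1:]:
        sc1, sc2 = match(s1, s2)
        totals[s1.__name__] += sc1
        totals[s2.__name__] += sc2

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:18} {score}")
```

Note that always_defect “beats” every individual opponent it faces, yet still comes in last overall: the nice strategies rack up the big mutual-cooperation scores against each other.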
Further research simulated the performance of tit for tat against other strategies in an evolving population. The more a strategy won, the more it converted other members to adopt its strategy, mimicking survival of the fittest.
The simulations found that tit for tat could survive an influx of players with selfish strategies (i.e. outsiders or mutants) and that tit for tat players could “invade” a population of players using noncooperative strategies. If evolution were a game, tit for tat was winning.
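A rough sketch of that kind of simulation (simple replicator dynamics with just two strategies, using the per-match scores a 200-round game produces) shows a small cluster of tit for tat players taking over a population of defectors:

```python
# Total scores from a deterministic 200-round match between each pairing:
# TFT vs TFT cooperate throughout (200 * 3); TFT loses its first round to
# a defector and then mutually defects (0 + 199 * 1, vs 5 + 199 * 1);
# defectors against each other score 200 * 1.
payoff = {
    ("TFT", "TFT"): 600, ("TFT", "ALLD"): 199,
    ("ALLD", "TFT"): 204, ("ALLD", "ALLD"): 200,
}

x = 0.05  # initial fraction of tit-for-tat players in the population
for generation in range(100):
    # Expected score of each strategy against a random member of the population.
    f_tft = x * payoff[("TFT", "TFT")] + (1 - x) * payoff[("TFT", "ALLD")]
    f_alld = x * payoff[("ALLD", "TFT")] + (1 - x) * payoff[("ALLD", "ALLD")]
    mean = x * f_tft + (1 - x) * f_alld
    x = x * f_tft / mean  # strategies reproduce in proportion to their fitness

print(f"tit-for-tat share after 100 generations: {x:.3f}")
```

A lone tit for tat player does slightly worse than the defectors around it; it is the cluster, earning big scores against its own kind, that lets the strategy invade.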
The results seem to show that even in a world of self-interested players, people that cooperate have an evolutionary advantage. The tit for tat strategy of cooperating as long as everyone else does is not exactly altruism. But it demonstrates why cooperation as a default succeeds and why playing on people’s generosity fails. It’s almost – as Stewart put it – a “sunny” outlook.
Game theory offered sunny conclusions until a new approach found a strategy for cheaters to consistently beat other players. Published in a paper entitled “Iterated Prisoner’s Dilemma contains strategies that dominate any evolutionary opponent,” the approach challenged the belief that playing the prisoner’s dilemma over and over would lead to cooperation.
As Stewart explained, the paper introduces the discovery of “zero determinant” strategies. Some nifty linear algebra proves that “you can write down a simple equation so that if we play the prisoner’s dilemma many times, my score will be proportional to yours. And if I fiddle with the proportion, I can find what’s the best I can do for myself.”
So in the case of two people playing the prisoner’s dilemma over and over, a player using a zero determinant strategy can control and win the game. As long as the opponent is unaware of zero determinant strategies (which is likely given the math involved and the fact that it cannot be applied or understood through a simple heuristic like tit for tat), the opponent’s best play is to give in to the extortion by cooperating.
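The paper’s authors, William Press and Freeman Dyson, give a concrete example for the standard point payoffs: cooperate with probabilities 11/13, 1/2, 7/26, and 0 after the four possible previous rounds, which enforces (my score − 1) = 3 × (your score − 1). A Monte Carlo sketch of that strategy against a naive unconditional cooperator (our own toy simulation, not code from the paper) shows the extortion at work:

```python
import random

random.seed(0)

# Press and Dyson's example extortionate strategy: the probability of
# cooperating, indexed by the previous round's (my move, their move).
p_cooperate = {("C", "C"): 11 / 13, ("C", "D"): 1 / 2,
               ("D", "C"): 7 / 26, ("D", "D"): 0.0}
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

rounds = 200_000
me, them = "C", "C"  # opening moves
my_total = their_total = 0
for _ in range(rounds):
    mine, theirs = PAYOFF[(me, them)]
    my_total += mine
    their_total += theirs
    me = "C" if random.random() < p_cooperate[(me, them)] else "D"
    them = "C"  # this opponent naively cooperates every round

my_avg, their_avg = my_total / rounds, their_total / rounds
print(f"extortioner: {my_avg:.2f} per round, cooperator: {their_avg:.2f}")
print(f"ratio of surpluses over 1: {(my_avg - 1) / (their_avg - 1):.2f}")
```

The cooperator still does better than mutual defection would leave him, which is exactly why giving in to the extortion is his best available play.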
Despite the presence of the word ‘evolution’ in the title of the paper, it only looks at a game between two players. As a biologist, Alexander Stewart noted that he would not use the same framework for two reasons. The first is that populations tend toward homogeneity: a mutation may lead to individuals that act differently, but eventually a strategy will either win out or disappear. The asymmetry of the zero determinant strategy, in which one player is aware of it and the other is not, does not make sense in an evolving population over the long term. (And if both players were aware of the strategy and knew that each was playing it, then it would be a matter of negotiating an outcome.) More importantly, evolution can’t be simulated with a one-on-one game.
Stewart and Plotkin investigated how these new zero determinant strategies fare in evolving populations. According to Stewart, zero determinant strategies are cool because they allow researchers to codify lots of possible strategies rather than manually proposing them – as was the case during Axelrod’s tournaments. But when Stewart and Plotkin did the math and ran the simulation for zero determinant and classic strategies in a large population, they found that nice guys finished first once again.
Stewart referred to the strategy that populations consistently took up as “generous tit for tat.”1 It was a zero determinant strategy, but one that chose an equitable split of the “points.” Generous tit for tat responds to people that refuse to cooperate by defecting in turn. But against cheating opponents, it periodically cooperates to offer the other player a chance to reciprocate. If tit for tat is a friend who reciprocates generosity, then generous tit for tat is a parent who is not naive, but still keeps offering you second chances.
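One way to see the value of that forgiveness is under noise. The sketch below uses the classic generous tit for tat of Nowak and Sigmund (which, per the footnote, is not the exact class Stewart and Plotkin study), forgiving a defection a third of the time, the commonly cited value for these payoffs. When moves occasionally flip by mistake, two ordinary tit for tat players lock into long runs of retaliation, while two generous players recover:

```python
import random

random.seed(1)

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tft(last_opp_move):
    return last_opp_move if last_opp_move else "C"

def gtft(last_opp_move):
    if last_opp_move in (None, "C"):
        return "C"
    return "C" if random.random() < 1 / 3 else "D"  # forgive a third of the time

def noisy_match(strategy, rounds=100_000, noise=0.05):
    """Two copies of `strategy` play each other; each move flips with prob `noise`."""
    last_a, last_b = None, None
    total = 0
    for _ in range(rounds):
        a, b = strategy(last_b), strategy(last_a)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        total += pa + pb
        last_a, last_b = a, b
    return total / (2 * rounds)  # average payoff per player per round

tft_avg = noisy_match(tft)
gtft_avg = noisy_match(gtft)
print(f"noisy TFT vs TFT:   {tft_avg:.2f} per round")
print(f"noisy GTFT vs GTFT: {gtft_avg:.2f} per round")
```

A single mistaken defection sends two tit for tat players into an endless echo of retaliation; the occasional second chance is what breaks the cycle.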
Dirk Willems making a good/terrible decision
Game theory and prisoner’s dilemmas are not the only ways to understand the evolution of generosity and cooperation. While discussing approaches like group selection and indirect reciprocity, Stewart told us that he thought they were “all important.” Other approaches can flesh out our understanding of the evolutionary basis for cooperation, and they are certainly needed to explain the altruistic actions of someone like Dirk Willems.
But game theory offers an important base for understanding cooperation and has an explanatory power other frameworks lack. If humans only cooperated with people in our family or tribe, or with people whose reputations we knew, then city life would be impossible. Stewart and Plotkin’s analysis found that the larger the population they studied, the more successful the generous strategies were in the simulation. Only small populations were vulnerable to the extortionist zero determinant strategies – a fact that may help explain why the plotline of a charming stranger seducing and then taking advantage of a small town’s inhabitants is so prevalent.
The discovery of zero determinant strategies has set the small circle of game theorists abuzz. They will be useful for analyzing situations like the Cold War, in which there are two players whose goal is to “beat” their opponent. (Or perhaps we missed a lesson there.) But rather than challenge game theory’s sunny findings about cooperation, zero determinant strategies have vindicated them. And by showing the advantages of a more generous tit for tat strategy over regular tit for tat, the research even provides evidence for the benefits of generosity.
The discovery of a curmudgeon strategy for manipulating others has actually helped show us that we may be more generous than we realized.
1 In communication after the publication of this article, Alexander Stewart clarified that generous-tit-for-tat is a strategy that has been around for a while. In their paper, Stewart and Plotkin actually refer to a class of generous-tit-for-tat strategies that are less generous (and more successful) than the generous-tit-for-tat strategy. He also noted that the terminology in the field is horribly cluttered.