Economic game theory's "folk theorem" is not empirically relevant

I study a lot of game dynamics: how people learn as they make the same socially-inflected decision over and over. A branch of my career has been devoted to finding out that people do neat, unexpected things that established models completely fail to predict. As in most things, what opposition this work meets looks less like resistance than indifference. One concrete reason, in my area, is that it is old news that strange things can happen in repeated games. That is thanks to the venerated folk theorem. As Fisher (1989) put it, the “folk theorem” is as follows:

in an infinitely repeated game with low enough discount rates, any outcome that is individually rational can turn out to be a Nash equilibrium (Fudenberg and Maskin, 1986). Crudely put: anything that one might imagine as sensible can turn out to be the answer

It is a mathematical result, a result about formal systems. And it is used to say that, in the real world, anything goes in the domain of repeated games. But it can’t be wrong: no matter what one finds in the real world, a game theorist could say “Ah yes, the folk theorem said that could happen.” What does that mean for me? Good news. The folk theorem, as much as we love it, is fine logic, but it isn’t science. It says a lot about systems of equations, but because it can’t be falsified, it has nothing to offer the empirical study of human behavior.
Oh, FYI, I’d love to be wrong here. If you can find a way to falsify the Folk Theorem, let me know. Alternatively, I’d love to find a citation that says this better than I do here.
Fisher, F. M. (1989). Games Economists Play: A Noncooperative View. The RAND Journal of Economics, 20(1), 113. DOI: http://dx.doi.org/10.2307/2555655


Use Shakespeare criticism to inspire language processing research in cognitive science

I have a side-track of research in the area of “empirical humanities.” I got to present this abstract recently at a conference called “Cognitive futures in the humanities.”

It might seem self-evident that “the pun … must be noticed as such for it to work its poetic effect.” Joel Fineman says it confidently in his discussion of Shakespeare’s “Sonnet 132.” But experimental psychologists have proven that people are affected by literary devices that they did not notice. That is a problem with self-evidence, and it reveals one half of the promise of empirical humanities.
Counterintuition pervades every aspect of language experience. Consider the four versions of the following sentence, and how the semantic connections they highlight could affect conscious recognition of the malapropism at pack: “Parker could not have died by [suicide/cigarettes], as he made a [pact with the devil/pack with the devil] that guaranteed immortal life.” Pack is an error. Cigarette semantically “primes” it, just as suicide primes pact. Will readers be more disturbed by pack when it is primed, or less? Does cigarette disguise pack or make it pop out? Classic theories in cognitive science would argue for the latter, that priming the malapropism will make it more disruptive and harder to miss. But no scientific theory has considered the alternative. I hadn’t myself until I reviewed the self-evidence of Shakespeare scholar Stephen Booth. This is the other half of the promise of empirical humanities. Literary criticism can reveal new possibilities in unquestioned cognitive theories, and inspire new tracks of thought.
After reviewing some lab work in the human mind, and some literary fieldwork there, I will tell you what cigarette does to pack.

It was fun spending a week learning how humanities people think. The experiment is joint work with Melody Dye and Greg Cox.


The law of welfare royalty

To propose that human society is governed by laws is generally foolhardy. I wouldn’t object to a Law of Social Laws along the lines of “all generalizations are false.” But this one has a bit going for it, namely that it depends only on the inherent complexity of society and on human limits. Those are things we can count on.

The law of welfare royalty: Every scheme for categorizing members of a large-scale society will suffer from at least one false positive and at least one false negative.

The law says that every social label will be misapplied in two ways: It will be used to label people it shouldn’t (false positive), and it will fail to be applied to people it should (false negative). Both errors will exist.
The ideas of false positives and false negatives come from signal detection theory, which is about labeling things. If you fire a gun in the direction of someone who might be friend or foe, four things can happen: a good hit, a good miss, a bad hit (friendly fire), and a bad miss. Failing to keep all four outcomes in mind leads to bad reasoning about humans and society, especially when it comes to news and politics.
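To make the four outcomes concrete, here is a minimal sketch with made-up people and labels; it just tallies the 2x2 outcomes of a labeling scheme and turns them into hit and false-positive rates.

```python
# A minimal sketch of the four signal-detection outcomes for any labeling scheme.
# The "truth" and "label" lists below are made up, purely for illustration.

def confusion(truth, label):
    """Tally hits, misses, false positives, and correct rejections."""
    hit = sum(t and l for t, l in zip(truth, label))              # in the category, labeled
    miss = sum(t and not l for t, l in zip(truth, label))         # in the category, not labeled
    false_pos = sum((not t) and l for t, l in zip(truth, label))  # not in the category, labeled anyway
    correct_rej = sum((not t) and (not l) for t, l in zip(truth, label))
    return hit, miss, false_pos, correct_rej

# Ten hypothetical people: True means "really belongs in the category"
truth = [True, True, False, False, False, False, False, True, False, False]
label = [True, False, True, False, False, False, False, True, False, True]

hit, miss, fp, cr = confusion(truth, label)
print(f"hits={hit}  misses={miss}  false positives={fp}  correct rejections={cr}")
print(f"hit rate = {hit / (hit + miss):.2f}, false positive rate = {fp / (fp + cr):.2f}")
```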
Examples:

  • No matter how generous a social welfare system, it will always be possible to find someone suffering from starvation and exposure, and to use their story to argue for more generosity.
  • No matter how stingy and inadequate a welfare system, it will always be possible to cry “waste” and “scandal” on some kind of welfare royalty abusing the system.
  • No matter the inherent threat of violence from a distant ethnic group, it will always be possible to report both a very high and a very low threat of violence.
  • Airport security measures are all about tolerating a very very high rate of false positives (they search everybody) in order to prevent misses (letting actual terrorists board planes unsearched), but it cannot be guaranteed to succeed, and the cost of searching everybody has to be measured against that.
  • In many places, jaywalking laws are only used to shut down public protests. During street protests, jaywalking laws have a 0% hit rate and a 0% correct reject (true negative) rate: they never catch people they should, and they catch all of the people they shouldn’t.

The law of welfare royalty is important for how we think about society and social change. The upshot is that trustworthy reporting about social categories must rest on lots of data. Anecdotes will always be available to support any opinion about any policy acting on society. You can also infer from my formulation of the law a corollary that there will always be a talking head prepared to support your opinion, though that isn’t so deep or interesting or surprising.
In fact, none of this is so surprising once a person thinks about it. The challenge is getting a person to think about it, even once. That’s the value of giving the concept a name. If I could choose one facet of statistical literacy to upload into the head of every human being, it would be a native comfort with the complementary concepts of false positives and negatives. Call it a waste of an upload if you want, but signal detection theory has become a basic part of my daily intellectual hygiene.


Back by one forward by two: Does planning for norm failure encourage it?

Most people who care about resource management care about big global common resources: oceans, forests, rivers, the air. But the commons that we deal with directly — shared fridges, flagging book clubs, public restrooms — may be as important. These “mundane” commons give everyday people experiences of governance, possibly the only type of experience that humanity can rely on to solve global commons dilemmas.
I think that’s important, and so the problems of maintaining mundane commons always get me. One community of mine, my lab, has recently had trouble with a norm of “add one, clean two.” Take a sink shared by many people, at an office or in a community. There are a million ways to keep this kind of resource clean, and I see new ideas everywhere I look. Still, most shared sinks have dirty dishes. One recently proposed idea was “add one, clean two.” If you can’t count on every individual to clean their own dish, why not appeal to the prosocial people (the ones most likely to discuss the problem as a problem) to clean two dishes for every one they add?
On the one hand, this cleverly embraces heterogeneity of cooperativeness to solve an institutional design problem. On the other, a norm built on the premise that violators exist makes it OK for people to continue to leave their dishes unwashed. It isn’t clear to me what conditions would make the first effect overpower the second. Seems testable though.


Common-knowledge arbitrage

Hypothesis 1: Ask people what they think about a stock or a political issue, and also what they think “most people” think. Where these guesses are the same, predictions about the outcome will be right. Where they differ, outcomes will have more upsets.
There are a few places where I would ultimately want to see this perspective go. One would look at advertising and other goal-oriented broadcasts as attempts to strategically create a difference between what people think and what they think others think. Another would try to predict changes in financial markets based on these differences. This perspective will be useful in any domain where people don’t merely act on what they think, but also on the difference between that and their estimate of common knowledge. It will also be useful in domains where people’s expressed opinions differ from their privately held ones.
Hypothesis 2: Holding everything else fixed, average opinion and the average of estimates of public opinion will tend toward being equal.
If this second guess is true, a systematic significant difference between the average opinion and the average estimate of public opinion could provide an objective measure of propaganda pressure, one that could be used to assign a number to the strength of social pressure that is being applied by a goal-oriented agent working on a population through the mass media ecosystem.
But maybe that is too conspiracy theory-ey, and too top-down. The same measure could indicate a bottom-up dynamic. Take a social taboo that is privately ignored but still publicly upheld. In such a domain, it will be common for expressed opinions to differ from held opinions, which will drive a consistent non-zero difference between average opinion and the average estimate of public opinion. Across a dozen taboos, those with a large or growing divergence will be those most likely to become outmoded. Anecdotally, I’m thinking here of the surprising, and surprisingly robust, changes in opinion and policy around controlled substances, most striking in California.
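To make the measure concrete, here is a minimal sketch of the comparison I have in mind, with made-up numbers; the simple difference of means is my own stand-in for whatever statistic would actually be used.

```python
# Sketch of the divergence measure behind Hypothesis 2: compare the average of
# privately held opinions with the average of people's estimates of public opinion.
# Opinions are on an arbitrary -1..1 scale; all numbers here are invented.

own_opinion      = [0.6, 0.2, -0.1, 0.7, 0.4, 0.5, -0.3, 0.8]   # what each person thinks
estimated_public = [0.1, 0.0, -0.2, 0.2, -0.1, 0.0, -0.4, 0.1]  # what each thinks "most people" think

mean_own = sum(own_opinion) / len(own_opinion)
mean_estimated = sum(estimated_public) / len(estimated_public)

# Under Hypothesis 2 this difference should hover near zero; a large, persistent
# gap is the candidate signal of propaganda pressure (or of a decaying taboo).
divergence = mean_own - mean_estimated
print(f"average opinion: {mean_own:+.2f}")
print(f"average estimate of public opinion: {mean_estimated:+.2f}")
print(f"divergence: {divergence:+.2f}")
```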
Hypothesis 3: This is a little idle, but I would also guess that people with larger differences tend to be less happy, particularly where the differences concentrate on highly politicized topics. Causation there could go either way — I’d guess both ways.
This subject has some relationship to extensions of Schelling’s opinion models and to my dissertation work (on surprising group-scale effects of “what you think I think you think I think” reasoning).


Do social preferences break "I split, you choose"?

Hypothesis: Social preferences undermine the fairness, efficiency, and stability of “I cut, you choose” rules.
A lot of people chafe at the assumptions behind game theory and standard economic theory, and I don’t blame them. If those theories were right, a lot of things in our daily lives wouldn’t work as well as they obviously do. But I came up with an example of the opposite: an everyday institution that would work a lot better if we weren’t so generous and egalitarian — if we didn’t have “social preferences.” Maybe; this is just a hypothesis, one that I may never get around to testing, but here it is.
“I cut, you choose” is a pretty common method for splitting things. Academically, it is appealing because it is easy to describe mathematically. It is a clean real-world version of a classic Nash bargaining problem. There is a finite resource and two agents must agree about how to split it. The first person divides it into two parts and the second is free to pick the bigger one. It is common in domains where the resource is hard to split evenly. The splitter knows that the picker will choose the larger part, and that he or she can do no better than getting 50%. This incentivizes the splitter to try for a completely fair distribution. Binmore has a theory that cultural evolution will select for social situations that are stable, efficient, and fair, and “I split, you choose” has those qualities, in theory.
It sounds fine, and I’ve seen it work great, but I’ve also seen it go wrong, particularly among the guilty and shy. In the splitter role they get anxious, and in the receiver role they tend to pick the smaller share. It might sound heartless for someone to exploit that, but my wonderful boss did: He was splitting a candy bar with an anxious friend and proposed “I split, you choose.” He volunteered to be the splitter, and proceeded to divide the bar blatantly 70/30. What did the victim do? He knew he was being manipulated, he watched the split with horror, but, however wounded, he mysteriously picked the smaller share. Social preferences, in that case, make “I split, you choose” into an institution that is neither stable nor fair and, if it’s efficient, it’s only because every possible outcome is equally efficient.
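A minimal sketch of how that breaks, under an assumption of my own: suppose the picker takes the larger piece only with some probability p, out of guilt or shyness, and the splitter chooses the division that maximizes their expected share. The candidate splits and probabilities are invented for illustration.

```python
# Sketch of how a guilt-prone picker changes the splitter's optimal division.
# p is the probability the picker actually takes the larger piece; with textbook
# selfish preferences p = 1 and the splitter can do no better than 50/50.

def expected_splitter_share(larger_fraction, p_take_larger):
    """The splitter keeps whichever piece the picker leaves behind."""
    smaller_fraction = 1.0 - larger_fraction
    return p_take_larger * smaller_fraction + (1 - p_take_larger) * larger_fraction

candidate_splits = [0.5, 0.6, 0.7, 0.9]   # possible sizes of the larger piece
for p in (1.0, 0.7, 0.4):                 # lower p = more guilt-prone picker
    best = max(candidate_splits, key=lambda s: expected_splitter_share(s, p))
    print(f"p(picker takes larger piece) = {p:.1f} -> splitter's best larger piece: {best:.0%}")
```

With a textbook selfish picker the even split wins; once the picker is more likely than not to defer, the most lopsided split on offer becomes the splitter’s best move, which is roughly what happened with the candy bar.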
That’s interesting because we normally think of game theory as this sterile thing that implies a selfish existence whose only redeeming value is that it’s contradicted by our social preferences, which make everything better. But, if I’m right, this is a clean example of the opposite. Game theory would be offering a very nice clean institution, and social preferences break it.


Xeno's paradox

There is probably some very deep psychology behind the age-old tradition of blaming problems on foreigners. These days I’m a foreigner, in Switzerland, and so I get to see how things are and how I affect them. I’ve found that I can trigger a change in norms even by going out of my way to have no effect on them. It’s a puzzle, but I think I’ve got it modeled.
In my apartment there is a norm (with a reminder sign) around locking the door to the basement. It’s a strange custom, because the whole building is safe and secure, but the Swiss are particular and I don’t question it. Though the rule was occasionally broken in the past (hence the sign), residents in my apartment used to be better about locking the door to the basement. The norm is decaying. Over the same time period, the number of foreigners (like me) has increased. From the naïve perspective, the mechanism is obvious: Outsiders are breaking the rules. The mechanism I have in mind shows some of the subtlety that is possible when people influence each other under uncertainty. I’m more interested in the possibility that this can exist than in showing it does. Generally, I don’t think of logic as the most appropriate tool for fighting bigotry.
When I moved into this apartment I observed that the basement door was occasionally unlocked, despite the sign. I like to align with how people are instead of how the signs say they should be, and so I chose to remain a neutral observer for as long as possible while I learned how things run. I adopted a heuristic of leaving things how I found them. If the door was locked, I locked it behind me on my way out, and if it wasn’t, I left it that way.
That’s well and good, but you can’t just be an observer. Even my policy of neutrality has side effects. Say that the apartment was once full of Swiss people, including one resident who occasionally left the door unlocked but was otherwise perfectly Swiss. The rest of the residents are evenly split between orthodox door lockers and others who could go either way and so go with the flow. Under this arrangement, the door stays locked most of the time, and the people on the cusp of culture change stay consistent with what they are seeing.
Now, let’s introduce immigration and slowly add foreigners, but a particular kind that never does anything. These entrants want only to stay neutral and they always leave the door how they found it. If the norm of the apartment was already a bit fragile, then a small change in the demographic can tip the system in favor of regular norm violations.
If the probability of adopting the new norm depends on the frequency of seeing it adopted, then a spike in norm adoptions can cause a cascade that makes a new norm out of violating the old one. This is all standard threshold model: Granovetter, Schelling, Axelrod. Outsiders change the model by creating a third type that makes it look like there are more early adopters than there really are.
Technically, outsiders translate the threshold curve up without otherwise changing its shape. In equations, (1) is the cumulative function representing the threshold model: it sums a positive function f() up to percentile X and returns a value Y, read as “X% of people (early adopters, E, plus non-adopters, N) need to see that at least Y% of others have adopted before they do.” Equation (2) shifts equation (1) up by the fraction of outsiders, O, times their probability of encountering an adopter rather than a non-adopter:

(1) Y(X) = Σ_{x=0..X} f(x)

(2) Y′(X) = Y(X) + O · E / (E + N)
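For concreteness, here is a small simulation sketch of that story. All of the numbers (thresholds, population sizes, the two habitual norm-breakers) are invented; the point is only the mechanism, that outsiders who mirror whatever they find inflate the apparent number of adopters.

```python
import random

# A simulation sketch of the threshold story above, with invented numbers.
# Residents adopt the norm violation ("leave the door unlocked") once the fraction
# of unlocked observations exceeds their personal threshold. Outsiders never decide
# anything; they leave the door how they found it, which makes them indistinguishable
# from adopters whenever the door happened to be unlocked.

random.seed(1)
N_RESIDENTS, N_OUTSIDERS, ROUNDS = 20, 8, 50

thresholds = sorted(random.uniform(0.05, 0.6) for _ in range(N_RESIDENTS))
resident_unlocks = [True, True] + [False] * (N_RESIDENTS - 2)  # two habitual norm-breakers
outsider_mirrors = [False] * N_OUTSIDERS                       # outsiders start by copying "locked"

for _ in range(ROUNDS):
    unlocked_fraction = (sum(resident_unlocks) + sum(outsider_mirrors)) / (N_RESIDENTS + N_OUTSIDERS)
    # Residents on the cusp adopt the violation once it looks common enough.
    resident_unlocks = [already or unlocked_fraction >= t
                        for already, t in zip(resident_unlocks, thresholds)]
    # Each outsider's next observed state mirrors the current unlocked fraction.
    outsider_mirrors = [random.random() < unlocked_fraction for _ in outsider_mirrors]

print(f"residents now leaving the door unlocked: {sum(resident_unlocks)}/{N_RESIDENTS}")
```

Nothing in the sketch requires outsiders to break the rule themselves; they only amplify whatever they happen to observe.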
If you plug in some plausible numbers, you will see that the system needs either a lot of adopters or a lot of outsiders before these hypothetical neutral outsiders can shift the contour very far up. That says to me that I’m probably wrong, since I’m probably the only one following my rule. My benign policy probably isn’t the explanation for the trend of failures to lock the basement door.
This exercise was valuable mostly for introducing a theoretical mechanism showing how outsiders could fail to be responsible for a social change even when the change seems to have arrived with them. Change can come with disinterested outsiders if the system is already leaning toward a change, because outsiders can be mistaken for true adopters and magnify the visibility of a minority of adopters.

Update a few months later

I found another application. I’ve always wondered how it is that extreme views — like extreme political views — take up so much space in our heads even though the people who actually believe those things are so rare. I’d guess that we have a bias toward overestimating how many people are active in loud minorities, anything from the Tea Party to goth teenagers. With a small tweak, this model can explain how being memorable can make your social group seem to have more converts than it has, and thereby encourage more converts. Just filter people’s estimates of different groups’ representations through a memory of every person they have seen in the past few months, with a bias toward remembering memorable things. I’ve always thought that extreme groups are small because they are extreme, but this raises the possibility that it’s the other way around: when you’re small, being extreme is a pretty smart growth strategy.


The empirics of identity: Over what timescale does self-concept develop?

There is little more slippery than who we think we are. It is mixed up with what we do, what we want to do, who we like to think we are, who others think we are, who we think others want us to think we are, and dozens of other equally slippery concepts. But we emit words about ourselves, and those statements — however removed from the truth — are evidence. For one, their changes over time can give insight into the development of self-concept. Let’s say that you just had a health scare and quit fast food. How long do you have to have been saying “I’ve been eating healthy” before you start saying “I eat healthy”? A month? Three? A few years? How does that time change with topic, age, sex, and personality? Having stabilized, what is the effect of a relapse in each of these cases? Are people who switch more quickly to “I eat healthy” more or less prone to sustained hypocrisy — hysteresis — after a lapse into old bad eating habits? And, on the subject of relapse, how do statements about self-concept feed back into behavior? All else being equal, do ex-smokers who “are quitting” relapse more or less than those who “don’t smoke”? What about those who “don’t smoke” against those who “don’t smoke anymore”; does including the regretted past make a return to it more or less likely? With the right data — large longitudinal corpora of self-statements and creative, ambitious experimental design — these may become empirical questions.


The market distribution of the ball, a thought experiment.

The market is a magical thing. Among other things, it has been entrusted with much of the production and distribution of the world’s limited resources. But markets-as-social-institutions are hard to understand because they are tied up with so many other ideas: capitalism, freedom, inequality, rationality, the idea of the corporation, and consumer society. It is only natural that the value we place on these abstractions will influence how we think about the social mechanism called the market. To remove these distractions, it will help to take the market out of its familiar context and put it to a completely different kind of challenge.

Basketball markets

What would basketball look like if it was possible to play it entirely with markets, if the game was redesigned so that players within a team were “privatized” during the game and made free of the central planner, their stately coach: free to buy and sell favors from each other in real time and leave teamwork to an invisible hand?  I’m going to take my best shot, and in the process I’ll demonstrate how much of our faith in markets is faith, how much of our market habit is habit.
We don’t always know why one player passes to another on the court. Sometimes the ball goes to the closest or farthest player, or to the player with the best position or opening in the momentary circumstances of the court. Sometimes all players are following the script for this or that play. Softer factors may also figure in, like friendship or even the feeling of reciprocity. It is probably a mix of all of these things.  But the market is remarkable for how it integrates diverse sources of information.  It does so quickly, adapting almost magically, even in environments that have been crafted to break markets.
So what if market institutions were used to bring a basketball team to victory? For that to work, we’d have to suspend a lot of disbelief, and make a lot of things true that aren’t. The process of making those assumptions explicit is the process of seeing the distance of markets from the bulk of real world social situations.
The most straightforward privatization of basketball could class behavior into two categories, production (moving the ball up court) and trade (passing and shooting). In this system, the coach has already arranged to pay players only for the points they have earned in the game. At each instant, players within a team are haggling with the player in possession, offering money to get the ball passed to them. Every player has a standing bid for the ball, based on their probability of making a successful shot. The player in possession has perfect knowledge of what to produce, of where to go to have either the highest chances of making a shot or of getting the best price for the ball from another teammate.
If a player calculates a 50% chance of successfully receiving the pass and making a 3-point shot, then that pass is worth 1.5 points to him. At that instant, 1.5 will be that player’s minimum bid for the ball, which the player in possession is constantly evaluating against all other bids. If, having already produced the best possible set of bids, any bid is greater than the possessing player’s own estimated utility from attempting the shot, then he passes (and therefore sells) to the player with the best offer. The player in possession shoots when the expected value of his own shot exceeds every standing bid and every (perfectly predicted) benefit of moving.
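A minimal sketch of that decision rule, with invented names and probabilities; bids are priced purely in expected points, exactly the simplification described above.

```python
# Sketch of the pass/shoot market described above. Every teammate's standing bid
# is their expected points if they get the ball (probability of scoring x shot value);
# the player in possession shoots only if his own expected points beat every bid.
# All names and numbers are invented for illustration.

def expected_points(p_make, shot_value):
    return p_make * shot_value

# Standing bids from teammates: probability of receiving the pass and scoring, times shot value.
bids = {
    "teammate_A": expected_points(0.50, 3),  # the 1.5-point bid from the example
    "teammate_B": expected_points(0.55, 2),
    "teammate_C": expected_points(0.30, 3),
}

own_shot = expected_points(0.45, 2)  # possessing player's own expected points

best_teammate, best_bid = max(bids.items(), key=lambda kv: kv[1])
if own_shot >= best_bid:
    print(f"shoot: own expected points {own_shot:.2f} beat the best bid {best_bid:.2f}")
else:
    print(f"pass (sell) to {best_teammate}: their bid {best_bid:.2f} beats {own_shot:.2f}")
```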
A lot is already happening, so it will help to slow down. The motivating question is: how would reality have to change for this scheme to lead to good basketball? Most obviously, the pace of market transactions would have to speed up dramatically, so that making, selecting, and completing transactions happened instantaneously and unnoticeably. Either time would have to freeze at each instant or the transaction costs of managing the auction institution would have to be reduced to an infinitesimal. Similarly, each player’s complex and inarticulable process of calculating their subjective shot probabilities would have to be instantaneous as well.
Players would have to be more than fast at calculating values and probabilities; they would also have to be accurate. If players were poor at calculating their subjective shot probabilities, and at somehow converting those into cash values, they would not be able to translate a moment’s strategic advantage into the market’s language. And it would be better that players’ bids reflect only the probability of making a shot, and not any other factors. If players’ bids incorporate non-cash values, like the value of being regarded well by others, or the value of not being in pain, then passes may be over- or under-valued. To prevent players from incorporating non-cash types of value, the coach has to pay enough per point to drown out the value of these other considerations. Unlike other parts of this thought experiment, that is probably already happening.
It would not be enough for players to accurately calculate their own values and probabilities; they would have to calculate those of every other player, at every moment. Markets are vulnerable to asymmetries in information. If these estimates weren’t common knowledge, players could take advantage of each other, artificially inflating prices and reducing the efficiency of the team (possibly in both the technical and colloquial senses). Players that fail to properly value or anticipate future costs and benefits will pass prematurely and trap their team in suboptimal states, local maxima. To prevent that kind of shortsightedness, exactly the kind of shortsightedness that teamwork and coaching are designed to prevent, players would have to be capable not only of perfect trading, but of perfect production. Perfect production would mean knowing where and when on the court a pass or a shot will bring the highest expected payoff, factoring in the probability of getting to that location at that time.
I will be perfectly content to be proven wrong, but I believe that players who could instantaneously and accurately put a tradable cash value on their current and future state — and on the states of every other player on the court — could use market transactions to create perfectly coherent teams. In such a basketball, the selfish pursuit of private value could be maneuvered by the market institution to guarantee the good of the team.

The kicker

With perfect (instantaneous and accurate) judgement and foresight, a within-team system of live ball-trading could produce good basketball. But with those things, a central planner could also produce good basketball. Even an anarchist system of shared norms and mutual respect could do so. In fact, as long as those in charge all share the goal of winning, the outputs of all forms of governance will become indistinguishable as transaction costs, judgement errors, and prediction errors fall to zero. With no constraints it doesn’t really matter what mechanisms you use to coordinate individual behavior to produce optimal group behavior.
So the process of making markets workable on the court is the process of redeeming any other conceivable form of government. Suddenly it’s trivial that markets are a perfect coordination mechanism in a perfect world. The real question is which of these mechanisms is closest to its perfect form in this, the real world. Markets are not. In some cases, planned economies like board-driven corporations and coach-driven teams probably are.

Other institutions

What undermines bosshood, what undermines a system of mutual norms, and what undermines markets?  Which assumptions are important to each?  

  • A coach can prescribe behavior from a library of taught plays and habits. If the “thing that is the best to do” changes at a pace that a coach can meaningfully engage with, and if the coached behavior can be executed by players on this time scale, then a coach can prescribe the best behavior and bring the team close to perfect coherence.
  • If players have a common understanding of what kinds of coordinated behavior are best for what kinds of situations, and they reliably and independently come to the same evaluation of the court, then consensual social norms can approximate perfect coherence satisfactorily.
  • And if every instant on the court is different, and players have a perfect ability to evaluate the state of the court and their own abilities, then an institution that organizes self-interest for the common good will be the one that brings the team closest to perfect coherence.

Each has problems, each is based on unrealistic assumptions, each makes compromises, and each has its place. But even now the story is still too simple. What if all of those things are true at different points over the course of a game? If the answer is “all of the above,” players should listen to their coach, but also follow the norms established by their teammates, and also pursue their own self-interest. From here, it is easy to see that I am describing the status quo. The complexity of our social institutions must match the complexity of the problems they were designed for. Where that complexity is beyond the bounds that an individual can comprehend, the institutional design should guide them in the right direction. Where that complexity is beyond the bounds of an institution, it should be allowed to evolve beyond the ideological or conceptual boxes we’ve imposed on it.

The closer

Relative to the resource systems we see every day, a sport is a very simple world. The rules are known, agreed upon by both teams, and enforced closely. The range of possible actions is carefully prescribed and circumscribed, and the skills necessary to thrive are largely established and agreed upon. The people occupying each position are world-class professionals. So if even basketball is too complicated for any but an impossible braid of coordination mechanisms, why should the real world be any more manageable? And what reasonable person would believe that markets alone are up to the challenge of distributing the world’s limited resources?

Note

It took a year and a half to write this. Thanks to Keith Taylor and Devin McIntire for input.


Breaking the economist's monopoly on the Tragedy of the Commons.

Summary

After taking attention away from economic rationality as a cause of overexploitation of common property, I introduce another more psychological mechanism, better suited to the mundane commons of everyday life. Mundane commons are important because they are one of the few instances of true self-governance in Western society, and thus one of the few training grounds for civic engagement. I argue that the “IAD” principles of the Ostrom Workshop, well-known criteria for self-governance of resource systems, don’t speak only to the very narrow Tragedy of the Commons, but to the more general problem of overexploitation.

Argument

The Tragedy of the Commons is the tragedy of good fudge at a crowded potluck. Individual guests each have an incentive to grab a little extra, and the sum of those extra helpings causes the fudge to run out before every guest gets their share. For another mundane example, I’ve seen the same with tickets for free shows: I am more likely to request more tickets than I need if I expect the show to be packed.
The Tragedy has been dominated by economists, defined in terms of economic incentives. That is interesting because the Tragedy is just one mechanism for the very general phenomenon of overexploitation. In predatory animal species that are not capable of rational deliberation, population imbalances caused by cycles, introduced species, and overpopulation can cause their prey species to be overexploited. The same holds between infectious agents and their hosts: parasites or viruses may wipe out their hosts and leave themselves nowhere else to spread. These may literally be tragedies of commons, but they have nothing to do with the Tragedy as economists have defined it, and as researchers treat it. In low-cost, routine, or entirely non-economic domains, humans themselves are less likely to be driven by economic incentives. If overexploitation exists in these domains as well, then other mechanisms must be at work.
Economics represents the conceit that human social dynamics are driven by the rational agency that distinguishes us from animals. The Tragedy is a perfect example: Despite the abundance of mechanisms for overexploitation in simple animal populations, overexploitation in human populations is generally treated as the result of individually rational deliberation. But if we are also animals, why add this extra deliberative machinery to explain a behavior that we already have good models for?
I offer an alternative mechanism that may be responsible for engendering overexploitation of a resource in humans. It is rooted in a psychological bias. It may prove the more plausible mechanism in the case of low cost/low value “mundane” commons, where the incentives are too small for rational self-interest to distinguish itself from the noise of other preferences.
This line of thinking was motivated by many years of experience in shared living environments, which offer brownies at potlucks, potlucks generally, dishes in sinks, chores in shared houses, trash in shared yards, book clubs, and any instance where everyday people have disobeyed my culture’s imperative to distribute all resources under a system of private property. The imperative may be Western, or modern, or it may just be that systems of private property are the easiest for large central states to maintain. Defiance of the imperative may be intentional, accidental, incidental, or as mundane as the resource being shared.
Mundane commons are important for political science, and political life, because they give citizens direct experience with self-governance. Theorists from Alexis de Tocqueville to Vincent Ostrom argue that this is the kind of citizen education that democracies must provide if they aren’t going to fall to anarchy on the one side or powerful heads of state on the other. People cannot govern themselves without training in governance. I work in this direction because I believe that a culture of healthy mundane commons will foster healthy democratic states.
I don’t believe that the structural mechanisms of economics are those that drive mundane resource failure. This belief comes only from unstructured experience, introspection, and intuition. But those processes have suggested an alternative: the self-serving bias. Self-serving bias, interpreting information in a way that benefits us at the expense of others, is well-established in the decision-making literature.
How could self-serving bias cause overexploitation? Let’s say that it is commonly known that different people have different standards for acceptable harvesting behavior. This is plausible in low-cost/low-reward environments, where noise and the many weak and idiosyncratic social preferences of a social setting might drown out any effects of the highly motivated, goal-oriented, profit-maximizing behavior that economists attend to. I know my own preference for the brownies, but I have uncertainty about the preferences of others for them. If, for every individual, self-serving bias is operating on that uncertainty about the preferences of others, then every person in the group may decide that they like brownies more than the other people do, and that their extra serving is both fair and benign.
The result will be the overexploitation that results from the Tragedy of the Commons, and from the outside it may be indistinguishable from the Tragedy, but the mechanism is completely different. It is an interesting mechanism because it is prosocial: no individual perceives that their actions were selfish or destructive. It predicts resource collapse even among agents who identify as cooperative.
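Here is a toy version of that mechanism, with all parameters invented: every guest has roughly the same true taste for brownies, but each one's guess about everyone else's taste is shaded downward, and each guest takes what looks to them like a fair proportional share.

```python
import random

# A toy model of self-serving overexploitation at a potluck, with invented numbers.
# Everyone's true taste for the brownies is about the same, but each guest's
# estimate of everyone else's taste is biased downward (the self-serving bias),
# so each guest concludes that a bigger-than-average helping is fair.

random.seed(0)
N_GUESTS, SUPPLY = 10, 10.0            # ten guests, ten servings of brownies
BIAS = 0.3                             # how much each guest discounts others' appetites

true_preference = [random.gauss(1.0, 0.1) for _ in range(N_GUESTS)]

claimed = 0.0
for me in range(N_GUESTS):
    # I know my own preference but self-servingly underestimate everyone else's.
    estimated_others = [true_preference[other] - BIAS
                        for other in range(N_GUESTS) if other != me]
    my_fair_looking_share = true_preference[me] / (true_preference[me] + sum(estimated_others))
    claimed += my_fair_looking_share * SUPPLY

print(f"supply: {SUPPLY:.1f} servings; total the guests judge fair to claim: {claimed:.1f}")
```

No guest in this toy ever intends to take more than what looks fair to them, and the total claimed still overshoots the supply.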
The self-serving bias can help to answer a puzzle in the frameworks developed by the Ostrom Workshop. In their very well-known work, members of the Workshop identified eight principles that are commonly observed in robust common-property regimes. But only one of these, “graduated sanctions,” speaks directly to rational self-interest. The other principles invoke the importance of definitions, of conflict resolution, of democratic representation, and other political and social criteria.
Why are so many of the design principles irrelevant to rational self-interest, the consensus mechanism behind the Tragedy? Because it is not the only cause of overexploitation in self-governing resource distribution systems. The design principles are not merely a solution to the economist’s Tragedy of the Commons, but to the more general problem of overexploitation, with all of the many mechanisms that encourage it. If that is the case, then principles that don’t speak to the Tragedy may still speak to other mechanisms. For my purposes, the most relevant is Design Principle 1, in both of its parts:

1A User boundaries:
Clear boundaries between legitimate users and nonusers must be clearly defined.
1B Resource boundaries:
Clear boundaries are present that define a resource system and separate it from the larger biophysical environment.
(http://www.ecologyandsociety.org/vol15/iss4/art38)

By establishing norms, and the common knowledge of norms, this principle may prevent self-serving bias from promoting overexploitation. Norms provide a default preference to fill in for others when their actual preferences are unknown. By removing uncertainty about the preferences of others, the principle leaves participants no uncertainty to interpret in a self-serving manner.
Other psychological processes can cause overexploitation, but the design principles of the Ostrom Workshop are robust to this twist because they weren’t developed by theorizing, but by looking at real resource distribution systems. So even though they define themselves in terms of just one mechanism for overexploitation, they inadvertently guard against more than just that.