Psychoactives in governance | The double-blind policy process

I’m often surprised at how casual so many communities are about who they let in. To add people to your membership is to steer your community in a new direction, and you should know what direction that is. There’s nothing more powerful than a group of aligned people, and nothing more difficult than steering a group when everyone wants something different for it. I’ve seen bad decisions on who to include ruin many communities. And, on the other hand, being intentional about it can have a transformative effect, leading to inspiring alignment and collaboration. The best collaborations of my life have all been in discerning communities.

So what does it mean to be intentional about membershipping? You could say that there are two overall strategies. One is to go slow and really get to know every prospective member before inviting them fully into the fold. The other is to be very explicit, providing narrow, objective criteria for membership. Both have upsides and downsides. If you spend a lot of time getting to know someone, there will be no surprises. But this can produce cliquishness and cronyism: who else have you spent that much time with than your own friends? On the other hand, communities that base membership on explicit objective criteria can be exploited. A community I knew wanted tidy and thoughtful people, so it would filter people on whether they helped with the dishes and brought dessert. The thinking was that a person who does those things naturally is certainly tidy and thoughtful. But every visitor knew to bring dessert and help with the dishes, regardless of what kind of person they were, so the test failed as an indicator.

We need better membershipping processes. Something with the fairness and objectivity of explicit criteria, but without their vulnerability to being faked. There are lots of ways that scholars solve this kind of problem: they theorize special mechanisms and processes. But wouldn’t it be nice if we could select people who just naturally bring dessert, help with dishes, ask about others, and so on? Is that really so hard? To solve it, we’re going to do something different.

The mechanism: the double-blind policy process with collective amnesia

Amnesia is usually understood as memory loss. But that’s actually just one kind, called retrograde amnesia: the inability to access memories from before an event. The opposite kind of amnesia is anterograde: an inability to form new memories after some event. It’s not that you lost them; you never got them in the first place. We’re going to imagine a drug that induces temporary anterograde amnesia. It prevents a person from forming memories for a few hours.

To solve the problem of bad membershipping, we’re going to artificially induce anterograde amnesia in everyone. Here’s the process:

  1. A community’s trusted core group members sit and voluntarily induce anterograde amnesia in themselves (with at least two observers monitoring for safety).
  2. In a state of temporary collective amnesia, the group writes up a list of membership criteria that are precise, objective, measurable, and fair. As much as possible, items should be the result of deliberation rather than straight from the mind of any one person.
  3. They then seal the secret criteria in an envelope and forget everything.
  4. Later, the core group invites a prospective new member to interview.
  5. The interview isn’t particularly well structured, because no one knows what it’s looking for. So instead it’s a casual, wide-ranging affair involving a range of activities that have nothing in particular to do with the community’s values. These activities are diverse enough to reveal a variety of dimensions of the prospective’s personality. An open-ended personality test or two could work as well. What you need is a broad activity pool that elicits a range of illuminating choices and behaviors. These are observed by the membership committee members, but not discussed or acted upon until ….
  6. After the interview, a group of members sits to deliberate on the prospective’s membership, by
    • collectively inducing anterograde amnesia,
    • opening the envelope,
    • recalling the prospective’s words and choices and behavior over the whole activity pool,
    • judging all that against the temporarily revealed criteria,
    • resealing the criteria in the envelope,
    • writing down their decision, and then
    • forgetting everything.
  7. Later, this membership committee reads the decision they came to, and finds out whether they will be welcoming a new peer to the group.
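The steps above can be sketched as a toy simulation. Everything here is invented for illustration (the pairing rule for deliberation, the traits, the majority threshold); a real community’s criteria and deliberation would be far richer:

```python
import random

def draft_criteria(member_ideas, rng=None):
    # Step 2: pair ideas from different members, so each sealed criterion
    # is a product of deliberation rather than any one person's taste.
    rng = rng or random.Random(0)
    ideas = list(member_ideas)
    rng.shuffle(ideas)
    return [frozenset(pair) for pair in zip(ideas[::2], ideas[1::2])]

def decide(sealed_criteria, observed_traits, threshold=0.5):
    # Step 6: a criterion counts as met if every trait it names showed up
    # somewhere in the interview's activity pool.
    met = sum(1 for c in sealed_criteria if c <= observed_traits)
    return met / len(sealed_criteria) >= threshold

criteria = draft_criteria(["brings dessert", "asks about others",
                           "helps with dishes", "shares credit"])
# Outside the amnesic sessions, nobody ever reads `criteria`.
print(decide(criteria, {"brings dessert", "asks about others",
                        "helps with dishes", "shares credit"}))  # True
print(decide(criteria, set()))                                   # False
```

The point of the pairing step is that even the participants couldn’t reconstruct the criteria afterward: no single input determines any single criterion.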

The effect is that the candidate is admitted (or not) in a fair, systematic way that can’t be gamed. Why does it work? Because no one knows how to abuse it. In a word, you can’t game a system if literally nobody knows what its rules are. Not knowing the rules that govern your society is normally a problem, but it seems to be just fine for membership rules, maybe because they are defined around discrete, intermittent events.

Psychoactives in decision-making

If this sounds fanciful, it’s not: the sedatives propofol and midazolam both have this effect. Both are common ingredients in the cocktails of sedatives, anesthetics, analgesics, and tranquilizers that anaesthesiologists administer during surgical procedures.

If this sounds feckless or reckless, it’s not. There is a real heritage of research that uses psychoactives to understand decision-making. I’m a cognitive scientist who studies governance. I learned about midazolam from Prof. Richard Shiffrin, a leading mathematical psychologist and expert in memory and decision-making. He invoked it while proposing a new kind of solution to a social dilemma game from economic game theory. In a social dilemma, two people can cooperate, but each is tempted to defect. Shiffrin suggests that you’ll cooperate if the other person is so similar to you that you know they’ll do whatever you do. He makes the point by introducing midazolam to make it so the other person is you. In Rich’s words:

You are engaged in the simple centipede game decision tree [Ed. if you know the Prisoner’s Dilemma, just imagine that] without communication. However the other agent is not some other rational agent, but is yourself. How? You make the decision under the drug midazolam which leaves your reasoning intact but prevents your memory for what you thought about or decided. Thus you decide what to do knowing the other is you making the other agent’s decision (you are not told and don’t know and care whether the other decision was made earlier or after because you don’t remember). Let us say that you are now playing the role of agent A, making the first choice. Your goal is to maximize your return as agent A, not yourself as agent B. When playing the role of agent B you are similarly trying to maximize your return.

The point is correlation of reasoning: Your decision both times is correlated, because you are you and presumably think similarly both times. If you believe it is right to defect, would you nonetheless give yourself the choice, knowing you would defect? Or knowing you would defect would you not choose (0,0)? On the other hand if you think it is correct to cooperate, would it not make sense to offer yourself the choice? When playing the role of B let us say you are given the choice – you gave yourself the choice believing you would cooperate – would you do so?

— a 2021/09/15 email

The upshot is that if you know nothing except that you are playing against yourself, you are more likely to cooperate because you know your opponent will do whatever you do, because they’re you. As he proposed it, it was a novel and creative solution to the problem of cooperation among self-interested people. And it’s useful outside of the narrow scenario it isolates. The idea of group identity is precisely that the boundaries of our conceptions of ourselves can expand to include others, so what looks like a funny idea about drugs is used by Shiffrin to offer a formal mechanism by which group identity improves cooperation.
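The logic can be made concrete with a minimal sketch. The payoff numbers below are the standard Prisoner’s Dilemma values, my assumption for illustration, not anything from Shiffrin’s email:

```python
R, S, T, P = 3, 0, 5, 1   # reward, sucker's payoff, temptation, punishment

def payoff(me_cooperates, other_cooperates):
    if me_cooperates and other_cooperates: return R
    if me_cooperates:                      return S
    if other_cooperates:                   return T
    return P

# Against an independent opponent, defection dominates whatever they do:
assert payoff(False, True) > payoff(True, True)
assert payoff(False, False) > payoff(True, False)

# Against yourself under midazolam, your choice fixes *both* moves,
# so you are really comparing (C, C) against (D, D):
best_vs_self = max([True, False], key=lambda a: payoff(a, a))
print("cooperate" if best_vs_self else "defect")  # cooperate
```

Correlation collapses the four-cell game to its diagonal, and on the diagonal mutual cooperation strictly beats mutual defection.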

Research at the intersection of drugs and decision-making isn’t restricted to thought experiments. For over a decade, behavioral economists in the neuroeconomics tradition have been piecing together the neurophysiology of decision-making by injecting subjects with a variety of endogenous and exogenous substances. For example, see this review of the effects of oxytocin, testosterone, arginine vasopressin, dopamine, serotonin, and stress hormones.

Compared to this other work, all that’s unusual about this post is the idea of administering to a whole group instead of to individuals.

Why save democracy when you can save dictatorship? | The connection to incentive alignment

This mechanism is serious for another reason too. The problem of membershipping is a special case of a much more general problem: “incentive alignment” (also known as “incentive compatibility”).

  • When people answering a survey tell you what they think you want to hear instead of the truth
  • When someone lies at an interview
  • Just about any time that people aren’t incentivized to be transparent

Those are all examples of mis-alignment in the sense that individual incentives don’t point to system goals.

Incentive compatibility is especially challenging for survey design. That’s important because surveys are the least bad way to learn things about people in a standardized way. Incentive compatible survey design is a real can of worms.

That’s what’s special about double-blind policy. It’s a step in the direction of incentive compatibility for self-evaluation. You can’t lie about a question if nobody knows what was asked.


For all kinds of reasons, this is not a full solution to the problem. One obvious problem: even if no one knows the rules, anyone can guess. The whole point of introducing midazolam into the social dilemma game was that you know you will come to the same conclusions as yourself in the future. So just because you don’t know the criteria doesn’t mean you don’t “know” the criteria. You can just guess what you would have suggested, and that’s probably it. To solve this, the double-blind policy mechanism has to be collaborative. It requires that several people participate, so that deliberation among many members produces integrated or synergistic criteria that no single member would have thought of.

Other roles for psychoactives in governance design

The uses of psychoactives in community governance are, as far as I know, almost entirely unconsidered. Some cultures have developed the ritualistic sharing of tobacco or alcohol to formalize an agreement. Others have developed ordering the disloyal to drink hemlock juice, a deadly cholinergic antagonist. That’s all I can think of. I’m simultaneously intrigued to imagine what else is out there and baseline suspicious of anyone who tries.

The ethics

For me this is all one big thought experiment. But I live in the Bay Area, which is governed by strange laws like “The Pinocchio Law of The Bay” which states:

“All thought experiments want to go to the San Francisco Bay Area to become real.”

(I just made this up but it scans)

Hypothetically, I’m very pleased with the idea of solving governance problems with psychoactives, but I’ll admit that it suffers from being awful-adjacent: it’s very, very close to being awful. I see three things that could tip it over:
1) If you’re not careful it can sound pretty bad, especially to any audience that wants to hate it.
2) If you don’t know that the idea has a legitimate intellectual grounding in behavioral science, then it just sounds druggy and nuts.
3) If it’s presented without any mention of the potential for abuse then it’s naive and dangerous.

So let’s talk about the potential for abuse. The double-blind policy process with collective amnesia has serious potential for abuse. Non-consensual administration of memory drugs is inherently horrific. Consensual administration of memory drugs automatically spawns possibilities for non-consensual use. Even if it didn’t, consensual use itself is fraught, because what does that even mean? The framework of consent requires being able and informed. How able and informed are you when you can’t form new memories?

So any adoption or experimentation around this kind of mechanism should provide for secure storage and should come with a security protocol for every stage. Recording video or having observers who can see (but not hear?!) all deliberations could help. I haven’t thought more deeply than this, but the overall ethical strategy would go like this: You keep abuse potential from being the headline of this story by credibly internalizing the threat at all times, and by never being satisfied that you’ve internalized it enough. Expect something to go wrong and have a mechanism in place for nurturing it to the surface. Honestly there are very few communities that I’d trust to do this well. If you’re unsure you can do it well, you probably shouldn’t try. And if you’re certain you can do it well, then definitely don’t try.

The unpopular hypothesis of democratic technology. What if all governance is onboarding?

There’s this old organizer wisdom that freedom is an endless meeting. How awful. Here the sprightly technologist steps in to ask:

“Does it have to be? Can we automate all that structure building and make it maintain itself? All the decision making, agenda building, resource allocating, note taking, emailing, and even trust?
We can; we must.”

That’s the popular hypothesis, that technology should fix democracy by reducing friction and making it more efficient. You can find it under the hood of most web technologies with social ideals, whether young or old. The people in this camp don’t dispute the need for structure and process, but they’re quick to call it bureaucracy when it doesn’t move at the pace of life, and they’re quick to start programming when they notice it sucking up their own free time. Ideal governance is “the machine that runs itself”, making only light and intermittent demands for citizen input.

And against it is the unpopular hypothesis. What if part of the effectiveness of a governance system is in the tedious work of keeping it going? What if that work builds familiarity, belonging, bonding, sense of agency, and organizing skills? Then the work of keeping the system up is itself the training in human systems that every member needs to have for a community to become healthy. It instills in every member pragmatic views of collective action and how to get things done in a group. Elinor Ostrom and Ganesh Shivakoti give a case of this among Nepali farmers: when state funds replaced hard-to-maintain dirt irrigation canals with robust concrete ones, farmer communities stopped sharing water equitably. What looked like maintaining ditches was actually maintaining an obligation to each other.

That’s important because under the unpopular hypothesis, the effectiveness of a governance system depends less on its structure and process (which can be virtually anything and still be effective) and more on what’s in the head of each participant. If they’re trained, aligned, motivated, and experienced, any system can work. This is a part of Ostrom’s “institutional diversity”. The effective institution focuses on the members rather than the processes by making demands of everyone, or “creating experiences.”

Why are organizations bad computers? Because that isn’t their only goal.

In tech circles I see a lot of computing metaphors for organizations and institutions. Looking closer at that helps pinpoint the crux of the difference between the popular and unpopular hypotheses. In a computer or a program, many gates or functions are linked into a flow that processes inputs into outputs. In this framework, a good institution is like a good program, efficiently and reliably computing outputs. Under the metaphor, all real-world organizations look bad. In a real program, a function computes reliably, quickly, and accurately without having to be given permission or buy-in or interest. In an organization, each function needs all those things.

So organizations are awful computers. But that’s not a problem, because their goal isn’t just to compute, but to compute things that all the functions want computed. An organization is a computer that exists by and for its parts. The tedium of getting buy-in from all the functions isn’t an impediment to proper functioning, it is proper functioning. The properly functioning organization-computer is constantly doing the costly hygiene of ensuring the alignment of all its parts, and if it starts computing an output wrong, it’s not a problem with the computer, it’s a problem with the output.

If the unpopular hypothesis is right, then we shouldn’t focus on processes and structures—those might not matter at all—but on training people, keeping them aligned with each other, and keeping the organization aligned with them. It supports another hypothesis I’ve been exploring, that all governance is onboarding.

Less Product, more HR?

This opens a completely different way of thinking about governance. Through this lens,

  • Part of the work of governance is agreeing what to internalize
  • A rule is the name of the thing that everyone agrees that everyone should internalize.
  • The other part of governing is creating a process that helps members internalize (whether via training, conversation, negotiation, even a live-action tabletop role playing simulation).
  • Once it’s internalized by everyone, the rule is irrelevant and can be replaced by the next rule to work on.

In this system, the constraints on the governance system depend on human limits. You need rules because an org needs to be intentional about what everyone internalizes. You’ll keep needing rules because the world is changing and the people are changing, and so what to internalize is going to change. You can’t have too many rules at one time because people can’t hold too many rules-in-progress at once. You need everyone doing and deciding the work together because it’s important that the system’s failures feel like failures of us rather than them.

With all this, it could be tempting to call the popular hypothesis the tech-friendly one. But there’s still a role for technology in governance systems following the unpopular hypothesis. It’s just a change in focus, toward technologies that support habit building, skill learning, training, and onboarding, and that monitor the health of the shared agreements underlying all of these things. It encourages engineers and designers to move from the easy problems of system and structure to the hard ones of culture, values, and internalization. The role of technology in supporting self-governance can still be to make it more efficient, but with a tweak: not more efficient at arranging parts into computations, but more efficient at maintaining its value to those parts.

Maybe freedom is an endless meeting and technology can make that palatable and sustainable. Or maybe the work of saving democracy isn’t in the R&D department, but HR.

Subjective utility paradox in a classic gift economy cycle with loss aversion


Decision research is full of fun paradoxes. Here’s one I came up with the other day. I’d love to know if it’s already been explored.

  1. Imagine a group of people trading Kahneman’s coffee cup amongst themselves.
  2. If you can require that it keeps being traded, loss aversion predicts that it’ll become more valuable over time, as each owner demands more to give it up than they paid to get it.
  3. Connect those people in a ring and as the cup gets traded around its value will diverge. It will become invaluable.

Kula bracelet


  • This could be a mechanism for things transitioning from having economic to cultural value, a counter-trend to the cultural->economic trend of Israeli-daycare-style crowding out.
  • The cup of course doesn’t actually have to attain infinite value for this model to be interesting.  If it increases in value at all over several people, then that’s evidence for the mechanism.
  • Step 2 at least, and probably 3, aren’t giant leaps. Who would know if this argument has been made before?
  • There is a real world case for this.  A bit too complicated to be clean-cut evidence, but at least suggestive.  The archetypal example of gift economies was the Kula ring, in which two types of symbolic gift were obligatorily traded for each other over a ring of islands, with one type of gift circulating clockwise  and the other counter-clockwise through the islands. These items had no practical use, they existed only to trade.  They became highly sought-after over time, as indicators of status.  In the variant described, both types of items should become invaluable over both directions around the circle, but should remain tradable for each other.
  • This example ends up as a fun paradox for utilitarianism under boundedly rational agents, a la Nozick’s utility monster, which subjectively enjoys everything more than everyone, and therefore under a utilitarian social scheme should rightfully receive everything.
  • The effect should be smaller as the number of people in the ring gets smaller.  A smaller ring means fewer steps until I’ve seen the object twice (less memory decay).  My memory that the thing was less valuable yesterday acts here as a counterbalance to the inflationary effect of loss aversion.
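Steps 2 and 3 are easy to sketch in code, with the last bullet’s memory effect included as an optional damper. All parameter values (the 10% loss-aversion premium, the ring size, the memory weight) are illustrative assumptions, not estimates:

```python
def ring_trade(n_traders=8, laps=3, premium=0.10, memory=0.0, start=10.0):
    """Pass the cup around a ring of n_traders, `laps` times.
    premium: loss-aversion markup each owner demands on resale.
    memory:  0..1 pull back toward the price this trader remembers
             from last lap (the counterbalancing effect above)."""
    prices = [start]
    last_seen = {}                      # trader -> price from their last turn
    for step in range(n_traders * laps):
        trader = step % n_traders
        ask = prices[-1] * (1 + premium)
        if trader in last_seen:         # remembered anchor dampens inflation
            ask = (1 - memory) * ask + memory * last_seen[trader]
        last_seen[trader] = ask
        prices.append(ask)
    return prices

no_memory = ring_trade(memory=0.0)
with_memory = ring_trade(memory=0.5)
print(no_memory[-1] > no_memory[0])     # pure loss aversion: value inflates
print(with_memory[-1] < no_memory[-1])  # memory of old prices slows it down
```

With no memory this is just compound growth, which is the point: obligatory trading plus a loss-aversion premium is mechanically an inflation engine, and a trader’s memory of what the cup used to cost is the only brake in the model.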

New work using a video game to explain how human cultural differences emerge

Video games can help us sink our teeth into some of the thorniest questions about human culture. Why are different people from different places different? Is it because their environments differ, because they differ, or is it all random? These are important questions, at the very root of what makes us act the way we do. But answering them rigorously and responsibly is a doozy. To really reliably figure out what causes differences in human cultures, you’d need something pretty crazy. You’d need a sort of human culture generator that creates lots of exact copies of the same world, puts thousands of more or less identical people in each of them, lets them run for a while, and watches whether cultural differences emerge. In essence, you’d need God-like power over the nature of reality. A tall order, except, actually, this happens all the time. Multiplayer video games and other online communities are engineered societies that attract millions of people. It turns out that even the most powerful computers can’t host all of those visitors simultaneously, so game developers often create hundreds of identical copies of their game worlds, and randomly assign new players to one or another instance. This creates the circumstances necessary, less real than reality, but much more realistic than any laboratory experiment, to test fundamental theories about why human cultures differ. For example, if people on different copies of the same virtual world end up developing different social norms or conceptions of fairness, that’s evidence: mere tiny random fluctuations can cause societies to differ!

This theory, that societies don’t need fundamental genetic or deep-seated environmental divergences to drift apart, has revealed itself in many disciplines. It is known as the Strong Cultural Hypothesis in cultural anthropology, and has emerged under different names in economics, sociology, and even the philosophy of science. But stating a hypothesis is one thing; pinning it down with data is another.

Working with survey data collected by evolutionary anthropologist Pontus Strimling at the Institute for Futures Studies in Sweden from players of the classic multiplayer game World of Warcraft, we showed that populations can come to differ even when demographics and environment are the same. The game gives many opportunities for random strangers, who will likely never meet again, to throw their lots together and try to cooperate in taking down big boss characters. Since these are strangers with no mutual accountability, players have lots of strategies for cheating each other: playing nice until some fancy object comes along, then stealing it and running away before anyone can do anything. The behavior is so common that it has a name in the game, “ninja-ing”, which reflects the shadowy and unassailable nature of the behavior.

Given all this opportunity for bad behavior, players within each world have developed norms and rules for how and when to play nice, and for making sure others do too. For those who want to play nice, there are lots of ways of deciding who should get a nice object. The problem, then, is which to choose. It turns out that, when you ask several people within one copy of the game world how they decide to share items, you’ll get great agreement on a specific rule. But when you look across different copies, the rule that everyone agreed on is different. Different copies of the world have converged on different consensuses about what counts as a fair distribution of resources. These differences emerge reliably between huge communities even though the player demographics between communities are comparable, and the game environments across those communities are literally identical.
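The mechanism behind this is easy to demonstrate in miniature. Below, identical populations in identical environments drift toward different majority norms through nothing but imitation of random peers. The loot-rule names and all parameters are invented for illustration; this is a toy drift model, not the study’s method:

```python
import random

def converge(n_players=60, n_rounds=20000,
             rules=("need before greed", "random roll", "damage dealt"),
             seed=None):
    # Identical starting conditions on every "server copy": players adopt
    # a loot rule at random, then repeatedly imitate a random other player.
    rng = random.Random(seed)
    pop = [rng.choice(rules) for _ in range(n_players)]
    for _ in range(n_rounds):
        pop[rng.randrange(n_players)] = pop[rng.randrange(n_players)]
    majority = max(rules, key=pop.count)
    return majority, pop.count(majority) / n_players

# Run ten identical "server copies"; only the random seed differs:
for seed in range(10):
    rule, share = converge(seed=seed)
    print(f"copy {seed}: {share:.0%} agree on '{rule}'")
```

Each copy tends toward strong internal consensus, but which rule wins differs from copy to copy, exactly the signature described above.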

If it seems like a tall order that video games can tell us about the fundamental nature of human cultural differences, that’s fair: players are mostly male, often young, and the stakes in a game are much different than those in life. Nevertheless, people care about games, they care about being cheated, and incentives to cheat are high, so the fact that stable norms emerge spontaneously in these little artificial social systems is evidence that, to adapt what Jurassic Park’s Dr. Ian Malcolm said of life, “culture finds a way.”

Here is the piece: paywall,
no paywall

Bringing big data to the science of community: Minecraft Edition

Looking at today’s Internet, it is easy to wonder: whatever happened to the dream that it would be good for democracy? Well, looking past the scandals of big social media and the scary plays of autocracy’s hackers, I think there’s still room for hope. The web remains full of small experiments in self-governance. It’s still happening, quietly maybe, but at such a tremendous scale that we have a chance not only to revive the founding dream of the web, but to bring modern scientific methods to basic, millennia-old questions about self-governance and how it works.

Minecraft? Minecraft.

That’s why I spent five years studying Minecraft. Minecraft, the game you or your kid or niece played anytime between 5 minutes and 10 years ago, consists of joining one of millions of boundless virtual worlds, and building things out of cubic blocks. Minecraft doesn’t have a plot, but narrative abhors a vacuum, so people used the basic mechanics of the game to create their own plots, and in the process catapulted it into its current status as the best-selling video game of all time. Bigger than Tetris.

Minecraft’s players and their creations have been the most visible facet of the game, but they are supported by a class of amateur functionaries who have made Minecraft special for a very different reason. These are the “ops” and administrators, the people who do the thankless work of running each copy of Minecraft’s world so that it works well enough that the creators can create.

Minecraft, it turns out, is special not just for its open-ended gameplay, but because it is “self-hosted”: when you play on a world with other people, there is a good chance that it is being maintained not by a big company like Microsoft, but by an amateur, a player, who somehow roped themselves into all kinds of uncool, non-cubic work writing rules, resolving conflicts, fixing problems, and herding cats. We’re used to leaving critical challenges to professionals and, indeed, most web services you use are administered by people who specialize in providing CPU, RAM, and bandwidth to the public. But there is a whole underworld of amateur-run server communities, in which people with no governance training, and no salary, who would presumably prefer to be doing something else, take on the challenge of building and maintaining a community of people who share a common vision and work together toward it. When that works, it doesn’t matter if that vision is a block-by-block replica of the starship Enterprise: it’s inspiring. With no training, these people are teaching themselves to build governance institutions. Each world they create is a political experiment. By my count, 19 of 20 fail, and each success and failure is a miraculous data point in the quest to make self-governance a science.

That’s the dream of the Internet in action, especially if we can bring that success rate up from 1 in 20. To really understand the determinants of healthy institutions, we’d have to be able to watch hundreds of thousands of the nations of Earth rise and fall. Too bad Earth only has a few hundred nations. Online communities are the next best thing: they give us the scale to run huge comparisons, and even experiments. And there is more to governing them than meets the eye.

Online communities as resource governance institutions

Minecraft servers are one example of an interesting class of thing: the public web server. A web server is a computer that someone is using to provide a web service, be it a computer game, website, mailing list, wiki, or forum. Being computers, web servers have limits: finite processing power (measured in gigahertz), memory (measured in gigabytes), bandwidth (measured in gigabytes per second), and electricity (measured in $$$ per month). Failing to provide any of these adequately means failing to provide a service that your community can rely on. Being a boundless 3D multiplayer virtual world open to virtually anyone, Minecraft is especially resource intensive, making these challenges especially critical.

Any system that manages to thrive in these conditions, despite being available to the entire spectrum of humanity, from anonymous adolescents with poor impulse control to teams of professional hackers, is doing something special. Public web servers are “commons” by default. Each additional user or player who joins your little world imposes a load on it. Even if all of your users are well intentioned your server will grind to a halt if too many are doing too much, and your community will suffer. When a valuable finite resource is available to all, we call it a common pool resource, and we keep our eyes out for the classic Tragedy of the Commons: the problem of too many people taking too much until everyone has nothing.
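The tragedy has a simple best-response structure, sketched here with made-up numbers (a capacity of 100 “load units”, ten players, linear quality decay — all assumptions for illustration):

```python
def quality(total_load, capacity=100.0):
    # Service quality degrades with combined load and collapses at capacity.
    return max(0.0, 1.0 - total_load / capacity)

def payoff(my_load, others_load):
    # My benefit scales with my own use times the quality everyone shares.
    return my_load * quality(my_load + others_load)

# If nine other players each take a restrained load of 5 units...
others = 9 * 5
best = max(range(60), key=lambda my: payoff(my, others))
print(best)          # my best response is 27: far beyond my "fair" share

# ...but if all ten players grab 27 units, the server grinds to a halt:
print(payoff(27, 9 * 27))   # 0.0
```

Each player’s individually best move overloads the shared machine, which is the commons problem server admins are quietly solving.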

The coincidence of the Information Age with the global dominance of market exchange means that virtually every application of advancing technology has been toward making commons extinct. Anything that makes a gadget smaller or cheaper makes it easier to privately own, and more legible to systems that understand goods as things that you own and buy and sell. This goes back all the way to barbed wire, which transformed the Wild West from the gigantic pasture commons that created cowboys into a place where it was feasible to fence off large tracts of previously wild land, and so permitted the idea of private property. (Cowboys were common pool resource managers who ranged the West bringing cow herds back to their owners through round-ups.) Private servers like those in Minecraft are a counterpoint to this narrative. Given modern technology’s hostility to the commons, it’s funny every time you stumble on a commons that was created by technology. It’s like they won’t go away.

That brings up a big question. Will commons go away? Can they be privatized and technologized away? This is one foundation of the libertarian ideology behind cryptocurrency. But the stakes are higher than the latest fad.

One claim that has been made by virtually every philosopher of democracy is that successful self-governance depends not only on having good rules in place, but on having members who hold key norms and values. Democracy has several well-known weak spots, and norms and values are its only reliable protection from demagogues, autocrats, elites, or mob rule. This sensitivity to culture puts institutions like democracy in contrast with institutions like markets, hierarchies, and autocracies, whose reliance on base carrots and sticks makes them more independent of value systems. Economist Sam Bowles distinguishes between Machiavellian and Aristotelian institutions, those that are robust to the worst citizen, and those that create good ones. The cynical versus the culture-driven institutions.

The same things that make cynical institutions cynical make them easy to analyze, design, and engineer. We have become good at building them, and they have assumed their place at the top of the world order. Is it their rightful place? In the tradition that trained me, only culture-driven institutions are up to the challenge of managing commons. If technology cannot take the commons from our future, we need to be as good at engineering culture-driven institutions as we are at engineering markets and chains of command. Minecraft seems like just a game, a kid's game, but behind its success are the tensions that are defining the role of democracy in the 21st century.

Unfortunately, the same factors that make cynical institutions easy to build and study make culture-driven institutions hard. It is possible to make thousands of copies of a hierarchy and test its variations: that’s what a franchise is: Starbucks, McDonalds, copy, paste. By contrast, each inspiring participatory community you discover in your life is a unique snowflake whose essence is impossible to replicate, for better and worse.

By researching self-organizing communities on the Internet, wherever they occur, we take advantage of a historic opportunity to put the “science” in “political science” to an extent that was once unimaginable. When you watch one or ten people try to play God, you are practicing history. When you watch a million, you are practicing statistics. We can watch millions of people trying to build their own little Utopia, watch them succeed and fail, distinguish bad choices from bad luck, determine when a bad idea in most contexts will be good somewhere else, and build general theories of institutional effectiveness.

There are several features that make online communities ideal for the study of culture-driven institutions. Their low barrier to entry means that there are many more of them. Amateur servers are also more transparent, their smaller scale makes them simpler, their shorter, digitally recorded histories permit insights into the processes of institutional change, and the fact that they serve identical copies of known software makes it possible to perform apples-to-apples comparisons that make comparisons of the nations of Earth look apples-to-elephants.

A study of the emergence of formal governance

Big ideas are nice, but you've got to pin them down somehow. I began my research asking a narrower question: how and why do communities develop their governance systems in the direction of increasing integration and formalization? This is the question of where states come from, and bureaucracy, and rules. Do we need rules? Is there a right way to use them to govern? Is it different for large and small populations? To answer this, I wrote a program that scanned the Internet every couple of hours for two years, visiting communities for information about how they are run, who visits them, and how regularly those visitors return. I defined community success as the emergence of a core group: the number of players who return to a specific server at least once a week for a month, despite the thousands of other communities they could have visited. And because the typical lifespan of a server is nine weeks, it was possible to observe thousands of communities, over 150,000, over their entire life histories. Each starts from essentially the same initial conditions, a paradoxical "tyrano-anarchy" with one ruler and no rules. And each evolves in accordance with a sovereign administrator's naïve sense of what brings people together. As they develop that sense, administrators can install bits of software that implement dimensions of governance, including private property rights, peer monitoring, social hierarchy, trade, communication, and many others. Most fail, some succeed.

According to my analysis, large communities seem to be the most successful the more actively they attend to the full range of resource management challenges, and, interestingly, the more they empower the sole administrator. Leadership is a valuable part of a successful community, especially as communities grow. The story becomes much harder to align with a favorite ideology when we turn our focus to small communities. It turns out that if your goal is to run a community of 4, rather than 400, regular users, there is no governance style that is clearly more effective than any other: be a despot, be a socialist, use consensus or dice; with few enough people involved, arrangements that seem impossible can be made to work just fine.

The future

What this project shows is that rigorous comparisons of very large, well-documented populations of political experiments make it possible to understand the predictors of governance success. This is important for the future of participatory, empowering governance institutions. Until effective community building can be reduced to a formula, effective communities will be rare, and we humans will continue to fail to tap the full potential of the Internet to make culture-driven institutions scalable, replicable, viable competitors to the cynical institutions that dominate our interactions.

With more bad news every day about assaults on our privacy and manipulation of our opinions, it is hard to be optimistic about the Internet, and what it will contribute to the health of our institutions. But, working diligently in the background, is a whole generation of youth who have been training themselves to design and lead successful communities. Their sense of what brings people together doesn't come from a charismatic's speechifying, but from their own past failures to bring loved ones together. They can identify the warning signs of a nascent autocrat, not because they read about autocracies past, but because they have personally experienced the temptation of absolute power over a little virtual kingdom. And as scientists learn these lessons vicariously, at scale, self-governance online promises not only to breed more savvy defenders of democracy, but to inform the design and growth of healthy, informed participatory cultures in the real world.

New in Journal of Computational Social Science: “Cognitive mechanisms for human flocking dynamics” with Rob Goldstone

Think of it as Cognitive Science meets Human Collective Behavior meets Game Theory. This is (only) the second paper to come out of my dissertation (5 years ago). It’s three chapters jammed into one, so if it feels like it’s about level-k reasoning being social, and mental models revealing themselves on the fly, and games being open to interpretation, and flocking being robust, and humans being capable of faking 10 levels of what-you-think-I-think-you-think-I-think, then, well, it is.

This is the next level of the work that got me a BBC Radio documentary appearance, and lots of other rock-paper-scissors coverage.

Economic game theory’s “folk theorem” is not empirically relevant

I study a lot of game dynamics: how people learn as they make the same socially-inflected decision over and over. A branch of my career has been devoted to finding out that people do neat, unexpected things that are totally unpredicted by established models. As in most things, what opposition this work meets looks less like resistance and more like indifference. One concrete reason, in my area, is that it is old news that strange things can happen in repeated games. That is thanks to the venerated folk theorem. As Fisher (1989) put it, the “folk theorem” is as follows:

in an infinitely repeated game with low enough discount rates, any outcome that is individually rational can turn out to be a Nash equilibrium (Fudenberg and Maskin, 1986). Crudely put: anything that one might imagine as sensible can turn out to be the answer

It is a mathematical result, a result about formal systems. And it is used to say that, in the real world, anything goes in the domain of repeated games. But it can’t be wrong: no matter what one finds in the real world, a game theorist could say “Ah yes, the folk theorem said that could happen.” What does that mean for me? Good news. The folk theorem, as much as we love it, is fine logic, but it isn’t science. It says a lot about systems of equations, but because it can’t be falsified, it has nothing to offer the empirical study of human behavior.

Oh, FYI, I’d love to be wrong here. If you can find a way to falsify the Folk Theorem, let me know. Alternatively, I’d love to find a citation that says this better than I do here.

Fisher, F.M. (1989). Games Economists Play: A Noncooperative View. The RAND Journal of Economics, 20(1), 113. DOI:

Use Shakespeare criticism to inspire language processing research in cognitive science

I have a side-track of research in the area of “empirical humanities.” I got to present this abstract recently at a conference called “Cognitive futures in the humanities.”

It might seem self-evident that “the pun … must be noticed as such for it to work its poetic effect.” Joel Fineman says it confidently in his discussion of Shakespeare’s “Sonnet 132.” But experimental psychologists have shown that people are affected by literary devices that they did not notice. That is a problem with self-evidence, and it reveals one half of the promise of empirical humanities.

Counterintuition pervades every aspect of language experience. Consider the four versions of the following sentence, and how the semantic connections they highlight could affect conscious recognition of the malapropism at pack: “Parker could not have died by [suicide/cigarettes], as he made a [pact with the devil/pack with the devil] that guaranteed immortal life.” Pack is an error. Cigarette semantically “primes” it, just as suicide primes pact. Will readers be more disturbed by pack when it is primed, or less? Does cigarette disguise pack or make it pop out? Classic theories in cognitive science would argue for the latter, that priming the malapropism will make it more disruptive and harder to miss. But no scientific theory has considered the alternative. I hadn’t myself until I reviewed the self-evidence of Shakespeare scholar Stephen Booth. This is the other half of the promise of empirical humanities. Literary criticism can reveal new possibilities in unquestioned cognitive theories, and inspire new tracks of thought.

After reviewing some lab work in the human mind, and some literary fieldwork there, I will tell you what cigarette does to pack.

It was fun spending a week learning how humanities people think. The experiment is joint work with Melody Dye and Greg Cox.

The law of welfare royalty

To propose that human society is governed by laws is generally foolhardy. I wouldn’t object to a Law of Social Laws, along the lines that all generalizations are false. But this observation has a bit going for it: it rests on the inherent complexity of society and on human limits. Those are things we can count on.

The law of welfare royalty: Every scheme for categorizing members of a large-scale society will suffer from at least one false positive and at least one false negative.

The law says that every social label will be misapplied in two ways: It will be used to label people it shouldn’t (false positive), and it will fail to be applied to people it should (false negative). Both errors will exist.

The ideas of false positives and false negatives come from signal detection theory, which is about labeling things. If you fire a gun in the direction of someone who might be friend or foe, four things can happen: a good hit, a good miss, a bad hit (friendly fire), and a bad miss. Failing to keep all four outcomes in mind leads to bad reasoning about humans and society, especially when it comes to news and politics.


  • No matter how generous a social welfare system, it will always be possible to find someone suffering from starvation and exposure, and to use their story to argue for more generosity.
  • No matter how stingy and inadequate a welfare system, it will always be possible to cry “waste” and “scandal” on some kind of welfare royalty abusing the system.
  • No matter the inherent threat of violence from a distant ethnic group, it will always be possible to find evidence of both a very high and a very low threat of violence.
  • Airport security measures are all about tolerating a very very high rate of false positives (they search everybody) in order to prevent misses (letting actual terrorists board planes unsearched), but it cannot be guaranteed to succeed, and the cost of searching everybody has to be measured against that.
  • In many places, jaywalking laws are only used to shut down public protests. During street protests, jaywalking laws have a 0% hit rate and a 0% correct reject (true negative) rate: they never catch people they should, and they catch all of the people they shouldn’t.
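For readers who want the arithmetic behind these examples, the two rates fall out of a small confusion matrix. A minimal sketch, with invented counts:

```python
# The four outcomes of any labeling scheme, tallied as counts.
# These numbers are invented purely to illustrate the arithmetic.
hits = 90              # true positives: labeled, and should have been
misses = 10            # false negatives: not labeled, but should have been
false_alarms = 40      # false positives: labeled, but shouldn't have been
correct_rejects = 860  # true negatives: not labeled, and shouldn't have been

# Hit rate is computed over those who deserve the label;
# false alarm rate over those who don't.
hit_rate = hits / (hits + misses)
false_alarm_rate = false_alarms / (false_alarms + correct_rejects)

print(f"hit rate: {hit_rate:.2f}")                  # 0.90
print(f"false alarm rate: {false_alarm_rate:.2f}")  # 0.04
```

The law of welfare royalty just says that for any real labeling scheme at societal scale, neither `misses` nor `false_alarms` will ever be zero.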

The law of welfare royalty is important for how we think about society and social change. The upshot is that trustworthy reporting about social categories must rest on lots of data. Anecdotes will always be available to support any opinion about any social policy. You can also infer from my formulation of the law a corollary: there will always be a talking head prepared to support your opinion, though that isn’t so deep or interesting or surprising.

In fact, none of this is so surprising once a person thinks about it. The challenge is getting a person to think about it, even once. That’s the value of giving the concept a name. If I could choose one facet of statistical literacy to upload into the head of every human being, it would be a native comfort with the complementary concepts of false positives and negatives. Call it a waste of an upload if you want, but signal detection theory has become a basic part of my daily intellectual hygiene.

Back by one forward by two: Does planning for norm failure encourage it?

Most people who care about resource management care about big global common resources: oceans, forests, rivers, the air. But the commons that we deal with directly — shared fridges, flagging book clubs, public restrooms — may be as important. These “mundane” commons give everyday people experiences of governance, possibly the only type of experience that humanity can rely on to solve global commons dilemmas.

I think that’s important, and so the problems of maintaining mundane commons always get me. One community of mine, my lab, has recently had trouble with a norm of “add one clean two.” Take a sink shared with many people, at an office or in a community. There are a million ways to keep this kind of resource clean, and I see new ideas everywhere I look. Still, most shared sinks have dirty dishes. One recently proposed idea was “add one clean two.” If you can’t count on every individual to clean their own dish, why not appeal to the prosocial people (the ones most likely to discuss the problem as a problem) to clean two dishes for every one they add?

On the one hand, this cleverly embraces heterogeneity of cooperativeness to solve an institutional design problem. On the other, a norm built on the premise that violators exist makes it OK for people to continue to leave their dishes undone. It isn’t clear to me what conditions would make the first effect overpower the second. Seems testable though.

Common-knowledge arbitrage

Hypothesis 1: Ask people what they think about a stock or a political issue, and also what they think “most people” think. Where these guesses are the same, predictions about the outcome will be right. Where they differ, outcomes will have more upsets.

There are a few places where I would ultimately want to see this perspective go. One would look at advertising and other goal-oriented broadcasts as aimed at strategically creating a difference between what people think and what they think others think. Another would try to predict changes in finance markets based on these differences. This perspective will be useful in any domain where people don’t merely act on what they think, but on the differences with their estimate of common knowledge. It will also be useful in domains where people’s expressed opinions differ from their privately held ones.

Hypothesis 2: Holding everything else still, average opinion and the average of estimates of public opinion will tend toward being equal.

If this second guess is true, a systematic significant difference between the average opinion and the average estimate of public opinion could provide an objective measure of propaganda pressure, one that could be used to assign a number to the strength of social pressure that is being applied by a goal-oriented agent working on a population through the mass media ecosystem.
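As a sketch of how that measure might be computed, assuming only a survey that asks each respondent for their own opinion and their estimate of the average public opinion (all numbers invented):

```python
import statistics

# Each respondent reports their own opinion and their estimate of the
# average public opinion, both on a 0-1 scale. Hypothetical data.
own_opinion      = [0.8, 0.7, 0.9, 0.6, 0.8]
estimated_public = [0.4, 0.5, 0.3, 0.5, 0.4]

# The proposed measure: a persistent gap between what people think
# and what they think others think.
pressure = statistics.mean(own_opinion) - statistics.mean(estimated_public)
print(f"divergence: {pressure:+.2f}")  # +0.34
```

A divergence near zero is what Hypothesis 2 predicts at rest; a large, stable divergence is the candidate signature of propaganda pressure, or, reading bottom-up, of a decaying taboo.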

But maybe that is too conspiracy theory-ey, and too top-down. The same measure could indicate a bottom-up dynamic. Take a social taboo that is privately ignored but still publicly upheld. In such a domain, it will be common for expressed opinions to differ from held opinions, which will drive a consistent non-zero difference between average opinion and average received opinion. Over a dozen taboos, those with a large or growing divergence will be those that are most likely to become outmoded. Anecdotally, I’m thinking here of the surprising, and surprisingly robust, changes in opinion and policy around controlled substances, most striking in California.

Hypothesis 3: This is a little idle, but I would also guess that people with larger differences tend to be less happy, particularly where the differences concentrate on highly-politicized topics. Causation there could go either way — I’d guess both ways.

This subject has some relationship to some extensions to Schelling’s opinion models and to my dissertation work (on surprising group-scale effects of “what you think I think you think I think” reasoning).

Do social preferences break “I split, you choose”?

Hypothesis: Social preferences undermine the fairness, efficiency, and stability of “I cut, you choose” rules.

A lot of people chafe at the assumptions behind game theory and standard economic theory, and I don’t blame them. If those theories were right, there are a lot of things in our daily lives that wouldn’t work as well as they obviously do. But I came up with an example of the opposite: an everyday institution that would work a lot better if we weren’t so generous and egalitarian — if we didn’t have “social preferences.” Maybe; this is just a hypothesis, one that I may never get around to testing, but here it is.

“I cut, you choose” is a pretty common method for splitting things. Academically, it is appealing because it is easy to describe mathematically. It is a clean real-world version of a classic Nash bargaining problem. There is a finite resource and two agents must agree about how to split it. The first person divides it into two parts and the second is free to pick the bigger one. It is common in domains where the resource is hard to split evenly. The splitter knows that the picker will choose the larger part, and that he or she can do no better than getting 50%. This incentivizes the splitter to try for a completely fair distribution. Binmore has a theory that cultural evolution will select for social situations that are stable, efficient, and fair, and “I split, you choose” has those qualities, in theory.
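That selfish-rationality story can be checked in a few lines. This is just an illustration of the textbook logic, not anyone's model in particular: a selfish picker takes the larger piece, so the splitter keeps min(s, 1 − s), which an even split maximizes.

```python
# The splitter divides the resource into s and 1 - s; a selfish, rational
# picker takes the larger piece, leaving the splitter with the smaller.
def splitter_payoff(s: float) -> float:
    return min(s, 1 - s)

# Search a grid of candidate splits for the splitter's best choice.
splits = [i / 100 for i in range(101)]
best = max(splits, key=splitter_payoff)
print(best, splitter_payoff(best))  # 0.5 0.5
```

The hypothesis above is that this guarantee evaporates once the picker has social preferences: if the picker takes the smaller piece out of guilt, the splitter's payoff-maximizing split drifts away from 0.5.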

It sounds fine, and I’ve seen it work great, but I’ve also seen it go wrong, particularly among the guilty and shy. In the splitter role they get anxious, and in the receiver role they tend to pick the smaller share. It might sound heartless for someone to exploit that, but my wonderful boss did: He was splitting a candy bar with an anxious friend and proposed “I split, you choose.” He volunteered also to be the splitter, and proceeded to divide the bar blatantly 70/30. What did the victim do? He knew he was being manipulated, he watched the split with horror, but, however wounded, mysteriously picked the smaller share. Social preferences, in that case, make “I split, you choose” into an institution that is neither stable nor fair and, if it’s efficient, it’s only because every possible outcome is equally efficient.

That’s interesting because we normally think of game theory as this sterile thing that implies a selfish existence whose only redeeming value is that it’s contradicted by our social preferences, which make everything better. But, if I’m right, this is a clean example of the opposite. Game theory would be offering a very nice clean institution, and social preferences break it.

Xeno’s paradox

There is probably some very deep psychology behind the age-old tradition of blaming problems on foreigners. These days I’m a foreigner, in Switzerland, and so I get to see how things are and how I affect them. I’ve found that I can trigger a change in norms even by going out of my way to have no effect on them. It’s a puzzle, but I think I’ve got it modeled.

In my apartment there is a norm (with a reminder sign) around locking the door to the basement. It’s a strange custom, because the whole building is safe and secure, but the Swiss are particular and I don’t question it. Though the rule was occasionally broken in the past (hence the sign), residents in my apartment used to be better about locking the door to the basement. The norm is decaying. Over the same time period, the number of foreigners (like me) has increased. From the naïve perspective, the mechanism is obvious: Outsiders are breaking the rules. The mechanism I have in mind shows some of the subtlety that is possible when people influence each other under uncertainty. I’m more interested in the possibility that this can exist than in showing it does. Generally, I don’t think of logic as the most appropriate tool for fighting bigotry.

When I moved in to this apartment I observed that the basement door was occasionally unlocked, despite the sign. I like to align with how people are instead of how the signs say they should be, and so I chose to just remain a neutral observer for as long as possible while I learned how things run. I adopted a heuristic of leaving things how I found them. If the door was locked, I locked it behind me on my way out, and if the door wasn’t I left it that way.

That’s well and good, but you can’t just be an observer. Even my policy of neutrality has side effects. Say that the apartment was once full of Swiss people, including one resident who occasionally left the door unlocked but was otherwise perfectly Swiss. The rest of the residents are evenly split between orthodox door lockers and others who could go either way and so go with the flow. Under this arrangement, the door stays locked most of the time, and the people on the cusp of culture change stay consistent with what they are seeing.

Now, let’s introduce immigration and slowly add foreigners, but a particular kind that never does anything. These entrants want only to stay neutral and they always leave the door how they found it. If the norm of the apartment was already a bit fragile, then a small change in the demographic can tip the system in favor of regular norm violations.

If the probability of adopting the new norm depends on the frequency of seeing it adopted, then a spike in norm adoptions can cause a cascade that makes a new norm out of violating the old one. This is all standard threshold model: Granovetter, Schelling, Axelrod. Outsiders change the model by creating a third type that makes it look like there are more early adopters than there really are.

Technically, outsiders translate the threshold curve up and don’t otherwise change its shape. In equations, (1) is a cumulative function representing the threshold model. It sums some positive function f() up to percentile X to return a value Y, read as “X% of people (early adopters (E) plus non-adopters (N)) need to see that at least Y% of others have adopted before they do.” Equation (2) shifts equation (1) up by the percentage of outsiders times their probability of encountering an adopter rather than a non-adopter.
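One way to see the size of that shift is to compute the apparent adopter fraction directly. This is my own minimal sketch, not a full threshold simulation, and all the numbers are invented:

```python
# Fraction of the population that *looks* like norm adopters, when a
# neutral outsider mirrors whatever they last saw and is therefore
# mistaken for an adopter with probability E / (E + N).
def apparent_adopters(E: float, N: float, O: float) -> float:
    """E: early adopters, N: non-adopters, O: neutral outsiders.

    The three fractions are assumed to sum to 1.
    """
    return E + O * E / (E + N)

# 10% genuine door-norm violators, no outsiders: 10% visible violation.
baseline = apparent_adopters(0.10, 0.90, 0.00)
# Same 10% of violators, but now 30% of residents are neutral outsiders:
with_outsiders = apparent_adopters(0.10, 0.60, 0.30)
print(baseline, with_outsiders)  # the 10% minority now looks like ~14%
```

With few adopters and few outsiders the shift stays small, which matches the conclusion below that my own behavior probably isn't driving the trend.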

If you plug plausible numbers into each variable, you should start to see that the system needs either a lot of adopters or a lot of outsiders for these hypothetical neutral outsiders to shift the contour very far up. That says to me that I’m probably wrong, since I’m probably the only one following my rule. My benign policy probably isn’t the explanation for the trend of failures to lock the basement door.

This exercise was valuable mostly for introducing a theoretical mechanism that shows how it could be possible for outsiders to not be responsible for a social change, even if it seems like it came with them. Change can come with disinterested outsiders if the system is already leaning toward a change, because outsiders can be mistaken for true adopters and magnify the visibility of a minority of adopters.

Update a few months later

I found another application. I’ve always wondered how it is that extreme views — like extreme political views — take up so much space in our heads even though the people who actually believe those things are so rare. I’d guess that we have a bias toward overestimating how many people are active in loud minorities, anything from the Tea Party to goth teenagers. With a small tweak, this model can explain how being memorable can make your social group seem to have more converts than it has, and thereby encourage more converts. Just filter people’s estimates of different groups’ representation through a memory of every person that has been seen in the past few months, with a bias toward remembering memorable things. I’ve always thought that extreme groups are small because they are extreme, but this raises the possibility that it’s the other way around: when you’re small, being extreme is a pretty smart growth strategy.

The empirics of identity: Over what timescale does self-concept develop?

There is little more slippery than who we think we are. It is mixed up with what we do, what we want to do, who we like to think we are, who others think we are, who we think others want us to think we are, and dozens of other equally slippery concepts. But we emit words about ourselves, and those statements — however removed from the truth — are evidence. For one, their changes over time can give insight into the development of self-concept. Let’s say that you just had a health scare and quit fast food. How long do you have to have been saying “I’ve been eating healthy” before you start saying “I eat healthy”? A month? Three? A few years? How does that time change with topic, age, sex, and personality? Having stabilized, what is the effect of a relapse in each of these cases? Are people who switch more quickly to “I eat healthy” more or less prone to sustained hypocrisy — hysteresis — after a lapse into old bad eating habits? And, on the subject of relapse, how do statements about self-concept feed back into behavior? All else being equal, do ex-smokers who “are quitting” relapse more or less than those who “don’t smoke”? What about those who “don’t smoke” against those who “don’t smoke anymore”; does including the regretted past make it more or less likely to return? With the right data — large longitudinal corpora of self-statements and creative/ambitious experimental design — these may become empirical questions.

The market distribution of the ball, a thought experiment.

The market is a magical thing. Among other things, it has been entrusted with much of the production and distribution of the world’s limited resources. But markets-as-social-institutions are hard to understand because they are tied up with so many other ideas: capitalism, freedom, inequality, rationality, the idea of the corporation, and consumer society. It is only natural that the value we place on these abstractions will influence how we think about the social mechanism called the market. To remove these distractions, it will help to take the market out of its familiar context and put it to a completely different kind of challenge.

Basketball markets

What would basketball look like if it was possible to play it entirely with markets, if the game was redesigned so that players within a team were “privatized” during the game and made free of the central planner, their stately coach: free to buy and sell favors from each other in real time and leave teamwork to an invisible hand?  I’m going to take my best shot, and in the process I’ll demonstrate how much of our faith in markets is faith, how much of our market habit is habit.

We don’t always know why one player passes to another on the court. Sometimes the ball goes to the closest or farthest player, or to the player with the best position or opening in the momentary circumstances of the court. Sometimes all players are following the script for this or that play. Softer factors may also figure in, like friendship or even the feeling of reciprocity. It is probably a mix of all of these things.  But the market is remarkable for how it integrates diverse sources of information.  It does so quickly, adapting almost magically, even in environments that have been crafted to break markets.

So what if market institutions were used to bring a basketball team to victory? For that to work, we’d have to suspend a lot of disbelief, and make a lot of things true that aren’t. The process of making those assumptions explicit is the process of seeing the distance of markets from the bulk of real world social situations.

The most straightforward privatization of basketball could class behavior into two categories, production (moving the ball up court) and trade (passing and shooting). In this system, the coach has already arranged to pay players only for the points they have earned in the game. At each instant, players within a team are haggling with the player in possession, offering money to get the ball passed to them. Every player has a standing bid for the ball, based on their probability of making a successful shot. The player in possession has perfect knowledge of what to produce, of where to go to have either the highest chances of making a shot or of getting the best price for the ball from another teammate.

If the player calculates a 50% chance of successfully receiving the pass and making a 3-point shot, then that pass is worth 1.5 points to him. At that instant, 1.5 will be that player’s standing bid for the ball, which the player in possession is constantly evaluating against all other bids. If, once haggling has produced the best set of bids, any bid is greater than that possessing player’s own estimated utility from attempting the shot, then he passes (and therefore sells) to the player with the best offer. The player in possession shoots when the probability of success exceeds any of the standing bids and any of the (perfectly predicted) benefits of moving.
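The decision rule in that paragraph can be sketched in a few lines. The players, probabilities, and point values here are all invented for illustration:

```python
# Expected points from a shot attempt: probability times shot value.
def expected_value(p_success: float, shot_points: int) -> float:
    return p_success * shot_points

# Teammates' standing bids: each bids the expected points of the play
# in which they receive the pass and shoot.
bids = {
    "teammate_A": expected_value(0.50, 3),  # 1.5 expected points
    "teammate_B": expected_value(0.60, 2),  # 1.2 expected points
}

# The player in possession weighs the best bid against their own shot.
own_shot = expected_value(0.40, 2)          # 0.8 expected points
best_teammate = max(bids, key=bids.get)

if bids[best_teammate] > own_shot:
    action = "pass to " + best_teammate     # sell the ball to the best bidder
else:
    action = "shoot"
print(action)  # pass to teammate_A
```

Everything that follows in this thought experiment is about what reality would have to look like for these numbers to be computed instantly and accurately.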

A lot is already happening, so it will help to slow down. The motivating question is: how would reality have to change for this scheme to lead to good basketball? Most obviously, the pace of market transactions would have to speed up dramatically, so that making, selecting, and completing transactions happened instantaneously, and unnoticeably. Either time would have to freeze at each instant or the transaction costs of managing the auction institution would have to be reduced to an infinitesimal. Similarly, each player’s complex and inarticulable process of calculating their subjective shot probabilities would have to be instantaneous as well.

Players would have to be more than fast at calculating values and probabilities, they would also have to be accurate. If players were poor at calculating their subjective shot probabilities, and at somehow converting those into cash values, they would not be able to translate their moment’s strategic advantage into the market’s language. And it would be better that players’ bids reflect only the probability of making a shot, and not any other factors. If players’ bids incorporate non-cash values, like the value of being regarded well by others, or the value of not being in pain, then passes may be over- or under-valued. To prevent players from incorporating non-cash types of value, the coach has to pay enough per point to drown out the value of these other considerations. Unlike other parts of this thought experiment, that is probably already happening.

It would not be enough for players to accurately calculate their own values and probabilities; they would have to calculate those of every other player, at every moment. Markets are vulnerable to asymmetries in information. This means that if these estimates weren’t common knowledge, players could take advantage of each other, artificially inflating prices and reducing the efficiency of the team (possibly in both the technical and colloquial senses). Players that fail to properly value or anticipate future costs and benefits will pass prematurely and trap their team in suboptimal states, local maxima. To prevent that kind of shortsightedness, exactly the kind of shortsightedness that teamwork and coaching are designed to prevent, it would be necessary for players to divine not only perfect trading, but perfect production. Perfect production would mean knowing where and when on the court a pass or a shot will bring the highest expected payoff, factoring in the probability of getting to that location at that time.
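Perfect production, under these assumptions, amounts to a maximization over future court states. A minimal sketch, with locations, reach probabilities, and payoffs invented for the example:

```python
# Illustrative sketch of "perfect production": choosing where to move by
# maximizing reachability-weighted expected payoff. The court locations,
# probabilities, and payoffs below are invented for the example.

def best_move(options):
    """options: (location, p_reach, expected_points_once_there) triples.
    Returns the option with the highest p_reach * expected_points."""
    return max(options, key=lambda o: o[1] * o[2])

spots = [
    ("corner",     0.9, 1.1),  # easy to reach, modest payoff: 0.99
    ("top_of_key", 0.6, 1.6),  # harder to reach: 0.96
    ("rim",        0.3, 1.9),  # hardest to reach: 0.57
]
print(best_move(spots))  # ('corner', 0.9, 1.1)
```

The hard part, of course, is not the argmax but filling in the table: in reality every one of those numbers shifts continuously with ten moving players.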

I will be perfectly content to be proven wrong, but I believe that players who could instantaneously and accurately put a tradable cash value on their current and future state — and on the states of every other player on the court — could use market transactions to create perfectly coherent teams. In such a basketball, the selfish pursuit of private value could be maneuvered by the market institution to guarantee the good of the team.

The kicker

With perfect (instantaneous and accurate) judgement and foresight, a within-team system of live ball-trading could produce good basketball. But with those things, a central planner could also produce good basketball. Even an anarchist system of shared norms and mutual respect could do so. In fact, as long as those in charge all share the goal of winning, the outputs of all forms of governance become indistinguishable as transaction costs, judgement errors, and prediction errors fall to zero. With no constraints, it doesn’t really matter what mechanism you use to coordinate individual behavior to produce optimal group behavior.

So the process of making markets workable on the court is the process of redeeming any other conceivable form of government. Suddenly it’s trivial that markets are a perfect coordination mechanism in a perfect world. The real question is which of these mechanisms comes closest to its perfect form in this, the real world. Markets do not. In some cases, planned economies like board-driven corporations and coach-driven teams probably do.

Other institutions

What undermines bosshood, what undermines a system of mutual norms, and what undermines markets? Which assumptions are important to each?

  • A coach can prescribe behavior from a library of taught plays and habits. If the “thing that is best to do” changes at a pace that a coach can meaningfully engage with, and if the coached behavior can be executed by players on this time scale, then a coach can prescribe the best behavior and bring the team close to perfect coherence.
  • If players have a common understanding of what kinds of coordinated behavior are best for what kinds of situations, and they reliably and independently come to the same evaluation of the court, then consensual social norms can model perfect coherence satisfactorily.
  • And if every instant on the court is different, and players have a perfect ability to evaluate the state of the court and their own abilities, then an institution that organizes self-interest for the common good will be the one that brings the team closest to perfect coherence.

Each has problems, each is based on unrealistic assumptions, each makes compromises, and each has its place. But even now the story is still too simple. What if all of those things are true at different points over the course of a game? If the answer is “all of the above,” players should listen to their coach, but also follow the norms established by their teammates, and also pursue their own self-interest. From here, it is easy to see that I am describing the status quo. The complexity of our social institutions must match the complexity of the problems they were designed for. Where that complexity is beyond the bounds that an individual can comprehend, the institutional design should guide them in the right direction. Where that complexity is beyond the bounds of an institution, it should be allowed to evolve beyond the ideological or conceptual boxes we’ve imposed on it.

The closer

Relative to the resource systems we see every day, a sport is a very simple world. The rules are known, agreed upon by both teams, and enforced closely. The range of possible actions is carefully prescribed and circumscribed, and the skills necessary to thrive are largely established and agreed upon. The people occupying each position are world-class professionals. So if even basketball is too complicated for any but an impossible braid of coordination mechanisms, why should the real world be any more manageable? And what reasonable person would believe that markets alone are up to the challenge of distributing the world’s limited resources?


It took a year and a half to write this. Thanks to Keith Taylor and Devin McIntire for input.

Breaking the economist’s monopoly on the Tragedy of the Commons.


After shifting attention away from economic rationality as a cause of the overexploitation of common property, I introduce another, more psychological mechanism, better suited to the mundane commons of everyday life. Mundane commons are important because they are one of the few instances of true self-governance in Western society, and thus one of the few training grounds for civic engagement. I argue that the “IAD” principles of the Ostrom Workshop, well-known criteria for the self-governance of resource systems, speak not only to the very narrow Tragedy of the Commons, but to the more general problem of overexploitation.


The Tragedy of the Commons is the tragedy of good fudge at a crowded potluck. Individual guests each have an incentive to grab a little extra, and the sum of those extra helpings causes the fudge to run out before every guest gets their share. For another mundane example, I’ve seen the same with tickets for free shows: I am more likely to request more tickets than I need if I expect the show to be packed.

Discussion of the Tragedy has been dominated by economists, who define it in terms of economic incentives. That is interesting because the Tragedy is just one mechanism behind the very general phenomenon of overexploitation. In predatory animal species that are not capable of rational deliberation, population imbalances caused by cycles, introduced species, and overpopulation can cause their prey species to be overexploited. The same holds between infectious agents and their hosts: parasites or viruses may wipe out their hosts and leave themselves nowhere else to spread. These may literally be tragedies of commons, but they have nothing to do with the Tragedy as economists have defined it, and as researchers treat it. In low-cost, routine, or entirely non-economic domains, humans themselves are less likely to be driven by economic incentives. If overexploitation exists in these domains as well, then other mechanisms must be at work.

Economics represents the conceit that human social dynamics are driven by the rational agency that distinguishes us from animals. The Tragedy is a perfect example: Despite the abundance of mechanisms for overexploitation in simple animal populations, overexploitation in human populations is generally treated as the result of individually rational deliberation. But if we are also animals, why add this extra deliberative machinery to explain a behavior that we already have good models for?

I offer an alternative mechanism that may be responsible for engendering overexploitation of a resource in humans. It is rooted in a psychological bias. It may prove the more plausible mechanism in the case of low cost/low value “mundane” commons, where the incentives are too small for rational self-interest to distinguish itself from the noise of other preferences.

This line of thinking was motivated by many years of experience in shared living environments: brownies at potlucks, potlucks generally, dishes in sinks, chores in shared houses, trash in shared yards, book clubs, and any other instance where everyday people have disobeyed my culture’s imperative to distribute all resources under a system of private property. The imperative may be Western, or modern, or it may just be that systems of private property are the easiest for large central states to maintain. The defiance of the imperative may be intentional, accidental, incidental, or as mundane as the resource being shared.

Mundane commons are important for political science, and political life, because they give citizens direct experience with self-governance. And theorists from Alexis de Tocqueville to Vincent Ostrom argue that this is the kind of citizen education that democracies must provide if they aren’t going to fall to anarchy on the one side or powerful heads-of-state on the other. People cannot govern themselves without training in governance. I work in this direction because I believe that a culture of healthy mundane commons will foster healthy democratic states.

I don’t believe that the structural mechanisms of economics are those that drive mundane resource failure. This belief comes only from unstructured experience, introspection, and intuition. But those processes have suggested an alternative: the self-serving bias. Self-serving bias, interpreting information in a way that benefits us at the expense of others, is well-established in the decision-making literature.

How could self-serving bias cause overexploitation? Let’s say that it is commonly known that different people have different standards for acceptable harvesting behavior. This is plausible in low-cost/low-reward environments, where noise and the many weak and idiosyncratic social preferences of a social setting might drown out any effects of the highly motivated, goal-oriented, profit-maximizing behavior that economists attend to. I know my own preference for the brownies, but I have uncertainty about the preferences of others for them. If, for every individual, self-serving bias is operating on that uncertainty about the preferences of others, then every person in the group may decide that they like brownies more than the others do, and that their extra serving is both fair and benign.
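A toy model makes the mechanism concrete. Everything here is an illustrative assumption: ten guests, twenty brownies, identical true preferences, and a bias that discounts everyone else’s preference by 20%.

```python
# Toy model of overexploitation driven by self-serving bias rather than greed.
# All parameters are illustrative assumptions.
#
# Every guest has the same true preference for brownies but is uncertain
# about everyone else's. A self-serving bias resolves that uncertainty
# downward: "I probably like brownies more than the others do, so my
# extra piece is both fair and benign."

def fair_share_claim(own_pref, assumed_other_pref, n_guests, supply):
    """The share a guest sincerely believes is fair: their own preference
    weighted against what they assume everyone else's to be."""
    total_assumed = own_pref + (n_guests - 1) * assumed_other_pref
    return supply * own_pref / total_assumed

n, supply, true_pref = 10, 20.0, 1.0

unbiased = fair_share_claim(true_pref, true_pref, n, supply)       # 2.0 each
biased = fair_share_claim(true_pref, 0.8 * true_pref, n, supply)   # ~2.44 each

print(n * unbiased)  # 20.0 -- unbiased claims exactly exhaust the supply
print(n * biased)    # ~24.4 -- sincere, "fair" claims overshoot the supply
```

No agent in this model is maximizing anything at anyone’s expense; each claims only what it honestly believes to be its fair share, and the supply collapses anyway.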

The result will be the overexploitation associated with the Tragedy of the Commons, and from the outside it may be indistinguishable from the Tragedy, but the mechanism is completely different. It is an interesting mechanism because it is prosocial: no individual perceives their actions as selfish or destructive. It predicts resource collapse even among agents who identify as cooperative.

The self-serving bias can help to answer a puzzle in the frameworks developed by the Ostrom Workshop. In their very well-known work, members of the Workshop identified eight principles that are commonly observed in robust common-property regimes. But only one of these, “graduated sanctions,” speaks directly to rational self-interest. The other principles invoke the importance of definitions, of conflict resolution, of democratic representation, and other political and social criteria.

Why are so many of the design principles irrelevant to rational self-interest, the consensus mechanism behind the Tragedy? Because it is not the only cause of overexploitation in self-governing resource distribution systems. The design principles are not merely a solution to the economist’s Tragedy of the Commons, but to the more general problem of overexploitation, with all of the many mechanisms that encourage it. If that is the case, then principles that don’t speak to the Tragedy may still speak to other mechanisms. For my purposes, the most relevant is Design Principle 1, in both of its parts:

1A User boundaries:
Clear boundaries between legitimate users and nonusers must be clearly defined.
1B Resource boundaries:
Clear boundaries are present that define a resource system and separate it from the larger biophysical environment.

By establishing norms, and the common knowledge of norms, this principle may prevent self-serving bias from promoting overexploitation. Norms provide a default preference to fill in for others when their actual preferences are unknown, leaving participants no uncertainty to interpret in a self-serving manner.

Other psychological processes can cause overexploitation, but the design principles of the Ostrom Workshop are robust to this twist because they weren’t developed by theorizing, but by looking at real resource distribution systems. So even though they define themselves in terms of just one mechanism for overexploitation, they inadvertently guard against more than just that.

Difficulties replicating Kashtan & Alon (2005)

I love this paper; it’s about the evolution of neural structure. Do brains have parts? Do bodies have parts? If you think so, you’re very forward-thinking, because science has no idea how that could possibly have evolved. Kashtan and Alon published a mechanism for the evolution of structure. They proposed that if environments have modular structure, then the things that evolve in them will as well. Or something like that.

I had trouble replicating their result, and by the time I did, I had lost all faith in it. There are some tricks that make the effect seem bigger than it is, and there might be some confounds, though I stopped short of proving it. I’ve got a proposal all written up, but I changed disciplines before I could implement it. I’m not the only one who couldn’t replicate it; I’ve met others who had the same problem.

I still love that paper, but I personally believe that the mystery of evolved structure is more unsolved than we think.