Why decentralization is always ripe for co-optation

or
Will your transformative technology just entrench the status quo?

Things have come a long way since I was first exposed to cryptocurrency. Back in 2011 it was going to undermine nation-states by letting any community form its own basis of exchange. A decade later, crypto has little chance of fulfilling its destiny as a currency, but that’s OK because it’s proven ideal for the already wealthy, as a tool for tax evasion, money laundering, market manipulation, and infrastructure capture. States like it for the traceability, and conventional banks incorporate it to chase the wave and diversify into a new high-risk asset class.

This is not what crypto imagined for itself.

But it’s not a surprise. You can see the same dynamic play out in Apple Music, YouTube, Substack, and the post-Twitter scramble for social media dominance. These technologies are sold to society on their ability to raise the floor, but they cash out on their ability to raise the ceiling. The debate on this played out between Chris Anderson (former editor-in-chief of Wired) and Anita Elberse (in her 2013 book Blockbusters). In response to Anderson’s argument that social media technologies empower the “long tail” of regular-people contributors, Elberse countered with evidence that they have increased market concentration by making the biggest bigger.

To skip to the end of that debate, the answer is “both”. Technologies that make new means available to everyone make those means available to the entrenched as well. The tail gets fatter at the same time as the peaks get taller. It’s all the same process.

So the question stops being “will this help the poor or the rich?” It becomes “who will it help faster?” The question is no longer transformative potential, but differential transformative power. Can this technology undermine the status quo faster than it bolsters it?

And for most of these technologies, the answer is “no”. Maybe, like crypto, a few people fell up and a few fell down. That is not transformation.

Why do people miss this? Because they stop at

“centralization = bad for the people; decentralization = good for the people”.

We forget it’s dual, that

“centralization = good for the entrenched; decentralization = good for the entrenched”

Centralization increases the efficiency of an already-dominant system, while decentralization increases its reach.

This all applies just fine to the latest technology that has people looking for transformative potential: decentralized identity (DID). It’s considered important because so many new mechanisms in web3 require that addresses map one-to-one onto human individuals: every address corresponds to exactly one person. So if identity can be solved, then web3 is unleashed. But think for just a second: decentralized identity technologies will fall into the same trap of entrenching the status quo faster than they realize their transformative potential. Let’s say that DID manages to scale both privacy and uniqueness. If that happens, nothing keeps an entrenched incumbent from running with the uniqueness features and dropping the privacy features.

If you’re bought into my argument so far, then you see that it’s not enough to develop technologies that have the option of empowering people, because most developers won’t take that option. You can’t take over just by growing, because you can’t grow faster than the already grown. What’s needed are systems designed to actively counter accumulation and capture.

I show this in this paper, which looks at the accumulation of power by US basketball teams. For over a century, American basketball teams have been trying to gain and retain advantages over each other. Over the same period, the leagues hosting them have served “sport over team,” exercising their power to change the rules to maintain competitive balance between teams. By preventing any one team from becoming too much more powerful than any other, you keep the sport interesting and you keep fans coming.

But what we’ve actually seen is that, over this century, basketball games have become more predictable: if Team A beat Team B and Team B beat Team C, then over a century Team A has become more and more likely to beat Team C. This is evidence that teams have diverged from each other in skill, despite all the regulatory power that leagues have been given to keep them even. If the rich get richer even in systems with an active enduring agency empowered to prevent the rich from getting richer, then concentration of power is deeply endemic and can’t just be wished away. It has to be planned for and countered.
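For intuition, here is a minimal sketch of how you might measure that kind of transitivity from head-to-head records. It is not the paper’s actual statistic; the team names and win rates are made up for illustration.

```python
from itertools import combinations, permutations

def transitivity_rate(wins):
    """wins[(a, b)] = share of games between a and b that a won.
    Counts how often "A usually beats B and B usually beats C"
    also comes with "A usually beats C"."""
    teams = {team for pair in wins for team in pair}
    consistent, total = 0, 0
    for triple in combinations(sorted(teams), 3):
        for x, y, z in permutations(triple):
            if (x, y) in wins and (y, z) in wins and (x, z) in wins:
                if wins[(x, y)] > 0.5 and wins[(y, z)] > 0.5:
                    total += 1
                    consistent += wins[(x, z)] > 0.5
    return consistent / total if total else float("nan")

# Toy head-to-head records (made up). A rate near 1.0 means outcomes are
# highly predictable from the pecking order, i.e. teams have diverged in skill.
example = {("A", "B"): 0.7, ("B", "C"): 0.8, ("A", "C"): 0.9}
print(transitivity_rate(example))  # -> 1.0
```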

This is why redistribution is a core principle of progressive and socialist politics. You can’t just introduce a new tweak and wait for things to correct. You need a mechanism to actively redistribute at regular intervals. Like taxes.

In web3, there aren’t many technologies that succeed at the higher bar of actively resisting centralization. One example might be quadratic voting, which has taken off probably because its market-centric branding has kept it from being considered redistributive (it is).
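To make the redistributive property concrete, here is a minimal sketch of the quadratic voting cost rule. The credit numbers are illustrative and not tied to any particular implementation.

```python
import math

def qv_cost(votes: int) -> int:
    """In quadratic voting, casting n votes on a single issue costs n**2 credits."""
    return votes ** 2

def max_votes(budget: int) -> int:
    """Most votes one actor can cast on a single issue with a given credit budget."""
    return math.isqrt(budget)

# A voter with 100x the credits gets only 10x the votes on any one issue.
# That square-root ceiling is the sense in which the mechanism actively
# pushes back against concentrated influence.
print(qv_cost(10))                        # -> 100
print(max_votes(100), max_votes(10_000))  # -> 10 100
```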

So for now my attitude toward decentralization is “Wake me up when you have a plan to grow faster than you can be co-opted.” Wake me up when you’ve decentralized taxation.


Psychoactives in governance | The double-blind policy process

I’m often surprised at how casual so many communities are about who they let in. To add people to your membership is to steer your community in a new direction, and you should know what direction that is. There’s nothing more powerful than a group of aligned people, and nothing more difficult than steering a group when everyone wants something different for it. I’ve seen bad decisions on who to include ruin many communities. And, on the other hand, being intentional about it can have a transformative effect, leading to inspiring alignment and collaboration. The best collaborations of my life have all been in discerning communities.

So what does it mean to be intentional about membershipping? You could say there are two overall strategies. One is to go slow and really get to know every prospective member before inviting them fully into the fold. The other is to be very explicit, providing narrow, objective criteria for membership. Both have upsides and downsides. If you spend a lot of time getting to know someone, there will be no surprises. But this can produce cliquishness and cronyism: who else have you spent that much time with besides your own friends? On the other hand, communities that base membership on explicit, objective criteria can be exploited. A community I knew wanted tidy and thoughtful people, so it would filter people on whether they helped with the dishes and brought dessert. The thinking was that a person who does those things naturally is certainly tidy and thoughtful. But every visitor knew to bring dessert and help with the dishes, regardless of what kind of person they were, so the test failed as an indicator.

We need better membershipping processes. Something with the fairness and objectivity of explicit criteria, but without their vulnerability to being faked. There are lots of ways that scholars solve this kind of problem. They will theorize special mechanisms and processes. But wouldn’t it be nice if we could select people who just naturally bring dessert, help with dishes, ask about others, and so on? Is that really so hard? To solve it, we’re going to do something different.

The mechanism: the double-blind policy process with collective amnesia

Amnesia is usually understood as memory loss. But that’s actually just one kind, called retrograde amnesia, the inability to access memories from before an event. The opposite kind of amnesia is anterograde: an inability to form new memories after some event. It’s not that you lost them; you never got them in the first place. We’re going to imagine a drug that induces temporary anterograde amnesia. It prevents a person from forming memories for a few hours.

To solve the problem of bad membershipping, we’re going to artificially induce it in everyone. Here’s the process:

  1. A community’s trusted core group members sit and voluntarily induce anterograde amnesia in themselves (with at least two observers monitoring for safety).
  2. In a state of temporary collective amnesia, the group writes up a list of membership criteria that are precise, objective, measurable, and fair. As much as possible, items should be the result of deliberation rather than straight from the mind of any one person.
  3. They then seal the secret criteria in an envelope and forget everything.
  4. Later, the core group invites a prospective new member to interview.
  5. The interview isn’t particularly well structured, because no one knows what it’s looking for. So instead it’s a casual, wide-ranging affair involving a range of activities that have nothing obvious to do with the community’s values. These activities are diverse enough to reveal a variety of dimensions of the prospective’s personality. An open-ended personality test or two could work as well. What you need is a broad activity pool that elicits a range of illuminating choices and behaviors. These are observed by the membership committee members, but not discussed or acted upon until ….
  6. After the interview, a group of members sits to deliberate on the prospective’s membership, by
    • collectively inducing anterograde amnesia,
    • opening the envelope,
    • recalling the prospective’s words and choices and behavior over the whole activity pool,
    • judging all that against the temporarily revealed criteria,
    • resealing the criteria in the envelope,
    • writing down their decision, and then
    • forgetting everything
  7. Later, the membership committee reads the decision it reached to find out whether it will be welcoming a new peer to the group.

The effect is that the candidate gets admitted (or not) in a fair, systematic way that can’t be abused. Why does it work? Because no one knows how to abuse it. In short, you can’t game a system if literally nobody knows what its rules are. Not knowing the rules that govern your society is normally a problem, but it seems to be just fine for membership rules, maybe because they are defined around discrete, intermittent events.

Psychoactives in decision-making

If this sounds fanciful, it’s not: the sedatives propofol and midazolam both have this effect. They are common ingredients in the cocktails of sedatives, anesthetics, analgesics, and tranquilizers that anesthesiologists administer during surgical procedures.

If this sounds feckless or reckless, it’s not. There is an actual heritage of research that uses psychoactives to understand decision-making. I’m a cognitive scientist who studies governance. I learned about midazolam from Prof Richard Shiffrin, a leading mathematical psychologist and expert in memory and decision-making. He invoked it while proposing a new kind of solution to a social dilemma game from economic game theory. In the social dilemma, two people can cooperate but each is tempted to defect. Shiffrin suggests that you’ll cooperate if the person is so similar to you that you know they’ll do whatever you do. He makes the point by introducing midazolam to make it so the other person is you. In Rich’s words:

You are engaged in the simple centipede game decision tree [Ed. if you know the Prisoner’s Dilemma, just imagine that] without communication. However the other agent is not some other rational agent, but is yourself. How? You make the decision under the drug midazolam which leaves your reasoning intact but prevents your memory for what you thought about or decided. Thus you decide what to do knowing the other is you making the other agent’s decision (you are not told and don’t know and care whether the other decision was made earlier or after because you don’t remember). Let us say that you are now playing the role of agent A, making the first choice. Your goal is to maximize your return as agent A, not yourself as agent B. When playing the role of agent B you are similarly trying to maximize your return.

The point is correlation of reasoning: Your decision both times is correlated, because you are you and presumably think similarly both times. If you believe it is right to defect, would you nonetheless give yourself the choice, knowing you would defect? Or knowing you would defect would you not choose (0,0)? On the other hand if you think it is correct to cooperate, would it not make sense to offer yourself the choice? When playing the role of B let us say you are given the choice – you gave yourself the choice believing you would cooperate – would you do so?

— a 2021/09/15 email

The upshot is that if you know nothing except that you are playing against yourself, you are more likely to cooperate because you know your opponent will do whatever you do, because they’re you. As he proposed it, it was a novel and creative solution to the problem of cooperation among self-interested people. And it’s useful outside of the narrow scenario it isolates. The idea of group identity is precisely that the boundaries of our conceptions of ourselves can expand to include others, so what looks like a funny idea about drugs is used by Shiffrin to offer a formal mechanism by which group identity improves cooperation.
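A quick way to see the “correlation of reasoning” point is to simulate it. The sketch below is my own illustration, not Shiffrin’s setup (his used a centipede-style decision tree): it uses standard Prisoner’s Dilemma payoffs and an “other player” who copies your move with some probability.

```python
import random

# Standard Prisoner's Dilemma payoffs to you (illustrative values):
# both cooperate -> 3, both defect -> 1,
# you defect on a cooperator -> 5, you cooperate with a defector -> 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move: str, correlation: float, trials: int = 100_000) -> float:
    """Average payoff when the other player copies your move with
    probability `correlation` (they're you) and otherwise plays 50/50."""
    total = 0
    for _ in range(trials):
        other = my_move if random.random() < correlation else random.choice(["C", "D"])
        total += PAYOFF[(my_move, other)]
    return total / trials

for rho in (0.0, 0.5, 1.0):
    print(rho, expected_payoff("C", rho), expected_payoff("D", rho))
# As the correlation approaches 1 (the other agent is literally you),
# cooperating overtakes defecting: the real choice is between (3, 3) and (1, 1).
```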

Research at the intersection of drugs and decision-making isn’t restricted to thought experiments. For over a decade, behavioral economists in the neuroeconomics tradition have been piecing together the neurophysiology of decision-making by injecting subjects with a variety of endogenous and exogenous substances. For example, see this review of the effects of oxytocin, testosterone, arginine vasopressin, dopamine, serotonin, and stress hormones.

Compared to this other work, all that’s unusual about this post is the idea of administering to a whole group instead of to individuals.

More: The connection to incentive alignment | I’m most into exciting things when they have boring names

This mechanism is serious for another reason too. The problem of membershipping is a special case of a much more general problem: “incentive alignment” (also known as “incentive compatibility”).

  • When people answering a survey tell you what they think you want to hear instead of the truth
  • When someone lies at an interview
  • Just about any time that people aren’t incentivized to be transparent

Those are all examples of misalignment, in the sense that individual incentives don’t point toward system goals.

The phenomenon of “buyer’s remorse” gives a more economic example of the same idea. This example comes from auction theory. It’s good because it also shows how small, elegant tweaks can restore alignment. In a normal auction, where people are bidding for a thing, it turns out that the structure of the decision doesn’t actually incentivize buyers to honestly evaluate what the thing is worth to them. In real-world auctions people often overbid, in part because they are influenced by the fear of losing. So typical “first-price” auctions are actually not incentive aligned.

But there’s an auction design out there that actually does incentivize honesty: the “second-price” auction. In a second-price auction the winner doesn’t pay the price they bid, but the next-highest bid. Why does that change anything? To see the trick you have to think a bit. At first you might think the smart strategy is to name a crazy-high price and pay the losing bidder’s fair price. But what if all bidders think that? Then you’re going to overpay. You don’t want that: you don’t want to pay more for a thing than it’s worth to you. Where this reasoning gets you is that all bidders in a second-price setting will decide to name the price that they are actually willing to pay, no more, no less.
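Here is a minimal simulation sketch of that logic, with a made-up valuation and rival bids drawn uniformly at random (my illustration, not a formal proof): bidding exactly what the item is worth to you does at least as well, on average, as bidding high or low.

```python
import random

def vickrey_payoff(value: float, bid: float, rival_bids: list[float]) -> float:
    """Your utility in one sealed-bid second-price auction."""
    highest_rival = max(rival_bids)
    if bid > highest_rival:           # you win and pay the second-highest price
        return value - highest_rival
    return 0.0                        # you lose and pay nothing

def average_payoff(value: float, bid: float, trials: int = 100_000, n_rivals: int = 3) -> float:
    total = 0.0
    for _ in range(trials):
        rivals = [random.uniform(0, 100) for _ in range(n_rivals)]
        total += vickrey_payoff(value, bid, rivals)
    return total / trials

value = 60                            # what the item is actually worth to you
for bid in (40, 60, 90):              # underbid, truthful, "crazy high"
    print(bid, round(average_payoff(value, bid), 2))
# Overbidding wins more often but sometimes at prices above your value;
# underbidding avoids that but passes up profitable wins.
# Bidding your true value comes out at least as well as either distortion.
```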

In so many real world settings, incentives don’t support honest disclosure. We have workarounds in most parts of our life, but the problem still matters, and it attracts a lot of attention from economists. Their work depends crucially on the idea that incentives determine behavior.

Incentive compatible survey design | Why save democracy when you can save dictatorship?

Incentive compatibility is especially challenging for survey design. That’s important because surveys are the least bad way to learn things about people in a standardized way. Incentive compatible survey design is a real can of worms. For some things the problem is easy to solve once you’ve spotted it. Say you’re studying philanthropy, and you ask “Do you donate more or less to charity than your peers?” But you realize that most people will say that they donate more than they do. The incentive compatible way of getting an honest answer is to invite people to non-hypothetically donate some of their survey reward to a charity. If their donation is smaller or larger than average, then you, the researcher, have found out whether they donate more than others without ever having had to ask. By replacing hypothetical questions with costly behavior you get honesty.

Another strategy is to ask questions with verifiable answers. Instead of asking “What is your height and weight?” you might say “What is your height and weight? We will measure you after this and you’ll only get paid for participation if the difference is 0.” But if you’re verifying then why ask in the first place? And what if verification is impractical? And, most relevant for us, what if it’s impossible, such as with subjective self-evaluations (“Are you kind to others?”)?

The problem there is clearest if we hit pause on saving democracy and take a moment to try and save dictatorship. As a longtime scholar and organizer of self-governing communities, I’m comfortable saying that many communities could do worse than structure their governance under a benevolent dictatorship. A lot of groups, organizations, and communities that I admire do. In its most ideal form, benevolent dictatorship is not that different from democracy, because the dictator, being benevolent, is caring, curious, and motivated to understand and integrate everyone’s needs. As a result, the dictator will generate just the kinds of solutions that a healthy democracy would, and they’ll probably do it with fewer meetings.

So why not replace everything with benevolent dictatorship? The main problem is fragility. Nothing systemic keeps the “benevolent” in there. It’s generally luck, and you have to keep getting lucky because your first dictator won’t be your last. Benevolent dictatorship slips very easily into the bad kind that has reliably been there for humanity’s darkest moments. Whether it’s through bad succession or the corrupting influence of power, no tool we have can reliably keep a benevolent dictatorship benevolent.

Well there might be one tool. What if we had incentive compatible personality tests? It’s easy to imagine the important questions you would want to ask a candidate for dictator.

  • “How likely are you to abuse power?”
  • “How do you respond to disagreement?”
  • “How do you respond to insults?”
  • “If a brakeless trolley is hurtling toward a loved one, and you’re at the switch that can divert it onto another track with n people you’ve never met, what is the largest n you’ll tolerate?”

Asking is easy; what’s hard is knowing whether the answer is honest. If there were a way to know what someone really thinks, you’d just disqualify the people who give bad answers and appoint the people who give good answers.

That’s what’s special about double-blind policy. It’s a step in the direction of incentive compatibility for self-evaluation. You can’t lie about a question if nobody knows what was asked.

Quibbles

For all kinds of reasons this is not a full solution to the problem. One obvious problem: even if no one knows the rules, anyone can guess. The whole point of introducing midazolam into the social dilemma game was that you know you will come to the same conclusions as yourself in the future. So just because you don’t know the criteria doesn’t mean you don’t “know” the criteria. You just guess what you would have suggested, and that’s probably it. To solve this, the double-blind policy mechanism has to be collaborative. It requires that several people participate, and it bets that a collaborative deliberation process among many members will produce integrated or synergistic criteria that no single member would have thought of alone.

Other roles for psychoactives in governance design

The uses of psychoactives in community governance are, as far as I know, entirely unconsidered. Some cultures have developed the ritualistic sharing of tobacco or alcohol to formalize an agreement. Others have developed the practice of ordering the disloyal to drink hemlock, a deadly acetylcholine antagonist. That’s all I can think of. I’m simultaneously intrigued to imagine what else is out there and baseline suspicious of anyone who tries.

The ethics

For me this is all one big thought experiment. But I live in the Bay Area, which is governed by strange laws like “The Pinocchio Law of The Bay” which states:

“All thought experiments want to go to the San Francisco Bay Area to become real.”

(I just made this up but it scans)

Hypothetically, I’m very pleased with the idea of solving governance problems with psychoactives, but I’ll admit that it suffers from being awful-adjacent: it’s very, very close to being awful. I see three things that could tip it over:
1) If you’re not careful it can sound pretty bad, especially to any audience that wants to hate it.
2) If you don’t know that the idea has a legitimate intellectual grounding in behavioral science, then it just sounds druggy and nuts.
3) If it’s presented without any mention of the potential for abuse then it’s naive and dangerous.

So let’s talk about the potential for abuse. The double-blind policy process with collective amnesia has serious potential for abuse. Non-consensual administration of memory drugs is inherently horrific. Consensual administration of memory drugs automatically spawns possibilities for non-consensual use. Even if it didn’t, consensual use itself is fraught, because what does that even mean? The framework of consent requires being able and informed. How able and informed are you when you can’t form new memories?

So any adoption or experimentation around this kind of mechanism should provide for secure storage and should come with a security protocol for every stage. Recording video or having observers who can see (but not hear?!) all deliberations could help. I haven’t thought more deeply than this, but the overall ethical strategy would go like this: You keep abuse potential from being the headline of this story by credibly internalizing the threat at all times, and by never being satisfied that you’ve internalized it enough. Expect something to go wrong and have a mechanism in place for nurturing it to the surface. Honestly there are very few communities that I’d trust to do this well. If you’re unsure you can do it well, you probably shouldn’t try. And if you’re certain you can do it well, then definitely don’t try.


Community building as one of your basic skills

A healthy democracy requires a citizenry full of people who have built communities, held office, and started initiatives. You can’t expertly serve an intentionally organized group if you haven’t built and nearly broken several. That is why my strategy for serving democracy is to focus on online communities, which provide this opportunity to more people than ever before. Of course, as a strategy it isn’t very strategic: it boils down to “change everybody.” But it’s necessary, so you have to proceed as if it’s possible. I’m still struggling with it. But there are patterns that have pulled it off. Organizations like the Scouts (the Boy Scouts and Girl Scouts of America, officially the largest paramilitary organizations in the US) happen to require all of these things, and they’ve built a system that lets people learn in flexible, mentor-driven ways. Making this poster helped me get these ideas down clearly.


The unpopular hypothesis of democratic technology. What if all governance is onboarding?

There’s this old organizer wisdom that freedom is an endless meeting. How awful. Here the sprightly technologist steps in to ask:

“Does it have to be? Can we automate all that structure building and make it maintain itself? All the decision making, agenda building, resource allocating, note taking, emailing, and even trust?
We can; we must.”

That’s the popular hypothesis, that technology should fix democracy by reducing friction and making it more efficient. You can find it under the hood of most web technologies with social ideals, whether young or old. The people in this camp don’t dispute the need for structure and process, but they’re quick to call it bureaucracy when it doesn’t move at the pace of life, and they’re quick to start programming when they notice it sucking up their own free time. Ideal governance is “the machine that runs itself”, making only light and intermittent demands for citizen input.

And against it is the unpopular hypothesis. What if part of the effectiveness of a governance system is in the tedious work of keeping it going? What if that work builds familiarity, belonging, bonding, sense of agency, and organizing skills? Then the work of keeping the system up is itself the training in human systems that every member needs for a community to become healthy. It instills in every member pragmatic views of collective action and how to get things done in a group. Elinor Ostrom and Ganesh Shivakoti give a case of this among Nepali farmers: when state funds replaced hard-to-maintain dirt irrigation canals with robust concrete ones, farmer communities stopped sharing water equitably. What looked like maintaining ditches was actually maintaining an obligation to each other.

That’s important because under the unpopular hypothesis, the effectiveness of a governance system depends less on its structure and process (which can be virtually anything and still be effective) and more on what’s in the head of each participant. If they’re trained, aligned, motivated, and experienced, any system can work. This is a part of Ostrom’s “institutional diversity”. The effective institution focuses on the members rather than the processes by making demands of everyone, or “creating experiences.”

Why are organizations bad computers? Because that isn’t their only goal.

In tech circles I see a lot of computing metaphors for organizations and institutions. Looking closer at one helps pinpoint the crux of the difference between the popular and unpopular hypotheses. In a computer or a program, many gates or functions are linked into a flow that processes inputs into outputs. In this framework, a good institution is like a good program, efficiently and reliably computing outputs. Under the metaphor, all real-world organizations look bad. In a real program, a function computes reliably, quickly, and accurately without needing permission or buy-in or interest. In an organization, each function needs all of those things.

So organizations are awful computers. But that’s not a problem, because an organization’s goal isn’t to compute, but to compute things that all of its functions want computed. It’s a computer that exists by and for its parts. The tedium of getting buy-in from all the functions isn’t an impediment to proper functioning; it is proper functioning. The properly functioning organization-computer is constantly doing the costly hygiene of ensuring the alignment of all its parts, and if it starts computing an output wrong, it’s not a problem with the computer, it’s a problem with the output.

If the unpopular hypothesis is right, then we shouldn’t focus on processes and structures—those might not matter at all—but on training people, keeping them aligned with each other, and keeping the organization aligned with them. It supports another hypothesis I’ve been exploring, that all governance is onboarding.

Less Product, more HR?

This opens up a completely different way of thinking about governance. Through this lens,

  • Part of the work of governance is agreeing on what to internalize.
  • A rule is the name of the thing that everyone agrees that everyone should internalize.
  • The other part of governing is creating a process that helps members internalize it (whether via training, conversation, negotiation, or even a live-action tabletop role-playing simulation).
  • Once a rule is internalized by everyone, it is irrelevant and can be replaced by the next rule to work on.

In this system, the constraints on the governance system depend on human limits. You need rules because an org needs to be intentional about what everyone internalizes. You’ll keep needing rules because the world is changing and the people are changing, and so what to internalize is going to change. You can’t have too many rules at one time because people can’t keep too many rules-in-progress in mind at once. You need everyone doing and deciding the work together because it’s important that the system’s failures feel like failures of us rather than them.

With all this, it could be tempting to call the popular hypothesis the tech-friendly one. But there’s still a role for technology in governance systems that follow the unpopular hypothesis. It’s just a change in focus, toward technologies that support habit building, skill learning, training, and onboarding, and that monitor the health of the shared agreements underlying all of these things. It encourages engineers and designers to move from the easy problems of system and structure to the hard ones of culture, values, and internalization. The role of technology in supporting self-governance can still be to make it more efficient, but with a tweak: not more efficient at arranging parts into computations, but more efficient at maintaining its value to those parts.

Maybe freedom is an endless meeting and technology can make that palatable and sustainable. Or maybe the work of saving democracy isn’t in the R&D department, but HR.