Adaptive governance: tuning for emergent politics

For me, the single biggest factor in the effectiveness of a decision process is whether the group members are all able to assume good faith in each other. This common belief in mutual good faith means that everyone wants what is best for the organization. Members may differ on what that is, but they trust the system and, more importantly, each other, to want the best for the group and its members. This assumption is an explicit principle of prominent consensus organizations such as Wikipedia.

As the mutual assumption of good faith erodes, governance has to start operating under conditions of politics. There is an opposition. It should be stymied. Its beliefs are dangerous for the organization. Its members are motivated by malice or ignorance. I think there is more to this difference between governance with and without mutual good faith than the presence or absence of contestation. There is conflict and competition in good-faith systems, but members engage in it believing that their differences are sincerely held and at least theoretically reconcilable.

It’s not a clean dichotomy, but those are the poles, and you can divide political science by which pole it focuses on: the spiritual core of political science is interested in “politics,” organizing without good faith, while a constellation of secondary areas (policy studies, public administration, community governance, deliberative and participatory democracy) is interested in “governance,” organizing with good faith.

Naturally, how you should make decisions depends on what regime you’re in: collaborative or political. In political arenas, consensus is easily gamed, while unitary authority is more easily abused and less legitimate. Voting is more robust to politics, and systems like parliamentary procedure are very much designed to help a body move forward in the face of political rifts. At the extreme are mechanisms for casting votes under hostile regimes, as with the holographic voting scheme I proposed in Human Computation (http://doi.org/cwnn).

And just as there are many forms that are not robust to politics, there are many that are not adapted to consensus. The excessive formality of parliamentary procedure is clunky, inefficient, and even awkward within a small trusted core group. Voting, which is almost elegant for navigating political conflicts, permits a lazy and marginalizing shortcut around a whole family of processes designed to create nuanced win-win solutions. And with enough alignment and training, those consensus processes can be surprisingly efficient as well.

This would be a straightforward design problem except that communities can, of course, change what regime they are in. The consensus assumption of mutual good faith has to hold for literally everyone. Once one person breaks it, either by acting in bad faith or by assuming bad faith in another, their defection will cause defection in others. In this sense, a community’s common sense of mutual good faith is a fragile common-pool resource that is vulnerable to sudden crash: it only takes one defection for the cascade of failure to ratchet down. Transitions in the other direction, up from political to collaborative, are trust-building projects that take time, patience, and vulnerability, especially when they occur without the luxury of a common enemy or other shared outside threat.

The idea

How a group should work depends on where it is at the moment. But how do you know where you are? Imagine a system that decided the process for every decision on the basis of how political it is. The procedures of the US Congress and Robert’s Rules already have versions of this, in which a simple vote or process is the default but any member or minority can trigger a more formal, politically robust process: from straw poll to roll call, or from public to closed ballots. Precisely because good faith implies consensus, allowing a single person to request a politically robust procedure amounts to a mechanism for detecting breaks from the consensus belief in mutual good faith. More mundanely, many internally friendly groups will adopt Robert’s Rules formally but proceed informally until internal conflict erupts. When that happens, the community will open up its operating manual or bylaws and start operating its (now political) process by the book.

These examples are valuable for illustrating both the practical utility of adaptive governance processes and the weakness of how they are currently implemented. By basing adaptation on individual prerogative rather than on an objective signal, these modulations of gameability are themselves gameable, as can be seen with the filibuster and other strategies that minorities use within bad-faith regimes to get their way by gumming up the works of the entire decision body.

What if there were a non-gameable automatic procedure for detecting political rifts, whose output would determine the process for each decision, ranging from informal consensus or delegation to fully formal voting? This would allow a governance system to automatically tune itself to conditions on the ground. It could even be designed to incentivize a community to maintain good faith by producing rewards for sustaining the collaborative regime.

I can think of a few mechanisms. Depending on how robust they turn out to be to bad-faith manipulation (I haven’t thought them all through), a system might use one or a combination of them when choosing how each decision gets made. Some ideas include:

  • State of the art. Define a one-vote threshold for more formal processes, so that any member who breaks consensus also reveals the break.
  • Simple surveys. Periodically and privately asking members how many other members they doubt could provide a measure of the emergence of politics in a body. This is probably gameable, but tied to a threshold, this mechanism could trigger other payoffs or consequences that stabilize it, or at least help it complement other interventions. Better are dual ballots.
  • Dual ballots. When you cast a public vote, two things happen: you signal a preference and you implement a preference. And those can differ; that’s why politicians will vote down legislation proposed by an opposing party even when they agree with it. But if a vote plays two roles at the same time, it would seem impossible to know how much any given vote is just a signal. To separate the two roles, imagine conducting each vote twice, once with public ballots and once with private ballots. The gap between the two tallies could provide a running barometer of politicking in a political body (a minimal sketch follows this list). To prevent gaming, it would be important to ensure that the private ballot is the binding one in the event of a difference between the two. Alternatively, the vote could be re-held whenever the gap exceeds a threshold, or the outcome could be drawn randomly from between the two ballots; that would undermeasure the delta, but could manage the incentive to always lie.
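
To make the dual-ballot barometer concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the flip-count measure, the thresholds, and the three-process menu are placeholders rather than a worked-out mechanism.

```python
from enum import Enum

class Process(Enum):
    INFORMAL_CONSENSUS = "informal consensus"
    SIMPLE_VOTE = "simple vote"
    FORMAL_PRIVATE_BALLOT = "formal private ballot"

def politicking_delta(public: dict, private: dict) -> float:
    """Fraction of members whose public vote differs from their private one.

    Near 0 suggests sincere voting; a large delta suggests members are
    using the public ballot to signal rather than to decide."""
    members = public.keys() & private.keys()
    flips = sum(public[m] != private[m] for m in members)
    return flips / len(members) if members else 0.0

def choose_process(delta: float) -> Process:
    """Map the barometer reading to a decision process (thresholds invented)."""
    if delta <= 0.05:
        return Process.INFORMAL_CONSENSUS
    if delta <= 0.25:
        return Process.SIMPLE_VOTE
    return Process.FORMAL_PRIVATE_BALLOT

# Two of four members opposed the proposal publicly but favored it privately.
public = {"ana": "yes", "ben": "no", "cal": "no", "dee": "yes"}
private = {"ana": "yes", "ben": "yes", "cal": "yes", "dee": "yes"}
print(choose_process(politicking_delta(public, private)))  # FORMAL_PRIVATE_BALLOT
```

The sketch only chooses a process; making the private ballot the binding one, as suggested above, would be a separate rule.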

With these barometers, a community gains tools for tracking its successes at finding agreement under difference. Pushed further, other interventions that manage politics or rein it in could be tied to its emergence.

With tools for detecting how political a political body is, we open up the possibility of governance processes that meet a community where it is, and that may even structurally reinforce cooperation and collaboration in governance.



Why decentralization is always ripe for co-optation

or
Will your transformative technology just entrench the status quo?

Things have come a long way since I was first exposed to cryptocurrency. Back in 2011 it was going to undermine nation-states by letting any community form its own basis of exchange. A decade later, crypto has little chance of fulfilling its destiny as a currency, but that’s OK because it’s proven ideal for the already wealthy as a tool for tax evasion, money laundering, market manipulation, and infrastructure capture. States like it for the traceability, and conventional banks incorporate it to chase the wave and diversify into a new high-risk asset class.

This is not what crypto imagined for itself.

But it’s not a surprise. You can see the same dynamic play out in Apple Music, YouTube, Substack, and the post-Twitter scramble for social media dominance. These technologies are sold to society on their ability to raise the floor, but they cash out on their ability to raise the ceiling. The debate on this played out between Chris Anderson (longtime editor of Wired and author of The Long Tail) and Anita Elberse (in her 2013 book Blockbusters). In response to Anderson’s argument that social media technologies empower the “long tail” of regular-people contributors, Elberse countered with evidence of how they have increased market concentration by making the biggest bigger.

To skip to the end of that debate, the answer is “both”. Technologies that make new means available to everyone make those means available to the entrenched as well. The tail gets fatter at the same time as the peaks get taller. It’s all the same process.
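
A toy calculation (all numbers invented) makes the “both” concrete: if a technology multiplies everyone’s reach by the same factor, the ratio between incumbent and newcomer is preserved while the absolute gap between them keeps widening.

```python
# Same growth factor for incumbent and newcomer: the ratio between them
# is preserved, while the absolute gap between them keeps growing.
incumbent, newcomer = 1000.0, 1.0
for year in range(5):
    print(year, round(incumbent - newcomer, 1))  # 999.0, 1498.5, 2247.8, ...
    incumbent *= 1.5
    newcomer *= 1.5
```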

So the question stops being “will this help the poor or the rich?” It becomes “who will it help faster?” The question is no longer transformative potential, but differential transformative power. Can this technology undermine the status quo faster than it bolsters it?

And for most of these technologies, the answer is “no”. Maybe, like crypto, a few people fell up and a few fell down. That is not transformation.

Why do people miss this? Because they stop at

“centralization = bad for the people; decentralization = good for the people”.

We forget it’s dual, that

“centralization = good for the entrenched; decentralization = good for the entrenched”

Centralization increases the efficiency of an already-dominant system, while decentralization increases its reach.

This all applies just fine to the latest technology that has people looking for transformative potential: decentralized identity (DID). It’s considered important because so many new mechanisms in web3 require that addresses map one-to-one and onto (that is, bijectively) human individuals. So if identity can be solved, then web3 is unleashed. But, thinking for just a second, decentralized identity technologies will fall into the same trap of entrenching the status quo faster than they realize their transformative potential. Let’s say that DID scales privacy and uniqueness. If that happens, then nothing keeps an existing body from running with the uniqueness features and dropping the privacy features.

If you’re bought into my argument so far, then you see that it’s not enough to develop technologies that have the option of empowering people, because most developers won’t take that option. You can’t take over just by growing because you can’t grow faster than the already grown. What is necessary is systems that are designed to actively counter accumulation and capture.

I show this in a paper looking at the accumulation of advantage by US basketball teams. For over a century, American basketball teams have been trying to gain and retain advantages over each other. Over the same period, the leagues hosting them have served “sport over team,” exercising their power to change the rules to maintain competitive balance between teams. By preventing any one team from becoming too much more powerful than any other, you keep the sport interesting and you keep fans coming.

But what we’ve actually seen is that, over this century, basketball games have become more predictable: if Team A beat Team B and Team B beat Team C, then Team A has become more and more likely to beat Team C. This is evidence that teams have diverged from each other in skill, despite all the regulatory power that leagues have been given to keep them even. If the rich get richer even in systems with an active, enduring agency empowered to prevent the rich from getting richer, then concentration of power is deeply endemic and can’t just be wished away. It has to be planned for and countered.
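
As a toy illustration of that statistic (the paper’s actual measure is surely more careful), here is one way to score how transitive, and therefore how predictable, a season’s results are:

```python
from itertools import permutations

def transitivity_rate(beat: dict) -> float:
    """Share of ordered triples (A, B, C) with A beating B and B beating C
    where A also beats C. A rate rising over decades would signal that
    team skills have diverged."""
    triples = [(a, b, c) for a, b, c in permutations(beat, 3)
               if b in beat[a] and c in beat[b]]
    if not triples:
        return 0.0
    return sum(c in beat[a] for a, b, c in triples) / len(triples)

season = {"A": {"B", "C"}, "B": {"C"}, "C": set()}  # perfectly transitive
print(transitivity_rate(season))  # 1.0
```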

This is why redistribution is a core principle of progressive and socialist politics. You can’t just introduce a new tweak and wait for things to correct. You need a mechanism to actively redistribute at regular intervals. Like taxes.
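
To sketch why regular intervals matter (rates invented, minimal model): if everyone grows by a factor g each period and a flat tax t is pooled and redistributed equally, the gap between any two parties shrinks by g·(1−t) per period, so it decays whenever g·(1−t) < 1.

```python
def year(wealth, growth=1.05, tax=0.10):
    """Everyone grows 5%, then a 10% flat tax is pooled and split equally."""
    grown = [w * growth for w in wealth]
    pot = sum(w * tax for w in grown)
    return [w * (1 - tax) + pot / len(grown) for w in grown]

wealth = [1000.0, 1.0]
for _ in range(30):
    wealth = year(wealth)

# Gap decays by 1.05 * 0.9 = 0.945 per year: roughly 183, down from 999,
# even though total wealth keeps growing the whole time.
print(round(wealth[0] - wealth[1], 1))
```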

In web3, there aren’t many technologies that clear the higher bar of actively resisting centralization. One example might be quadratic voting, which has taken off probably because its market-centric branding has kept it from being recognized as redistributive (it is).

So for now my attitude toward decentralization is “Wake me up when you have a plan to grow faster than you can be co-opted.” Wake me up when you’ve decentralized taxation.


Psychoactives in governance | The double-blind policy process

I’m often surprised at how casual so many communities are about who they let in. To add people to your membership is to steer your community in a new direction, and you should know what direction that is. There’s nothing more powerful than a group of aligned people, and nothing more difficult than steering a group when everyone wants something different for it. I’ve seen bad decisions on who to include ruin many communities. And, on the other hand, being intentional about it can have a transformative effect, leading to inspiring alignment and collaboration. The best collaborations of my life have all been in discerning communities.

So what does it mean to be intentional about membershipping? You could say that there are two overall strategies. One is to go slow and really get to know every prospective member before inviting them fully into the fold. The other is to be very explicit, providing narrow, objective criteria for membership. Both have upsides and downsides. If you spend a lot of time getting to know someone, there will be no surprises. But this can produce cliqueishness and cronyism: who else have you spent that much time with besides your own friends? On the other hand, communities that base membership on explicit, objective criteria can be exploited. A community I knew wanted tidy and thoughtful people, so it would filter people on whether they helped with the dishes and brought dessert. The thinking was that a person who does those things naturally is certainly tidy and thoughtful. But every visitor knew to bring dessert and help with the dishes, regardless of what kind of person they were, so the test failed as an indicator.

We need better membershipping processes. Something with the fairness and objectivity of explicit criteria, but without their vulnerability to being faked. There are lots of ways that scholars solve this kind of problem: they theorize special mechanisms and processes. But wouldn’t it be nice if we could select people who just naturally bring dessert, help with dishes, ask about others, and so on? Is that really so hard? To solve it, we’re going to do something different.

The mechanism: the double-blind policy process with collective amnesia

Amnesia is usually understood as memory loss. But that’s actually just one kind, called retrograde amnesia: the inability to access memories from before an event. The other kind of amnesia is anterograde: an inability to form new memories after some event. It’s not that you lost them; you never formed them in the first place. We’re going to imagine a drug that induces temporary anterograde amnesia. It prevents a person from forming memories for a few hours.

To solve the problem of bad membershipping, we’re going to artificially induce anterograde amnesia in everyone. Here’s the process:

  1. A community’s trusted core group members sit and voluntarily induce anterograde amnesia in themselves (with at least two observers monitoring for safety).
  2. In a state of temporary collective amnesia, the group writes up a list of membership criteria that are precise, objective, measurable, and fair. As much as possible, items should be the result of deliberation rather than straight from the mind of any one person.
  3. They then seal the secret criteria in an envelope and forget everything.
  4. Later, the core group invites a prospective new member to interview.
  5. The interview isn’t particularly well structured, because no one knows what it’s looking for. So instead it’s a casual, wide-ranging affair involving a range of activities that really have nothing to do with the community’s values. These activities are varied enough to reveal many dimensions of the prospective’s personality. An open-ended personality test or two could work as well. What you need is a broad activity pool that elicits a range of illuminating choices and behaviors. These are observed by the membership committee members, but not discussed or acted upon until…
  6. After the interview, a group of members sits to deliberate on the prospective’s membership, by
    • collectively inducing anterograde amnesia,
    • opening the envelope,
    • recalling the prospective’s words and choices and behavior over the whole activity pool,
    • judging all that against the temporarily revealed criteria,
    • resealing the criteria in the envelope,
    • writing down their decision, and then
    • forgetting everything
  7. Later, this membership committee reads the decision they came to, to find out whether they will be welcoming a new peer to the group.

The effect is that the candidate gets admitted (or not) in a fair, systematic way that can’t be abused. Why does it work? No one knows how to abuse it. In a word, you can’t game a system if literally nobody knows what its rules are. Not knowing the rules that govern your society is normally a problem, but it seems to be just fine for membership rules, maybe because they are defined around discrete, intermittent events.
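
In computational terms, the sealed envelope is a commitment: the criteria are fixed and binding but only readable inside a context whose memory gets discarded. Here is a loose Python analogue (using the third-party cryptography package; the criteria and names are invented). The analogy is imperfect, since whoever holds the key could peek at any time, which is exactly why the human version needs the safeguards discussed under ethics below.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # held collectively, like the envelope
envelope = Fernet(key).encrypt(b"helps with dishes; asks about others")

def amnesia_session(observations: str) -> str:
    """Decrypt, judge, and let the criteria go out of scope.

    The function boundary plays the role of the drug: only the verdict,
    never the criteria, survives the session."""
    criteria = Fernet(key).decrypt(envelope).decode().split("; ")
    return "admit" if all(c in observations for c in criteria) else "decline"

print(amnesia_session("brings dessert; helps with dishes; asks about others"))
# -> "admit"
```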

Psychoactives in decision-making

If this sounds fanciful, it’s not: the sedatives propofol and midazolam both have this effect. They are common ingredients in the cocktails of sedatives, anesthetics, analgesics, and tranquilizers that anesthesiologists administer during surgical procedures.

If this sounds feckless or reckless, it’s not. There is an actual heritage of research that uses psychoactives to understand decision-making. I’m a cognitive scientist who studies governance. I learned about midazolam from Prof. Richard Shiffrin, a leading mathematical psychologist and expert in memory and decision-making. He invoked it while proposing a new kind of solution to a social dilemma game from economic game theory. In a social dilemma, two people can cooperate, but each is tempted to defect. Shiffrin suggests that you’ll cooperate if the other person is so similar to you that you know they’ll do whatever you do. He makes the point by introducing midazolam to make it so the other person is you. In Rich’s words:

You are engaged in the simple centipede game decision tree [Ed. if you know the Prisoner’s Dilemma, just imagine that] without communication. However the other agent is not some other rational agent, but is yourself. How? You make the decision under the drug midazolam which leaves your reasoning intact but prevents your memory for what you thought about or decided. Thus you decide what to do knowing the other is you making the other agent’s decision (you are not told and don’t know and care whether the other decision was made earlier or after because you don’t remember). Let us say that you are now playing the role of agent A, making the first choice. Your goal is to maximize your return as agent A, not yourself as agent B. When playing the role of agent B you are similarly trying to maximize your return.

The point is correlation of reasoning: Your decision both times is correlated, because you are you and presumably think similarly both times. If you believe it is right to defect, would you nonetheless give yourself the choice, knowing you would defect? Or knowing you would defect would you not choose (0,0)? On the other hand if you think it is correct to cooperate, would it not make sense to offer yourself the choice? When playing the role of B let us say you are given the choice – you gave yourself the choice believing you would cooperate – would you do so?

— a 2021/09/15 email

The upshot is that if you know nothing except that you are playing against yourself, you are more likely to cooperate, because you know your opponent will do whatever you do, because they’re you. As he proposed it, this was a novel and creative solution to the problem of cooperation among self-interested people. And it’s useful outside of the narrow scenario it isolates. The idea of group identity is precisely that the boundaries of our conceptions of ourselves can expand to include others, so what looks like a funny idea about drugs lets Shiffrin offer a formal mechanism by which group identity improves cooperation.
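
To see the mechanics, here is a toy sketch using standard (invented) Prisoner’s Dilemma payoffs rather than the centipede game Rich describes; the point survives the substitution.

```python
# One-shot Prisoner's Dilemma payoffs as (mine, theirs).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def dominates(a: str, b: str) -> bool:
    """True if move a pays me more than move b for every opponent move."""
    return all(PAYOFF[(a, t)][0] > PAYOFF[(b, t)][0] for t in "CD")

# Against an independent opponent, defection strictly dominates:
print(dominates("D", "C"))  # True

# Against yourself under midazolam, moves are perfectly correlated,
# so the only reachable outcomes are (C,C) and (D,D):
print(max("CD", key=lambda me: PAYOFF[(me, me)][0]))  # "C" (3 beats 1)
```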

Research at the intersection of drugs and decision-making isn’t restricted to thought experiments. For over a decade, behavioral economists in the neuroeconomics tradition have been piecing together the neurophysiology of decision-making by injecting subjects with a variety of endogenous and exogenous substances. For example, see this review of the effects of oxytocin, testosterone, arginine vasopressin, dopamine, serotonin, and stress hormones.

Compared to this other work, all that’s unusual about this post is the idea of administering to a whole group instead of to individuals.

Why save democracy when you can save dictatorship? | The connection to incentive alignment

This mechanism is serious for another reason too. The problem of membershipping is a special case of a much more general problem: “incentive alignment” (also known as “incentive compatibility”).

  • When people answering a survey tell you what they think you want to hear instead of the truth
  • When someone lies in an interview
  • Just about any time that people aren’t incentivized to be transparent

Those are all examples of misalignment, in the sense that individual incentives don’t point toward system goals.

Incentive compatibility is especially challenging for survey design. That’s important because surveys are the least bad way to learn things about people in a standardized way. Incentive-compatible survey design is a real can of worms.

That’s what’s special about the double-blind policy process. It’s a step in the direction of incentive compatibility for self-evaluation: you can’t lie in answer to a question if nobody knows what was asked.

Quibbles

For all kinds of reasons, this is not a full solution to the problem. One obvious problem: even if no one knows the rules, anyone can guess. The whole point of introducing midazolam into the social dilemma game was that you know you will come to the same conclusions as yourself in the future. So just because you don’t know the criteria doesn’t mean you don’t “know” the criteria: you just guess what you would have suggested, and that’s probably it. To solve this, the double-blind policy mechanism has to be collaborative. It requires that several people participate, so that collaborative deliberation over many members produces integrated or synergistic criteria that no single member would have thought of alone.

Other roles for psychoactives in governance design

The uses of psychoactives in community governance are, as far as I know, entirely unconsidered. Some cultures have developed the ritual sharing of tobacco or alcohol to formalize an agreement. Others have ordered the disloyal to drink hemlock juice, a deadly neurotoxin that blocks acetylcholine receptors. That’s all I can think of. I’m simultaneously intrigued to imagine what else is out there and baseline suspicious of anyone who tries.

The ethics

For me this is all one big thought experiment. But I live in the Bay Area, which is governed by strange laws like “The Pinocchio Law of The Bay” which states:

“All thought experiments want to go to the San Francisco Bay Area to become real.”

(I just made this up but it scans)

Hypothetically, I’m very pleased with the idea of solving governance problems with psychoactives, but I’ll admit that it suffers from being awful-adjacent: it’s very, very close to being awful. I see three things that could tip it over:
1) If you’re not careful it can sound pretty bad, especially to any audience that wants to hate it.
2) If you don’t know that the idea has a legitimate intellectual grounding in behavioral science, then it just sounds druggy and nuts.
3) If it’s presented without any mention of the potential for abuse then it’s naive and dangerous.

So let’s talk about the potential for abuse. The double-blind policy process with collective amnesia has serious potential for abuse. Non-consensual administration of memory drugs is inherently horrific. Consensual administration of memory drugs automatically spawns possibilities for non-consensual use. Even if it didn’t, consensual use itself is fraught, because what does that even mean? The framework of consent requires being able and informed. How able and informed are you when you can’t form new memories?

So any adoption of or experimentation with this kind of mechanism should provide for secure storage of the sealed criteria and should come with a security protocol for every stage. Recording video, or having observers who can see (but not hear?!) all deliberations, could help. I haven’t thought more deeply than this, but the overall ethical strategy would go like this: you keep abuse potential from being the headline of this story by credibly internalizing the threat at all times, and by never being satisfied that you’ve internalized it enough. Expect something to go wrong and have a mechanism in place for nurturing it to the surface. Honestly, there are very few communities that I’d trust to do this well. If you’re unsure you can do it well, you probably shouldn’t try. And if you’re certain you can do it well, then definitely don’t try.


Community building as one of your basic skills

A healthy democracy requires a citizenry full of people who have built communities, held office, and started initiatives. You can’t expertly serve an intentionally organized group if you haven’t built and nearly broken several. That is why my strategy for serving democracy is to focus on online communities, which provide this opportunity to more people than ever before. Of course, as a strategy it isn’t very strategic: it boils down to “change everybody.” But it’s necessary, so you have to proceed as if it’s possible. I’m still struggling with it. But there are patterns that have pulled it off. Organizations like the Scouts and the Boys and Girls Clubs of America (officially the largest paramilitary organizations in the US) happen to require all of these things, and they have built systems that let people learn in flexible, mentor-driven ways. Making this poster helped me get these ideas down clearly.


The unpopular hypothesis of democratic technology. What if all governance is onboarding?

There’s this old organizer wisdom that freedom is an endless meeting. How awful. Here the sprightly technologist steps in to ask:

“Does it have to be? Can we automate all that structure building and make it maintain itself? All the decision making, agenda building, resource allocating, note taking, emailing, and even trust? We can; we must.”

That’s the popular hypothesis: that technology should fix democracy by reducing friction and making it more efficient. You can find it under the hood of most web technologies with social ideals, whether young or old. The people in this camp don’t dispute the need for structure and process, but they’re quick to call it bureaucracy when it doesn’t move at the pace of life, and quick to start programming when they notice it sucking up their own free time. Ideal governance is “the machine that runs itself,” making only light and intermittent demands for citizen input.

And against it is the unpopular hypothesis. What if part of the effectiveness of a governance system is in the tedious work of keeping it going? What if that work builds familiarity, belonging, bonding, a sense of agency, and organizing skills? Then the work of keeping the system up is itself the training in human systems that every member needs for a community to become healthy. It instills in every member pragmatic views of collective action and of how to get things done in a group. Elinor Ostrom and Ganesh Shivakoti give a case of this among Nepali farmers: when state funds replaced hard-to-maintain dirt irrigation canals with robust concrete ones, farmer communities stopped sharing water equitably. What looked like maintaining ditches was actually maintaining an obligation to each other.

That’s important because under the unpopular hypothesis, the effectiveness of a governance system depends less on its structure and process (which can be virtually anything and still be effective) and more on what’s in the head of each participant. If they’re trained, aligned, motivated, and experienced, any system can work. This is a part of Ostrom’s “institutional diversity”. The effective institution focuses on the members rather than the processes by making demands of everyone, or “creating experiences.”

Why are organizations bad computers? Because that isn’t their only goal.

In tech circles I see a lot of computing metaphors for organizations and institutions. Looking closer at them helps pinpoint the crux of the difference between the popular and unpopular hypotheses. In a computer or a program, many gates or functions are linked into a flow that processes inputs into outputs. In this framework, a good institution is like a good program, efficiently and reliably computing outputs. Under the metaphor, all real-world organizations look bad. In a real program, a function computes reliably, quickly, and accurately without needing permission or buy-in or interest. In an organization, each function needs all those things.

So organizations are awful computers. But that’s not a problem, because an organization’s goal isn’t to compute; it’s to compute things that all the functions want computed. It’s a computer that exists by and for its parts. The tedium of getting buy-in from all the functions isn’t an impediment to proper functioning, it is proper functioning. The properly functioning organization-computer is constantly doing the costly hygiene of ensuring the alignment of all its parts, and if it starts computing an output wrong, it’s not a problem with the computer, it’s a problem with the output.

If the unpopular hypothesis is right, then we shouldn’t focus on processes and structures—those might not matter at all—but on training people, keeping them aligned with each other, and keeping the organization aligned with them. It supports another hypothesis I’ve been exploring, that all governance is onboarding.

Less Product, more HR?

This opens up a completely different way of thinking about governance. Through this lens,

  • Part of the work of governance is agreeing on what to internalize.
  • A rule is the name of the thing that everyone agrees that everyone should internalize.
  • The other part of governing is creating a process that helps members internalize it (whether via training, conversation, negotiation, or even a live-action tabletop role-playing simulation).
  • Once a rule is internalized by everyone, it is irrelevant and can be replaced by the next rule to work on.

In this system, the constraints on the governance system depend on human limits. You need rules because an org needs to be intentional about what everyone internalizes. You’ll keep needing rules because the world is changing and the people are changing, and so what to internalize is going to change. You can’t have too many rules at one time because people can’t keep track of too many rules-in-progress at once. You need everyone doing and deciding the work together because it’s important that the system’s failures feel like failures of us rather than them.
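
A minimal sketch of the bookkeeping this implies (the rule names and the working-memory cap are invented): governance becomes a pipeline in which a rule stays active until everyone has internalized it, then retires to make room for the next.

```python
from collections import deque

MAX_IN_PROGRESS = 3  # assumed human limit on concurrent rules-in-progress

backlog = deque(["rotate facilitation", "document decisions",
                 "check in before vetoing", "review the budget quarterly"])
in_progress = []   # rules the group is actively internalizing
internalized = []  # absorbed rules that no longer need stating

def governance_cycle(absorbed):
    """One cycle: retire rules everyone has internalized, pull in new ones."""
    global in_progress
    internalized.extend(r for r in in_progress if r in absorbed)
    in_progress = [r for r in in_progress if r not in absorbed]
    while backlog and len(in_progress) < MAX_IN_PROGRESS:
        in_progress.append(backlog.popleft())

governance_cycle(set())                    # load the first three rules
governance_cycle({"rotate facilitation"})  # one absorbed, the next pulled in
print(in_progress, internalized)
```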

With all this, it could be tempting to call the popular hypothesis the tech-friendly one. But there’s still a role for technology in governance systems that follow the unpopular hypothesis. It’s just a change in focus, toward technologies that support habit building, skill learning, training, and onboarding, and that monitor the health of the shared agreements underlying all of these things. It encourages engineers and designers to move from the easy problems of system and structure to the hard ones of culture, values, and internalization. The role of technology in supporting self-governance can still be to make it more efficient, but with a tweak: not more efficient at arranging parts into computations, but more efficient at maintaining its value to those parts.

Maybe freedom is an endless meeting and technology can make that palatable and sustainable. Or maybe the work of saving democracy isn’t in the R&D department, but HR.