Adaptive governance: tuning for emergent politics

For me, the single biggest factor in the effectiveness of a decision process is whether the group members are all able to assume good faith in each other. This common belief in mutual good faith means that everyone wants what is best for the organization. They may differ on what that is, but they trust the system and, more importantly, each other, to want the best for the group and its members. This assumption is an explicit principle of prominent consensus organizations such as Wikipedia.

Without a mutual assumption of good faith, a group enters the regime of politics, in which there is an opposition and it should be stymied: its beliefs are dangerous for the organization, and its members are motivated by malice or ignorance. The difference is bigger than the presence or absence of contestation. There is conflict and competition in good-faith systems, but members engage in it believing that those differences are pursued sincerely and are at least theoretically reconcilable.

It’s not a clean dichotomy, but those are the poles, and you can divide political science by which pole it focuses on: the spiritual core of political science is interested in politics, while a constellation of secondary areas—policy studies, public administration, community governance, deliberative and participatory democracy—is interested in governance.

Naturally, how you should make decisions depends on which regime you’re in: collaborative or political. In political arenas, consensus is easily gamed and authority is easily abused. Voting is more robust to politics, and systems like parliamentary procedure are expressly designed to help a body move forward in the face of political rifts. At the extreme are mechanisms for casting votes under hostile regimes, such as the holographic voting scheme I proposed in Human Computation (http://doi.org/cwnn). Holographic voting is a non-cryptographic anonymity scheme in which individual ballots are unreadable point clouds, but their sum produces the face of the winning candidate.

And just as there are many forms that are not robust to politics, there are many that are not adapted to consensus. The excessive formality of parliamentary procedure is clunky, inefficient, and even awkward within a small, trusted core group. Voting, which is almost elegant for navigating political conflicts, permits a lazy and marginalizing shortcut around a whole family of processes designed to create nuanced win-win solutions. And with enough alignment and training, those processes can be surprisingly efficient as well.

This would be a straightforward design problem except that communities can, of course, change what regime they are in. The consensus assumption of mutual good faith must be literally universal. Once one person breaks it, either by acting in bad faith or by assuming bad faith in another, their defection will cause defection in others. In this sense, the good faith assumption is a fragile common-pool resource that is vulnerable to sudden crash: it only takes one person. And it’s a ratchet: transitions in the other direction, from political to collaborative, are trust-building projects that take time, patience, and vulnerability, particularly without the luxury of a common enemy or other shared outside threat.
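
The crash-and-ratchet dynamic above can be caricatured in a toy model. Everything here is my illustrative assumption, not a claim from the post: a single defection instantly collapses the shared good-faith assumption, while recovery rebuilds trust one member per period.

```python
# Toy model of good faith as a fragile common-pool resource.
# Assumptions (mine, for illustration): one defection crashes trust to
# zero; sustained cooperation rebuilds it at a slow, fixed rate.

def step(trusting: int, total: int, defections: int, rebuild_rate: int = 1) -> int:
    """Return the number of trusting members after one period."""
    if defections > 0:
        return 0  # a single bad-faith act collapses the shared assumption
    return min(total, trusting + rebuild_rate)  # slow trust-building

members = 10
trusting = members
history = []
for defections in [0, 0, 1, 0, 0, 0, 0]:
    trusting = step(trusting, members, defections)
    history.append(trusting)

print(history)  # [10, 10, 0, 1, 2, 3, 4]: instant crash, gradual recovery
```

The asymmetry between the single-step crash and the one-per-period recovery is the ratchet.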

The ideal could be a system that decided the process for every decision on the basis of how political it is. The procedures of the US Congress and Robert’s Rules already have versions of this, in which a simple vote or process is the default but any member or minority can trigger a more formal, politically robust process: from straw poll to roll call, or from public to closed ballots. Precisely because good faith implies consensus, allowing a single person to request a politically robust procedure amounts to a mechanism for detecting breaks from the consensus belief in mutual good faith. More mundanely, many internally friendly groups will adopt Robert’s Rules formally but proceed informally until internal conflict erupts and the community opens up its manual and starts operating the (now political) process by the book.

These examples are valuable for illustrating both the practical utility of adaptive governance processes and the weakness of how they are currently implemented. By basing adaptation on individual prerogative rather than an objective signal, these modulations of gameability are themselves gameable, as can be seen with the filibuster and other strategies that minorities use within bad-faith regimes to have their way by gumming up the works of the entire decision body.

What if there were a non-gameable automatic procedure for detecting political rifts, whose output would determine the process for each decision, ranging from informal consensus or delegation to fully formal voting? This would enable a governance system that automatically tunes itself to conditions on the ground. It could even be designed to incentivize a community to maintain good faith by producing rewards for sustaining the collaborative regime.
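
The core of such a system is a dispatcher from a measured level of political rift to a decision process. A minimal sketch, in which the score source, the thresholds, and the process names are all hypothetical placeholders:

```python
# Hypothetical dispatcher: map a politicization score in [0, 1] to a
# decision process. Thresholds and process names are placeholders, not
# part of the original proposal.

def choose_process(politicization: float) -> str:
    """Select a decision process for a measured level of political rift."""
    if politicization < 0.1:
        return "informal consensus or delegation"
    if politicization < 0.4:
        return "straw poll with discussion"
    if politicization < 0.7:
        return "formal public roll-call vote"
    return "formal secret ballot under full procedure"

print(choose_process(0.05))  # informal consensus or delegation
print(choose_process(0.80))  # formal secret ballot under full procedure
```

Everything interesting, of course, lives in producing a non-gameable politicization score, which is what the mechanisms below are for.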

I can think of a few mechanisms. Depending on how robust they turn out to be to gaming (I haven’t thought them all through), a system might use one or a combination when deciding. Some ideas include:

  • State of the art. Define a one-vote threshold for triggering more formal processes, detecting breaks from consensus by permitting any single member to break it.
  • Simple surveys. Periodically and privately asking members how many other members they doubt could provide a measure of the emergence of politics in a body. This is probably gameable, but tied to a threshold, this mechanism could trigger other payoffs or consequences that stabilize it, or at least help it complement other interventions. Better are dual ballots.
  • Dual ballots. A public vote does two things: it casts a preference and it signals a preference. Those can differ from each other, which explains why politicians will vote down legislation that they agree with if it is proposed by an opposing party. But if they can differ, it seems impossible to know how much any given vote is just a signal, and what the consequences are for the body. Imagine conducting each vote twice, once with public ballots and once with private ballots. This could provide a running barometer of politicking in a political body. To prevent gaming, it would be important to ensure that the private ballot is the binding one in the event of a difference between the two. Alternatively, the vote could be re-held when the delta between the two grows too large, or the outcome could be drawn randomly from between the two ballots, which would lead to undermeasurement of the delta but could manage the incentive to always lie. With this barometer, the community gains a tool for tracking its ability to find agreement under difference. Pushed further, other interventions could be tied to the emergence of politics that manage it or rein it in.
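
The dual-ballot barometer reduces to comparing the two tallies. A minimal sketch (the function name and the normalization are my assumptions):

```python
from collections import Counter

# Sketch of the dual-ballot barometer: the normalized difference between
# public and private tallies as a running measure of politicking.
# 0.0 means the tallies are identical; 1.0 means they are disjoint.

def politicking_delta(public_votes: list[str], private_votes: list[str]) -> float:
    """Fraction of ballots, summed over options, that differ between tallies."""
    pub, priv = Counter(public_votes), Counter(private_votes)
    options = set(pub) | set(priv)
    disagreement = sum(abs(pub[o] - priv[o]) for o in options)
    return disagreement / (2 * len(public_votes))

# Two members publicly vote "no" along party lines but privately agree:
public  = ["yes", "yes", "no", "no", "no"]
private = ["yes", "yes", "yes", "yes", "no"]
print(politicking_delta(public, private))  # 0.4
```

A delta tracked across decisions is exactly the kind of objective signal the dispatcher above a threshold could consume, with the private ballot remaining the binding one.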

With tools for detecting how political a political body is, we open up the possibility of governance processes that meet the community where it is, and that may even structurally reinforce cooperation and collaboration in governance.

About

This entry was posted on Monday, June 24th, 2024 and is filed under governance.