Adaptive governance: tuning for emergent politics

For me, the single biggest factor in the effectiveness of a decision process is whether the group members are all able to assume good faith in each other. This common belief in mutual good faith means that everyone wants what is best for the organization. They may differ on what that is, but they trust the system and, more importantly, each other to want the best for the group and its members. This assumption is an explicit principle of prominent consensus organizations such as Wikipedia.

As the mutual assumption of good faith erodes, governance starts to have to operate under conditions of politics. There is an opposition. It should be stymied. Its beliefs are dangerous for the organization. Its members are motivated by maliciousness or ignorance. I think there is more to this difference between governance with and without mutual good faith than the presence or absence of contestation. There is conflict and competition in good faith systems, but members engage in it believing that those differences are pursued sincerely and are at least theoretically reconcilable.

It’s not a clean dichotomy, but those are the poles, and you can divide political science by which pole it focuses on: the spiritual core of the discipline is interested in “politics,” organizing without good faith, while a constellation of secondary areas—policy studies, public administration, community governance, deliberative and participatory democracy—is interested in “governance,” organizing with good faith.

Naturally, how you should make decisions depends on what regime you’re in: collaborative or political. In political arenas, consensus is easily gamed, while unitary authority is more easily abused and less legitimate. Voting is more robust to politics, and systems like parliamentary procedure are very much designed to help bodies move forward in the face of political rifts. At the extreme are mechanisms for casting votes under hostile regimes, as with the holographic voting scheme I proposed in Human Computation (http://doi.org/cwnn).

And just as there are many forms that are not robust to politics, there are many that are not adapted to consensus. The excessive formality of parliamentary procedure is clunky, inefficient, and even awkward within a small trusted core group. Voting, which is almost elegant for navigating political conflicts, permits a lazy and marginalizing shortcut around a whole family of processes designed to create nuanced win-win solutions. And with enough alignment and training these can be surprisingly efficient as well.

This would be a straightforward design problem except that communities can, of course, change what regime they are in. The consensus assumption of mutual good faith must be a literal assumption. Once one person breaks it, either by acting in bad faith or by assuming bad faith in another, their defection will cause defection in others. In this sense, a community’s common sense of mutual good faith is a fragile common-pool resource that is vulnerable to sudden crash: it takes only one defection to start a cascade of failure that ratchets trust downward. Transitions in the other direction, up from political to collaborative, are trust-building projects that take time, patience, and vulnerability, especially when they occur without the luxury of a common enemy or other shared outside threat.

The idea

How a group should work depends on where it is at the moment. But how do you know where you are? Imagine a system that decided the process for every decision on the basis of how political it is. The procedures of the US Congress and Robert’s Rules already include versions of this, in which a simple vote or process is the default but any member or minority can trigger a more formal, politically robust process: from straw poll to roll call, or from public to closed ballots. Precisely because good faith implies consensus, allowing a single person to request a politically robust procedure amounts to a mechanism for detecting breaks from the consensus belief in mutual good faith. More mundanely, many internally friendly groups will adopt Robert’s Rules formally but proceed informally until internal conflict erupts. When that happens, the community will open up its operating manual or bylaws and start operating its (now political) process by the book.
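
The escalation trigger described above can be sketched as a tiny rule. This is a minimal illustration, not an implementation of any real parliamentary software; the procedure names are hypothetical.

```python
def decision_process(formal_requests: int) -> str:
    """Robert's Rules-style escalation sketch: default to an informal
    procedure, but treat even a single member's request for formality
    as a signal that mutual good faith may have broken, and escalate
    to the politically robust version. Names are illustrative."""
    if formal_requests >= 1:
        return "roll-call vote"   # formal, robust to politics
    return "straw poll"           # informal consensus default
```

The point of the one-request threshold is that under genuine mutual good faith, no one has a reason to invoke it, so any invocation is itself a measurement.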

These examples are valuable for illustrating both the practical utility of adaptive governance processes and the weakness of how they are currently implemented. By basing adaptation on individual prerogative rather than an objective signal, these modulations of gameability are themselves gameable, as can be seen with the filibuster and other strategies that minorities use within bad-faith regimes to get their way by gumming up the works of the entire decision body.

What if there were a non-gameable automatic procedure for detecting political rifts, whose output would determine the process for each decision, ranging from informal consensus or delegation to fully formal voting? This would allow a governance system that can automatically tune to conditions on the ground. It could even be designed to incentivize a community to maintain good faith by producing rewards for sustaining the collaborative regime.
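
As a sketch of what such automatic tuning might look like: suppose some barometer yields a politicization score between 0 (full mutual good faith) and 1 (open factional conflict). The thresholds and procedure names below are illustrative assumptions, not empirically derived.

```python
def select_procedure(politicization: float) -> str:
    """Map a measured level of politicization to a decision procedure.
    Score of 0 = full mutual good faith; 1 = open factional conflict.
    Cutoffs here are placeholders a community would tune for itself."""
    if politicization < 0.2:
        return "informal consensus"       # cheap, trusting default
    elif politicization < 0.5:
        return "straw poll + discussion"  # light structure
    elif politicization < 0.8:
        return "formal recorded vote"     # robust to defection
    else:
        return "secret ballot"            # robust to coercion
```

A community could also attach rewards to sustained low scores, making the collaborative regime itself something members are incentivized to protect.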

I can think of a few mechanisms. Depending on how robust they turn out to be to bad faith manipulation (I haven’t thought them all through), a system might use one or a combination when deciding. Some ideas include:

  • State of the art. Define a one-vote threshold for triggering more formal processes, detecting breaks from consensus by permitting any single member to break consensus.
  • Simple surveys. Periodically and privately asking members how many other members they doubt could provide a measure of the emergence of politics in a body. This is probably gameable, but tied to a threshold, the mechanism could trigger payoffs or consequences that stabilize it, or at least complement other interventions. A better option is dual ballots.
  • Dual ballots. When you cast a public vote, two things happen: you signal a preference and you implement a preference. And those things can differ. That’s why politicians will vote down legislation proposed by an opposing party even when they agree with it. But if a vote plays two roles at the same time, it would seem impossible to know how much any given vote is just a signal. To separate these two roles, imagine conducting each vote twice, once with public ballots and once with private ballots. The gap between them could provide a running barometer of politicking in a political body. To prevent gaming, it would be important to ensure that the private ballot is the binding one in the event of a difference between the two. Alternatively, the vote could be re-held when the gap exceeds a threshold, or the outcome could be drawn randomly from one of the two ballots, which would undermeasure the delta but could manage the incentive to always lie.
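
A minimal sketch of the dual-ballot barometer, assuming simple yes/no votes: the measure of politicking is the fraction of members whose public and private ballots disagree. The function name and data shapes are my own illustrative choices.

```python
def politicking_score(public_votes: dict, private_votes: dict) -> float:
    """Fraction of members whose public ballot differs from their
    private ballot. Each input maps member id -> bool (True = yes).
    0.0 means public signals match sincere preferences; higher
    values suggest strategic, i.e. political, voting."""
    members = public_votes.keys() & private_votes.keys()
    if not members:
        return 0.0
    flips = sum(public_votes[m] != private_votes[m] for m in members)
    return flips / len(members)
```

Per the anti-gaming point above, the private ballot would be the binding one; the score itself is only the diagnostic that feeds the rest of the adaptive machinery.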

With these barometers, a community gains tools for tracking its success at finding agreement across difference. Pushed further, other interventions that manage or rein in the emergence of politics could be tied to them.

With tools for detecting how political a political body is, we open up the possibility of governance processes that meet the community where it is, and that may even structurally reinforce cooperation and collaboration in governance.

About

This entry was posted on Monday, June 24th, 2024 and is filed under governance.