How Asimov was right and wrong about social prediction

https://retrobookcovers.com/foundation-by-isaac-asimov-avon-1966/

Mathematical social science is a joke …

“Humans aren’t predictable.”

“Society doesn’t follow a formula.”

These common beliefs are true, and part of a larger tradition of thought that has treated social science with ridicule and derision since its first glimmers. A contemporary of the first mathematical social scientists called their work “the new science of little men,” a dig at how the averaged qualities of idealized statistical persons lose the qualities of the Great individuals who really drive society.

That was earlier than you’d think: 1860, referencing the 1830s “social physics” of philosopher Auguste Comte, and statistician Quetelet’s “social mechanics” shortly after. These thinkers planted the seeds for a mathematical modeling of social systems. And it turns out that, sometimes, simple toy mathematical models of social systems are astonishingly accurate and incisive.

But the mockery continues. Over a century later, American musical satirist and trained mathematician Tom Lehrer recorded a little ditty teasing those of us “laboring under this delusion in social science that you can make it into a science,” a delusion he saw up close as a teacher of mathematical social science at MIT in the 1960s.

… but mathematical social science advanced physics

So labor we have, for 200 years now. So how’s it actually going? Are our mathematical models of ineffable humans a joke that’s gone two centuries without a punchline? What are the contours of this science, and how has its evolution deviated from the earliest imaginings of a mathematical sociology?

It is fairest to start with a question about the question. Whether you think social science is succeeding or failing in its climb up the ladder of scientific progress, you’re still accepting a linear picture of progress. The typical path of a science is a wilder ride, with as many snakes down as ladders up.

Take the example of statistical physics, also known as statistical mechanics, a fundamental branch of physics that explains steam engines, quantum mechanics, and even rubber bands. It is actually indifferent to what thing you apply it to, as long as there is a lot of that thing, so it works as well on people as particles. For its universal claims, insights about large populations, and ability to capture “emergent” phenomena whose wholes are greater than the sums of their parts, it is the darling of mathematical social science.

Where did statistical mechanics come from? Its initiator, physicist James Clerk Maxwell, got the idea around 1850 while reading about Quetelet’s social mechanics. What do we make of the hierarchy of the sciences now—from physics through chemistry and biology to the sciences of humanity? Has mathematical thinking about humans been as formative to physics as it ultimately was to sociology?

The metaphysics of psychohistory as a yardstick for mathematical social science

With proper regard for social physics and its auspicious record, we’re in the right position to ask about its true potential. It’s not easy to reconcile the harsh reviews of “haters” like Lehrer against the hopeful visions of “dreamers” like his contemporary Isaac Asimov. Asimov’s classic science fiction trilogy Foundation (and recent television series) envisioned a fully expressed mathematical social science that would transform humanity.

For me, Asimov’s science fiction remains the most provocative and maximal expression of the potential of social modeling. He uses Foundation to pose “psychohistory,” a fictional branch of mathematics that his far-future protagonists spend hundreds of years (and pages) using as a crystal ball to plan and then execute humanity’s destiny. In developing it, Asimov, a chemist by training, inadvertently traced Maxwell’s path backward to ground psychohistory directly in statistical physics.

As a result of its development in terms of physics, Asimov’s psychohistory ends up with several physics-inspired characteristics that have and have not borne out in the mathematical social science of today.

  • In the stories, psychohistory works best over larger populations, with the accuracy of its predictions decaying quickly for smaller groups.
  • Psychohistory also fails when populations know what is being predicted for them, because they can adapt.
  • It isn’t a whole science like physics, developing as a conversation between the hypotheses of theory and deductions of experiments, but strictly theoretical, like math.
  • And last, psychohistory is a tool of the elites, carefully stewarded by a technocratic priesthood (eventually a literal one).

Among my colleagues today, many were heavily influenced by the Foundation books. Though they may publicly call themselves “computational social scientists”, “quantitative behavioral scientists”, “complexity scholars”, or even “econophysicists”, many consider themselves early psychohistorians. And I don’t fault them for it. As the physical and mathematical influences on social science grow, it is clear that there are predictabilities in human dynamics, even some laws. Yes, everything affects everything in ways that simple models can’t possibly capture, but some things affect everything more than others, and by focusing on the few key variables that certain systems surface, scientists can predict the statistics of collective emotions, collective memory, collective problem solving, role specialization, and countless other social phenomena.

The predictions

We can look at the imagined properties of Asimov’s mathematical social science one-by-one:

Does today’s social mechanics depend on large populations?

In some way, today’s social physics is more reliable with more people. We know that phenomena like the wisdom of the crowd and self-sorting work better with larger groups. But as social mechanics has developed, the decisive line between accurate and inaccurate hasn’t been a “horizontal” one about population size so much as a “vertical” one about the type of social phenomenon being predicted. Mathematical social scientist David Sumpter proposes four categories of social phenomena: what he calls “statistical” (for aggregative things like voting and crowd wisdom), “interactive” (for social networks, flocking, and other collective behaviors), “chaotic” (for social “three-body problems,” in which the mutual interactions of things on each other take all of them in fundamentally unpredictable directions), and “complex” (which includes organizations, institutions, and other highly structured or nested systems).

In a nutshell, contemporary social mechanics is good at two of the four. It is excellent at predicting social dynamics in the statistical category, occasionally strong in the interactive category, a bit lost in the complexity category, and knows just enough about the chaotic category to successfully keep away. So there is currently a psychohistory for certain kinds of social organization, and not for others. It’s not a matter of size, but type.
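The flagship effect of the “statistical” category, the wisdom of the crowd, can be seen in a few lines of simulation. This is a toy sketch, not drawn from any particular study, and the numbers (a hidden truth of 100, noise of 30) are arbitrary: each simulated member independently guesses a hidden quantity with some noise, and the group’s average guess gets sharply better as the group grows.

```python
import random
import statistics

def crowd_error(group_size, truth=100.0, noise=30.0, trials=2000):
    """Mean absolute error of the crowd's average guess, where each
    member independently estimates `truth` with Gaussian noise."""
    rng = random.Random(42)  # fixed seed so the sketch is reproducible
    errors = []
    for _ in range(trials):
        guesses = [rng.gauss(truth, noise) for _ in range(group_size)]
        errors.append(abs(statistics.mean(guesses) - truth))
    return statistics.mean(errors)

for n in (1, 10, 100):
    print(n, round(crowd_error(n), 1))
```

The error shrinks roughly with the square root of group size, which is exactly the kind of regularity that makes the statistical category the easy case.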

… and does social mechanics stop working on small groups and individuals?

Unlike Asimov’s benchmark, today’s social mechanics faces no penalty for small groups or even individuals. That’s because statistical modeling is just as effective with lots of large things (people) as lots of small things (the neurons that make up people). Researchers like biophysicist William Bialek have shown that statistical mechanics can help us predict the behaviors of populations of neurons, and animal collective behavior researcher Iain Couzin’s mathematical models of decision-making at the neural level, intriguingly named the “Geometry of Decision Making”, show that the same mathematical models serve neural mechanics as well as social mechanics. If neurons are just as beholden to statistics as people, and if it’s meaningful to see an individual as a very large population of neurons, then we can understand why social mechanics handles small and large populations equally well.

I think Asimov was wrong about where mathematical social science would draw the line between tractable and non-tractable.

Will a social prediction hold if the population knows the prediction? 

In the same way that an investor with a crystal ball might cash in on upcoming booms and busts, fictional populations might evade a public psychohistorical prediction for them by planning for it and adapting to it. Asimov takes this very seriously, shaping a lot of Foundation around the importance of keeping psychohistory’s predictions secret.

But in today’s social mechanics, populations are just as likely to meet their prediction with indifference, or even to double down on the predicted outcome. The regular COVID forecasts of 2020–2021 indicate the power of indifference: why doesn’t forecasting an infection spike prevent one? And the idea of the self-fulfilling prophecy gives us cases in which knowing a prediction makes the predicted outcome more, not less, likely. Sometimes the best way to cause a bank run is to predict one.

I think Asimov’s mathematical social science doesn’t need to wring its hands about secrecy as much as it does.

Is social mechanics just a body of theory?

Probably the biggest difference between today’s social mechanics and Asimov’s psychohistory is the role of observation. When we first encounter psychohistory, Asimov’s characters are able to chart the future history of humanity with equations alone, and no “double-checking” against reality. You can take this as a sign that psychohistory is “done.” The fact that today’s social scientists are still deep in experiments, observations, and their implications for theory—and still improving their methods to be able to observe entire social systems—means that real world social mechanics is still all abustle, still working toward theory that is strong enough to lean on.

The areas that are closest to having theory that is “good enough” are those, like epidemiology and operations research, in which the physical world places major constraints on society’s range of motion. A virus cares less about the beliefs of the people in your life and more about how close you’re standing to them. Similarly, the estimated time that your package will arrive is going to be more accurate than the weather forecast for that day. That prediction is good because the trucks delivering it can only go so fast or so slow. In a way, social science today is closest to physics in quality when it is most physical in substance.

For better or worse, today’s mathematical social science is a science, while Asimov’s is a math.

Is social mechanics democratic? 

Today, you probably need a degree, and probably an advanced one, to make important contributions to the mathematics of society. In this way social mechanics is restricted like psychohistory. Still, academics today are public intellectuals and serve society by disseminating their findings, more publicly every year. By making the tools and principles of social prediction accessible, we serve a world in which communities, not priesthoods, have the power to design their own futures. A recent talk by psychologist Mirta Galesic, on design principles for collective learning, shows how groups can improve their ability to learn together. Psychologist Stephen Lewandowsky’s distillations from the science of persuasion and physicist Filippo Menczer’s forensics of social media manipulation can help communities inoculate themselves against the ubiquitous misinformation of the digital age.

Asimov’s mathematical social science is closed, while today’s, while esoteric, is ultimately open.

Wrap

As an academic, I have devoted my life to the scientific approach to society. And even I am not holding my breath for it to be the next physics. It is hard to apply the scientific method to social systems, certainly much harder than to physical systems. Nevertheless, after a few centuries, especially this recent 20th one, social mechanics has developed enough that we can get a sense of what kind of science it is shaping up to be, and what the prospects are for mathematical models to help us take our own reins.

Seth Frey (website, twitter) is a professor at the University of California Davis. His training is in cognitive science and computational social science (and therefore psychohistory?). This article is an output of the Augmented Intelligence Workshop, a project funded by the US National Science Foundation to start conversations on human collective behavior, the science of learning, and computational approaches to social systems.


Why should all languages be preserved? The problem is the question

Of the ~7,000 known human languages, about 40% are endangered, dead, or dying.* By 2100, fewer than half of them will remain, possibly fewer than 1,000.* And few are even missing from the count: one recently discovered language, Bangime of Mali, has managed to stay hidden to the present only because it is spoken in secret.

In talking about language death, I’ve heard a funny question come up — I’ve even asked it myself: “Why do we need all of these languages? Why not just one?” How many systems of communication do we actually need? And wouldn’t we all get along better if there were fewer languages?

The point of this post is to sidestep the question and very concisely argue that its existence is a problem.

First, why are there 118 elements in the periodic table? Why not just one?

More importantly, why haven’t you heard that question before?

Because we don’t control how many elements there are. It’s not up to us. It shouldn’t be. And the number of languages shouldn’t be either. These are ways of being, not curiosities. The UN recognizes deliberate elimination of languages as genocide. It should be easy from there to condemn systems that result in the elimination of languages more indirectly. The languages and cultures of other peoples should have protection and resources. If it’s our choice that some survive and some don’t, then there is an immoral exercise of power over the variety of human experience. The existence of the question is evidence of a way of thinking that is based in an evil attitude toward other cultures. Losing an element from the periodic table or a language from the Ethnologue is a tragedy because it artificially limits the kinds of things that can exist. If we’ve been given 7000 languages then there should be about 7000 when we’re done.

  • stats:
    • http://en.wikipedia.org/wiki/Language_diversity
    • http://www.ethnologue.com/endangered-languages
  • why save them?
    • http://www.unesco.org/new/en/culture/themes/endangered-languages/
    • http://www.unesco.org/new/en/culture/themes/endangered-languages/faq-on-endangered-languages/
    • http://www.unesco.org/new/en/culture/themes/endangered-languages/biodiversity-and-linguistic-diversity/

Why decentralization is always ripe for co-optation

or
Will your transformative technology just entrench the status quo?

Things have come a long way since I was first exposed to cryptocurrency. Back in 2011 it was going to undermine nation-states by letting any community form its own basis of exchange. A decade later, crypto has little chance of fulfilling its destiny as a currency, but that’s OK, because it’s proven ideal for the already wealthy as a tool for tax evasion, money laundering, market manipulation, and infrastructure capture. States like it for the traceability, and conventional banks incorporate it to chase the wave and diversify into a new high-risk asset class.

This is not what crypto imagined for itself.

But it’s not a surprise. You can see the same dynamic play out in Apple Music, YouTube, Substack, and the post-Twitter scramble for social media dominance. These technologies are sold to society on their ability to raise the floor, but they cash out on their ability to raise the ceiling. The debate on this played out between Chris Anderson (longtime editor of Wired) and Anita Elberse (in her 2013 book Blockbusters). In response to Anderson’s argument that social media technologies empower the “long tail” of regular-people contributors, Elberse countered with evidence of how they have increased market concentration by making the biggest bigger.

To skip to the end of that debate, the answer is “both”. Technologies that make new means available to everyone make those means available to the entrenched as well. The tail gets fatter at the same time as the peaks get taller. It’s all the same process.
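The “both at once” answer falls out of even the simplest rich-get-richer model. Here is a toy sketch using Herbert Simon’s classic 1955 process (the parameters are illustrative, not data): each unit of attention either founds a new small creator or flows to an existing creator in proportion to the attention they already have. Run it longer, and the population of small creators grows (the tail fattens) at the very same time the biggest creator pulls further ahead (the peak rises).

```python
import random

def simon_process(steps, p_new=0.3, seed=1):
    """Simon's rich-get-richer process: with probability p_new a unit of
    attention founds a new creator; otherwise it goes to an existing
    creator chosen in proportion to attention already received."""
    rng = random.Random(seed)
    counts = [1]   # attention held by each creator
    pool = [0]     # one entry per unit of attention, holding a creator index
    for _ in range(steps):
        if rng.random() < p_new:
            counts.append(1)
            pool.append(len(counts) - 1)
        else:
            i = rng.choice(pool)  # sampling the pool is proportional choice
            counts[i] += 1
            pool.append(i)
    return counts

early = simon_process(1_000)
late = simon_process(100_000)
print("creators:", len(early), "->", len(late))   # the floor rises...
print("biggest:", max(early), "->", max(late))    # ...and so does the ceiling
```

One process, two headlines: more regular-people contributors and bigger blockbusters.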

So the question stops being “will this help the poor or the rich?” It becomes “who will it help faster?” The question is no longer transformative potential, but differential transformative power. Can this technology undermine the status quo faster than it bolsters it?

And for most of these technologies, the answer is “no”. Maybe, like crypto, a few people fell up and a few fell down. That is not transformation.

Why do people miss this? Because they stop at

“centralization = bad for the people; decentralization = good for the people”.

We forget it’s dual, that

“centralization = good for the entrenched; decentralization = good for the entrenched”

Centralization increases the efficiency of an already-dominant system, while decentralization increases its reach.

This all applies just fine to the latest technology that has people looking for transformative potential: decentralized identity (DID). It’s considered important because so many new mechanisms in web3 require that an address map one-to-one onto a human individual. So if identity can be solved, then web3 is unleashed. But think for just a second: decentralized identity technologies will fall into the same trap of entrenching the status quo faster than they realize their transformative potential. Let’s say that DID scales privacy and uniqueness. If that happens, then nothing keeps an existing body from running with the uniqueness features and dropping the privacy features.

If you’re bought into my argument so far, then you see that it’s not enough to develop technologies that have the option of empowering people, because most developers won’t take that option. You can’t take over just by growing because you can’t grow faster than the already grown. What is necessary is systems that are designed to actively counter accumulation and capture.

I show it in this paper looking at the accumulation of power by US basketball teams. For over a century, American basketball teams have been trying to gain and retain advantages over each other. Over the same period, the leagues hosting them have served “sport over team,” exercising their power to change the rules to maintain competitive balance between teams. By preventing any one team from becoming too much more powerful than any other, you keep the sport interesting and you keep fans coming.

But what we’ve actually seen is that, over this century, basketball games have become more predictable: if Team A beat Team B and Team B beat Team C, then over a century Team A has become more and more likely to beat Team C. This is evidence that teams have diverged from each other in skill, despite all the regulatory power that leagues have been given to keep them even. If the rich get richer even in systems with an active enduring agency empowered to prevent the rich from getting richer, then concentration of power is deeply endemic and can’t just be wished away. It has to be planned for and countered.
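The kind of measure involved can be sketched in a few lines. This is an illustrative stand-in for the paper’s actual analysis, and the season records below are invented: count every triad where A is ahead of B head-to-head and B is ahead of C, then ask how often A is also ahead of C. A fully transitive league scores 1.0; the rock-paper-scissors league below scores 0.0.

```python
from itertools import permutations

def transitivity(wins):
    """wins[a][b] = games a won against b. Returns the fraction of
    triads (A, B, C), with A ahead of B and B ahead of C head-to-head,
    in which A is also ahead of C."""
    def ahead(a, b):
        return wins[a][b] > wins[b][a]
    triads = [(a, b, c) for a, b, c in permutations(wins, 3)
              if ahead(a, b) and ahead(b, c)]
    if not triads:
        return None
    return sum(ahead(a, c) for a, _, c in triads) / len(triads)

# Invented records: X dominates Y, Y dominates Z, but Z upsets X.
wins = {"X": {"X": 0, "Y": 3, "Z": 1},
        "Y": {"X": 1, "Y": 0, "Z": 3},
        "Z": {"X": 2, "Y": 1, "Z": 0}}
print(transitivity(wins))   # -> 0.0
```

Rising transitivity over a century is what “games have become more predictable” means in this framing.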

This is why redistribution is a core principle of progressive and socialist politics. You can’t just introduce a new tweak and wait for things to correct. You need a mechanism to actively redistribute at regular intervals. Like taxes.

In web3, there aren’t many technologies that succeed at the higher bar of actively resisting centralization. One example might be quadratic voting, which has taken off probably because its market-centric branding has kept it from being considered redistributive (it is).

So for now my attitude toward decentralization is “Wake me up when you have a plan to grow faster than you can be co-opted.” Wake me up when you’ve decentralized taxation.


Psychoactives in governance | The double-blind policy process

I’m often surprised at how casual so many communities are about who they let in. To add people to your membership is to steer your community in a new direction, and you should know what direction that is. There’s nothing more powerful than a group of aligned people, and nothing more difficult than steering a group when everyone wants something different for it. I’ve seen bad decisions on who to include ruin many communities. And, on the other hand, being intentional about it can have a transformative effect, leading to inspiring alignment and collaboration. The best collaborations of my life have all been in discerning communities.

So what does it mean to be intentional about membershipping? You could say that there are two overall strategies. One is to go slow and really get to know every prospective member before inviting them fully into the fold. The other is to be very explicit and provide narrow, objective criteria for membership. Both have upsides and downsides. If you spend a lot of time getting to know someone, there will be no surprises. But this can produce cliquishness and cronyism: whom have you spent that much time with other than your own friends? On the other hand, communities that base membership on explicit, objective criteria can be exploited. A community I knew wanted tidy and thoughtful people, so it would filter people on whether they helped with the dishes and brought dessert. The thinking was that a person who does those things naturally is certainly tidy and thoughtful. But every visitor knew to bring dessert and help with the dishes, regardless of what kind of person they were, so the test failed as an indicator.

We need better membershipping processes. Something with the fairness and objectivity of explicit criteria, but without their vulnerability to being faked. There are lots of ways that scholars solve this kind of problem. They will theorize special mechanisms and processes. But wouldn’t it be nice if we could select people who just naturally bring dessert, help with dishes, ask about others, and so on? Is that really so hard? To solve it, we’re going to do something different.

The mechanism: the double-blind policy process with collective amnesia

Amnesia is usually understood as memory loss. But that’s actually just one kind, called retrograde amnesia, the inability to access memories from before an event. The opposite kind of amnesia is anterograde. It’s an inability to form new memories after some event. It’s not that you lost them, you never got them in the first place. We’re going to imagine a drug that induces temporary anterograde amnesia. It prevents a person from forming memories for a few hours.

To solve the problem of bad membershipping, we’re going to artificially induce it in everyone. Here’s the process:

  1. A community’s trusted core group members sit and voluntarily induce anterograde amnesia in themselves (with at least two observers monitoring for safety).
  2. In a state of temporary collective amnesia, the group writes up a list of membership criteria that are precise, objective, measurable, and fair. As much as possible, items should be the result of deliberation rather than straight from the mind of any one person.
  3. They then seal the secret criteria in an envelope and forget everything.
  4. Later, the core group invites a prospective new member to interview.
  5. The interview isn’t particularly well structured because no one knows what it’s looking for. So instead it’s a casual, wide-ranging affair involving a range of activities that really have nothing to do with the community’s values. These activities are diverse enough to reveal a variety of dimensions of the prospective’s personality. An open-ended personality test or two could work as well. What you need is a broad activity pool that elicits a range of illuminating choices and behaviors. These are being observed by the membership committee members, but not discussed or acted upon until ….
  6. After the interview, a group of members sits to deliberate on the prospective’s membership, by
    • collectively inducing anterograde amnesia,
    • opening the envelope,
    • recalling the prospective’s words and choices and behavior over the whole activity pool,
    • judging all that against the temporarily revealed criteria,
    • resealing the criteria in the envelope,
    • writing down their decision, and then
    • forgetting everything
  7. Later, this membership committee reads the decision they came to, to find out whether they will be welcoming a new peer to the group.

The effect is that the candidate is admitted (or not) in a fair, systematic way that can’t be abused. Why does it work? No one knows how to abuse it. In a word, you can’t game a system if literally nobody knows what its rules are. Not knowing the rules that govern your society is normally a problem, but it seems to be just fine for membership rules, maybe because they are defined around discrete, intermittent events.
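The steps above can be sketched in code, purely as an illustration: software obviously can’t induce amnesia, so a sealed-envelope object stands in for it, and “deliberation” is modeled as a random sample of the group’s proposals that no single member would reproduce alone. All the names and traits here are invented for the sketch.

```python
import random

class SealedEnvelope:
    """Holds the secret criteria between sessions."""
    def __init__(self, criteria):
        self._criteria = list(criteria)
    def open(self):
        return list(self._criteria)

def criteria_session(members, seed=0):
    """Steps 1-3: under (simulated) amnesia, pool everyone's proposed
    traits, keep a subset no one person would have chosen, and seal it."""
    rng = random.Random(seed)
    proposals = [t for m in members for t in m["proposed_traits"]]
    return SealedEnvelope(rng.sample(proposals, k=min(3, len(proposals))))

def decision_session(envelope, observed_traits):
    """Step 6: reopen under amnesia, judge the candidate against the
    criteria, reseal, and record only the verdict."""
    criteria = envelope.open()
    verdict = all(c in observed_traits for c in criteria)
    return verdict  # the only thing anyone gets to remember

members = [{"proposed_traits": ["helps with dishes", "asks about others"]},
           {"proposed_traits": ["brings dessert", "listens well"]}]
envelope = criteria_session(members)
print(decision_session(envelope, {"helps with dishes", "asks about others",
                                  "brings dessert", "listens well"}))
```

The point the sketch makes is structural: between sessions, nothing outside the envelope carries any information about the criteria, so there is nothing for a candidate (or a crony) to optimize against.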

Psychoactives in decision-making

If this sounds fanciful, it’s not: the sedatives propofol and midazolam both have this effect. They are common ingredients in the cocktails of sedatives, anesthetics, analgesics, and tranquilizers that anesthesiologists administer during surgical procedures.

If this sounds feckless or reckless, it’s not. There is an actual heritage of research that uses psychoactives to understand decision-making. I’m a cognitive scientist who studies governance. I learned about midazolam from Prof Richard Shiffrin, a leading mathematical psychologist and expert in memory and decision-making. He invoked it while proposing a new kind of solution to a social dilemma game from economic game theory. In the social dilemma, two people can cooperate but each is tempted to defect. Shiffrin suggests that you’ll cooperate if the person is so similar to you that you know they’ll do whatever you do. He makes the point by introducing midazolam to make it so the other person is you. In Rich’s words:

You are engaged in the simple centipede game decision tree [Ed. if you know the Prisoner’s Dilemma, just imagine that] without communication. However the other agent is not some other rational agent, but is yourself. How? You make the decision under the drug midazolam which leaves your reasoning intact but prevents your memory for what you thought about or decided. Thus you decide what to do knowing the other is you making the other agent’s decision (you are not told and don’t know and care whether the other decision was made earlier or after because you don’t remember). Let us say that you are now playing the role of agent A, making the first choice. Your goal is to maximize your return as agent A, not yourself as agent B. When playing the role of agent B you are similarly trying to maximize your return.

The point is correlation of reasoning: Your decision both times is correlated, because you are you and presumably think similarly both times. If you believe it is right to defect, would you nonetheless give yourself the choice, knowing you would defect? Or knowing you would defect would you not choose (0,0)? On the other hand if you think it is correct to cooperate, would it not make sense to offer yourself the choice? When playing the role of B let us say you are given the choice – you gave yourself the choice believing you would cooperate – would you do so?

— a 2021/09/15 email

The upshot is that if you know nothing except that you are playing against yourself, you are more likely to cooperate because you know your opponent will do whatever you do, because they’re you. As he proposed it, it was a novel and creative solution to the problem of cooperation among self-interested people. And it’s useful outside of the narrow scenario it isolates. The idea of group identity is precisely that the boundaries of our conceptions of ourselves can expand to include others, so what looks like a funny idea about drugs is used by Shiffrin to offer a formal mechanism by which group identity improves cooperation.
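Shiffrin’s point can be checked against the standard Prisoner’s Dilemma numbers (the payoffs 5, 3, 1, 0 below are the conventional textbook values, not his). Against an independent opponent, defection dominates; but under perfectly correlated reasoning, the only reachable outcomes are mutual cooperation and mutual defection, and cooperation wins.

```python
# Standard Prisoner's Dilemma payoffs (mine, theirs), with T > R > P > S.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def my_payoff(me, other):
    return PAYOFF[(me, other)][0]

# Against an independent opponent, defecting is better whatever they do:
assert my_payoff("D", "C") > my_payoff("C", "C")
assert my_payoff("D", "D") > my_payoff("C", "D")

# But if the opponent is you (correlated reasoning), the reachable
# outcomes collapse to (C, C) and (D, D) -- and cooperating pays more:
print(my_payoff("C", "C"), my_payoff("D", "D"))   # -> 3 1
```

Correlation doesn’t change the payoffs; it prunes the outcome space, which is all cooperation needs.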

Research at the intersection of drugs and decision-making isn’t restricted to thought experiments. For over a decade, behavioral economists in the neuroeconomics tradition have been piecing together the neurophysiology of decision-making by injecting subjects with a variety of endogenous and exogenous substances. For example, see this review of the effects of oxytocin, testosterone, arginine vasopressin, dopamine, serotonin, and stress hormones.

Compared to this other work, all that’s unusual about this post is the idea of administering to a whole group instead of to individuals.

Why save democracy when you can save dictatorship? | The connection to incentive alignment

This mechanism is serious for another reason too. The problem of membershipping is a special case of a much more general problem: “incentive alignment” (also known as “incentive compatibility”).

  • When people answering a survey tell you what they think you want to hear instead of the truth
  • When someone lies at an interview
  • Just about any time that people aren’t incentivized to be transparent

Those are all examples of mis-alignment in the sense that individual incentives don’t point to system goals.

Incentive compatibility is especially challenging for survey design. That’s important because surveys are the least bad way to learn things about people in a standardized way. Incentive compatible survey design is a real can of worms.

That’s what’s special about double-blind policy. It’s a step in the direction of incentive compatibility for self-evaluation. You can’t lie about a question if nobody knows what was asked.

Quibbles

For all kinds of reasons this is not a full solution to the problem. One obvious problem: even if no one knows the rules, anyone can guess. The whole point of introducing midazolam into the social dilemma game was that you know that you will come to the same conclusions as yourself in the future. So just because you don’t know the criteria doesn’t mean you don’t “know” the criteria. You just guess what you would have suggested, and that’s probably it. To solve this, the double-blind policy mechanism has to be collaborative. It requires that several people participate, and that a collaborative deliberation process over many members will produce integrated or synergistic criteria that no single member would have thought of.

Other roles for psychoactives in governance design

The uses of psychoactives in community governance are, as far as I know, entirely unconsidered. Some cultures have developed the ritual sharing of tobacco or alcohol to formalize an agreement. Others have ordered the disloyal to drink hemlock juice, a deadly neurotoxin. That’s all I can think of. I’m simultaneously intrigued to imagine what else is out there and baseline suspicious of anyone who tries.

The ethics

For me this is all one big thought experiment. But I live in the Bay Area, which is governed by strange laws like “The Pinocchio Law of The Bay” which states:

“All thought experiments want to go to the San Francisco Bay Area to become real.”

(I just made this up but it scans)

Hypothetically, I’m very pleased with the idea of solving governance problems with psychoactives, but I’ll admit that it suffers from being awful-adjacent: it’s very, very close to being awful. I see three things that could tip it over:
1) If you’re not careful it can sound pretty bad, especially to any audience that wants to hate it.
2) If you don’t know that the idea has a legitimate intellectual grounding in behavioral science, then it just sounds druggy and nuts.
3) If it’s presented without any mention of the potential for abuse then it’s naive and dangerous.

So let’s talk about the potential for abuse. The double-blind policy process with collective amnesia has serious potential for abuse. Non-consensual administration of memory drugs is inherently horrific. Consensual administration of memory drugs automatically spawns possibilities for non-consensual use. Even if it didn’t, consensual use itself is fraught, because what does that even mean? The framework of consent requires being able and informed. How able and informed are you when you can’t form new memories?

So any adoption or experimentation around this kind of mechanism should provide for secure storage and should come with a security protocol for every stage. Recording video or having observers who can see (but not hear?!) all deliberations could help. I haven’t thought more deeply than this, but the overall ethical strategy would go like this: You keep abuse potential from being the headline of this story by credibly internalizing the threat at all times, and by never being satisfied that you’ve internalized it enough. Expect something to go wrong and have a mechanism in place for nurturing it to the surface. Honestly there are very few communities that I’d trust to do this well. If you’re unsure you can do it well, you probably shouldn’t try. And if you’re certain you can do it well, then definitely don’t try.


The unpopular hypothesis of democratic technology. What if all governance is onboarding?

There’s this old organizer wisdom that freedom is an endless meeting. How awful. Here the sprightly technologist steps in to ask:

“Does it have to be? Can we automate all that structure building and make it maintain itself? All the decision making, agenda building, resource allocating, note taking, emailing, and even trust?”

We can; we must.

That’s the popular hypothesis: that technology should fix democracy by reducing friction and making it more efficient. You can find it under the hood of most web technologies with social ideals, whether young or old. The people in this camp don’t dispute the need for structure and process, but they’re quick to call it bureaucracy when it doesn’t move at the pace of life, and they’re quick to start programming when they notice it sucking up their own free time. Ideal governance is “the machine that runs itself”, making only light and intermittent demands for citizen input.

And against it is the unpopular hypothesis. What if part of the effectiveness of a governance system is in the tedious work of keeping it going? What if that work builds familiarity, belonging, bonding, a sense of agency, and organizing skills? Then the work of keeping the system up is itself the training in human systems that every member needs for a community to become healthy. It instills in every member pragmatic views of collective action and of how to get things done in a group. Elinor Ostrom and Ganesh Shivakoti describe a case of this among Nepali farmers: when state funds replaced hard-to-maintain dirt irrigation canals with robust concrete ones, farmer communities stopped sharing water equitably. What looked like maintaining ditches was actually maintaining an obligation to each other.

That’s important because under the unpopular hypothesis, the effectiveness of a governance system depends less on its structure and process (which can be virtually anything and still be effective) and more on what’s in the head of each participant. If they’re trained, aligned, motivated, and experienced, any system can work. This is a part of Ostrom’s “institutional diversity”. The effective institution focuses on the members rather than the processes by making demands of everyone, or “creating experiences.”

Why are organizations bad computers? Because that isn’t their only goal.

In tech circles I see a lot of computing metaphors for organizations and institutions. Looking closer at that helps pinpoint the crux of the difference between the popular and unpopular hypotheses. In a computer or a program, many gates or functions are linked into a flow that processes inputs into outputs. In this framework, a good institution is like a good program, efficiently and reliably computing outputs. Under the metaphor, all real-world organizations look bad. In a real program, a function computes reliably, quickly, and accurately without needing permission or buy-in or interest. In an organization, each function needs all those things.

So organizations are awful computers. But that’s not a problem, because an organization’s goal isn’t to compute, but to compute things that all its functions want computed. It’s a computer that exists by and for its parts. The tedium of getting buy-in from all the functions isn’t an impediment to proper functioning; it is proper functioning. The properly functioning organization-computer is constantly doing the costly hygiene of ensuring the alignment of all its parts, and if it starts computing an output wrong, it’s not a problem with the computer, it’s a problem with the output.

If the unpopular hypothesis is right, then we shouldn’t focus on processes and structures—those might not matter at all—but on training people, keeping them aligned with each other, and keeping the organization aligned with them. It supports another hypothesis I’ve been exploring, that all governance is onboarding.

Less Product, more HR?

This opens a completely different way of thinking about governance. Through this lens,

  • Part of the work of governance is agreeing on what to internalize.
  • A rule is the name of the thing that everyone agrees that everyone should internalize.
  • The other part of governing is creating a process that helps members internalize it (whether via training, conversation, negotiation, or even a live-action tabletop role-playing simulation).
  • Once a rule is internalized by everyone, it becomes irrelevant and can be replaced by the next rule to work on.

In this system, the constraints on the governance system depend on human limits. You need rules because an org needs to be intentional about what everyone internalizes. You’ll keep needing rules because the world is changing and the people are changing, and so what to internalize is going to change. You can’t have too many rules at one time because people can’t hold too many rules-in-progress at once. You need everyone doing and deciding the work together because it’s important that the system’s failures feel like failures of us rather than them.

With all this, it could be tempting to call the popular hypothesis the tech-friendly one. But there’s still a role for technology in governance systems following the unpopular hypothesis. It’s just a change in focus, toward technologies that support habit building, skill learning, training, and onboarding, and that monitor the health of the shared agreements underlying all of these things. It encourages engineers and designers to move from the easy problems of system and structure to the hard ones of culture, values, and internalization. The role of technology in supporting self-governance can still be to make it more efficient, but with a tweak: not more efficient at arranging parts into computations, but more efficient at maintaining its value to those parts.

Maybe freedom is an endless meeting and technology can make that palatable and sustainable. Or maybe the work of saving democracy isn’t in the R&D department, but HR.


“Why can’t I work with this person?”: Your collaborator’s secret manual

In collaborations it can take time to learn to work with certain people. They might be hard to handle in many ways: in the way they volunteer feedback, or have to be asked for it; in being supportive about ideas they actually don’t like, or showing that they like an idea with no other signal than vigorous attack; in expecting constant reminders; in being excessively hands-off or hands-on; in demanding permission for everything or resenting it. It’s complicated, especially when there’s a power dynamic on top of all that: boss/employee, advisor/advisee, principal/agent.

Fortunately, in active communities of practice, there are many collaborative linkages and the accumulated experience of those collaborators amounts to a manual for how to work with that person. Even for someone hard to work with, you have a couple of peers who manage just fine, often because they have strategies they aren’t even aware of for making it work. That knowledge gets harnessed naturally, if spottily, in my lab because my students talk to each other. One thing a student told me, that she has passed on to others, is that Seth thinks out loud during project meetings so if he’s going fast and it seems scattered and you’re totally lost about what you’re supposed to do, just wait and eventually he’ll finish and summarize.

Is there a more systematic way to harness this knowledge? The idea I came up with is a secret manual. It’s a Google Doc. The person it’s about is not invited to the doc, although they can share the link. Only past, present, or upcoming collaborators can be members. The norms are to keep it specific to collaboration tips, to keep it civil and constructive, to assume good faith and not gossip, and to keep disagreement in the comments (or otherwise distinguish advice that others have found useful from less proven or agreed-upon ideas). People with access to the manual can mention parts of it while talking with its subject, but that person can’t be shown the raw doc (it’s not secret, but it is private). The person it’s about obviously can’t contribute, but they can offer suggestions to a member for things to add (in my case, I’d want someone to add: “please feel comfortable sending persistent reminders if you need something; it’s not a bother, it’s a favor”). People could maybe be members of each others’ manuals, though maybe it’s good to have a rule that the only members of one’s secret manual are equal or lesser in power.

Here’s a template.

UPDATE: if you’re a collaborator of mine, here’s a manual that someone made for me
https://0w.uk/sethmanual
Because I’m not supposed to see it, you’ll have to request access to have it opened to you.


Simple heuristic for breaking pills in half


Quickly:
I have to give my son Dramamine on road trips, but only half a pill. That’s been a bit tricky. Even scored pills don’t always break cleanly, and then what do you do? Break it again? Actually, yes. I did a simple simulation to show how you can increase your chances of breaking a pill into two half-sized portions by 15-20 percentage points (e.g. from 70% to about 85%):
1. Try to break the pill in half.
2. If you succeed, great, if not, try to break each half in half.
3. Among your four resulting fragments, check whether some pair adds up to half a pill, within tolerance.

Honestly I thought it would work better. This is the value of modeling.

Explanation:
If, after a bad break from one piece into two, you break again into four pieces, you end up with six possible pairs of the four fragments. Two of those pairs just rebuild the original bad halves, and the rest come in complementary pairs (if one sums to near half a pill, so does its complement), so going to four pieces amounts to creating two genuinely new chances at a combination that adds to 50%. And it works: your chances go up. This is simple and effective. But not incredibly effective. I thought it would increase your chances of a match by 50% or more, but the benefit is closer to 15-20 percentage points. So it’s worth doing, but not a solution to the problem. Of course, after a second round of splitting you can keep splitting and continue the gambit. In the limit, you’ve reduced the pill to a powder whose grains can add to precisely 50% in countless combinations, but that’s a bit unwieldy for road-trip Dramamine. For the record, pill splitters are also too unwieldy for a road trip, but maybe they’re worth considering given that my heuristic provides only a marginal improvement.

The code:
Here is the simulation. Parameters: I allowed anything within 10% of half a pill to be “close enough”, so anything in the 40% to 60% range counts. Intention and skill make the distribution of splits non-uniform, so I used a truncated normal with the standard deviation set so that there’s about a 70% chance of splitting the pill well on the first try.

#install.packages("truncnorm")
library(truncnorm)  # truncated normal: splits biased toward the middle

inc_1st <- 0  # trials saved by the first break
inc_2nd <- 0  # trials rescued by the second break
tol <- 0.1    # anything in the 40%-60% range counts as "half"

for (i in 1:100) {
  # First break: sd calibrated to give roughly a 70% chance of success
  a <- rtruncnorm(1, a = 0, b = 1, mean = 0.5, sd = 0.5^3.3)
  b <- 1 - a
  if (a > (0.5 - tol) & a < (0.5 + tol)) {
    inc_1st <- inc_1st + 1
  } else {
    # Bad break: split each half again
    aa <- rtruncnorm(1, a = 0, b = a, mean = a / 2, sd = (a * 2)^3.3)
    ab <- a - aa
    ba <- rtruncnorm(1, a = 0, b = b, mean = b / 2, sd = (b * 2)^3.3)
    bb <- b - ba
    # Of the six fragment pairs, two rebuild the original halves and the
    # rest come in complementary pairs, so only two sums need checking
    totals <- c(aa + ba, aa + bb)
    if (any(totals > (0.5 - tol) & totals < (0.5 + tol))) {
      inc_2nd <- inc_2nd + 1
    }
  }
}

#if you only have a 20% chance of getting it right with one break, you have a 50% chance by following the strategy
#if you only have a 30% chance of getting it right with one break, you have a 60% chance by following the strategy
#if you only have a 60% chance of getting it right with one break, you have a 80% chance by following the strategy
#if you only have a 70% chance of getting it right with one break, you have a 85% chance by following the strategy

print(inc_1st)            # successes on the first break (out of 100 trials)
print(inc_2nd)            # rescues on the second break
print(inc_1st + inc_2nd)  # total successes, i.e. percent

What’s the thing in your life that you’ve looked at more than anything else?

(Image: Ernst Mach, “Reclining”)

What’s the thing in your life that you’ve looked at more than anything else? Your walls? Your mom? Your hands? Not counting the backs of your eyelids, the right answer is your nose and brow. They’ve always been there, right in front of you, taking up a steady twentieth or so of your vision every waking moment.

That’s important because to have access to wonder, the joy of knowing you don’t know, you need to realize there are things that are right there that you can’t notice. If you’re wired to miss the obvious, then how can you be confident of anything?

There are answers, of course, but the question has always haunted me, and still does.


How to order a coffee in the minefield of preexisting categories


There are mostly useless bits of cognitive psychology that I’ve always loved. For example, a lot of categorization research is about life at the edge of which objects are what. How flat can a bowl be before it’s a plate? How narrow can a mug be before it’s a cup? How big can a cup be before it’s a bowl? Can it have a handle and not be a cup? When does too much handle make it a spoon? These are questions that can be used to create little microcosms for the study of things like culture, learning, expectations, and all kinds of complexities around the kinds of traits we’re surprisingly sensitive to.

Again, I hadn’t found much of it very useful until recently, when, trying to order my coffee just the way I like it, I encountered all kinds of unexpected roadblocks. The problem is that my drink doesn’t have a name, and is very close to several drinks that do, each of which comes with its own traits and customs and baggage. As a result, I’ve learned that when I’m not careful my drink gets sucked up semantically into the space of its bossy neighbors. The way I like my coffee is close-ish to ways coffee is already commonly served, but different in some important ways that can be very tough to get into a kindly but overworked barista’s busy head. Being in a non-category, close to existing ones, means that the meaning of my order has to avoid the semantic basin of other more familiar drinks in endlessly surprising and confounding ways.

To make it concrete, here’s how I like my coffee: double shot of espresso with hot water and cold heavy cream in a roughly 4 to 3 to 2 ratio. For some reason the drink just isn’t as good with too much more or less water, or half and half instead of cream, or steamed or whipped cream instead of liquid. A long-drawn shot isn’t as good as a short shot with hot water added, even though that’s almost the definition of a long shot. I don’t know why or how, but this all matters, so I try to get exactly that. I could just order it how I like it, “double shot of espresso with hot water and cold heavy cream in a roughly 4 to 3 to 2 ratio”, but I’m trying to do a few things at once:
* Keep it concise
* Get what I want
* Not be “that guy”
* And find the ask that will work on anyone: I go to a lot of different coffee shops, and I want a way to ask for this that anyone can hear and produce the same thing.

So,
“Double shot of espresso with water and heavy cream in a roughly 4 to 3 to 2 ratio”
fails on both concise and sparing me from being that guy. Fortunately there are a lot of ways of asking for what I want. Fascinatingly, they all fail in interesting ways:

“Give me a double Americano with less water and heavy cream”
The major nearest neighbor to what I want is the Americano. So it makes sense to use that as a shortcut, giving directions to my drink from the Americano landmark. Seems straightforward, but Americano, it turns out, is a bossy category, and asking for it brings a surprising lot of its unexpected baggage as well. Mainly the amount of water. In the US at least, the ratio of water to coffee is often 10:1. Just asking for “less” tends to get me 5:1 or 8:1, meaning there is still several times more water than coffee. No matter how I ask there’s always at least twice as much.

Another bit of the Americano’s baggage is that it’s pretty commonly taken with half and half, meaning that even when I ask for heavy cream, it’s very common for me to end up with half and half, probably due to muscle memory alone. And you can’t ask for “cream,” you have to ask for “heavy cream,” or you’ll almost always get half-and-half.

“Give me a short double Americano with heavy cream”
This should work and it just doesn’t. Something about the word Americano coming out of my mouth means that I’ll get 2 or 5 or 10 times more water than coffee, no matter how I ask.

“Give me a double Americano with very little water and also heavy cream”
Same deal. Simply doesn’t work.

And all of these problems get worse depending who got the order. Your chances are actually OK if you’re talking to the person who will make the drink. But if you’re talking to a cashier who will then communicate, verbally, in writing, or through a computer, to the person who makes your drink, then the regularizing function of language almost guarantees that your drink will be passed on as a normal Americano. The lossy game of telephone loves a good semantic attractor.

“Give me a double Americano with heavy cream in an 8oz cup”
They’ll usually still add too much water, and just not have room for more than a drop or two of cream. This order also gets dangerously close to making me that guy.

“Give me a double espresso with hot water and heavy cream”
With all the Americano trouble I eventually learned to back further away from the Americano basin and closer to my drink’s even bigger, but somehow less assumption-laden, neighbor: espresso. Somehow, with this order and the refinement below, I end up with what I wanted more often than not. I wish I could say that this obviously works better. It works better, but it’s still not obvious. And it still goes wrong regularly, and still occasionally in strange and new ways. The most impressive is when the barista mentally translates “espresso with water” to “Americano,” pulling me fully back into the first basin, and back into all of the traps above. Less commonly they’ll mentally translate “espresso with cream” into macchiato or breve and steam the cream. This means that some categories are distorting my drink even when I’m in neighboring categories. They have that much gravity.

“Give me a double espresso with hot water and heavy cream; not an Americano, just a bit of water”
Fails on concision, and definitely makes me that guy.

“Give me equal parts espresso (a double), hot water, and heavy cream”
I came up with this to get out of the Americano trap elegantly, and it works pretty well. It shouldn’t because I actually like a bit less cream than water, and less of both than coffee (4:3:2, not 4:4:4), but the strength of the Americano attractor ends up working in my favor: the temptation to add less cream than anything means that they’ll tend to subconsciously ignore me and put the right amount of cream. But they’re also likely to still put more water than coffee. And another common failure occurs when I actually get taken literally and get equal proportions. That results in way too much cream, and I can’t complain because it’s literally what I asked for. It’s one of the more confounding failures because I can only blame myself.

“Give me a double espresso with equal parts hot water and heavy cream”
A little variation on the above that also depends on the subconscious strength of the Americano trap. Less concise, but overall more effective.
Again, I really want 4:3:2, not 1:1:1, but it’s happened before that a subconscious understanding leads a barista to give me more water than cream. The most common failure, again, is when I’m taken literally and get equal proportions, i.e., too much cream. The most hilarious failure was a barista who listened perfectly but also fell into the Americano trap (“espresso + water = Americano”). I ended up with 2 parts espresso, 10 parts water, and 10 parts heavy cream. You literally couldn’t taste any coffee. Who would even do that? It was like drinking watery melted butter. Totally absurd. I was too impressed to be annoyed.

“Italiano with heavy cream”
This really would be the winner, certainly on concision, except nobody knows what an Italiano is. It’s an espresso with a tiny amount of water added, so in people with this category in their head it’s perfect, because the work has already been done carving these traits out of the Americano basin. The problem is universality: this fine category only exists in a small subset of heads. Somehow it’s the rare barista that’s heard of an Italiano. What I could do is ask for it, and if they don’t know what it is, explain it. Something new having a word is more powerful at overcoming the Americano trap than something new not having its own word. But you really can’t get more “that guy” than explaining obscure coffee drinks to baristas.

“Give me a cafe con panna with a bit of hot water”
Literally, this is just what I want, panna=cream, but in practice panna is understood as whipped cream and there’s not a concise way to specify liquid.

“Espresso with heavy cream”
If you just don’t mention water at all, a lot of confusion disappears. I don’t get what I want but it’s close and concise and easy and universal. Except, I should have mentioned this sooner, a lot of places don’t even have heavy cream, just half and half. Totally different thing.

“Espresso with heavy cream … … Oh! Also, could you add a bit of hot water?”
Affected afterthought aside, this works pretty well. Asking for water after cream is a good signal not to add very much. But it’s kind of a pain for everyone, and it only works at a given place once before it starts coming off as inauthentic. And you can’t ask the cashier; you have to ask the person making the drink, or it’ll get lost in translation and you’ll get an Americano.

“I’d like a coffee please”
This really fails on being what I want, but succeeds on so many other dimensions that, well, sometimes I’ll just give up and do this.

A note about half-and-half. Half-and-half is supposedly equal parts milk and heavy cream. I say supposedly because, well, try this: order two drinks, one espresso that you drown in half-and-half (equal parts of each) and one espresso “with a bit of milk and heavy cream” (2:1:1). They should be identical (both are two parts espresso, one part milk, one heavy cream) but you’ll find them to be very different. Half-and-half is very much its own thing.

OK, what was this pointless madness? Here’s the idea. Think of every drink as a point on the axes of coffee, water, cream, milk, half-and-half, foam, sugar, whatever. Now carve up that space. Americano gets a big region. If you’re in it, your coordinates get distorted, maybe toward the middle of whatever region you’re in. Not just that, but points near the boundary, just outside of it, get sucked in. Something about human meaning makes it so that the act of carving a state space into semantic regions distorts it and moves things around. By understanding these processes and how they work, and how to correct for them or even exploit them, we not only get better at meaning and its games but, in the case of a nameless, obscure, specific, and disregarded form of coffee, get what we want despite everything.
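The basin picture can be sketched as a toy model. Here’s a minimal, hedged sketch in R, with all category coordinates and the “pull” strength invented for illustration: drinks are points in (coffee, water, cream) space, each named drink is a centroid, and an order drifts some fraction of the way toward the centroid of whichever category it lands nearest.

```r
# Toy model of semantic basins: an order is classified into its nearest
# named category, then pulled partway toward that category's centroid.
# All coordinates and the pull strength are invented for illustration.
categories <- rbind(
  americano = c(coffee = 2, water = 20, cream = 0),
  macchiato = c(coffee = 2, water = 0,  cream = 1),
  espresso  = c(coffee = 2, water = 0,  cream = 0)
)

snap <- function(order, pull = 0.5) {
  # Euclidean distance from the order to each category centroid
  d <- apply(categories, 1, function(ctr) sqrt(sum((order - ctr)^2)))
  nearest <- categories[which.min(d), ]
  # The drink you actually get drifts toward the basin you fell into
  heard <- (1 - pull) * order + pull * nearest
  list(basin = names(which.min(d)), heard = heard)
}

my_drink <- c(coffee = 4, water = 3, cream = 2)  # the nameless 4:3:2 order
snap(my_drink)  # lands in the macchiato basin and comes back distorted
```

In this sketch the 4:3:2 drink falls nearest the macchiato centroid, echoing the “espresso with cream → macchiato, steam the cream” failure described above.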


The crises of a quantitative social scientist

  1. So I’ve always identified as an empirical-first person, and v. cagey about theory contributions in social thought. I need the world to tell me how it is, I don’t want to tell it.
  2. But I’ve been doing a lot of theory these last two years with theory people.
  3. But I’ve had to get over being self-conscious about it, since theory is so made up.
  4. But I’m starting to appreciate that made up isn’t so bad, because the name of the game is figuring stuff out together, and that applies as much to useful distinctions and language as to facts and data.
  5. But I think that data is ultimately the thing that sciences of sociality are short on.
  6. But my theory pieces are quickly eclipsing my data pieces in terms of “what the people want”
  7. But data is still a strategic advantage of mine, and something I enjoy a ton.
  8. But it takes a lot more work for a lot less out.
  9. And I’m starting to question more whether science is really the appropriate tool for learning about society: whether science as method is even ready for humans as subject. If you think about it, from cell and mouse research through the Nobel prize for lobotomies even to Facebook’s “emotion manipulation” experiments, the only times that science is really “in its element” for building knowledge about living systems is when it’s murdery.

Therefore … I don’t know. I should keep doing both, I guess. So everything is exactly as it should be.

About

This entry was posted on Monday, December 6th, 2021 and is filed under nescience, science.


Calvino excerpt: the wisdoms of knowing and not knowing


Calvino’s Mr. Palomar, “Serpents and skulls.” Mr. Palomar is getting a tour of the Toltec city of Tula from a knowledgeable local scholar who goes deep into the mythos, symbolism, and network of associations. But they are interrupted by a schoolteacher telling his students a simpler story.

The line of schoolboys passes. And the teacher is saying, “Esto es un chac-mool. No se sabe lo que quiere decir.” (“This is a chac-mool. We don’t know what it means.”) And he moves on.

Though Mr. Palomar continues to follow the explanation of his friend acting as guide, he always ends up crossing the path of the schoolboys and overhearing the teacher’s words. He is fascinated by his friend’s wealth of mythological references: the play of interpretation and allegorical reading has always seemed to him a supreme exercise of the mind. But he feels attracted also by the opposite attitude of the schoolteacher: what had at first seemed only a brisk lack of interest is being revealed to him as a scholarly and pedagogical position, a methodological choice by this serious and conscientious young man, a rule from which he will not swerve. A stone, a figure, a sign, a word reaching us isolated from its context is only that stone, figure, sign, or word: we can try to define them, to describe them as they are, and no more than that; whether, beside the face they show us, they also have a hidden face, is not for us to know. The refusal to comprehend more than what the stones show us is perhaps the only way to evince respect for their secret; trying to guess is a presumption, a betrayal of that true, lost meaning.

About

This entry was posted on Tuesday, November 9th, 2021 and is filed under books, nescience, science.


Toothbrushes are up to 95% less effective after 3 months and hugging your children regularly can raise their risk of anxiety, alcoholism, or depression by up to 95%


It sounds impossible, but this statistic is true:

Hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%.

I don’t even need a citation. Does it mean parents should stop hugging their children? No. You’d think that it couldn’t possibly be right, but the truth is even better: it couldn’t possibly be wrong.

And there are other statistics just like it. I was at a Walmart and on the side of a giant bin of cheap toothbrushes I read that “a new toothbrush is up to 95% more effective than a 3 month old toothbrush in reducing plaque between teeth.”

If you’ve heard related claims like “your toothbrush stops working after three months” from TV or word of mouth, I’ve found that they are all butchered versions of this original statistic, which actually says something completely different.

I’d only heard the simplified versions of that stat myself, and it had always set off my bullshit detector, but what was I going to do, crusade passionately against toothbrushes? Seeing the claim written out in science speak changed things a little. The mention of an actual percentage must have struck me, because I pushed my giant shopping cart in big mindless circles before the genius of the phrasing bubbled up. This is textbook truthiness: at a glance, it looks like science is saying you should buy more toothbrushes, but merely reading it carefully showed that the sentence means nothing at all. The key is in the “up to.” All this stat says is that if you look at a thousand or a million toothbrushes you’ll find one that is completely destroyed (“95% less effective”) after three months. What does that say about your particular old toothbrush? Pretty much nothing.

And that’s how it could be true that hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%. Once again, the key is in the “up to.” To prove it, all I have to do is find someone who is a truly terrible hugger, parent, and person. If there exists anyone like that — and there does — then this seemingly crazy claim is actually true. If any person is capable of causing psychological distress through inappropriate physical contact, the phrase “up to” lets you generalize to everyone. Should you stop hugging your child because there exist horrible people somewhere in the world? Of course not. These statistics lead people to conclusions that are the opposite of the truth. Is that even legal?
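The “up to” trick is easy to see in a quick simulation. This is a hedged sketch with made-up numbers (no real toothbrush data here): give almost every three-month-old toothbrush a negligible loss in effectiveness, mangle a handful, and the “up to” figure is driven entirely by the mangled few.

```r
# Invented population: effectiveness loss after three months of wear.
set.seed(1)
n <- 100000
# Typical wear: loss centered near 2%, clamped to never fall below zero
loss <- pmax(0, rnorm(n, mean = 0.02, sd = 0.03))
# A handful of destroyed brushes somewhere in the population
loss[sample(n, 5)] <- 0.95

median(loss)  # what your toothbrush probably lost: around 2%
max(loss)     # what the ad is allowed to print: "up to 95%"
```

Both numbers are computed from the same population; only one of them describes your toothbrush.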

If it’s in your mind that you should buy a new toothbrush every three months, that’s OK, it’s in mine too. And as everyone who comes within five feet of me will be happy to hear, me and dental hygiene have no conflict. But you have to know that this idea of a three month freshness isn’t based in facts. If I had to guess, I’d say that it’s a phrase that was purchased by the dental industrial complex to sell more toothbrushes, probably because they feel like they don’t sell enough toothbrushes. If it sounds tinfoil hat that an industry would invest in fake science just to back up its marketing, look at just one of the exploits pulled by Big Tobacco, very well documented in testimony and subpoenas from the 1990’s.

Press release by Colgate cites an article that never existed

Hunting to learn more about the statistic, I stumbled on some Colgate fan blogs (which I guess exist) pointing to a press release citing “Warren et al, J Dent Res 13: 119-124, 2002.”

Amazingly, it’s a fake paper! There is nothing by Warren in the Journal of Dental Research in 2002, or in any other year. But I kept looking and eventually found something that seems to fit the bill:
Conforti et al. (2003) An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies. Journal of Clinical Dentistry 14(2):29-33. Available at http://www.ncbi.nlm.nih.gov/pubmed/12723100.

First author Warren in the fictional paper is the last author in this one. It’s got to be the right paper, because their results say exactly what I divined in Walmart, that a three month old toothbrush is fine and, separately, that if you look hard enough you’ll find really broken toothbrushes. Here it is in their own words, from the synopsis of the paper:

A comparison of the efficacies of the new and worn D4 toothbrushes revealed a non-significant tendency for the new brush head to remove more plaque than the worn brush head. However, when plaque removal was assessed for subjects using brush heads with the most extreme wear, i.e., scores of 3 or 4 (n = 15), a significant difference (p < 0.05) between new and worn brush heads was observed for the whole-mouth and approximal surfaces.

This study should never have been published. The phrase “revealed a non-significant tendency” is jargon for “revealed nothing.” To paraphrase the whole thing: “We found no effect between brand new and three month old toothbrushes, but we wanted to find one, and that’s almost good enough. Additionally, a few of the toothbrushes were destroyed during the study, and we found that those toothbrushes don’t work.” The only thing in the original stat that isn’t in the Conforti synopsis is the claim about effect size: “up to 95% less effective.” The synopsis mentions no effect size for the destroyed toothbrushes, so either it appears only in the full version of the paper (which I can’t get my hands on) or it’s based on a deeply flawed interpretation of the significance claim, “p < 0.05.”

The distinguished Paul J. Warren works or worked for Braun (but not Colgate), and has apparently loved it. Braun is owned by Gillette, which is owned by Procter & Gamble. The paper’s first author, Conforti, works, along with several of the paper’s other authors, for Hill Top Research, Inc., a clinical research contractor based in West Palm Beach, Florida. I don’t think there’s anything inherently wrong with working for a corporate research lab — I do — but it looks like they produce crap for money, and the reviewers who let Braun’s empty promotional material get published in a scientific journal should be ashamed of themselves.
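That subgroup significance is also exactly what multiple comparisons hand out for free: slice a dataset with no real effect into enough subgroups and something will cross p < 0.05. A stdlib-only sketch (invented numbers and an approximate z-test, not a reanalysis of the study's actual data):

```python
import math
import random

random.seed(1)

def p_value(a, b):
    """Approximate two-sided p-value from a two-sample z-test."""
    n, m = len(a), len(b)
    ma, mb = sum(a) / n, sum(b) / m
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (m - 1)
    z = (ma - mb) / math.sqrt(va / n + vb / m)
    return math.erfc(abs(z) / math.sqrt(2))

# New vs. worn brushes drawn from identical distributions: no real
# difference anywhere. Slice each run into 20 subgroups ("wear score 1",
# "wear score 2", ...) and test every one.
trials = 1000
hits = 0
for _ in range(trials):
    hits += any(
        p_value([random.gauss(0, 1) for _ in range(15)],
                [random.gauss(0, 1) for _ in range(15)]) < 0.05
        for _ in range(20)
    )

print(f"runs with at least one 'significant' subgroup: {hits / trials:.0%}")
# With 20 independent looks at pure noise and a 5% false-positive rate,
# roughly 1 - 0.95**20, about 64%, of runs produce a p < 0.05 somewhere
# (the z-approximation used here runs slightly hotter than that).
```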

The original flawed statistic snowballs, accumulating followers, rolling further and further from reality

I did a lot of digging for the quote, and found lots of versions of it, each further from reality than the one before it. Here is the first and best attempt at summarizing the original meaningless study:

A new toothbrush is up to 95% more effective than a three month old toothbrush in reducing plaque between teeth.*

A later mention by Colgate gets simpler (and adds “normal wear and tear,” even though the study only found an effect for extreme wear and tear).

Studies show that after three months of normal wear and tear, toothbrushes are much less effective at removing plaque from teeth and gums compared to new ones.*

… and simpler ….

Most dental professionals agree you should change your toothbrush every three months.*

That last one might come from a different source, and it might reflect the statistic’s transition from a single vacuous truthy boner to vacuous widespread conventional wisdom. The American Dental Association now endorses a similar message: “Replace toothbrushes at least every 3–4 months. The bristles become frayed and worn with use and cleaning effectiveness will decrease.” To their credit, their citations don’t include anything by Warren or Conforti, but the paper they do cite isn’t much better: their evidence for the 3–4 month time span comes from a study that only ran for 2.5 months (Glaze & Wade, 1986). Furthermore, the study only tested 40 people, it wasn’t blind, and it has stood unelaborated and unreplicated for almost 30 years. It’s an early, preliminary result that deserves followup. But if that’s enough for the ADA to base statements on, then they are a marketing association, not the medical or scientific one they claim to be.

They also cite evidence that toothbrushes you’ve used are more likely to contain bacteria, but they’re quick to point out that those bacteria are benign and that exposure to them is not linked to anything, good or bad. Of course, those bacteria on your toothbrush probably came from your body. Really, you infect your toothbrush, not the other way around, so why not do it a favor and get a new mouth every three months?

So what now?

Buy a new toothbrush if you want, but scientifically, the 3–4 month claim is on the same level as not hugging your kids. Don’t stop hugging your kids. Brush your teeth with something that can get between them, like a cheap toothbrush, an old rag dipped in charcoal, or a stick. You can use toothpaste if you want; it seems to have an additional positive effect, probably a small one. Your toothbrush is probably working fine. If your toothbrush smells bad, you probably have bad breath.

Full disclaimer: I’m sure I could have read more, and I might be playing too fast, loose, and snarky. I haven’t even read the full Conforti paper (if you have good institutional access, see if you can get it for me). I’ll dig deeper if it turns out that anyone cares; leave a comment.

Update

  • That paper that doesn’t exist actually does, sort of. The press release got the journal wrong. But that doesn’t help, because its findings have nothing to do with the claim. Conforti is still the go-to resource, and it’s still crap.
  • That journal with both the Warren and Conforti results, the Journal of Clinical Dentistry, bills itself as “the highest quality peer-reviewed medium for the publication of industry-supported oral care product research and reviews.” It’s a shill venue. And they don’t offer any online access to past issues or articles, so it’s tough to dig deeper into any of these publications, or how they were funded.
  • The industry has now organized its science supporting its claim at this site: https://www.dentalcare.com/en-us/research/research-database-landing?research-topics=toothbrush-wear&research-products=&author=&year=&publication=&research-types=
  • Warren is now at NYU. Ugh.
  • Looking outside of journals that get paid by toothbrush companies to publish meaningless research, there are failures to replicate Warren’s 3 month findings:
    • “no statistically significant differences were found for plaque score reductions for 3-month-old toothbrushes exhibiting various degrees of wear.” (Malekafzali, 2011)
    • and stronger: “A total of 238 papers were identified and retrieved in full text. Data on tooth-brushing frequency and duration, ideal bristle stiffness, and tooth-brushing method were found to be equivocal. Worn tooth brushes were not shown to be less effective than unworn brushes, and no ideal toothbrush replacement interval is evident.”(Asadoorian, 2006)

Refs

Conforti N.J., Cordero R.E., Liebman J., Bowman J.P., Putt M.S., Kuebler D.S., Davidson K.R., Cugini M. & Warren P.R. (2003). An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies., The Journal of clinical dentistry, 14 (2) 29-33. PMID: http://www.ncbi.nlm.nih.gov/pubmed/12723100

Glaze P.M. & Wade A.B. (1986). Toothbrush age and wear as it relates to plaque control*, Journal of Clinical Periodontology, 13 (1) 52-56. DOI: http://dx.doi.org/10.1111/j.1600-051x.1986.tb01414.x.


Subjective utility paradox in a classic gift economy cycle with loss aversion

 

Decision research is full of fun paradoxes.  Here’s one I came up with the other day. I’d love to know if it’s already been explored.

  1. Imagine a group of people trading Kahneman’s coffee cup amongst themselves.
  2. If you require that it keep being traded, loss aversion predicts that it’ll become more valuable over time, as everyone sells it for more than they paid for it.
  3. Connect those people in a ring, and as the cup gets traded around, its value will diverge. It will become invaluable.
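The three steps above can be written down as a toy simulation. The 10% markup is an assumed loss-aversion premium (once I own the cup, parting with it hurts, so I only sell above what I paid), not an empirical estimate:

```python
# Assumed endowment-effect premium each owner demands per trade.
LOSS_AVERSION_MARKUP = 1.10

def trade_around_ring(n_people, n_laps, price=1.0):
    """Price of the cup after circulating a ring of n_people, n_laps times."""
    for _ in range(n_people * n_laps):
        price *= LOSS_AVERSION_MARKUP  # every seller charges more than they paid
    return price

print(trade_around_ring(n_people=10, n_laps=1))   # ~2.6x the original price
print(trade_around_ring(n_people=10, n_laps=10))  # ~13,781x and diverging
```

Under compounding, any markup strictly greater than 1 makes the price grow without bound; the paradox only needs the premise that trading must continue.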

Kula bracelet
 

Thoughts

  • This could be a mechanism for things transitioning from having economic to cultural value, a counter-trend to the cultural->economic trend of Israeli-daycare-style crowding out.
  • The cup of course doesn’t actually have to attain infinite value for this model to be interesting.  If it increases in value at all over several people, then that’s evidence for the mechanism.
  • Step 2 at least, and probably 3, aren’t giant leaps. Who would know if this argument has been made before?
  • There is a real world case for this.  A bit too complicated to be clean-cut evidence, but at least suggestive.  The archetypal example of gift economies was the Kula ring, in which two types of symbolic gift were obligatorily traded for each other over a ring of islands, with one type of gift circulating clockwise  and the other counter-clockwise through the islands. These items had no practical use, they existed only to trade.  They became highly sought-after over time, as indicators of status.  In the variant described, both types of items should become invaluable over both directions around the circle, but should remain tradable for each other.
  • This example ends up as a fun paradox for utilitarianism under boundedly rational agents, a la Nozick’s utility monster, which subjectively enjoys everything more than everyone, and therefore under a utilitarian social scheme should rightfully receive everything.
  • The effect should be smaller as the number of people in the ring gets smaller.  A smaller ring means fewer steps until I’ve seen the object twice (less memory decay).  My memory that the thing was less valuable yesterday acts here as a counterbalance to the inflationary effect of loss aversion.

New work using a video game to explain how human cultural differences emerge

Video games can help us sink our teeth into some of the thorniest questions about human culture. Why are different people from different places different? Is it because their environments differ, because they differ, or is it all random? These are important questions, at the very root of what makes us act the way we do. But answering them rigorously and responsibly is a doozy. To really reliably figure out what causes differences in human cultures, you’d need something pretty crazy. You’d need a sort of human culture generator that creates lots of exact copies of the same world, puts thousands of more or less identical people in each of them, lets them run for a while, and does or does not produce cultural differences. In essence, you’d need God-like power over the nature of reality.

A tall order, except, actually, this happens all the time. Multiplayer video games and other online communities are engineered societies that attract millions of people. It turns out that even the most powerful computers can’t host all of those visitors simultaneously, so game developers often create hundreds of identical copies of their game worlds and randomly assign new players to one or another instance. This creates the circumstances necessary, less real than reality, but much more realistic than any laboratory experiment, to test fundamental theories about why human cultures differ. For example, if people on different copies of the same virtual world end up developing different social norms or conceptions of fairness, that’s evidence: mere tiny random fluctuations can cause societies to differ!

This theory, that societies need no fundamental genetic or deep-seated environmental divergences to drift apart, has revealed itself in many disciplines. It is known as the Strong Cultural Hypothesis in cultural anthropology, and has emerged under different names in economics, sociology, and even the philosophy of science. But stating a hypothesis is one thing; pinning it down with data is another.

Working with survey data collected by evolutionary anthropologist Pontus Strimling at the Institute for the Future in Sweden from players of the classic multiplayer game World of Warcraft, we showed that populations can come to differ even when demographics and environment are the same. The game gives many opportunities for random strangers, who are likely never to meet again, to throw their lots together and try to cooperate in taking down big boss characters. Because these are strangers with no mutual accountability, players have lots of strategies for cheating each other: playing nice until some fancy object comes along, then stealing it and running away before anyone can do anything. The behavior is so common that it has a name in the game, “ninja-ing,” which reflects the shadowy and unassailable nature of the behavior.

Given all this opportunity for bad behavior, players within cultures have developed norms and rules for how and when to play nice and make sure others do. For those who want to play nice, there are lots of ways of deciding who should get a nice object. The problem then is which to choose? It turns out that, when you ask several people within one copy of the game world how they decide to share items, you’ll get great agreement on a specific rule. But when you look at the rules across different copies, the rule that everyone agreed on is different. Different copies of the world have converged on different consensuses for what counts as a fair distribution of resources. These differences emerge reliably between huge communities even though the player demographics between communities are comparable, and the game environments across those communities are literally identical.

If it seems like a tall order that video games can tell us about the fundamental nature of human cultural differences, that’s fair: players are mostly male, often young, and the stakes in a game are much different from those in life. Nevertheless, people care about games, they care about being cheated, and incentives to cheat are high, so the fact that stable norms emerge spontaneously in these little artificial social systems is evidence that, as Jurassic Park’s Dr. Ian Malcolm said of life, “culture finds a way.”

Here is the piece: paywall, no paywall


Why Carl Sagan wasn’t an astronaut

Astronomer Carl Sagan probably loved space more than most people who get to go there. So why did it never occur to me that he maybe wanted to go himself? We don’t really think of astronomers as wanting to be astronauts. But once you think about it, how could they not? I was in the archives of Indiana University’s Lilly Library, looking through the papers of Herman Joseph Muller, the biologist whose Nobel Prize came from showing that X-ray irradiation causes mutations in fruit flies. He was advisor to a precocious young high-school-aged Sagan, and they had a long correspondence. Flipping through it, you get to watch Sagan evolve from calling his advisor “Prof. Muller” to “Joe” over the years. You see him bashfully asking for letters of recommendation. And you get to see him explain why he was never an astronaut.

The letter

HARVARD COLLEGE OBSERVATORY
Cambridge 38, Massachusetts

November 7, 1966

Professor H. J. Muller
Department of Zoology
Jordan Hall 222
University of Indiana
Bloomington, Indiana

Dear Joe,

Many thanks for the kind thoughts about the scientist-astronaut program. I am not too old, but I am too tall. There is an upper limit of six feet! So I guess I’ll just stay here on the ground and try to understand what’s up in the sky. But a manned Mars expedition — I’d try and get shrunk a little for that.

With best wishes,
Cordially,
Carl Sagan

A little note on using special collections

A library’s Special Collections can be intimidating and opaque. But they hold amazing stuff once you get started. The easiest way to get started is to show up and just ask to be shown something cool. It’s the librarian’s job to find things, and they’ll find something. But that only shows you things people already know about. How do you find things that no one even knew were in there? The strategy I’m converging on is to start by going through a library’s “finding aids,” skip to the correspondence, skip to the alphabetized correspondence, Google the people who have been pulled out, and pull the folder of the first person who looks interesting. The great thing about this strategy is that even if your library only has the papers of boring people, those papers will include letters from that boring person’s interesting friends.


Bringing big data to the science of community: Minecraft Edition

https://www.hmc.edu/calendar/wp-content/uploads/sites/39/2019/01/Rubiks-Cube-images.jpg

Looking at today’s Internet, it is easy to wonder: whatever happened to the dream that it would be good for democracy? Well, looking past the scandals of big social media and the scary plays of autocracy’s hackers, I think there’s still room for hope. The web remains full of small experiments in self-governance. It’s still happening, quietly maybe, but at such a tremendous scale that we have a chance not only to revive the founding dream of the web, but to bring modern scientific methods to basic millennia-old questions about self-governance, and how it works.

Minecraft? Minecraft.

That’s why I spent five years studying Minecraft. Minecraft, the game you or your kid or niece played anytime between 5 minutes and 10 years ago, consists of joining one of millions of boundless virtual worlds, and building things out of cubic blocks. Minecraft doesn’t have a plot, but narrative abhors a vacuum, so people used the basic mechanics of the game to create their own plots, and in the process catapulted it into its current status as the best-selling video game of all time. Bigger than Tetris.

Minecraft’s players and their creations have been the most visible facet of the game, but they are supported by a class of amateur functionaries that have made Minecraft special for a very different reason. These are the “ops” and administrators, the people who do the thankless work of running each copy of Minecraft’s world so that it works well enough that the creators can create.

Minecraft, it turns out, is special not just for its open-ended gameplay, but because it is “self-hosted”: when you play on a world with other people, there is a good chance it is being maintained not by a big company like Microsoft, but by an amateur, a player, who somehow roped themselves into all kinds of uncool, non-cubic work writing rules, resolving conflicts, fixing problems, and herding cats. We’re used to leaving critical challenges to professionals and, indeed, most web services you use are administered by people who provide CPU, RAM, and bandwidth to the public for a living. But there is a whole underworld of amateur-run server communities, in which people with no governance training, and no salary, who would presumably prefer to be doing something else, take on the challenge of building and maintaining a community of people who share a common vision and work together toward it. When that works, it doesn’t matter if that vision is a block-by-block replica of the starship Enterprise; it’s inspiring. With no training in governance, these people are teaching themselves to build governance institutions. Each world they create is a political experiment. By my count, 19 of 20 fail, and each success and failure is a miraculous data point in the quest to make self-governance a science.

That’s the dream of the Internet in action, especially if we can bring that success rate up from 1/20, 5 percent. To really understand the determinants of healthy institutions, we’d have to be able to watch 100,000s of the nations of Earth rise and fall. Too bad Earth only has a few hundred nations. Online communities are the next best thing: they give us the scale to run huge comparisons, and even experiments. And there is more to governing them than meets the eye.

Online communities as resource governance institutions

Minecraft servers are one example of an interesting class of thing: the public web server. A web server is a computer that someone is using to provide a web service, be it a computer game, website, mailing list, wiki, or forum. Being computers, web servers have limits: finite processing power (measured in gigahertz), memory (measured in gigabytes), bandwidth (measured in gigabytes per second), and electricity (measured in $$$ per month). Failing to provide any of these adequately means failing to provide a service that your community can rely on. Being a boundless 3D multiplayer virtual world open to virtually anyone, Minecraft is especially resource intensive, making these challenges especially critical.

Any system that manages to thrive in these conditions, despite being available to the entire spectrum of humanity, from anonymous adolescents with poor impulse control to teams of professional hackers, is doing something special. Public web servers are “commons” by default. Each additional user or player who joins your little world imposes a load on it. Even if all of your users are well intentioned your server will grind to a halt if too many are doing too much, and your community will suffer. When a valuable finite resource is available to all, we call it a common pool resource, and we keep our eyes out for the classic Tragedy of the Commons: the problem of too many people taking too much until everyone has nothing.

The coincidence of the Information Age with the global dominance of market exchange is that virtually every application of advancing technology has been toward making commons extinct. Anything that makes a gadget smaller or cheaper makes it easier to privately own, and more legible to systems that understand goods as things that you own and buy and sell. This goes back all the way to barbed wire, which made it feasible to fence off large tracts of previously wild land, turning the Wild West from the gigantic pasture commons that created cowboys into private property. (Cowboys were common pool resource managers who ranged the West, bringing cow herds back to their owners through round-ups.) Private servers like those in Minecraft are a counterpoint to this narrative. Given modern technology’s hostility to the commons, it’s funny every time you stumble on a commons that was created by technology. It’s like they won’t go away.

That brings up a big question. Will commons go away? Can they be privatized and technologized away? This is one foundation of the libertarian ideology behind cryptocurrency. But the stakes are higher than the latest fad.

One claim that has been made by virtually every philosopher of democracy is that successful self-governance depends not only on having good rules in place, but on having members who hold key norms and values. Democracy has several well-known weak spots, and norms and values are its only reliable protection from demagogues, autocrats, elites, or mob rule. This sensitivity to culture puts institutions like democracy in contrast with institutions like markets, hierarchies, and autocracies, whose reliance on base carrots and sticks makes them more independent of value systems. Economist Sam Bowles distinguishes between Machiavellian and Aristotelian institutions, those that are robust to the worst citizen, and those that create good ones. The cynical versus the culture-driven institutions.

The same things that make cynical institutions cynical make them easy to analyze, design, and engineer. We have become good at building them, and they have assumed their place at the top of the world order. Is it their rightful place? In the tradition that trained me, only culture-driven institutions are up to the challenge of managing commons. If technology cannot take the commons from our future, we need to be as good at engineering culture-driven institutions as we are at engineering markets and chains of command. Minecraft seems like just a game, a kid’s game, but behind its success are the tensions that are defining the role of democracy in the 21st century.

Unfortunately, the same factors that make cynical institutions easy to build and study make culture-driven institutions hard. It is possible to make thousands of copies of a hierarchy and test its variations; that’s what a franchise is: Starbucks, McDonald’s, copy, paste. By contrast, each inspiring participatory community you discover in your life is a unique snowflake whose essence is impossible to replicate, for better and worse.

By researching self-organizing communities on the Internet, wherever they occur, we take advantage of a historic opportunity to put the “science” in “political science” to an extent that was once unimaginable. When you watch one or ten people try to play God, you are practicing history. When you watch a million, you are practicing statistics. We can watch millions of people trying to build their own little Utopia, watch them succeed and fail, distinguish bad choices from bad luck, determine when a bad idea in most contexts will be good somewhere else, and build general theories of institutional effectiveness.

There are several features that make online communities ideal for the study of culture-driven institutions. Their low barrier to entry means there are many more of them. Amateur servers are also more transparent, their smaller scale makes them simpler, their shorter, digitally recorded histories permit insights into the processes of institutional change, and the fact that they serve identical copies of known software makes it possible to perform apples-to-apples comparisons that make comparisons of the nations of Earth look apples-to-elephants.

A study of the emergence of formal governance

Big ideas are nice, but you’ve got to pin them down somehow. I began my research by asking a narrower question: how and why do communities develop their governance systems in the direction of increasing integration and formalization? This is the question of where states come from, and bureaucracy, and rules. Do we need rules? Is there a right way to use them to govern? Is it different for large and small populations?

To answer this, I wrote a program that scanned the Internet every couple of hours for two years, visiting communities for information about how they are run, who visits them, and how regularly those visitors return. I defined community success as the emergence of a core group: the number of players who return to a specific server at least once a week for a month, despite the thousands of other communities they could have visited. And because the typical lifespan of a server is nine weeks, it was possible to observe thousands of communities, over 150,000, over their entire life histories. Each starts from essentially the same initial conditions, a paradoxical “tyrano-anarchy” with one ruler and no rules. And each evolves in accordance with a sovereign administrator’s naïve sense of what brings people together. As they develop that sense, administrators can install bits of software that implement dimensions of governance, including private property rights, peer monitoring, social hierarchy, trade, communication, and many others. Most fail; some succeed.
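The core-group metric can be sketched in a few lines. The log format, names, and thresholds here are invented for illustration, not the study's actual pipeline:

```python
from datetime import date

# Hypothetical visit log: player -> dates they were seen on one server.
visits = {
    "alice": [date(2015, 6, d) for d in (1, 8, 15, 22)],   # weekly regular
    "bob":   [date(2015, 6, 1), date(2015, 6, 2)],         # tried it, left
    "carol": [date(2015, 6, d) for d in range(1, 29, 3)],  # very active
}

def in_core_group(dates, weeks=4):
    """True if the player returned at least once in each of `weeks`
    consecutive weeks after their first visit."""
    if not dates:
        return False
    start = min(dates)
    weekly = {(d - start).days // 7 for d in dates}
    return all(w in weekly for w in range(weeks))

core = sorted(p for p, dates in visits.items() if in_core_group(dates))
print(core)  # ['alice', 'carol']
```

Community success is then just the size of that core list, tracked over the server's lifespan.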

According to my analysis, large communities seem to be most successful when they attend actively to the full range of resource management challenges and, interestingly, when they empower the sole administrator. Leadership is a valuable part of a successful community, especially as communities grow. The story becomes much harder to align with a favorite ideology when we turn our focus to small communities. It turns out that if your goal is to run a community of 4, rather than 400, regular users, there is no governance style that is clearly more effective than any other: be a despot, be a socialist, use consensus or dice; with few enough people involved, arrangements that seem impossible can be made to work just fine.

The future

What this project shows is that rigorous comparisons of very large, well-documented populations of political experiments make it possible to understand the predictors of governance success. This is important for the future of participatory, empowering governance institutions. Until effective community building can be reduced to a formula, effective communities will be rare, and we humans will continue to fail to tap the full potential of the Internet to make culture-driven institutions scalable, replicable, viable competitors to the cynical institutions that dominate our interactions.

With more bad news every day about assaults on our privacy and manipulation of our opinions, it is hard to be optimistic about the Internet and what it will contribute to the health of our institutions. But working diligently in the background is a whole generation of youth who have been training themselves to design and lead successful communities. Their sense of what brings people together doesn’t come from a charismatic’s speechifying, but from their own past failures to bring loved ones together. They can identify the warning signs of a nascent autocrat, not because they read about autocracies past, but because they have personally experienced the temptation of absolute power over a little virtual kingdom. And as scientists learn these lessons vicariously, at scale, self-governance online promises not only to breed more savvy defenders of democracy, but to inform the design and growth of healthy, informed participatory cultures in the real world.


A recent history of astrology


I made a site this summer—http://whatsyoursign.baby—that’s a sort of glorified blog post about what happens when you go ahead and give astrology a basis in science. I wrapped it up with an explainer that was hidden in the back of the site, so I’m publishing the full thing here.

The history

Before the 17th century, Westerners used similarity as the basis for order in the world. Walnuts look like brains? They must be good for headaches. The planets seem to wander among the stars, the way that humans walk among the plants and trees? The course of those planets must tell us something about the course of human lives. This old way of thinking about the stars persists today, in the form of astrology, an ancestor of the science of astronomy that understands analogy as a basis of cosmic order.

Planets paired with the everyday earthbound objects that exert the same gravitational pull from about two meters away. If you don’t believe that a shared spiritual essence binds the objects of the cosmos to a common fate, keep in mind that the everyday object most gravitationally similar to Uranus is the toilet.
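That toilet comparison is easy to sanity-check with Newton's law, a = GM/r². The masses and distances below are rough round figures (the toilet's mass in particular is a guess):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_URANUS = 8.68e25   # kg
D_URANUS = 2.6e12    # m, roughly the closest Earth-Uranus distance
M_TOILET = 45        # kg, a guess at a porcelain toilet
D_TOILET = 2.0       # m, "about two meters away"

def pull(mass, distance):
    """Gravitational acceleration (m/s^2) produced by a point mass."""
    return G * mass / distance ** 2

a_uranus = pull(M_URANUS, D_URANUS)
a_toilet = pull(M_TOILET, D_TOILET)
print(f"Uranus: {a_uranus:.1e} m/s^2")  # ~8.6e-10
print(f"toilet: {a_toilet:.1e} m/s^2")  # ~7.5e-10
```

Both pulls land within about 15% of each other, around a billionth of a millionth of what the Earth exerts on you.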
It was our close relationship to the heavenly bodies that slowly changed the role of similarity in explanation, across the sciences, from something in the world to something in our heads. Thanks to the stargazers, similarity has been almost entirely replaced by cause and mechanism as the ordering principle of nature. This change was attended by another that brought the heavenly bodies down to earth, literally.

Physics was barely a science before Isaac Newton’s insights into gravity. But Newton’s breakthrough was due less to apples than to cannons. He asked what would happen if a cannonball were shot with such strength that, before it could fall some distance towards the ground, the Earth’s curvature had moved the ground away by the same amount. The cannonball would be … a moon! Being in orbit is nothing more than constantly falling! Through this and other thought experiments, he showed that we don’t need separate sciences for events on Earth and events beyond it (except, well, he was also an occultist). Newton unified nature.
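Newton's cannonball condition is easy to check numerically: at orbital speed, the distance fallen in one second equals the distance the curved surface drops away beneath the ball. A quick sketch with round figures:

```python
import math

R_EARTH = 6.371e6  # m
g = 9.81           # m/s^2 at the surface

# Orbital speed: gravity supplies the centripetal acceleration, g = v^2 / R.
v = math.sqrt(g * R_EARTH)  # ~7.9 km/s

# In one second the cannonball falls about 4.9 m...
drop = 0.5 * g * 1.0 ** 2
# ...and over the ~7.9 km it travels, the surface curves away by
# x^2 / (2R) for small x: the same ~4.9 m, which is Newton's condition.
curve = v ** 2 / (2 * R_EARTH)

print(f"speed {v/1000:.1f} km/s, fall {drop:.1f} m, curvature drop {curve:.1f} m")
```

The two distances agree exactly because the orbital speed was chosen to make them agree; that self-consistency is the thought experiment.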

Gravity—poorly understood to this day—causes masses to be attracted to each other. It is very weak. If you stand in front of a large office building, its pull on you is a fraction of the strength of a butterfly’s wing beats. The Himalayas, home of the tallest peak on Earth, have enough extra gravity to throw off instruments, but not enough to make you heavier. Still enough to matter: Everest’s extra gravity vexed the mountain’s first explorers, who couldn’t get a fix on its height because they couldn’t get their plumb lines plumb.

So is the gravity of faraway bodies strong enough to change your fate? And if so, how? Following the three-century impact of science on thought, modern astrologists have occasionally ventured to build out from the mere fact of astrology into explanations of how it must actually work; to bring astrology into nature right there with physics. Most explanations have focused on gravity, that mysterious force, reasoning that the patterns of gravitational effects of such massive bodies must be large, unique, or peculiar enough to leave some imprint at the moment of birth.

But by making astrology physical, we leave open the possibility that it is subject to physics. If we accept astrology and know the laws of gravity, we should be able to reproduce the gravitational fingerprint of the planets and bring our cosmic destiny into our own hands.

Refs

Foucault’s “The order of things”
Bronowski’s “The common sense of science”
Pratt, 1855 “I. On the attraction of the Himalaya Mountains, and of the elevated regions beyond them, upon the plumb-line in India”

About

This entry was posted on Wednesday, October 24th, 2018 and is filed under believers, science.


Instagram Demo: Your friends are more popular than you


I’m teaching a class that uses code to discover unintuitive things about social systems (UC Davis’ CMN 151). One great one shows how hard it is to think about social networks, and it’s easy to state: “On average, your friends are more popular than you” (Feld 1991).

It’s one thing to explain, but something more to show it. I had a demo coded up on Facebook, but it was super fragile, and more of my students use Instagram anyway, so I coded it up again.

To run the demo:

  1. Consider not participating (because, for a student, the demo involves logging into your Instagram account on a public computer and running code written by someone with power over you).
  2. Log in to your Instagram account
  3. Click to show your Followers, and scroll down that list all the way until they are all showing. This could take a while for people with many followers.
  4. Open up View -> Developer -> JavaScript Console (in Chrome. “Web Console” in Firefox. Slightly different for other browsers. In Safari you need to find developer mode first and turn it on)
  5. Paste the code below (it will be accessible via Canvas) into your browser’s JavaScript Console. If Followers aren’t showing, it won’t work. This could also take a while if you have many followers. Keep pasting the last part until the numbers are stable. Your computer is working in the background growing the list of your followers’ numbers of followers.
  6. Open this Google Sheet.
  7. Paste your values into the sheet.
  8. Calculate the average number of followers, and the average number of followers of followers. Compare them. With enough participants, the second will be bigger, even if you exclude giant robot accounts.

This post isn’t an explainer, so I won’t get into how and why it’s true. But the way you set it up beforehand in class is by reasoning that there shouldn’t be a systematic difference between your popularity and your friends’. The numbers should be the same. You wrap the lesson up after the data is in by hopping onto the spreadsheet live and coding up the averages of their followers, and of their friends’ followers, to show that their friends’ average is higher on average. After explaining about fat tails, you drive it home on the board by drawing a star-shaped network and showing that the central node is the only one that is more popular than her friends, and all others are less popular.
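The star-network argument translates directly into code. Here is a minimal sketch (the five-node star is a toy example of my own, separate from the Instagram demo):

```javascript
// Friendship paradox on a star network: node 0 is connected to everyone,
// nodes 1..4 only to node 0. Compare each node's own popularity with the
// average popularity of its friends.
const edges = [[0, 1], [0, 2], [0, 3], [0, 4]];
const n = 5;
const friends = Array.from({length: n}, () => []);
for (const [a, b] of edges) { friends[a].push(b); friends[b].push(a); }

const degree = friends.map(f => f.length); // own popularity: [4, 1, 1, 1, 1]
const friendAvg = friends.map(f =>         // average popularity of your friends
  f.reduce((sum, g) => sum + degree[g], 0) / f.length);

const avgDegree = degree.reduce((a, b) => a + b) / n;       // (4+1+1+1+1)/5
const avgFriendAvg = friendAvg.reduce((a, b) => a + b) / n; // (1+4+4+4+4)/5
console.log(avgDegree, avgFriendAvg); // 1.6 3.4
```

Only the central node (popularity 4) beats its friends’ average (1); the four leaves (popularity 1) are all crushed by theirs (4), so on average your friends come out ahead.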

The code

Open your Instagram Followers (so that the URL in the location bar reads https://www.instagram.com/yourusername/followers/) and paste this into your JavaScript console.



// from https://stackoverflow.com/questions/951021/what-is-the-javascript-version-of-sleep
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
function instaFollowerCount(page) {
  return parseInt(page.querySelector("a[href$='/followers/']").firstElementChild.textContent.replace(/,/g, ""));
}
function instaFollowerCount2(page) {
  return parseInt(page.querySelector("head meta[name='description']").attributes['content'].value.match(/([\d,]+)\sFollowers/)[1].replace(/,/g, ""));
}
function instaFollowerList(page) {
  return Array.prototype.slice.call(page.querySelector("div[role='presentation'] div[role='dialog']").querySelector("ul").querySelectorAll("a[title]")).map(x => x.href);
}
// https://stackoverflow.com/questions/247483/http-get-request-in-javascript#4033310
function httpGet(theUrl) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.responseType = 'document';
  xmlHttp.open("GET", theUrl, false); // false for synchronous request
  xmlHttp.send(null);
  return xmlHttp.response;
}
function httpGetAsync(theUrl, callback) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.responseType = 'document';
  xmlHttp.onreadystatechange = function() {
    if (xmlHttp.readyState == 4 && xmlHttp.status == 200)
      callback(xmlHttp.response);
  };
  xmlHttp.open("GET", theUrl, true); // true for asynchronous
  xmlHttp.send(null);
}
var iFollowers = instaFollowerCount(document);
var aFollowers = instaFollowerList(document);
var docs = [];
for (const f in aFollowers) {
  httpGetAsync(aFollowers[f] + "followers/", function(response) {
    docs.push(instaFollowerCount2(response));
  });
  if (f % 100 == 0 && f > 0) {
    // Instagram limits you to 200 queries per hour, so this institutes a
    // 30 minute (plus wiggle) wait every 100 queries.
    // If you're fine running the demo with just a sample of 200 of your
    // followers, that should be fine, and it's also way faster: the demo can
    // run in seconds instead of taking all night. To have it that way,
    // delete the 'await sleep' line below.
    await sleep(1000 * 60 * 30 + 10000); // in ms, so 1000 = 1 second
  }
}


And then, after waiting until docs.length is close enough to iFollowers, run



console.log(`You have ${iFollowers} followers`);
console.log(`(You've heard from ${docs.length} of them)`);
console.log("");
console.log(`On average, they have ${docs.reduce((total, val, i, arr) => total + val) / docs.length} followers`);
console.log(`Your most popular follower has ${docs.reduce((incumbent, challenger, i, arr) => incumbent > challenger ? incumbent : challenger)} followers`);
console.log(`Your least popular follower has ${docs.reduce((incumbent, challenger, i, arr) => incumbent < challenger ? incumbent : challenger)} followers`);


The result isn't meaningful for just one person, but with enough people, it's a strong lively demo. See how things are coming along for others on this Sheet.

Technical details

Instagram crippled their API, so it isn't possible to run this demo above board, not even with the /self functionality, which should be enough since all participants are logged in to their own accounts. This code works by getting the list of usernames of all followers and issuing a GET request for each follower's page. But Instagram can tell you are scraping, so it cripples the response. That's why instaFollowerCount differs from instaFollowerCount2. In the main user's page, the followers are prominent and relatively easy to scrape, but the requested page of the friend can't be reached through a console request. Fortunately, Instagram's "meta" summary description of a user's page lists their number of followers, so a simple regex yields it. Of course, even scraping the follower count and IDs from the main page is tricky because Instagram has some scheme to scramble all class names for every page load or account or something. Fortunately it's still a semantic layout, so selector queries for semantic attributes like "content", "description", and "presentation" work just fine to dig up the right elements. Of course, this could all change tomorrow: I have no idea how robust this code is, but it works as of Oct 24, 2018. Let me know if your mileage varies.


Do you lose things? Here’s the magical way to find them.

Let’s say you make a trip to the store, making sure to lock the door behind you on the way out. When you return and try to let yourself in, you discover that you lost your keys somewhere along the way. Round-trip, the whole distance traveled was longish for hunting down a pair of lost keys: about 1 km. They could be anywhere!

How should you go about finding your keys? Should you spend the whole cold day and night slowly scouring your path? That sounds awful. But reality isn’t going to do you any favors: there’s no way your keys are more likely to be in one place along the way than another. So, for example, if the space within ten meters of your door accounts for 2% of the whole trip, the probability of finding your keys within that space must be equal to 2%, not greater than or less than 2%. Right?

Nope. It turns out that reality wants to do you a favor. There’s a good place to look for your keys.

The answer

Intuition says that they are as likely to be in one place along the way as any other. And intuition is right for the special case that your keys were definitely very secure and very unlikely to have fallen out on that particular trip. But they probably weren’t. After all, if it was so unlikely, they shouldn’t have fallen out. So we can’t just consider the world where the very unlikely happened. We have to consider several possible worlds of two rough types:
* The worlds in which your keys were very secure, but the very unlikely happened and they fell out anyway.
* The worlds in which your keys, on that particular trip, were unusually loose and bound to fall out.
So those are the two types of possible world we’re in, and we don’t have to consider them equally. The mere fact that your keys fell out means it’s more likely that you’re in the second type of world, that they were bound to fall out. And if they were bound to fall out, then they probably fell out right away. Why? We can take those worlds and divide them again, into those where your keys were likely but not too too likely to fall out, and those in which your keys were not just very likely, but especially very likely to fall out. And so on. Of the worlds in which your keys were bound to fall out, the ones that are most likely are the ones in which they fell out right away.

So there it is. If you lost your keys somewhere along a long stretch, you don’t have to search every bit of it equally, because they most likely fell out on your way down the doorstep, or thereabouts. The probability of finding your keys within 10 meters of the door is greater than 2%, possibly much greater.

What is the probability exactly? If you’d had several keys to lose, we might be able to better estimate which specific world we’re in of the millions. But even with just one key lost, the mere fact that it got lost means it was most likely to have gotten lost immediately.

Why is it magic?

If you know the likelihood of losing your keys, that makes them impossible to find. If you have no idea the chances they fell out, then they’re more than likely near the door. It’s your uncertainty about how you lost them that causes them to be easy to find. It’s as if the Universe is saying “Aww, here you go, you pitiful ignorant thing.”

Solving the puzzle, with and without data

So you can’t get the actual probability without estimates of how often this trick works.  But even without hard data, we can still describe the general pattern. The math behind this is tractable, in that someone who knows how to prove things can show that the distribution of your key over the length of the route follows an exponential distribution, not a uniform distribution, with most of the probability mass near the starting point, and a smooth falling off as you get further away. The exponential distribution is commonly used for describing waiting times between events that are increasingly likely to have happened at least once as time goes by. Here is my physicist friend, “quantitative epistemologist” Damian Sowinski explaining how it is that your uncertainty about the world causes the world to put your keys close to your door.

If you get in this situation and try this trick, write me whether it worked or not and I’ll keep a record that we can use to solve for lambda in Damian’s notes.

In the meantime, we do have one real-world data point. This all happened to me recently on my way to and from the gym. I was panicking until I realized that if they fell out at all, they probably fell out right away. And like magic, I checked around my starting point And There They Were. It’s an absolutely magical feeling when mere logic helps you solve a real problem in the real world. I’ve never been so happy to have lost my keys.

 

UPDATE: How strong is the effect?

All of the above tells us that there’s a better than 2% chance of finding your keys in the first 10 meters. But how much better than 2%?  20% or 2.001%?  If the latter, then we’re really talking intellectual interest more than a pro-tip; even if the universe is doing you a favor, it’s not exactly bending over backwards for you.  To tackle this, we have mathematician Austin Shapiro.  Backing him up I can add that, on the occasion on which this trick worked for me, my keys were super super loose, just like he predicts.  A takeaway is going to be that if this trick works for you, you did a very bad job of securing your keys.

I read your blog post, including Damian’s note. I have some things to add, but to clearly explain where they fit in, let me try to delineate two separate “chapters” in the solution to your key problem.

In chapter 1, we narrow our set of models for the location of the keys to the exponential distributions. Damian gives a good account of how this can be justified from first principles. But after doing this, we still have an infinite set of models, because an exponential distribution depends on a parameter \lambda (the expected rate of key losses per kilometer walked, which may be high if the keys are loose and hanging out of your pocket, or low if they are well secured).

In chapter 2, we use conditional probability to select among the possible values of \lambda, or, as you put it in your blog post, try to figure out which world we are in. This is the part that interests me, and it’s also the part that still needs mathematical fleshing-out. All Damian says about it is “So what is the value of \lambda? That’s a question for experiment — one must measure it.” But as you say, we’ve already done one experiment: you observed that your keys did fall out during a 1 km walk. This is enough to put a posterior distribution on \lambda if we posit a prior distribution.

However… what does a neutral prior for \lambda look like? I don’t know any principled way to choose. A uniform distribution between 0 and some finite ceiling is unsuitable, since according to such a model, if you’re ever very likely to lose your keys, you’re usually pretty likely to lose your keys.

Assigning \lambda itself an exponential prior distribution seems murkily more realistic, so I tried that. If \lambda\sim{\rm Exp}(k), then, if I did my math right, your probability of having lost your keys in the first x km of your walk works out to k(k+1)\left(\frac 1k-\frac 1{k+x}\right), which is (1+\frac 1k)x+O(x^2) for small x. So in this case, Bayesian reasoning boosts the chances that you lost your keys in the first, say, 10 meters, by a factor of 1+\frac 1k. Observe that for this effect to be large, k has to be pretty small… and the smaller k is, the higher your average propensity to lose your keys (the mean of the exponential distribution is \frac 1k). Thus, for example, to achieve the result that the universe is helping you find your keys to the tune of a factor of 5 — i.e., that your chance of having lost your keys in the first 10 meters is 5% instead of the “intuitive” 1% — you need to assume that, a priori, you’re so careless with your keys as to lose them 4 times per kilometer on an average trip. That prior seems just as implausible as the uniform prior.

I can think of one kind of prior that could lead to a strong finding that the universe wants to help you find your keys. That would be a bimodal prior, with a high probability that \lambda is close to 0 (key chained to both nipple rings) and a small probability that \lambda is very large (key scotch-taped to beard), with nothing in between. But I can’t think of any reason to posit such a prior that isn’t transparently circular reasoning, motivated by the answer we’re trying to prove.

So… while all the exponential models definitely give you a better chance of finding your keys near the beginning of your route than near the end, I’m not convinced the effect size is all that strong; or, if it is (and you do have one magical experience to suggest it is), I’m not convinced that math is the reason!

Au.
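Austin’s closed form is easy to sanity-check numerically. Here’s a quick Monte Carlo sketch under his assumptions (an Exp(k) prior on \lambda, with the drop point exponentially distributed given \lambda, conditioned on the keys actually falling out during the 1 km trip):

```javascript
// Monte Carlo check of the "keys fell out early" effect under Austin's model.
// Assumed model: lambda ~ Exp(k) (losses per km), drop distance d ~ Exp(lambda),
// conditioned on d <= 1 (the keys were lost somewhere on the 1 km route).
function sampleExp(rate) {
  return -Math.log(1 - Math.random()) / rate; // inverse-CDF sampling
}

function pLostWithin(x, k, trials) {
  let lost = 0, early = 0;
  for (let i = 0; i < trials; i++) {
    const lambda = sampleExp(k); // draw a "world" (how loose the keys were)
    const d = sampleExp(lambda); // where the keys fell in that world
    if (d <= 1) {                // condition on losing them during the trip
      lost++;
      if (d <= x) early++;
    }
  }
  return early / lost;
}

const k = 1, x = 0.1; // first 100 meters of the kilometer, prior mean 1 loss/km
const simulated = pLostWithin(x, k, 500000);
const exact = (k + 1) * x / (k + x); // Austin's closed form
console.log(simulated, exact); // both near 0.18, vs. the "uniform" 0.1
```

For k = 1 the conditional probability of the first 100 meters comes out near (k+1)x/(k+x) ≈ 0.18, well above the uniform 0.1 — a real boost, though, as Austin says, not an enormous one unless the prior makes you very careless.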

Two GIFs about peer review, and an embarrassing story …

1)


2)

It is common to have your papers rejected from journals. I forwarded a recent rejection to my advisor along with the first GIF. Shortly after, I got the second GIF from the journal editor, with a smiley. It turns out that I’d hit Reply instead of Forward.

At least he had a sense of humor.

About

This entry was posted on Saturday, December 17th, 2016 and is filed under audio/visual, science.


Natural selection, statistical mechanics, and the idea of germs were all inspired by social science

It’s only natural to want to hold your scientific field as the most important, or noble, or challenging field. That’s probably why I always present the sciences of human society as the ones that are hardest to do. It’s not so crazy: it is inherently harder to learn about social systems than biological, engineered, or physical ones because we can’t, and shouldn’t ever, have the same control over humans that we do over bacteria, bridges, or billiard balls. But maybe I take it too far. I usually think of advances in social science as advances in what it is possible for science to teach us, and I uncritically think of social science as where scientific method will culminate.

So imagine my surprise to learn that social science isn’t the end of scientific discovery, but a beginning. According to various readings in John Carey’s Faber Book of Science, three of the most important scientific discoveries since the Enlightenment — the theory of natural selection, the germ theory of disease, and the kinetic theory of gases — brought inspiration from human social science to non-human domains. One of Darwin’s key insights toward the theory of evolution came while reading Malthus’s work on human population. Just in case you think that’s a fluke, Alfred Russel Wallace’s independent discovery of natural selection came while he was reading Malthus. (And Darwin was also influenced by Adam Smith.) Louis Pasteur developed the implications of the germ theory of disease by applying his French right-wing political philosophy to animalcules. The big leap there was that biologists had rejected the idea that very small, insignificant animals could possibly threaten a large and majestic thing like a human, but Pasteur had seen how the unworthy masses threatened the French elite, and it gave him an inkling. Last, James Clerk Maxwell, the man right under Newton and Einstein in physics stature, was reading up on the new discipline of Social Statistics when he came up with the kinetic theory of gases, which in turn sparked statistical mechanics and transformed thermodynamics. Physicists have since taken statistical mechanics out of physical science and applied it to social science, largely ignorant of the fact that it started there.

All of these people were curious enough about society to think and read about it, and their social ponderings were rewarded with fresh ideas that ultimately transformed each of their fields.

I think of science as a fundamentally social endeavor, but when I say that I’m usually thinking of the methods of science. These connections out of history offer a much deeper sense in which all of natural science is the science of humanity.

Thanks to Jaimie Murdock and Colin Allen for the connection between Malthus and Darwin, straight from Darwin’s autobiography

In October 1838, that is, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus on Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favorable variations would tend to be preserved, and unfavorable ones to be destroyed. The results of this would be the formation of a new species. Here, then I had at last got a theory by which to work.


How would science be different if humans were different?

How would science be different if humans were different — if we had different physiological limits? Obviously, if our senses were finer, we wouldn’t need the same amount of manufactured instrumentation to reach the same conclusions. But there are deeper implications. If our senses were packed denser, and if we could faithfully process and perceive all of the information they collect, we would probably have much more sensitive time perception, or one way or another a much refined awareness of causal relations in the world. This would have the result that raw observation would be a much more fruitful methodology within the practice of natural science, perhaps so much so that we would have much less need for things like laboratory experiments (which are currently very important).

Of course, a big part of the practice of science is the practice of communication, and that becomes clear as soon as we change language. Language is sort of a funny way to have to get things out of one head and into another. It is slow, awkward, and very imperfect. If “language” was perfect — if we could transfer our perfect memories of subjective experience directly to each other’s heads with the fidelity of ESP — there would be almost no need for reproducibility, one of the most important parts of science-as-we-know-it. Perfect communication would also supersede the paratactic writeups that scientific writing currently relies on to make research reproducible. It may be that in some fields there would be no articles or tables or figures. Maybe there would still be abstracts. And if we had unlimited memories, it’s possible that we wouldn’t need statistics, randomized experiments, or citations either.

The reduction in memory limits would probably also lead to changes in the culture of science. Science would move faster, and it would be easier to practice without specialized training. The practice of science would probably no longer be restricted to universities, and the idea of specialized degrees like Ph.D.s would probably be very different. T.H. Huxley characterized science as “organized common sense.” This “organization” is little more than a collection of crutches for our own cognitive limits, without which the line between science and common sense would disappear entirely.

That’s interesting enough. But, for me, the bigger implication of this exercise is that science as we know it is not a Big Thing In The Sky that exists without us. Science is fundamentally human. I know people who find that idea distasteful, but chucking human peculiarities into good scientific practice is just like breaking in a pair of brand-new gloves. Having been engineered around some fictional ideal, your gloves aren’t most useful until you’ve stretched them here and there, even if you’ve also nicked them up a bit. It’s silly to judge gloves on their fit to the template. In practice, you judge them on their fit to you.


The unexpected importance of publishing unreplicable research

There was a recent attempt to replicate 100 results from psychology. It succeeded in replicating fewer than half. Is Psychology in crisis? No. Why would I say that? Because unreplicable research is only half of the problem, and we’re ignoring the other half. As with most pass/fail decisions by humans, a decision to publish after peer review can go wrong in two ways:

  1. Accepting work that “shouldn’t” be published (perhaps because it will turn out to have been unreplicable; a “false positive” or “Type I” error)
  2. Rejecting work that, for whatever reason, “should” be published (a “false negative” or “Type II” error).

It is impossible to completely eliminate both types of error, and I’d even conjecture that it’s impossible for any credible peer review system to completely eliminate either type of error: even the most cursory of quality peer review will occasionally reject good work, and even the most conservative of quality peer review will accept crap. It is naïve to think that error can ever be eliminated from peer review. All you can do is change the ratio of false positives to false negatives, to match your own relative preference for the competing values of skepticism and credulity.

So now you’ve got a choice, one that every discipline makes in a different way: you can build a conservative scientific culture that changes slowly, especially w.r.t. its sacred cows, or you can foster a faster and looser discipline with lots of exciting, tenuous, untrustworthy results getting thrown about all the time. Each discipline’s decision ends up nestling within a whole system of norms that develop for accommodating the corresponding glut of awful published work in the one case and excellent anathematic work in the other. It is hard to make general statements about whole disciplines, but peer review in economics tends to be more conservative than in psychology. So young economists, who are unlikely to have gotten anything through the scrutiny of their peer review processes, can get hired on the strength of totally unpublished working papers (which is crazy). And young psychologists, who quickly learn that they can’t always trust what they read, find themselves running many pilot experiments for every few they publish (which is also crazy). Different disciplines have different ways of doing science that are determined, in part, by their tolerances for Type I relative to Type II error.

In short, the importance of publishing unreplicable research is that it helps keep all replicable research publishable, no matter how controversial. So if you’re prepared to make a judgement call and claim that one place on the error spectrum is better than another, that really says more about your own high or low tolerance for ambiguity, or about the discipline that trained you, than it does about Science And What Is Good For It. And if you like this analysis, thank psychology, because the concepts of false positives and negatives come out of signal detection theory, an important math-psych formalism that was developed in early human factors research.
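Since that false-positive/false-negative vocabulary comes from signal detection theory, the tradeoff is easy to see in a toy model (the Gaussian quality scores and the specific criteria below are my own illustrative assumptions, not anything from the replication study): reviewers see a noisy quality score and accept anything above a criterion, and moving the criterion only trades one kind of error for the other.

```javascript
// Toy signal detection model of peer review. "Good" papers produce observed
// quality scores ~ N(1, 1); "bad" papers ~ N(0, 1). A journal accepts any
// paper whose observed score exceeds a criterion c.
function gaussian(mean) { // Box-Muller transform for a N(mean, 1) sample
  const u = 1 - Math.random(), v = Math.random();
  return mean + Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function errorRates(criterion, trials) {
  let falsePos = 0, falseNeg = 0;
  for (let i = 0; i < trials; i++) {
    if (gaussian(0) > criterion) falsePos++;  // bad paper accepted (Type I)
    if (gaussian(1) <= criterion) falseNeg++; // good paper rejected (Type II)
  }
  return [falsePos / trials, falseNeg / trials];
}

// A conservative journal vs. a permissive one: no criterion removes both errors.
console.log(errorRates(1.5, 100000));  // few false positives, many false negatives
console.log(errorRates(-0.5, 100000)); // the reverse
```

The strict criterion behaves like economics (little published crap, lots of good work stuck in working papers); the loose one behaves like psychology (exciting results everywhere, many of them unreplicable). Neither choice escapes the tradeoff.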

Because a lot of attention has gone toward the “false positive” problem of unreplicable research, I’ll close with a refresher on what the other kind of problem looks like in practice. Here is a dig at the theory of plate tectonics, which struggled for over half a century before it finally gained a general, begrudging acceptance:

It is not scientific but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.*

Take that, plate tectonics.

About

This entry was posted on Friday, September 4th, 2015 and is filed under science.


Paper on Go experts in Journal of Behavioral and Experimental Economics

I just published a paper with Sascha Baghestanian​ on expert Go players.

Journal of Behavioral and Experimental Economics

It turns out that having a higher professional Go ranking correlates negatively with cooperation — but being better at logic puzzles correlates positively. This challenges the common wisdom that interactive decisions (game theory) and individual decisions (decision theory) invoke the same kind of personal-utility-maximizing reasoning. By our evidence, only the first one tries to maximize utility through backstabbing. Go figure!

This paper only took three years and four rejections to publish. Sascha got the data by crashing an international Go competition and signing up a bunch of champs for testing.

About

This entry was posted on Saturday, July 25th, 2015 and is filed under science, updates.


Prediction: Tomorrow’s games and new media will be public health hazards.

Every psychology undergraduate learns the same scientific parable of addiction. A rat with a line to its veins is put in a box, a “Skinner Box,” with a rat-friendly lever that releases small amounts of cocaine. The rat quickly learns to associate the lever with a rush, and starts to press it, over and over, forsaking nourishment and sociality, until death, often by stroke or heart failure.

Fortunately, rat self-administration studies, which go back to the 1960’s, offer a mere metaphor for human addiction. A human’s course down the same path is much less literal. People don’t literally jam a “self-stimulate” button until death. Right? Last week, Mr. Hsieh from Taiwan was found dead after playing unnamed “combat computer games” for three days straight. Heart failure. His case follows a handful of others from the past decade, from Asia and the West. Streaks of 12 hours to 29 days, causes of death including strokes, heart failure, and other awful things. One guy foamed at the mouth before dropping dead.

East Asia is leagues ahead of the West in the state of its video game culture. Multiplayer online games are a national pastime with national heroes and nationally-televised tournaments. (And the South Korean government has taken a public health perspective on the downsides, with a 2011 curfew for online gamers under 18.) Among the young, games play the role that football plays for the rest of the world. With Amazon’s recent purchase of e-sport broadcaster twitch.tv for $1.1 billion, there is every reason to believe that this is where things are going in the West.

Science and industry are toolkits, and you can use them to take the world virtually anywhere. With infinite possibilities, the one direction you ultimately choose says a lot about you, and your values. The gaming industry values immersion. You can see it in the advance of computer graphics and, more recently, in the ubiquity of social gaming and gamification. You can see it in the positively retro fascination of Silicon Valley with the outmoded 1950’s “behaviorist” school of psychology, with its Skinner boxes, stimuli and responses, classical conditioning, operant conditioning, positive reinforcement, newfangled (1970’s) intermittent reinforcement. Compulsion loops and dopamine traps. Betable.com, another big dreamer, is inspiring us all with its wager that the future of gaming is next to Vegas. Incidentally, behaviorism seems to be the most monetizable of the psychologies.

And VR is the next step in immersion, a big step. Facebook has bet $400 million on it. Virtual reality uses the human visual system — the sensory modality with the highest bandwidth for information — to provide seamless access to human neurophysiology. It works at such a fundamental level that the engineering challenges remaining in VR are no longer technological (real-time graphics rendering can now feed information fast enough to keep up with the amazing human eye). Today’s challenges are more about managing human physiology, specifically, nausea. In VR, the easiest game to engineer is “Vomit Horror Show,” and any other game is hard. Nausea is a sign that your body is struggling to resolve conflicting signals; your body doesn’t know what’s real. Developers are being forced to reimagine basic principles of game and interface design.*** Third-person perspective is uncomfortable; it makes your head swim. Cut scenes are uncomfortable for the lack of control. If your physical body is sitting while your virtual body stands, it’s possible to feel like you’re the wrong height (also uncomfortable). And the door that VR opens can close behind it: it isn’t suited to the forms that we think of when we think of video games: top-down games that make you a mastermind or a god, “side-scroller” action games, detached and cerebral puzzle games. VR is about first-person perspective, you, and making you forget what’s real.

We use rats in science because their physiology is a good model of human physiology. But I rolled my eyes when my professor made his dramatic pause after telling the rat story. Surely, humans are a few notches up when it comes to self control. We wouldn’t literally jam the happy button to death. We can see what’s going on. Mr. Hsieh’s Skinner Box was gaming, probably first-person gaming, and he self-administered with the left mouse button, which you can use to kill. These stories are newsworthy today because they’re still news, but all the pieces are in place for them to become newsworthy because people are dying. The game industry has always had some blood on its hands. Games can be gory and they can teach and indulge violent fantasizing. But if these people are any indication, that blood is going to become a lot harder to tell from the real thing.

About

This entry was posted on Thursday, January 29th, 2015 and is filed under science, straight-geek.


The intriguing weaknesses of deep learning and deep neural networks

Deep learning (and neural networks generally) have impressed me a lot for what they can do, but much more so for what they can’t. They seem to be subject to several of the very same strange, deep design limits that seem to constrain the human mind-brain system.

  • The intractability of introspection. The fact that we can know things without knowing why we know them, or even that we know them. Having trained a deep network, it’s a whole other machine learning problem just to figure out how it is doing what it is doing.
  • Bad engineering. Both neural networks and the brain are poorly engineered in the sense that they perform a task X in a way that no mechanical or electrical engineer would have designed a machine to do X.** These systems don’t respect modularity, and they are hard to analyze with pencil and paper. They are hard to diagnose, troubleshoot, and reverse-engineer. That’s probably important to why they work.
  • The difficulty of unlearning. The impossibility of “unseeing” the object in the image on the left (your right), once you know what it is. That is a property that neural networks share with the brain. Well, maybe that isn’t a fact, maybe I’m just conjecturing. If so, call it a conjecture: I predict that Facebook’s DeepFace, after it has successfully adapted to your new haircut, has more trouble than it should in forgetting your old one.
  • Very fast performance after very slow training. Humans make decisions in milliseconds, decisions based on patterns learned from a lifetime of experience and tons of data. In fact, the separation between the training and test phases that is standard in machine learning is more of an artifice in deep networks, whose recurrent varieties can be seen as lacking the dichotomy.
  • There are probably others, but I recognize them only slowly.

Careful. Once you know what this is, there’s no going back.

Unlearning, fast learning, introspection, and “good” design aren’t hard to engineer: we already have artificial intelligences with these properties, and we humans can easily do things that seem much harder. But neither humans nor deep networks are good at any of these things. In my eyes, the fact that deep learning is reproducing these seemingly-deep design limitations of the human mind gives it tremendous credibility as an approach to human-like AI.

The coolest thing about a Ph.D. in cognitive science is that it constitutes license, almost literally, to speculate about the nature of consciousness. I used to be a big skeptic of the ambitions of AI to create human-like intelligence. Now I could go either way. But I’m still convinced that getting it, if we get it, will not imply understanding it.



This entry was posted on Sunday, December 21st, 2014 and is filed under science.


Xeno’s paradox

There is probably some very deep psychology behind the age-old tradition of blaming problems on foreigners. These days I’m a foreigner, in Switzerland, and so I get to see how things are and how I affect them. I’ve found that I can trigger a change in norms even by going out of my way to have no effect on them. It’s a puzzle, but I think I’ve got it modeled.

In my apartment there is a norm (with a reminder sign) around locking the door to the basement. It’s a strange custom, because the whole building is safe and secure, but the Swiss are particular and I don’t question it. Though the rule was occasionally broken in the past (hence the sign), residents in my apartment used to be better about locking the door to the basement. The norm is decaying. Over the same time period, the number of foreigners (like me) has increased. From the naïve perspective, the mechanism is obvious: Outsiders are breaking the rules. The mechanism I have in mind shows some of the subtlety that is possible when people influence each other under uncertainty. I’m more interested in the possibility that this can exist than in showing it does. Generally, I don’t think of logic as the most appropriate tool for fighting bigotry.

When I moved in to this apartment I observed that the basement door was occasionally unlocked, despite the sign. I like to align with how people are instead of how the signs say they should be, and so I chose to just remain a neutral observer for as long as possible while I learned how things run. I adopted a heuristic of leaving things how I found them. If the door was locked, I locked it behind me on my way out, and if it wasn’t, I left it that way.

That’s well and good, but you can’t just be an observer. Even my policy of neutrality has side effects. Say that the apartment was once full of Swiss people, including one resident who occasionally left the door unlocked but was otherwise perfectly Swiss. The rest of the residents are evenly split between orthodox door lockers and others who could go either way and so go with the flow. Under this arrangement, the door stays locked most of the time, and the people on the cusp of culture change stay consistent with what they are seeing.

Now, let’s introduce immigration and slowly add foreigners, but a particular kind that never does anything. These entrants want only to stay neutral and they always leave the door how they found it. If the norm of the apartment was already a bit fragile, then a small change in the demographic can tip the system in favor of regular norm violations.

If the probability of adopting the new norm depends on the frequency of seeing it adopted, then a spike in norm adoptions can cause a cascade that makes a new norm out of violating the old one. This is all standard threshold model: Granovetter, Schelling, Axelrod. Outsiders change the model by creating a third type that makes it look like there are more early adopters than there really are.

Technically, outsiders translate the threshold curve up without otherwise changing its shape. In equations, (1) is the cumulative threshold function: it integrates a positive density f() up to percentile x and returns the value y in “x% of people (early adopters E plus non-adopters N) need to see that at least y% of others have adopted before they adopt themselves.” Equation (2) shifts equation (1) up by the fraction of outsiders times their probability of having encountered an adopter rather than a non-adopter.
(1)  F(x) = ∫₀ˣ f(t) dt
(2)  G(x) = F(x) + O · E/(E + N),  where O is the fraction of outsiders

If you plug plausible numbers into each variable, you should start to see that the system needs either a lot of adopters or a lot of outsiders before these hypothetical neutral outsiders can shift the contour very far up. That says to me that I’m probably wrong, since I’m probably the only one following my rule. My benign policy probably isn’t the explanation for the trend of failures to lock the basement door.
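To make that back-of-the-envelope concrete, here is a minimal sketch of the shifted threshold model. The function, the threshold list, and the 30% outsider share are my own invented illustration, not data from the building:

```python
def cascade(thresholds, frac_outsiders):
    """Iterate the threshold model to its fixed point.

    thresholds[i] is the fraction of apparent norm-breakers resident i
    needs to see before leaving the door unlocked too. A neutral
    outsider leaves the door as found, so to observers each outsider
    looks like an adopter with probability equal to the true adopter
    fraction a; the apparent fraction is therefore a + frac_outsiders * a.
    """
    n = len(thresholds)
    a = sum(1 for t in thresholds if t <= 0) / n  # unconditional adopters
    while True:
        apparent = a + frac_outsiders * a
        new = sum(1 for t in thresholds if t <= apparent) / n
        if new == a:
            return a
        a = new

thresholds = [0.0, 0.1, 0.25, 0.25, 0.25, 0.4, 0.5, 0.6, 0.9, 0.99]
cascade(thresholds, 0.0)   # stalls at 0.2: the norm mostly holds
cascade(thresholds, 0.3)   # returns 1.0: a 30% outsider share tips everyone
```

A lone outsider shifts the curve by almost nothing, so, consistent with the conclusion above, it takes either a norm that is already fragile or a lot of outsiders to tip the cascade.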

This exercise was valuable mostly for introducing a theoretical mechanism that shows how it could be possible for outsiders to not be responsible for a social change, even if it seems like it came with them. Change can come with disinterested outsiders if the system is already leaning toward a change, because outsiders can be mistaken for true adopters and magnify the visibility of a minority of adopters.

Update a few months later

I found another application. I’ve always wondered how it is that extreme views — like extreme political views — take up so much space in our heads even though the people who actually believe those things are so rare. I’d guess that we have a bias towards overestimating how many people are active in loud minorities, anything from the Tea Party to goth teenagers. With a small tweak, this model can explain how being memorable can make your social group seem to have more converts than it has, and thereby encourage more converts. Just filter people’s estimates of different groups’ representations through a memory of every person seen in the past few months, with a bias toward remembering memorable things. I’ve always thought that extreme groups are small because they are extreme, but this raises the possibility that it’s the other way around: when you’re small, being extreme is a pretty smart growth strategy.


How we create culture from noise

learningnoise

I don’t like to act too knowledgeable about society, but I’m ready to conjecture a law: “Peoples will interpret patterns into the phenomena that affect their lives, even phenomena without patterns. Culture amplifies pareidolia.”

It’s interesting when those patterns are random, as in weather and gambling. “Random” is a pretty good model for weather outside the timescale of days. But we can probably count on every human culture to have narratives that give weather apprehensible causes. Gambling is random by definition, but that doesn’t stop the emergence of gambling “systems” that societies continue to honor with meaningfulness. Societies do not seem to permit impactful events to be meaningless.

This is all easy to illustrate with fine work by Kalish et al. (2007). The image above shows five series (rows) of people learning a pattern of dots from the person before them, one dot at a time, and then teaching it to the next person in the same way. Each n (each column) is a new person in the “cultural” transmission of the pattern. The experiment starts with some given “true” pattern (the first column).

The first row of the five tells a pretty clean story. The initial pattern was a positive linear function that people learned and transmitted with ease. But the second and third rows already raise some concern: the initial patterns were more complicated functions that, within just a couple of generations, got transformed into the same linear function as in the first row. This is impressive because the people were different between rows; each row happened without any awareness of what happened in the other rows — people had only the knowledge of what just happened in the cell to their immediate left. Treating the five people in rows two or three as constituting a miniature society, we can say that they collectively simplified a complicated reality into something that was easier to comprehend and communicate.

And in the fourth and fifth rows the opposite happens: Subjects are not imposing their bias for positive lines on a more complicated hidden pattern, but on no pattern at all. Again, treating these five people as a society, their line is a social construct that emerges reliably across “cultures” from nothing but randomness. People are capable of slightly more complex cultural products (the negative line in the fifth row) but probably not much more, and probably rarely.
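The dynamic in rows four and five can be sketched with an iterated-learning toy: each learner fits a line to the previous learner’s data and transmits its own slightly noisy reproductions. One generation is enough to turn pure noise into a line, and later generations preserve it. This is my own illustration of the dynamic, not the Kalish et al. procedure:

```python
import random

def fit_line(pts):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    a = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    return a, my - a * mx

def transmit(pts, generations, noise=0.05, rng=None):
    """Each generation fits a line to its predecessor's data and
    passes on its own predictions plus a little reproduction noise."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        a, b = fit_line(pts)
        pts = [(x, a * x + b + rng.gauss(0, noise)) for x, _ in pts]
    return pts

# Rows four and five: the "true" pattern is pure noise.
rng = random.Random(42)
start = [(x / 9, rng.random()) for x in range(10)]
end = transmit(start, generations=5, rng=random.Random(1))
# `end` hugs a straight line far more tightly than `start` does.
```

The line that comes out is a social construct in miniature: no single learner put it there on purpose, but the shared bias of the chain makes it inevitable.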

The robustness of this effect gives strong evidence that culture can amplify the tendencies of individuals toward pareidolia — seeing patterns in noise. It also raises the possibility that the cultural systems we hold dear are built on noise. I’m betting that any work to change such a system is going to find itself up against some very subtle, very powerful social forces.


The empirics of identity: Over what timescale does self-concept develop?

There is little more slippery than who we think we are. It is mixed up with what we do, what we want to do, who we like to think we are, who others think we are, who we think others want us to think we are, and dozens of other equally slippery concepts. But we emit words about ourselves, and those statements — however removed from the truth — are evidence. For one, their changes over time can give insight into the development of self-concept. Let’s say that you just had a health scare and quit fast food. How long do you have to have been saying “I’ve been eating healthy” before you start saying “I eat healthy”? A month? Three? A few years? How does that time change with topic, age, sex, and personality? Having stabilized, what is the effect of a relapse in each of these cases? Are people who switch more quickly to “I eat healthy” more or less prone to sustained hypocrisy — hysteresis — after a lapse into old bad eating habits? And, on the subject of relapse, how do statements about self-concept feed back into behavior: all else being equal, do ex-smokers who “are quitting” relapse more or less than those who “don’t smoke”? What about those who “don’t smoke” against those who “don’t smoke anymore”; does including the regretted past make it more or less likely to return? With the right data — large longitudinal corpora of self-statements and creative/ambitious experimental design — these may become empirical questions.


What polished bronze can teach us about crowdsourcing

  1. Crowds can take tasks that would be too costly for any individual, and perform them effortlessly for years — even centuries.
  2. You can’t tell the crowd what it wants to do or how it wants to do it.

from http://photo.net/travel/italy/verona-downtown


The market distribution of the ball, a thought experiment.

The market is a magical thing.  Among other things, it has been entrusted with much of the production and distribution of the world’s limited resources. But markets-as-social-institutions are hard to understand because they are tied up with so many other ideas: capitalism, freedom, inequality, rationality, the idea of the corporation, and consumer society. It is only natural that the value we place on these abstractions will influence how we think about the social mechanism called the market. To remove these distractions, it will help to take the market out of its familiar context and put it to a completely different kind of challenge.

Basketball markets

What would basketball look like if it was possible to play it entirely with markets, if the game was redesigned so that players within a team were “privatized” during the game and made free of the central planner, their stately coach: free to buy and sell favors from each other in real time and leave teamwork to an invisible hand?  I’m going to take my best shot, and in the process I’ll demonstrate how much of our faith in markets is faith, how much of our market habit is habit.

We don’t always know why one player passes to another on the court. Sometimes the ball goes to the closest or farthest player, or to the player with the best position or opening in the momentary circumstances of the court. Sometimes all players are following the script for this or that play. Softer factors may also figure in, like friendship or even the feeling of reciprocity. It is probably a mix of all of these things.  But the market is remarkable for how it integrates diverse sources of information.  It does so quickly, adapting almost magically, even in environments that have been crafted to break markets.

So what if market institutions were used to bring a basketball team to victory? For that to work, we’d have to suspend a lot of disbelief, and make a lot of things true that aren’t. The process of making those assumptions explicit is the process of seeing the distance of markets from the bulk of real world social situations.

The most straightforward privatization of basketball could class behavior into two categories, production (moving the ball up court) and trade (passing and shooting). In this system, the coach has already arranged to pay players only for the points they have earned in the game. At each instant, players within a team are haggling with the player in possession, offering money to get the ball passed to them. Every player has a standing bid for the ball, based on their probability of making a successful shot. The player in possession has perfect knowledge of what to produce, of where to go to have either the highest chances of making a shot or of getting the best price for the ball from another teammate.

If the player calculates a 50% chance of successfully receiving the pass and making a 3-point shot, then that pass is worth 1.5 points to him. At that instant, 1.5 will be that player’s minimum bid for the ball, which the player in possession is constantly evaluating against all other bids. If, having already produced the best set of bids, any bid is greater than the possessing player’s own estimated utility from attempting the shot, then he passes (and therefore sells) to the player with the best offer. The player in possession shoots when his own expected points exceed all of the standing bids and all of the (perfectly predicted) benefits of moving.
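The bid arithmetic above can be written out. The player labels and numbers here are invented for illustration; the rule is just that every teammate’s standing bid is his expected points, and the holder sells to the best bid that beats his own expected shot:

```python
def standing_bid(p_make, shot_value):
    """A teammate's minimum bid for the ball: expected points if he
    receives the pass and attempts the shot (e.g. 0.5 * 3 = 1.5)."""
    return p_make * shot_value

def decide(own_p_make, own_shot_value, bids):
    """Pass (sell) to the highest bidder whose bid beats the holder's
    own expected points from shooting; otherwise shoot."""
    own_ev = own_p_make * own_shot_value
    best = max(bids, key=bids.get)  # highest standing bid
    if bids[best] > own_ev:
        return ("pass", best)
    return ("shoot", None)

bids = {"guard": standing_bid(0.5, 3), "center": standing_bid(0.6, 2)}
decide(0.35, 2, bids)   # holder's EV is 0.7, so he sells to the guard's 1.5 bid
```

Everything hard about the thought experiment is hidden inside those probabilities: the market institution itself is three lines of arithmetic.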

A lot is already happening, so it will help to slow down. The motivating question is how reality would have to change for this scheme to lead to good basketball. Most obviously, the pace of market transactions would have to speed up dramatically, so that making, selecting, and completing transactions happened instantaneously and unnoticeably. Either time would have to freeze at each instant or the transaction costs of managing the auction institution would have to be reduced to an infinitesimal. Similarly, each player’s complex and inarticulable process of calculating their subjective shot probabilities would have to be instantaneous as well.

Players would have to be more than fast at calculating values and probabilities; they would also have to be accurate. If players were poor at calculating their subjective shot probabilities, and at somehow converting those into cash values, they would not be able to translate their moment’s strategic advantage into the market’s language. And it would be better that players’ bids reflect only the probability of making a shot, and not any other factors. If players’ bids incorporate non-cash values, like the value of being regarded well by others, or the value of not being in pain, then passes may be over- or under-valued. To prevent players from incorporating non-cash types of value, the coach has to pay enough per point to drown out the value of these other considerations. Unlike other parts of this thought experiment, that is probably already happening.

It would not be enough for players to accurately calculate their own values and probabilities; they would have to calculate those of every other player, at every moment. Markets are vulnerable to asymmetries in information. This means that if these estimates weren’t common knowledge, players could take advantage of each other, artificially inflating prices and reducing the efficiency of the team (possibly in both the technical and colloquial senses). Players that fail to properly value or anticipate future costs and benefits will pass prematurely and trap their team in suboptimal states, local maxima. To prevent that kind of short-sightedness, exactly the kind of shortsightedness that teamwork and coaching are designed to prevent, players would have to be capable of not only perfect trading, but perfect production. Perfect production would mean knowing where and when on the court a pass or a shot will bring the highest expected payoff, factoring in the probability of getting to that location at that time.

I will be perfectly content to be proven wrong, but I believe that players who could instantaneously and accurately put a tradable cash value on their current and future state — and on the states of every other player on the court — could use market transactions to create perfectly coherent teams. In such a basketball, the selfish pursuit of private value could be maneuvered by the market institution to guarantee the good of the team.

The kicker

With perfect (instantaneous and accurate) judgement and foresight a within-team system of live ball-trading could produce good basketball. But with those things, a central planner could also produce good basketball. Even an anarchist system of shared norms and mutual respect could do so. In fact, as long as those in charge all share the goal of winning, the outputs of all forms of governance will become indistinguishable as transaction costs, judgement errors, and prediction errors fall to zero. With no constraints it doesn’t really matter what mechanisms you use to coordinate individual behavior to produce optimal group behavior.

So the process of making markets workable on the court is the process of redeeming any other conceivable form of government. Suddenly it’s trivial that markets are a perfect coordination mechanism in a perfect world.  The real question is which of these mechanisms is the closest to its perfect form in this the real world. Markets are not. In some cases, planned economies like board-driven corporations and coach-driven teams probably are.

Other institutions

What undermines bosshood, what undermines a system of mutual norms, and what undermines markets?  Which assumptions are important to each?  

  • A coach can prescribe behavior from a library of taught plays and habits. If the “thing that is the best to do” changes at a pace that a coach can meaningfully engage with, and if the coached behavior can be executed by players on this time scale, then a coach can prescribe the best behavior and bring the team close to perfect coherence.
  • If players have a common understanding of what kinds of coordinated behavior are best for what kinds of situations, and they reliably and independently come to the same evaluation of the court, then consensual social norms can model perfect coherence satisfactorily.
  • And if every instant on the court is different, and players have a perfect ability to evaluate the state of the court and their own abilities, then an institution that organizes self-interest for the common good will bring the team closest to perfect coherence.

Each has problems, each is based on unrealistic assumptions, each makes compromises, and each has its place. But even now the story is still too simple. What if all of those things are true at different points over the course of a game? If the answer is “all of the above,” players should listen to their coach, but also follow the norms established by their teammates, and also pursue their own self-interest. From here, it is easy to see that I am describing the status quo. The complexity of our social institutions must match the complexity of the problems they were designed for. Where that complexity is beyond the bounds that an individual can comprehend, the institutional design should guide them in the right direction. Where that complexity is beyond the bounds of an institution, it should be allowed to evolve beyond the ideological or conceptual boxes we’ve imposed on it.

The closer

Relative to the resource systems we see every day, a sport is a very simple world.  The rules are known, agreed upon by both teams, and enforced closely. The range of possible actions is carefully prescribed and circumscribed, and the skills necessary to thrive are largely established and agreed upon. The people occupying each position are world-class professionals. So if even basketball is too complicated for any but an impossible braid of coordination mechanisms, why should the real world be any more manageable? And what reasonable person would believe that markets alone are up to the challenge of distributing the world’s limited resources?

note

It took a year and a half to write this. Thanks to Keith Taylor and Devin McIntire for input.


Hayek’s “discovery” is the precognition of economics

I’m exaggerating, but I’m still suspicious. I think Vernon Smith does have some interesting, unconventional work in that direction. There are also null results.


This entry was posted on Tuesday, November 26th, 2013 and is filed under life and words, science.


My dissertation

In August I earned a doctorate in cognitive science and informatics. My dissertation focused on the role of higher-level reasoning in stable behavior. In experimental economics, researchers treat human “what you think I think you think I think” reasoning as an implementation of a theoretical mechanism that should cause groups of humans to behave consistently with a theory called Nash equilibrium. But there are also cases when human higher-level reasoning causes deviations from equilibrium that are larger than if there had been no higher-level reasoning at all. My dissertation explored those cases. Here is a video.

My dissertation. The work was supported by Indiana University, NSF/IGERT, NSF/EAPSI, JSPS, and NASA/INSGC.

Life is now completely different.


This entry was posted on Monday, November 25th, 2013 and is filed under books, science, updates.


Breaking the economist’s monopoly on the Tragedy of the Commons.

Summary

After taking attention away from economic rationality as a cause of overexploitation of common property, I introduce another more psychological mechanism, better suited to the mundane commons of everyday life. Mundane commons are important because they are one of the few instances of true self-governance in Western society, and thus one of the few training grounds for civic engagement. I argue that the “IAD” principles of the Ostrom Workshop, well-known criteria for self-governance of resource systems, don’t speak only to the very narrow Tragedy of the Commons, but to the more general problem of overexploitation.

Argument

The Tragedy of the Commons is the tragedy of good fudge at a crowded potluck. Individual guests each have an incentive to grab a little extra, and the sum of those extra helpings causes the fudge to run out before every guest has had their share. For another mundane example, I’ve seen the same with tickets for free shows: I am more likely to request more tickets than I need if I expect the show to be packed.

The Tragedy has been dominated by economists, defined in terms of economic incentives. That is interesting because the Tragedy is just one mechanism for the very general phenomenon of overexploitation. In predatory animal species that are not capable of rational deliberation, population imbalances caused by cycles, introduced species, and overpopulation can cause their prey species to be overexploited. The same holds between infectious agents and their hosts: parasites or viruses may wipe out their hosts and leave themselves nowhere else to spread. These may literally be tragedies of commons, but they have nothing to do with the Tragedy as economists have defined it, and as researchers treat it. In low-cost, routine, or entirely non-economic domains, humans themselves are less likely to be driven by economic incentives. If overexploitation exists in these domains as well, then other mechanisms must be at work.

Economics represents the conceit that human social dynamics are driven by the rational agency that distinguishes us from animals. The Tragedy is a perfect example: Despite the abundance of mechanisms for overexploitation in simple animal populations, overexploitation in human populations is generally treated as the result of individually rational deliberation. But if we are also animals, why add this extra deliberative machinery to explain a behavior that we already have good models for?

I offer an alternative mechanism that may be responsible for engendering overexploitation of a resource in humans. It is rooted in a psychological bias. It may prove the more plausible mechanism in the case of low cost/low value “mundane” commons, where the incentives are too small for rational self-interest to distinguish itself from the noise of other preferences.

This line of thinking was motivated by many years of experience in shared living environments, which offer brownies at potlucks, potlucks generally, dishes in sinks, chores in shared houses, trash in shared yards, book clubs, and any instance where everyday people have disobeyed my culture’s imperative to distribute all resources under a system of private property. The imperative may be Western, or modern, or it may just be that systems of private property are the easiest for large central states to maintain. The defiance of the imperative may be intentional, accidental, incidental, or as mundane as the resource being shared.

Mundane commons are important for political science, and political life, because they give citizens direct experience with self-governance. And theorists from Alexis de Tocqueville to Vincent Ostrom argue that this is the kind of citizen education that democracies must provide if they aren’t going to fall to anarchy on the one side or powerful heads-of-state on the other. People cannot govern themselves without training in governance. I work in this direction because I believe that a culture of healthy mundane commons will foster healthy democratic states.

I don’t believe that the structural mechanisms of economics are those that drive mundane resource failure. This belief comes only from unstructured experience, introspection, and intuition. But those processes have suggested an alternative: the self-serving bias. Self-serving bias, interpreting information in a way that benefits us at the expense of others, is well-established in the decision-making literature.

How could self-serving bias cause overexploitation? Let’s say that it is commonly known that different people have different standards for acceptable harvesting behavior. This is plausible in low-cost/low-reward environments, where noise and the many weak and idiosyncratic social preferences of a social setting might drown out any effects of the highly-motivated goal-oriented profit-maximizing behavior that economists attend to. I know my own preference for the brownies, but I have uncertainty about the preferences of others for them. If, for every individual, self-serving bias is operating on that uncertainty about the preferences of others, then every person in the group may decide that they like brownies more than the other people, and that their extra serving is both fair and benign.

The result will be the overexploitation that results from the Tragedy of the Commons, and from the outside it may be indistinguishable from the Tragedy, but the mechanism is completely different. It is an interesting mechanism because it is prosocial: no individual perceives their actions as selfish or destructive. It predicts resource collapse even among agents who identify as cooperative.
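One toy formalization of the mechanism, under two assumptions of mine: each person claims a share proportional to preferences as they estimate them, and self-serving bias shrinks their estimate of everyone else’s preferences by a fixed factor:

```python
def claims(prefs, bias, resource=1.0):
    """Each agent claims what they sincerely believe is a fair share:
    their own preference divided by their estimate of the group's
    total preference. Self-serving bias discounts *others'*
    preferences by a factor (1 - bias); with bias = 0 the claims sum
    exactly to the resource, with bias > 0 they overshoot it.
    """
    total = sum(prefs)
    out = []
    for p in prefs:
        est_total = p + (1 - bias) * (total - p)  # biased view of the group
        out.append(resource * p / est_total)
    return out

sum(claims([1, 1, 1, 1], bias=0.0))    # 1.0 -- no tragedy
sum(claims([1, 1, 1, 1], bias=0.25))   # ~1.23 -- everyone "fairly" overharvests
```

Every individual claim looks fair from the inside; the overshoot is only visible in the sum, which is why the outcome can look exactly like the Tragedy while the mechanism is prosocial.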

The self-serving bias can help to answer a puzzle in the frameworks developed by the Ostrom Workshop. In their very well-known work, members of the Workshop identified eight principles that are commonly observed in robust common-property regimes. But only one of these, “graduated sanctions,” speaks directly to rational self-interest. The other principles invoke the importance of definitions, of conflict resolution, of democratic representation, and other political and social criteria.

Why are so many of the design principles irrelevant to rational self-interest, the consensus mechanism behind the Tragedy? Because it is not the only cause of overexploitation in self-governing resource distribution systems. The design principles are not merely a solution to the economist’s Tragedy of the Commons, but to the more general problem of overexploitation, with all of the many mechanisms that encourage it. If that is the case, then principles that don’t speak to the Tragedy may still speak to other mechanisms. For my purposes, the most relevant is Design Principle 1, in both of its parts:

1A User boundaries:
Clear boundaries between legitimate users and nonusers must be clearly defined.
1B Resource boundaries:
Clear boundaries are present that define a resource system and separate it from the larger biophysical environment.
(http://www.ecologyandsociety.org/vol15/iss4/art38)

By establishing norms, and the common knowledge of norms, this principle may prevent self-serving bias from promoting overexploitation. Norms provide a default preference to fill in for others when their actual preferences are unknown. By removing uncertainty about the preferences of others, the principle leaves participants no uncertainty to interpret in a self-serving manner.

Other psychological processes can cause overexploitation, but the design principles of the Ostrom Workshop are robust to this twist because they weren’t developed by theorizing, but by looking at real resource distribution systems. So even though they define themselves in terms of just one mechanism for overexploitation, they inadvertently guard against more than just that.


The birthplace of Western civilization was killed during the birth of Western civilization.

Deforestation from Classical-period (~1000 BCE onward) metallurgy in the Holy Land dramatically amplified the effects of an otherwise small regional trend toward a warmer and drier climate. Before 10,000 years ago, we were in a different geological and human era, and you can’t say much about civilization. But from 10,000 until 2,000 years ago, that part of the Fertile Crescent is known to have been fertile. And from 2,000 years ago to the present, it has been a desert. On learning to work metal, locals fed their furnaces by stripping the region of its forests. The authors of the paper below showed that semi-arid climates are particularly vulnerable to the kinds of changes caused by humans. Water availability is the important variable for life on the ground. In semi-arid climates, a large change in rainfall actually has little effect on water availability. However, a large change in ground cover (trees) has a huge effect. Trees hold water, in and on themselves, but their biggest role is keeping soil in place. A tablespoon of healthy soil has the surface area of a football field, making soil one of the best ways to keep water in an ecosystem.

This is all from a very academic, but really fascinating, interdisciplinary book, “Water, Life, and Civilisation.” A group out of the University of Reading in the UK ran a multi-year project to reconstruct ancient climate and habits. They went across disciplines (meteorology, geology, archaeology, paleontology, biology, sociology, geography) and therefore methods: lit reviews and meta-analyses; digging and fieldwork (biological, cultural, building, rock, water, and cave samples); qualitative fieldwork; policy analysis; computer simulation; model fitting; GIS; carbon dating; isotope dating; and agricultural experiments. They even invented some new methods under the heading of archaeobotany. With these methods you gain amazing insight into the past. The authors can show how bad floods got, that wells dried up, that agriculture was adapted for dry vs. wet climates, and that populations swelled or dwindled.

Focusing on one site, Wadi Faynan in southern Jordan, they show high absorption of water by soil (“infiltration”), less runoff, and less evidence of floods during the early and middle Holocene (12–5 thousand years before present). “This hydrological regime would have provided an ideal niche for the development of early agriculture, providing a predictable, reliable, and perennial groundwater supply, augmented by gentle winter overbank flooding.” By contrast, “During the late Holocene (4–2 ka BP), the hydrology of the Wadi Faynan was similar to that of today, a consequence of reduced infiltration caused by industrial-scale deforestation to support metallurgical activity.”

They add,

A review of regional and local vegetation histories suggests that major landscape changes have occurred during the Holocene. There appears to be consensus that the early Holocene in the Levant was far more wooded than the present day (Rossignol-Strick, 1999; Roberts, 2002; Hunt et al., 2007), as a consequence of small human populations and prevailing warm, wet climates. Since mid-Holocene times, the combined trends of increasing aridity and human impact upon the landscape have combined to cause deforestation and erosion of soils. In Wadi Faynan, there is clear evidence that Classical period industrial activity would have played a significant role in this process. We propose that these changes would have greatly reduced infiltration rates in Wadi Faynan since the middle Holocene.

This chapter stood out for looking at how humans influenced climate, where all of the others focused on the equally important subject of how climate affected humans. But this was just one fascinating chapter of a fascinating book. A lot of the meteorology and geology was over my head, but using computer simulations calibrated on today and other knowns, and managing their unknowns cleverly, they got a computer model of ancient climate at the regional scale. Using that, they got various local models of ancient precipitation. They complemented that guesswork with fieldwork in which they used the sizes of surviving cisterns, dams, gutters, roofs, and other ancient evidence of water management to estimate the amount of rainfall, the extent of floods, the existence of this or that type of sophisticated irrigation, and other details at the intersection of hydrology, archaeology, and technology.

They learned how resource limits constrained human settlements by examining regional patterns in their placement: early and high settlements tended to be near springs, while later ones tend to be on the roads to larger cities. They used especially clever carbon and nitrogen dating to learn what the livestock were fed, what the humans were eating, and whether a given area had mostly desert or lush plants. They can prove, using differences in the bone composition of pigs and goats from the same period, that the two were raised on different diets. And with almost no evidence from surviving plants or surviving fields, they were still able to infer what plants were being cultivated, and by what kind of sophisticated agriculture. Every plant makes microscopic mineral crystals (phytoliths), and in arid environments these crystals are the same for plants grown yesterday and plants grown thousands of years ago. Because different plants grow crystals of different shapes, they were able to identify date palms 1,000 years before date palms were thought to have been domesticated.
The crystals also shed light on ancient irrigation technology. By growing some grain crops with different kinds of technology and examining the resulting crystals, they showed that the clumpy crystals they were finding in ancient sites could only have come from grain fields with sophisticated irrigation systems.

Altogether, I’m impressed by how much we can know about ancient life and climate when we combine the strengths of different disciplines. I’m also afraid. For me, the natural place to go from here is to Jared Diamond’s Collapse for more examples of how civilisations have followed the resources around the world and then burned them down, and for what we might be able to do about it.

The book was Water, Life, and Civilisation: Climate, Environment and Society in the Jordan Valley (Steven Mithen and Emily Black, Eds.), Cambridge University Press, International Hydrology Series. The chapter I focused on was number fifteen:
Sean Smith, Andrew Wade, Emily Black, David Brayshaw, Claire Rambeau, and Steven Mithen (2011) “From global climate change to local impact in Wadi Faynan, southern Jordan: ten millennia of human settlement in its hydrological context.”


Enfascination 2013

“Nothing in the world is more dangerous than sincere ignorance and conscientious stupidity.” Thus spoke Martin Luther King Jr. in a great endorsement for humility, curiosity, and discovery.

On Thinko de Mayo, from 1PM, you will have five minutes to help us see how dangerous we are. You may share anything at all during your five minutes, as long as you personally think it’s fascinating. Your goal is to transmit your sense of fascination to others. FB page: https://www.facebook.com/events/498466006869981/

If the constraints of themes help you brainstorm, try “Science towards nescience.” But generally, you should trust yourself. If you manage nothing more than five minutes of wobbling, inarticulate, ecstatic blubbering then Well Done: You have successfully expressed the unfathomable depth of your subject.

This is the ten-year anniversary of these lectures –– ten years since I attempted the world’s nerdiest 21st birthday kegger. This will be the fifth and probably last in Bloomington. Ask me for help if you’ll have slides or a demo.

Past topics have included:
Slide Rules, Counting the Permutations of Digit Strings, Conceptions of Time in History, Chili Peppers, How to cross a glacier, The Singularity, Indiana Jones, Rural desert water distribution systems, Hexaflexagons, Small precious things, Wilderness Camps as Commodity, DIY Cooking, Roman Emperor Deaths, Joy of Science, Salt, Three Great Banquets in Italian History, How to Sharpen a Chisel, Some Properties of Numbers in Base Ten, The Physiological Limits to Human Perception of Time, Geophagy, Pond Ecology, Superstition: For Fun and Profit, Counterintuitive Results in Hydrodynamics, The Wolof Conception of Time, Arctic String Figures, The Seven Axioms of Mathematics, Dr Seuss and his Impact on Contemporary Children’s Literature, Twee, Motorcycle Life and Culture, Cultural Differences Between Japan and the US, Brief history of the Jim Henson Company, Female Orgasm, Insider Trading: For Fun and Profit, Film of Peter Greenaway, A Typographical Incident with Implications for the Structure of Thought, Cooperative Birth Control, Tones in Mandarin, Unschooling and Deschooling, Q&A: Fine Beer, DIY Backpacking, Chinese Nationalism in Tibet, Biofuels, The Yeti, The Health Benefits of Squatting, The Big Bang, How to Pick Stocks Like a Pro, Food Preservation Technique, or Managing Rot, Infant Visual Perception, Demonstrations in Number Theory, Rangolis, Kolum, The Hollow Earth, Edible Mushrooms: For Fun and Profit, Human Asexuality, A History of the California Central Valley Watershed, An Account of the Maidu Creation, The Paleoclimatology of the Levant, Rural India, German Compound Words, Manipulating Children, Physics of Time, Animal Training on Humans, Constructed Languages, This Week’s Weather, The XYZs of Body Language, Light Filtration Through Orchards, Our Limits in Visualizing High Dimensional Spaces, Twin Studies.

Last year’s audio:
http://enfascination.com/weblog/archives/301
And video/notes from before that:
http://enfascination.com/wiki/index.php?title=Enfascination_2011#Enfascinations_Past

pow!
seth.

UPDATE post-party

Here is what happened:

  1. The Tiger Café by Ronak
  2. Jr. High School Poetry Slam by Lauren
  3. The “Border” language by Destin
  4. Perception/Objectivity by Paul Patton
  5. Readings from James Agee by Jillian
  6. “A signal detection theory of morality” or “The morality manatee” by Seth
  7. Dreams and the four candies by Danny
  8. Pick Two by Adam
  9. Trust and Trust Experiments by Jonathan

The fall of cybernetics in anthropology, with citations

I’m reading an ethnobotanical ethnography of the Huastec or “Teenek” Mayans. It’s a big fat impressive monograph published by Janis B. Alcorn in 1984. Here is a passage suggesting that cybernetics had come and gone from anthropology by 1980. The criticism focused on the restriction of early cybernetics modeling to closed systems. The attack is well-targeted and well-cited, pointing to a bunch of lit I hope to check out at some point.

Ethnobotanical interactions occur in an open dynamic ecosystem of natural and social components. The closed cybernetics systems once used to describe natural and social systems have been criticised as inadequate representations of reality (Bennett, 1976; Connell, 1978; Ellen, 1979; Friedman 1979; Futuyma, 1979; and others). Although feedback has an important stabilizing effect, other non-feedback factors operate to influence the Teenek ecosystem and the directions of its development. The friction of opposing tendencies and the introduction of new variables (themselves often the products of other internal and external processes) create a dynamic ecosystem in non-equilibrium, evolving in ways shaped by its past and its present. Less than optimal adaptations may exist because of quirks of history and available variability. But, at the very least, suboptimal adaptations are not so maladaptive as to become unbearable “load.” Evolution often proceeds along a path of trade-offs in the midst of conflict.

Besides pointing out that no useful model is an adequate representation of reality, I think it’s worth asserting that the closed systems of cybernetics were not an ideological commitment but an assumption of convenience that the founders hoped to be able to break one day. I’m really only speaking to the first sentence or two; I didn’t totally get the bridge from cybernetics to the picture of trade-offs. Of course my role isn’t to defend cybernetics, I’ve got my own problems with it. But I’m always interested in problems that others have faced with influential theories. Here are those citations in full:

  • Bennett, C. F. 1976. The Ecological Transition: Cultural Anthropology and Human Adaptation. Pergamon Press, New York.
  • Connell, J. H. 1978. Diversity in tropical rain forests and coral reefs. Science 199:1302–1310.
  • Ellen, R. F. 1979. Sago subsistence and the trade in spices. IN Burnham, P. and R. F. Ellen (eds.) Social and Ecological Systems, Academic Press, New York.
  • Friedman, J. 1979. Hegelian ecology. IN Burnham, P. and R. F. Ellen (eds.) Social and Ecological Systems, Academic Press, New York.
  • Futuyma, D. J. 1979. Evolutionary Biology. Sinauer Associates, Sunderland, Massachusetts.

As a bonus, here are some fun bits from the glossary:
boliim – a large (25 cm. x 25 cm. x 10 cm.) square tamale-like ceremonial food prepared by placing an entire uncooked chicken or turkey, large chunks of meat, or a pig’s head on a flattened piece of masa dough, dribbling a thickened red chili sauce over the meat, wrapping the dough around the meat, and then wrapping the whole thing in banana or Heliconia schiedeana leaves and steaming it in a large earthen vessel for several hours. (Boliim are referred to elsewhere as “large tamales”)
Boo’waat – transvestite male apparition.
ichich – illness caused by the heart of an older or more powerful person sapping strength from a more vulnerable heart, leaving the person weak; in infants characterized by green diarrhea.
theben – weasel who climbs under the clothing of a curer-to-be as he walks down a path, tickles him/her until he/she falls unconscious, and then piles shoots of medicinal plants around him/her.
tepa’ – a person who flies over long distances rapidly to steal from the rich, seen as a bright streak in the night sky.
te’eth k’al a iits’ – bitten by the moon; painful, swollen, purulent fingertips caused by pointing at the moon.
ts-itsiimbe – condition of suffering from an imposed spirit caused by spirit, human, or bird agent (usually following loss of patient’s own spirit); symptoms include midday drowsiness, poor appetite, and bad temper (occasionally equated with mestizo folk illnesses “tiricia,” “avecil,” or “mollera”)
walelaab – evil eye


Never too smart to be very wrong

A lot of my life choices and habits of thought have been devoted to never letting myself get permanently attached to something that’s wrong. That would be my hell, and I think that there’s always a risk of it. Somehow there is no being humble enough. As an exercise for myself, and as an illustration of the risks, I went on a hunt for examples of famous scientists who got stuck and went to their graves as the last major holdout for a dead, discredited theory. I figure I might learn some of the signs to watch for in myself.

It has been one of those things where you don’t fully understand what you’re looking for until you find it. The understanding happens in the process of sifting through lots of examples that you thought would fit and finding just one. Slightly different from what I described above –– the existential to my universal –– is the otherwise-incredible scientist who proposes a batshit theory that never catches on. There are lots of those, and they’re listed separately. I value them less because, well, I’m not sure. It probably has something to do with the subtle differences between superseded theories, pseudoscientific theories, fringe theories, and unscientific theories. [Ed. It took me a day, but I’m interested in the difference between attachment to a superseded theory and to a fringe theory. I’m focusing on the former, and I think it’s more dramatic.]

I found other side-categories over the course of refining my main list. There are enough Nobel Laureates going off the deep end that they get their own section. There are plenty of examples of experts adopting wacky views outside their area of expertise. I also eliminated lots of potentially great examples because the scientist’s wacky commitment was one that was reasonable to believe at the time –– take physicist Einstein’s discomfort with quantum mechanics, anatomist Paul Broca’s affection for phrenology, and evolutionist George Gaylord Simpson’s pretty violent and unreasonable dismissal of plate tectonics.

There are also people who flirted with a crazy idea but didn’t let it get the better of them and those who, while they believed crazy stuff, didn’t accomplish enough for me to say “this person is way way smarter than everyone I know.”

I did my best, and I learned a lot, but I couldn’t research all of these totally thoroughly. If I had any doubt about someone’s being in the “way too smart to be a paleo holdout” category then I put them in one of the less impressive lists.

The vast majority of these examples are from other people’s brains. The branches of the taxonomy were also influenced as much by people’s comments as by my own here-and-there experiences of dissatisfaction. Biggest thanks to Micah Josephy, Finn Brunton, Michael Bishop, all the people here, and at Less Wrong.

“I’m smart, but I will never stop believing in this wrong theory”

The most interesting cases are where a contested theory became consensus theory for all but a few otherwise thoughtful holdouts, like:

  • Astrophysicist Fred Hoyle who never accepted the Big Bang.
  • Biologist Alfred Russel Wallace who campaigned against vaccines
  • Physicist Heaviside against relativity.
  • Physicist Philipp Lenard against relativity, thanks to Nazi Deutsche Physik (Nobel).
  • Physicist Johannes Stark against relativity, also from Deutsche Physik (Nobel).
  • Physicist Nikola Tesla against relativity.
  • Tesla against other chunks of modern physics.
  • Chemist Joseph Priestley‘s sustained defense of phlogiston.
  • Statistician and biologist Sir Ronald Fisher‘s rejection of the theory that smoking causes lung cancer.
  • Physicist and early psychologist Ernst Mach‘s rejection of atoms! (and relativity). He was arguing for a very subjective philosophy of science well after Einstein’s pre-relativity work to confirm the kinetic theory of gases.
  • Biologist Peter Duesberg‘s denial that HIV causes AIDS, and his advocacy of alternative causes like drug use.
  • Biologist Trofim Lysenko‘s rejection of Mendelian inheritance, thanks to Michurinism, the Soviet Lamarckism.
  • Psychologist B. F. Skinner‘s rejection of the idea that humans have mental states (from his books, like About Behaviorism; this is cleverly falsified by Shepard and Metzler’s wonderful 1971 mental-rotation experiment).

Honorable mention

These people, despite their notability, didn’t make the list, either because they saw the light, because they weren’t a scientist, or because they are part of an ongoing controversy and might still redeem themselves. Erdős and Simpson make it because of how badly behaved they were for the short time before they realized they were wrong.

  • Mathematician Erdős and the simple, elegant Monty Hall problem. He was adamant about the solution until he was proven wrong. In fact, an embarrassing chunk of the professional mathematics community dismissed the woman who posed it until they were all proven wrong. Recounted in The Man Who Loved Only Numbers.
  • George Gaylord Simpson’s violent attacks on plate tectonics. Bad form Gaylord. He accepted it when it finally became consensus (p. 339 of this).
  • Florence Nightingale on miasma theory and always keeping the windows open in the hospital. She doesn’t make the list because she’s not really thought of as a scientist.
  • Psychologist Daryl Bem’s recent work on psi phenomena might count towards what I’m after, if the recent failures to reproduce it are definitive and Bem hasn’t recanted.
  • Recently, Luc Montagnier dabbling in homeopathy and wacky autism theories (Nobel mention).
  • Maybe this is too political of me, but I’m going to add Noam Chomsky’s rhetorical maneuvers to make his linguistic theories unfalsifiable.
  • Prosper-René Blondlot and N-rays. Thanks to Martin Gardner, he’s usually considered to have taken these to his grave. He was deceiving himself, but I’m guessing he probably recanted after the big embarrassment.

“My pet fringe theory”

There are lots of examples of an otherwise good scientist inventing some crackpot theory and swearing by it forever.

  • Linus Pauling on Vitamin C (that it prevents/cures cancer) (Nobel)
  • Linus Pauling on orthomolecular medicine (Nobel)
  • Similarly, Louis Ignarro on the positive effects of NO on your heart (Nobel)
  • Physicist Gurwitsch on biophotons
  • While working on radios, Marconi was apparently very predisposed to thinking he was talking to Martians
  • William Crookes on “radiant matter”
  • Ernst Haeckel’s pet continent Lemuria
  • Wilhelm Reich’s pet power Orgone
  • Tesla may have gone over the deep end for wireless energy transfer
  • Physicist Albert Crehore and the Crehore atom, recounted in Martin Gardner’s pretty purple book on fringe science
  • Biologist Alfred Russel Wallace’s all-out occultism
  • Nobel Laureate Brian D. Josephson, ESP and homeopathy and PK and cold fusion
  • Carl Reichenbach, chemist, and the Odic Force
  • Physicist Samuel T. Cohen, inventor of the neutron bomb, and of “red mercury” nukes

“Sure I considered and even experimented with this weird idea but I probably didn’t let it get the better of me”

Another less exciting category for people who redeemed and thus disqualified themselves from consideration above.

  • A lot of early 20th-century scientists on establishing supernatural and extrasensory powers, incl. Albert Einstein, William James, and many more.
  • Jagadish Chandra Bose on sensation/perception in plants and inorganic compounds
  • Maybe Thomas Gold and abiogenic petroleum

“I’m smart and I believed this crazy thing but back then everyone else did too, so no biggie”

These are just people who believed in theories that became superseded, and there are more examples than I could ever enumerate. These are just the ones I sifted through looking for better examples.

  • Anatomist Paul Broca and phrenology (covered in Martin Gardner’s Fads and Fallacies)
  • Isaac Newton and alchemy, the philosopher’s stone, and all kinds of other occult topics
  • Johann Joachim Becher and phlogiston
  • Einstein’s and Jaynes’ discomfort with QM
  • Astronomer Simon Newcomb was very skeptical that human flight would be possible, until it became possible. He was probably just being a good skeptic — after all, it is something people wanted to be true.
  • Michelson and aether. He accidentally disproved it and put lots of effort (too much?) into trying to show that his first experiment was wrong. Again, that’s maybe just good science.
  • Mendeleev’s coronium and the abiogenic theory of petroleum

“I’m not qualified to say so, but I’ll insist that this well-established thing in someone else’s field is a crock”

You’ll see that Nobel Prize winners are particularly susceptible.

  • Hoyle against the Archaeopteryx
  • Hoyle on microbes from space
  • Lord Kelvin on microbes from space
  • William Shockley and eugenics (Nobel)
  • James Watson and his wackinesses (Nobel)
  • Kary Mullis off the deep end (Nobel)
  • Nikolaas Tinbergen’s controversial approach to autism (Nobel)
  • Arthur Schawlow and autism (Nobel)
  • Physicist Ivar Giaever against climate change (Nobel)

“I’m utterly fringe or worse”

Again, more of these than could ever be listed. These are just the ones I sifted through while hunting for better examples.

  • Chandra Wickramasinghe carrying Hoyle’s panspermia flag
  • http://en.wikipedia.org/wiki/Chandra_Wickramasinghe
  • http://en.wikipedia.org/wiki/Masaru_Emoto
  • Andrew Wakefield and vaccines
  • Terence McKenna & timewave zero
  • Cleve Backster & primary perception
  • Franz Mesmer & animal magnetism

Recaps of the Nobel Prize winners

These are the best resources for learning about Nobel Prize winners going off the deep end:

  • http://rationalwiki.org/wiki/Nobel_disease
  • intelligent design specifically: http://www.uncommondescent.com/intelligent-design/seven-nobel-laureates-in-science-who-either-supported-intelligent-design-or-attacked-darwinian-evolution/
  • http://scienceblogs.com/insolence/2010/11/23/luc-montagnier-the-nobel-disease-strikes/
  • and two guys not on either source (thanks), Johannes Stark (the other Lenard), and Arthur Schawlow (autism)

Leads I would go to if I was looking for more examples, and also relevant or cool stuff

I’d love to continue to grow this manifest. Ideas welcome.

  • Many medical professionals and focal infection theory
  • Any big names that got caught up in polywater, cold fusion, and the hafnium bomb. I don’t know any.
  • http://en.wikipedia.org/wiki/Superseded_scientific_theories
  • http://en.wikipedia.org/wiki/Pathological_science
  • http://en.wikipedia.org/wiki/Fringe_science
  • http://en.wikipedia.org/wiki/List_of_topics_characterized_as_pseudoscience
  • http://en.wikipedia.org/wiki/Refrigerator_mother_theory
  • http://scienceblogs.com/insolence/2010/11/23/luc-montagnier-the-nobel-disease-strikes/
  • http://en.wikipedia.org/wiki/John_C._Lilly#Later_career
  • http://en.wikipedia.org/wiki/Philip_Henry_Gosse and Omphalos
  • Chalmers and Searle are dualists
  • The aiua of Leibniz
  • Barbara McClintock’s haters
  • http://en.wikipedia.org/wiki/Sharashka and
  • Kronecker against Cantor’s revolutionary approach to infinity

Ouroboros and the failures of complex systems

This is a little intense, it should be enough to just watch enough of the initial seconds to satisfy yourself that Ouroboros exists. I’d post a photo, but the photo I saw seemed photoshopped. That’s how I found the video.

A complex system has failed to integrate the proper information into its decision. I’d guess that the cause is a badly designed environment (what looks like a zoo enclosure) presenting an otherwise well-designed snake with exactly the wrong pattern of information. That said, the mere fact of the Ouroboros myth makes it conceivable that this can happen in the wild.

Was this a failure of information diffusion in a distributed local-information system? Or was it a failure of a properly informed top-down system suppressing the right information and amplifying the wrong information? We don’t know; we don’t really have the language to articulate that question in a way that lets it be answered. In that respect this is not just a failure of a complex system, but a failure of complex systems, the field.

The “Ant well” is less ambiguously a failure of a decentralized system. It happens in the wild, when the head of a column of army ants short circuits. Millions of ants start marching in circles until too many have died to close the circuit. And here is a magically functional decentralized system. What does decentralized mean here? Does it mean the same thing in all three examples? How is it different from bottom-up, feedback, distributed, local, networked, hierarchical, modular, or any other concept? We’re still working on that. At least there’s more video than vocabulary out there.


Undrugs: Sugar pill may work even when you know it’s sugar pill

You’re sick? Here’s a sugar pill. We know that it can’t work. Take it anyway. You’ll feel better.

Introduced starting at 9:54. I think the interview is boring before then; he rambles.

My crush on the placebo effect started at Berkeley in Prof. Presti’s molecular neurobiology course. He introduced us to a very carefully controlled study showing that naloxone, a drug that can stop opiate overdoses, can also block placebo pain relief. That’s a big deal. It can take relief that you felt only because you thought you were getting a painkiller, and make it disappear. The placebo effect is not just psychological, it’s chemical, and it can be influenced by chemistry. That means we can harness it.

I was so addicted to the placebo effect that I started collecting “the last week” of pills from all of my friends on birth control. I quickly amassed hundreds of sugar pills, an impressive drug collection even by Berkeley standards, even more impressive for its mystical power over the psyche. If I thought I was getting sick, I would take one so I could think I was getting better. And it really did always make me feel great, at least while telling that joke.

We don’t understand the mind, the brain, or the relationship between them. That’s true even though we have the perfect tool: drugs. Understanding consciousness will mean being able to describe mental states in chemical terms. Drugs change chemistry and cause predictable changes in mental states. They are the reason we know anything at all about the biological basis of consciousness. Of course, what we know is very little, mainly that it’s very complicated. The placebo effect is my favorite example: I described the effect of drugs as one-directional, “drug -> brain chemistry -> mental states.” But the placebo effect seems to turn that chain on end: “sugar pill -> mental states -> chemistry.”


Rock-Paper-Scissors ms. on Smithsonian, NBC, and Science Daily blogs.

This is my first (inter)national press, and I’m a little embarrassed to feel excited about it. It’s also a pleasant surprise; I wouldn’t have expected the work to have general appeal.

Links:


Enfascination 2012 number 2 at the Complex Systems Summer School in Santa Fe, NM

I spent the summer of 2012 with fascinating people. Seeing only their talent as scientists, I thought I knew how fascinating they were. But this short-notice series of short talks revealed their depth. There is no record of the proceedings, only the program:

SFI CSSS Enfascination, for we must stop at nothing to start at everything:
Priya on Symmetries
Kyle on the adversarial paradigm
Drew on the history of espionage in Santa Fe
Tom’s song, the Power Law Blues
Seth on keiteki rio
“Yeats on robots sailing to Byzantium” by Chloe
Christa and her Feet
Xin on Disasters
“Kasparo, A robotics opera” by Katrien
Jasmeen on post-war Polish poetry
Keith on voting
Madeleine’s “Paradoxes of modern agriculture”
Sandro singing “El Piscatore”
Robert on audio illusions, specifically Shepard tones and the McGurk effect
Isaac on biblical Isaac
Miguel on the diversity of an unpronounceably beautiful variety of sea creature
Nick on mechanical turk
Georg’s poetry


Enfascination 2012 audio


Some things take time, but it only takes an instant to realize that you have no idea what’s going on. This epiphany, every time it happens, is punctuated by the sound of 500 stars around the universe literally exploding, dissolving their planets and neighbors in flaming atoms, in silence. It happens every instant, forever. As right as you were, it’s impossible for you to know how right.

The 2012 program from May 5, 2012, featuring:

  • “Hoosier Talkin’,” Sarah on the southern Indiana dialect
  • “A brief history of Western art” by Eran
  • “Introduction to conducting” by Greg
  • “Infant perception” by Lisa
  • Poems read by Jillian
  • “The paleoclimatology of the Levant” by Seth
  • “Tweepop” by Robert
  • “Direct perception” by Paul Patton
  • “Slide rules” by Ben

Come Fall 2013, I’m working for Disney Research in Zurich

They don’t currently do social science, but they’ve gotten a taste of what it can do and where it can go. They’ve hired me to help launch an interdisciplinary behavioral research agenda — economics, sociology, psychology — lab experiments, web experiments, simulations, and big data. I don’t know what to expect, but I believe it’s in line with my goals for myself, and I’m excited and grateful.

About

This entry was posted on Thursday, February 21st, 2013 and is filed under science, updates.


Seeing the Earth, in the sky, from Earth

Uncountably many photons have come from the sun, bounced off of me, and shot back into space. One day one of them is going to come back. Photons turn as they pass heavy things. A photon retreating from me is being turned, slowly, over billions of empty years, all the way around. A black hole can turn them around in one shot. Ancient photons are returning simultaneously, from all over, right now.
What it means is that we can see ourselves in the sky. At least one of those dots is the Earth in the past. If we manage to see it at all, we won’t start out seeing much more than a fuzzy dot, “Yep, there it is.” But there could be thousands or millions of earths in the sky. Each fragile broken circuit of light is a channel, or rather a mirror, showing the earth as it was or wasn’t 1, 2, 5, 8 billion years ago. Between them, you have the entire history of the earth being projected back to it at each moment.
The most interesting action is in the past millions and thousands of years. To open up the Earth’s human past we would need a black hole very close, within a few thousand light years, like V4641 Sgr, 1600 light years away. I want to watch the decline of Rome. Going further back, I want to see the earthquake that split the temple curtain. And I want to look in the sky and see an ancestor’s eyes as they look up to God. Not to be God, but to make eye contact full of love, and excitement, and no answers.


“In the days of the frost seek a minor sun”


From unsympathetic eyes, no science is more arrogant than astronomy. Astronomers think that we can know the universe and replace the dreams and the meaning in the skies with a cold place that is constantly dying.
But I think that there is no more humble science than astronomy. No science has had so much romance imposed on it by the things that we want to be true, no other science has found a starker reality, and no other science has submitted so thoroughly. They’ve been so pummelled by what they’ve seen that they will believe absolutely anything that makes the equations balance out. As the wild story currently goes, the universe is growing at an accelerating rate because invisible matter woven into the universe is pulling the stars from each other. It’s hard to swallow, and we don’t appreciate how astronomers struggled to face that story. They’ve accepted that the universe has no regard for our sense of sensibility, and they are finally along for the ride. I wish it were me; I want to see how much I’m missing by thinking I understand.


In PLOS ONE: Cyclic dynamics driven by iterated reasoning

This paper, published with my advisor Rob Goldstone, reports a major result of my dissertation, that people can flock not only physically, but also in their depth of iterated reasoning through each other’s motives. It is interesting because of the many economists who hoped that type of reasoning would prevent flocking. Ha!

* Here is the paper: http://dx.plos.org/10.1371/journal.pone.0056416
which follows the preprint

* One-minute video of this emergent cyclical behavior: http://vimeo.com/50459678

* Three-minute video explaining it in terms of the movie The Princess Bride: http://posterhall.org/igert2012/posters/218

* And here is a press release draft to give you a sense of it:

Rock-Paper-Scissors reveals herd behavior in logical reasoning

“Poor Bart, always picks Rock.” In these telling words from Lisa Simpson, we see Rock-Paper-Scissors as a game of mind reading. Scientists have already used Rock-Paper-Scissors to study how we cooperate, to show that we are bad randomizers, and to build AIs that can beat us at our own game. But this simple game has many more tricks up its sleeve. Rock-Paper-Scissors gives us the ideal case study for herd behavior in higher-level reasoning: specifically, thoughts about the thoughts of others. You would like to think that your thoughts are your own, but recent work from the Indiana University Cognitive Science program shows that people playing Rock-Paper-Scissors subtly influence each other, converging on similar ways of reasoning over time. The natural analogy is to a flock of birds veering in concert.

In work appearing in PLoS ONE (XXX), Seth Frey and Robert L. Goldstone introduce a version of Rock-Paper-Scissors called the Mod Game. In each round, they gave IU psychology undergraduates a choice between the numbers 1 through 24. Participants earned money for picking a number exactly one greater than someone else, but the choices wrapped around in a circle so that 1 beat 24 (just as Ace beats King in card games). Participants just had to anticipate what others were going to pick, and pick the next number up — keeping in mind that everyone else was thinking the same thing. In this game of one-upmanship, the best performers aren’t the ones who think the most steps ahead, but the ones who think just the right number of steps ahead — about two, as it turned out in the experiment.
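The wrap-around payoff rule is easy to state precisely in code. Here is a minimal sketch, not the authors’ implementation; the per-round scoring (one point for each other player you are exactly one step ahead of) is my simplifying assumption.

```python
N_CHOICES = 24  # choices 1..24 arranged in a circle, so 1 beats 24

def beats(a: int, b: int) -> bool:
    """True if choice `a` is exactly one step ahead of `b` on the circle."""
    return (a - b) % N_CHOICES == 1

def round_payoffs(choices):
    """One point for each other player you are one step ahead of
    (a simplifying assumption about the per-round scoring)."""
    return [sum(beats(c, other) for j, other in enumerate(choices) if j != i)
            for i, c in enumerate(choices)]

print(beats(1, 24))                  # wrap-around: True
print(round_payoffs([3, 2, 2, 17]))  # [2, 0, 0, 0]
```

The modular arithmetic in `beats` is the whole trick: it makes the choice space circular, which is what lets groups orbit it indefinitely.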

Many economists predict that with enough experience, people should be able to think infinite steps ahead, or at least that their number of steps should increase dramatically over time. But this isn’t what happened in the Mod Game. Instead, when participants were shown each previous round’s results, they tended to cluster in one part of the circle of choices and start bounding around it in synch. Groups produced a compelling periodic orbit around the choices, reminiscent of the cultural pendulum swinging back and forth, bringing, say, moustaches in and out of fashion. Interestingly, the cycling behavior consistently got faster with time. This means that people did learn to think further ahead with time — the economic prediction was partly correct — but the increase was much less dramatic than it ought to have been: after 200 rounds of the Mod Game, the average number of thinking steps increased by only half a step, from 2 to 2.5. Moreover, herding in this game benefited everyone; a tighter grouping of choices means a higher density of money to be earned in each round.

What does all this mean for society? Typical treatments of higher-level reasoning look to it as preventing herd behavior, but we can now see it as a source. Anticipation may be the motor that keeps fads running in circles. It could be a source of the violent swings that we see in financial markets. And if you’ve ever been in a bidding war on Ebay, you may have been caught in this dynamic yourself. If every bidder is tweaking their increasing bids based on the tweaks of others, then the whole group may converge in price and in how those prices rise. The process isn’t governed by the intrinsic value of that mint Star Wars lunch box you’re fighting for, but on the collective dynamics of people trying to reason through each other’s thoughts. Whether looking at benign social habits or mass panics, social theorists have always treated human herd behavior as though it resulted from mindlessness. But this simple lesson from Rock-Paper-Scissors suggests that even the most sophisticated reasoning processes may be drawn about by the subtle influence of social interaction.


What big titty b****** taught me about institution design

In institutional economics, there are four main kinds of resource, classified by whether they are limited (yes or no) and whether you can keep others from using them (yes or no). Now everyone who uses these categories knows that they are fuzzy, and full of exceptions. They can vary in degree, by context, and in time. WiFi gives us a beautiful example of how technology (and defaults) can change the nature of a resource. These days, early 2013, wireless routers come password-protected out of the box, and they come initialized with unique hard-to-crack passwords. That wasn’t the case in the early 2000s, when routers either came unlocked by default or locked with an easy-to-find default password. In those days, wifi was a common-pool resource in that it was limited (only so much bandwidth) and you couldn’t keep others out of it by default. You needed special knowledge to create a proper password and turn your wireless into the private good (still limited, but excludable) that you get out of the box today.
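The 2x2 typology behind this paragraph (Ostrom’s classic classification of goods) fits in a few lines. The labels and examples in this sketch are my own illustration, not from the post:

```python
# The standard institutional-economics typology of goods, keyed on whether
# use depletes the resource (subtractable/limited) and whether others can
# be kept from using it (excludable). Examples are illustrative.

def classify(subtractable: bool, excludable: bool) -> str:
    """Classify a resource into one of the four standard types of good."""
    return {
        (True, True): "private good",           # e.g. password-protected WiFi
        (True, False): "common-pool resource",  # e.g. open WiFi circa 2003
        (False, True): "club good",             # e.g. cable TV
        (False, False): "public good",          # e.g. a broadcast SSID
    }[(subtractable, excludable)]

print(classify(True, False))   # open WiFi bandwidth -> "common-pool resource"
print(classify(True, True))    # locked-by-default WiFi -> "private good"
```

The WiFi story in the paragraph above is just the first tuple flipping its second element: the factory default moved from `(True, False)` to `(True, True)`.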

The point about technology has been made. Governing the Commons contains a history of roundups in the Western US, showing how the invention of barbed wire turned the large cattle herds from a managed common-pool resource into a private (excludable) good. The WiFi example adds the influence of defaults, which makes it a bit more interesting, since we see a case in which the flip of a switch can change the nature of a good, and we see how, given the choice, society has chosen private property over common property over the past ten years.

But there is another facet to the WiFi resource. Another feature that comes default is the broadcast SSID, or the name of your wifi. These are often informative, but they can also be impressively inappropriate. Trying to steal wireless on the road, you can be driving around a beautiful peaceful thoroughly-family-looking neighborhood and stumble upon all kinds of sinister things in the air.

What kind of resource is the NSFW SSID? Well, let’s be square and say that it’s a bad rather than a good. It’s non-subtractable because, unlike bandwidth, my reading it doesn’t interfere with your reading it. It’s common. By all that, NSFW SSIDs are a public bad, pollution. And what is interesting about all this is that a resource can be anything; even the name of an interesting resource can be an interesting resource, one that gets managed by norms and rules, and one that channels all the complexity of human society.


Postdoc ergo propter doc

People imagine that experts know lots of things. I mean, it’s true, but that’s like saying the ocean is full of sand. The ocean, as full of sand as it is, is more full of questions.

I think we all miss the point of expertise a little, but experts are the farthest off. I’m on the path to becoming an expert myself. When it happens, I’ll do my part to disappoint the people who expect answers. I’d sooner disappoint them than not. I think the cleanest pursuit of science is the pursuit of feeling small. Maybe it sounds depressing to have only this defiantly inadequate expertise, but it beats the alternative.

About

This entry was posted on Saturday, February 2nd, 2013 and is filed under nescience, science.


Percentile listings for ten Go and Chess Federations and their systems

I spent way too long trying to find percentile ranks for FIDE Elo scores (international professional chess players). Percentiles exist for USCF (USA-ranked chess players; http://archive.uschess.org/ratings/ratedist.php) but not for FIDE, which is a different system, and worth knowing, and worth being able to map. So I just did it myself. In the process I got percentile equivalences for many other systems and game federations. I used this data: http://ratings.fide.com/download.phtml and got the percentiles in the far right hand column.

Disclaimer: I pulled some tricks; this is all approximate; there are translations of translations of equivalences, but this is what we’ve got. Everyone who has pulled any of these numbers knows that they don’t really mean what they say as precisely as they aspire to mean what they say. Also, don’t interpret these as equivalences; for example, FIDE is more professional than USCF, so the worst players in it are way, way better than the worst in USCF.


| Percentile | AGA | KGS | USCF | EGF | UCSF2 | EGF kyu/dan | Korean kyu/dan | Japan kyu/dan | A(ussie)CF | FIDE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1% | -34.61 | -24.26 | 444 | 100 | | 20 k | 22 k | 17+ k | 100 | 1319 |
| 2% | -32.58 | -22.3 | 531 | 100 | | 20 k | 22 k | 17+ k | 200 | 1385 |
| 5% | -27.69 | -19.2 | 663 | 153 | 100 | 20 k | 22 k | 17 k | 300 | 1494 |
| 10% | -23.47 | -15.36 | 793 | 456 | | 16 k | 18 k | 13 k | 600 | 1596 |
| 20% | -18.54 | -11.26 | 964 | 953 | 500 | 12 k | 13 k | 9 k | 900 | 1723 |
| 30% | -13.91 | -8.94 | 1122 | 1200 | | 9 k | 10 k | 6 k | 1100 | 1815 |
| 40% | -9.9 | -7.18 | 1269 | 1387 | | 7 k | 8 k | 4 k | 1300 | 1890 |
| 50% | -7.1 | -5.65 | 1411 | 1557 | 1000 | 6 k | 7 k | 3 k | 1400 | 1958 |
| 60% | -4.59 | -4.19 | 1538 | 1709 | | 4 k | 5 k | 1 k | 1500 | 2021 |
| 70% | -1.85 | -2.73 | 1667 | 1884 | | 3 k | 4 k | 1 d | 1600 | 2081 |
| 80% | 2.1 | -1.28 | 1807 | 2039 | 1500 | 1 k | 2 k | 3 d | 1800 | 2147 |
| 90% | 4.71 | 2.52 | 1990 | 2217 | 1800 | 2 d | 1 d | 4 d | 1900 | 2236 |
| 95% | 6.12 | 3.88 | 2124 | 2339 | 1900 | 3 d | 2 d | 5 d | 2100 | 2308 |
| 98% | 7.41 | 5.29 | 2265 | 2460 | 2100 | 4 d | 3 d | 5 d | 2200 | 2398 |
| 99% | 8.15 | 6.09 | 2357 | 2536 | 2200 | 5 d | 4 d | 6 d | 2300 | 2454 |
| 99.50% | 8.7 | 7.2 | 2470 | 2604 | 2300 | 6 d | 5 d | 6 d | 2400 | 2516 |
| 99.90% | 9.64 | pro | 2643 | 2747 | 2500 | 3p | | | 2500 | 2625 |
| top | 10.12 | 9p | 2789 | 2809 | 2700 | 5p | | | | |
| source | 1 | 1 | 1 | 1 | 2 | 3 | 4 | 4 | 5 | me, with 6 |
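If you want a percentile for a FIDE rating that falls between the rows above, piecewise-linear interpolation over the FIDE column is the obvious hack. A sketch, with all the same disclaimers as the table itself:

```python
# Percentile-vs-FIDE pairs taken from the table above (approximate!).
FIDE_PERCENTILES = [
    (1, 1319), (2, 1385), (5, 1494), (10, 1596), (20, 1723), (30, 1815),
    (40, 1890), (50, 1958), (60, 2021), (70, 2081), (80, 2147), (90, 2236),
    (95, 2308), (98, 2398), (99, 2454), (99.5, 2516), (99.9, 2625),
]

def fide_percentile(rating: float) -> float:
    """Estimate the percentile of a FIDE rating by linear interpolation."""
    if rating <= FIDE_PERCENTILES[0][1]:
        return FIDE_PERCENTILES[0][0]
    if rating >= FIDE_PERCENTILES[-1][1]:
        return FIDE_PERCENTILES[-1][0]
    for (p0, r0), (p1, r1) in zip(FIDE_PERCENTILES, FIDE_PERCENTILES[1:]):
        if r0 <= rating <= r1:
            return p0 + (p1 - p0) * (rating - r0) / (r1 - r0)

print(fide_percentile(1958))  # -> 50.0
```

Outside the tabulated range it just clamps to the first or last percentile, which is as honest as this data allows.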

All useful links while I was doing this:

  • http://senseis.xmp.net/?FIDETitlesAndEGFGoRatings
  • http://senseis.xmp.net/?RatingHistogramComparisons
  • http://senseis.xmp.net/?EloRating
  • http://senseis.xmp.net/?GoR
  • http://senseis.xmp.net/?topic=2550 (very bottom)
  • http://en.wikipedia.org/wiki/Go_ranks_and_ratings
  • http://www.europeangodatabase.eu/EGD/EGF_rating_system.php
  • http://ratings.fide.com/download.phtml
  • http://senseis.xmp.net/?RankWorldwideComparison

Another note: Elo is a “rating,” while dan/kyu is a “ranking.”


Enfascination 2012

Some things take time, but it only takes an instant to realize that you have no idea what’s going on. This epiphany—every time it happens—is punctuated by the sound of 500 stars around the universe literally exploding, dissolving their planets and neighbors in flaming atoms, in silence. It happens every instant, forever. As right as you were, it’s impossible for you to know how right.

Enfascination is a very tiny event that celebrates the act of being caught. You have five minutes to share something that you think is fascinating—that’s the only rule. You will find that the people you are sharing with are fascinated too, and you will be caught by things you’ve never thought to catch.

The 2012 Enfascination Lectures
Why: I would love for you to share.
When: Saturday, May 5th, or “Thinko de Mayo,” starting at, say, 5PM.
Where: Probably in the basement of Woodburn Hall, on the IU campus
Really?: Probably, maybe not. I just made this all up now so times and places can change. Check this webpage for updates.

This year’s occasion is my 30th birthday, but this is the ninth year that I’ve been hosting this birthday lecture series. Past topics have included Counting the Permutations of Digit Strings, Conceptions of Time in History, Chili Peppers, How to cross a glacier, The Singularity, Indiana Jones, Rural desert water distribution systems, Hexaflexagons, Small precious things, Wilderness Camps as Commodity, DIY Cooking, Roman Emperor Deaths, Joy of Science, Salt, Three Great Banquets in Italian History, How to Sharpen a Chisel, Some Properties of Numbers in Base Ten, The Physiological Limits to Human Perception of Time, Geophagy, Pond Ecology, Superstition: For Fun and Profit, Counterintuitive Results in Hydrodynamics, The Wolof Conception of Time, Arctic String Figures, The Seven Axioms of Mathematics, Dr Seuss and his Impact on Contemporary Children’s Literature, Motorcycle Life and Culture, Cultural Differences Between Japan and the US, Brief history of the Jim Henson Company, Female Orgasm, Insider Trading: For Fun and Profit, Film of Peter Greenaway, A Typographical Incident with Implications for the Structure of Thought, Cooperative Birth Control, Tones in Mandarin, Unschooling and Deschooling, Q&A: Fine Beer, DIY Backpacking, Chinese Nationalism in Tibet, Biofuels, The Yeti, The Health Benefits of Squatting, The Big Bang, How to Pick Stocks Like a Pro, Food Preservation Technique, or Managing Rot, Demonstrations in Number Theory, Rangolis, Kolum, The Hollow Earth, Edible Mushrooms: For Fun and Profit, Human Asexuality, A History of the California Central Valley Watershed, An Account of the Maidu Creation, Rural India, German Compound Words, Manipulating Children, Physics of Time, Animal Training on Humans, Constructed Languages, This Week’s Weather, The XYZs of Body Language, Light Filtration Through Orchards, Our Limits in Visualizing High Dimensional Spaces, Twin Studies. There is video for some of it, notes for others, collected here.

see you there,
seth.


Difficulties replicating Kashtan & Alon (2005)

I love the paper; it’s about the evolution of neural structure. Do brains have parts? Do bodies have parts? If you think so, you’re very forward thinking, because science has no idea how that could possibly have evolved. Kashtan and Alon published a mechanism for the evolution of structure. They proposed that if environments have modular structure then things that evolve in them will as well. Or something like that.
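The flavor of their “modularly varying goals” idea can be sketched in a toy form: evolve under a goal that switches periodically between objectives that share sub-goals. This is not their network model; the bit-string genome, fitness weights, and parameters are my own illustrative assumptions.

```python
import random

# Toy sketch of modularly varying goals: a hill climber whose goal
# alternates between two objectives that reuse the same "modules".
random.seed(0)
N = 16  # genome length; first half is "module A", second half "module B"

def fitness(genome, goal):
    a = sum(genome[:N // 2])  # module A score
    b = sum(genome[N // 2:])  # module B score
    # Both goals reuse the same modules, just weighted differently.
    return 2 * a + b if goal == 1 else a + 2 * b

def evolve(epoch=20, generations=300):
    start = [random.randint(0, 1) for _ in range(N)]
    genome = start[:]
    for g in range(generations):
        goal = 1 if (g // epoch) % 2 == 0 else 2  # goal switches every `epoch`
        mutant = genome[:]
        mutant[random.randrange(N)] ^= 1  # flip one random bit
        if fitness(mutant, goal) >= fitness(genome, goal):
            genome = mutant  # hill climb on whichever goal is current
    return start, genome

start, final = evolve()
print(sum(start), sum(final))  # ones only accumulate: both goals reward both modules
```

In this toy, the shared sub-goals mean the switching environment never undoes progress on either module; the interesting (and, per the paragraph above, contested) claim in the real paper is that such switching makes modular structure itself emerge.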

I had trouble replicating their result. By the time I did, I had lost all faith in it. There are some tricks to make the effect seem bigger than it is, and there might be some confounds, though I stopped short of proving it. I’ve got a proposal all written up, but I changed disciplines before I could implement it. I’m not the only one who couldn’t replicate it — I’ve met others who had the same problem.

I still love that paper, but I personally believe that the mystery of evolved structure is more unsolved than we think.


Grad school can make you smarter?

I really didn’t think I would come out of graduate school as a smarter person. I knew that I would know more about stuff, but I assumed, if anything, I would come out constrained by some understanding of how epiphany “should” happen. But I had a funny experience playing Minesweeper yesterday. It was a lapse: in high school I played 4–6 hours a day. It was the first thing I was ever good at. Even though my behavior back then was addictive, I credit Minesweeper with giving me experiences of life that have been indelible. That probably sounds crazy, but I found my first glimpse of self-worth in being good at Minesweeper. And since it is a talent that no normal person would value, I recognized immediately that self-worth was not a thing that has to be connected to what others think. It sounds obvious, but it was big and it changed me completely. I quit playing the game some time in there (around the time that my friend Sudano became way better than me–another valuable experience) and in the decade since I’ve picked it up for maybe a few days every year or so.

Every return to the game has made me feel good and familiar. I’ve recognized every time that if I invested the time I could get as good as I once was (the game is not very physical), and each time I’ve recognized as quickly that I don’t want that. The annual moment of weakness returned two days ago when I started playing a Minesweeper clone instead of reading papers. I only put in an hour, and I was as slow a player as ever, but the experience of playing had changed. I was seeing the game in a way that I never had before. I could recognize, with the consistency of habit, the irrelevance of my old approach to the game. The number-patterns are all the same, but patterns are just the beginning of Minesweeper. Two humps that I never even recognized before were a habitual hesitation before taking necessary risks and an attachment to the visual patterns made available by certainty. On Wednesday I saw the humps clearly, over my shoulder.

It can be really depressing with people, but there are some ways that it is great to interact with a thing that is exactly the same ten years later. Playing Minesweeper gave me an opportunity to measure myself in a very clean way, and it gave me a surprise. Honestly, I don’t really believe that the training I’m receiving in graduate school made me better at Minesweeper. Between challenges at school, at home, and in a relationship, I’m a very different person than I was a year ago. I still can’t describe-in-words any of the changes I feel, but I know I have some expectation of what the changes must have been because of how surprised I was to find “Better at Minesweeper” among them.

There was another time in my life when I was entirely devoted to learning how to draw. I was drawing at least four hours a day for a month. Every week or so I would run my work by an artist in town. On day 1, I was OK. Between day 1 and day 14 I got better. Between day 14 and day 30, I got worse. I had an urgent sense of time, so it was depressing to realize that I had learned to become worse; I didn’t draw at all for the next 30 days. But during that time I discovered the amazing complement to getting-worse-by-doing. I could tell by the way I was physically looking at objects that I was, in those moments, getting better at drawing (drawing is about seeing). Here is a great example of giving too much power to a person that isn’t ready for it: Take someone with an unhealthy commitment to productivity and show them that it is possible to get better at something by not doing it. Instead of accepting that rest and relaxation are a part of growth, I indulged the mystical realization that by doing nothing I could become good at Everything. It was a good time, only in part because it was grounded in the absurd.

Through all of it there is a me in a world putting meaning on things and feeling. I like the idea that I’m currently doing and learning everything. It isn’t just an appreciation that everything-affects-everything; I know the initial conditions are sensitive to me, that I can flap in hurricanes, but there is more. I cherish the invisible decrement to my ambition when a close friend does something that I have always wanted to do. I suddenly don’t need it as much anymore–vicarious experience is experience enough if you use a capital V. And suddenly, again, I’m presently doing nearly everything in the world, merely by caring about people.

What does it mean when the things you believe make you feel gigantic, but the corresponding growth they imply for the world makes you net invisible? The unfolding powers of ten leave enough room for meaning and meaninglessness to coexist, and they make it natural to feel good, busy, tiny, and lost all at the same time. The only real danger in being a busybody is forgetting that it’s silly. I’m totally content to be a silly creature imagining itself to be doing and learning everything. In fact, I’m thrilled.

About

This entry was posted on Friday, March 30th, 2012 and is filed under nescience, science.


What it means to know things about early Christianity

I’ve been reading a lot about the history of early Christianity, and a lot of the theories and ideas that define it. A lot of the scholarship is totally wild, and a lot is pretty sound; some is both, but it’s all confusing, because these things get mixed together indiscriminately. It motivated me to create a taxonomy of “knowability” for theories about Christ and early Christianity. The taxonomy allowed me to craft a test by which I judge whether a theory is worth taking seriously. For me to take a Bible theory seriously, it has to have more evidence than the suspicious theory that Jesus was a hypocrite and demagogue.

First, the taxonomy. It isn’t exactly a scale, and there is room for overlap and grey. It is still loose enough that two people could put the same theory into the pragmatic or reach categories, so this is currently only a personal taxonomy for establishing one’s own sense or the sense of a community that shares one’s assumptions.

  • Universally know: Assert the truth of. The existence of this type of knowing is justified by faith and only faith. The type of knowledge that good Christians hold for the existence of Christ and God.
  • Humanly know: know as well as it’s possible to know something (that I’m standing on a floor and it’s not demons). Beyond reasonable doubt. It can be proven wrong. The existence of Pilate, and of Jews and early Christians in the first century A.D. Probably the existence of Paul. Herod killing all those kids around 0 A.D.
  • Functionally know: Whether or not the theory is completely satisfying, you can’t imagine an alternative. Not necessarily a failure of imagination; often any competing theory that accounts for the evidence is much more complicated. Existence of the apostles and maybe Paul. The books of the Torah existed around 0 A.D., and people in the Levant often knew someone who had actually read them. They were acquainted with the lore of those books.
  • Pragmatically know: Probably the best theory. Alternative theories could be maintained by a reasonable person, even the same person—there is still reasonable doubt. Every physicist knows that Newton’s billiard ball mechanics is “wrong,” but indistinguishable from the truth in an impressively wide range of problems. Existence of a Yeshua from Nazareth. Existence of Q document.
  • Reach: Theory could of course be true, but no more plausible than its opposite. Still, one may be more accepted than the other for historical reasons. Birth of Jesus Christ in Bethlehem and then to and from Egypt—could as easily have been an ad hoc fabrication to satisfy prophecies in Isaiah. I’m putting here everything else that was prophesied by Isaiah, because these are things that people at the time wanted to be true: a Christ will come, he will be killed, resurrected, and seen, virgin birth/Immaculate Conception, and he will perform miraculous healings (which have really gone out of fashion in modern Christianity).
  • Fringe: Theory could be true, other reasonable theories are more supported, or better supported. Existence of secret gospels from the first century.
  • Spurious: Fundamentally not knowable except in less than the Pragmatic sense. More specifically, not knowable given current knowledge, and possibly future knowledge. Things prophesied by Isaiah, the existence of secret gospels from the first century. Armageddon happened way back in the first or second century A.D. Armageddon will happen. Armageddon won’t happen. Mary M. and Jesus were doing it. Mary M. was an Apostle.
  • Wrong (Know not): theory has been falsified. That is, it could always wriggle its way to being true, but there exists current evidence on the subject (itself impressive when it comes to the history of early Christianity), and that evidence speaks against the thing. Infancy gospels were almost certainly not written before 200 or 300AD.

I’ll only warily assert anything into the faith type of knowing, and “beyond a reasonable doubt” is a luxury reserved for very few aspects of Biblical history. In general, I’m wary to assume that I know anything with more certainty than I know that I’ve got two feet on the ground, and even that is fair to call suspect. Going down the ladder, none of the theories I’m willing to work with can really be proven false, so I’m lowering the bar; falsifiability is too strict a standard for ancient history. Even without it, historians can establish things that are worth trying to establish. So how far down should I go?

hyperhypocrisy

Now that we’ve got a scale of knowing things about the history of early Christianity, I’m going to be the devil’s advocate and pose a reach/fringe theory that Jesus was a demagogue and a hypocrite. Its purpose is to serve as a criterion for judging other theories, and for establishing the legitimacy (in my eyes) of theories of ancient history. I’ll consider your theory if it is more plausible than the theory that Jesus was merely a human demagogue.

Here is the theory: Demagogues are people who preach a populist message, often to the poor, while themselves living within the means that they criticize.* Demagogues happen. People want the supernatural, and a demagogue can convey that without doing anything impossible. Here is the case that Jesus was living large, using only evidence from the Gospels, the most legitimate accounts of the Life of Christ: getting his hair perfumed, breaking Sabbath by not fasting, the thousands of loaves, the parable for rich people. From this theory it makes sense that he would say he isn’t having wine tomorrow night, and it explains the doting entourages that fetched him a donkey and presented lepers and blind people to him.

This theory is reach/fringe, but it errs on the side of pragmatic. It obviously has lots of problems as a theory, but that’s the point. I think that a more sympathetic read is at least as plausible, but also that a reasonable person could believe all of this.

Whether it is right or wrong is irrelevant. It is at least as true as the New Testament case against (for example) homosexuality *. Things I’m willing to work with: I think Q passes the test, also the existences of Herod and Pilate *, even the existence of Godfearers.

These theories that I’m willing to work with are above the border between reach/fringe and pragmatic. That’s the line I’ve drawn in helping myself know what I think.


Political use of the rhetoric of complex systems

I’m excited about the field called “complex systems” because it reflects the best of science’s inherent humility: everything affects everything, and we oughtn’t pretend that we know what we’re doing. I think of that as a responsible perspective, and I think it protects science from being abused (or being an abuser) in the sociopolitical sphere. So imagine my surprise to discover that the “everything affects everything” rhetoric of complex systems, ecology, and cybernetics was leveraged by tobacco companies in the 1990s to take attention away from second-hand smoke in office health investigations. Second-hand smoke wasn’t causing sickness; the hard-to-pin-down “sick building syndrome” was. For your reading pleasure, I’ve pulled a lot of text from “Sick building syndrome and the problem of uncertainty,” by Michelle Murphy. I’ve focused on Chapter 6, “Building ecologies, tobacco, and the politics of multiplicity.” Thanks to Isaac.

The meat of the chapter is pp. 146-148, and on a bit:

In the 1980s, the largest building investigation company was Healthy Buildings International (HBI), located in Fairfax, Virginia. HBI had been a modest ventilation cleaning service called ACVA Atlantic until the Tobacco Institute, an industry lobby group, contacted its president, Gray Robertson 46. Tobacco companies hoped to thwart the regulation of secondhand smoke in workspaces, restaurants, bars, and public spaces. Sick building syndrome appealed to the Tobacco Institute because it drew attention to the multiple causes of indoor pollution. Only a few cases of SBS had been attributed to tobacco smoke, a fact that Robertson, HBI, and the literature sponsored by the Tobacco Institute emphasized over and over 47. Soon the Tobacco Institute and Philip Morris were building a database together on sick building syndrome cases, collecting a literature review, and contacting sympathetic indoor air quality experts who could spread news of sick building syndrome. In 1988, five big tobacco companies founded the nonprofit Center for Indoor Air Research (CIAR), which quickly became the largest nongovernmental source of funding for indoor air pollution studies.

Robertson, with a monthly retainer from the Tobacco Institute, began to underbid other companies for lucrative building investigation contracts in the Washington area–the US Capitol, the CIA headquarters, the Supreme Court, as well as corporate buildings on the East Coast such as the offices of IBM, MCI WorldCom, and Union Carbide. 49. Underwritten by Philip Morris, HBI expanded its scope by publishing a free glossy magazine that distributed over three-hundred thousand copies in multiple languages 50.

While Robertson was promoting sick building syndrome on the road, his company continued collecting data that later became tobacco industry evidence demonstrating that secondhand smoke — unlike other culprits such as fungi, dust, humidity, bacteria, and formaldehyde — was rarely a problem in buildings 54. His testimony before city councils, in court cases, and at federal hearings was pivotal to the tobacco industry’s case that secondhand smoke was not a substantive indoor pollutant and thus not in need of regulation 55.

The effort was so successful that the Tobacco Institute launched similar promotions of SBS in Canada, Hong Kong, and Venezuela.

Healthy Buildings International was not the only building investigation company wooed by the tobacco industry, nor was the Tobacco Institute the only industry association invested in derailing possible regulation of indoor pollution.[60] The Business Council on Indoor Air, founded in 1988, represented industry sponsors such as Dow Chemical and Owens-Corning at fifteen thousand dollars for board membership. It too promoted a “building systems approach.”[61] In addition, the Tobacco Industry Labor/Management Committee developed a presentation on indoor pollution for unions, creating a coast-to-coast roadshow that ran from 1988 to 1990.[62] Conferences, professional associations, and particularly newsletters proliferated in which industry-sponsored experts rubbed elbows with independent building investigators.

The appeal of sick building syndrome was that pollution and its effects could be materialized in a way impossible to regulate: as an unpredictable multiplicity. “Virtually every indoor decoration, building material or piece of furniture sheds some type of gaseous or particulate pollutant,” testified Robertson.[63] In its manual for building managers, the EPA warned that indoor pollution was “the product of multiple influences, and attempts to bring problems under control do not always produce the expected results.”[64] Managing complex relationships among many “factors” and “symptoms” replaced a “naive,” “single-minded,” and even “dangerous” attention to specific pollutants.

And last, a concluding excerpt:

The implication is that multiplicity was not a quality that could be simply celebrated for its eschewing of reductionism and embracing of diversity. Materializing an object as a multiplicity allowed historical actors to do concrete things about chemical exposure; at the same time, it disallowed and excluded other actions. It was precisely this capacity to exclude specific causal narratives and affirm ambiguity that made ecology and multiplicity such powerful ways to manage the physical corridors of capitalism. p.150

All this comes with interpretation. Murphy takes ecology and cybernetics to be fundamentally “establishment.” She documents the affection of management rhetoric for ecological and cybernetic concepts, but she goes further, citing Eugene Odum’s declaration of ecosystems ecology as “a new managerial ethos for society” (p.134). Then she moves into buildings, the business of buildings, the rhetoric of buildings as living things, wrapping up with research on the idea of questionnaires.

Throughout the book the author rocks a latent hostility to these concepts, and also to criticisms of them. She pulls the same trick with sick building syndrome itself: criticizing the establishment for not recognizing it as a disease, but also criticizing the people who suffer from it for being too privileged to have actual problems. I guess that’s why they call it critical theory, but I can’t help but feel that critical theorists do it as a hyperdefensive maneuver to avoid being vulnerable in front of their own peers. So I did find myself reading past her writing for the content, but there is a lot of content: she collected a ton of evidence, and it’s an impressive case that everything has got politics.

Here are all of the citations, copied straight out of the footnotes.

46. Myron Levin, “Who’s Behind the Building Doctor?”; Mintz, “Smoke Screen.”
47. Using its own building investigations as the data, HBI often cited its estimate that tobacco smoke played a role in 3% of SBS cases. However, this obscures incidents when tobacco smoke might have been named as an irritant unassociated with any larger SBS episode.
48. The CIAR was disbanded in 1998 as part of the Master Settlement Agreement.
49. On the sponsorship of Robertson, see Mintz, “Smoke Screen.” For a list of buildings the firm investigated, see References, Healthy Buildings International, Web site, http://www.hbiamerica.com/references/index.htm (accessed Nov. 19, 2003).
50. Myron Levin, “Who’s Behind the Building Doctor?”; Mintz, “Smoke Screen.”
51. Healthy Buildings International, “Sick Building Syndrome Causes and Cures,” 1991. Legacy Tobacco Documents Library, Philip Morris Collection, Bates No. 2022889303-9324, http://legacy.library.ucsf.edu/tid/hpc78e00 (accessed Nov. 27, 2003).
52. “Business Council on Indoor Air: A Multi-industry Response,” 6.
53. Gray Robertson, Healthy Buildings International, Sick Building Syndrome—Facts and Fallacies, Oct. 23, 1991, Legacy Tobacco Documents Library, R. J. Reynolds, Bates No. 509915547-5568, http://legacy.library.ucsf.edu/tid/qbr63d00 (accessed Nov. 27, 2003). Recent Advances in Tobacco Science, v. 17, Topics of Current Scientific Interest in Tobacco Research, Proceedings of a Symposium Presented at the Forty-Fifth Meeting of the Tobacco Chemists’ Research Conference, 151-52.
54. Healthy Buildings International, “HBI Experience.”
55. HBI’s relationship with the tobacco industry was revealed in 1992 when a fired employee turned whistle-blower. By 1998 the Master Settlement Agreement, a settlement between the U.S. state attorneys general and major tobacco companies, along with the Tobacco Institute, mandated that the industry release digital snapshots of millions of pages of internal documents, which have since demonstrated the industry’s support of indoor air quality research and investigators, establishing ties not only with Robertson but with a host of other indoor air quality specialists.
56. U.S. Environmental Protection Agency, “Indoor Air Facts.” Much of the credit for the successful publication of this pamphlet is due to James Repace, a senior EPA scientist, whistle-blower, and active NFFE union member, who widely published his rebuttals to the tobacco industry. On the EPA’s building assessment approach, see U.S. Environmental Protection Agency and National Institute of Occupational Safety and Health, “Building Air Quality.”
57. Healthy Buildings International, “About Us,” http://www.hbiamerica.com/aboutus/index.htm (accessed Nov. 11, 2003).
58. Ibid.
59. Gray Robertson, “Sick Building Syndrome,” Nov. 18, 1987. Legacy Tobacco Documents Library, Philip Morris Collection, Bates No. 2061692010-2012, http://legacy.library.ucsf.edu/tid/pjf49e00 (accessed Nov. 27, 2003).
60. See, e.g., the role of tobacco industry representatives within ASHRAE; Glantz and Bialous, “ASHRAE Standard 62.”
61. Business Council on Indoor Air, “Indoor Air Quality: A Public Healthy Issue in the 1990s; How Will It Affect Your Company?,” undated brochure, received on April 11, 1996, and “Building Systems Approach.”
62. “Labor Indoor Air Quality Presentations and Events,” Jan 1990, Legacy Tobacco Documents Library, Tobacco Institute, Bates No. TI02120328-0338, http://legacy.library.ucsf.edu/tid/wht30c00 (accessed Nov. 23, 2003).
63. “Investigating the ‘Sick Building Syndrome’:ETS in Context,” statement of Gray Robertson, president, ACVA Atlantic, Inc., before the National Academy of Sciences Concerning the Contribution of Environmental Tobacco Smoke to Indoor Air Pollution, Jan. 14, 1986, Legacy Tobacco Documents Library, Philip Morris Collection, Bates No. 2021005103-5125, http://legacy.library.ucsf.edu/tid/epj34e00 (accessed Nov. 27, 2003) 7.
64. U.S. Environmental Protection Agency and National Institute of Occupational Safety and Health, “Building Air Quality,” x.
65. Robertson, “Investigating the ‘Sick Building Syndrome’,” 21.

And, as an extra snippet, here is an excerpt bringing ecology in:

… moreover, the healthfulness of buildings was of deep interest to a selection of industries and their associations, most particularly the chemical, carpet, and tobacco industries. Ecology proved a very useful frame to this set of financially driven actors, each of which brought distinct motivation to the materialization of sick building syndrome. Ecology gave a framework for affirming the nonspecific and multiplous quality of sick building syndrome that was especially appealing to the tobacco industry, which actively resisted regulation. This chapter concludes that the concept of sick building syndrome achieved the prominence it did in the last two decades of the twentieth century largely because of the tobacco industry’s efforts to promote an ecological and systems approach to indoor pollution.
Sick building syndrome would have looked very different without the cybernetically inflected ecology of the 1970s. ‘Ecology’ was a word used to describe both a field of study (the scientific discipline of ecology) and an object of study (ecologies that existed in the world). Systems ecology took as its primary focus the study of the abstract patterns of relations between the organic and inorganic elements of a system. An emphasis on the management of the system, on the regulation of its flows, relationships, and second-order consequences, made systems ecology enormously attractive as a management ideology for business. This chapter traces how ecology was used to grant a complex, fluid, and multicausal form to business practices, building systems, and finally to sick building syndrome itself. The foregrounding of relationships defined by contingencies made ecological explanations extremely useful for assembling accounts that did not lay blame for indoor pollution on any one thing. p. 132


A list of human universals

This is a list of some of the things that pretty much all cultures have in common. It is drawn from Steven Pinker’s The Language Instinct (pp. 413-415), citing anthropologist Donald Brown:

Value placed on articulateness. Gossip. Lying. Misleading. Verbal humor. Humorous insults. Poetic and rhetorical speech forms. Narrative and storytelling. Metaphor. Poetry with repetition of linguistic elements and three-second lines separated by pauses. Words for days, months, seasons, years, past, present, future, body parts, inner states (emotions, sensations, thoughts), behavioral propensities, flora, fauna, weather, tools, space, motion, speed, location, spatial dimensions, physical properties, giving, lending, affecting things and people, numbers (at the very least “one,” “two,” and “more than two”), proper names, possession. Distinctions between mother and father. Kinship categories, defined in terms of mother, father, son, daughter, and age sequence. Binary distinctions, including male and female, black and white, natural and cultural, good and bad. Measures. Logical relations including “not,” “and,” “same,” “equivalent,” “opposite,” general versus particular, part versus whole. Conjectural reasoning (inferring the presence of absent and invisible entities from their perceptible traces).

Nonlinguistic vocal communication such as cries and squeals. Interpreting intention from behavior. Recognized facial expressions of happiness, sadness, anger, fear, surprise, disgust, and contempt. Use of smiles as a friendly greeting. Crying. Coy flirtation with the eyes. Masking, modifying, and mimicking facial expressions. Displays of affection.

Sense of self versus other, responsibility, voluntary versus involuntary behavior, intention, private inner life, normal versus abnormal mental states. Empathy. Sexual attraction. Powerful sexual jealousy. Childhood fears, especially of loud noises, and, at the end of the first year, strangers. Fear of snakes. “Oedipal” feelings (possessiveness of mother, coolness toward her consort). Face recognition. Adornment of bodies and arrangement of hair. Sexual attractiveness, based in part on signs of health and, in women, youth. Hygiene. Dance. Music. Play, including play fighting.

Manufacture of, and dependence upon, many kinds of tools, many of them permanent, made according to culturally transmitted motifs, including cutters, pounders, containers, string, levers, spears. Use of fire to cook food and for other purposes. Drugs, both medicinal and recreational. Shelter. Decoration of artifacts.

A standard pattern and time for weaning. Living in groups, which claim a territory and have a sense of being a distinct people. Families built around a mother and children, usually the biological mother, and one or more men. Institutionalized marriage, in the sense of publicly recognized right of sexual access to a woman eligible for childbearing. Socialization of children (including toilet training) by senior kin. Children copying their elders. Distinguishing of close kin from distant kin, and favoring of close kin. Avoidance of incest between mothers and sons. Great interest in the topic of sex.

Status and prestige, both assigned (by kinship, age, sex) and achieved. Some degree of economic inequality. Division of labor by sex and age. More childcare by women. More aggression and violence by men. Acknowledgment of differences between male and female natures. Domination by men in the public political sphere. Exchange of labor, goods, and services. Reciprocity, including retaliation. Gifts.

Social reasoning. Coalitions. Government, in the sense of binding collective decisions about public affairs. Leaders, almost always nondictatorial, perhaps ephemeral. Laws, rights, and obligations, including laws against violence, rape, and murder. Punishment. Conflict, which is deplored. Rape. Seeking of redress for wrongs. Mediation. In-group/out-group conflicts. Property. Inheritance of property. Sense of right and wrong. Envy.

Etiquette. Hospitality. Feasting. Diurnality. Standards of sexual modesty. Sex generally in private. Fondness for sweets. Food taboos. Discreetness in elimination of body wastes. Supernatural beliefs. Magic to sustain and increase life, and to attract the opposite sex. Theories of fortune and misfortune. Explanations of disease and death. Medicine. Rituals, including rites of passage.

If that isn’t enough for you, try:
J. Henrich, S. J. Heine, A. Norenzayan, “The weirdest people in the world?”, Behavioral and Brain Sciences 33, 61–83 (2010). It pitches itself as rejecting universality, but in the process presents the best review of robust similarities that I’ve found.


The free market: Burning man’s less successful social experiment

Burning Man is a big classic successful event sort of thing out in a Nevada desert. It has been getting more and more popular, but there is only room for 40,000 people. So what’s the best way to distribute 40,000 tickets among 80,000 people fairly and efficiently? They’ve always done it one way, but as demand grows, they’ve been feeling pressure for a new system.

This year they changed to an entirely new market-based system. They created a brand new social system designed from the top-down from scratch. That last clause should give you a hint that I’m not going to like it, and that I’m going to criticize it for not taking into account important things like reality. If you know me well enough, you might even suspect that this will get into libertarians.

The new system introduced a small variety of bidding and market mechanisms, all at once. The central mechanism let people enter one of three lotteries at three prices: $245, $325, and $390. It was probably designed to hit a certain revenue target.
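To make the mechanism concrete, here is a minimal sketch of a tiered lottery. Only the three prices come from the announcement; the per-tier ticket allocations and entrant counts below are invented for illustration, and the real draw rules were never published in this detail:

```python
import random

def tiered_lottery(entrants_by_price, tickets_per_tier, seed=0):
    """Run an independent lottery for each price tier.

    entrants_by_price: price -> list of entrant ids (each id enters one tier)
    tickets_per_tier:  price -> number of tickets allocated to that tier
                       (hypothetical; the real allocation wasn't published)
    Returns (winners, revenue), where winners maps entrant id -> price paid.
    """
    rng = random.Random(seed)
    winners = {}
    for price, pool in entrants_by_price.items():
        pool = list(pool)
        rng.shuffle(pool)  # uniform random draw within the tier
        for entrant in pool[:tickets_per_tier[price]]:
            winners[entrant] = price
    return winners, sum(winners.values())

# Invented demand: the cheap tier is heavily oversubscribed.
entrants = {
    245: [f"a{i}" for i in range(50_000)],
    325: [f"b{i}" for i in range(20_000)],
    390: [f"c{i}" for i in range(10_000)],
}
caps = {245: 15_000, 325: 15_000, 390: 10_000}
winners, revenue = tiered_lottery(entrants, caps)
```

With these made-up numbers an entrant in the $245 tier wins only 30% of the time, while the $390 tier clears completely, which is one way a price-tiered lottery quietly rations by willingness to pay.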

Wait a second: wild finger-painted feather-boa’d dusty creative hippie types using the inspirations of the free market? What’s going on? Here’s my theory: Burning Man creates Burning Man enthusiasts, some of whom may be drug enthusiasts, most of whom are enthusiastic for legalization, some of whom lean towards deregulation generally, which at this point makes one vulnerable to crazy things like the libertarian myopia for market distribution. In short: the whole thing smacks of drug-addled libertarians, whose devotion to markets is very idealistic, where “idealistic” is a nice way of saying ignorant of complexity. Just to spell it out.

What could go wrong? They’re actually still not sure what went wrong. (Scalpers! Hackers! Scalpers! The Masses!).

Following phone conversations with major theme camp and art group organizers, we determined that only 20%-25% of the key people needed to bring those projects to the playa had received notifications for tickets. A number of people also told us they’d used multiple credit cards and asked friends to register for them as a way to increase their chances of getting tickets. Those who received more tickets than they need said they are considering how to redistribute them.

link
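The arithmetic behind that 20%-25% figure is brutal for group projects. If each key member independently wins a ticket with probability p, a crew that needs all k of its people intact comes through with probability roughly p^k (independence is an approximation; in reality friends registered for each other, which correlates the draws):

```python
def crew_fully_ticketed(p_individual, crew_size):
    """Probability that every key member of a crew wins the lottery,
    assuming independent draws (an approximation)."""
    return p_individual ** crew_size

# At the reported 20-25% individual success rate, a 10-person art crew
# essentially never arrives intact; even coin-flip odds gut most crews.
for p in (0.20, 0.25, 0.50):
    print(f"p={p}: crew of 10 intact with probability {crew_fully_ticketed(p, 10):.2e}")
```

This is why a lottery that looks fair at the individual level can still destroy the theme camps and art projects the event depends on: the unit that matters is the group, and group success decays exponentially in group size.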

As a result they are probably going to over-correct and hand-pick the people to offer their remaining tickets to, a move akin to wealth redistribution, very “non-free.”

Generally, our fine notions about society are wrong. Unintended consequences are a fact of any change to an existing institution. Sometimes they matter, and they are more likely to matter the bigger the change. So what to do? Evolution offers one nice model: cope with the incomprehensible complexity of existence through diversity and incremental change. My favorite thing about markets isn’t their ability to crush collusion and find equilibria, but their ability to mimic the mutation, selection, and reproduction characteristic of effective search through complex spaces, though even that isn’t everything.
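That mutation-plus-selection idea can be made concrete with the simplest evolutionary search there is, a (1+1) hill climber: try a small random tweak, keep it only if things got no worse. The fitness function below is a toy stand-in for “how well an institution works,” not anything about Burning Man specifically:

```python
import random

def evolve(fitness, genome, steps=2000, step_size=0.1, seed=1):
    """(1+1) evolutionary search: mutate one coordinate slightly and
    keep the mutant only if it scores at least as well.
    Incremental variation + selection = search without a global model."""
    rng = random.Random(seed)
    best = list(genome)
    best_score = fitness(best)
    for _ in range(steps):
        mutant = list(best)
        i = rng.randrange(len(mutant))
        mutant[i] += rng.gauss(0, step_size)  # small incremental change
        score = fitness(mutant)
        if score >= best_score:               # selection
            best, best_score = mutant, score
    return best, best_score

# Toy fitness with a peak at (3, -2): tweaks alone find it, no blueprint needed.
peak = lambda g: -((g[0] - 3) ** 2 + (g[1] + 2) ** 2)
best, score = evolve(peak, [0.0, 0.0])
```

The contrast with the ticket redesign is the point: the climber never redesigns its genome from scratch, so no single bad mutation can wreck it, whereas a top-down overhaul bets everything on one untested variant.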