Two GIFs about peer review, and an embarrassing story …

1)


2)


It is common to have your papers rejected from journals. I forwarded a recent rejection to my advisor along with the first GIF. Shortly after, I got the second GIF from the journal editor, with a smiley. It turns out that I’d hit Reply instead of Forward.
At least he had a sense of humor.


This entry was posted on Saturday, December 17th, 2016 and is filed under audio/visual, science.


Natural selection, statistical mechanics, and the idea of germs were all inspired by social science

It’s only natural to want to hold your scientific field as the most important, or noble, or challenging field. That’s probably why I always present the sciences of human society as the ones that are hardest to do. It’s not so crazy: it is inherently harder to learn about social systems than biological, engineered, or physical ones because we can’t, and shouldn’t ever, have the same control over humans that we do over bacteria, bridges, or billiard balls. But maybe I take it too far. I usually think of advances in social science as advances in what it is possible for science to teach us, and I uncritically think of social science as where the scientific method will culminate.
So imagine my surprise to learn that social science isn’t the end of scientific discovery, but a beginning. According to various readings in John Carey’s Faber Book of Science, three of the most important scientific discoveries since the Enlightenment — the theory of natural selection, the germ theory of disease, and the kinetic theory of gases — brought inspiration from human social science to non-human domains. One of Darwin’s key insights toward the theory of evolution came while reading Malthus’s work on human population. Just in case you think that’s a fluke, Alfred Russel Wallace’s independent discovery of natural selection came while he was reading Malthus. (And Darwin was also influenced by Adam Smith.) Louis Pasteur developed the implications of the germ theory of disease by applying his French right-wing political philosophy to animalcules. The big leap there was that biologists had rejected the idea that very small, insignificant animals could possibly threaten a large and majestic thing like a human, but Pasteur had seen how the unworthy masses threatened the French elite, and it gave him an inkling. Last, James Clerk Maxwell, the man right under Newton and Einstein in physics stature, was reading up on the new discipline of Social Statistics when he came up with the kinetic theory of gases, which in turn sparked statistical mechanics and transformed thermodynamics. Physicists have started taking statistical mechanics out of physical science and applying it to social science, completely ignorant of the fact that it started there.
All of these people were curious enough about society to think and read about it, and their social ponderings were rewarded with fresh ideas that ultimately transformed each of their fields.
I think of science as a fundamentally social endeavor, but when I say that I’m usually thinking of the methods of science. These connections out of history offer a much deeper sense in which all of natural science is the science of humanity.
Thanks to Jaimie Murdock and Colin Allen for the connection between Malthus and Darwin, straight from Darwin’s autobiography:

In October 1838, that is, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus on Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favorable variations would tend to be preserved, and unfavorable ones to be destroyed. The results of this would be the formation of a new species. Here, then I had at last got a theory by which to work.


How would science be different if humans were different?

How would science be different if humans were different — if we had different physiological limits? Obviously, if our senses were finer, we wouldn’t need the same amount of manufactured instrumentation to reach the same conclusions. But there are deeper implications. If our senses were packed more densely, and if we could faithfully process and perceive all of the information they collect, we would probably have much more sensitive time perception, or, one way or another, a much more refined awareness of causal relations in the world. The result would be that raw observation would be a much more fruitful methodology within the practice of natural science, perhaps so much so that we would have much less need for things like laboratory experiments (which are currently very important).
Of course, a big part of the practice of science is the practice of communication, and that becomes clear as soon as we change language. Language is sort of a funny way to have to get things out of one head and into another. It is slow, awkward, and very imperfect. If “language” were perfect — if we could transfer our perfect memories of subjective experience directly to each other’s heads with the fidelity of ESP — there would be almost no need for reproducibility, one of the most important parts of science-as-we-know-it. Perfect communication would also supersede the paratactic writeups that scientific writing currently relies on to make research reproducible. It may be that in some fields there would be no articles or tables or figures. Maybe there would still be abstracts. And if we had unlimited memories, it’s possible that we wouldn’t need statistics, randomized experiments, or citations either.
The reduction in memory limits would probably also lead to changes in the culture of science. Science would move faster, and it would be easier to practice without specialized training. The practice of science would probably no longer be restricted to universities, and the idea of specialized degrees like Ph.D.s would probably be very different. T.H. Huxley characterized science as “organized common sense.” This “organization” is little more than a collection of crutches for our own cognitive limits, without which the line between science and common sense would disappear entirely.
That’s interesting enough. But, for me, the bigger implication of this exercise is that science as we know it is not a Big Thing In The Sky that exists without us. Science is fundamentally human. I know people who find that idea distasteful, but chucking human peculiarities into good scientific practice is just like breaking in a pair of brand-new gloves. Having been engineered around some fictional ideal, your gloves aren’t most useful until you’ve stretched them here and there, even if you’ve also nicked them up a bit. It’s silly to judge gloves on their fit to the template. In practice, you judge them on their fit to you.


The unexpected importance of publishing unreplicable research

There was a recent attempt to replicate 100 results from the psychology literature. It succeeded in replicating fewer than half. Is psychology in crisis? No. Why would I say that? Because unreplicable research is only half of the problem, and we’re ignoring the other half. As with most pass/fail decisions made by humans, a decision to publish after peer review can go wrong in two ways:

  1. Accepting work that “shouldn’t” be published (perhaps because it will turn out to have been unreplicable; a “false positive” or “Type I” error)
  2. Rejecting work that, for whatever reason, “should” be published (a “false negative” or “Type II” error).

It is impossible to completely eliminate both types of error, and I’d even conjecture that it’s impossible for any credible peer review system to completely eliminate either type of error: even the most lenient credible peer review will occasionally reject good work, and even the most conservative will accept crap. It is naïve to think that error can ever be eliminated from peer review. All you can do is change the ratio of false positives to false negatives, according to your own relative preference for the competing values of skepticism and credulity.
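To make the tradeoff concrete, here is a minimal signal-detection-style sketch (my own toy illustration, nothing from the replication project) in which papers get an invented “true quality,” reviewers see it through noise, and a single acceptance threshold is swept from lenient to conservative. All names and numbers are made up.

```python
# Toy sketch: peer review as a noisy threshold decision (illustrative numbers only).
import random

random.seed(1)

N = 100_000
papers = []
for _ in range(N):
    true_quality = random.gauss(0, 1)                      # how good the work "really" is
    reviewed_quality = true_quality + random.gauss(0, 1)   # what a noisy review perceives
    papers.append((true_quality, reviewed_quality))

def error_rates(threshold):
    """False positive and false negative rates for a given acceptance threshold."""
    false_pos = sum(1 for q, r in papers if r >= threshold and q < 0)   # accepted crap
    false_neg = sum(1 for q, r in papers if r < threshold and q >= 0)   # rejected good work
    n_bad = sum(1 for q, _ in papers if q < 0)
    n_good = N - n_bad
    return false_pos / n_bad, false_neg / n_good

for t in (-1.0, 0.0, 1.0, 2.0):   # lenient -> conservative review
    fpr, fnr = error_rates(t)
    print(f"threshold {t:+.1f}: false positives {fpr:5.1%}, false negatives {fnr:5.1%}")
```

Raising the threshold always buys fewer false positives at the price of more false negatives, and lowering it does the reverse; no threshold drives both to zero.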
So now you’ve got a choice, one that every discipline makes in a different way: you can build a conservative scientific culture that changes slowly, especially w.r.t. its sacred cows, or you can foster a faster and looser discipline with lots of exciting, tenuous, untrustworthy results getting thrown about all the time. Each discipline’s decision ends up nestling within a whole system of norms that develop for accommodating the corresponding glut of awful published work in the one case and excellent anathematic work in the other. It is hard to make general statements about whole disciplines, but peer review in economics tends to be more conservative than in psychology. So young economists, who are unlikely to have gotten anything through the scrutiny of their peer review processes, can get hired on the strength of totally unpublished working papers (which is crazy). And young psychologists, who quickly learn that they can’t always trust what they read, find themselves running many pilot experiments for every few they publish (which is also crazy). Different disciplines have different ways of doing science that are determined, in part, by their tolerances for Type I relative to Type II error.
In short, the importance of publishing unreplicable research is that it helps keep all replicable research publishable, no matter how controversial. So if you’re prepared to make a judgement call and claim that one place on the error spectrum is better than another, that really says more about your own high or low tolerance for ambiguity, or about the discipline that trained you, than it does about Science And What Is Good For It. And if you like this analysis, thank psychology, because the concepts of false positives and negatives come out of signal detection theory, an important math-psych formalism that was developed in early human factors research.
Because a lot of attention has gone toward the “false positive” problem of unreplicable research, I’ll close with a refresher on what the other kind of problem looks like in practice. Here is a dig at the theory of plate tectonics, which struggled for over half a century before it finally gained a general, begrudging acceptance:

It is not scientific but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.*

Take that, plate tectonics.


This entry was posted on Friday, September 4th, 2015 and is filed under science.


Paper on Go experts in Journal of Behavioral and Experimental Economics

I just published a paper with Sascha Baghestanian in the Journal of Behavioral and Experimental Economics on expert Go players.
It turns out that having a higher professional Go ranking correlates negatively with cooperation — but being better at logic puzzles correlates positively. This challenges the common wisdom that interactive decisions (game theory) and individual decisions (decision theory) invoke the same kind of personal-utility-maximizing reasoning. By our evidence, only the first one tries to maximize utility through backstabbing. Go figure!
This paper only took three years and four rejections to publish. Sascha got the data by crashing an international Go competition and signing up a bunch of champs for testing.


This entry was posted on Saturday, July 25th, 2015 and is filed under science, updates.


Prediction: Tomorrow's games and new media will be public health hazards.

Every psychology undergraduate learns the same scientific parable of addiction. A rat with a line to its veins is put in a box, a “Skinner Box,” with a rat-friendly lever that releases small amounts of cocaine. The rat quickly learns to associate the lever with a rush, and starts to press it, over and over, forgoing nourishment and sociality, until death, often by stroke or heart failure.
Fortunately, rat self-administration studies, which go back to the 1960s, offer a mere metaphor for human addiction. A human’s course down the same path is much less literal. People don’t literally jam a “self-stimulate” button until death. Right? Last week, Mr. Hsieh from Taiwan was found dead after playing unnamed “combat computer games” for three days straight. Heart failure. His case follows a handful of others from the past decade, from Asia and the West. Streaks of 12 hours to 29 days, with causes of death including strokes, heart failure, and other awful things. One guy foamed at the mouth before dropping dead.
East Asia is leagues ahead of the West in the state of its video game culture. Multiplayer online games are a national pastime with national heroes and nationally televised tournaments. (And the South Korean government has taken a public health perspective on the downsides, with a 2011 curfew for online gamers under 18.) Among the young, games play the role that football plays for the rest of the world. With Amazon’s recent purchase of e-sports broadcaster twitch.tv for $1.1 billion, there is every reason to believe that this is where things are going in the West.
Science and industry are toolkits, and you can use them to take the world virtually anywhere. With infinite possibilities, the one direction you ultimately choose says a lot about you and your values. The gaming industry values immersion. You can see it in the advance of computer graphics and, more recently, in the ubiquity of social gaming and gamification. You can see it in Silicon Valley’s positively retro fascination with the outmoded 1950s “behaviorist” school of psychology, with its Skinner boxes, stimuli and responses, classical conditioning, operant conditioning, positive reinforcement, and newfangled (1970s) intermittent reinforcement. Compulsion loops and dopamine traps. Betable.com, another big dreamer, is inspiring us all with its wager that the future of gaming is next to Vegas. Incidentally, behaviorism seems to be the most monetizable of the psychologies.
And VR is the next step in immersion, a big step. Facebook has bet $400 million on it. Virtual reality uses the human visual system — the sensory modality with the highest bandwidth for information — to provide seamless access to human neurophysiology. It works at such a fundamental level that the engineering challenges remaining in VR are no longer technological (real-time graphics rendering can now feed information fast enough to keep up with the amazing human eye). Today’s challenges are more about managing human physiology, specifically nausea. In VR, the easiest game to engineer is “Vomit Horror Show,” and any other game is hard. Nausea is a sign that your body is struggling to resolve conflicting signals; your body doesn’t know what’s real. Developers are being forced to reimagine basic principles of game and interface design.*** Third-person perspective is uncomfortable; it makes your head swim. Cut scenes are uncomfortable for the lack of control. If your physical body is sitting while your virtual body stands, it’s possible to feel like you’re the wrong height (also uncomfortable). And the door that VR opens can close behind it: it isn’t suited to the forms we think of when we think of video games: top-down games that make you a mastermind or a god, “side-scroller” action games, detached and cerebral puzzle games. VR is about first-person perspective, you, and making you forget what’s real.
We use rats in science because their physiology is a good model of human physiology. But I rolled my eyes when my professor made his dramatic pause after telling the rat story. Surely, humans are a few notches up when it comes to self control. We wouldn’t literally jam the happy button to death. We can see what’s going on. Mr. Hsieh’s Skinner Box was gaming, probably first-person gaming, and he self-administered with the left mouse button, which you can use to kill. These stories are newsworthy today because they’re still news, but all the pieces are in place for them to become newsworthy because people are dying. The game industry has always had some blood on its hands. Games can be gory and they can teach and indulge violent fantasizing. But if these people are any indication, that blood is going to become a lot harder to tell from the real thing.


This entry was posted on Thursday, January 29th, 2015 and is filed under science, straight-geek.


The intriguing weaknesses of deep learning and deep neural networks

Deep learning, and neural networks generally, have impressed me a lot for what they can do, but much more so for what they can’t. They seem to be vulnerable to several of the very same strange, deep design limits that seem to constrain the human mind-brain system.

  • The intractability of introspection. The fact that we can know things without knowing why we know them, or even that we know them. Having trained a deep network, it’s a whole other machine learning problem just to figure out how it is doing what it is doing.
  • Bad engineering. Both neural networks and the brain are poorly engineered, in the sense that they perform action X in a way that no mechanical or electrical engineer would ever have designed a machine to do X.** These systems don’t respect modularity, and it is hard to analyze them with pencil and paper. They are hard to diagnose, troubleshoot, and reverse-engineer. That’s probably important to why they work.
  • The difficulty of unlearning. The impossibility of “unseeing” the object in the image below, once you know what it is. That is a property that neural networks share with the brain. Well, maybe that isn’t a fact; maybe I’m just conjecturing. If so, call it a conjecture: I predict that Facebook’s DeepFace, after it has successfully adapted to your new haircut, has more trouble than it should in forgetting your old one.
  • Very fast performance after very slow training. Humans make decisions in milliseconds, decisions based on patterns learned from a lifetime of experience and tons of data. In fact, the separation between the training and test phases that is standard in machine learning is more of an artifice in deep networks, whose recurrent varieties can be seen as lacking the dichotomy.
  • There are probably others, but I recognize them only slowly.

Careful. Once you know what this is, there’s no going back.

Unlearning, fast learning, introspection, and “good” design aren’t hard to engineer: we already have artificial intelligences with these properties, and we humans can easily do things that seem much harder. But neither humans nor deep networks are good at any of these things. In my eyes, the fact that deep learning is reproducing these seemingly-deep design limitations of the human mind gives it tremendous credibility as an approach to human-like AI.
The coolest thing about a Ph.D. in cognitive science is that it constitutes license, almost literally, to speculate about the nature of consciousness. I used to be a big skeptic of the ambitions of AI to create human-like intelligence. Now I could go either way. But I’m still convinced that getting it, if we get it, will not imply understanding it.


This entry was posted on Sunday, December 21st, 2014 and is filed under science.


Toothbrushes are up to 95% less effective after 3 months and hugging your children regularly can raise their risk of anxiety, alcoholism, or depression by up to 95%

It sounds impossible, but this statistic is true:

Hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%.

I don’t even need a citation. Does it mean parents should stop hugging their children? No. You’d think that it couldn’t possibly be right, but the truth is even better: it couldn’t possibly be wrong.
And there are other statistics just like it. I was at a Walmart and on the side of a giant bin of commodity toothbrushes I read that “a new toothbrush is up to 95% more effective than a 3 month old toothbrush in reducing plaque between teeth.”
If you’ve heard related claims like “Your toothbrush stops working after three months” from TV or word of mouth, I’ve found that they are all butchered versions of this original statistic, which actually says something completely different.
I’d only heard the simplified versions of that stat myself, and it had always set off my bullshit detector, but what was I going to do, crusade passionately against toothbrushes? Seeing the claim written out in science speak changed things a little. The mention of an actual percentage must have struck me, because I pushed my giant shopping cart in big mindless circles before the genius of the phrasing bubbled up. This is textbook truthiness: at a glance, it looks like science is saying you should buy more toothbrushes, but simply reading it carefully shows that the sentence means nothing at all. The key is in the “up to.” All this stat says is that if you look at a thousand or a million toothbrushes you’ll find one that is completely destroyed (“95% less effective”) after three months. What does that say about your particular old toothbrush? Pretty much nothing.
And that’s how it could be true that hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%. Once again, the key is in the “up to.” To prove it, all I have to do is find someone who is a truly terrible hugger, parent, and person. If there exists anyone like that — and there does — then this seemingly crazy claim is actually true. If any person is capable of causing psychological distress through inappropriate physical contact, the phrase “up to” lets you generalize to everyone. Should you stop hugging your child because there exist horrible people somewhere in the world? Of course not. These statistics lead people to conclusions that are the opposite of the truth. Is that even legal?
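If it helps, here is a tiny made-up simulation of the “up to” trick (every number is invented): a thousand hypothetical toothbrushes, nearly all of which lose only a few percent of their effectiveness after three months, plus one that is completely trashed. The “up to X% less effective” figure is driven entirely by that one brush.

```python
# Toy illustration of "up to 95% less effective" (all numbers made up).
import random

random.seed(0)

# Fraction of effectiveness lost after three months: most brushes lose a few percent.
losses = [min(abs(random.gauss(0.03, 0.03)), 1.0) for _ in range(999)]
losses.append(0.95)   # the one brush that got chewed to pieces

typical = sorted(losses)[len(losses) // 2]
print(f"'Up to {max(losses):.0%} less effective'  <- true, thanks to a single ruined brush")
print(f"Median loss for everything else: {typical:.0%}  <- what your brush probably looks like")
```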
If it’s in your mind that you should buy a new toothbrush every three months, that’s OK, it’s in mine too. And as everyone who comes within five feet of me will be happy to hear, me and dental hygiene have no conflict. But you have to know that this idea of a three-month freshness isn’t based in facts. If I had to guess, I’d say that it’s a phrase that was purchased by the dental-industrial complex to sell more toothbrushes, probably because they feel like they don’t sell enough toothbrushes. If it sounds tinfoil-hat to suggest that an industry would invest in fake science just to back up its marketing, look at just one of the exploits pulled by Big Tobacco, very well documented in testimony and subpoenas from the 1990s.

Press release by Colgate cites an article that never existed

Hunting to learn more about the statistic, I stumbled on some Colgate fan blogs (which I guess exist) pointing to a press release citing “Warren et al, J Dent Res 13: 119-124, 2002.”
Amazingly, it’s a fake paper! There is nothing by Warren in the Journal of Dental Research in 2002, or in any other year. But I kept looking and eventually found something that seems to fit the bill:
Conforti et al. (2003) An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies. Journal of Clinical Dentistry 14(2):29-33. Available at http://www.ncbi.nlm.nih.gov/pubmed/12723100.
First author Warren in the fictional paper is the last author in this one. It’s got to be the right paper, because their results say exactly what I divined in Walmart, that a three month old toothbrush is fine and, separately, that if you look hard enough you’ll find really broken toothbrushes. Here it is in their own words, from the synopsis of the paper:

A comparison of the efficacies of the new and worn D4 toothbrushes revealed a non-significant tendency for the new brush head to remove more plaque than the worn brush head. However, when plaque removal was assessed for subjects using brush heads with the most extreme wear, i.e., scores of 3 or 4 (n = 15), a significant difference (p < 0.05) between new and worn brush heads was observed for the whole-mouth and approximal surfaces.

This study should never have been published. The phrase “revealed a non-significant tendency” is jargon for “revealed nothing.” To paraphrase the whole thing: “We found no effect between brand-new and three-month-old toothbrushes, but we wanted to find one, and that’s almost good enough. Additionally, a few of the toothbrushes were destroyed during the study, and we found that those toothbrushes don’t work.” The only thing in the original stat that isn’t in the Conforti synopsis is the claim about effect size: “up to 95% less effective.” The synopsis mentions no effect size regarding the destroyed toothbrushes, so either it’s only mentioned in the full version of the paper (which I can’t get my hands on) or it’s based on a really incredibly flawed interpretation of the significance claim, “(p < 0.05).”

The distinguished Paul J. Warren works or worked for Braun (but not Colgate), and has apparently loved it. Braun is owned by Gillette, which is owned by Procter & Gamble. The paper’s first author, Conforti, works, along with several of the paper’s other authors, for Hill Top Research, Inc., a clinical research contractor based in West Palm Beach, Florida. I don’t think there’s anything inherently wrong with working for a corporate research lab — I do — but it looks like they produce crap for money, and the reviewers who let Braun’s empty promotional material get published in a scientific journal should be embarrassed with themselves.

The original flawed statistic snowballs, accumulating followers, rolling further and further from reality

I did a lot of digging for the quote, and found lots of versions of it, each further from reality than the one before it. Here is the first and best attempt at summarizing the original meaningless study:

A new toothbrush is up to 95% more effective than a three month old toothbrush in reducing plaque between teeth.*

A later mention by Colgate gets simpler (and adds “normal wear and tear,” even though the study only found an effect for extreme wear and tear):

Studies show that after three months of normal wear and tear, toothbrushes are much less effective at removing plaque from teeth and gums compared to new ones.*

… and simpler ….

Most dental professionals agree you should change your toothbrush every three months.*

That last one might come from a different source, and it might reflect the statistic’s transition from a single vacuous truthy boner to vacuous widespread conventional wisdom. The American Dental Association now endorses a similar message: “Replace toothbrushes at least every 3–4 months. The bristles become frayed and worn with use and cleaning effectiveness will decrease.” To their credit, their citations don’t include anything by Warren or Conforti, but the paper they do cite isn’t much better: Their evidence for the 3–4 month time span comes from a study that only ran for 2.5 months (Glaze & Wade, 1986). Furthermore, the study only tested 40 people, and it wasn’t blind, and it’s stood unelaborated and unreplicated for almost 30 years. It’s an early, preliminary result that deserves followup. But if that’s enough for the ADA to base statements on then they are a marketing association, not the medical or scientific one they claim to be.
They also cite evidence that toothbrushes you’ve used are more likely to contain bacteria, but they’re quick to point out that those bacteria are benign and that exposure to them is not linked to anything, good or bad. Of course, those bacteria on your toothbrush probably came from your body. Really, you infect your toothbrush, not the other way around, so why not do it a favor and get a new mouth every three months?

So what now?

Buy a new toothbrush if you want, but scientifically, the 3–4 month claim is on the same level as not hugging your kids. Don’t stop hugging your kids. Brush your teeth with something that can get between them, like a cheap toothbrush, an old rag dipped in charcoal, or a stick. You can use toothpaste if you want; it seems to have an additional positive effect, probably a small one. Your toothbrush is probably working fine. After hundreds of thousands of dollars of effort, the only thing researchers seem to have really discovered is that busted toothbrushes are busted, so if your toothbrush shows extreme wear, maybe buy a new one. For less than hundreds of thousands of dollars I can add a little extra wisdom: if it smells bad, you probably have bad breath.
Disclaimer: I’m sure I could have read more, and I might be working too fast, loose, and snarky. I haven’t even read the full Conforti paper (if you have good institutional access, see if you can get it for me). I’ll dig deeper if it turns out that anyone cares; leave a comment.

Refs

Conforti N.J., Cordero R.E., Liebman J., Bowman J.P., Putt M.S., Kuebler D.S., Davidson K.R., Cugini M. & Warren P.R. (2003). An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies. The Journal of Clinical Dentistry, 14(2), 29–33. PMID: 12723100. http://www.ncbi.nlm.nih.gov/pubmed/12723100
Glaze P.M. & Wade A.B. (1986). Toothbrush age and wear as it relates to plaque control. Journal of Clinical Periodontology, 13(1), 52–56. DOI: 10.1111/j.1600-051x.1986.tb01414.x. http://dx.doi.org/10.1111/j.1600-051x.1986.tb01414.x


Xeno's paradox

There is probably some very deep psychology behind the age-old tradition of blaming problems on foreigners. These days I’m a foreigner, in Switzerland, and so I get to see how things are and how I affect them. I’ve found that I can trigger a change in norms even by going out of my way to have no effect on them. It’s a puzzle, but I think I’ve got it modeled.
In my apartment there is a norm (with a reminder sign) around locking the door to the basement. It’s a strange custom, because the whole building is safe and secure, but the Swiss are particular and I don’t question it. Though the rule was occasionally broken in the past (hence the sign), residents in my apartment used to be better about locking the door to the basement. The norm is decaying. Over the same time period, the number of foreigners (like me) has increased. From the naïve perspective, the mechanism is obvious: Outsiders are breaking the rules. The mechanism I have in mind shows some of the subtlety that is possible when people influence each other under uncertainty. I’m more interested in the possibility that this can exist than in showing it does. Generally, I don’t think of logic as the most appropriate tool for fighting bigotry.
When I moved into this apartment I observed that the basement door was occasionally unlocked, despite the sign. I like to align with how people are instead of how the signs say they should be, and so I chose to just remain a neutral observer for as long as possible while I learned how things run. I adopted a heuristic of leaving things how I found them. If the door was locked, I locked it behind me on my way out, and if the door wasn’t I left it that way.
That’s well and good, but you can’t just be an observer. Even my policy of neutrality has side effects. Say that the apartment was once full of Swiss people, including one resident who occasionally left the door unlocked but was otherwise perfectly Swiss. The rest of the residents are evenly split between orthodox door lockers and others who could go either way and so go with the flow. Under this arrangement, the door stays locked most of the time, and the people on the cusp of culture change stay consistent with what they are seeing.
Now, let’s introduce immigration and slowly add foreigners, but a particular kind that never does anything. These entrants want only to stay neutral and they always leave the door how they found it. If the norm of the apartment was already a bit fragile, then a small change in the demographic can tip the system in favor of regular norm violations.
If the probability of adopting the new norm depends on the frequency of seeing it adopted, then a spike in norm adoptions can cause a cascade that makes a new norm out of violating the old one. This is all standard threshold model: Granovetter, Schelling, Axelrod. Outsiders change the model by creating a third type that makes it look like there are more early adopters than there really are.
Technically, outsiders translate the threshold curve up and don’t otherwise change its shape. In equations, (1) is a cumulative function representing the threshold model. It sums over some positive function f() up to percentile X to return a value Y, as in “X% of people (early adopters (E) plus non-adopters (N)) need to see that at least Y% of others have adopted before they do.” Equation (2) shifts equation (1) up by the percentage of outsiders times their probability of encountering an adopter rather than a non-adopter.
(1) Y(X) = \int_0^X f(p)\,dp
(2) Y'(X) = Y(X) + O \cdot \frac{E}{E + N}
If you take each variable and replace it with a big number you should start to see that the system needs either a lot of adopters or a lot of outsiders for these hypothetical neutral outsiders to be able to shift the contour very far up. That says to me that I’m probably wrong, since I’m probably the only one following my rule. My benign policy probably isn’t the explanation for the trend of failures to lock the basement door.
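Here is the back-of-the-envelope version of that, under my reading of equation (2), where the upward shift is the share of outsiders times the chance of meeting an adopter rather than a non-adopter, O · E/(E+N). The numbers are invented, just to show orders of magnitude.

```python
# Back-of-the-envelope shift from neutral outsiders (my reading of equation (2); made-up numbers).

def shift(outsiders, adopters, non_adopters):
    """How far neutral outsiders push the threshold curve up: O * E / (E + N)."""
    return outsiders * adopters / (adopters + non_adopters)

for O in (0.05, 0.20, 0.50):     # share of residents who are neutral outsiders
    for E in (0.05, 0.20):       # share of locals who genuinely leave the door unlocked
        N = 1.0 - E              # the rest of the locals
        print(f"outsiders {O:.0%}, true adopters {E:.0%} -> apparent extra adoption {shift(O, E, N):.1%}")
```

With a handful of outsiders and a handful of adopters the shift is a fraction of a percent; you need a lot of one or the other before the curve moves far enough to matter.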
This exercise was valuable mostly for introducing a theoretical mechanism that shows how it could be possible for outsiders to not be responsible for a social change, even if it seems like it came with them. Change can come with disinterested outsiders if the system is already leaning toward a change, because outsiders can be mistaken for true adopters and magnify the visibility of a minority of adopters.

Update a few months later

I found another application. I’ve always wondered how it is that extreme views — like extreme political views — take up so much space in our heads even though the people who actually believe those things are so rare. I’d guess that we have a bias towards overestimating how many people are active in loud minorities, anything from the Tea Party to goth teenagers. With a small tweak, this model can explain how being memorable can make your social group seem to have more converts than it has, and thereby encourage more converts. Just filter people’s estimates of different groups’ representations through a memory of every person that has been seen in the past few months, with a bias toward remembering memorable things. I’ve always thought that extreme groups are small because they are extreme, but this raises the possibility that it’s the other way around, that when you’re small, being extreme is a pretty smart growth strategy.
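Here is a rough sketch of that tweak, with recall probabilities I made up: encounters with the extreme group are rare but sticky, ordinary encounters are common but forgettable, and the perceived share of the group is computed from what gets remembered.

```python
# Toy memorability bias (all parameters invented for illustration).
import random

random.seed(2)

TRUE_SHARE = 0.02        # the extreme group really is 2% of the people you encounter
RECALL_EXTREME = 0.9     # memorable encounters usually stick
RECALL_ORDINARY = 0.2    # ordinary encounters usually don't

encounters = [random.random() < TRUE_SHARE for _ in range(10_000)]
remembered = [is_extreme for is_extreme in encounters
              if random.random() < (RECALL_EXTREME if is_extreme else RECALL_ORDINARY)]

perceived_share = sum(remembered) / len(remembered)
print(f"true share: {TRUE_SHARE:.0%}   perceived share: {perceived_share:.1%}")
```

With these made-up numbers, a group that is 2% of encounters feels closer to 8%, which is the flavor of distortion the model needs.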