AI in the classroom

At Davis I teach a course that introduces programming by starting with motivation before principles. That means it’s a course about doing fun stuff. And I just added an assignment for playing with the OpenAI API.

It’s an online course with its own security requirements, and so I regularly have the students submit videos in which they flash their IDs. To play with that habit, I ask them to modify code for OpenAI’s image generation API to give me:

A {style} {animal} holding its ID up to a webcam.
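
For anyone curious how the blanks get filled, it’s just Python string formatting. A minimal sketch (the style and animal values here are my own made-up examples, not actual student submissions):

```python
# Filling the assignment's prompt template; the fill-in values are hypothetical.
template = "A {style} {animal} holding its ID up to a webcam."
prompt = template.format(style="watercolor", animal="capybara")
print(prompt)  # → A watercolor capybara holding its ID up to a webcam.
```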

Here are their submissions. I don’t have all the prompts that students filled the template in with, but some are in the filenames. With some, I do wonder whether they are real AI-made images as opposed to mere human-made submissions, but I have no way to confirm, so I can only enjoy the irony of suddenly being concerned that my students are doing their homework manually instead of using AI.

If it looks fun, here are the OpenAI demos, #1 (text) and #2 (images and audio) for you to play with.

About

This entry was posted on Tuesday, March 12th, 2024 and is filed under Uncategorized.


Timezones and mindstates

OK. You start in France at latitude 45. That is the south of the country, and we’ll say that you are due south of Greenwich. Just so you know, I’m writing this whole thing with the map open big, so you should probably read it that way.

This is only the very beginning of the trip, and things are already weird. The UK is the only country in Europe that both has and should have GMT. The other countries that should also have GMT (France, Spain, and Norway) do not. And the only other countries that do have GMT (Portugal, Iceland, and Ireland) are mostly or entirely in what should be GMT-1. If I woke up one day to find that everybody except me was insane, it would cross my mind that maybe I’m the one who’s crazy. These are the thoughts that the UK should be thinking.

Anyway, you are still at 45 degrees north, halfway up to the pole, and you start flying east towards Russia, as the crow flies, but farther and a lot faster. It’s noon in Greenwich, and let’s say that you are going fast enough to circle the earth in a few seconds.

We haven’t moved yet, so it’s 1:00PM where you are. It should be noon, but it’s 1:00, until you hit Romania and it becomes 2:00PM. 2:00PM starts early and ends early: you get about half as much 3:00PM as you should, and then it jumps to 5:00PM. Only some parts of Russia are having 4:00PM right now. None of this is weird. In Kazakhstan you increment predictably to 6:00PM, which you enjoy an hour early in GMT+5. Halfway through 6:00PM you find yourself in China at 8:00PM, even while it’s still five in some places (or five-and-a-half in India). This isn’t weird yet.

China is big, and it has one timezone, so you don’t leave 8 until almost nine. That makes a lot of sense, right? Back on track. Very sensible. Well, you leave 8 and go into 10:00PM in Russia. That is less sensible, but forgivable. Japan is the next country past Russia, and it appreciates the utility of 9:00PM. So when you reach Japan, having stayed at 45 degrees north the whole time, and traveling only directly east, you go back into 9:00PM. When you leave, you jump up to 11:00PM in Russia, go back in time again to 10:00PM, then up again to 11:00, then to midnight, but instantly to midnight of the night before, just west of mainland Alaska.

All that backwards stuff happened quickly, just between GMT+9 and GMT+10. Imagine driving from Chicago to New York. But instead of going from 4:00PM to 5:00PM, you go 3:00, 5:00, 4:00, 6:00, 5:00. You can put equal blame on the Russians, Chinese, and Japanese.

The Pacific is boring/sensible, switching when and where you might expect, excepting the 24-hour leap backwards in time. In the US, 8:00PM (PST) is truncated, and 6:00PM (Central) takes (a little more than) its time, but the rest of the trip, around to where you started, is sane. There is one more exception. The whole timezone system is built around GMT, but you never actually entered GMT the whole trip. You skipped noon, so no lunch. That might be wrong: it depends on what time it is over an ocean if (a) you are in GMT+0 (where it would be noon) but (b) within the waters of Spain and France (where it is 1:00PM).

I found out about this craziness because I’ll be in Sapporo, Hokkaido, Japan, the big empty island in the north of Japan that looks like a birthmark. I don’t have any friends just east in Vladivostok. But I do have family in Manila. The one is close to being due north of the other. But, while calling my family in Manila means subtracting an hour (like you’d expect), calling a friend in Vladivostok will mean adding an hour. That is backwards.

Between Greenwich and Westphalia, there isn’t room for both your time and your sanity. Actually, I don’t really believe that. I bet it would be even weirder if we cut things the way that the folks in Greenwich originally intended—in terms of longitude rather than by national preference.

[Ed. This is a repost from my old blog.]

About

This entry was posted on Sunday, November 26th, 2023 and is filed under systems of culture.


Cut it out with this Gordian knot stuff

The cutting of the Gordian Knot by Alexander the Great is a funny one as great myths go: take a look and you’ll realize that it’s usually invoked only to be criticized. Any thinker capable of nuance has to come out against it:

There’s Camus: “Yes, the rebirth is in the hands of all of us. It is up to us if the West is to bring forth any anti-Alexanders to tie together the Gordian Knot of civilization cut by the sword.”

And Sartre speaking of Heidegger: “In his abrupt, rather barbaric fashion of cutting Gordian knots rather than trying to untie them, he gives in answer to the question posited a pure and simple definition.”

Really, most mentions I encounter are either to defend the knot or attack the people who think they can solve it.

I started to get a sense that anyone really moved by this sense of the necessity of cutting through complexity is probably a victim of the authoritarian personality, and of maybe not figurative but certainly literal fascism. Take Mussolini:

  • “The era of Liberalism, after having accumulated an infinity of Gordian knots, tried to untie them in the slaughter of the World War, and never has any religion demanded of its votaries such a monstrous sacrifice.”
  • “I understood now,” [he] wrote, “that the Gordian knot of Italian political life could only be undone by an act of violence.”
  • Not to mention his book plate.
  • But fascist Franco, not to be one-upped, put it on his seal.

Say what you want, but even the idea that we deserve to call our naive interventions solutions is a big ugly act of hubris. It’s got its place, but I know where I start.


The toilet humor

As I get older and more mature, I’m noticing my sense of humor changing. All the pee, poop, and fart jokes that used to make me guffaw are now even funnier. I think it’s connected to this other change, maybe physiological, where I’m starting to get as much satisfaction from the basic functions as I do from a good day’s work. The glow is strong enough that I start to wonder if my worldview is on-end, and if I’m just an animal who endures the defect of reason for the joys of creaturehood.

And with these very unexpected changes there’s come a third. I think I’m finally starting to get Freud.

About

This entry was posted on Thursday, October 19th, 2023 and is filed under life and words.


Beyond first-order skepticism

In our culture, there’s a great shortcut to the high ground: the bold skeptic who doesn’t believe any of your ignorant mainstream rot. You see it everywhere. The bold skeptic is deeply and widely appealing, instantly recognizable, and so easy to fake. It’s almost as easy to fake as the other shortcut: the underdog. If underdog billionaires can complain about “the elites,” and underdog top pundits (as in, literally, mainstream-media pundits) can rage at the mainstream media, then calling a good thing bad is nothing.

So: to instantly amaze your friends with your intellectual depth, take something everyone believes and reject it. That’s the first-order skeptic.

First-order skepticism in itself is common, and fine. It isn’t very deep to be a contrarian. But it’s something. The problem with the first-order skeptic is this: a lot of what us sheeple believe, we believe because it’s true. Floor down, sky up, grass green, sun big. It can be tricky to maintain a skeptic identity without being easily cornered into untenable positions. This is the big problem at the ground floor of skepticism. But you can solve it with work, by going deeper.

A second-order skeptic doubts both the common wisdom and the first-order skeptics. What a first-order skeptic has on the normies, a second-order skeptic has on the first-order skeptics. “The earth isn’t flat or round: it’s a geoid!” Then there’s your third-order skeptic, who doubts the zero-, first-, and second-order skeptics, and so on. “Sure, the earth is a geoid, but that’s not really a definition of a kind of shape; it’s really more our name for whatever shape the earth is.” A hippie first-order skeptic will reject microwaves and dishwashers for being too gadgety and commodified, while a second-order hippie will embrace them for being energy- and resource-efficient. Michael Moore rejects recycling because putting sustainability work on consumers is a drop in the bucket up against the magnitude of corporate waste. That’s a second-order skeptic.

If a first-order vegetarian rejects meat on ethical or squeamish grounds, a second-order vegetarian might use ecological grounds, which reject animal agriculture but allow for hunted goat in the tropics, or hunted moose in the arctic: ecosystems that can support that game at those levels of prevalence. A third-order vegetarian thinks that’s fine, but a little too naive in its embrace of the bold individualist. At the third order, your vote is naturally for the most ecologically and ethically sound protein source of all. You argue that we should farm and eat bugs.

As you go further and further down, you occupy increasingly unlikely, creative positions, and become more and more of a character, with more credibility at each level. At each level, you have to be more informed. Each level is harder to fake. Every take feels like IcyHot: spicy freshness and stone-cold logic in the same package. Many of my biggest moments of admiration or respect boil down to a moment of seeing someone lodged in at level three or four casually blowing my mind. One of my most influential professors was so radically higher-order in her feminism that she exclusively wore dresses, because she saw the trend of selling women on shirts and pants as nothing but a fashion industry ploy to get women to spend twice as much on garments. And deeper isn’t always better; I also admire consistency at medium depth. Jacobin Magazine, and The Baffler before it, are just solid, reliable, consistent second-order skepticism. I always think of Jacobin taking down Foucault for his admiration of capitalism.

I’ve seen that sometimes if you fly too high you wrap back around to incredibly norm-y positions. I’ve found that many of the friends who are best at it become absolute curmudgeons. I’ve seen the second and third orders get faked as well. But overall, it’s a sign of quality. As an idea it’s like “Galaxy Brain,” but the result of work and investment. It’s a sign of real thought. It’s something I look for in the people I follow. I don’t know if originality exists; it’s possible it doesn’t. It’s possible that no deep originality is more than a sum up from zero, stopping at third-, fourth-, or fifth-order skepticisms, increasingly faithful to the original with every extra pass. It’s also the perfect cudgel for all those bold skeptics.


How is a pager different from a doorbell?

We don’t think of them this way, but a pager and a doorbell are very similar. Both allow one person to send a one-way signal to another that they want their attention. But the social contexts surrounding the two are different in a way that makes it so that they are used differently, and that the signals come with different expectations. In particular, a doorbell’s context is more constrained than a pager’s, allowing the receiver to learn more from each message. It’s a nice example of social context supplementing social technology to influence interpretation. Pagers are a generalization of the doorbell.


Understanding Taylor Swift with Python

Here are the complete lyrics of Taylor Swift’s “Shake it off,” in the form of a Python string:

shakeItOffComplete = """
I stay out too late
Got nothing in my brain
That's what people say, mm, mm
That's what people say, mm, mm
I go on too many dates
But I can't make them stay
At least that's what people say, mm, mm
That's what people say, mm, mm

But I keep cruisin'
Can't stop, won't stop movin'/groovin'
It's like I got this music in my mind
Saying it's gonna be alright

'Cause the players gonna play, play, play, play, play, 
And the haters gonna hate, hate, hate, hate, hate, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break, 
And the fakers gonna fake, fake, fake, fake, fake, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off

I never miss a beat
I'm lightning on my feet
And that's what they don’t see, mm, mm
That's what they don’t see, mm, mm
I'm dancing on my own (dancing on my own)
I make the moves up as I go (moves up as I go)
And that's what they don't know, mm, mm
That’s what they don’t know, mm, mm

But I keep cruisin'
Can't stop, won't stop movin'/groovin'
It's like I got this music in my mind
Saying it's gonna be alright

'Cause the players gonna play, play, play, play, play, 
And the haters gonna hate, hate, hate, hate, hate, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break, 
And the fakers gonna fake, fake, fake, fake, fake, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off

Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off

Hey, hey, hey
Just think while you've been getting down and out about the liars
And the dirty, dirty cheats of the world
You could've been getting down
To this sick beat

My ex-man brought his new girlfriend
She's like, “Oh my God,” but I'm just gonna shake
And to the fella over there with the hella good hair
Won't you come on over, baby?
We can shake, shake, shake
Yeah, oh, oh, oh

'Cause the players gonna play, play, play, play, play, 
And the haters gonna hate, hate, hate, hate, hate, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break, 
And the fakers gonna fake, fake, fake, fake, fake, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off

Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off

Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off

Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
"""
print( type(shakeItOffComplete))

By representing it in Python we can learn about the formulae underlying pop music. Let’s break it up into parts to see how it is structured.

verse1 = """
I stay out too late
Got nothing in my brain
That's what people say, mm, mm
That's what people say, mm, mm
I go on too many dates
But I can't make them stay
At least that's what people say, mm, mm
That's what people say, mm, mm
"""

prechorus = """
But I keep cruisin'
Can't stop, won't stop movin'/groovin'
It's like I got this music in my mind
Saying it's gonna be alright
"""

chorus = """
'Cause the players gonna play, play, play, play, play, 
And the haters gonna hate, hate, hate, hate, hate, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break, 
And the fakers gonna fake, fake, fake, fake, fake, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
"""

verse2 = """
I never miss a beat
I'm lightning on my feet
And that's what they don’t see, mm, mm
That's what they don’t see, mm, mm
I'm dancing on my own (dancing on my own)
I make the moves up as I go (moves up as I go)
And that's what they don't know, mm, mm
That’s what they don’t know, mm, mm
"""

postchorus = """
Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
"""

interlude = """
Hey, hey, hey
Just think while you've been getting down and out about the liars
And the dirty, dirty cheats of the world
You could've been getting down
To this sick beat
"""

bridge = """
My ex-man brought his new girlfriend
She's like, “Oh my God,” but I'm just gonna shake
And to the fella over there with the hella good hair
Won't you come on over, baby?
We can shake, shake, shake
Yeah, oh, oh, oh
"""

With those parts, the variables verse1, verse2, prechorus, chorus, postchorus, interlude, and bridge, we can see how “Shake it off” is structured (and also represent it with a lot less typing).

shakeItOffReconstructed = (verse1 +   # (it's ok to stretch expressions over several lines. It can help readability)
                          prechorus + 
                          chorus + 
                          verse2 + 
                          prechorus + 
                          chorus + 
                          postchorus + 
                          interlude + 
                          bridge + 
                          chorus + 
                          postchorus * 3 )  # repeats three times
#print( shakeItOffReconstructed )

Is it really that simple? Let’s test and see if these strings are the same.

shakeItOffComplete == shakeItOffReconstructed

Verse-level representation

That’s some nice compression, but we can do better. There is a lot of repetition all over the song that we can capture in variables and chunk down. For example, the “mm, mm”s and “That’s what people say”s in the verses could be chunked down. But the most redundancy in any pop song is going to be in the chorus and, in this song, especially the post-chorus. Let’s see if we can rewrite them into a more compact form.

### Original
chorus = """
'Cause the players gonna play, play, play, play, play, 
And the haters gonna hate, hate, hate, hate, hate, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break, 
And the fakers gonna fake, fake, fake, fake, fake, 
Baby, I'm just gonna shake, shake, shake, shake, shake, 
I shake it off, I shake it off
"""

### Refrain
shk = 'I shake it off'   # this gets used a lot, so it gets a variable 

### Replacement
chorusReconstructed = """
'Cause the players gonna {}
And the haters gonna {}
Baby, I'm just gonna {}
{}, {}
Heartbreakers gonna {}
And the fakers gonna {}
Baby, I'm just gonna {}
{}, {}
""".format('play, ' * 5, 
           'hate, ' * 5, 
           'shake, ' * 5, 
           shk, shk, 
           'break, ' * 5, 
           'fake, ' * 5, 
           'shake, ' * 5, 
           shk, shk)

#print( chorusReconstructed)

### Test for success
chorus == chorusReconstructed

The new chorus is identical content, typed in about half as many characters. That means that, in some sense, about half the chorus of “Shake it off” is redundant.
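
If hand-counting characters feels ad hoc, a general-purpose compressor gives a similar rough estimate of the chorus’s redundancy. A sketch using Python’s standard-library zlib (the numbers will differ somewhat from the typed-character count, since zlib has its own overhead and its own notion of repetition):

```python
import zlib

chorus = """
'Cause the players gonna play, play, play, play, play,
And the haters gonna hate, hate, hate, hate, hate,
Baby, I'm just gonna shake, shake, shake, shake, shake,
I shake it off, I shake it off
Heartbreakers gonna break, break, break, break, break,
And the fakers gonna fake, fake, fake, fake, fake,
Baby, I'm just gonna shake, shake, shake, shake, shake,
I shake it off, I shake it off
"""

raw = len(chorus.encode())                      # size of the plain text
compressed = len(zlib.compress(chorus.encode(), 9))  # size after compression
print(raw, compressed)  # the compressed form is much smaller than the raw text
```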

How about the post-chorus? We’ve already defined the placeholder variable shk, which it looks like we’ll keep using.

### Original
postchorus = """
Shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
I, I, I shake it off, I shake it off
"""

### Replacement
postchorusReconstructed = """
Shake it off, {}
I, I, {}, {}
I, I, {}, {}
I, I, {}, {}
""".format( shk, shk, shk, shk, shk, shk, shk )

### Better replacement
###    (observe above that most shk's are repeated twice.
###     We can use that to get a bit more compression)
shk2 = shk + ', ' + shk
postchorusReconstructed2 = """
Shake it off, {}
I, I, {}
I, I, {}
I, I, {}
""".format( shk, shk2, shk2, shk2 )

### Even better replacement
###    (observe above that most shk's are preceded by a comma and a space.
###     We can use that too)
cshk = ', ' + shk   
shk2 = cshk + cshk
postchorusReconstructed3 = """
Shake it off{}
I, I{}
I, I{}
I, I{}
""".format( cshk, shk2, shk2, shk2 )


### Too far?
i = 'I, I'
shk2 = i + cshk + cshk
postchorusReconstructed3 = """
Shake it off{}
{}
{}
{}
""".format( cshk, shk2, shk2, shk2 )

### Test for success
postchorus == postchorusReconstructed3

We’ve reduced postchorus to almost a third of the size. Add to that that the final post-chorus of the song repeats postchorus three times, and that’s a total reduction of about nine times. In other words, from an informational standpoint, the last 20% of the song is 90% redundant.

### The last 20% (40 seconds) of "Shake it off" in one line
###  "\n" is the character representation of the line break/return key
print( ( ('Shake it off' + cshk + ('\n' + shk2) * 3 ) + '\n' ) * 3 )

Of course, it’s really not fair to evaluate music from an informational standpoint; there are other standpoints that make more sense for music. Nevertheless, this exercise does do something useful for us. Breaking a thing down into parts (“ana-lysis”) teaches us about a thing by showing us its natural fault lines and revealing the formula behind it. And it’s just that kind of breaking-things-down that programming makes you good at.

If you end up analyzing another song this way, let me know!

FYI, this is an excerpt from a lesson out of my Python course at UC Davis and on Coursera.


Your face’s facets

In this project a collection of kaleidoscopic passport photos helps us reveal the subtle asymmetries in anyone’s face. The dual portraits are made by symmetrizing the left and right halves of each face. Here are about 150 portraits from 100 people. In every photo, the leftmost portrait is the left side of the original photo (and therefore the right side of your face, looking out from your own perspective).

And as you browse, consider: do you believe the conventional wisdom that the more symmetric faces are the more beautiful (whether conventionally or unconventionally)?

Pictures are below but the link to a better gallery is
https://0w.uk/facefacets.

If you’re in the collection and wish your picture taken down, let me know. The code for automating much of this is
https://github.com/enfascination/faceFacets

By
Seth Frey (enfascination.com) and Gyorgy Feher (g.feher.0@gmail.com)

About

This entry was posted on Wednesday, August 23rd, 2023 and is filed under audio/visual, code.


Definitions that rhyme

I wrote a program for finding pairs of dictionary definitions that are secretly couplets! My rhyme detector is a little rhyme-happy (“surfaces” and “foods”?), but overall I’m very pleased. Best so far:

hoary (adj.):
grayish white.
old and trite.

and

crusade (v.):
be an advocate for.
fight a holy war.
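
My actual detector isn’t shown here, but a crude spelling-based version of the idea can be sketched like this (comparing everything from the last vowel on; shortcuts like this are exactly what make a detector rhyme-happy, since a real one would work from pronunciation):

```python
import re

def crude_rhyme(a, b):
    """Rough rhyme test: two words 'rhyme' if their spellings match
    from the last vowel onward. A real detector would use pronunciation."""
    def ending(word):
        m = re.search(r"[aeiouy][^aeiouy]*$", word.lower())
        return m.group(0) if m else word.lower()
    return ending(a) == ending(b)

print(crude_rhyme("white", "trite"))   # → True
print(crude_rhyme("brain", "say"))     # → False
```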

About

This entry was posted on Friday, August 4th, 2023 and is filed under code, life and words.


“Why can’t I work with this person?”: Your collaborator’s secret manual

In collaborations it can take time to learn to work with certain people. They might be hard to handle in many ways: in the way they volunteer feedback, or have to be asked for it; in being supportive about ideas they actually don’t like, or showing that they like an idea with no signal other than vigorous attack; in expecting constant reminders; in being excessively hands-off or hands-on; in demanding permission for everything or resenting it. It’s complicated, especially when there’s a power dynamic on top of all that: boss/employee, advisor/advisee, principal/agent.

Fortunately, in active communities of practice, there are many collaborative linkages, and the accumulated experience of those collaborators amounts to a manual for how to work with that person. Even for someone hard to work with, you have a couple of peers who manage just fine, often because they have strategies they aren’t even aware of for making it work. That knowledge gets harnessed naturally, if spottily, in my lab because my students talk to each other. One thing a student told me, and has passed on to others, is that Seth thinks out loud during project meetings, so if he’s going fast and it seems scattered and you’re totally lost about what you’re supposed to do, just wait: eventually he’ll finish and summarize.

Is there a more systematic way to harness this knowledge? The idea I came up with is a secret manual. It’s a Google Doc. The person it’s about is not invited to the doc, although they can share the link. Only past, present, or upcoming collaborators can be members. The norms are to keep it specific to collaboration tips, to keep it civil and constructive, to assume good faith and not gossip, and to keep disagreement in the comments (or otherwise distinguish advice that others have found useful from less proven or agreed-upon ideas). People with access to the manual can mention parts of it while talking with its subject, but that person can’t be shown the raw doc (it’s not secret, but it is private). The person it’s about obviously can’t contribute, but they can offer suggestions to a member for things to add (in my case, I’d want someone to add: “please feel comfortable sending persistent reminders if you need something; it’s not a bother, it’s a favor”). People could maybe be members of each other’s manuals, though maybe it’s good to have a rule that the only members of one’s secret manual are equal or lesser in power.

UPDATE: if you’re a collaborator of mine, here’s a manual that someone made for me
https://0w.uk/sethmanual


Simple heuristic for breaking pills in half


Quickly:
I have to give my son Dramamine on road trips, but only half a pill. That’s been a bit tricky. Even scored pills don’t always break cleanly, and then what do you do? Break it again? Actually, yes. I did a simple simulation to show how you can increase your chances of breaking a pill into two half-sized portions by 15-20 percentage points (e.g. from 70% to about 85%):
1. Try to break the pill in half.
2. If you succeed, great, if not, try to break each half in half.
3. Among your four resulting fragments, some pair may add up to half a pill, plus or minus.

Honestly I thought it would work better. This is the value of modeling.

Explanation:
If after a bad break from one to two pieces you break again to four pieces, you will end up with six possible pairings of the four fragments. Some of these are equivalent, so altogether going to four pieces amounts to creating two more chances to create a combination that adds to 50%. And it works: your chances go up. This is simple and effective. But not incredibly effective. I thought it would increase your chances of a match by 50 points or more, but the benefit is closer to 15-20. So it’s worth doing, but not a solution to the problem. Of course, after a second round of splitting you can keep splitting and continue the gambit. In the limit, you’ve reduced the pill to a powder whose grains can add to precisely 50% in uncountable numbers of combinations, but that’s a bit unwieldy for road trip Dramamine. For the record, pill splitters are also too unwieldy for a road trip, but maybe they’re worth considering given that my heuristic only provides a marginal improvement.
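
To make the counting concrete, here is a sketch in Python (the simulation itself is in R) with made-up fragment sizes from a hypothetical bad 0.62/0.38 first break:

```python
from itertools import combinations

# Hypothetical fragment sizes, as fractions of a whole pill, after
# re-breaking both halves of a bad 0.62/0.38 first break.
fragments = [0.17, 0.45, 0.30, 0.08]
tol = 0.1  # "close enough" = within 10 points of half a pill

pairs = list(combinations(fragments, 2))           # 6 pairs of fragments
good = [p for p in pairs if abs(sum(p) - 0.5) <= tol]
# Each pair's complement is another pair, so the 6 pairs are really
# 3 distinct ways to split the 4 fragments in two: the original bad
# break plus two fresh chances.
print(good)  # here, one complementary pair of pairings lands close enough
```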

The code:
Here is the simulation. Parameters: I allowed anything within 10 points of half a pill to be “close enough,” so anything in the 40% to 60% range counts. Intention and skill make the distribution of splits non-uniform, so I used a truncated normal with its standard deviation set to give a 70% chance of splitting the pill well on the first try.

#install.packages("truncnorm")
library(truncnorm)
inc_1st <- 0
inc_2nd <- 0
tol <- 0.1
for (i in 1:100 ) {
  #print(i);
  #a <- runif(1)
  a <- rtruncnorm(1, a=0, b=1, mean=0.5, sd=0.5^3.3)
  b <- 1 - a
  if ( a > (0.5 - tol) & a < (0.5 + tol)) {
    inc_1st <- inc_1st + 1
  } else {
    #aa <- runif(1, 0, a)
    aa <- rtruncnorm(1, a=0, b=a, mean=a/2, sd=(a*2)^3.3)
    ab <- a - aa
    #ba <- runif(1, 0, b)
    ba <- rtruncnorm(1, a=0, b=b, mean=b/2, sd=(b*2)^3.3)
    bb <- b - ba
    totals <- c(aa+ba, aa+bb)
    if (any( totals > (0.5 - tol) & totals < (0.5 + tol)) ) {
      #print(totals)
      inc_2nd <- inc_2nd + 1
    } else {
      #print(totals)
    }
  }
}

#if you only have a 20% chance of getting it right with one break, you have a 50% chance by following the strategy
#if you only have a 30% chance of getting it right with one break, you have a 60% chance by following the strategy
#if you only have a 60% chance of getting it right with one break, you have an 80% chance by following the strategy
#if you only have a 70% chance of getting it right with one break, you have an 85% chance by following the strategy

print(inc_1st)
print(inc_2nd)
print(inc_1st + inc_2nd)

All the SATOR squares in English, with code


Using code for recreational word play is very fun. Messing around with a housemate, we drunkenly built a list of all the 4- and 5-letter SATOR squares in English.

These squares are special because they read the same left-to-right, top-to-bottom, and in both directions in reverse. The famous one is ancient, from Latin:

SATOR
AREPO
TENET
OPERA
ROTAS

In the ancient world these were used as magic spells, and were possibly among the first memes. They pop up all over Western history, and into today: the movie Tenet is a reference to this square.
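
Those symmetries are easy to state in code. Here is a minimal checker, a Python sketch of the kind of test the generator runs (the function name is mine):

```python
def is_sator_square(words):
    """True if the word square has the SATOR symmetries: the grid equals
    its own transpose (rows read down the columns too), and it reads the
    same after a 180-degree rotation (so every reading direction works)."""
    n = len(words)
    if any(len(w) != n for w in words):
        return False
    g = [list(w.lower()) for w in words]
    transposed = all(g[i][j] == g[j][i] for i in range(n) for j in range(n))
    rotated = all(g[i][j] == g[n - 1 - i][n - 1 - j]
                  for i in range(n) for j in range(n))
    return transposed and rotated

print(is_sator_square(["sator", "arepo", "tenet", "opera", "rotas"]))  # → True
```

(Whether each row is a real word in both directions is a separate, dictionary-dependent test.)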

Here are the 70 5-letter squares:

‘assam’, ‘shama’, ‘sagas’, ‘amahs’, ‘massa’
‘assam’, ‘shama’, ‘samas’, ‘amahs’, ‘massa’
‘assam’, ‘shaya’, ‘sagas’, ‘ayahs’, ‘massa’
‘assam’, ‘shaya’, ‘samas’, ‘ayahs’, ‘massa’
‘asses’, ‘slive’, ‘simis’, ‘evils’, ‘sessa’
‘asses’, ‘slive’, ‘siris’, ‘evils’, ‘sessa’
‘asses’, ‘state’, ‘sagas’, ‘etats’, ‘sessa’
‘asses’, ‘state’, ‘samas’, ‘etats’, ‘sessa’
‘asses’, ‘stime’, ‘simis’, ‘emits’, ‘sessa’
‘asses’, ‘stime’, ‘siris’, ‘emits’, ‘sessa’
‘asses’, ‘swone’, ‘solos’, ‘enows’, ‘sessa’
‘ayahs’, ‘yrneh’, ‘anana’, ‘henry’, ‘shaya’
‘cares’, ‘amene’, ‘refer’, ‘enema’, ‘serac’
‘dedal’, ‘enema’, ‘deked’, ‘amene’, ‘laded’
‘dedal’, ‘enema’, ‘deled’, ‘amene’, ‘laded’
‘dedal’, ‘enema’, ‘dered’, ‘amene’, ‘laded’
‘dedal’, ‘enema’, ‘dewed’, ‘amene’, ‘laded’
‘derat’, ‘enema’, ‘refer’, ‘amene’, ‘tared’
‘gater’, ‘amene’, ‘tenet’, ‘enema’, ‘retag’
‘gnats’, ‘nonet’, ‘anana’, ‘tenon’, ‘stang’
‘hales’, ‘amene’, ‘lemel’, ‘enema’, ‘selah’
‘hales’, ‘amene’, ‘level’, ‘enema’, ‘selah’
‘laded’, ‘amene’, ‘deked’, ‘enema’, ‘dedal’
‘laded’, ‘amene’, ‘deled’, ‘enema’, ‘dedal’
‘laded’, ‘amene’, ‘dered’, ‘enema’, ‘dedal’
‘laded’, ‘amene’, ‘dewed’, ‘enema’, ‘dedal’
‘lares’, ‘amene’, ‘refer’, ‘enema’, ‘seral’
‘massa’, ‘amahs’, ‘sagas’, ‘shama’, ‘assam’
‘massa’, ‘amahs’, ‘samas’, ‘shama’, ‘assam’
‘massa’, ‘ayahs’, ‘sagas’, ‘shaya’, ‘assam’
‘massa’, ‘ayahs’, ‘samas’, ‘shaya’, ‘assam’
‘pelas’, ‘enema’, ‘lemel’, ‘amene’, ‘salep’
‘pelas’, ‘enema’, ‘level’, ‘amene’, ‘salep’
‘resat’, ‘enema’, ‘sedes’, ‘amene’, ‘taser’
‘resat’, ‘enema’, ‘seles’, ‘amene’, ‘taser’
‘resat’, ‘enema’, ‘semes’, ‘amene’, ‘taser’
‘resat’, ‘enema’, ‘seres’, ‘amene’, ‘taser’
‘resat’, ‘enema’, ‘sexes’, ‘amene’, ‘taser’
‘retag’, ‘enema’, ‘tenet’, ‘amene’, ‘gater’
‘salep’, ‘amene’, ‘lemel’, ‘enema’, ‘pelas’
‘salep’, ‘amene’, ‘level’, ‘enema’, ‘pelas’
‘selah’, ‘enema’, ‘lemel’, ‘amene’, ‘hales’
‘selah’, ‘enema’, ‘level’, ‘amene’, ‘hales’
‘serac’, ‘enema’, ‘refer’, ‘amene’, ‘cares’
‘seral’, ‘enema’, ‘refer’, ‘amene’, ‘lares’
‘sesey’, ‘edile’, ‘simis’, ‘elide’, ‘yeses’
‘sesey’, ‘edile’, ‘siris’, ‘elide’, ‘yeses’
‘sesey’, ‘elide’, ‘simis’, ‘edile’, ‘yeses’
‘sesey’, ‘elide’, ‘siris’, ‘edile’, ‘yeses’
‘sessa’, ‘emits’, ‘simis’, ‘stime’, ‘asses’
‘sessa’, ‘emits’, ‘siris’, ‘stime’, ‘asses’
‘sessa’, ‘enows’, ‘solos’, ‘swone’, ‘asses’
‘sessa’, ‘etats’, ‘sagas’, ‘state’, ‘asses’
‘sessa’, ‘etats’, ‘samas’, ‘state’, ‘asses’
‘sessa’, ‘evils’, ‘simis’, ‘slive’, ‘asses’
‘sessa’, ‘evils’, ‘siris’, ‘slive’, ‘asses’
‘shaya’, ‘henry’, ‘anana’, ‘yrneh’, ‘ayahs’
‘stang’, ‘tenon’, ‘anana’, ‘nonet’, ‘gnats’
‘start’, ‘tiler’, ‘alula’, ‘relit’, ‘trats’
‘tared’, ‘amene’, ‘refer’, ‘enema’, ‘derat’
‘taser’, ‘amene’, ‘sedes’, ‘enema’, ‘resat’
‘taser’, ‘amene’, ‘seles’, ‘enema’, ‘resat’
‘taser’, ‘amene’, ‘semes’, ‘enema’, ‘resat’
‘taser’, ‘amene’, ‘seres’, ‘enema’, ‘resat’
‘taser’, ‘amene’, ‘sexes’, ‘enema’, ‘resat’
‘trats’, ‘relit’, ‘alula’, ‘tiler’, ‘start’
‘yeses’, ‘edile’, ‘simis’, ‘elide’, ‘sesey’
‘yeses’, ‘edile’, ‘siris’, ‘elide’, ‘sesey’
‘yeses’, ‘elide’, ‘simis’, ‘edile’, ‘sesey’
‘yeses’, ‘elide’, ‘siris’, ‘edile’, ‘sesey’

To generate them yourself (and the fours), here is code that you can run by pressing Play.
https://colab.research.google.com/drive/14gaONdrLuxc3Pzz6M8y1RWpTLIdLY7H0?usp=sharing
For words we used the official Scrabble list. The tests are hard to read but they check the symmetries of the square.

The interesting findings are that

  • there are 70 5-letter ones in English,
  • 494 in 4 letters,
  • none use only familiar words,
  • few make technically readable sentences,
  • we did surprisingly well building 4-letter ones by hand, without the help of code, but
  • building 5-letter ones by hand is very, very hard, and
  • they are counter-intuitive enough that having code made it much easier to think about them and understand the constraints they have to satisfy.

The basic rules in building them are that

  • all n words have to be n letters long,
  • each should be reversible (form a word in both directions),
    • if there is a middle word (for 3- and 5- and other odd lengths), it should be a palindrome (e.g. “TENET”; palindromes are a special case of reversible words), and
  • at least one should begin with a vowel
    • In English, the only vowels that appeared in legal 5-letter vowel-ended words in our SATOR squares were a and e, with a’s accounting for the majority.

The next challenge would be to build a SATOR cube (filled or hollow—n slices of cube or a cube with one square on each face). Probably there are none in 5 letters (if there are any, I’d guess there is just one), a couple in 4 letters, and several in 3 letters, with filled obviously more rare than hollow.

Another challenge would be to find words that I want to include that aren’t on the Scrabble list and see if they change anything.


What’s the thing in your life that you’ve looked at more than anything else?

(Image: Ernst Mach, “Reclining”)

What’s the thing in your life that you’ve looked at more than anything else? Your walls? Your mom? Your hands? Not counting the backs of your eyelids, the right answer is your nose and brow. They’ve always been there, right in front of you, taking up a steady twentieth or so of your vision every waking moment.

That’s important because to have access to wonder, the joy of knowing you don’t know, you need to realize there are things that are right there that you can’t notice. If you’re wired to miss the obvious, then how can you be confident of anything?

There are answers, of course, but the question has always haunted me, and still does.


How to order a coffee in the minefield of preexisting categories


There are mostly useless bits of cognitive psychology that I’ve always loved. For example, a lot of categorization research is about life on the edge of what objects are what. How flat can a bowl be before it’s a plate? How narrow can a mug be before it’s a cup? How big can a cup be before it’s a bowl? Can it have a handle and not be a cup? When does too much handle make it a spoon? These are questions that can be used to create little microcosms for the study of things like culture, learning, expectations, and all kinds of complexities around the kinds of traits we’re surprisingly sensitive to.

Again, I hadn’t found much of it very useful until recently, when, trying to order my coffee just the way I like it, I encountered all kinds of unexpected roadblocks. The problem is that my drink doesn’t have a name, and is very close to several drinks that do, each of which comes with its own traits and customs and baggage. As a result, I’ve learned that when I’m not careful my drink gets sucked up semantically into the space of its bossy neighbors. The way I like my coffee is close-ish to ways coffee is already commonly served, but different in some important ways that can be very tough to get into a kindly but overworked barista’s busy head. Being in a non-category, close to existing ones, means that my order has to avoid the semantic basins of other, more familiar drinks in endlessly surprising and confounding ways.

To make it concrete, here’s how I like my coffee: double shot of espresso with hot water and cold heavy cream in a roughly 4 to 3 to 2 ratio. For some reason the drink just isn’t as good with too much more or less water, or half and half instead of cream, or steamed or whipped cream instead of liquid. A long-drawn shot isn’t as good as a short shot with hot water added, even though that’s almost the definition of a long shot. I don’t know why or how, but this all matters, so I try to get exactly that. I could just order it how I like it, “double shot of espresso with hot water and cold heavy cream in a roughly 4 to 3 to 2 ratio”, but I’m trying to do a few things at once:
* Keep it concise
* Get what I want
* Not be “that guy”
* And find the ask that will work on anyone: I go to a lot of different coffee shops, and I want a way to ask for this that anyone can hear and produce the same thing.

So,
“Double shot of espresso with water and heavy cream in a roughly 4 to 3 to 2 ratio”
fails on both conciseness and sparing me from being that guy. Fortunately there are a lot of ways of asking for what I want. Fascinatingly, they all fail in interesting ways:

“Give me a double Americano with less water and heavy cream”
The major nearest neighbor to what I want is the Americano. So it makes sense to use that as a shortcut, by giving directions to my drink from the Americano landmark. Seems straightforward, but the Americano, it turns out, is a bossy category, and asking for it brings a surprising amount of its unexpected baggage as well. Mainly the amount of water. In the US at least, the ratio of water to coffee is often 10:1. Just asking for “less” tends to get me 5:1 or 8:1, meaning there is still several times more water than coffee. No matter how I ask, there’s always at least twice as much.

Another bit of the Americano’s baggage is that it’s pretty commonly taken with half and half, meaning that even when I ask for heavy cream, it’s very common for me to end up with half and half, probably due to muscle memory alone. And you can’t ask for “cream,” you have to ask for “heavy cream,” or you’ll almost always get half-and-half.

“Give me a short double Americano with heavy cream”
This should work and it just doesn’t. Something about the word Americano coming out of my mouth means that I’ll get 2 or 5 or 10 times more water than coffee, no matter how I ask.

“Give me a double Americano with very little water and also heavy cream”
Same deal. Simply doesn’t work.

And all of these problems get worse depending on who takes the order. Your chances are actually OK if you’re talking to the person who will make the drink. But if you’re talking to a cashier who will then communicate, verbally, in writing, or through a computer, with the person who makes your drink, then the regularizing function of language almost guarantees that your drink will be passed on as a normal Americano. The lossy game of telephone loves a good semantic attractor.

“Give me a double Americano with heavy cream in an 8oz cup”
They’ll usually still add too much water, and just not have room for more than a drop or two of cream. This order also gets dangerously close to making me that guy.

“Give me a double espresso with hot water and heavy cream”
With all the Americano trouble I eventually learned to back further away from the Americano basin and closer to my drink’s even bigger, but somehow less assumption-laden, neighbor: espresso. Somehow, with this order and the refinement below, I end up with what I wanted more often than not. I wish I could say that this obviously works better. It works better, but it’s still not obvious. And it still goes wrong regularly, and still occasionally in strange and new ways. The most impressive is when the barista mentally translates “espresso with water” to “Americano,” pulling me fully back into the first basin, and back into all of the traps above. Less commonly they’ll mentally translate “espresso with cream” into macchiato or breve and steam the cream. This means that some categories are distorting my drink even when I’m in neighboring categories. They have that much gravity.

“Give me a double espresso with hot water and heavy cream; not an Americano, just a bit of water”
Fails on concision, and definitely makes me that guy.

“Give me equal parts espresso (a double), hot water, and heavy cream”
I came up with this to get out of the Americano trap elegantly, and it works pretty well. It shouldn’t because I actually like a bit less cream than water, and less of both than coffee (4:3:2, not 4:4:4), but the strength of the Americano attractor ends up working in my favor: the temptation to add less cream than anything means that they’ll tend to subconsciously ignore me and put the right amount of cream. But they’re also likely to still put more water than coffee. And another common failure occurs when I actually get taken literally and get equal proportions. That results in way too much cream, and I can’t complain because it’s literally what I asked for. It’s one of the more confounding failures because I can only blame myself.

“Give me a double espresso with equal parts hot water and heavy cream”
A little variation on the above, that also depends on the subconscious strength of the Americano trap. Less concise, but overall more effective.
Again, I really want 4:3:2, not 1:1:1, but it’s happened before that a subconscious understanding leads a barista to give me more water than cream. The most common failure, again, is when I’m taken literally and get equal proportions of (too much) cream. The most hilarious failure was a barista who listened perfectly but also fell into the Americano trap (“espresso + water = Americano”). I ended up with 2 parts espresso, 10 parts water, and 10 parts heavy cream. You literally couldn’t taste any coffee. Who would even do that? It was like drinking watery melted butter. Totally absurd. I was too impressed to be annoyed.

“Italiano with heavy cream”
This really would be the winner, certainly on concision, except nobody knows what an Italiano is. It’s an espresso with a tiny amount of water added, which is exactly what I want, so with anyone who has this category in their head it’s perfect: the work of carving these traits out of the Americano basin has already been done. The problem is universality: this fine category only exists in a small subset of heads. Somehow it’s the rare barista that’s heard of an Italiano. What I could do is ask for it and, if they don’t know what it is, explain it. Something new having a word is more powerful at overcoming the Americano trap than something new without its own word. But you really can’t get more “that guy” than explaining obscure coffee drinks to baristas.

“Give me a cafe con panna with a bit of hot water”
Literally, this is just what I want (panna = cream), but in practice panna is understood as whipped cream, and there’s no concise way to specify liquid.

“Espresso with heavy cream”
If you just don’t mention water at all, a lot of confusion disappears. I don’t get what I want but it’s close and concise and easy and universal. Except, I should have mentioned this sooner, a lot of places don’t even have heavy cream, just half and half. Totally different thing.

“Espresso with heavy cream … … Oh! Also, could you add a bit of hot water?”
Affected afterthought aside, this works pretty well. Asking for water after cream is a good signal not to add very much. But it’s kind of a pain for everyone, and this only works at a place once before it starts coming off as inauthentic. You can’t ask the cashier; you have to ask the person making the drink, or it’ll get lost in translation and you’ll get an Americano.

“I’d like a coffee please”
This really fails on being what I want, but succeeds on so many other dimensions that, well, sometimes I’ll just give up and do this.

A note about half-and-half. Half-and-half is supposedly equal parts milk and heavy cream. I say supposedly because, well, try this: order two drinks, one espresso that you drown in half-and-half (equal parts of each) and one espresso “with a bit of milk and heavy cream” (2:1:1). They should be identical (both are two parts espresso, one part milk, one heavy cream) but you’ll find them to be very different. Half-and-half is very much its own thing.

OK, what was this pointless madness? Here’s the idea. Think of every drink as a point on the axes of coffee, water, cream, milk, half-and-half, foam, sugar, whatever. Now carve up that space. Americano gets a big region. What happens if you’re in it is that your coordinates get distorted, maybe toward the middle of whatever region you’re in. Not just that, but points near the boundary, just outside of it, get sucked in. Something about human meaning makes it so that the act of carving a state space into semantic regions distorts it and moves things around. By understanding these processes, how they work, and how to correct for them or even exploit them, we not only get better at meaning and its games but, in the case of a nameless, obscure, specific and disregarded form of coffee, get what we want despite everything.
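For fun, the model can be sketched as a toy nearest-prototype classifier. Everything in it is invented for illustration (the axes, the prototype ratios, the category names), but it captures the basin mechanics: with only the common categories in a barista’s vocabulary, the 4:3:2 drink is absorbed by a neighboring basin; add a closer category like the Italiano and it gets captured there instead.

```python
# Toy sketch of the "semantic basin" idea: drinks are points in
# (espresso, water, cream) space, categories are prototype points, and an
# order is understood as whatever named category is nearest.
# All prototype ratios below are made-up assumptions, not real recipes.
import math

def normalize(v):
    """Scale a ratio so the parts sum to 1 (so 4:3:2 and 8:6:4 coincide)."""
    s = sum(v)
    return tuple(x / s for x in v)

def classify(order, prototypes):
    """Snap an order to the nearest named category (Euclidean distance)."""
    order = normalize(order)
    return min(prototypes,
               key=lambda name: math.dist(order, normalize(prototypes[name])))

my_drink = (4, 3, 2)  # espresso : water : cream

# Categories in a typical barista's head (illustrative ratios)
common = {"espresso": (1, 0, 0), "americano": (2, 10, 0), "breve": (1, 0, 1)}
print(classify(my_drink, common))    # the order gets absorbed by a neighbor

# Add the rare "italiano" category (espresso with a little water)
expanded = dict(common, italiano=(2, 1, 0))
print(classify(my_drink, expanded))  # a closer basin now captures it
```

With these made-up prototypes the nameless drink never maps to itself: it lands in whichever existing basin happens to be closest, which is the whole problem.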


The crises of a quantitative social scientist

  1. So I’ve always identified as an empirical-first person, and v. cagey about theory contributions in social thought. I need the world to tell me how it is, I don’t want to tell it.
  2. But I’ve been doing a lot of theory these last two years with theory people.
  3. But I’ve had to get over being self-conscious about it, since theory is so made up.
  4. But I’m starting to appreciate that made up isn’t so bad, because the name of the game is figuring stuff out together, and that applies as much to useful distinctions and language as to facts and data.
  5. But I think that data is ultimately the thing that sciences of sociality are short on.
  6. But my theory pieces are quickly eclipsing my data pieces in terms of “what the people want”
  7. But data is still a strategic advantage of mine, and something I enjoy a ton.
  8. But it takes a lot more work for a lot less out.
  9. And I’m starting to question more whether science is really the appropriate tool for learning about society: whether science as method is even ready for humans as subject. If you think about it, from cell and mouse research through the Nobel prize for lobotomies even to Facebook’s “emotion manipulation” experiments, the only times that science is really “in its element” for building knowledge about living systems is when it’s murdery.

Therefore … I don’t know. I should keep doing both, I guess. So everything is exactly as it should be.

About

This entry was posted on Monday, December 6th, 2021 and is filed under nescience, science.


Calvino excerpt: the wisdoms of knowing and not knowing


Calvino’s Mr. Palomar, “Serpents and skulls.” Mr. Palomar is getting a tour of the Toltec city of Tula from a knowledgeable local scholar, who goes deep into the mythos, symbolism, and network of associations. But they are interrupted by a schoolteacher telling his students a simpler story.

The line of schoolboys passes. And the teacher is saying, “Esto es un chac-mool. No se sabe lo que quiere decir.” (“This is a chac-mool. We don’t know what it means.”) And he moves on.

Though Mr. Palomar continues to follow the explanation of his friend acting as guide, he always ends up crossing the path of the schoolboys and overhearing the teacher’s words. He is fascinated by his friend’s wealth of mythological references: the play of interpretation and allegorical reading has always seemed to him a supreme exercise of the mind. But he feels attracted also by the opposite attitude of the schoolteacher: what had at first seemed only a brisk lack of interest is being revealed to him as a scholarly and pedagogical position, a methodological choice by this serious and conscientious young man, a rule from which he will not swerve. A stone, a figure, a sign, a word reaching us isolated from its context is only that stone, figure, sign, or word: we can try to define them, to describe them as they are, and no more than that; whether, beside the face they show us, they also have a hidden face, is not for us to know. The refusal to comprehend more than what the stones show us is perhaps the only way to evince respect for their secret; trying to guess is a presumption, a betrayal of that true, lost meaning.

About

This entry was posted on Tuesday, November 9th, 2021 and is filed under books, nescience, science.


Almost-puns: the highest art


Puns are the lowest art. They’re easy and hammy, remarkable only for representing the biggest missed opportunity in humor: they’re as funny as you can get without being in any way funny.

So imagine how delightful it is that you’ve got, right there next to punnery, the highest art of all: almost-punnery. You all but say a pun, just enough to make others take the final step and think it. Shakespeare scholar Stephen Booth called them “unexploded puns,” and argued that they were the root of Shakespeare’s genius (as Jillian and I laid out in Nautilus Magazine).

Booth’s cleanest example was from Longfellow’s poem Hiawatha’s Childhood

Rocked him in his linden cradle,
Bedded soft in moss and rushes,
Safely bound with reindeer sinews;
Stilled his fretful wail by saying,
“Hush, the Naked Bear will hear thee!”

“Naked bear”?! You mean “Bare Bear”!!! Except Longfellow didn’t make the pun. He could have. Instead, he makes you make the pun. Maybe not consciously, maybe not on any level at all. He just lays it out there, an opportunity to give your brain a little pop, take it or leave it. It’s like a pun, but with restraint, economy, and style: none of the bad of punnery, all of the good, and then some.

Or maybe there’s nothing there at all. Maybe by naked bear Longfellow just meant naked bear, no grin on his face, and Booth is reading into it too much. That doubt adds to the tension that makes almost puns great: you get to add some tension play to the mix by planting the pun with subtle suggestions that you seem to know it’s there. In Longfellow’s case, the Hiawatha poems don’t rhyme, so you can’t argue that he was in the rhyming headspace that would have made “bare bear” obvious. But “naked bear” is pretty redundant. Bears are naked. Syllables cost space, and you don’t spend them to say things that you’re not trying to say. In fact, what better way to spend them than make your reader generate even more: “Naked bear … bare bear!”; five syllables for the price of three.

Now there’s no proving the author’s intentions, and where you can, the effect is ruined. With proof you experience the author as wink-winky, and suddenly “naked bear” is no better than a pun. And when there’s doubt, another thing happens: you start questioning and digging in everywhere, concerned that you’re missing something, that there’s more. By being doubtful and withholding, almost-puns create the experience of poetic richness.

Playing with almost puns

My own biggest exploit was in grad school. I had a colleague who was working really hard at his desk. It was a tiny desk in a tiny office, so he was crammed right up against the wall, his forehead just a couple feet from an anatomical drawing of testicles I had given him, a leftover from a recent exhibit on historical medical texts I had helped break down.

The setup for the pun was right there, pristine. I could have just stood there in the door, ham turned all the way up, and yelled: “Hey, you’re really BALLS TO THE WALL!!!” It would have jumped him out of his seat.

But I didn’t do that. Instead I spent the next 15 minutes setting it up, crafting the conversation, nudging and covertly cajoling, trying to get him to say it to me.

“Hey buddy, you’re really working hard, huh? I see you put the drawing right there in front of you on the wall. Gee, you’re really getting a lot done. It’s funny that your desk is facing that way, and not the window. Are you more productive that way? You know, I never took a good look at it though, is that pubes? I can’t tell from here. Wow, you’re almost up right against them. I hope I’m not interrupting; you must be busy.”

Eventually, with a lot of work, he gets this stupid grin on his face and says “Oh, you bet! I’m really BALLS TO THE WALL!!!” It’s probably one of the triumphs of my life.

Playing with almost puns with others

And that’s not even the good part. I once lived with several others who’d learned the same aesthetic. We’d all taken Booth’s class, and it ended up that we were always trying to pull puns out of each other, in every conversation. Eventually puns were never said and always thought. To an outsider, it would look like standing among these nerds having a normal conversation, except they won’t stop with the wicked shared glances, and you can’t help but wonder if something’s going on.

It was a lot like that joke about the old-timers who have been telling the same jokes for so long that they now tell them by number: “62! Har har har. 87! Har har har.” You’re telling the joke and nailing the punchline without telling the joke or nailing the punchline.

It’s in that cohort of Booth fans that I got the closest I’ve ever been to a separate plane of shared meaning. The bond was amazing, all due to unsaid word play. It was mind reading. So much more than a pun.

That’s it. As far as I’m concerned it’s the highest art form. Incepting puns. I only regret we don’t have a good word for it.


Visualizing the 4th dimension in 1936 (Jean Painlevé documentary — 10 min)


This visualization effort was clearly inspired by Edwin Abbott’s book Flatland. It’s in French, but YouTube’s automatic translations have become excellent in the last few years. Plus you can put together most of the content from the visuals, which are the best part. I’m enough into the look of this retro stuff (the staid narration! the graininess! the effects! the props!) that I don’t really need comprehension to get from this everything I need.

Jean Painlevé may have been the first science documentarian. He’s best known for his sea life documentaries, which precede Jacques-Yves Cousteau’s, but as you can see he did lots of other stuff. His parents were Victorian-era free-love anarchist aristocrats.

I crush majorly on Painlevé; look up his other stuff as well.

About

This entry was posted on Monday, October 18th, 2021 and is filed under Uncategorized.


Critiques of the Ostrom scholarship

I got fascinated trying to find the most critical criticisms of Elinor Ostrom’s work, and went deeper than I’d expected. Overall, there’s a lot of hero worship (me included). For every paper that criticizes her on a point, there’s one that holds her up as conciliating or defending or representing that exact point in an especially nuanced way.

The main criticisms that are available are of two related types,

  • that the paradigm fails to take into account critical understandings of power and agency, and
  • that it is too beholden to rational choice theory and methodological individualism, two basic tenets of economics and behavioral science.

The problem with the first criticism in the work I found is that every expression of it is pretty fluffy. I found no really clear and clean example putting this shortcoming in relief, and several papers holding her work up against Econ as an example of the opposite: that her work is valuable because it succeeds at taking into account power and agency.

The problem with the second criticism is that the best expressions of it don’t actually criticize her community’s angle on it (me included), they just rely on old and well-trod criticisms of rational choice generally.

It’s a bit disappointing that after all this digging I found no deeply undermining assumption of her frameworks to shake me to the core. But it makes sense: she was pretty reasonable and hedged her claims a lot. That’s a good way to be hard to criticize. Still, out of this whole exercise I’ve managed to come out with a third, “meta” criticism of the Ostrom scholarship: the hero worship itself. There’s a tacit hierarchy in the Ostrom community of people who can assert the legitimacy to improve and criticize her work (not just apply it), with former students and collaborators at the top, most comfortable saying she missed this or was wrong about that. It could be worse: they could be closed-circle hero-worshipping keepers of the flame. But even that hierarchy is causing problems:

  • her frameworks change and improve slowly and in a very hard-to-track way (there used to be 8 design principles; now there are 10),
  • there’s a lot of uncritical copy/paste application of her frameworks, rather than development of them, and
  • there’s a tendency to see Ostrom’s contributions as part of the future rather than part of the past, which makes the community vulnerable to developing blind spots.

Here are the least softball critiques that I was able to find.
Cleaver F (2001) Institutional Bricolage, Conflict and Cooperation in Usangu, Tanzania. IDS Bulletin 32(4): 26–35. DOI: 10/bd765h.
Cleaver F (2007) Understanding Agency in Collective Action. Journal of Human Development 8(2). Routledge: 223–244. DOI: 10/crhdr9.
Kashwan P (2016) Integrating power in institutional analysis: A micro-foundation perspective. Journal of Theoretical Politics 28(1). SAGE Publications Ltd: 5–26. DOI: 10.1177/0951629815586877.
Mollinga PP (2001) Water and politics: levels, rational choice and South Indian canal irrigation. Futures 33(8): 733–752. DOI: 10.1016/S0016-3287(01)00016-7.
Mosse D (1997) The Symbolic Making of a Common Property Resource: History, Ecology and Locality in a Tank-irrigated Landscape in South India. Development and Change 28(3): 467–504. DOI: 10/ftdm7p.
Saravanan VS (2015) Agents of institutional change: The contribution of new institutionalism in understanding water governance in India. Environmental Science & Policy 53. Crafting or designing? Science and politics for purposeful institutional change in Social-Ecological Systems: 225–235. DOI: 10/f7rrw2.
Social-ecological systems, social diversity, and power on JSTOR (n.d.). Available at: https://www.jstor.org/stable/26269693?seq=1#metadata_info_tab_contents (accessed 29 September 2020).
Velicu I and García-López G (2018) Thinking the Commons through Ostrom and Butler: Boundedness and Vulnerability. Theory, Culture & Society 35(6). SAGE Publications Ltd: 55–73. DOI: 10/gfdbbs.

Note to self

I do have a few more substantive critiques of my own that I haven’t developed at all:

  1. The design principles seem to work insofar as they create a bubble within which market exchange works (within which CPRs are excludable): so how is that an improvement on “markets for everything” ideology?
  2. She has an alignment with super-libertarian public choice people in the municipality/Tiebout space that might open up some avenues for criticism.
  3. A blind-spot failure to integrate findings from the “soft stuff” in democratic theory, pretty much all of deliberative/participatory democracy.
  4. Vlad Tarko adds “There’s also a critique of the design principles as being applicable only to small scale. https://jstor.org/stable/26268233”
  5. There is a deeply baked-in assumption that when communities succeed or fail, it’s because their governance system was good or bad. But communities fail for other reasons, including endogenous ones (not just meteor strikes). A lot of online communities never take off in the first place because they’re not interesting enough to attract the critical mass necessary for governance even to be relevant. That’s not a governance failure.

About

This entry was posted on Friday, October 15th, 2021 and is filed under Uncategorized.


The simplest demo that big data breaks p-value stats


> # perfectly independent matrix of 161 observations; standard "small-n statistics"
> # (rows have different sums but are all in 4:2:1 ratio)
> tbl <- matrix(c(4, 2, 1, 48, 24, 12, 40, 20, 10), ncol=3)
> chisq.test(tbl)$p.value
[1] 1
Warning message:
In chisq.test(tbl) : Chi-squared approximation may be incorrect
> # one more observation, still independent
> tbl[3,3] <- tbl[3,3] + 1
> print(tbl)
     [,1] [,2] [,3]
[1,]    4   48   40
[2,]    2   24   20
[3,]    1   12   11
> chisq.test(tbl)$p.value
[1] 0.99974
Warning message:
In chisq.test(tbl) : Chi-squared approximation may be incorrect
> # Ten times more data in the same ratio is still independent
> chisq.test(tbl*10)$p.value
[1] 0.97722
> # A hundred times more data in the same ratio is less independent
> chisq.test(tbl*100)$p.value
[1] 0.33017
> # A thousand times more data fails independence (and way below p<0.05)
> chisq.test(tbl*1000)$p.value
[1] 0.0000000023942
> print(tbl*1000)  # (still basically all 4:2:1)
     [,1] [,2] [,3]
[1,] 4000 48000 40000
[2,] 2000 24000 20000
[3,] 1000 12000 11000

All the matrices maintain a near-perfect 4:2:1 ratio in the rows. But when the data grow from 162 to 162,000 observations, p falls from 0.99 (indistinguishable from theoretical independence) to below 0.00000001.

The problem with chi^2 tests in particular is actually old: Berkson (1938). The first solution came right after: Hotelling’s (1939) volume test. It amounts to an endorsement to do what we do today: for big data, use data-driven statistics, not small-n statistics. Small-n statistics were developed for small n.

https://www.tandfonline.com/doi/pdf/10.1080/01621459.1938.10502329
https://www.jstor.org/stable/2371512

Here’s the code:
# perfectly independent matrix of 161 observations; standard "small-n statistics"
# (rows have different sums but are all in 4:2:1 ratio)
tbl <- matrix(c(4, 2, 1, 48, 24, 12, 40, 20, 10), ncol=3)
chisq.test(tbl)$p.value

# one more observation, still independent
tbl[3,3] <- tbl[3,3] + 1
print(tbl)
chisq.test(tbl)$p.value

# Ten times more data in the same ratio is still independent
chisq.test(tbl*10)$p.value

# A hundred times more data in the same ratio is less independent
chisq.test(tbl*100)$p.value

# A thousand times more data fails independence
chisq.test(tbl*1000)$p.value
print(tbl*1000)
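If you’d rather not run R, the same demonstration fits in a few lines of plain Python. This is a sketch using only the standard library: it computes the Pearson chi-squared statistic by hand, and uses the fact that for a 3×3 table the statistic has 4 degrees of freedom, where the chi-squared tail has the closed form exp(-x/2) · (1 + x/2).

```python
# Pure-Python sketch of the big-data chi-squared demonstration.
# For even degrees of freedom 2k, P(X > x) = exp(-x/2) * sum_{j<k} (x/2)^j / j!;
# a 3x3 table has dof = 4 (k = 2), so the tail is exp(-x/2) * (1 + x/2).
import math

def chi2_pvalue(table):
    """Pearson chi-squared test of independence for a 3x3 table."""
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    # statistic: sum of (observed - expected)^2 / expected
    stat = sum((table[i][j] - row_sums[i] * col_sums[j] / total) ** 2
               / (row_sums[i] * col_sums[j] / total)
               for i in range(3) for j in range(3))
    half = stat / 2
    return math.exp(-half) * (1 + half)  # chi-squared tail with dof = 4

tbl = [[4, 48, 40], [2, 24, 20], [1, 12, 11]]
for scale in (1, 10, 100, 1000):
    scaled = [[x * scale for x in row] for row in tbl]
    print(scale, chi2_pvalue(scaled))
```

The p-values it prints track the R output above: near 1 at 162 observations, and vanishingly small once the same ratios carry 162,000 observations.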

About

This entry was posted on Sunday, October 10th, 2021 and is filed under Uncategorized.


Cancel culture and free speech are compatible, in 3 pages.


Social justice activism is bringing changes to culture and discourse, especially in the US. Those changes can cause a lot of communication breakdown, even among people who should be aligned. If you can’t stand how old liberals put so much weight on civility when the world is burning, or if you’re baffled that today’s social justice has thrown freedom of expression under the bus, or if you just think there’s too much infighting all around, then there’s a solution. It’s actually not hard to reconcile the ethics behind broad-minded liberalism and confrontational identity-driven progressivism into one framework, to explain how they can co-exist, and actually always have, serving different purposes.

Two spaces

The worldviews seem incompatible because they exist in two different spaces built on different assumptions. They are the “dialogue-first” and “politics-first” spaces. Dialogue-first spaces exist when there is physical security and everyone can assume the good faith of everyone else. These get you the familiar ideal of older liberals: unity is a goal, good intentions behind a bad action matter, civility matters, there are no bad ideas, you attack the idea not the person, speech is free but yelling doesn’t work, content trumps style, you can discuss abhorrent ideas, defend people with abhorrent views, due process is respected by all, and reason prevails.

Politics-first spaces are wild: none of the above is true. You don’t assume good intentions of others who have wronged you, you can attack people rather than ideas, vulnerability can be weakness, interest in other cultures is appropriation, race and other identity differences are recognized and even emphasized, affiliation and trust are based on those identities, the legitimacy of your input depends on them, mobbing is legitimate, a witch hunt is a tactic, silence is assent, self-censorship is tact, shutting someone down is fair game, how you come off is as important as how you are, and the weak are strong en masse.

You’re clearly in politics-first space on social media, in opinion columns, during protests, pretty much anytime that you’re in a position to offend people who are loud, effective, and inflamed enough to take you down. You’re in dialogue space when you can ask challenging, ignorant, vulnerable questions and count on sympathy, patience, and an explanation. Close family and friends, sometimes the classroom. I’ve seen that people who experience the world as hostile to their existence are often tuned for politics-first exchange.

The tensions play out

The catch is that a space can claim to be dialogue-first but be politics-first in secret. In fact, I wonder if most spaces that call themselves dialogue-first have the other dynamic under the hood. And that’s dangerous: when a political space projects dialogue values, the emphasis on good faith makes it easier to hide abusive dynamics. If there’s no blatant evidence of sexual assault, and good faith means taking assailants at their word, then the veneer for dialogue-first dynamics can perpetuate awful behavior.

In politics-first spaces, appearance is reality, and creepiness can’t lurk as easily. Politics-first spaces can be more transparent, in the sense that your happiness doesn’t depend on other people being honest. You also have more strategies. “Safe spaces” are ridiculed in dialogue space, but they are adaptive in political space. Call-out culture, cancel culture, and other seemingly unaccountable tactics are fair game, even strategic, in political space. This is all good. On the very edge of social change and activism, dialogue is naive because the consensus conspiracy of institutional violence has bad faith at its core: the civil rights movement, Apartheid resistance, and BLM. In those cases, the politics-first headspace is the right headspace.

The only truly dialogue-first spaces are those that maintain consensus from all participants all the time. If one person’s experience is that they don’t believe others are acting in good faith, it’s literally not a dialogue space anymore, no matter how many other people still believe. That sounds like an overstatement, but the proof is easy. Say someone in a dialogue-first space speaks up after covert discrimination or harassment. Do you take their claim seriously or not? If you take them seriously, then you’re acknowledging that bad faith is happening somewhere. And if you don’t, then you’re rejecting their experience out of hand, in bad faith, and the person who just broke the space is you. No matter what you do, you’ll help them break consensus.

Since anyone can call bullshit at any time, true dialogue-first spaces are fragile. Dialogue-first spaces are little islands surrounded by the political spaces that call themselves dialogue-first, which are surrounded in turn by the seas of openly political spaces. When minorities in universities say that academia isn’t actually a field of pure ideas that rewards all equally, they are saying that they are experiencing the university’s founding ideal of dialogue as just a veneer. If that’s their experience, then good faith means assuming they’re right unless proven otherwise. So universities today are listening, foregrounding their political side, and asking critically whether that founding ideal really exists for everyone. That has upsides and downsides. Firing profs for assigning Huck Finn without proper warning is the other side of the coin of finally being able to fire them for sexual harassment. And it will continue this way until affected communities feel represented and are ready to buy in again to the university’s ideal. It will take time to build back up. The thing about the university’s fragile ideal is that if it can’t be broken it’s not real.

What to do

Everyone deserves to have a dialogue space they connect with. Dialogue space is less stressful and creates more room for growth. It’s important to want and have dialogue-first spaces. But it’s also important that whatever space you’re in has the right name. So within both spaces there are important things to do.

In a dialogue-first space. It’s easy to get nostalgic for a time when people could just talk about ideas without getting mobbed on social media. But there are people who are saying that that time only existed for you. If you don’t accept their experience as true, then you are making it true by perpetuating their marginalization. So the first thing to do is take a person seriously when they are challenging the consensus of your dialogue-first spaces. Victims who come out to expose violence in superficially dialogue-first spaces often get hostility for questioning the consensus, when they should get rewarded for finding the right name. You listen to challenges because you cherish your spaces enough to question them.

In a politics-first space. The fragile consensus of dialogue-first spaces makes it hard to build back up. You can, but you need the capacity. Capacity is how much bullshit you can take before losing patience, getting frustrated, or otherwise losing good faith. People don’t get to pick their capacity, and many don’t have much. Your capacity might be higher because of your privilege or your personality or your training. Here’s how:

  1. To get two people assuming good faith from neither assuming good faith you need one person to assume good faith. That first mover should be the person with more capacity. If you’ve been blessed with high capacity, the tax on that blessing is an obligation to create a world that is dialogue-first for everyone. It’s on you to stand by dialogue-first ethics and also remain compassionate, humble, and cool in the boiling pot of politics-first exchange. You have to hold yourself to the high standards of both.

Step 1 is actually the only step. The second step is “wait.” Not because it’s all you have to do, but because it’s all you can do. You can’t make someone assume good faith, so you need to have the capacity to maintain a dialogue-first presence, model its value, and absorb political blows until others finally let their guard down and it becomes true.

Considerations

Because of all the patience and compromise involved, it is easier for pragmatists than ideologues to be first-movers. You often have to choose between saying things bluntly (“being right”) and saying things tactfully (“being effective”). You have to lead with your shortcomings and abolish pride. And you don’t just avoid behavior that actually alienates others, but behavior that comes off as potentially alienating, in the way that public figures work to avoid both impropriety and the appearance of impropriety. Overall, integrity in first-movers is maintaining the standards to thrive in both spaces.

If you do have capacity, you should get over that time you were called out unfairly, and become part of the solution. And if you can’t manage patience and waiting, then you might be more in the politics-first headspace than you realize. If your approach to defending rationality, reason, discussion, open-mindedness, freedom of expression, and other good stuff involves being defensive, dismissive, combative, sarcastic, or otherwise closed to the concerns of those who question consensus, then there’s a good chance that you’re just an agent provocateur, claiming you support dialogue-first spaces while covertly undermining them with bad faith politics-first tools. The tension between the spaces is an opportunity, not a warzone, and making war of it is a fundamental betrayal of enlightenment values. You should consider getting out of the way, until you get the help of a first-mover yourself.

It is sometimes easy to support people back into dialogue from a politics mindset, but there are failure modes. One failure is to move toward dialogue prematurely, while a community is still suffering from deeply rooted bad behavior or bad faith. So in addition to capacity, good first-movers need empathy and sensitivity, and they have to be up-to-date on a space’s current drama. They also need the integrity to not be creeps themselves, which can be hard if they have the blindspots that come from identifying as an ally. Another failure is that if someone has become powerful in a politics-first space, it might not be in their interest to change (a lot of radical Twitter), or they might only engage with people they identify with. That’s why a good first-mover is someone who is already a legitimate insider in their target identity group, which can be rare. Your identity shouldn’t matter, but in politics space it does. A third failure is if you encounter the immune response of a politics-first space. Politics-first constructs like tone-policing can be weaponized to discredit the whole idea of dialogue-first relationships. If that happens, this might not be the right time, or you might not be the right person. The last failure isn’t a failure at all. Again, on the very edge of social change and activism, political tools are the right tools. Some spaces are inherently politics-first, and some people specialize in that toolkit and thrive in that setting. If you step in where dialogue doesn’t make sense, you’ll just invite ridicule. Alinsky’s Rules for Radicals are built on exploiting the vulnerability of dialogue exchange for political wins.

This is a model. It succeeds at explaining a lot of the contradictions faced by people who are both sympathetic and wary about social justice. It explains why a lot of things that seem ugly about social justice rhetoric are adaptive in context, and what a space needs to be ready for civility discourse. It also gives a strategy for moving forward. And it gives rationalists who have been hurt something to aspire to. Hopefully this makes it easier for you to understand what’s going on with society right now, and articulate your place in change.

More notes

I’m still developing and editing this. On the one hand, there’s a lot to say, on the other, I’m trying to keep it short. Here are more dimensions as I come up with them, to maybe incorporate.

  • Audience for this piece is people with capacity and commitment to dialogue-first space.
  • Breaking consensus isn’t pointing out that there are politics. There are always politics. Consensus is the agreement that the politics aren’t bad enough for anyone to abandon dialogue-first values.
    Breaking consensus is announcing that the latent politics have become bad enough that your community’s shared commitment to dialogue-first values is causing too much harm.
  • Just like there’s always politics, there’s always power. Dialogue-first spaces can be compatible with power asymmetries (e.g. prof <-> student). What’s important is mutual assumption of good faith, and common belief in mutual assumption of good faith.
  • “Good faith” can be made more specific: good faith commitment to community’s mission (in a university, intellectually learning/growing together)
  • During social change, the pendulum overswings. Remembering that makes it easier to feel OK when it looks like the world is abandoning your values.
  • There are more than two spaces. Political spaces, violent or not, don’t put violence first. That’s the space of actual war, which makes politics civil by comparison.

About

This entry was posted on Saturday, October 9th, 2021 and is filed under Uncategorized.


The mind-expanding Internet; the like-minder finder

I’ve always sneezed in the sun, especially in the morning when I’m coming out of the dark. As an infant, I would sneeze continually: my uncle would shadow me to turn the sneezing off, and then step aside to turn it on, on and off. I only slowly gained an appreciation that sneezing in the sun was a thing, that some people did it and others didn’t, but it didn’t have a word and it was always too trivial and obscure to show any concern for. Today, it’s still niche, but it has a word (“photic sneezing”), and statistics. Researchers have hypothesized mechanisms, and built a sense of what kinds of people experience it. It can be googled for. This variety of human experience, while still niche, has been recognized and integrated, to the point that others who haven’t heard of it can learn about it and make room for it in their picture of what humans are like and how they live.

This growth in awareness of the phenomenon is some of the basis for my reserve of optimism about the future. One hopeful narrative for the course of humanity is that as we better understand each other and ourselves, we create a basis for the universal compassion that will save us and our world. It’s a utopian take that is not widely held, but does crop up in surprising places, and is often enough an impetus for journalism, research, and technology.

The Internet, despite everything, remains a powerful force for universal understanding. The Internet is absolutely an outlet for hidden or marginalized voices to be found and heard, first by finding and hearing each other. Maybe it’s the opposite as well, but that doesn’t change the fact: it’s big enough to be very much both at the same time. Through online communities, secret varieties of human experience build their sense of self, confidence, and language, enough to assert their existence to the outside world with bravery and break through the silence to attract the attention of all the rest of us. From there, acceptance can be easy or hard, but getting on the map and starting a larger conversation is a good thing, and part of expanding collective consciousness to include the actual varieties of human experience.

The examples of the Internet’s mind-expanding function are everywhere. They include once-invisible psychological and physiological quirks like cataplexy, misophonia, photic sneezing, and especially ASMR. You can even add color blindness: until the last 50 years it was possible to go most of your life and not know you were colorblind. Varieties of social experience have seen the benefits as well, with internet-induced language, elaboration, and legitimacy around historically taboo subjects like kink, non-monogamy, homosexuality, and all the other facets of sexuality, sexual preference, sex experience, and gender experience that are beginning to assert themselves.

What do these things have in common? They all connect to ways of being that are rare enough and quiet enough (either because of their triviality (misophonia) or historically taboo nature (gender variety)) to stay invisible without tools like the internet or urbanism that enable like-minded people to find each other. From contact comes community, from community comes confidence, from confidence comes visibility, from visibility comes recognition, often hard-won, and from recognition comes the expansion of collective awareness of conscious experience that is one part of universal care and something like the humanistic progress of humanity.

Overall, I think the unintended consequences of new technology are vast enough often enough to justify a critical view of progress. But it’s just as easy, and just as much of a waste to get stuck in techno-pessimism as techno-optimism, and I try to remember the good. So here’s a little nod to the Internet and its very real higher potential.


The Future: Michio Kaku’s accurate 2020 from 1997

In 1997, physicist and futurist Michio Kaku wrote this picture of daily life in 2020. He did pretty well. Not all predictors of the future do. What I enjoy the most is how magical the present becomes when it’s described as far-fetched visionary fare.

A gentle ring wakes you up in the morning. A wall-sized picture of the seashore hanging silently on the wall suddenly springs to life, replaced by a warm, friendly face you have named Molly, who cheerily announces: ‘It’s time to wake up!’
As you walk into the kitchen, the appliances sense your presence. The coffeepot turns itself on. Bread is toasted to the setting you prefer. Your favorite music gently fills the air. The intelligent house is coming to life.
On the coffee table, Molly has printed out a personalized edition of the newspaper by scanning the Net. As you leave the kitchen, the refrigerator scans its contents and announces: ‘You’re out of milk. And the yogurt is sour.’ Molly adds: ‘We’re low on computers. Pick up a dozen more at the market while you’re at it.’
Most of your friends have bought ‘intelligent agent’ programs without faces or personalities. Some claim they get in the way; others prefer not to speak to their appliances. But you like the convenience of voice commands.
Before you leave, you instruct the robot vacuum cleaner to vacuum the carpet. It springs to life and, sensing the wire tracks hidden beneath the carpet, begins its job.
As you drive off to work in your electric/hybrid car, Molly has tapped into the Global Positioning System satellite orbiting overhead. ‘There is a major delay due to construction on Highway 1,’ she informs you. ‘Here is an alternate route.’ A map appears ghostlike on the windshield.
As you start driving along the smart highway, the traffic lights, sensing no other cars on this highway, all turn green. You whiz by the toll booths, which register your vehicle PIN number with their laser sensors and electronically charge your account. Molly’s radar quietly monitors the cars around you. Her computer, suddenly detecting danger, blurts out, ‘Watch out! There’s a car behind you!’ You narrowly miss a car in your blind spot. Once again, Molly may have saved your life. (Next time, you remind yourself, you will consider taking mass transit.)
At your office at Computer Genetics, a giant firm specializing in personalized DNA sequencing, you scan some video mail. A few bills. You insert your smart wallet card in the computer in the wall. A laser beam checks the iris of your eye for identification, and the transaction is done. Then at ten o’clock two staff members ‘meet’ with you via the wall screen.

Copied without permission from The Faber Book of Utopias, Ed. John Carey, who copied with permission from Kaku’s 1997 book Visions.

About

This entry was posted on Friday, February 12th, 2021 and is filed under Uncategorized.


Changing how you think is like changing your nutrition in a way

Being surrounded by smart people makes you think about intelligence. After all, they’re all so different from each other, so how could intelligence be one thing? And what does it mean when it changes?! I have been around long enough, and changed enough times in enough ways, to watch my own intelligence wax and wane in minor ways with changes in personality, circumstance, social environment. I’ve evolved a picture of intelligence that’s in line with prominent theories like the Cattell–Horn–Carroll theory, in which general intelligence has many dimensions, turned all the way up to 11. Where many of us imagine that IQ is a single knob in the brain that is just cranked way up or down for different people, I’ve come to think of it as something with tens or hundreds of indeterminate overlapping facets that can be tweaked by nearly everything.

In that way, intelligence is a lot like nutrition. Beyond the usual macronutrients of carbohydrate, fat, and protein, our bodies rely on many, many trace chemicals. The way micronutrients work, you only need a trace amount of each of them to be generally fine. Getting more than the necessary amount doesn’t tend to provide a benefit. Megadoses of this or that vitamin, while trendy, are usually pointless, and sometimes dangerous. So the name of the nutrition game isn’t so much getting as much as possible of a few things as getting the right amount of lots of things: not too little, and maybe not too much. Maybe “you’re only as healthy as your weakest nutrient.” Or “the roof is only as high as the shortest column.” In this way, nutrition is limited by a lot of potential bottlenecks.

I think intelligence is the same way. It requires a lot of traits, and its level isn’t determined by the strongest of them, but the weakest of them. Qualities that support intelligence, and hold it back, include creativity and curiosity, but also attention, self-consciousness, self-control, arousal, and even memory, risk tolerance, and coping style. Within memory, long-term, short-term, episodic, visual, and muscle memory are probably each potential bottlenecks on general intelligence. It can probably be held back by one’s limits to think visually, and to think verbally. Processing skills like subitization are probably excellent prerequisites for strong visual reasoning. Certainly nutrition, life stability, and other aspects of nurture matter as well, like your exposure to logic and the wide variety of other tools of thought.

This doesn’t mean that traits can’t fill in for each other. Some traits can really make up for others (memory, and maybe fortitude), and can really be cranked quite high before they stop helping. But under this picture, there are limits to that kind of coping, for at least some traits, which makes them bottlenecks.

This has all borne out for me in different ways. I’ve observed myself behave with less intelligence when I’m not alert. And more interestingly, I got to observe myself become smarter after my personality changed to make me less anxious about loss and risk. Having known incredibly intelligent people with excellent memories, and watched a decline in my memory, I think my long-term memory may be my biggest bottleneck. A lot of my younger school age success was due to the fact that, once upon a time, I only had to read things once to remember them indefinitely. It isn’t like that any more. I’ve also never been much of a visual reasoner, and I’m a sorry subitizer.

This “bottleneck-driven” or “nutrition-like” picture of intelligence accounts for a lot. It seems compatible at first with theories of “multiple intelligences,” but it’s ultimately grounded in the idea of a general intelligence. General intelligence is just the idea that intelligence is general: that being smart isn’t being smart in one thing. I buy it because of something you see in education science: when you do interventions in classrooms, there is a type of student that will just do really well on every educational intervention you give them, no matter what it tests. Distributions of student performance on these kinds of assessments can be normal bell-curves, or close, with several students clustered tightly at the top of the performance range, and others distributed widely across the rest of the range. The hypothesis is that students off of the performance ceiling have one or more bottlenecks keeping them back, and they are all different in which quality is playing the role of bottleneck.

This theory has other implications as well for how we interpret learning accommodations. Rather than saying that exams miss measuring intelligence by testing only your memory and ability to sit still and concentrate, this theory more precisely says that exams test your intelligence in part by testing your memory and ability to sit still and concentrate, and that they miss intelligence by failing to measure the facets they don’t measure like creativity and curiosity. It also reinterprets what aids do. From this view, attention aids like drugs or exercises are intelligence aids for those people whose intelligence is limited by their ability to control their attention.

Going one step away from direct supporters of intelligence, there might also be traits that are only bottlenecks in certain cultures or environments. In a typical classroom setting, obedience is probably an important predictor of how effectively education feeds intelligence, and students without it may either need to develop it or seek a learning environment that doesn’t allow a trait like obedience to become an intelligence bottleneck. Conversely, it might be that among students in unstable social or cultural environments, the hyperfocus that you find with ADHD is adaptive. And obviously, in domains where knowledge is stored primarily in written words, impediments to reading, or a lack of alternatives to reading, will be indirect impediments as well.

There are obviously many things the theory misses. It’s great if there is just one way to be intelligent and lots of ways that intelligence can be limited. But if there are many kinds of intelligence, the theory only needs a couple tweaks from the nutrition model. Rather than bunching every personality or other trait that influences intelligence into the bottleneck category, driven by the minimum, you’d call some of them max-driven. In the more complicated picture, you’d say that the height of the ceiling depends on the shortest of some columns and the tallest of others.

But being no specialist in education science, I can’t seriously say that this is anything more than idle theorizing based off of personal anecdote. And any theory of such an elitist idea as intelligence is inherently going to be a little offensive. Allowing that, I imagine that this picture of it is overall less offensive than most others. All things considered, I think being such a holistic theory of intelligence makes it pretty humane, open, and empowering. It certainly makes a lot of predictions, and is actionable. In my case, if I personally buy this theory, I should probably become a better memorizer.

About

This entry was posted on Saturday, February 6th, 2021 and is filed under Uncategorized.


More on the lasting impact of cybernetics on society


Cybernetics was an important part of my intellectual development, my first hint that there was a rigorous, systematic way of approaching complex things. I eventually got over cybernetics specifically, as much too general to contribute to the observation side of science. I was also disappointed that all of its legacy in science seemed to be obscured, with many senior academics I knew privately acknowledging its importance, but publicly revealing no hints. However, I’ve slowly learned that the influence of cybernetics was much more substantial than I’ve appreciated. I’ve collected anecdotes on its development in anthropology, and one of its services to big tobacco, and I knew it was the path to Allende’s technoutopia, with help from cyberneticist Stafford Beer. But apparently it was also influential in Jerry Brown’s first governorship, and, less directly, in the first applications of graphic or interface design to computers.

Here are various quotes I pulled from the Jerry Brown article, which weaves together UC Berkeley’s full design talent, US politics, early environmentalism, and even the Dead Kennedys.

“Learn to distinguish between unity and uniformity—between God and hell.” That abouts summs [sic] up the 20th Century problem.

Eschewing the industrial iconography of steel and glass, the Bateson Building made do with concrete and wood … in order to maximize thermal performance and economy in the blazing Sacramento summer sun. The building’s understatement, which bordered on a functionalist antiaesthetic and surely contributed to its disappearance from the canon, was central to its broadly ecological mission. That mission seems to have had three main aspects: energy efficiency, interaction, and an attentiveness to systems. In pronouncing that mission, the Bateson Building represented the state’s pursuit of interdependence, adaptability, and self-reliance.

For Van der Ryn, such an integration of the greater sociopsycho-ecological whole was the central purpose of design. “The process of institution building and institutional innovation becomes more than a technical problem,” he wrote in 1968 with his then assistant, the political economist Robert Reich, who later served as secretary of labor under President Bill Clinton. “It becomes part of an overall design. It becomes utopian.” 63

The New Age state addressed a skepticism about government that ran even deeper than the culture wars. Its cybernetics and ecology countered pessimism about whether a selfless politics was even possible,

“Going into Space is an investment . . . and through the creation of new wealth we make possible the redistribution of more wealth to those who don’t have it. . . . As long as there is a safety valve of unexplored frontiers, then the creative, the aggressive, the exploitive urges of human beings can be channeled into long term possibilities and benefits. But if those frontiers close down and people begin to turn in upon themselves, that jeopardizes the democratic fabric.”

About

This entry was posted on Sunday, January 31st, 2021 and is filed under Uncategorized.


The sorry state of my optimism about humanity’s distant future

https://www.researchgate.net/publication/266854675_Social_Mobility_in_the_Transition_to_Adulthood_Educational_Systems_Career_Entry_and_Individual_Agency/figures?lo=1

I would love to be optimistic about the future. In fact, I’m actively trying. There is hope in stunning technological advances, the existence and development of progressivism, and all the license we have to abide by some lessons of history (that human potential overall gets higher) and potentially ignore others (the inevitability of resource conflict). We can imagine ignoring dark lessons of history because modern humanity has demonstrated its ability to change how it is. We’ve changed in the last 100, 50, 10, or even 5 years more than we did over thousands of our earliest years. Maybe scarcity is a solvable problem. For example, I’ve been following with awe and wonder the rapid, exciting progress of the Wendelstein 7-X experimental fusion reactor in Greifswald, Germany. The limitless energy of hot fusion, and several other emerging technologies, means that we can think about energy intensive technological solutions to otherwise forbidding problems, toward the availability of costly fresh water from sea or waste water, the availability of sufficient food and wood from ever higher input agriculture, the possibility of manually repairing the climate, undoing material waste by mining landfills and the ocean’s trash fleets, and accessing the infinite possibilities permitted by space. These things could all be in hand in the next few decades, no matter how destructive we are in the meantime. Allowing these possibilities means very actively suppressing my cynicism and critical abilities, but I think it’s a very healthy thing to be able to suppress them, and so I try.

But every argument I come up with for this rosiest view backfires into an argument against postscarcity. Here’s an unhopeful argument that I came up with recently, while trying to go the other direction. It has a few steps. Start by imagining the state of humanity on Earth as a bowling ball rolling down the lane of time. Depending which pins it hits, humanity does great, awful, or manages something in between. There are at least two gutters, representing states that you can’t come back from. They foreclose entire kinds of future path. Outside of the gutters, virtually any outcome for humanity is on the table, from limitless post-humanity to the extinction of all cellular life on earth. In a gutter, a path has been chosen, and available options are only variations on the theme that that gutter defines. The most obvious gutter is grim: if we too rapidly and eagerly misspend the resources that open our possible paths, we foreclose promising futures and end up getting to choose only among grimmer ones. A majority of climate scientists assert that we are already in a gutter of high global temperature and sea levels. There is also a hope gutter, in which technology and the progressive expansion of consciousness will get us to an upside of no return, in which only the rosiest futures are available, and species-level disaster— existential risk— is no longer feasibly on the table.

The doom gutter is the easier to imagine of the two, and therefore it is easier to overweight. It’s too easy to say that it is more likely. “You can’t envision what you can’t imagine.” The question to answer, in convincing ourselves that everything is going to be fine, is how to end up in the hope gutter, and show that it’s closer than anyone suspects. That’s what I was trying to do when I accidentally made myself more convinced of the opposite.

The first step in my thinking is to simplify the problem. One thing that makes this all harder to think through is technology. How people use technology to imagine the future depends a lot on their personality and prior beliefs. And those things are all over the place. From what I can tell, comparable numbers of reasonable people insist that technology will be our destruction and our salvation. So my first step in thinking this all through was to take technology out: humanity is rolling down the lane of time with nothing beyond the technology of 1990, 2000, or 2020, whatever. Which gutter are we most likely to find ourselves in now? Removing technology may actually change the answer a lot, depending where you come from. I think many tech utopians, as part of their faith that technology will save us, also believe that we’re doomed without it. I’ve found tech optimists to be humanity pessimists, in the sense that, if brave individuals step up to invent things that save us, it will be despite the ticking time bomb of the vast mass of humanity.

I, despite being a bit of a tech pessimist, am a humanity optimist. I think that if technology were frozen, if that path were cut off from us, we’d have a fair chance of coming to our senses and negotiating a path toward a beautiful, just, empowering future. Especially if we all suddenly knew that technology was off the table, and that nothing would save us but ourselves. I don’t think we’d be looking at a postscarcity future (there is no postscarcity without seemingly magical levels of technological advancement), but at one of the futures provided by the hope gutter. Even without technological progress, changes in collective consciousness and awareness, and increases in the space of thinkable ideas, can have and have had a huge influence on where humanity goes. Even without the deus ex machina of technology, it is still possible to have dreams about wild and wonderful possible futures. This isn’t such a fringe idea either. Some of my favorite science fiction, like Le Guin’s “Always Coming Home,” takes technology out of the equation entirely, and still imagines strange and wonderful perfect worlds.

So without technology, I actually see humanity’s path down the bowling lane of time being a lot like the path with technology: we have a wide range of futures available to us, some in the doom gutter, some in the hope gutter. That’s a very equivocal answer, indisputable to the point of being meaningless, but it’s still useful for the next step of my argument.

I do not think technology is good. I do not think it is bad. It doesn’t create better futures or worse ones. It overall just makes things bigger and more complicated. One quip I use a lot is that Photoshop (an advanced design technology) made good design better and bad design worse. Nuclear science created a revolution in energy, and also exposed us to whole new kinds of bitter end. Genetic engineering will do the same. Even medicine, possibly, if it ends up being able to serve the authoritarian ends of psychology-level social control. So unlike most people, I don’t think technological advances will bias humanity into one gutter or the other. I think technology will expand the number of ways for us to get into one gutter or the other. And it will get us into whichever gutter faster. That’s step two.

Step three of the argument starts with my assumption that the doom gutter is closer than the hope gutter. It is without technology, and it is with technology. I started off by saying that we should avoid saying that because it’s easy to say. But even then, I still think so. That’s just my belief, but this is all beliefs. One of the few clean takeaways of the study of history is that it is easier to destroy things than create them. But doom, in the model I’ve built, isn’t more likely because of technology, only because it is more likely overall. That doesn’t mean that technology will have no effect. It will bring us more quickly to whichever gutter it is going to bring us to. If at the end we’re doomed or saved because of technology, that end is less likely to happen slowly, and less likely to be imaginable.

p.s. I’d love to believe something different, that technology will save us. I’ll keep trying. We’ll see.

About

This entry was posted on Sunday, January 31st, 2021 and is filed under Uncategorized.


Our world is strange enough for these tops and dice to be


“Classical mechanics” is the name of the simplest physics. It is the physics of pool tables, physics before electricity, magnetism, relativity, and quantum effects. Nevertheless, we’re still learning new things about it. And those discoveries lead to some pretty deep toys.

Four tops

Two dice

Digging up videos of these led me to other great things, like oloids, sphericons, solids of constant width, and tops in space.


This entry was posted on Sunday, January 10th, 2021 and is filed under Uncategorized.


New paper out in the Proceedings of the Royal Society B

Society is not immutable, and it was not drawn randomly from the space of possible societies. People incrementally change the social systems that they participate in toward satisfying their own needs. This process can be conceptualized as a trajectory through the space of societies, whose “attractor” societies represent the systems that participants have selected.

I wrote a paper that implements this idea in simulation and demonstrates some simple results that fall out of it. This work explores the trajectories produced by selfish simulated agents exploring abstract spaces of economic games. It shows that the attractors produced by these artificially selfish agents can be unexpectedly fair, suggesting that the process of institutional evolution can be a mechanism for emergent cooperation.

Frey, S. and Atkisson, C. (2020). A dynamic over games drives selfish agents to win–win outcomes. Proc. R. Soc. B 287: 20202630. https://doi.org/fnq2

This paper took me almost 10 years to finish, so I’m very proud to have it out, especially in a venue as fancy as Proc B.


This entry was posted on Thursday, December 31st, 2020 and is filed under Uncategorized.


Low profile search for the cheapest, shortest domain on the Internet


Short URLs are useful in their own right. But they are in demand, prohibitively expensive, and also hard to find. You have to know some tricks to find unused URLs without raising the eyebrows of hucksters, but with the explosion of top-level domains (the end part of a URL, like .com), it’s actually possible. Using this price sheet, you can find all kinds of stuff: prices are going below the standard $15/year for .com, and also well above, like over $8000/year for a .makeup link. Rooting around, with .za not available yet, .uk comes out as the cheapest 2-letter domain per year. In .com, two-, three-, and four-character domain names are all gone, and super valuable. How about in .uk? Are there any two-, three-, or four-character domains left? I wanted to find out, so I wrote the following shell script:

#Low profile search for the cheapest, shortest domain on the Internet
for i in {0..9}; do for j in {0..9}; do whois $i$j.uk | grep "No match"; done; done
for i in {a..z}; do for j in {0..9}; do whois $i$j.uk | grep "No match"; done; done
for i in {0..9}; do for j in {a..z}; do whois $i$j.uk | grep "No match"; done; done
for i in {a..z}; do for j in {a..z}; do whois $i$j.uk | grep "No match"; done; done

The key part is whois, which takes a URL and queries an official database of registered URLs. grep pulls out all error messages returned by whois, indicating URLs that have never been registered. It returned exactly one value, meaning that out of (10+26)^2=1296 possible URLs, only one had never been registered. So here you are, talking to the proud owner of the least desirable possible 2-letter URL: 0w.uk. And rather than paying thousands or millions, I pay less than $10, less than one pays for .com or .org.

What’s so undesirable about 0w.uk? It wasn’t clear at first, but here’s what I’ve come to: Two ‘w’s are desirable because of the invocation of the World Wide Web’s “www” convention. But a single w doesn’t do that. All it does is give so many syllables that the URL takes longer to pronounce character-by-character than some five-letter URLs. And the 0, being easily confused with o, makes it so that the most available word-level pronunciations (“ow!” or “ow-wuck”) are positively misleading.

Still, it’s got some charm for being the runt of its litter. I expect to put it to good use.


This entry was posted on Wednesday, November 11th, 2020 and is filed under Uncategorized.


Now hiring graduate students and postdocs at UC Davis

Grad student

Funded graduate position in the CSS of OSS

We are hiring a graduate student in Communication or another science for an NSF-funded research position under Prof. Seth Frey, for graduate training in computational social science (CSS). This interdisciplinary work is focused on open source software (OSS) project success, and integrates social network analysis (SNA) and computational policy analysis, via natural language processing techniques. You will have an opportunity to receive training from several faculty specializing in OSS, CSS, SNA, machine learning, and the quantitative study of governance systems (Prof. Frey & Vladimir Filkov at UCD and Charlie Schweik & Brenda Bushouse at UMass Amherst). You will work closely with junior computer scientists also joining the project, and other partners.
Applicants should obviously have an interest in committing their graduate training toward a CSS expertise and show enthusiasm, promise, or experience in programming, data science, or statistics. As the project is funded for at least two years, and up to five, you should be able to make a strong claim that the subject matter is in line with your desired long-term research direction. Ph.D. students are preferred but Master’s students may apply. Submit a resume/CV and graduate exam scores (unofficial/outdated are fine). You may also submit a cover letter and links to previous research or code. Women, underrepresented minorities, and students with disabilities are encouraged to apply. For more information, review the project summary and contact Prof. Frey at sethfrey@ucdavis.edu.

Postdoc

Postdoc in OSS Sustainability

The Computer Science and Communication departments at the University of California Davis have an exciting opportunity for a postdoctoral fellow, funded by the NSF. This is for a 2-year research position at the intersection of computational social science, software engineering, and organizational governance.
The goal of the project is to study paths to sustainability that open source software projects can follow based on the experience of projects that have already become sustainable. The PIs, Prof. Vladimir Filkov and Prof. Seth Frey, are experts in software engineering data analysis, computational social science, and institutional analysis. In this project we are putting those backgrounds together to develop an infrastructure for understanding the paths to sustainability. Specifically, our goal is to gather development traces and governance/rules data from existing projects and build analytic tools to inform projects of how to improve their chances to be independent and sustainable. More information is available at https://nsf.gov/awardsearch/showAward?AWD_ID=2020751. The postdoc will be working in the Computer Science Department at UC Davis and will report to Prof. Filkov, but will be co-advised by both PIs.
A successful candidate will have strong background in programming and data science, and experience in machine learning and NLP. Other qualifications include interdisciplinary interests, a PhD in a computational field, and a strong track record of peer reviewed publications. The postdoctoral candidate will have an opportunity to learn techniques for the gathering, organization, and analysis of both structured and unstructured data, e.g. data from ASF Incubator and Linux Foundation projects. 
To apply contact Prof. Filkov at vfilkov@ucdavis.edu. Applications received by October 30 will be given full consideration. Women, underrepresented minorities, and students with disabilities are encouraged to apply.


This entry was posted on Sunday, October 4th, 2020 and is filed under Uncategorized.


How to win against Donald Trump in court


How many times have you been to court in your life? Once? Five times? Something wild, like 10 times? Donald Trump has been to court over 4,000 times.

With 330M Americans, that amounts to a 1/100,000 chance he’ll end up in court against you. Remote enough, but way more likely than winning the lottery.

1 in 100,000 is about the same as your chances of dying by assault from a sharp object, and more likely than death by poisoning, peripheral vascular disease, or “animal contact”.

So … how’s this case gonna go, and what can you do to be prepared? For starters, are you more likely to be on the offense or defense?

According to the numbers, you’ve got pretty much equal chances of being the plaintiff or the defendant. Except! Close to half of all of his lawsuits involve him being the plaintiff in a casino-related case …

… so if you’re not in court with him about a casino-related matter, chances are 4:1 you’ve got him on the defense. We’ll see that that’s the wrong place to be.

And how likely are you to win? Turns out, almost any way you slice it, and no matter what other areas of life you might think he’s a loser, Donald Trump is a winner in court. Whether on defense or offense, Trump has won almost 9 times for every case he’s lost.

Of course, that’s unambiguous wins and losses, and those account for maybe only a third of cases. What about his “wins” (adding cases closed or dismissed as defendant or settled as plaintiff) and “losses” (cases closed or dismissed as plaintiff or settled as defendant)? That’s sure to change things.

Even including cases that ended settled, closed, or dismissed, Trump still comes out on top. He “wins” about 2.5 cases for every case he “loses”. Maybe it’s because he’s always right. Maybe he picks his battles. Maybe it’s that orange teflon coating. Maybe it’s just the triumph of wealth.

Or maybe, I don’t know, what if we compare his “wins” and “losses” as defendant vs plaintiff? Is he more or less effective on the offense vs defense? Turns out that that doesn’t make a difference either. On either side of the room, he wins almost 2.5 cases for each he loses.

There is one exception. I said above that most of his suits are as plaintiff in a casino case. Same for his “wins”. A major chunk of his “wins” are real offensive wins in casino cases. If we subtract those, then in the remaining cases where he’s the plaintiff, you get almost perfectly even odds.

To be clear, that’s still not a winning bet. Win or lose, that’s one very costly coin flip on average, and perhaps more likely to be lose/lose than anything. But it’s the best it gets. As defendant in a non-casino case, which happen 5x more often, he’s still 2.5x more likely to come out on top somehow.

Slicing tinier and tinier adds more and more doubt, but there may be one exception to the exception. If you can get him to sue you specifically for a branding or trademark matter, the odds finally start to lean in your favor, at about 2.5:1 that *you’ll* win. That number is only based on a dozen or so cases. Go ahead, try your luck.

So the takeaway: If you must end up in court against Donald Trump, which is much more likely to happen than you think, you want to be the defendant in a case that’s not about casinos. You want him to sue you.

If Donald Trump sues you in a non-casino case, he’s about as likely to close the case prematurely, have it dismissed by the judge, or straight up lose as he is to win the case or get you to settle. This is ultimately not a winning scenario for you, just the least bad.

If you really want to find a way to win things against Donald Trump, become a fact checker. Of non-partisan Politifact’s 820 or so fact checks of Trump statements, 70% have been rated Mostly False or worse and only 4% have been rated True (https://www.politifact.com/personalities/donald-trump/).

… Or become Joe Biden who, as of mid-Summer 2020, is sitting on a 10 point major landslide of a lead.

Credit, disclaimers, and future directions:
Credit: I put this together from USA TODAY’s rolling analysis of Trump’s suits over the decades:
https://www.usatoday.com/pages/interactives/trump-lawsuits/
Death comparisons: https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rate

Disclaimer 1: I’m not a lawyer.

Disclaimer 2: Of his 4000 suits, USA TODAY only has outcome data for 1500. That’s not a random 1500. It might over- or undercount his settlements or prematurely closed cases, as defendant or plaintiff. It may over-count recent cases in either role. Since the outcome data covers less than half of his cases, there’s a fair chance that all of these numbers are completely wrong. Welcome to working with data about humans.

Future directions:
1. It would be great to figure out the missing data.
2. Trump’s casino-related cases have died down a lot in recent years. Other kinds of cases have ramped way up. Rerunning this analysis on only the last 5-10 years could tell a much different story.
3. Also different, of course, are cases against his administration. His record there may be worse.
4. Are you more or less likely to get sued if you’re liberal or conservative? I’d guess no difference.
5. This would all be more clear if I had made figures.

The Takeaway again: If you must end up in court against Donald Trump, which is much more likely to happen than you think, you want to be the defendant in a case that’s not about casinos. You want him to sue you.


This entry was posted on Monday, August 3rd, 2020 and is filed under Uncategorized.


Toothbrushes are up to 95% less effective after 3 months and hugging your children regularly can raise their risk of anxiety, alcoholism, or depression by up to 95%


It sounds impossible, but this statistic is true:

Hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%.

I don’t even need a citation. Does it mean parents should stop hugging their children? No. You’d think that it couldn’t possibly be right, but the truth is even better: it couldn’t possibly be wrong.

And there are other statistics just like it. I was at a Walmart and on the side of a giant bin of cheap toothbrushes I read that “a new toothbrush is up to 95% more effective than a 3 month old toothbrush in reducing plaque between teeth.”

If you’ve heard related things like “Your toothbrush stops working after three months,” from TV or word of mouth, I’ve found that they all come as butchered versions of this original statistic, which actually says something completely different.

I’d only heard the simplified versions of that stat myself, and it had always set off my bullshit detector, but what was I going to do, crusade passionately against toothbrushes? Seeing the claim written out in science speak changed things a little. The mention of an actual percentage must have struck me because I pushed my giant shopping cart in big mindless circles before the genius of the phrasing bubbled up. This is textbook truthiness: At a glance, it looks like science is saying you should buy more toothbrushes, but merely reading right showed that the sentence means nothing at all. The key is in the “up to.” All this stat says is that if you look at a thousand or a million toothbrushes you’ll find one that is completely destroyed (“95% less effective”) after three months. What does that say about your particular old toothbrush? Pretty much nothing.

And that’s how it could be true that hugging your child regularly can raise his or her risk of anxiety, alcoholism, or depression by up to 95%. Once again, the key is in the “up to.” To prove it, all I have to do is find someone who is a truly terrible hugger, parent, and person. If there exists anyone like that — and there does — then this seemingly crazy claim is actually true. If any person is capable of causing psychological distress through inappropriate physical contact, the phrase “up to” lets you generalize to everyone. Should you stop hugging your child because there exist horrible people somewhere in the world? Of course not. These statistics lead people to conclusions that are the opposite of the truth. Is that even legal?

If it’s in your mind that you should buy a new toothbrush every three months, that’s OK, it’s in mine too. And as everyone who comes within five feet of me will be happy to hear, me and dental hygiene have no conflict. But you have to know that this idea of a three month freshness isn’t based in facts. If I had to guess, I’d say that it’s a phrase that was purchased by the dental industrial complex to sell more toothbrushes, probably because they feel like they don’t sell enough toothbrushes. If it sounds tinfoil hat that an industry would invest in fake science just to back up its marketing, look at just one of the exploits pulled by Big Tobacco, very well documented in testimony and subpoenas from the 1990’s.

Press release by Colgate cites an article that never existed

Hunting to learn more about the statistic, I stumbled on some Colgate fan blogs (which I guess exist) pointing to a press release citing “Warren et al, J Dent Res 13: 119-124, 2002.”

Amazingly, it’s a fake paper! There is nothing by Warren in the Journal of Dental Research in 2002, or in any other year. But I kept looking and eventually found something that seems to fit the bill:
Conforti et al. (2003) An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies. Journal of Clinical Dentistry 14(2):29-33. Available at http://www.ncbi.nlm.nih.gov/pubmed/12723100.

First author Warren in the fictional paper is the last author in this one. It’s got to be the right paper, because their results say exactly what I divined in Walmart, that a three month old toothbrush is fine and, separately, that if you look hard enough you’ll find really broken toothbrushes. Here it is in their own words, from the synopsis of the paper:

A comparison of the efficacies of the new and worn D4 toothbrushes revealed a non-significant tendency for the new brush head to remove more plaque than the worn brush head. However, when plaque removal was assessed for subjects using brush heads with the most extreme wear, i.e., scores of 3 or 4 (n = 15), a significant difference (p < 0.05) between new and worn brush heads was observed for the whole-mouth and approximal surfaces.

This study should never have been published. The phrase “revealed a non-significant tendency” is jargon for “revealed nothing.” To paraphrase the whole thing: “We found no effect between brand new and three month old toothbrushes, but we wanted to find one, and that’s almost good enough. Additionally, a few of the toothbrushes were destroyed during the study, and we found that those toothbrushes don’t work.” The only thing in the original stat that isn’t in the Conforti synopsis is the claim about effect size: “up to 95% less effective.” The synopsis mentions no effect size regarding the destroyed toothbrushes, so either it’s only mentioned in the full version of the paper (which I can’t get my hands on) or it’s based on a really incredibly flawed interpretation of the significance claim, “p < 0.05.”

The distinguished Paul J Warren works or worked for Braun (but not Colgate), and has apparently loved it. Braun is owned by Gillette which is owned by Procter & Gamble. The paper’s first author, Conforti, works, with others of the paper’s authors, for Hill Top Research, Inc., a clinical research contractor based in West Palm Beach, Florida. I don’t think there’s anything inherently wrong with working for a corporate research lab (I do), but it looks like they produce crap for money, and the reviewers who let Braun’s empty promotional material get published in a scientific journal should be embarrassed with themselves.

The original flawed statistic snowballs, accumulating followers, rolling further and further from reality

I did a lot of digging for the quote, and found lots of versions of it, each further from reality than the one before it. Here is the first and best attempt at summarizing the original meaningless study:

A new toothbrush is up to 95% more effective than a three month old toothbrush in reducing plaque between teeth.*

A later mention by Colgate gets simpler (and adds “normal wear and tear,” even though the study only found an effect for extreme wear and tear.)

Studies show that after three months of normal wear and tear, toothbrushes are much less effective at removing plaque from teeth and gums compared to new ones.*

… and simpler ….

Most dental professionals agree you should change your toothbrush every three months.*

That last one might come from a different source, and it might reflect the statistic’s transition from a single vacuous truthy boner to vacuous widespread conventional wisdom. The American Dental Association now endorses a similar message: “Replace toothbrushes at least every 3–4 months. The bristles become frayed and worn with use and cleaning effectiveness will decrease.” To their credit, their citations don’t include anything by Warren or Conforti, but the paper they do cite isn’t much better: Their evidence for the 3–4 month time span comes from a study that only ran for 2.5 months (Glaze & Wade, 1986). Furthermore, the study only tested 40 people, and it wasn’t blind, and it’s stood unelaborated and unreplicated for almost 30 years. It’s an early, preliminary result that deserves followup. But if that’s enough for the ADA to base statements on then they are a marketing association, not the medical or scientific one they claim to be.

They also cite evidence that toothbrushes you’ve used are more likely to contain bacteria, but they’re quick to point out that those bacteria are benign and that exposure to them is not linked to anything, good or bad. Of course, those bacteria on your toothbrush probably came from your body. Really, you infect your toothbrush, not the other way around, so why not do it a favor and get a new mouth every three months?

So what now?

Buy a new toothbrush if you want, but scientifically, the 3–4 months claim is on the same level with not hugging your kids. Don’t stop hugging your kids. Brush your teeth with something that can get between them, like a cheap toothbrush, an old rag dipped in charcoal, or a stick. You can use toothpaste if you want, it seems to have an additional positive effect, probably a small one. Your toothbrush is probably working fine. If your toothbrush smells bad, you probably have bad breath.

Disclaimer is that I’m sure I could have read more, and I might be working too fast, loose, and snarky. I haven’t even read the full Conforti paper (If you have good institutional access, see if you can get it for me). I’ll dig deeper if it turns out that anyone cares; leave a comment.

Update

  • That paper that doesn’t exist actually does, sort of. The press release got the journal wrong. But that doesn’t help, because its findings have nothing to do with the claim. Conforti is still the go-to resource, and it’s still crap.
  • That journal with both the Warren and Conforti results, the Journal of Clinical Dentistry, bills itself “the highest quality peer-reviewed medium for the publication of industry-supported oral care product research and reviews.” It’s a shill venue. And they don’t offer any online access to past issues or articles, so it’s real tough to dive deeper on any of these pubs, or how they were funded.
  • The industry has now organized its science supporting its claim at this site: https://www.dentalcare.com/en-us/research/research-database-landing?research-topics=toothbrush-wear&research-products=&author=&year=&publication=&research-types=
  • Warren is now at NYU. ugh.
  • Looking outside of journals that get paid by toothbrush companies to publish meaningless research, there are failures to replicate Warren’s 3 month findings:
    • “no statistically significant differences were found for plaque score reductions for 3-month-old toothbrushes exhibiting various degrees of wear.” (Malekafzali, 2011)
    • and stronger: “A total of 238 papers were identified and retrieved in full text. Data on tooth-brushing frequency and duration, ideal bristle stiffness, and tooth-brushing method were found to be equivocal. Worn tooth brushes were not shown to be less effective than unworn brushes, and no ideal toothbrush replacement interval is evident.”(Asadoorian, 2006)

Refs

Conforti N.J., Cordero R.E., Liebman J., Bowman J.P., Putt M.S., Kuebler D.S., Davidson K.R., Cugini M. & Warren P.R. (2003). An investigation into the effect of three months’ clinical wear on toothbrush efficacy: results from two independent studies., The Journal of clinical dentistry, 14 (2) 29-33. PMID: http://www.ncbi.nlm.nih.gov/pubmed/12723100

Glaze P.M. & Wade A.B. (1986). Toothbrush age and wear as it relates to plaque control*, Journal of Clinical Periodontology, 13 (1) 52-56. DOI: http://dx.doi.org/10.1111/j.1600-051x.1986.tb01414.x.


A secure Bitcoin is a manipulable Bitcoin by definition


After several years of evidence of the volatility, insecurity, and overall reality of cryptocurrency, the conversation around it is evolving away from the strict ideal of an immutable governance machine that runs itself without politics. Part of this is due to seeing the pure market do one of the many things that pure markets do: get captured by monopolists and oligopolists. It became clear very quickly that both wealth in the network and control of its infrastructure were concentrating in the hands of a few powerful actors. While some libertarians embrace market power as legitimate power because it emerges, the fall of these currencies, Bitcoin in particular, into the hands of the few has meant the end of the honeymoon for many in the crypto space. The narrative that has evolved is that we had high hopes, but we’ve learned our lesson, and with our eyes wide open are now looking at more types of mechanisms and more complex governance to create a new type of system that’s not just a market.

The simple critique of that narrative is that market capture caught no one by surprise who has done their homework. Market concentration, the emergent accumulation of capital in the hands of the few, is as reliable a property of markets as equilibrium, nearly as old and distinguished. We don’t hear about it as much, usually because of a mix of rug sweeping (“Inequality isn’t cool. Efficiency; that’s cool.”, “¿Monopoly, what monopoly?”), gaslighting (“But The State!”) or rationalization (“Is a Pareto distribution really that unequal?”, “Inequality is inevitable no matter what”, “Corporate capture of regulators is the fault of the existence of regulators”, “theoretically a natural monopolist will act as if they have competitors so no problem”, “the market’s autocrats are bad but less bad than those of The State”). Really, it should have caught no one by surprise that Bitcoin and coins like it fell very quickly to those who probably don’t need more money.

Beyond the naïve surprise at protocol capture in crypto economies, there’s a much deeper critique. The original Bitcoin whitepaper, Nakamoto (2008), which was focused on demonstrating the security of the distributed ledger scheme, actually imposes a capture process by assumption, as part of its security arguments. The first security concern that Nakamoto tackles is the double spending problem: that an attacker who builds out a false ledger faster than the original secure ledger can undermine the currency by rewriting it to enrich themselves. This attack has proven to be more than theoretical. To secure against it, Nakamoto defined a stochastic process, a random walk. He theorized two 1-D random-walk agents (the original chain and the attacker) and estimated the difference in their positions, given an initial head start by the original. A basic result of probability theory is that, for an unbiased random walk in one or two dimensions, the probability of reaching any given point approaches 1 as time extends toward infinity. By extension, no gap between two random walkers should be insurmountable. But Nakamoto doesn’t assume an unbiased walk. He assumes that the probability of the honest chain advancing is greater than that of the false chain. This asymmetry breaks the recurrence result and guarantees that the attacker’s probability of ever catching up shrinks exponentially as the honest chain’s initial advantage grows. This was the security result.
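To see how the asymmetric walk plays out quantitatively, here is a minimal Python sketch of the catch-up probability from section 11 of the whitepaper: the chance that an attacker controlling a fraction q of the CPU power, starting z blocks behind, ever overtakes the honest chain. (The whitepaper presents the same calculation as C code; the function name here is mine.)

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability that an attacker with fraction q of total CPU power,
    starting z blocks behind, ever catches up with the honest chain.
    Follows the Poisson-weighted calculation in Nakamoto (2008), sec. 11."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    # expected attacker progress while the honest chain mines its z blocks
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        # subtract the cases where the attacker, now z - k behind,
        # never closes the gap: catch-up probability is (q/p)^(z-k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total
```

With q = 0.1 the probability is already below 0.1% at z = 5, the exponential decay the security argument relies on; with q ≥ 0.5 it is always 1, which is the flip side the post is pointing at.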

But take a look. An assumption required by the security result is that the original chain has a structural advantage.

If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes.

If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.

The assumption is being used to examine one type of attack, but assuming it for this case has consequences that are much greater. It is the assumption that miner dynamics are driven by a rich-get-richer process, which implies oligopoly. Nakamoto gives some attention to this problem, assuming that any node that accumulates enough power to cheat will have a greater incentive to stay honest. Whether that holds is another question: the point is that structural advantage and rich-get-richer dynamics were built into Bitcoin. Having an imbalance of CPU power gives an agent the power to influence the very legitimacy of the system. Power within the system confers power over it. This was just the first example of many in the crypto space of a few agents gaining the power to write the rules they follow. This can’t be framed as an embarrassing surprise, or an unfortunate lesson learned. It’s a foundation of the viability of the system. It was baked in; Bitcoin was vulnerable to capture from Day 1.

About

This entry was posted on Tuesday, May 19th, 2020 and is filed under Uncategorized.


The decentralization fetishists and the democracy fetishists


There’s a sort of battle for the soul of the Internet going on right now, among those who see it as some combination of tool and microcosm of the future of society.

On the one hand you’ve got your decentralization utopians. They’ve been bolstered by the burgeoning of crypto. You might hear libertarian bywords like “maximizing human potential.” They see antiauthoritarianism as straightforwardly anti-state, and do what they can to create weed-y technologies that can’t be tamped down: that can’t be kept out of use by “the people,” which may or may not mean entrepreneurs. They see technology as making new forms of government possible.

On the other you’ve got your democratic utopians. They’re a bit more old school: some of the original dreamers into the potential of the Internet. They’re fairly pluralistic, as you can tell by the fact that every major democracy on the Internet is completely different from every other. They see technology as a complement, and not even a strictly necessary one, to culture and community as the key to success in self-governance. They are more comfortable with bureaucracy and even some hierarchy: they’re pragmatic. Or not: I think they are because that’s what I think. I’m definitely a democracy fetishist, and not a decentralization fetishist.

A person can be both kinds of utopian. Where they differ, the decentralization types might criticize democracy as faulty, unworkable, and too bureaucratic. The democracy types might criticize the decentralization types as too focused on technology and markets, and naive about the importance of culture and the social side.

The big threat to democratic experiments online is that they require a lot of upkeep, performed by a lot of people. Members need training or skill or experience to be good stewards of a democracy. If you fall off on training, democracy devolves into forms like demagoguery. It seems to work best when members are invested enough that they think it’s worth all the time. To really be a viable model for the future, it’s not going to be enough to have a theory of institution engineering. We’re going to need a theory of culture engineering.

The big threat to decentralization experiments online, especially these days, is their vulnerability to co-optation. They rely heavily on reputation schemes, which can be thought of as token representations of a person’s quality. A lot of effort is going now into mechanisms that quantify ineffables like that. But by making these qualities into ownable goods, you make them easier to distribute in a market economy, and whatever your ideals for your tool, the tool itself is liable to get picked up by institutions with lots of money if it can help them make more. This is because markets only work on excludable, subtractable goods. When we use technology to give qualities the properties of a token, they become legible to markets, and markets can step in and do what they’re good at.

There’s also a big threat to both. Somehow, the weaknesses of each get amplified at scale. Neither grows well. Neither is robust to capture at scale.

About

This entry was posted on Saturday, May 9th, 2020 and is filed under Uncategorized.


Subjective utility paradox in a classic gift economy cycle with loss aversion

 

Decision research is full of fun paradoxes.  Here’s one I came up with the other day. I’d love to know if it’s already been explored.

  1. Imagine a group of people trading Kahneman’s coffee cup amongst themselves.
  2. If you can require that it will keep being traded, loss aversion predicts that it’ll become more valuable over time, as everyone sells it for more than they got it for.
  3. Connect those people in a ring and as the cup gets traded around its value will diverge. It will become invaluable.
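The three steps above can be sketched as a toy simulation, under the strong simplifying assumption that loss aversion shows up as a fixed markup greater than 1 at every hand-off (all names and parameter values here are mine, not from any study):

```python
# Toy model of the cup circulating around the ring: each owner, being
# loss averse, will only part with it for more than they paid -- modeled
# here as a fixed multiplicative markup per trade. (Empirical estimates
# of the loss-aversion coefficient are often around 2, hence markup=2.0.)

def price_after_trades(initial_price: float, markup: float, trades: int) -> float:
    """Price of the cup after `trades` hand-offs around the ring."""
    price = initial_price
    for _ in range(trades):
        price *= markup  # each seller recoups their 'loss' and then some
    return price

# Twice around a ring of 10 people:
print(price_after_trades(5.0, 2.0, 20))  # 5242880.0 -- the cup's price explodes
```

With any markup above 1 the price grows geometrically, which is the divergence claimed in step 3; a markup that shrinks toward 1 as memory of past prices kicks in (the counterbalance suggested in the last bullet below) would slow or stop the explosion.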

Kula bracelet
 

Thoughts

  • This could be a mechanism for things transitioning from having economic to cultural value, a counter-trend to the cultural->economic trend of Israeli-daycare-style crowding out.
  • The cup of course doesn’t actually have to attain infinite value for this model to be interesting.  If it increases in value at all over several people, then that’s evidence for the mechanism.
  • Step 2 at least, and probably 3, aren’t giant leaps. Who would know if this argument has been made before?
  • There is a real world case for this.  A bit too complicated to be clean-cut evidence, but at least suggestive.  The archetypal example of gift economies was the Kula ring, in which two types of symbolic gift were obligatorily traded for each other over a ring of islands, with one type of gift circulating clockwise  and the other counter-clockwise through the islands. These items had no practical use, they existed only to trade.  They became highly sought-after over time, as indicators of status.  In the variant described, both types of items should become invaluable over both directions around the circle, but should remain tradable for each other.
  • This example ends up as a fun paradox for utilitarianism under boundedly rational agents, a la Nozick’s utility monster, which subjectively enjoys everything more than everyone, and therefore under a utilitarian social scheme should rightfully receive everything.
  • The effect should be smaller as the number of people in the ring gets smaller.  A smaller ring means fewer steps until I’ve seen the object twice (less memory decay).  My memory that the thing was less valuable yesterday acts here as a counterbalance to the inflationary effect of loss aversion.

Metaphors are bad for science, except when they transform it


I love cybernetics, a funny body of work from the 50s-70s that attempted to give us a general theory of complex systems in the form of systems of differential equations. I love it so much that it took me years to realize that its metaphors, while offering wonderful links across the disciplines, are just metaphors, and would be incapable of leading me, the young scientist, towards new discoveries. Because in every discipline you find ideas that are just like those offered by cybernetics, except that they are specific, nuanced, grounded in data, and generative of insights. Cybernetics, for me, is a great illustration of how metaphors let science down, even when they are science-y. But there are exceptions.

James Hutton is the father of modern Geology, and in many ways he’s the Darwin of geology, although it might be more fair to say that Darwin is the Hutton of biology, as Hutton preceded Darwin by a generation, and his geology was the solid ground that the biologist’s biology ultimately grew on. Like Darwin, Hutton challenged the tacit dominance of the Bible in his field. The understanding, never properly questioned before him, had always been that the Earth was only a few thousand years old and had formed its mountains and hills and continents over several days of catastrophes. The theory that preceded Hutton is actually termed catastrophism, in contrast to the theory he introduced, that the earth is mind-bogglingly old, and its mountains and hills the result of the drip drip drip of water, sand, and wind.

How did he come up with that? Writer Loren Eiseley gives us one theory: Hutton was trained as a physician. His thesis was on the circulatory system: “Inaugural Physico-Medical Dissertation on the Blood and the Circulation of the Microcosm”. Microcosm? That word is in there because Hutton subscribed to the antiquated belief that humans were the Universe in miniature. He observed that the dramatic differences between our young and old are the result of a long timeline of incremental changes, occurring in the tension between the constant growth and death of our skin, hair, nails, and bones. If humans are the result of a drip drip drip, and they are a copy of the universe, then incremental processes must account for other things as well. And there you have it, a shaky metaphor planting the seed for a fundamental transformation not only in how humanity views the earth, but how it views time. Hutton invented deep time by imagining the Earth to be like the body. Eiseley called it “Hutton’s secret”: the Earth is an organism, and it’s there underneath us now, alive with change.

So big shaky metaphors can serve science? Really? What if Hutton was a one-off, just lucky? Except he’s not alone. Pasteur’s germ theory of disease came out of a metaphor baked deep into his elitism and nationalism: as unlikely as it seems, tiny things can kill big things, the same way the awful teeming masses threaten the greatness of Mother France and her brilliant nobles. So that’s two. And the third is a big one. No grand metaphor has been more important to the last few hundred years of science than “the universe is a clockwork”, especially to physics and astronomy. This silly idea, which had its biggest impact in the 18th and 19th centuries, made thinkable the thoughts that most of classical mechanics needs to make any sense at all.

I’m still not sure how bad metaphors lead to big advances. All I can figure is that committing 100% might force a person out of the ruts of received wisdom, and can make them receptive to the hints that other views pass over. Grand, ungrounded, wildly unfounded metaphors have a place in science, and not just any place. We can credit them with at least two of humanity’s most important discoveries from the last 250 years.

About

This entry was posted on Thursday, November 21st, 2019 and is filed under Uncategorized.


New work using a video game to explain how human cultural differences emerge

Video games can help us sink our teeth into some of the thorniest questions about human culture. Why are different people from different places different? Is it because their environments differ, because they differ, or is it all random? These are important questions, at the very root of what makes us act the way we do. But answering them rigorously and responsibly is a doozy. To really reliably figure out what causes differences in human cultures, you’d need something pretty crazy. You’d need a sort of human culture generator that creates lots of exact copies of the same world, puts thousands of more or less identical people in each of them, lets them run for a while, and does or does not produce cultural differences. In essence, you’d need God-like power over the nature of reality. A tall order, except, actually, this happens all the time. Multiplayer video games and other online communities are engineered societies that attract millions of people. It turns out that even the most powerful computers can’t host all of those visitors simultaneously, so game developers often create hundreds of identical copies of their game worlds, and randomly assign new players to one or another instance. This creates the circumstances necessary, less real than reality, but much more realistic than any laboratory experiment, to test fundamental theories about why human cultures differ. For example, if people on different copies of the same virtual world end up developing different social norms or conceptions of fairness, that’s evidence: mere tiny random fluctuations can cause societies to differ!

This theory, that societies don’t need fundamental genetic or deep-seated environmental divergences to drift apart, has revealed itself in many disciplines. It is known as the Strong Cultural Hypothesis in cultural anthropology, and has emerged under different names in economics, sociology, and even the philosophy of science. But stating a hypothesis is one thing; pinning it down with data is another.

With survey data from evolutionary anthropologist Pontus Strimling at the Institute for the Future in Sweden, from players of the classic multiplayer game World of Warcraft, we showed that populations can come to differ even when demographics and environment are the same. The game gives many opportunities for random strangers, who are likely to never meet again, to throw their lots together, and try to cooperate in taking down big boss characters. Being that these are strangers with no mutual accountability, players have lots of strategies for cheating each other, by playing nice until some fancy object comes along, and then stealing it and running away before anyone can do anything. The behavior is so common that it has a name in the game, “ninja-ing”, that reflects the shadowy and unassailable nature of the behavior.

Given all this opportunity for bad behavior, players within cultures have developed norms and rules for how and when to play nice and make sure others do. For those who want to play nice, there are lots of ways of deciding who should get a nice object. The problem then is which to choose? It turns out that, when you ask several people within one copy of the game world how they decide to share items, you’ll get great agreement on a specific rule. But when you look at the rules across different copies, the rule that everyone agreed on is different. Different copies of the world have converged on different consensuses for what counts as a fair distribution of resources. These differences emerge reliably between huge communities even though the player demographics between communities are comparable, and the game environments across those communities are literally identical.

If it seems like a tall order that video games can tell us about the fundamental nature of human cultural differences, that’s fair: players are mostly male, often young, and the stakes in a game are much different than those in life. Nevertheless, people care about games, they care about being cheated, and incentives to cheat are high, so the fact that stable norms emerge spontaneously in these little artificial social systems is evidence that, as Jurassic Park’s Dr. Ian Malcolm said of life, “culture finds a way.”

Here is the piece: paywall, no paywall.


Small-scale democracy: How to head a headless organization

“Good question. Yes, we have your best interests at heart.”

Long ago I ended up in a sort of leadership position for a member-owned, volunteer-run, consensus-based, youth-heavy multi-house housing cooperative. It has everything great and bad about democracy. Over five years I made tons of mistakes and lots of friends and lots of not friends and learned a lot about myself and how to get things done. At some point I wrote some of them down.

  • Don’t bring a proposal to a membership meeting unless you know it’s going to pass. That means doing the legwork and building support behind the scenes, and also having a feel for the temperature in the room.
  • You can’t keep all the balls in the air. Be intentional and maybe even public about which balls you are letting drop. Focus on existential threats. Accept that your org is and will always be a leaky boat.
  • Hardest biggest lesson: I am full of self-doubt by nature and profession (scientist). But I learned to stick to the course of action I thought was right, against the noise of people thinking I was wrong, without having to convince myself they were wrong and I the brave harried hero.
  • When you propose a rule, don’t write it to fix the thing that went wrong, write it to prevent anything like it from ever happening again. The difference is how much thought you put into how it happened and what has to fail for it to happen again.
  • You have to be able to deal with people not liking you without getting resentful yourself. It was hard. I never would have learned except I had to. And even then I failed a lot.
  • People respond to you genuinely publicly suffering to meet their needs. I was vulnerable a lot and begging a bunch.
  • Need to bring people together? Being the common enemy works in a pinch. This is the corollary to letting balls drop. You can use this to get everyone to take on the job of keeping those less important but still nice and now unifying balls in the air.
  • You can’t ever assume things are fine and not currently about to blow up.
  • Can’t communicate enough with the membership. It’s amazing how fast bad vibes can start to build up in secret if you aren’t constantly rehumanizing yourself.
  • Neat trick for the horse trading that is a part of getting things done: One nice right of your authority position is the power to create symbols of value out of thin air (titles, the name of a thing, the signature on an important contract). They cost you nothing to create and others value them. So create them and trade them away in exchange for things that matter. I signed four houses to my coop, and had a pet name for each one, and never got to name a single one. I always had to trade the name off in exchange for support on closing the deal.
  • Power exists and you should use it to do what you think is right for the org, even if you might be wrong, as long as you are always double checking and striving to be less wrong. Democracy is inherently political/nonideal, in the sense that it is the sum of a bunch of people doing more and less undemocratic things within the broader constraints of a democratic accountability framework. So acts of power and working behind the scenes and managing information strategically aren’t undemocratic, they are a part of it, and you should do them when you need to, and you shouldn’t do them too much or it’s your head. That’s the way of things: admitting the existence of power and the necessity of occasionally wielding it despite your ideals. Running a system by occasionally violating its tenets isn’t bad, it’s beautiful. In an internally inconsistent world, what else but an internally inconsistent organization can survive?

Things I never figured out:

  • How to guess who will be reliable before investing a bunch and being wrong.
  • How to inspire. The few times it happened were totally unreproducible flukes. So I did a lot on my own.
  • How to build an org that learns from its mistakes
  • How to build a culture with really widespread engagement, not just a good core group

About

This entry was posted on Thursday, July 18th, 2019 and is filed under Uncategorized.


In case there’s doubt that Charles Bukowski’s Post Office is about himself

http://classic.tcj.com/blog/i-dont-bother-to-defend-bukowski/
Bukowski was dissipated, a postal worker, and a dissipated postal worker.

Here are letters to “Mr. Henry C. Bukowski Jr.” from the USPS informing him of his right to participate in the various stages of disciplinary review for actions including drunken arrests and skipping work. These letters make it look like you really have to try to get fired from a government job, so we should be impressed at his commitment and hard work.



These come from the amazing collection of Indiana University’s Lilly Library in Bloomington, Indiana, one of the few special collections libraries that is open to random wanderers off the street.

About

This entry was posted on Friday, July 12th, 2019 and is filed under audio/visual, books.


Why Carl Sagan wasn’t an astronaut

Astronomer Carl Sagan probably loved space more than most people who get to go there. So why did it never occur to me that he maybe wanted to go himself? We don’t really think of astronomers as wanting to be astronauts. But once you think about it, how could they not? I was in the archives of Indiana University’s Lilly Library, looking through the papers of Herman Joseph Muller, the biologist whose Nobel Prize was for being the first to do biology by irradiating fruit flies. He was advisor to a precocious young high-school-aged Sagan, and they had a long correspondence. Flipping through it, you get to watch Sagan evolve from calling his advisor “Prof. Muller” to “Joe” over the years. You see him bashfully asking for letters of recommendation. And you get to see him explain why he was never an astronaut.

The letter

HARVARD COLLEGE OBSERVATORY
Cambridge 38, Massachusetts

November 7, 1966

Professor H. J. Muller
Department of Zoology
Jordan Hall 222
University of Indiana
Bloomington, Indiana

Dear Joe,

Many thanks for the kind thoughts about the scientist-astronaut program. I am not too old, but I am too tall. There is an upper limit of six feet! So I guess I’ll just stay here on the ground and try to understand what’s up in the sky. But a manned Mars expedition — I’d try and get shrunk a little for that.

With best wishes,
Cordially,
Carl Sagan

A little note on using special collections

A library’s Special Collections can be intimidating and opaque. But they have amazing stuff once you get started. The easiest way to get started is to show up and just ask to be shown something cool. It’s the librarian’s job to find things, and they’ll find something. But that only shows you things people know about. How do you find things that no one even knew were in there? The strategy I’m converging on is to start by going through a library’s “finding aids”, skip to the correspondence, skip to the alphabetized correspondence, Google the people who have been pulled out, and pull the folder of the first person who looks interesting. The great thing about this strategy is that even if your library only has the papers of boring people, those papers will include letters from that boring person’s interesting friends.


Bringing big data to the science of community: Minecraft Edition


Looking at today’s Internet, it is easy to wonder: whatever happened to the dream that it would be good for democracy? Well, looking past the scandals of big social media and the scary plays of autocracy’s hackers, I think there’s still room for hope. The web remains full of small experiments in self-governance. It’s still happening, quietly maybe, but at such a tremendous scale that we have a chance, not only to revive the founding dream of the web, but to bring modern scientific methods to basic millennia-old questions about self-governance, and how it works.

Minecraft? Minecraft.

That’s why I spent five years studying Minecraft. Minecraft, the game you or your kid or niece played anytime between 5 minutes and 10 years ago, consists of joining one of millions of boundless virtual worlds, and building things out of cubic blocks. Minecraft doesn’t have a plot, but narrative abhors a vacuum, so people used the basic mechanics of the game to create their own plots, and in the process catapulted it into its current status as the best-selling video game of all time. Bigger than Tetris.

Minecraft’s players and their creations have been the most visible facet of the game, but they are supported by a class of amateur functionaries that have made Minecraft special for a very different reason. These are the “ops” and administrators, the people who do the thankless work of running each copy of Minecraft’s world so that it works well enough that the creators can create.

Minecraft, it turns out, is special not just for its open-ended gameplay, but because it is “self-hosted”: when you play on a world with other people, there is a good chance that it is being maintained not by a big company like Microsoft, but by an amateur, a player, who somehow roped themselves into all kinds of uncool, non-cubic work writing rules, resolving conflicts, fixing problems, and herding cats. We’re used to leaving critical challenges to professionals and, indeed, most web services you use are administered by people who specialize in providing CPU, RAM, and bandwidth publicly. But there is a whole underworld of amateur-run server communities, in which people with no governance training, and no salary, who would presumably prefer to be doing something else, take on the challenge of building and maintaining a community of people who share a common vision, and work together toward it. When that works, it doesn’t matter if that vision is a block-by-block replica of the starship Enterprise, it’s inspiring. These amateurs are teaching themselves to build governance institutions, and each world they create is a political experiment. By my count, 19 of 20 fail, and each success and failure is a miraculous data point in the quest to make self-governance a science.

That’s the dream of the Internet in action, especially if we can bring that success rate up from 1/20, 5 percent. To really understand the determinants of healthy institutions, we’d have to be able to watch 100,000s of the nations of Earth rise and fall. Too bad Earth only has a few hundred nations. Online communities are the next best thing: they give us the scale to run huge comparisons, and even experiments. And there is more to governing them than meets the eye.

Online communities as resource governance institutions

Minecraft servers are one example of an interesting class of thing: the public web server. A web server is a computer that someone is using to provide a web service, be it a computer game, website, mailing list, wiki, or forum. Being computers, web servers have limits: finite processing power (measured in gigahertz), memory (measured in gigabytes), bandwidth (measured in gigabytes per second), and electricity (measured in $$$ per month). Failing to provide any of these adequately means failing to provide a service that your community can rely on. Being a boundless 3D multiplayer virtual world open to virtually anyone, Minecraft is especially resource intensive, making these challenges especially critical.

Any system that manages to thrive in these conditions, despite being available to the entire spectrum of humanity, from anonymous adolescents with poor impulse control to teams of professional hackers, is doing something special. Public web servers are “commons” by default. Each additional user or player who joins your little world imposes a load on it. Even if all of your users are well intentioned your server will grind to a halt if too many are doing too much, and your community will suffer. When a valuable finite resource is available to all, we call it a common pool resource, and we keep our eyes out for the classic Tragedy of the Commons: the problem of too many people taking too much until everyone has nothing.

The coincidence of the Information Age with the global dominance of market exchange is that virtually every application of advancing technology has been toward making commons extinct. Anything that makes a gadget smaller or cheaper makes it easier to privately own, and more legible to systems that understand goods as things that you own and buy and sell. This goes back all the way to barbed wire, which transformed the Wild West from the gigantic pasture commons that created cowboys into a place where it was feasible to fence off large tracts of previously wild land, and which permitted the idea of private property. (Cowboys were common pool resource managers who ranged the West bringing cow herds back to their owners through round-ups.) Private servers like those in Minecraft are a counterpoint to this narrative. Given modern technology’s hostility to the commons, it’s funny every time you stumble on a commons that was created by technology. It’s like they won’t go away.

That brings up a big question. Will commons go away? Can they be privatized and technologized away? This is one foundation of the libertarian ideology behind cryptocurrency. But the stakes are higher than the latest fad.

One claim that has been made by virtually every philosopher of democracy is that successful self-governance depends not only on having good rules in place, but on having members who hold key norms and values. Democracy has several well-known weak spots, and norms and values are its only reliable protection from demagogues, autocrats, elites, or mob rule. This sensitivity to culture puts institutions like democracy in contrast with institutions like markets, hierarchies, and autocracies, whose reliance on base carrots and sticks makes them more independent of value systems. Economist Sam Bowles distinguishes between Machiavellian and Aristotelian institutions, those that are robust to the worst citizen, and those that create good ones. The cynical versus the culture-driven institutions.

The same things that make cynical institutions cynical make them easy to analyze, design, and engineer. We have become good at building them, and they have assumed their place at the top of the world order. Is it their rightful place? In the tradition that trained me, only culture-driven institutions are up to the challenge of managing commons. If technology cannot take the commons from our future, we need to be as good at engineering culture-driven institutions as we are at engineering markets and chains of command. Minecraft seems like just a game, a kid’s game, but behind its success are the tensions that are defining the role of democracy in the 21st century.

Unfortunately, the same factors that make cynical institutions easy to build and study make culture-driven institutions hard. It is possible to make thousands of copies of a hierarchy and test its variations: that’s what a franchise is: Starbucks, McDonalds, copy, paste. By contrast, each inspiring participatory community you discover in your life is a unique snowflake whose essence is impossible to replicate, for better and worse.

By researching self-organizing communities on the Internet, wherever they occur, we take advantage of a historic opportunity to put the “science” in “political science” to an extent that was once unimaginable. When you watch one or ten people try to play God, you are practicing history. When you watch a million, you are practicing statistics. We can watch millions of people trying to build their own little Utopia, watch them succeed and fail, distinguish bad choices from bad luck, determine when a bad idea in most contexts will be good somewhere else, and build general theories of institutional effectiveness.

There are several features that make online communities ideal for the study of culture-driven institutions. Their low barrier to entry means that there are many more of them. Amateur servers are also more transparent, their smaller scale makes them simpler, their shorter, digitally recorded histories permit insights into the processes of institutional change, and the fact that they serve identical copies of known software makes it possible to perform apples-to-apples comparisons that make comparisons of the nations of Earth look apples-to-elephants.

A study of the emergence of formal governance

Big ideas are nice, but you’ve got to pin them down somehow. I began my research asking a narrower question: how and why do communities develop their governance systems in the direction of increasing integration and formalization? This is the question of where states come from, and bureaucracy, and rules. Do we need rules? Is there a right way to use them to govern? Is it different for large and small populations? To answer this, I wrote a program that scanned the Internet every couple of hours for two years, visiting communities for information about how they are run, who visits them, and how regularly those visitors return. I defined community success as the emergence of a core group: the number of players who return to a specific server at least once a week for a month, despite the thousands of other communities they could have visited. And because the typical lifespan of a server is nine weeks, it was possible to observe thousands of communities, over 150,000, over their entire life histories. Each starts from essentially the same initial conditions, a paradoxical “tyrano-anarchy” with one ruler and no rules. And each evolves in accordance with a sovereign administrator’s naïve sense of what brings people together. As they develop that sense, administrators can install bits of software that implement dimensions of governance, including private property rights, peer monitoring, social hierarchy, trade, communication, and many others. Most fail; some succeed.
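As an illustration only, here is a sketch of that core-group metric in JavaScript. The visit logs are made up; the real measurements came from the server scans described above, and the exact windowing the study used may differ from this toy version.

```javascript
// Toy version of the "core group" metric: the number of players who
// visit a server at least once a week for four consecutive weeks.
const WEEK = 7 * 24 * 3600 * 1000; // one week in milliseconds

function coreGroupSize(visitsByPlayer, startTime) {
  let core = 0;
  for (const visits of Object.values(visitsByPlayer)) {
    // A player is "core" if every one of the four weekly windows
    // starting at startTime contains at least one visit.
    const everyWeek = [0, 1, 2, 3].every(w =>
      visits.some(t => t >= startTime + w * WEEK && t < startTime + (w + 1) * WEEK)
    );
    if (everyWeek) core++;
  }
  return core;
}

// Made-up logs: one player who visits every week, one who lapses.
const logs = {
  alice: [0.5 * WEEK, 1.5 * WEEK, 2.5 * WEEK, 3.5 * WEEK],
  bob:   [0.5 * WEEK, 1.5 * WEEK],
};
console.log(coreGroupSize(logs, 0)); // 1
```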

According to my analysis, large communities seem to be the most successful the more actively they attend to the full range of resource management challenges, and, interestingly, the more they empower the sole administrator. Leadership is a valuable part of a successful community, especially as communities grow. The story becomes much harder to align with a favorite ideology when we turn our focus to small communities. It turns out that if your goal is to run a community of 4, rather than 400, regular users, there is no governance style that is clearly more effective than any other: be a despot, be a socialist, use consensus or dice. With few enough people involved, arrangements that seem impossible can be made to work just fine.

The future

What this project shows is that rigorous comparisons of very large, well-documented populations of political experiments make it possible to understand the predictors of governance success. This is important for the future of participatory, empowering governance institutions. Until effective community building can be reduced to a formula, effective communities will be rare, and we humans will continue to fail to tap the full potential of the Internet to make culture-driven institutions scalable, replicable, viable competitors to the cynical institutions that dominate our interactions.

With more bad news every day about assaults on our privacy and manipulation of our opinions, it is hard to be optimistic about the Internet, and what it will contribute to the health of our institutions. But, working diligently in the background, is a whole generation of youth who have been training themselves to design and lead successful communities. Their sense of what brings people together doesn’t come from a charismatic’s speechifying, but their own past failures to bring loved ones together. They can identify the warning signs of a nascent autocrat, not because they read about autocracies past, but because they have personally experienced the temptation of absolute power over a little virtual kingdom. And as scientists learn these lessons vicariously, at scale, self-governance online promises not only to breed more savvy defenders of democracy, but to inform the design and growth of healthy, informed participatory cultures in the real world.


H. G. Wells on science and humility

Cosmos
“It is this sense of unfathomable reality to which not only life but all present being is but a surface, it is this realization “of the gulf beneath all seeming and of the silence above all sounds,” which makes a modern mind impatient with the tricks and subterfuges of those ghost-haunted apologists who are continually asserting and clamouring that science is dogmatic—with would-be dogmas that are forever being overthrown. They try to degrade science to their own level. But she has never pretended to that finality which is the quality of religious dogmas.” — H.G. Wells in “Science and Ultimate Truth”

Also, am I the only one who always confused George Orwell, H. G. Wells, and Orson Welles?

About

This entry was posted on Tuesday, May 14th, 2019 and is filed under Uncategorized.


A recent history of astrology


I made a site this summer—http://whatsyoursign.baby—that’s a sort of glorified blog post about what happens when you go ahead and give astrology a basis in science. I wrapped it up with an explainer that was hidden in the back of the site, so I’m publishing the full thing here.

The history

Before the 17th century, Westerners used similarity as the basis for order in the world. Walnuts look like brains? They must be good for headaches. The planets seem to wander among the stars, the way that humans walk among the plants and trees? The course of those planets must tell us something about the course of human lives. This old way of thinking about the stars persists today, in the form of astrology, an ancestor of the science of astronomy that understands analogy as a basis of cosmic order.

For each planet, there is an earthbound object that exerts the same gravitational pull from about two meters away. If you don’t believe that a shared spiritual essence binds the objects of the cosmos to a common fate, keep in mind that the everyday object most gravitationally similar to Uranus is the toilet.
It was our close relationship to the heavenly bodies that slowly changed the role of similarity in explanation, across the sciences, from something in the world to something in our heads. Thanks to the stargazers, similarity has been almost entirely replaced by cause and mechanism as the ordering principle of nature. This change was attended by another that brought the heavenly bodies down to earth, literally.

Physics was barely a science before Isaac Newton’s insights into gravity. But Newton’s breakthrough was due less to apples than to cannons. He asked what would happen if a cannonball were shot with such strength that, before it could fall some distance towards the ground, the Earth’s curvature had moved the ground away by the same amount. The cannonball would be … a moon! Being in orbit is nothing more than constantly falling! Through this and other thought experiments, he showed that we don’t need separate sciences for events on Earth and events beyond it (except, well, he was also an occultist). Newton unified nature.

Gravity—poorly understood to this day—causes masses to be attracted to each other. It is very weak. If you stand in front of a large office building, its pull on you is a fraction of the strength of a butterfly’s wing beats. The Himalayas, home of the tallest peak on Earth, have enough extra gravity to throw off instruments, but not enough to make you heavier. Still enough to matter: Everest’s extra gravity vexed the mountain’s first surveyors, who couldn’t get a fix on its height because they couldn’t get their plumb lines plumb.
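The toilet comparison a couple of paragraphs back is easy to sanity-check with Newton’s law. A rough sketch in JavaScript, where the masses and distances are my own back-of-envelope assumptions (Uranus at roughly its closest approach to Earth, a porcelain toilet of about 45 kg at two meters):

```javascript
// Gravitational acceleration toward a mass m at distance r: a = G*m / r^2
const G = 6.674e-11; // gravitational constant, m^3 kg^-1 s^-2

function pull(massKg, distanceM) {
  return G * massKg / (distanceM ** 2); // m/s^2
}

// Assumed figures, not from the original post:
const uranus = pull(8.68e25, 2.6e12); // Uranus, near closest approach to Earth
const toilet = pull(45, 2);           // a toilet, two meters away

console.log(uranus.toExponential(2)); // on the order of 1e-9 m/s^2
console.log(toilet.toExponential(2)); // same order of magnitude
```

Both work out to a bit under a billionth of a g, within about 15% of each other, which is the point: at these scales, proximity trades off against mass almost absurdly well.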

So is the gravity of faraway bodies strong enough to change your fate? And if so, how? Following the three-century impact of science on thought, modern astrologists have occasionally ventured to build out from the mere fact of astrology into explanations of how it must actually work; to bring astrology into nature right there with physics. Most explanations have focused on gravity, that mysterious force, reasoning that the patterns of gravitational effects of such massive bodies must be large, unique, or peculiar enough to leave some imprint at the moment of birth.

But by making astrology physical, we leave open the possibility that it is subject to physics. If we accept astrology and know the laws of gravity, we should be able to reproduce the gravitational fingerprint of the planets and bring our cosmic destiny into our own hands.

Refs

Foucault’s “The order of things”
Bronowski’s “The common sense of science”
Pratt, 1855 “I. On the attraction of the Himalaya Mountains, and of the elevated regions beyond them, upon the plumb-line in India”

About

This entry was posted on Wednesday, October 24th, 2018 and is filed under believers, science.


New work on positive and negative social influence in social media: how your words come back to haunt you

I have a paper that will be coming out in the upcoming Big Data special issue of Behavior Research Methods, a top methods journal in psychology. It’s called “The rippling dynamics of valenced messages in naturalistic youth chat” and it is out in an online preview version here:
https://link.springer.com/article/10.3758%2Fs13428-018-1140-6

The paper looked at hundreds of millions of chat messages in an online virtual world for youth. The popular pitch for the piece is about your words coming back to haunt you on social media. That’s one takeaway you might draw from the work we did. We looked at social influence: how my words or actions affect you. Of course, a lot of people look at social influence. Some papers look at influence over minutes, and that’s good to do because it might help us understand behavior in, say, online political discussion. Others look at social influence over years, and that’s also good to do because it tells us how our peers change us in the long term. But say you wanted the God’s eye view of specifically what kinds of small daily interactions have the smallest or largest effect on long-term influence. That would really get at the mechanisms of the emergence of identity and, in some sense, social change. But the same things that make that kind of conclusion exciting also make it hard to reach. Short-term social influence is a tangle of interactions, and long-scale influence is a tangle of tangles. We were able to untie the knot just a bit. Specifically, we reconstructed the flow of time for chat messages as they rippled through a chat room and, reciprocally, as they rippled back to the original speaker. The finding was that, when I say something, that thing elicits responses in two seconds (predictably), and keeps eliciting responses for a minute, getting stronger in its effects quickly, and then slowly tapering off. The effect of a single chat event is to produce a wave of chat events stretched out over time. And each of those itself causes ripples that affect everyone else further, including the speaker. Putting it all together, your words’ effect on others ripples back to affect you, in a wave that starts around 8 seconds and continues for several minutes, almost ten if you were being negative.
We were able to count the amount of chat that occurred as a consequence of the original event, chat that wouldn’t have occurred if the original message hadn’t been sent. By isolating your effect on yourself through others, and mapping that wave’s effects from 2 seconds to thirty minutes, we’re able to put a quantitative description on something we’ve always known but have rarely been able to study directly: the feedback, self-activating nature of conversation and influence. If chat rooms are echo chambers, we were able to capture not just others’ echoes of you, but your own echoes of yourself in the past.

Social scientists are very ecological in their understanding of causes and effects. If you stay close to the data, you are bound to see the world in terms of everything affecting everything. It’s what makes social science so hard to do. It’s also what makes virtual worlds so exciting. They are artificial places composed of real people. Stripped down, the social interactions they host can be seen more clearly, and you can pick tangles apart in a way you couldn’t do any other way. For this study, we were able to use a unique property of online youth chat to pry open an insight into how people’s words affect each other over time. To really do that, you’d have to piece out all the ways I affect myself: I hear myself say words, and that changes me. I anticipate others responding to my words, and that changes me. Others actually respond, and their responses change me. Those are all different ways that I can change me, and it seems impossible to separate them. The accomplishment of this project is that we were able to use the artificiality of the virtual world to separate the third kind of change from the other two, to really zoom in on one specific channel for self-influence.

This world is designed for kids, and kids need protection, so the system has a safety filter built in. The way the filter works is that if it finds something it doesn’t like, it won’t send it, but it also won’t tell you that it didn’t send. The result is that you think you sent a chat, but no one ever saw it. That situation never occurs in real life, but because it occurs online, we are able to look at the effect of turning off the effects of others hearing your words, without changing either your ability to hear your own words or your belief that others heard you. With this and other features of the system, we were able to compare similar messages that differed only on whether they were sent or were only thought to have been sent. By seeing how you are different a few seconds after, a few minutes after, when you did and didn’t actually reach others, we’re able to capture the rippling of influence over time.

This is a contribution to theory and method because we assumed for decades that this kind of rippling and tangling of overlapping influences is what drives conversation, but we’ve never been able to watch it in action, and actually see how influence over seconds translates into influence over minutes or tens of minutes. That’s a little academic for a popular audience, but there’s a popular angle as well. It turns out that these patterns are much different depending on whether the thing you said was positive or negative. That has implications for personally familiar online phenomena like rants and sniping. The feedback of your actions onto yourself through others corresponds to your rants and rages negatively affecting you through your effects on those you affected, and we’re able to show precisely how quickly your words can come back to haunt you.

About

This entry was posted on Wednesday, October 24th, 2018 and is filed under Uncategorized.


Instagram Demo: Your friends are more popular than you


I’m teaching a class that uses code to discover unintuitive things about social systems (UC Davis’ CMN 151). One great one shows how hard it is to think about social networks, and it’s easy to state: “On average, your friends are more popular than you” (Feld 1991).

It’s one thing to explain, but something more to show it. I had a demo coded up on Facebook, but it was super fragile, and more of my students use Instagram anyway, so I coded it up again.

To run the demo you

  1. Consider not participating (because, for a student, the demo involves logging into your Instagram account on a public computer and running code written by someone with power over you).
  2. Log in to your Instagram account
  3. Click to show your Followers, and scroll down that list all the way until they are all showing. This could take a while for people with many followers.
  4. Open up View -> Developer -> JavaScript Console (in Chrome. “Web Console” in Firefox. Slightly different for other browsers. In Safari you need to find developer mode first and turn it on)
  5. Paste the code below, which will be accessible via Canvas, into the JavaScript Console. If Followers aren’t showing, it won’t work. This could also take a while if you have many followers. Keep pasting the last part until the numbers are stable. Your computer is working in the background growing the list of your followers’ numbers of followers.
  6. Open this Google Sheet.
  7. Paste your values into the sheet.
  8. Calculate the average number of followers, and the average number of followers of followers. Compare them. With enough participants, the second will be bigger, even if you exclude giant robot accounts.

This post isn’t an explainer, so I won’t get into how and why it’s true. But the way you set it up beforehand in class is by reasoning that there shouldn’t be a systematic difference between your and your friends’ popularities. The numbers should be the same. You wrap the lesson up after the data is in by hopping onto the spreadsheet live and coding up the averages of their followers, and of their friends’ followers, to show that their friends’ average is higher on average. After explaining about fat tails, you drive it home on the board by drawing a star-shaped network and showing that the central node is the only one that is more popular than her friends, and all others are less popular.
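The star-network argument can also be checked directly in the console. A minimal sketch with a made-up five-person star (one hub, four leaves):

```javascript
// Friendship paradox on a star network: one hub followed by four leaves,
// represented as plain adjacency lists (a toy network, not Instagram data).
const network = {
  hub: ["a", "b", "c", "d"],
  a: ["hub"], b: ["hub"], c: ["hub"], d: ["hub"],
};

const people = Object.keys(network);
const degree = name => network[name].length;
const mean = xs => xs.reduce((sum, x) => sum + x, 0) / xs.length;

// Average popularity across everyone...
const avgPopularity = mean(people.map(degree));
// ...versus the average popularity of people's friends.
const avgFriendPopularity = mean(people.map(p => mean(network[p].map(degree))));

console.log(avgPopularity);       // 1.6
console.log(avgFriendPopularity); // 3.4: friends are more popular on average
```

Only the hub beats its friends; the four leaves all drag the first average down while boosting the second, which is the fat-tail story in miniature.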

The code

Open your Instagram Followers (so that the URL in the location bar reads https://www.instagram.com/yourusername/followers/) and paste this into your JavaScript console.



// from https://stackoverflow.com/questions/951021/what-is-the-javascript-version-of-sleep
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
function instaFollowerCount(page) {
  return parseInt(page.querySelector("a[href$='/followers/']").firstElementChild.textContent.replace(/,/g, ""));
}
function instaFollowerCount2(page) {
  return parseInt(page.querySelector("head meta[name='description']").attributes['content'].value.match(/([\d,]+)\sFollowers/)[1].replace(/,/g, ""));
}
function instaFollowerList(page) {
  return Array.prototype.slice.call(page.querySelector("div[role='presentation'] div[role='dialog']").querySelector("ul").querySelectorAll("a[title]")).map(x => x.href);
}
// https://stackoverflow.com/questions/247483/http-get-request-in-javascript#4033310
function httpGet(theUrl) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.responseType = 'document';
  xmlHttp.open("GET", theUrl, false); // false for synchronous request
  xmlHttp.send(null);
  return xmlHttp.response;
}
function httpGetAsync(theUrl, callback) {
  var xmlHttp = new XMLHttpRequest();
  xmlHttp.responseType = 'document';
  xmlHttp.onreadystatechange = function() {
    if (xmlHttp.readyState == 4 && xmlHttp.status == 200)
      callback(xmlHttp.response);
  }
  xmlHttp.open("GET", theUrl, true); // true for asynchronous
  xmlHttp.send(null);
}
var iFollowers = instaFollowerCount(document);
var aFollowers = instaFollowerList(document);
var docs = [];
for (let f = 0; f < aFollowers.length; f++) {
  httpGetAsync(aFollowers[f] + "followers/", function(response) {
    docs.push(instaFollowerCount2(response));
  });
  if (f % 100 == 0 && f > 0) {
    await sleep(1000 * 60 * 30 + 10000); // in ms, so 1000 = 1 second
    // Instagram limits you to 200 queries per hour, so this institutes a
    // 30-minute (plus wiggle) wait every 100 queries.
    // If you're fine running the demo with just a sample of 200 of your
    // followers, delete the 'await sleep' line above: the demo then runs
    // in seconds instead of taking all night.
  }
}


And then, after waiting until docs.length is close enough to iFollowers, run



console.log(`You have ${iFollowers} followers`);
console.log(`(You've heard from ${docs.length} of them)`);
console.log("");
console.log(`On average, they have ${docs.reduce((total, val, i, arr) => total + val) / docs.length} followers`);
console.log(`Your most popular follower has ${docs.reduce((incumbent, challenger, i, arr) => incumbent > challenger ? incumbent : challenger)} followers`);
console.log(`Your least popular follower has ${docs.reduce((incumbent, challenger, i, arr) => incumbent < challenger ? incumbent : challenger)} followers`);


The result isn't meaningful for just one person, but with enough people, it's a strong lively demo. See how things are coming along for others on this Sheet.

Technical details

Instagram crippled their API, so it isn't possible to run this demo above board, not even with the /self functionality, which should be enough since all participants are logged in to their own accounts. This code works by getting the list of usernames of all followers and issuing a GET request for each user's page. But Instagram can tell you are scraping, so it cripples the response. That's why instaFollowerCount differs from instaFollowerCount2. In the main user's page, the followers are prominent and relatively easy to scrape, but the requested page of the friend can't be reached through a console request. Fortunately, Instagram's "meta" summary description of a user's page lists their number of followers, so a simple regex yields it. Of course, even scraping the follower count and IDs from the main page is tricky, because Instagram has some scheme to scramble all class names for every page load or account or something. Fortunately it's still a semantic layout, so selector queries for semantic attributes like "content", "description", and "presentation" work just fine to dig up the right elements. Of course, this could all change tomorrow: I have no idea how robust this code is, but it works as of Oct 24, 2018. Let me know if your mileage varies.


Change your baby’s astrological sign with physics!

My summer project this year was a little non-academic web app project.

http://whatsyoursign.baby/

The premise of the site is that the mechanism of astrology is gravitational influence, and that since small nearby things have influence comparable to large things far away, it should be possible to tune your child’s astrological sign by giving birth around specifically arranged person-made objects. As a pop science site, you’ll see that it is a pretty soft sell: not telling anyone that astrology is wrong, instead trying to channel the interest in astrology into relevant subjects of physics.

I haven’t even released the site yet, but as a summer project it’s already a big success. I developed my frontend skills a bunch, and learned how to use astrological ephemeris databases. I also learned that astrology has a big open source community. I learned that there are .baby and .amazon top-level domains for web addresses. I also learned a bit more about how to teach web programming students, hopefully showing the bones of the Internet a bit and making code a bit less intimidating.


“It was found that pairing abstract art pieces with randomly generated pseudoprofound titles enhanced the perception of profoundness”


I was following up on the lit around Bullshit Receptivity (or “BSR”; http://journal.sjdm.org/vol10.6.html) and stumbled on this master’s thesis using it to evaluate modern art perceptions. Title quote is from the abstract
https://uwspace.uwaterloo.ca/handle/10012/13746
(Can’t vouch for the research though; didn’t actually read the thesis).

As someone who always cringes reading artist statements, and whose English-degree-holding proud pedantic jerk wife failed critical theory twice out of spite, this was pretty gratifying.

Just one comment from the pedantic jerk: “Did they mean profundity?”

About

This entry was posted on Saturday, September 29th, 2018 and is filed under Uncategorized.


New in Journal of Computational Social Science: “Cognitive mechanisms for human flocking dynamics”

New in Journal of Computational Social Science:
“Cognitive mechanisms for human flocking dynamics” with Rob Goldstone

Think of it as Cognitive Science meets Human Collective Behavior meets Game Theory. This is (only) the second paper to come out of my dissertation (5 years ago). It’s three chapters jammed into one, so if it feels like it’s about level-k being social, and mental models revealing themselves on the fly, and games being open to interpretation, and flocking being robust, and humans being capable of faking 10 levels of what-you-think-I-think-you-think-I-think, then, well, it is.

https://link.springer.com/article/10.1007%2Fs42001-018-0017-x
(free version: https://arxiv.org/abs/1506.05410)

This is the next level of the work that got me a BBC Radio documentary appearance, and lots of other rock-paper-scissors coverage.


Appearance on two podcasts with Steaming Piles of Science

Steaming Piles of Science is a very fun science podcast based in New Hampshire. They recorded a “Science pub” I did with colleagues on the science and practice of community building, and we followed that up with a wider-ranging sit-down.

Here they are:
https://steamingpilesofscience.com/upcoming-episodes/page/2/

About

This entry was posted on Monday, September 17th, 2018 and is filed under audio/visual.


Book: The Common Sense of Science (1978) by Jacob Bronowski


I stumbled on this in a used bookstore. Books about science by scientists are already my thing, and anthropologist Jacob Bronowski already stands out in my mind as a distinguished big-picture popular scientist because I’ve youtubed his The Ascent of Man, a 1970s BBC series about human natural history that was commissioned by David Attenborough.

The book has its unity, even though it’s most easily described in terms of its parts: a sketch of the history of Western thought as a history of science. The emphasis he places on human error, accident, and historical contingency helps reinforce an overall message that science is a human social endeavor. He succeeds in showing that the things that make it vulnerable and flawed are precisely what make it accessible, and he positions the book as an argument against a popular fear and suspicion of science that emerged in the 20th century.

What struck me in the beginning of the book was how much his account of the rise of science-like thinking mirrored Foucault’s in The Order of Things: for centuries humans understood the signature of order in nature to be similarity (walnuts prescribed for headaches because they look like brains—one of Foucault’s examples), until intellectual developments in the 17th and 18th centuries reinterpreted similarity as occurring in minds, and not beyond them, and created a new way of ordering things in terms of causes and mechanisms.

And what struck me about the end of the book was how forcefully it presents the rare picture of science that I most often fight for myself, as something less about steel-cold logic, and more about a world so complicated as to permit only tinkering, and yield only to luck and experience.

Most times that you hear a scientist resisting a picture of science, they’re pushing back against what they imagine the man-on-the-street thinks: “We’re not out-of-touch eggheads in the ivory tower! We matter!”. But there is a picture of science that I wrench as often out of the heads of scientists as any other type of person. It’s the idea that the goal of science is to find the logical system that explains everything. It’s an attractive picture because it has an end, and also because, at moments in the past, it hasn’t seemed so far off. Newton found a system that explains both billiard balls and the solar system. 20th century biologists of the modern evolutionary synthesis connected genetics to evolution to, eventually, cellular biology. But for most sciences, especially the natural and social sciences, well, there might be a system, but it’s going to be beyond the ability of the human mind to encapsulate. In such an environment, you have to step back from finding the system that explains everything, to finding a system that explains as much as it can without getting too complicated. In this picture, a big constraint on theory building is the human capacity to understand theories. It’s a special view of science because it’s harshly critical of many of the archetypes that we usually see as unimpeachable core scientists. Those great minds that imposed mathematical rigor on human behavior, and were so dazzled by it that they dismissed evidence when it threatened to haze the luster.

The sweep and finality of his system, which like the Goddess of Wisdom seemed to his contemporaries to step fully formed from a single brain was a visible example. From a puzzle of loose observations and working rules he had produced a single system ordered only by mathematics and a few axioms: ordered, it seemed, by a single divine edict, the law of inverse squares. Here was the traditional problem of the trader nations since Bible times; its solution meant something to every educated man. And its solution was so remarkably simple: everyone could grasp the law of inverse squares. From the moments that it was seen that this lightning flash of clarity was sufficient—God said “Let Newton be” and there was light—from this moment it was felt that here plainly was the order of God. And plainly therefore the mathematical method was the method of nature.

A science which orders its thoughts too early is stifled. For example, the ideas of the Epicureans about atoms two thousand years ago were quite reasonable; but they did only harm to a physics which could not yet measure temperature and pressure and learn the simpler laws which relate them. Or again the hope of the medieval alchemists that the elements might be changed was not as fanciful as we once thought. But it was merely damaging to a chemistry which did not yet understand the compositions of water and common salt.

The ambitions of the 18th century systematizers was to impose a mathematical finality on history and biology and geology and mining and spinning. It was a mistaken ambition and very damaging. (p44–45)

I’m especially happy about his digs at economics.

There is no sense at all in which science can be called a mere description of facts. It is in no sense, as humanists sometimes pretend, a neutral record of what happens in an endless mechanical encyclopedia. This mistaken view goes back to the eighteenth century. It pictures scientists as utilitarians still crying “Let be!” and still believing that the world runs best with no other regulating principles than natural gravitation and human self-interest.

But this picture of the world of Mandeville and Bentham and Dickens’s Hard Times was never science. For science is not the blank record of facts, but the search for order within the facts. And the truth of science is not truth to fact, which can never be more than approximate, but the truth of the laws which we see within the facts. (p130)

About

This entry was posted on Monday, September 17th, 2018 and is filed under Uncategorized.


A strong identity is no defense against hypocrisy (a good offense is a bad defense)

Take a look at these five people, and see what they have in common

  • The young contrarian so repulsed by his lefty friends’ sheeple-ness that he becomes a reactionary, only to become an ideologue himself.
  • The brave young hipster who has called himself a feminist for so long that everyone is blind to his violence against women, including himself
  • The pastor whose consuming identity as a servant of God makes him blind to his own embezzlement or abuse
  • The downtrodden who become toxic after rejecting the idea that victims of oppression are capable of acts of oppression.
  • I see it all the time in science too. For example, T.C. Chamberlin, a 19th and 20th century “dean” of American geology, within decades of extending his fame with a classic warning against dogmatism in science, had become such a toxic antagonist of the theory of continental drift that he probably singlehandedly set its acceptance back by decades.

You have hypocrites in all of these examples, but name calling is beside the point here. Each of these characters started out sympathetic, and changed in a very human way into something unhealthy. Considering what they went through—the actual etiology of hypocrisy—empowers us to move past imperiously impugning the fallen, and actually protect ourselves from the same ugly fate.

Each of those people probably began with good strong intellectual defenses against some threat. But eventually, they all excused themselves from the need to constantly re-evaluate themselves. They stopped questioning their standing, and lost it. It’s like you have a big strong wall around you, but you slowly let your identity balloon to include it. You start to be impressed by how forbidding the identity is, and how much easier it is to maintain than the wall’s bulky brick and mortar. Eventually, you let the balloon take over as the wall crumbles. But a big balloon is a superficial and misleading defense. Or, to trade some faithfulness for concision, you have a big strong gate keeping out the riff-raff. You want to make it even more formidable, so you light it on fire, and that works for a while, until you’re down to nothing.

Using identity as a defense against hypocrisy is a really subtle and insidious trap, but naming it and describing it makes it easier to guard against, which is why I like to think about these things.

Still, I’m no exception. I’ve caught myself in it before, several times, and that’s why a basic part of my intellectual hygiene is never letting myself think that my current intellectual hygiene is enough. Hopefully that’s enough to protect me, except, well, if I think it’s enough then by definition it isn’t.

Self-doubt is an awful foundation for knowledge, but, when you’re all too human, it might be less bad than anything else.


Do you lose things? Here’s the magical way to find them.

Let’s say you make a trip to the store, making sure to lock the door behind you on the way out. When you return and try to let yourself in, you discover that you lost your keys somewhere along the way. Round-trip, the whole distance traveled was longish for hunting a pair of lost keys, about 1 km. They could be anywhere!

How should you go about finding your keys? Should you spend the whole cold day and night slowly scouring your path? That sounds awful. But reality isn’t going to do you any favors: there’s no way your keys are more likely to be in one place along the way than another. So, for example, if the space within ten meters of your door accounts for 2% of the whole trip, the probability of finding your keys within that space must be equal to 2%, not greater than or less than 2%. Right?

Nope. It turns out that reality wants to do you a favor. There’s a good place to look for your keys.

The answer

Intuition says that they are as likely to be in one place along the way as any other. And intuition is right for the special case that your keys were definitely very secure and very unlikely to have fallen out on that particular trip. But they probably weren’t. After all, if it was so unlikely, they shouldn’t have fallen out. So we can’t just consider the world where the very unlikely happened. We have to consider several possible worlds of two rough types:
* The worlds in which your keys were very secure, but the very unlikely happened and they fell out anyway.
* The worlds in which your keys, on that particular trip, were unusually loose and bound to fall out.
So those are the two types of possible world we’re in, and we don’t have to consider them equally. The mere fact that your keys fell out means it’s more likely that you’re in the second type of world, that they were bound to fall out. And if they were bound to fall out, then they probably fell out right away. Why? We can take those worlds and divide them again, into those where your keys were likely but not too too likely to fall out, and those in which your keys were not just very likely, but especially very likely to fall out. And so on. Of the worlds in which your keys were bound to fall out, the ones that are most likely are the ones in which they fell out right away.
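To see how strongly the observation shifts the odds, here is a toy Bayes update in Python. The two world types and their loss probabilities are illustrative numbers I made up, not estimates of anything:

```python
# Toy numbers for the two world types; the probabilities are illustrative
# assumptions, not measurements.
p_secure, p_loose = 0.5, 0.5                 # prior: which world are we in?
loss_if_secure, loss_if_loose = 0.001, 0.5   # chance the keys fall out on one trip

# Bayes' rule, conditioning on the one thing we observed: the keys fell out.
evidence = p_secure * loss_if_secure + p_loose * loss_if_loose
posterior_loose = p_loose * loss_if_loose / evidence
print(round(posterior_loose, 3))  # the loss alone makes "loose keys" ~0.998 likely
```

With these made-up numbers, the single observation that the keys fell out is enough to make the "bound to fall out" world overwhelmingly more likely than the "secure but unlucky" one.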

So there it is. If you lost your keys somewhere along a long stretch, you don’t have to search every bit of it equally, because they most likely fell out on your way down the doorstep, or thereabouts. The probability of finding your keys within 10 meters of the door is greater than 2%, possibly much greater.

What is the probability exactly? If you’d had several keys to lose, we might be able to better estimate which specific world we’re in of the millions. But even with just one key lost, the mere fact that it got lost means it was most likely to have gotten lost immediately.

Why is it magic?

If you know the likelihood of losing your keys, that makes them impossible to find. If you have no idea the chances they fell out, then they’re more than likely near the door. It’s your uncertainty about how you lost them that causes them to be easy to find. It’s as if the Universe is saying “Aww, here you go, you pitiful ignorant thing.”

Solving the puzzle, with and without data

So you can’t get the actual probability without estimates of how often this trick works.  But even without hard data, we can still describe the general pattern. The math behind this is tractable, in that someone who knows how to prove things can show that the distribution of your key over the length of the route follows an exponential distribution, not a uniform distribution, with most of the probability mass near the starting point, and a smooth falling off as you get further away. The exponential distribution is commonly used for describing waiting times between events that are increasingly likely to have happened at least once as time goes by. Here is my physicist friend, “quantitative epistemologist” Damian Sowinski explaining how it is that your uncertainty about the world causes the world to put your keys close to your door.
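Damian’s note derives this properly; as a quick sanity check, here is a Monte Carlo sketch of the same effect. The Exp(1) prior over the looseness rate \lambda is my own illustrative assumption, not something from his note:

```python
import random

# Monte Carlo sketch: uncertainty over the looseness rate lambda concentrates
# the (conditional) loss location near the start of the walk.
random.seed(1)
trip, window = 1.0, 0.01   # 1 km walk; "near the door" = first 10 meters
hits = losses = 0
for _ in range(200_000):
    lam = random.expovariate(1.0)                    # how loose were the keys? (assumed prior)
    fell_at = random.expovariate(lam) if lam > 0 else float("inf")
    if fell_at <= trip:                              # condition on: keys were lost
        losses += 1
        hits += fell_at <= window
print(hits / losses)  # roughly 0.02, about double the uniform guess of 0.01
```

Under this particular prior the boost is about a factor of two; as Austin’s update below-the-fold shows, the size of the effect depends heavily on the prior you pick.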

If you get in this situation and try this trick, write me whether it worked or not and I’ll keep a record that we can use to solve for lambda in Damian’s notes.

In the meantime, we do have one real-world data point. This all happened to me recently on my way to and from the gym. I was panicking until I realized that if they fell out at all, they probably fell out right away. And like magic, I checked around my starting point And There They Were. It’s an absolutely magical feeling when mere logic helps you solve a real problem in the real world. I’ve never been so happy to have lost my keys.

 

UPDATE: How strong is the effect?

All of the above tells us that there’s a better than 2% chance of finding your keys in the first 10 meters. But how much better than 2%?  20% or 2.001%?  If the latter, then we’re really talking intellectual interest more than a pro-tip; even if the universe is doing you a favor, it’s not exactly bending over backwards for you.  To tackle this, we have mathematician Austin Shapiro.  Backing him up I can add that, on the occasion on which this trick worked for me, my keys were super super loose, just like he predicts.  A takeaway is going to be that if this trick works for you, you did a very bad job of securing your keys.

I read your blog post, including Damian’s note. I have some things to add, but to clearly explain where they fit in, let me try to delineate two separate “chapters” in the solution to your key problem.

In chapter 1, we narrow our set of models for the location of the keys to the exponential distributions. Damian gives a good account of how this can be justified from first principles. But after doing this, we still have an infinite set of models, because an exponential distribution depends on a parameter \lambda (the expected rate of key losses per kilometer walked, which may be high if the keys are loose and hanging out of your pocket, or low if they are well secured).

In chapter 2, we use conditional probability to select among the possible values of \lambda, or, as you put it in your blog post, try to figure out which world we are in. This is the part that interests me, and it’s also the part that still needs mathematical fleshing-out. All Damian says about it is “So what is the value of \lambda? That’s a question for experiment — one must measure it.” But as you say, we’ve already done one experiment: you observed that your keys did fall out during a 1 km walk. This is enough to put a posterior distribution on \lambda if we posit a prior distribution.

However… what does a neutral prior for \lambda look like? I don’t know any principled way to choose. A uniform distribution between 0 and some finite ceiling is unsuitable, since according to such a model, if you’re ever very likely to lose your keys, you’re usually pretty likely to lose your keys.

Assigning \lambda itself an exponential prior distribution seems murkily more realistic, so I tried that. If \lambda\sim{\rm Exp}(k), then, if I did my math right, your probability of having lost your keys in the first x km of your walk works out to k(k+1)\left(\frac 1k-\frac 1{k+x}\right), which is (1+\frac 1k)x+O(x^2) for small x. So in this case, Bayesian reasoning boosts the chances that you lost your keys in the first, say, 10 meters, by a factor of 1+\frac 1k. Observe that for this effect to be large, k has to be pretty small… and the smaller k is, the higher your average propensity to lose your keys (the mean of the exponential distribution is \frac 1k). Thus, for example, to achieve the result that the universe is helping you find your keys to the tune of a factor of 5 — i.e., that your chance of having lost your keys in the first 10 meters is 5% instead of the “intuitive” 1% — you need to assume that, a priori, you’re so careless with your keys as to lose them 4 times per kilometer on an average trip. That prior seems just as implausible as the uniform prior.
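For what it’s worth, Austin’s closed form checks out numerically. The sketch below compares his expression against a direct midpoint-rule integration over the assumed Exp(k) prior, at his example of an average of 4 losses per kilometer (k = 0.25):

```python
import math

def closed_form(k, x):
    # Austin's expression for P(lost within first x km | lost on a 1 km walk)
    return k * (k + 1) * (1 / k - 1 / (k + x))

def numeric(k, x, steps=20_000, lam_max=200.0):
    # Midpoint-rule integration over the Exp(k) prior on lambda
    d = lam_max / steps
    num = den = 0.0
    for i in range(steps):
        lam = (i + 0.5) * d
        w = k * math.exp(-k * lam) * d        # prior weight on this lambda
        num += w * (1 - math.exp(-lam * x))   # ...times P(lost within x)
        den += w * (1 - math.exp(-lam))       # ...times P(lost at all)
    return num / den

k, x = 0.25, 0.01   # mean looseness 1/k = 4 losses per km; first 10 meters
print(closed_form(k, x))   # ~0.048: the roughly 5x boost over the uniform 1%
print(numeric(k, x))       # direct integration agrees
```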

I can think of one kind of prior that could lead to a strong finding that the universe wants to help you find your keys. That would be a bimodal prior, with a high probability that \lambda is close to 0 (key chained to both nipple rings) and a small probability that \lambda is very large (key scotch-taped to beard), with nothing in between. But I can’t think of any reason to posit such a prior that isn’t transparently circular reasoning, motivated by the answer we’re trying to prove.

So… while all the exponential models definitely give you a better chance of finding your keys near the beginning of your route than near the end, I’m not convinced the effect size is all that strong; or, if it is (and you do have one magical experience to suggest it is), I’m not convinced that math is the reason!

Au.

Tom Lehrer song ripping on quantitative social science

Tom Lehrer is a Cold War era lefty musical satirist, best known for Poisoning Pigeons in the Park, and his jingles about math, science, and nuclear holocaust. In addition to being a musician, he also taught math and stats at MIT and Santa Cruz. His courseload at MIT through the 1960’s included the Political Science department’s quantitative modeling course, an experience that seems to have left him deeply mocking of the sciences of society. The song below is addressed to sociology but, as he admits, it’s really about all quantitative approaches to social science.

Some choice bits:

They can take one small matrix,
and really do great tricks,
all in the name of sociology.

They can snow all their clients,
by calling it a science,
although it’s only sociology.

Elsewhere in the same clip are very nerdy mathematical songs, and a good satire about professors thinking we’re brilliant, and a School House Rock type kids song. Before stumbling on this, I discovered and rediscovered a bunch of other wonderful songs, such as the Vatican Rag, “I got it from Agnes”, and Oedipus Rex. I was especially into Selling Out.


Philip K. Dick’s vanity was his best protection from his vanity

I went on a deep dive and learned several fascinating things about Philip K. Dick and his life. Foremost, he named his daughter “Isa Dick”. Talk about a Dick move.

Among his notes about A Scanner Darkly were a question and answer. Question: “How will the book sell?” Answer: “Such inducements have no appeal to the superior man.” I like that he both considered the question unselfconsciously and posed himself to deny interest in it. I like how, in the context of an answer to a question about himself, the funny construct of the “superior man” isn’t about superiority to everyone else, like it would come off in any other context, but superiority to oneself. The phrasing was so peculiar that I Googled it. Expecting to find more by him, I stumbled on the same phrasing in the divination manual The I Ching, or The Book of Changes, which he wrote a book about and got deeper into as he fought less hard against schizophrenia and started to imagine us all in the Matrix.

In his notes he had written under each question, and prior to each answer, numbers and dashes and codes that looked meaningless until I had made the I Ching connection. The questions were real questions he had, and the answers were divined. His roll for the question about how the book will sell was for hexagram 58, where I found the quote about the superior man. That means that he didn’t endogenously pose himself to deny his crass interests, but that his reading emboldened him. In that context, it’s very clear that The Superior Man is best imagined as a version of you that’s notable only for being superior to yourself.

No, I’m wrong. Dick’s question was crass because he was vain. His attraction to the I Ching was an attraction to the idea that the Universe is organized around the Superior Man, which is vain. His speechifying about being The One who saw into the computer simulation controlling us was an assertion that he was host to the superior. The only thing that pulled him from the vain thought of his book sales was the vain thought that he was too much better than everyone else to worry about them. Any of my tea leaf reading about this softer interpretation of the superior man says more about my hopes than about either Dick or the Book that inspired him.

I was also interested to learn that, after his divorce, he lived communally but maybe not inappropriately with 1970s street kids, that he was very much from the Bay and Berkeley, and that despite his reputation for a variety of drugs, his devotion was exclusive to prescription amphetamine, on which he wrote most of his books. The mathematician Paul Erdos had the same hangup. They were contemporaries in more ways than one.

I learned all this from the audio commentary track on a Scanner Darkly DVD, which had Linklater, Keanu Reeves, Isa Dick, the flick’s screenwriter, and another person. It’s funny to hear Reeves philosophize without the benefit of a script. Nearly every time he spoke up, it was to helpfully and prosocially elicit more commentary from one of the others, but it came off like a philosophical conversation between a bunch of sage elders as convened and presided over by a stoned 14 year-old.

Credit

Image is from this comic about the man.


Hey look, Andrew Gelman didn’t rip me a new one!

My website redesign was supposed to be a time-intensive and completely ineffectual effort to increase my readership. But mere days after, I landed one of the most steely-eyed, critical voices in the scientific discourse around the replication crisis. Scientists, as they exist in society’s imagination, should have an Asperger’s-caliber disinterest in breaking errors gently, or otherwise attending to the feelings of others. Andrew Gelman is the most active, thoughtful, thorough, and terrifying bad-methods sniper in scientist-to-scientist discourse today. Yikes. He found my blog, which sent him down a little rabbit hole. I seem to have come through it OK. Better than the other Seth he mentions!:

http://andrewgelman.com/2018/07/26/think-accelerating-string-research-successes/

My own role in the social sciences’ current replication discourse is as a person with very interesting opinions that no one but me really cares about. Until today! Here is what I have to offer:

About

This entry was posted on Thursday, July 26th, 2018 and is filed under Uncategorized.


1920’s infoviz, when “Flapperism” was the culmination of Western civilization

HistoryInfoviz_Dahlberg

This image offers a schematic of Western history with a two-axis timeline that brings attention more effectively to long periods. It was published in the journal Social Forces in 1927.

Its author Arthur Dahlberg was a science popularizer and Technocrat active through the 20’s and 30’s. His books, which presented economic systems as closed plumbing systems and other visual metaphors, brought technocratic ideas to many important thinkers in the first half of the 20th century, making him the route by which Technocratic ideas influenced the science of complex systems. Technocracy was a social movement and economic theory that can best be glossed as capitalism under a planned economy. It was popular among farmers and other rural Americans, but was ridiculed otherwise. Nevertheless, its popularity brought it to the attention of people like Herbert Simon, who made fundamental contributions to organization theory, cognitive science, and economics, and Donella Meadows, whose own stocks-and-flows theories of economic system successfully forecasted today’s population growth and global climate change in the 1970s. His influence on original thinkers in the second half of the last century is what piqued my interest in him, and led me to this fun illustration of the state of the art of information visualization in the 1920’s. I love how it all leads to “Flapperism”, which we’ll guess he takes to mean some kind of societal fizzling over.


in Cognitive Science: Synergistic Information Processing Encrypts Strategic Reasoning in Poker

It took five years, but it’s out, and I’m thrilled:

https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.12632

You can get an accessible version here

I’m happy to answer questions.

About

This entry was posted on Friday, June 15th, 2018 and is filed under Uncategorized.


Satie’s doodles


These are a few of my favorite doodles from the “A Mammal’s Notebook” collection of Erik Satie’s whimsical (i.e. silly) writings and drawings and ditties. Who knew he also wrote and joked and drew? I scanned many many more:

SatieDoodles

About

This entry was posted on Wednesday, May 2nd, 2018 and is filed under Uncategorized.


Waiting in the cold: the cognitive upper limits on the formation of spontaneous order.

It’s a cold Black Friday morning, minutes before a major retailer with highly anticipated products opens its doors for the day. There are hundreds of people, but no one outside. Everyone is sitting peacefully in their car, warm and comfortable, and in the last seconds before the doors open, the very first arrivals, the rabidly devoted fans who drove in at 2AM, peacefully start to walk to their rightful place before the door as it is about to open, with numbers 2 through 10 through 200 filing wordlessly and without doubt into their proper places behind. No one wondered, doubted, or commented that hundreds of people spent only as much time in the cold as it took to walk to the double doors, everyone retaining their rightful place in the tacit parking lot queue.

This utopian fiction isn’t fictional for being utopian, just for being big. This kind of system is a perfectly accurate description of events for a crowd of 1 or 2 or 3 or 4 people. I was there. It only starts becoming fanciful at 5 or 10 or more. Naturally, the first person to arrive knows that they are first, the second that they are second and the tenth that they are tenth. But knowing your place isn’t enough; you have to realize that everyone else knows their place.

I broke the utopia the first time I showed up at Wilson Tire to get my winter tires changed out. I had heard that they don’t take appointments and that I should arrive early, even before they open, to get served without being so far back in the day’s queue that I literally wait all day. So I drive up and park among the five or so other cars and get out to wait by the benches by the door. I was worried that there were so many other cars, but with no one at the benches by the door I figured that they must be employees or something and I rushed to take my position at the front of the queue. This was at about 6:40, a little more than a quarter of an hour before the doors were to open. New Hampshire is usually still cold when you’re switching to or from your summer tires, and I never acclimated, so I was suffering through the cold, and suddenly I wasn’t alone. Four other people got out of their cars to join me, and as new people arrived the cold crowd outside the front door got bigger, with a sort of rough line forming up. I slowly realized that I was a defector, and that I had best imagine myself as behind the people who I had forced out of their cars. That cohort of us milled on the concrete landing, some starting to stand up in an actual queue, me staying more relaxed on my bench, trusting that others would be smart enough to know I was near the front, even though, waiting in my car, I hadn’t been smart enough to realize that they were. The later arrivals, who saw the milling but had no sense of its order, avoided the mess by queueing up on the asphalt, further away. Immediately before the doors opened one guy with incredible hubris got out of his car at the last minute and cut in front of all of us to be the first served. I was steamed, as much at him as at the docility of the others in my early bird cohort for not saying anything. But I tend to be more of a litigator than most.

It was only after several minutes that I realized that he must have been the first, he’d probably arrived at 6, and his confidence in boldly taking his rightful place was built on the recognition that the other early arrivals would realize he’d been first. But I never did, at least not in time, with the result that I made 5 people get out of their cars who, until my arrival, had peacefully and stably trusted each other to stay warm in their cars, and queue up physically when the time was right. But if I hadn’t done it, someone else would have. It’s much harder for the late arrivals than the early ones.

Number 1 doesn’t just know they’re first; they also know that 2 knows that they are first. 2 knows who 1 is and knows that 1 knows they know. They know that 3 won’t know who is 1 and who is 2, but they realize that 3 will be able to trust 1 and 2 to know each other. 4 might also realize that they are driving into a queue, but 5 and 6 just see a bunch of cars with no order. They know that they are number 5 or 6, but they think that they’re the last beans in a pile, rather than the last in an increasingly tacit queue. And even if they realize they’re in a queue, 6 might not trust that 5 realizes.

For a car queue to form, it’s not enough to know that you are 10; you have to realize that 9 knows they are 9 (and who 10 is), 8 knows they are 8 (and who 9 is), 7 knows they are 7, and so on down to 1, 2, and 3, who know each other, and who know that they each know each other. When everyone has the capacity to realize they are in a queue, they can queue in the warmth of their cars. But where our minds top out, and common knowledge of the queue breaks down, defection begins.

I’ve now had my tires changed out a few times at Wilson. Number 1 is never the first out of their car. It’s always number 4 or 5 or, in my case, 6. They tend to sit on a bench a few feet from the door. They are followed within a minute or so by the person who came right before them, who wants to signal their priority. And once two people are out, the cascade begins, and everyone else gets out, with the earliest birds standing by the door instead of sitting on the bench, so as to secure their proper place (and signal that they’re securing it). This stays stable, with the sitters knowing their place relative to the standers, and the standers knowing it too. But persons 9 and 10 come upon a disorderly sight, a confusing mix of sitters looking relaxed even though they were the nervous defectors, and standers trying to be in line without looking like they’re in line. 9 adapts to this unsteady sight by standing further away from the door on the asphalt, and 10 lines up behind 9. When the doors open, 9 and 10 watch with apparent wonder as the gaggle by the door fails to devolve into jockeying and each person wordlessly finds their proper place. Of course, it shouldn’t be any surprise: as long as everyone knows their own number, there is enough information for everyone to find their place and even enough information for everyone to keep everyone else accountable.

This inevitable degradation of newcomers’ mental models from queue to pile, from ordered to disordered, creates growing insecurity that people adapt to by moving from an imaginary line to a physical one. In a physical line, you don’t even have to know which number you are, you just have to know where the end of the line is, and so it can scale to hundreds or thousands. On the way to the physical line, a variety of alternative institutions—the very comfortable car queue, the cold but somewhat trustful bench queue, the eventually self-organizing aggregation by the door—ascended and then degraded as common knowledge of them degraded.

A line seems like a simple and straightforward thing. But what for me was moving to the bench to start a line looked to others like someone trying to cut in a line that had already existed. If we were all capable of thinking harder and deeper, so capable that we could count on each other to always be doing so wordlessly, then Black Friday shoppers or summer blockbuster campers could enjoy a much more comfortable, satisfying, and civilized norm. But quiet pressures like human reasoning limits, and the degradation of common knowledge they trigger, cause cultural processes to select for institutions that are easy to think about. If you look around, you’ll see lots of situations where cognitive simplicity has won out over social efficiency or fairness, and absent a lot of awkward conversation, it’s perfectly natural to expect it.
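The reasoning-depth story can be caricatured in a few lines of Python. The depth limit of 4 is an illustrative assumption I picked to match the observation that number 4 or 5 is usually the first out of their car, not a measured fact:

```python
def first_defector(n_arrivals, depth_limit=4):
    """Toy model of the parking-lot queue. Arrival i can trust the tacit car
    queue only if tracking '1 knows 2 knows ... knows i' fits within a shared
    reasoning depth; the first arrival past that limit can't trust the queue
    and gets out to start a physical line. depth_limit=4 is an assumption."""
    for position in range(1, n_arrivals + 1):
        if position > depth_limit:
            return position
    return None  # small crowd: everyone queues from their cars

print(first_defector(3))    # None: a crowd of 3 stays warm
print(first_defector(10))   # 5: the fifth arrival defects first
```

The toy reproduces both predictions at once: small crowds never leave their cars, and in large crowds the first defector is the arrival just past the common-knowledge ceiling, never number 1.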

Testing it

I may be making this all up, but it’s very easy to test. All I’d have to do is get up at 5:30AM for the next 30 days and drive over with a clipboard to record the following events:

  • Time and number of each arrival
  • Time at which each person exits their car
  • Order of entry into the tire shop

Everything I’m saying makes pretty clear predictions. With few people in the parking lot, people will stay in their cars. With many, they will start to get out and queue up. The first person out of the car will usually be the fourth or fifth arrival. The first arrival may tend to be the last out of the car.

This could be done from afar, at any tire shop, two times a year in any region with winters. An especially nice property of the domain is that this queueing problem, being something people only deal with a few times a year, is sort of a one-shot game, in that every morning brings entirely new people with more or (probably) less experience at navigating the trust and reasoning issues that make parking lot queues so fraught. But it’s hard to get out of bed, which I guess explains why there are so many theorists.

About

This entry was posted on Tuesday, May 1st, 2018 and is filed under Uncategorized.


Many have tried, none have succeeded: “Pavlov’s cat” isn’t funny.

“Pavlov’s CAT!!!! GET IT?”

Here are 20 more or less professional cartoonists who had precisely that original thought. Guess how many of them managed to make it funny. I’m posting this because I’m surprised at how much failure there is on this. Is the idea of Pavlov’s Cat inherently, objectively unfunny?

[Images: a gallery of twenty-odd Pavlov’s Cat cartoons; one captioned “Dream on Buddy.”]

Bonus: slightly fewer people made it to Schrödinger’s dog. Somehow, a few of these are kind of funny. Why is Schrödinger’s Dog less hackneyed than Pavlov’s Cat? What does it mean about humor or semantics? Also notice the different roles of the dog compared to those of the cat.

[Images: a handful of Schrödinger’s Dog cartoons]

And what does not original plus not original equal? Still not original.

[Images: Pavlov’s Cat meets Schrödinger’s Dog mashup cartoons]

OK, fine, the last one is funny.

About

This entry was posted on Friday, April 20th, 2018 and is filed under Uncategorized.


Ramon y Cajal’s Advice to a young investigator

I read Advice for a young investigator by Santiago Ramon y Cajal (1897).

Here is a good bit:

Once a hypothesis is clearly formulated, it must be submitted to the ratification of testing. For this, we must choose experiments or observations that are precise, complete, and conclusive. One of the characteristic attributes of a great intellect is the ability to design appropriate experiments. They immediately find ways of solving problems that average scholars only clarify with long and exhausting investigation.

If the hypothesis does not fit the data, it must be rejected mercilessly and another explanation beyond reproach drawn up. Let us subject ourselves to harsh self-criticism that is based on a distrust of ourselves. During the course of proof, we must be just as diligent in seeking data contrary to our hypothesis as we are in ferreting out data that may support it. Let us avoid excessive attachment to our own ideas, which we need to treat as prosecutor, not defense attorney. Even though a tumor is ours, it must be removed. It is far better to correct ourselves than to endure correction by others. Personally, I do not feel the slightest embarrassment in giving up my ideas because I believe that to fall and to rise alone demonstrates strength, whereas to fall and wait for a helping hand indicates weakness.

Furthermore, we must admit our own absurdities whenever someone points them out, and we should act accordingly. Proving that we are driven only by a love of truth, we shall win for our views the consideration and esteem of our superiors.

Excessive self-esteem and pride deprive us of the supreme pleasure of sculpting our own lives; of the incomparable gratification of having improved and conquered ourselves; of refining and perfecting our cerebral machinery—the legacy of heredity. If conceit is ever excusable, it is when the will remodels or re-creates us, acting as it were as a supreme critic.

If our pride resists improvement, let us bear in mind that, whether we like it or not, none of our tricks can slow the triumph of truth, which will probably happen during our lifetime. And the livelier the protestations of self-esteem have been, the more lamentable the situation will be. Some disagreeable character, perhaps even with bad intentions, will undoubtedly arrive on the scene and point out our inconsistency to us. And he will inevitably become enraged if we readily correct ourselves because we will have deprived him of an easy victory at our expense. However, we should reply to him that the duty of the scientist is to adapt continuously to new scientific methods, not become paralyzed by mistakes; that cerebral vigor lies in mobilizing oneself, not in reaching a state of ossification; and that in man’s intellectual life, as in the mental life of animals, the harmful thing is not change, but regression and atavism. Change automatically suggests vigor, plasticity, and youth. In contrast, rigidity is synonymous with rest, cerebral lassitude, and paralysis of thought; in other words, fatal inertia—certain harbinger of decrepitude and death. With winning sincerity, a certain scientist once remarked: “I change because I study.” It would be even more self-effacing and modest to point out: “I change because others study, and I am fortunate to renew myself.” (pp 122–123)

    Of course, he also said things like this:

    To sum things up: As a general rule, we advise the man inclined toward science to seek in the one whom his heart has chosen a compatible psychological profile rather than beauty and wealth. In other words, he should seek feelings, tastes, and tendencies that are to a certain extent complementary to his own. He will not simply choose a woman, but a woman who belongs to him, whose best dowry will be a sensitive compliance with his wishes, and a warm and full-hearted acceptance of her husband’s view of life.
    (pp 103–104)

    Unlike teachers of history and literature, I’m unaccustomed to assigning writing that mixes nuggets of wisdom and bald sexism. I’m thinking of being explicit with my students that they have several options: to read and think in a manner divorced from emotion, to take the good and leave the bad, or to dismiss it all as rot. That’s got problems, but so does everything else I can think of. Working on it.

    About

    This entry was posted on Saturday, March 31st, 2018 and is filed under Uncategorized.


    My most sticky line from Stephenson’s Diamond Age

    It’s been years and this never left my head. The line is from a scene with a judge for a far-future transhumanist syndicate based on the teachings of Confucius.

    The House of the Venerable and Inscrutable Colonel was what they called it when they were speaking Chinese. Venerable because of his goatee, white as the dogwood blossom, a badge of unimpeachable credibility in Confucian eyes. Inscrutable because he had gone to his grave without divulging the Secret of the Eleven Herbs and Spices. p. 92

    About

    This entry was posted on Monday, March 19th, 2018 and is filed under Uncategorized.


    Generous and terrifying: the best late homework policy of all time

    I want all of my interactions with students to be about the transmission of wondrous ideas. All the other bullshit should be defined out of my life as an educator.

    But life happens, and students can flake on you and on their classmates, and if you don’t discourage it, it gets worse. So now the transmission of wonder is being crowded out by discussion about your late policy. And late policies are a trap.

    For a softy like me, any policy that is strong enough to actually discourage tardy work is too harsh to be credible. To say NO LATE WORK WILL BE ACCEPTED is all well and good until you hit the exceptions: personal tragedies you don’t want to know about, the student who thoughtfully gave you three weeks advance notice by email, your own possible mistakes. Suddenly you’re penalizing thoughtfulness, incentivizing students to dishonestly inflate their excuse into an unspeakable tragedy, and setting yourself up to be the stern looker-past-of-quivering-chins. And what’s the alternative? 10% off for each day late? I don’t want to be rooting through month-past late-night emails from stressed students, looking up old deadlines, counting hours since submission, or calculating 10% decrements for this person and 30% for that one, especially not when such soft alternatives actually incentivize students to do the math and decide that 10% is worth another 24 hours. Plus, with all of these schemes, you’re pretending you care about a 10:02 submission on a 10:00 deadline—or even worse, you’re forgetting reality and convincing yourself that you actually do care.

    My late policy should be flagrantly generous and utterly fearsome. It should be easy to compute and clear and reasonable. It should most certainly not increase the amount of late work, especially because that increases the work on me. It should be so fair that no one who challenges it has a leg to stand on, and so tough that all students are very strongly incentivized to get their work in on time. It should softly encourage students to be good to themselves, while allowing students flexibility in their lives, while not being so arbitrarily flexible that you’re always being challenged and prodded for more flexibility.

    What I wanted was a low effort, utterly fair policy that nevertheless had my students in constant anxiety for every unexcused minute that they were late.

    GambleProtocol

    Is that even possible? Meet the Gamble Protocol. It’s based around one idea: because humans are risk averse, you can define systems that students simultaneously experience as rationally generous and emotionally terrifying. All you have to do is create a very friendly policy with small, steadily increasing probabilities of awful outcomes.

    The Gamble Protocol is a lot like the well-known “10% off for every day late.” In fact, in the limit of infinite assignments, they’re statistically indistinguishable. Under the Protocol, a student who gets an assignment in before the deadline has a 100% chance of fair assessment of their work. After the deadline, they have a steadily increasing chance of getting 0% credit for all of their hard work. No partial credit: either a fair grade or nothing at all. On average, a student who submits 100 perfect assignments at 90% probability gets an A-, not because all submissions got 90%, but because ten got 0%. A bonus, for my purposes, is that I teach a lot of statistical reasoning, so the Protocol has extra legitimacy as an exercise in experiential learning.

    After experimenting a bit, and feeling out my own feelings, I settled on the following: for each assignment, I draw a single number that applies to everyone (rather than recalculating for every late student). I draw it whenever I like, and I always tell students what number got drawn, and how many students got caught. The full details go in the syllabus:

    Deadline. If the schedule says something is due for a class, it is due the night before that class at 10:00PM. There is no partial credit for unexcused lateness; late assignments are worth 0%. However, assignments submitted after the deadline will get a backup deadline subject to the Gamble Protocol.
    The Gamble Protocol. I will randomly generate a backup deadline between 0 and 36 hours after the main deadline, following a specific pattern. Under this scheme:

    • an assignment that is less than 2 hours late (before midnight) has a 99% chance of earning credit,
    • an assignment turned in before 2:00AM has a 98% chance of earning credit,
    • an assignment turned in 12 hours late, by 10AM, has a 90% chance of earning credit,
    • that jumps suddenly down to 80% between 12–14 hours, getting worse faster,
    • an assignment turned in 24 hours late, before the next 10:00PM, has a 60% chance of earning credit,
    • and an assignment turned in more than 36 hours late is guaranteed to earn zero credit.

    I will not calculate the backup deadline until well after its assignment was due.

    Calculating is easy. For each assignment,

    • you can put the following numbers in a hat and draw:

      0 2 4 5 6 7 8 9 10 11 12 12 12 12 12 12
      12 12 12 12 14 15 16 17 18 19 20 21 22 23 14 15
      16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
      32 33 34 35 24 25 26 27 28 29 30 31 32 33 34 35
      24 25 26 27 28 29 30 31 32 33 34 35 24 25 26 27
      28 29 30 31 32 33 34 35 24 25 26 27 28 29 30 31
      32 33 34 35
    • or you can open any online R console and paste this code:
      deadline <- c( 0, 2, c(4,5,6,7,8,9,10,11), rep(12, 10), rep(14:23, 2), rep(24:35, 5) )
      sample(deadline)[1]
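
    The hat and the R snippet translate directly to Python. A sketch (the hat above is the authoritative list; the helper function is my addition):

```python
import random

# The 100 possible backup deadlines (hours past the main deadline),
# matching the hat above: 0 and 2; 4 through 11; ten copies of 12;
# two copies of 14 through 23; five copies of 24 through 35.
deadlines = ([0, 2] + list(range(4, 12)) + [12] * 10
             + list(range(14, 24)) * 2 + list(range(24, 36)) * 5)

def chance_of_credit(hours_late):
    """Fraction of draws whose backup deadline still covers this lateness."""
    return sum(d > hours_late for d in deadlines) / len(deadlines)

# One actual draw, equivalent to sample(deadline)[1] in R:
backup_deadline = random.choice(deadlines)
```

    As advertised, chance_of_credit(1) comes out to 0.99, and the drop from chance_of_credit(11.9) to chance_of_credit(12) is the sudden 90%-to-80% jump at the twelfth hour.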

    I'm keeping data from classes that did and did not use this policy to see if it reduces late work. I still haven't crunched any of it, but I will if requested. For future classes, I'm thinking of extending the window from 36 hours to a few days, so that the scheme really is directly equivalent to 10% off for each day of tardiness.

    About

    This entry was posted on Monday, March 19th, 2018 and is filed under Uncategorized.


    How to create a Google For Puppies homepage

    Trying to get my students interested in how the Internet works, I ended up getting my family interested as well. We made this:
    PuppyGoogle
    Here is how to install it:

    • Download this file, containing the homepage and puppy image in a folder
    • Move the file where you want it installed and unzip it
    • Drag the Google.html file to your browser
    • Copy the address of the file from your location bar, and have it handy
    • Copy that address into your browser’s setting for replacing or overriding the new tab page, or, on another browser, wherever in its options new tab pages get customized.
      • On Chrome, you’ll have to install this extension
      • On Firefox, you’ll have to install this extension
      • In addition to changing the new tab page, you can more easily change the default home page to the same address.
    • If you want to change the appearance of this page in any way, you can edit the Google.html file as you like. The easiest thing to do is search/replace text that you want to be different.

    About

    This entry was posted on Saturday, February 10th, 2018 and is filed under Uncategorized.


    Quantifying the relative influence of prejudices in scientific bias, for Ioannidis

    Technology makes it increasingly practical and efficient to quickly deploy experiments, and run large numbers of people through them. The upshot is that, today, a fixed amount of effort produces work of a much higher level of scientific rigor than 100, 50, or even 10 years ago. Some scientists have focused their steely gazes on applying this new better technology to foundational findings of the past, triggering a replication crisis that has made researchers throughout the human sciences question the very ground they walk on. John Ioannidis is a prominent figure in bringing attention to the replication crisis with new methods and a very admirable devotion to the thankless work of replication.

    In the provocatively titled “Why Most Published Research Findings Are False”, Ioannidis makes six inferences about scientific practice in the experimental human sciences:

    1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
    2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
    3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
    4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
    5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
    6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

    His argument, and arguments like it, have produced a great effort at quantifying the effects of these various forms of bias. Excellent work has already gone into the top three or four. But the most mysterious, damning, dangerous, and intriguing of these is #6. And, if you dig through the major efforts at pinning these various effects down, you’ll find that they all gloss over #6, understandably, because it seems impossible to measure. That said, Ioannidis gives us a little hint about how we’d measure it. He briefly entertains the idea of a whole scientific discipline built on nothing, which nevertheless finds publishable results in 1 out of 2, or 4 or 10 or 20 cases. If such a discipline existed, it would help us estimate the relative impact of preconceived notions on scientific outputs.

    Having received much of my training in psychology, I can say that there are quite a few cases of building a discipline on nothing. They’re not at the front of our minds because psychology pedagogy tends to focus more on its successes, but if you peer between the cracks you’ll find scientific, experimental, quantitative, data-driven sub-fields of psychology that persisted for decades before fading with the last of their proponents, that are remembered now as false starts, dead ends, and quack magnets. A systematic review of the published quantitative findings of these areas, combined with a possibly unfair assumption that they were based entirely on noise, could help us estimate the specific frequency at which preconceived bias creates Type I (false positive) error.
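
    The baseline for that estimate is easy to simulate (a toy sketch with made-up parameters, not the proposed review itself): a field whose every hypothesis is false, tested honestly at α = 0.05, still “finds” publishable results in about 1 case in 20.

```python
import random
import math

random.seed(1)

Z_CRIT = 1.96      # two-sided 5% threshold for a z-test
n_studies = 20000  # hypothetical number of studies in the dead field
n_subjects = 25    # hypothetical sample size per study

positives = 0
for _ in range(n_studies):
    # Every effect is truly zero: the data are pure noise.
    sample = [random.gauss(0.0, 1.0) for _ in range(n_subjects)]
    z = (sum(sample) / n_subjects) * math.sqrt(n_subjects)
    if abs(z) > Z_CRIT:
        positives += 1

false_positive_rate = positives / n_studies  # hovers near 0.05 by construction
```

    If the dead field’s archives instead show, say, 1 positive in 4, the excess over this 1-in-20 baseline (0.25 − 0.05) is an estimate of the bias-driven inflation.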

    What disciplines am I talking about? Introspection, phrenology, hypnosis, and several others are the first that came to mind. More quantitative areas of psychoanalysis, if they exist, and if they’re ridiculous, could also be fruitful. In case I or anyone else wants to head down this path, I collected a bunch of resources for where I’d start digging. My goal would be to find tables of numbers, or ratios of published to unpublished manuscripts, or some way to distinguish true results from true non-results from false results from false non-results.

    • Introspection:
      • The archives of Titchener (at Cornell) and Wundt
      • https://plato.stanford.edu/entries/introspection/
      • Boring’s paper on the History of Introspection https://pdfs.semanticscholar.org/1191/4d0d6987fa13d7f75c0717441d1457b969f3.pdf
    • ESP:
      • Bem’s pilots
      • https://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off (ironically written by Jonah Lehrer)
      • Hypnosis:
        • http://journals.sagepub.com/doi/abs/10.1177/0073275317743120
        • Orne’s “On the social psychology of the psychological experiment”
      • Phrenology:
        https://archiveshub.jisc.ac.uk/search/archives/beb88bfc-51c1-3536-9539-6370f2b9440d
      • Other dead theories:
        Dictionary of Theories, Laws, and Concepts in Psychology (https://books.google.com/books?id=6mu3DLkyGfUC&pg=PA49 )

    About

    This entry was posted on Sunday, February 4th, 2018 and is filed under Uncategorized.


    Pandas in 2018

    I’m late to the game on data science in Python because I continue to do my data analysis overwhelmingly in R (thank god for data.table and the tidyverse and all the amazing stats packages. To hell with data.frame and factors). But I’m finally picking up Python’s approach as well, mainly because I want my students, if they’re going to learn only one language, to learn Python. So I’m teaching the numpy, pandas, matplotlib, seaborn combination. I got lucky to discover two things about pandas very quickly, and only because I’ve been through the same thing in R: 1) the way you learn to use a package is different in subtle ways from how it is documented and taught, and 2) the way a young data science package is used now is different from how it was first used (and documented) before it was tidied up. That means that StackExchange and other references are going to be irrelevant a lot of the time in ways that are hard to spot until someone holds your hand.

    I just got the hand-holding—the straight-to-pandas-in-2018 fast-forward—and I’m sharing it. The pitfalls all come down to Python’s poor distinctions between copying objects and editing them in place. In a nutshell, use .query() and .assign() as much as possible, along with .loc[], .iloc[], and .copy(). Use [], [[]], and bare df. as little as possible, and then only when reading, never when writing or munging. In more detail, the resources below are up-to-date as of the beginning of 2018. They will spare your ontogeny from having to recapitulate pandas’ phylogeny:

    https://tomaugspurger.github.io/modern-1-intro

    http://nbviewer.jupyter.org/urls/dl.dropbox.com/s/sp3flbe708brblz/Pandas_Views_vs_Copies.ipynb
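
    To make that advice concrete, here is a minimal sketch of the chaining style those guides teach (the data are made up; .query(), .assign(), .loc[], and .copy() are the real pandas API):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "score": [70, 85, 92]})

# Reading/munging: each chained method returns a new DataFrame,
# so there is no view-versus-copy ambiguity.
passing = (
    df
    .query("score >= 80")                     # filter rows without chained []
    .assign(curved=lambda d: d["score"] + 5)  # new column on the result, not in place
)

# Writing: a single .loc[] call on an explicit copy, never a chain
# like df[df["score"] >= 80]["flag"] = True.
df2 = df.copy()
df2.loc[df2["score"] >= 80, "flag"] = True
```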

    Thanks Eshin

    About

    This entry was posted on Tuesday, January 9th, 2018 and is filed under Uncategorized.


    Good mental hygiene demands constant vigilance, meta-vigilance, and meta-meta-vigilance

    I get paid to think. It’s wonderful. It’s also hard. The biggest challenge is the constant risk of fooling yourself into thinking you’re right. The world is complicated, and learning things about it is hard, so being a good thinker demands being careful and skeptical, especially of yourself. One of my favorite tools for protecting myself from my ego is the method of multiple working hypotheses, described in wonderfully old-fashioned language by the geologist Thomas C. Chamberlin in the 1890s. Under this method, investigators protect themselves from getting too attached to their pet theories by developing lots of pet theories for every phenomenon. It’s a trick that helps maintain an open mind. I’ve always admired Chamberlin for that article.

    Now, with good habits, you might become someone who is always careful to doubt themselves. Once that happens, you’re safe, right? Wrong. I was reading up on Chamberlin and discovered that he ended his career as a dogmatic, authoritarian, and very aggressive critic of those who contradicted him. This attitude put him on the wrong side of history when he became one of the most vocal critics of the theory of continental drift, which he discounted from the start. His efforts likely set the theory’s acceptance back by decades.

    The takeaway is that no scientist is exempt from becoming someone who eventually starts doing more harm than good to science. Being wrong isn’t the dangerous thing. What’s dangerous is thinking that being vigilant makes you safe from being wrong, and thinking that not thinking that being dangerous makes you safe from being wrong makes you safe from being wrong. Don’t let your guard down.

    Also see my list of brilliant scientists who died as the last holdouts on a theory that was obviously wrong. It has a surprising number of Nobel prize winners.

    Sources:

    • https://www.geosociety.org/gsatoday/archive/16/10/pdf/i1052-5173-16-10-30.pdf
    • https://www.smithsonianmag.com/science-nature/when-continental-drift-was-considered-pseudoscience-90353214/

    About

    This entry was posted on Tuesday, January 9th, 2018 and is filed under Uncategorized.


    List of Google Scholar advanced search operators

    I’m posting this because it was surprisingly hard to find. That is partly because, as far as I can tell, you don’t need it. Everything I could find is already implemented in Scholar’s kind-of-hidden visual interface to Advanced Search. The only possible exception is site:, which Advanced Search doesn’t offer, though source: partly supersedes it. Standard things like “”, AND, OR, (), plus, and minus work as-is and are well documented.

    Beyond that, I didn’t find much:

    1. allintitle: — conduct the whole search over paper titles
    2. allintext: — conduct the whole search over paper texts
    3. author: — limit the search to a specific author
    4. source: — limit the search to a specific journal
    5. site: — limit the search to a specific site

    There are no operators for years that I could find; you have to use the sidebar or the as_ylo and as_yhi parameters in the URL (e.g.
    &as_ylo=1990&as_yhi=2022).

    example:

    allintitle: receptor site:jbc.org hormone “peptide receptor” -human author:”y chen” source:journal


    About

    This entry was posted on Friday, December 29th, 2017 and is filed under Uncategorized.


    typographically heavy handed web design

    Typography is fun. Recent developments in HTML are v. underexplored, especially in what they let you do with type and transparency. I came up with a concept for a navigation bar that would have no backgrounds or borders. It uses noise to direct attention, and gets structure from how things emerge from noise. All in CSS and HTML: no Javascript needed.

    See the Pen Designing with type by Seth Frey (@enfascination) on CodePen.

    http://enfascination.com/htdocs/text_design/

    About

    This entry was posted on Wednesday, November 29th, 2017 and is filed under Uncategorized.


    A simple way to get more efficiency out of cruise control

    Out of the box, consumer cruise control interfaces favor simplicity over efficiency. Even though it can be efficient to maintain constant speed, cruise control wastes a lot of energy downhill by braking to keep from going more than 1 MPH over the target speed. If cruise control systems allowed more variation around the target speed, with softer and more spread-out upper and lower bounds, they would gain efficiency by letting cars build momentum downhill and store energy they can use uphill.

    I developed a brainless way of implementing this without having to overthink anything. This method is much simpler than driving without cruise control, and it only takes a little more attention than using cruise control normally. Using it on a 3-hour hilly drive, a round trip from Hanover, NH to Burlington, VT, I increased my MPG by almost 10, from high-38 to low-46. I got there in about the same amount of time, but with much more variation in speed. The control trip had cruise control at 72 in a 65. I didn’t deviate from that except for the occasional car. The temp both days was around 70°. The car is a 2008 Prius.

    For the method, instead of deciding on a desired speed, you decide on a desired MPG and minimum and maximum speeds. That’s three numbers to think up instead of one, but you can do it in a way that’s still brainless. Set your cruise control to the minimum, fix your foot on the throttle so that you’re usually above that speed driving at the target MPG, and only hit the brakes when you expect to hit your maximum. For this trip, my target MPG was 50, and my minimum and maximum speeds were 64 and 80 (so cruise control was at 64). For the most part, my foot is setting the pace and the cruise control is doing nothing. As I go uphill, the car decides that I’m not hitting the gas hard enough and it takes over. As we round the hill it eases off and I feel my foot get back in control (even though it hasn’t moved at all). Then, using momentum built downhill, I’m usually most of the way up the next hill before the engine kicks in. Momentum goes a long way, especially in a hybrid. Hybrids are heavy because of their batteries. Over three hours, I was at 46 MPG and spent most of the trip around 70 MPH.

    This method probably doesn’t make a difference in flat areas, but it contributes a lot in hilly ones. I don’t expect to ever hit my target MPG, but by minimizing the time spent below it, I can count on approaching it asymptotically. A hypermiler would recommend driving a lot more slowly than 70, but they’d also recommend stripping out your spare tire and back seats, so take it or leave it.

    The speed for peak fuel efficiency on a Prius is crazy low, like in the 30s (MPH) I think: a pretty unrealistic target for highway driving. But if there were no traffic, and if I were never in a hurry, I’d try it again with cruise control at 45 MPH, a target of 60 MPG, and a max speed of 90 MPH, to see if I could hit 50 MPG. I haven’t stayed above 50 on that drive before, but I still think I can do it and still keep my back seat.

    About

    This entry was posted on Tuesday, October 24th, 2017 and is filed under Uncategorized.


    How do we practice large-scale social engineering when, historically, most of it is evil?

    There are many obvious candidates for the most evil large-scale social system of all time. Apartheid gets special interest for the endurance of its malevolence. I am interested in how to design social systems. Looking at oppressive designs is important for a few reasons. First, as a warning: it’s an awful fact that the most successful instances of social engineering are all clear examples of steps backwards in the betterment of humankind. Second, as reverse inspiration. Apartheid was a very clear set of rules, intentionally put together to make Africans second to those of European descent. Each rule contributed to that outcome. Some of those rules exist today in the US in weaker form, but they are hard to recognize as inherently oppressive until you see them highlighted as basic principles of the perfectly oppressive society. So what are those principles? And where did they come from?

    I recently learned that intellectual architects of Apartheid in South Africa visited the American South for inspiration, which they tweaked with more lessons in subjugation from British Colonial rule. One historian described early 20th century South Africa and the USA as representing “the highest stage of white supremacy.”

    But Apartheid wasn’t a copy/paste job. Afrikaners understood Apartheid as something that learned from the failures of Jim Crow as a system of segregation and control. US failures to prevent racial mixing inspired the South African system in which multiracial people are a third race, called Colored, which to this day is distinct from Black. The US model also inspired a political geography (the Homelands) that would keep Africans entirely outside of urbanized areas, except as laborers. The Afrikaners were able to go further as well. In order to undermine organizing and maintain control, they took measures to prevent communication between homelands (like making a different language the “national” language of each fake nation). With black Africans divided between 9 (?) of these fake nations, white people, outnumbered 5:1, could ensure that they were never outnumbered by any one body. And the animosity that these artificial divisions created between black Africans 70 years ago persists today.

    I don’t like “smoky shadow conspiracy / backroom deal” theories of political control, because I think a lot of systemic oppression happens in a decentralized way through perverse values. But some systems of oppression really are designed.

    Notes

    I got onto the question of US influence on Apartheid after hearing Trevor Noah’s autobiography. At one point he says that a commission that outlined Apartheid did a world tour of oppressive regimes and wrote a report of recommendations. I still haven’t found that list of countries (or the date of the trip, or the name of the report (Lagden Commission? Sauer Commission?), but I found other things: early (40 years prior) intellectual groundwork of Apartheid. Here are the sources I got my hands on for the specific question of foreign inspiration.

    Primary:
    https://archive.org/details/southafricannati00sout
    https://archive.org/details/blackwhiteinsout00evan

    Secondary:
    Rethinking the Rise and Fall of Apartheid: South Africa and World Politics
    By Adrian Guelke
    Racial segregation and the origins of apartheid in South Africa, 1919-36 / Saul Dubow
    The highest stage of white supremacy : the origins of segregation in South Africa and the American South / John W. Cell

    About

    This entry was posted on Tuesday, October 17th, 2017 and is filed under Uncategorized.


    “What’s a pee-dant?”

    My wife, a librarian and self-described pedantic jerk, got a tough question at the library the other day: “What’s a pee-dant?” Her first thought? “This has got to be a setup.”

    About

    This entry was posted on Saturday, September 9th, 2017 and is filed under Uncategorized.


    My great grandma’s face tattoos

    My momma is from a part of Jordan where women had a tradition of getting tattoos all over. After many years of searching, and finally help from my librarian wife, I found a book published by Jordan’s national press by Taha Habahbeh and Hana Sadiq, an Iraqi fashion designer living in Jordan. I don’t think either speaks English, and the book is only in Arabic, but the pictures are good, if grainy.

    tatt3

    tatt2

    tatt1

    (None of these are my great-grandma. These are all pictures from the book. It has a lot more. Full scan here.)

    So yeah, face tattoos. And while we’re on the subject of things people in the Middle East do without fully thinking through the consequences, here’s a political service announcement about US foreign policy: After an extended period of secularization through the mid-20th century, in which my mom wore miniskirts and short hair, fundamentalist Islam started its revival in Jordan in the 1980s. The reversal is almost entirely attributable to the fallout from the USA’s hysterically anticommunist foreign policy. That violent silliness drove US funding and training of the Afghani groups that became Al Qaeda, the initiation of a nuclear program in Iran to keep it from leaning on Russia, the smuggling of arms to Iran to fund anti-Communist massacres in Nicaragua, and the destructive consequences of the US’s uncompromising support for the Israeli occupation of Palestine. More recently, with US-caused conflicts in Iraq spreading war to Syria, Jordan continues to be the largest refugee camp in the world. Jordanians may always be a minority in their country.

    About

    This entry was posted on Monday, August 28th, 2017 and is filed under Uncategorized.


    New paper out from my time at Disney: Blind moderation with human computation

    Frey, S., Bos, M.W., and Sumner, R.W (2017) “Can you moderate an unreadable message? ‘Blind’ content moderation via human computation” Human Computation 4:1:78–106. DOI: 10.15346/hc.v4i1.5
    Open access (free) here.

    What’s it about?

    Say I’m the mailman and you just received a letter, and you want to know before opening it whether it contains anything disturbing. You could ask me to invade your privacy and open it. Or I could respect your privacy and make you take a chance. But I can’t do both. In this sense, safety and privacy are opposed. Or are they? In certain decision settings, it’s possible to filter out unsafe letters without opening any of them.

    In this project, I lay out two tricks I developed for determining without looking at a piece of content whether it contains inappropriate content. This is important because most kids are on the Internet. In fact, according to some reports, a third of all cell phones are owned by minors.

    One of the two methods could one day work for protecting voters from intimidation, by replacing normal checkboxes on a ballot with low-resolution pictures of two generic faces. Here’s the basic idea. You have a tyrant and an upstart competing for the tyrant’s seat. Everyone wants to vote for the upstart, but everyone is afraid that the tyrant will read their ballot and seek retribution. Assume the big assumption that the winner will get to take office and there’s protection from voting fraud and all that stuff, and just focus on the mechanics of the ballot.

    In my scheme, your ballot doesn’t actually name any candidate. All it shows is two copies of the same generic face, both fuzzed up with noise like the snow on a TV. By chance, because of the noise, one face will look slightly more like the candidate you prefer. To vote, all you do is circle that face. Every person gets a ballot with the same face, but different noise. Then, after all the ballots are collected, you take all the faces that got circled and average them, and the generic face plus the averaged noise will look like the face of the upstart. But from each individual ballot it’ll be impossible for the tyrant to know who you voted for. This averaging method, called reverse correlation in social psychology, has already been shown to do all kinds of cool stuff. But never anything vaguely useful before. That’s why this paper could be considered a contribution.
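
    The averaging step is easy to demonstrate with toy data. In this sketch the “faces” are hypothetical 8×8 arrays and the voter is modeled crudely; the actual paper uses real face stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the upstart's face: a bright bar on a blank background.
target = np.zeros((8, 8))
target[2:4, 2:6] = 1.0

n_voters = 4000
chosen = []
for _ in range(n_voters):
    # Each ballot: the same generic face under two fresh noise fields.
    noise_a = rng.normal(0.0, 1.0, target.shape)
    noise_b = rng.normal(0.0, 1.0, target.shape)
    # The voter circles whichever noisy face looks more like their
    # candidate, modeled as the noise that correlates better with the target.
    pick = noise_a if (noise_a * target).sum() > (noise_b * target).sum() else noise_b
    chosen.append(pick)

# No single ballot reveals the vote, but the average of the circled
# noises recovers the target's structure.
avg = np.mean(chosen, axis=0)
inside, outside = avg[target == 1].mean(), avg[target == 0].mean()
```

    After averaging, the pixels inside the target region stand out clearly from the background, even though each individual ballot is indistinguishable from noise.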

    I’m proud of this paper, and with how quickly it came out: just, umm, three years. Quick for me.

    About

    This entry was posted on Thursday, August 10th, 2017 and is filed under Uncategorized.


    Modern economic ideology has overwritten sharing as the basis of human history

    The commons and collective efforts to govern them are probably as old as humanity. They are certainly as old as Western civilization. And yet very few people know it, and most who don’t know the history would even doubt it. This is because the spread of the ideology of modern Western civilization has not only downplayed the role of common property in our history, it has also reinterpreted the successes of the commons as successes of capitalism.

    The truth of it comes right out and grabs you in the roots of two words, “commoner” and “capital”. In modern usage, the word “commoner” refers to someone who is common (and poor), as opposed to someone who is exceptional (and rich). But the actual roots of the word referred to people who depended in their daily lives on the commons: forests, rivers, peat bogs, and other lands that were common property, and managed collectively, for centuries. The governance regimes that developed around commons were complex, efficient, fair, stable, multi-generational, and uniquely suited to the local ecology. They were beautiful, until power grabs around the world made them the property of the wealthy and powerful, and reduced commoners to common poverty.

    The word capital comes from Latin roots for “head,” a reference to heads of cattle, an early form of tradable property that may have formed the intellectual root of ideas of property on which capitalism is built. The herding of capital is typical of the pastoralist life that characterized much of life before the invention of agriculture and the state. A notable feature of pastoralists around the world is that they tend to share and collectively manage rangelands. Some of the oldest cooperatives in the world, like a Swiss one of more than 500 years, are grazing cooperatives. Collective ownership is at least as old as the management of domesticated herds, and it is absolutely essential to most instances of it, particularly among herders living in low-yield lands surrounding the cradle of Western civilization. In other words, common property is what made capital possible.

    Common property is alive and well today, and just as new technology is making it more and more goods privately ownable (and therefore distributable through markets), it is also giving people more and more opportunities to benefit from collective action, and continue to be a part of history.

    About

    This entry was posted on Saturday, July 15th, 2017 and is filed under Uncategorized.


    Black and white emoji fonts

    I’m working with Matteo Visconti di Oleggio Castello to bring modern emoji to letterpress. Nerds are into standards, so by “modern” I mean Emoji version 5.0, which is implemented in Unicode 10.0. We’re helped by our typehigh project for transforming .svg, .png, and even full .ttf files into 3dprintable .stl models (via .scad). All we need are emoji font files suitable for letterpress. After a bit of effort seeing if it would be easy to convert color fonts to black and white, we realized that there should be black and white emoji fonts. But it was harder than we thought. Almost all modern emoji fonts are all in full color, and it took some digging to find symbol fonts that are still black and white. I was able to find a bunch, as well as some full color fonts that are designed to have black and white “fallback” modes.

    Fonts

    Here is what I found:

    Noto Emoji Font
    Google has a fully internationalized font, Noto, whose emoji font has a black and white version:
    https://github.com/googlei18n/noto-emoji/tree/master/fonts
    The smiley’s are blobs.

    EmojiOne
    EmojiOne is a color font with black and white fallbacks. I couldn’t figure out how to trigger the fallbacks, but I found an early pre-color version of EmojiOne:
    https://github.com/eosrei/emojione

    Android Emoji
    Not sure why, but one of Android’s main Emoji fonts is black and white
    https://github.com/delight-im/Emoji/tree/master/Android/assets/fonts
    The smiley’s are androids.

    GNU’s FreeFont
    FreeFont is black and white.
    http://savannah.gnu.org/projects/freefont/
    http://ftp.gnu.org/gnu/freefont/?C=M;O=D

    SymbolA
    SymbolA is a black and white Linux font with nearly full Unicode support:
    http://apps.timwhitlock.info/emoji/tables/unicode
    http://users.teilar.gr/~g1951d/

    EmojiSymbols
    A free font by an independent designer.
    http://emojisymbols.com/
    You can convert from woff to ttf here

    Microsoft Segoe UI Symbol
    Microsoft has a very high-quality emoji set in its Segoe UI Symbol/Emoji font. And because of copyright law, in which things have to be copyrighted separately for different uses, there shouldn’t be anything keeping us from using it to create printed type:
    https://en.wikipedia.org/wiki/Segoe
    http://www.myfontfree.com/segoeuiemoji-myfontfreecom126f132714.htm

    FireFoxEmoji
    This might be from an old pre color version:
    https://github.com/mozilla-b2g/moztt/blob/master/FirefoxEmoji-1.6.7/FirefoxEmoji.ttf

    Twitter’s Emoji font
    Twitter open sources its emoji font. This doesn’t have a black and white version, but it does have black and white fallbacks. If I can figure out how to extract or trigger the fallbacks, this could be great.
    https://github.com/eosrei/twemoji

    There may be more at the bottom of this:
    https://github.com/eosrei/emojione
    and here
    https://wiki.archlinux.org/index.php/fonts#Emoji_and_symbols

    Using/testing/seeing these fonts

    Don’t do this through a browser, but on your own system. You have to install each font, then download this file (instead of viewing it in your browser):
    www.unicode.org/Public/emoji/5.0/emoji-test.txt
    Open it in a text editor and change the Font to each of these fonts to see how each emoji set looks.

    keywords

    emoji symbol font ttf otf open source fallback BW B&W

    About

    This entry was posted on Friday, June 16th, 2017 and is filed under Uncategorized.


    all the emoji in a line

    Here is a quick and dirty list of most simple emoji:
    ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ☺️☺ ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su☹️☹ ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ? su? ? ? ? ? ? ? ? su? ? ? ? ? ☠️☠ ? ? ? ? ? su? ? ? ? ? ? ? ? ? su? ? ? su? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????su?‍?????????????????????‍?????????????????????‍???????????‍???????????‍???????????‍???????????‍?????????????????????‍?????????????????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍???????????‍?????????????????????‍?????????????????????‍???????????‍???????????‍???????????‍??????????? ???????????????????????????????????????????????????????️? ???????????️??️??????????????????????️?????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????? ??????????? ??????????????????????????????????????????????????????? ??????????? ??????????? ??????????? ??????????????????????????????????????????????????????? ??????????? ??????????? ??????????? ??????????su? ??????????? ??????????? ??????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ?????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ????? ????su? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? 
??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????su? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????? ??????????? ????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????? ???????????️? ???????????️? ? ? su? ? ??????????⛷️⛷ ? ???????????️? ???????????️??️??????????????????????️??️?????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ????????????????????????????????? ??????????????????????⛹️⛹ ⛹?⛹?⛹?⛹?⛹?⛹️⛹⛹️⛹⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹️⛹⛹️⛹⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹?⛹??️? ???????????️??️??????????????????????️??️?????????????????????? ??????????? ????????????????????????????????????????????? ???????????????????????????????????????????????????????️? ?️? ? ??????????????????????????????????????????????????????? ????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????? ??????????????????????????????????????????????????????su? ? ? ? ??‍? ???‍? ??‍su? ??????????? ??????????? ??????????? ??????????☝️☝ ☝?☝?☝?☝?☝?? ??????????? ??????????? ??????????✌️✌ ✌?✌?✌?✌?✌?? ??????????? ??????????? ??????????? ???????????️? ??????????✋ ✋?✋?✋?✋?✋?? ??????????? ??????????? ??????????✊ ✊?✊?✊?✊?✊?? ??????????? ??????????? ??????????? ??????????? ??????????? ??????????✍️✍ ✍?✍?✍?✍?✍?? ??????????? ??????????? ??????????? ??????????? ??????????? ? ??????????? ??????????? ??????????? ? ?️? ?️??️?? ? ? su? ? ❤️❤ ? ? ? ? ? ? ? ? ? ? ? ? ? ? ❣️❣ ? ? ? ? ? ? ? ? ? ?️? ?️? ? ?️? su? ?️? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?️? ? ? ? ? ? ? ? ? ? ? ? ⛑️⛑ ? ? ? ? SmSmgrsu? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 
? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?️? ? ? ? ? ? ? su? ? ? ? ? ? ? ? ?️? ? ? ? su? su? ? ? ? ? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ? ? ?️? ?️? ? su? ? ? ?️? ? ? ? ? ? ? su? ? ? ? ? ? ? ☘️☘ ? ? ? ? AnAngrsu? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ?️? ? ? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ☕ ? ? ? ? ? ? ? ? ? ? ? su? ?️? ? ? ? ? FoFogrsu? ? ? ? ?️? ? su?️? ⛰️⛰ ? ? ?️? ?️? ?️? ?️? ?️? su?️? ?️? ?️? ?️? ?️? ?️? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su⛪ ? ? ⛩️⛩ ? su⛲ ⛺ ? ? ? ? ? ? ? ♨️♨ ? ? ? ? ? ? ? ?️? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?️? ?️? ⛽ ? ? ? ? ? su⚓ ⛵ ? ? ?️? ⛴️⛴ ?️? ? su✈️✈ ?️? ? ? ? ? ? ? ? ?️? ? ? su?️? ? ?️? ?️? ? ? ? su⌛ ⏳ ⌚ ⏰ ⏱️⏱ ⏲️⏲ ?️? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ?️? ☀️☀ ? ? ⭐ ? ? ☁️☁ ⛅ ⛈️⛈ ?️? ?️? ?️? ?️? ?️? ?️? ?️? ?️? ?️? ? ? ? ☂️☂ ☔ ⛱️⛱ ⚡ ❄️❄ ☃️☃ ⛄ ☄️☄ ? ? ? TrTrgrsu? ? ? ? ✨ ? ? ? ? ? ? ? ? ? ? ? ?️? ?️? ? su?️? ? ? ? ? ? su⚽ ⚾ ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ⛳ ⛸️⛸ ? ? ? ? ? su? ?️? ? ♠️♠ ♥️♥ ♦️♦ ♣️♣ ? ? ? AcAcgrsu? ? ? ? ? ? ? ? ? su? ? ? ?️? ?️? ?️? ? ? ? su? ? ? ? ? ? su? ? ☎️☎ ? ? ? su? ? ? ?️? ?️? ⌨️⌨ ?️? ?️? ? ? ? ? su? ?️? ?️? ? ? ? ? ? ? ? ? ? ? ? ?️? ? ? ? su? ? ? ? ? ? ? ? ? ? ? ? ? ?️? ? ? ?️? su? ? ? ? ? ? ? ? ? ? su✉️✉ ? ? ? ? ? ? ? ? ? ? ? ?️? su✏️✏ ✒️✒ ?️? ?️? ?️? ?️? ? su? ? ? ?️? ? ? ?️? ?️? ? ? ? ? ? ? ? ? ?️? ? ? ✂️✂ ?️? ?️? ?️? su? ? ? ? ? ?️? su? ⛏️⛏ ⚒️⚒ ?️? ?️? ⚔️⚔ ? ? ?️? ? ? ⚙️⚙ ?️? ⚗️⚗ ⚖️⚖ ? ⛓️⛓ su? ? su? ⚰️⚰ ⚱️⚱ ? ?️? ? ? ObObgrsu? ? ? ♿ ? ? ? ? ? ? ? ? ? su⚠️⚠ ? ⛔ ? ? ? ? ? ? ? ? ☢️☢ ☣️☣ su⬆️⬆ ↗️↗ ➡️➡ ↘️↘ ⬇️⬇ ↙️↙ ⬅️⬅ ↖️↖ ↕️↕ ↔️↔ ↩️↩ ↪️↪ ⤴️⤴ ⤵️⤵ ? ? ? ? ? ? ? su? ⚛️⚛ ?️? ✡️✡ ☸️☸ ☯️☯ ✝️✝ ☦️☦ ☪️☪ ☮️☮ ? ? su♈ ♉ ♊ ♋ ♌ ♍ ♎ ♏ ♐ ♑ ♒ ♓ ⛎ su? ? ? ▶️▶ ⏩ ⏭️⏭ ⏯️⏯ ◀️◀ ⏪ ⏮️⏮ ? ⏫ ? ⏬ ⏸️⏸ ⏹️⏹ ⏺️⏺ ⏏️⏏ ? ? ? ? ? ? su♀️♀ ♂️♂ ⚕️⚕ ♻️♻ ⚜️⚜ ? ? ? 
⭕ ✅ ☑️☑ ✔️✔ ✖️✖ ❌ ❎ ➕ ➖ ➗ ➰ ➿ 〽️ÿ〽 ✳️✳ ✴️✴ ❇️❇ ‼️‼ ⁉️⁉ ❓ ❔ ❕ ❗ 〰️ÿ〰 ©️© ®️® ™️™ su#️#⃣*️*⃣0️0⃣1️1⃣2️2⃣3️3⃣4️4⃣5️5⃣6️6⃣7️7⃣8️8⃣9️9⃣? su? ? ? ? ? ? ?️? ? ?️? ? ? ? ℹ️ℹ ? Ⓜ️Ⓜ ? ? ?️? ? ?️? ? ? ? ? ?️? ?️? ? ? ? ? ? ? ? ? ? ? ㊗️ÿ㊗ ㊙️ÿ㊙ ? ? su▪️▪ ▫️▫ ◻️◻ ◼️◼ ◽ ◾ ⬛ ⬜ ? ? ? ? ? ? ? ? ? ? ⚪ ⚫ ? ? SySygrsu? ? ? ? ?️? ?️?su????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????su??????

    Dirty because there are a few non-emoji characters mixed in. Here is the one-liner:
    wget http://www.unicode.org/Public/emoji/5.0/emoji-test.txt -qO - | sed 's/.*# //;s/\(..\).*/\1/' | uniq | sort | tr -d '\n' | tr -d ' '
    If you want female cop distinguished from male cop, try changing the two dots in a row (“..”) to three dots (“…”). If you want null skin color, change the two dots to one dot.

    About

    This entry was posted on Sunday, June 11th, 2017 and is filed under Uncategorized.


    Yeah, I’m not sure that that’s the takeaway

    I’m reading a paper by Centola and Baronchelli. It describes a well-designed, ambitious experiment with interesting results. But I hit the brakes at this:

    The approach used here builds on the general model of linguistic conventions proposed by Wittgenstein (39), in which repeated interaction produces collective agreement among a pair of players.

    I’m always thrilled to see philosophy quoted as inspiration in a scientific paper, but in this case there’s a legitimacy problem: no one who ever actually paid attention to Wittgenstein is going to have the guts to gloss him that blithely. You don’t formalize into language a legendary demonstration of the non-formalizability of language without introducing and following your gloss with a bunch of pathetic self-consciously equivocal footwork. Also, Wittgenstein, I’m really really sorry for describing Philosophical Investigations as merely or even remotely about the non-formalizability of language.

    D. Centola, A. Baronchelli, (2015) The spontaneous emergence of conventions: An experimental study of cultural evolution. http://www.pnas.org/content/112/7/1989

    About

    This entry was posted on Thursday, June 8th, 2017 and is filed under Uncategorized.


    Behavioral economics in the smallest nutshell

    In James March’s book of very good Cornell Lectures, The Ambiguities of Experience (2010), I stumbled on the best and most concise summary of behavioral econ that I’ve read.

    Some features of human cognitive abilities and styles affect the ways stories and models are created from ambiguous and complex experience. Humans have limited capabilities to store and recall history. They are sensitive to reconstructed memories that serve current beliefs and desires. They have limited capabilities for analysis, a limitation that makes them sensitive to the framing that is given to experience. They conserve belief by being less critical of evidence that seems to confirm prior beliefs than of evidence that seems to disconfirm them. They distort both observations and beliefs in order to make them consistent. They prefer simple causalities, ideas that place causes and effects close to one another and that match big effects with big causes. They prefer heurisitics that involve limited information and simple calculations to more complex analyses. This general picture of human interpretations of experience is well documented and well known (Camerer, Loewenstein, and Rabin 2004, Kosnick 2008).

    He goes on to add that

    These elements of individual storytelling are embedded in the interconnected, coevolutionary feature of social interpretation. An individual learns from many others who are simultaneously learning from him or her and from each other. The stories and theories that one individual embraces are not independent of the stories and theories held by others. Since learning responds as a result to echoes of echoes, ordinary life almost certainly provides greater consistency of observations and interpretations of them than is justified by the underlying reality. In particular, ordinary life seems to lead to greater confirmation of prior understandings than is probably warranted.

    Overall, the book is very good. Thoughtful and thorough while staying concise. Crisp without being too pithy. I’m thinking of assigning parts.

    About

    This entry was posted on Tuesday, May 30th, 2017 and is filed under Uncategorized.


    Searle has good ideas and original ideas, but his good ones aren’t original, and his original ones aren’t good.

    John Searle is an important philosopher of mind who has managed to maintain his status despite near-ridicule by every philosopher of mind I’ve ever met. He has good ideas and original ones. In the “original” column you can put the Chinese Room and his theory of consciousness. In the “good” column go his theories of speech acts, intentionality, and institutions. None of the former are good and none of the latter are original.

    All credit for this particular takedown goes to Dennett, who put it more thoroughly and less zippily: Searle’s “direction of fit” idea about intentionality is cribbed from Elizabeth Anscombe¹s Intention, Searle’s contributions to speech acts are largely a simplified version of Austin’s “How to do things with words,” and his framework for the social construction of reality is obvious enough that the not-even-that-impressive distinction of having gotten there first can be attributed to Anscombe again, in other ways to Schuetz and Berger, and clearly to Durkheim and probably dozens of other sociologists.

    I never admired Searle. His understanding of philosophy of mind is pre-Copernican, both in terms of being based on ancient metaphysics and having everything revolve around him. He only assigned his own books, and the points we had to argue were always only his. He also had a reputation of being a slumlord and a creep. The world recently discovered that he’s definitely a creep. Already feeling not generous about his work and personality, I do hope that his scandals undermine his intellectual legacy.

    About

    This entry was posted on Sunday, May 7th, 2017 and is filed under Uncategorized.


    White hat p-hacking, a primer

    Jargon glossary: Exploratory data analysis is what you do when you suspect there is something interesting in there but you don’t have a good idea of what it might be, so you don’t use a hypothesis. It overlaps with p-hacking, asking random questions of a noisy world on scant data until the world accidentally misfires and tells you what you want to hear, and you pretend that that was what you thought would happen all along. p-hacking is a response to null results, when you spent forever organizing a study and nothing happens. p-hacking might have caused the replicability crisis, which is researchers becoming boors when they realize that everything they thought was true is wrong. Hypothesis registration is when you tell the world what question you’re gonna ask and what you expect to find before doing anything at all. People are excited because it is a solution to p-hacking. A false positive is when you think you found something that actually isn’t there. It is one of the two types of error, the other being a false negative, when you missed something that actually is there. The reproducibility movement is focused on reducing false positives.

    I almost falsified data once. I was a young research assistant in primatologist Marc Hauser’s lab in 2004 (well before he had to quit for falsifying data, but probably unrelated to that). I was new to Boston, lonely and jobless. I admired science and wanted to do it, but I kept screwing up. I had already screwed up once running my monkey experiment. I got a stern talking to and was put on thin ice. Then I screwed up again. I got scared and prepared to put made-up numbers in the boxes. I immediately saw myself doing it. Then I started to cry, erased them, unloaded on the RA supervising me, quit on the spot, and even quit science for a few years before allowing myself back in in 2008. I know how we fool and pressure ourselves. To be someone you respect requires either inner strength or outside help. Maybe I’ve got the first now. I don’t intend to find out.

    That’s what’s great about hypothesis registration. And still, I’m not impressed by it. Yes it’s rigorous and valuable for some kinds of researchers, but it does not have to be in my toolkit for me to be a good social scientist. First, there are responsible alternatives to registration, which itself is only useful in domains that are already so well understood that why are we still studying them? Second, “exploratory data analysis” is getting paired with irresponsible p-hacking. That’s bad and it will keep happening until we stop pretending that we already know the unknowns. In the study of complicated systems, uncertain data-first exploratory approaches will always precede solid theory-first predictive approaches. We need a good place for exploration, and many of the alternatives to registration have one.

    What are the responsible alternatives to hypothesis registration?

    1. Design good experiments, the “critical” kind whose results will be fascinating no matter what happens, even if nothing happens. The first source of my not-being-impressed-enough by the registration craze is that it misses a bigger problem: people should design studies that they know in advance will be interesting no matter the outcome. If you design null results out, you don’t get to a point of having to fish in the first place. Posting your rotten intuitions in advance is no replacement for elegant design. And elegant design can be taught.
    2. Don’t believe everything you read. Replicability concerns don’t acknowledge the hidden importance of tolerating unreplicable research. The ground will always be shaky, so if it feels firm, it’s because you’re intellectual dead weight and an impediment to science. Reducing false positives requires increasing false negatives, and trying to eliminate one type of error makes the other kind explode. Never believe that there is anything you can do to get the immutable intellectual foundation you deserve. Example: psychology has a lot of research that’s bunk. Econ has less research that’s bunk. But psychology adapts quickly, and econ needs decades of waiting for the old guard to die before something as obvious as social preferences can be suffered to exist. Those facts have a deep relationship: economists historically suffer false negatives at the cost of false positives. Psychologists do the opposite, and they cope with the predominance of bunk by not believing most studies they read. Don’t forget what they once said about plate tectonics: “It is not scientific but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.” link
    3. Design experiments that are obvious to you and only you, because you’re so brilliant. If your inside knowledge gives you absolute confidence about what will happen and why it’s interesting, you won’t need to fish: if you’re wrong despite that wild confidence, that’s interesting enough to be publishable itself. Unless you’re like me and your intuition is so awful that you need white hat p-hacking to find anything at all.
    4. Replace p-values with empirical confidence intervals.
    5. Find weak effects boring. After all, they are.
    6. Collect way too much data, and set some aside that you won’t look at until later.

    OK, so you’re with me: Exploratory data analysis is important. It’s impossible to distinguish from p-hacking. Therefore, p-hacking is important. So the important question is not how to avoid p-hacking, but how to p-hack responsibly. We can; we must. Here is one way:

    1. Collect data without a hypothesis
    2. Explore and hack it unapologetically until you find/create an interesting/counterintuitive/publishable/PhD-granting result.
    3. Make like a responsible researcher by posting your hypothesis about what already happened after the fact.
    4. Self-replicate: Get new data or unwrap your test data.
    5. Test your fishy hypothesis on it.
    6. Live with the consequences.

    While it seems crazy to register a hypothesis after the experiment, it’s totally legitimate, and is probably better done after your first study than before it. This whole thing works because good exploratory findings are both interesting and really hard to kill, and testing out of sample forces you to not take the chance on anything that you don’t think will replicate.

    I think of it as integrity exogenously enforced. And that’s the real contribution of recent discourse: hypothesis registration isn’t what’s important, it’s tying your hands to the integrity mast, whether by registration, good design, asking fresher questions, or taking every step publicly. It’s important to me because I’m very privileged: I can admit that I can lie to myself. Maybe I’m strong enough to not do it again. I don’t intend to find out.

    About

    This entry was posted on Monday, April 17th, 2017 and is filed under Uncategorized.


    Words with dundant or fluous fixes

    Words that aren’t opposites

    • real — unreal
    • canny — uncanny
    • valuable — invaluable
    • credulous — incredulous
    • fact — fiction (this is actually a deep one. Roots of both are in proto-indo-european words for “to make”)
    • mure — demure
    • vert — invert
    • aging — imaging
    • pact — impact
    • mediate — immediate
    • predate — postdate
    • toward — untoward

    Prefixed words that aren’t words and don’t have prefixes. Some of these are words that aren’t opposites because one of them isn’t a word.

    • ert — inert
    • molish — demolish
    • venient — convenient
    • dundant — redundant
    • fluous — superfluous
    • becile — imbecile
    • agining — imagining
    • plicate — replicate
    • gruntled — disgruntled
    • sidious — insidious
    • whelmed — overwhelmed
    • rageous — outrageous
    • cursion — recursion
    • imburse — reimburse
    • burse — reimburse
    • fluous — superfluous
    • cilious — supercilious
    • quited — requited
    • quited — unrequited
    • vagant — extravagant
    • bolé — hyperbole
    • bolic — hyperbolic
    • luctant — reluctant
    • hap — mishap
    • pugnant — repugnant
    • dolent — redolent
    • eptitude — ineptitude

    Also, words with redundant prefixes, and

    • reiterate — iterate
    • concatenate — catenate
    • intercatenate — catenate
    • encompass — compass
    • eminant — preeminant
    • perception — apperception

    Other fixed words whose meanings don’t correspond to those of their bases

    • irrespective
    • consummate
    • insure
    • ensure
    • fulsome
    • remiss
    • relax
    • reply
    • reflux
    • reflex
    • convent
    • effable

    This post was formerly “Words that aren’t opposites,” but it’s bigger now. Obviously, there’s room for more.

    See also, islands that don’t exist, and list of fictional guidebooks.

    About

    This entry was posted on Monday, March 20th, 2017 and is filed under life and words, lists.


    Journey through rope

    Hypnotic flythrough of CT Scans of polymer climbing rope

    From Wikimedia Commons

    About

    This entry was posted on Thursday, March 16th, 2017 and is filed under Uncategorized.


    Is it scientific or lazy to lose ten bikes to theft?

    As of today, I’ve had more than 10 bikes stolen in the past seven years. That’s 1 in Boston, 8 in Bloomington, 0 in Zurich, and now 2 in Hanover, NH. These aren’t >$1000 bikes, they’re almost all <$100. But it makes you wonder, how do you convince a reasonable person that you're not crazy when you say that you still aren't locking up? Is it something about wanting to give the world multiple chances to be better than it is? (Or some other rhetoric for self-administering that noble glow?) Is it rather some egoless, arcane, and strictly intellectual life practice about non-attachment? Or maybe an extended experiment for learning what kinds of places or vulnerable to the theft of crappy bikes (college town on party night: very high risk; downtown Boston: surprisingly low risk)? That can't be it; as interesting as that question is, I definitely don't care enough about it to have lost all the bikes I've lost. Maybe it all comes down to some brilliant, insightful way I have of calculating costs and benefits that makes this all very reasonable and acceptable and it's everyone else that's crazy. Or maybe I should just cut the crap and admit to being stubborn or lazy or asinine, and, like a fool, inexplicably smug about all of those foolish qualities. I try to be honest with myself about why I do things. And in this case I honestly don't know. I think there's something more to it than the most unflattering accounts allow. I need to know, because I need to know myself. So as much as I hate losing all of these bikes I've built and rode and loved and lost, I might have to keep on doing it until I've figured myself out. UPDATE: 11

    About

    This entry was posted on Sunday, February 12th, 2017 and is filed under Uncategorized.


    Books read in 2016

    Read:

    • Mark Twain: Collected Tales, Sketches, Speeches, & Essays 1852–1890
      • Reading Twain’s smaller writing. Great to see his less interesting stuff, and fun to be steeped in his voice.
    • Slow Democracy (Chelsea Green, 2012). S. Clark, W. Teachout
      • Clark and Teachout have a great vision for the role democracy should play in people’s lives. I love to see that view represented. This book is more on the movement building side than the handbook or theory side, so it was mostly for helping me not feel alone, although there was good history and good examples.
    • The Communistic Societies of the United States: Economic Social and Religious Utopias of the Nineteenth Century (Dover Publications, 1966). Charles Nordhoff

      • My understanding is that this is a classic study of small-scale communistic societies in the 19th century. They are overwhelmingly religious separatists with leaders. Their business organizations have surprising similarities. The Shakers seemed to have a lot of trouble with embezzlement by those leaders, with about a third of communities demonstrating some past of it. A great resource, and important reminder that the promise of America was for a long time, and for many people, in its communist utopias.
    • A Paradise Built in Hell : The Extraordinary Communities That Arise in Disaster.
      Penguin Books (2010), Rebecca Solnit,

      • Beautiful, creative, and powerful. I admire Solnit a lot and her message is so clear and strong. She finds a bias hidden deeply in the thinking from both the left and the right, and makes it impossible to unsee. Shortly after reading it I saw it again in the etymology of “havoc.” It’s hard to be uncompromisingly radical and even-headedly fair and lucid at the same time, but she makes it look easy and makes me feel intellectually and physically lazy for failing to make the integration of those apparent extremes look effortless. Maybe the glue is compassion? I hope she has a lot more to say about utopia.
    • Individual strategy and social structure : an evolutionary theory of institutions / H. Peyton Young.
      • An important book laying out an important theory that evolutionary game theory offers a model of cultural evolution. I disagree, and now have a better sense of why. Great history and examples. I read past all but the essential introductory formal work (aka math).
    • JavaScript: The Good Parts and Eloquent Javascript

      • Two short books about Javascript that are helping me learn to think right in the language.
    • The Invisible Hook: The Hidden Economics of Pirates (Princeton University Press, 2009). Peter Leeson

      • Pirate societies. I’ll be teaching this book. I like Leeson a lot even though he’s a mad libertarian. He’s creative.
    • The Social Order of the Underworld: How Prison Gangs Govern the American Penal
      System (2014). Skarbeck

      • Prison societies. I’ll be teaching this too.
    • Codes of the Underworld: How criminals communicate (2009). Diego Gambetta

      • An economist’s signaling perspective on the Mafia.
    • Thinking in Systems: A Primer by Donella Meadows

      • I’ll be teaching this book to help teach my class systems thinking, which is especially gratifying since she was here at Dartmouth. Meadows had a huge influence on me. In fact, my wife got a head start on it and she describes it as a user’s manual to my brain. I didn’t even know this book existed until wifey found it. It’s the best kind of posthumous book because she was almost done writing it when we (the world and, more specifically, Dartmouth) lost her.
    • Citizens of no place – An architectural graphic novel by Jimenez Lai
      • Fun, fast, a good mix of dreamy, ambitious, and wanky.
    • The Little Sister (Philip Marlowe, #5) by Raymond Chandler

      • Chandler is classic noir and I’m happy to get caught up on the lit behind my favorite movies. Marlowe is as cynical, dissipated, dark, and clever as you’d want, though I’ve got to admit I like Hammett better than Chandler: he does cynical better with Spade, and dark better with the Continental Op, and shows through Nick and Nora that he can lighten it up with just as much fluency.

    Reading:

    • Faust

      • Gift from a friend, a proud German friend who took me to the restaurant in the book where the devil gives the jolly guys wine while Faust sits there disaffected and bored. I’m now recognizing that this is an important book for a scientist to read, at least for making the intellectual life romantic.
    • Mark Twain: Collected Tales, Sketches, Speeches, & Essays 1891–1910 Edited by Louis Budd

      • A thick volume of Mark Twain’s early work, where you get to see his voice fill out from journalism through speeches to storytelling, for which he’s now most appreciated. Amazing to see all of his contributions to English, proportional to those of the King James Bible.
    • Elements of Statistical Learning : Data Mining, Inference, and Prediction (New York, NY : Springer-Verlag New York, 2009). J. Friedman, T. Hastie, R. Tibshirani

      • Important free textbook on statistical learning. Great read too; who knew?
    • The evolution of primate societies (University of Chicago Press, 2012). J. C. Mitani, J. Call, P. M. Kappeler, R. A. Palombit, J. B. Silk.

      • Amazing exciting expansive comprehensive academic walk through the primates, how they get on, and how humans are different. It’s a big one, and a slow read, but I’m learning a ton and it’s a great background for me as both a cognitive scientist and social scientist
    • The origin and evolution of cultures (2005). R. Boyd, P. J. Richerson

      • The fruits of unifying economic, evolutionary, and anthropological thought with mathematical rigor. Great background as I teach myself more about cultural evolution and the evolution of culture.
    • Joseph Henrich’s The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter

      • Pop book that gives an easier overview of the cultural anthro lit. He offers a big vision. Light on details, which is only a problem for me because the claims are so strong, but that’s not what this book is for. Makes me recognize, as a cognitive scientist, that language and consciousness are a giant gaping hole in current evolutionary accounts of what makes humans different.

    To read:

    • Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal

      • De Waal making an overstrong-but-important-to-integrate case for animal social and psychological complexity
    • Animal social complexity : intelligence, culture, and individualized societies / edited by Frans B.M. de Waal and Peter L. Tyack.

      • An academic version of De Waal’s pop book, with more concrete examples and lit, and a great cross-species overview to complement my more focused reading on primates.
    • The Sacred beetle and other great essays in science / edited by Martin Gardner

      • I love reading science writers’ collections of insider science writing, and I had no idea Gardner had one. How fun!
    • The Cambridge companion to Nozick’s Anarchy, state, and utopia / [edited by] Ralf M. Bader, John Meadowcroft.

      • Got to continue my unsympathetic reading of Nozick, especially for the ways that he might be right.

    Sampling:

    • A mammal’s notebook : the writings of Erik Satie / edited and introduced by Ornella Volta ; translations by Antony Melville.

      • Has sketches and cartoons!
    • A cross-cultural summary, compiled by Robert B. Textor, 1964

      • This is mostly a list of numbers, but there’s some book in there too. This was hard to find.
    • The anthropology of complex economic systems : inequality, stability, and cycles of crisis / Niccolo Leo Caldararo.

      • Interesting argument about physical+historical limits influencing economic practice in very subtle ways.
    • The new dinosaurs : an alternative evolution / Dougal Dixon

      • Silly and sick pictures driven by a wildly creative vision.
    • Coping with chaos : analysis of chaotic data and the exploitation of chaotic systems / [edited by] Edward Ott, Tim Sauer, James A. Yorke.

      • Methods for the data-driven analysis of time series

    Bedtime:

    • Rereading Wodehouse
    • Reading more Ursula Le Guin
      • She’s so important, and still my favorite representative of sci-fi that is more interested in political than technological frontiers.

    About

    This entry was posted on Sunday, January 1st, 2017 and is filed under Uncategorized.


    Two GIFs about peer review, and an embarrassing story …

    1)

    [GIF]

    2)

    [GIF]

    It is common to have your papers rejected from journals. I forwarded a recent rejection to my advisor along with the first GIF. Shortly after, I got the second GIF from the journal editor, with a smiley. It turns out that I’d hit Reply instead of Forward.

    At least he had a sense of humor.

    About

    This entry was posted on Saturday, December 17th, 2016 and is filed under audio/visual, science.


    The cooperative lives of the Swiss

    I lived in Zurich, Switzerland for two years and saw a lot that relates to my research interests. They have a healthy democracy, an incredibly orderly and rational society, loads of civic participation, high rates of cooperative housing, and many other types of cooperative business. They also have one of the bad things that comes with all those good things: lots of peer-policing. In two years, the only times I fell naturally into small talk with random strangers on the street were after being scolded by them for violating a social norm. In one case I’d been recycling wrong; in the other I was being too loud. So it comes as no surprise that the Swiss also casually spy on each other at home.

    http://www.thelocal.ch/20160721/study-a-fifth-of-swiss-spy-on-neighbours

    About

    This entry was posted on Saturday, December 17th, 2016 and is filed under Uncategorized.


    The beauty of unyielding disappointment, in science and beyond

    There’s an academic trend, hopefully growing, of successful professors publishing their “CVs of failure,” essentially keeping track of their failures with the same fortitude that they track their successes. It’s inspiring, in its own daunting way, and it emphasizes the importance of thick skin, but I think we can do better. I’ve come up with a way to celebrate and rejoice in rejection. Rejection is a lot like science.

    There’s this image that appeared in my head a few years ago on a bus ride, that I find myself returning to whenever the constant rejection gets too much. What I do is imagine this giant brass door set in an imposing rock wall stretching interminably up and to each side. In front of it lies this bruised and emaciated monk in tattered robes. Instead of meditation, his practice is to pace back, gather speed, hurl himself at the door with an awful war cry, crumple pathetically against it, get up again, and repeat, over and over, forever. He doesn’t do it with any expectation of the door ever opening. The door is an eardrum or an eye into the other side, in whose dull defeating reverberations lie hints like drumming echoes of the mysterious world beyond, and no ritual less painful can yield truth.

    That’s my bus ride image. It sounds crazy, but going back to it literally never fails to cheer me up again. It’s hard to pin down, but I’ve come up with a few theories for why maybe it works. Maybe it’s reassuring because absurdity and humility are great at putting things in perspective. Or because it’s equally accurate as a description of failure and as a description of the nature of scientific progress. Or maybe what’s going on is that futility becomes romantic when it can be experienced in a way that’s inseparable from ritual, hilarity, and ecstasy.

    About

    This entry was posted on Monday, October 17th, 2016 and is filed under Uncategorized.


    Michael Lacour has rebranded himself as Michael Jules at www.michaeljules.xyz

    Michael Lacour is a former aspiring political scientist famous for standing accused of a major academic fraud that made national news, embarrassed huge names in his field, led to a major retraction, and drove him from academia forever while netting his whistleblower a job at the prestigious Stanford University. So naturally I’d be curious what Lacour would do next, and I’ve been following his main sites, http://www.michaeljules.xyz and http://www.beautifuldataviz.com/ , for a while.

    The takeaway is that the ever-enterprising guy didn’t stay down. He’s been learning to code and develop himself as a data scientist. www.beautifuldataviz.com is still clunky, but it’s much less unbeautiful than it was six months ago, so I figure he’s coming along well enough on his plan B.

    It all makes me wonder what I’d do if I ever got in the same kind of mess, and what others would do about me. They’re questions worth thinking about. Most people probably don’t care and would be wary but ultimately ready to forgive me, though not to the point of ever letting me back in the ivory towers again. That’s probably justified. I imagine that a small number of others would continue to dog me no matter what I tried for next, and try to protect the whole world from me by spreading my old and new names on the Internet. On that I’m torn. There’s no evidence that Lacour showed any contrition, so maybe everyone should be protected from him. But suffering is a private thing, and it’s funny to make permitting the guy to ever breathe again contingent on his satisfying you that he feels bad or learned the right lesson. Assuming I’m actually not a sociopath, I’d want to draw the line at academia and assert my freedom to move forward from there. But maybe I shouldn’t be allowed near schools of any kind. So when you’ve been shunned at a national scale, what doors should remain open to you in even the eyes of your most toxic schadenfriends? The answer is clear, and Michael Jules Lacour nailed it: even the most dogged of your haters are gonna fall off if your idea of moving forward is to enter the private sector. There’s a fine history there: exiled Harvard primatologist Marc Hauser went into consulting, I think. And sociopath or not, capitalism is made for thriving off of people with a name for exploiting the trust of others, and if it doesn’t affect the bottom line, the market is more than ready to forgive it.

    I didn’t say what I’d do. Start a business. JK. Actually, I already know my plan B: get back into the organizing of worker-owned businesses. But I don’t think it’ll come to that. On the market now. Wish me luck.

    About

    This entry was posted on Sunday, October 9th, 2016 and is filed under Uncategorized.


    Zeno’s Arrow Keys: Geometric text navigation sequences in vim

    I wrote a little script for the world’s best text editor. It solves a very simple problem whose smallness is counterbalanced by its commonness. Here is some text:

    I wrote a little script for the world’s best text editor.

    I want to get to the word “text.” In Vim you have a few options:

    • type ‘l’ over and over
    • type $ then ‘h’ over and over.
    • 50l and then adjust
    • 10w if you can subitize that high
    • ft;;;;; but it would help a lot if you knew how many ‘t’s there were without having to think about it
    • /text and Enter, but that’s a lot for a little.

    What I really want is to just think about being there and I’m there. Short of that, I want a command that just goes to the right part of the line. Short of that, I want to solve this problem the way Zeno would: get halfway, then go half of that, and half of that, until I’m there. So with the function I wrote, ZenosArrowKeys(), mapped to C-l for forward and C-h for back, I can go to the halfway mark with C-l, the 3/4 mark with C-ll, the 1/4 mark with C-lh, the 5/8 mark with C-llh, and so on. It’s a few strokes, but you can type them unconsciously because your eye knows where you want to end up so your brain can form a motor plan at stroke one. The halving resets 2 seconds after you’ve initiated. The fractions are calculated relative to your current cursor position and the beginning or end of the line. It’s my first attempt at Vimscript and I’m pretty happy with the result.

    """ Zeno-style line navigation for vim.
    """ I want a navigation mode that lets me quickly get to certain
    """ points in the line. Even though it's up to five keystrokes, it
    """ probably will never feel like more than two, since your eye knows
    """ where you want to end up. Control-left and right takes you half
    """ the distance it did previously, for two seconds.
    """ Seth Frey
    """ Put this in .vimrc
    function! ZenosArrowKeys(direction)
        """ Find current position
        let s:nowpos = getpos(".")
        """ Separate timeouts for vertical and horizontal navigation,
        """ in seconds: how long to wait before resetting all the state
        let s:vtimeout = 0.8
        let s:htimeout = 0.8
        """ Find previous position. This command has to have state because
        """ how far you navigate depends on how far you just navigated.
        if ( ($ZENONAVLASTTIME != "") && (abs(reltime()[0] - str2nr($ZENONAVLASTTIME)) < s:vtimeout) )
            let s:vcontinuing = 1
        else
            let s:vcontinuing = 0
        endif
        if ( ($ZENONAVLASTTIME != "") && (abs(reltime()[0] - str2nr($ZENONAVLASTTIME)) < s:htimeout) )
            let s:hcontinuing = 1
        else
            let s:hcontinuing = 0
        endif
        """ Calculate future position
        if (a:direction < 2) " left or right
            """ Whether left or right, the first zeno press takes you to the
            """ halfway point of the line, measured from the indent
            if ( s:hcontinuing )
                let s:diff = abs( str2float($ZENONAVLASTPOSITIONH) ) / 2
                if (a:direction == 0) " left
                    let s:nowpos[2] = float2nr( round( s:nowpos[2] - s:diff ) )
                else " right
                    let s:nowpos[2] = float2nr( round( s:nowpos[2] + s:diff ) )
                endif
            else
                let s:indent = indent( line(".") )
                let s:halfway = ( col("$") - s:indent + 0.001 ) / 2
                let s:nowpos[2] = float2nr( round( s:indent + s:halfway ) )
                let s:diff = s:halfway
            endif
            let $ZENONAVLASTPOSITIONH = printf( "%f", s:diff )
            """ Make up-down scrolling normal by keeping the desired
            """ column (the optional fifth getpos element) in sync
            if (len(s:nowpos) == 4)
                let s:nowpos = s:nowpos + [s:nowpos[2]]
            else
                let s:nowpos[4] = s:nowpos[2]
            endif
        else " up or down
            """ The first up or down zeno action goes to the upper or bottom
            """ quarter, since M already gives you the middle of the screen
            if ( s:vcontinuing )
                let s:diff = abs( str2float($ZENONAVLASTPOSITIONV) ) / 2
            else
                let s:diff = ( line("w$") - line("w0") + 0.001 ) / 4
            endif
            if (a:direction == 2) " up
                if ( s:vcontinuing )
                    let s:nowpos[1] = float2nr( ceil( s:nowpos[1] - s:diff ) )
                else
                    let s:nowpos[1] = float2nr( round( line("w0") + s:diff ) )
                endif
            else " down
                if ( s:vcontinuing )
                    let s:nowpos[1] = float2nr( floor( s:nowpos[1] + s:diff ) )
                else
                    let s:nowpos[1] = float2nr( round( line("w0") + ( 3 * s:diff ) ) )
                endif
            endif
            let $ZENONAVLASTPOSITIONV = printf( "%f", s:diff )
        endif
        """ Change position and update state for next execution. If the
        """ command has topped out (s:diff <= 1), the timeout will reset
        """ the state rather than freezing you in place.
        call setpos(".", s:nowpos)
        let $ZENONAVLASTTIME = reltime()[0]
    endfunction
    """ Crazy mappings with iterm2: in iterm, map to  (aka F13) and
    """ proceed a few more times for the other codes. Then map the F codes to zeno.
    """ Map to specially escaped left and right keys
    :set =
    :map [1;2P :call ZenosArrowKeys(0)
    :set =
    :map [1;2Q :call ZenosArrowKeys(3)
    :set =
    :map [1;2R :call ZenosArrowKeys(2)
    :set =
    :map [1;2S :call ZenosArrowKeys(1)
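
    Stripped of the Vim bookkeeping, the scheme is just binary search over cursor positions. Here is a minimal Python sketch of the idea, not part of the plugin: zeno_positions is a hypothetical name, and it simplifies by measuring from column 0 rather than from the indent and current cursor position.

    ```python
    def zeno_positions(length, presses):
        """Where a sequence of zeno presses lands on a line `length` columns wide.

        The first press jumps to the midpoint; every later press moves half
        as far as the previous one, right for 'l' and left for 'h'.
        """
        step = length / 2.0  # the first jump covers half the line
        pos = 0.0
        positions = []
        for key in presses:
            if not positions:
                pos = step  # first press: land at the halfway mark
            else:
                step /= 2.0  # each further press halves the step
                pos += step if key == 'l' else -step
            positions.append(pos)
        return positions


    # On a 64-column line: C-l lands at 1/2, C-l C-l at 3/4, C-l C-h at 1/4,
    # and C-l C-l C-h at 5/8 -- the fractions from the post.
    assert zeno_positions(64, "l") == [32.0]
    assert zeno_positions(64, "ll") == [32.0, 48.0]
    assert zeno_positions(64, "lh") == [32.0, 16.0]
    assert zeno_positions(64, "llh") == [32.0, 48.0, 40.0]
    ```

    Because each press halves the step, n presses can land within length/2^n columns of any target, which is why five keystrokes are always enough on an ordinary line.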

    About

    This entry was posted on Wednesday, September 21st, 2016 and is filed under straight-geek.


    Vision of 2001 from the pages of a 1901 weekly

    I love seeing visions of the future and of the past. I also love things that make super-human spans of time apprehensible. So, naturally I’m sold on this concluding vision of 2001 from a January 1901 special issue of Collier’s Weekly that focused on life since 1801.

    [Image: a vision of 2001 from the January 1901 Collier’s Weekly]

    I got this from the collections of Dartmouth College’s Rauner Library, which lets everybody in to ask for anything, and which hosted a personal showing of proto-sci-fi and early astronomy books.

    About

    This entry was posted on Saturday, September 17th, 2016 and is filed under Uncategorized.


    Books read in 2015 (way late)

    This is a bit late, but honestly I didn’t want to write this until I’d remembered one of them that I knew I’d forgotten: all the Nero Wolfe.

    Blockbusters by Anita Elberse. This is a very pop-biz book. We think the Internet gives opportunity to those without it, but its much bigger role is to make the big bigger, at least in the world of entertainment. I’m glad I read it, even if I’m not so glad about anything I learned from it.

    Annie Dillard’s Pilgrim at Tinker Creek. This book is so important. The first time I read it, I had to put it down every page to calm down. The second time was less invigorating, but just as dazzling and inspiring. I wish we scientists would (or could?) write for each other about science the way she does. I reread this all the way to the second to last chapter when I lost it.

    Robert Nozick’s Anarchy, State, and Utopia. The book is as impressive as its title, and no less because I was reading it as part of my practice of obsessing about libertarianism, in which I slowly hate-read all of its major thinkers. I hope I can think and write as big and as clearly as Nozick, but without all the same perfectly sound logic that still somehow ends up at obviously faulty conclusions.

    Edward Abbey’s Desert Solitaire. He’s no Dillard, but I admire him for what he contributed, and I’m prepared to take the warts along with that. I feel like the big names in the ecstatic individualist male naturalist tradition — Thoreau, Muir, Whitman, and Abbey too — have been getting all kinds of crap piled on them lately, to the point where it’s kind of out of vogue to appreciate them without tacking on a bunch of apologetic quibbles at the end. Maybe they deserve it. My only point here is that Abbey probably deserves it more than the others.

    This one mycology textbook. Didn’t finish — just made it a few chapters in so far. It’s a dense book, literally and literarily, but the topic is mystifying enough that I’ll keep at it.

    Daniel Kahneman’s Thinking, Fast and Slow. I read this because I felt obliged to know more than I do about decision making, since I’m a decision making researcher. I now know more, and there’s no one I’d have rather learned it from.

    Simpler: The Future of Government. A useful tour of the nudge lit, specifically as it’s been actually applied. Admirable goal. I’m wary of getting too applied myself, but I appreciate the work. The writing is unremarkable, but that’s easy to tolerate when a writer also has information to convey.

    Benoit Mandelbrot’s Misbehavior of Markets: A fractal view of financial turbulence. I love the way Mandelbrot writes, but I still kind of always get this dirty suspicious feeling, as if one should approach his books by spending less time reading him than reading past him. I have some un-blogged comments on one part of his I particularly appreciated.

    Bedtime reading: old detective and mystery novels and Wodehouse. In noir, Dashiell Hammett’s The Thin Man, The Maltese Falcon, and Red Harvest. In mystery, three of Rex Stout’s stories about Nero Wolfe: Fer-de-Lance, The League of Frightened Men, and The Rubber Band.
    And to round out my early 20th century genre lit, rereading Wodehouse.

    About

    This entry was posted on Saturday, September 17th, 2016 and is filed under Uncategorized.


    “The unexpected humanity of robot soccer” with Patrick House in Nautilus Magazine

    I have a new popular-audience article in the amazing Nautilus Magazine with science journalist, neuroscientist, and old cooperative housemate Patrick House. I have tons of respect for both, so it’s exciting to have them together.

    http://nautil.us/issue/39/sport/the-unexpected-humanity-of-robot-soccer

    This article had many lives in the writing, and it was a tough collaboration, but we came through OK. Don’t be fooled by my name at the lead: Patrick did most of the work.

    About

    This entry was posted on Thursday, September 1st, 2016 and is filed under Uncategorized.


    My mug on thewaltdisneycompany.com

    “Don’t just study the data; be the data.” I volunteered to help out my friends Thabo Beeler and Derek Bradley last year. They needed facial scans to analyze for their research. That work is done, and is featured prominently by Disney. The video of their work is halfway down.

    https://thewaltdisneycompany.com/disney-research-releases-latest-round-of-inventions/

    About

    This entry was posted on Thursday, September 1st, 2016 and is filed under Uncategorized.


    Is there any legitimate pleasure or importance in self-denial?

    I’m kind of a curmudgeon, preferring not to have comfortable doodads, labor-saving contrivances, and other perks of consumer society. I try not to have air conditioning when it’s hot or heat when it’s cold, and I’ve successfully avoided having a phone for years and, until recently, a car. The car has one of those lock/unlock key dongles that I accepted so naturally into my life that I want to scoff at myself. I’ve managed to flatter myself at different times that all of this abstention from the finer things is about maintaining intellectual independence or building character, being rugged, preserving my senses, avoiding wastefulness, or being Universal and not just American. But sometimes I wonder if my society’s interpretations are more accurate, and if I’m really a cranky, miserly, smug and self-superior Luddite, or, at best, completely joyless and humorless the way people think of Ralph Nader. After all, when I think back through the people I know who act the same way as me, I realize that I can’t stand being around most of them.

    I don’t think there’s room for the idea of self-denial in a consumer culture, in a culture in which one must buy commodities to participate in social meaning. It clicked when I read this voice from the 1870’s on the (apparently controversial) benefits of involving women in business: “… this gives them, I have noticed, contentment of mind, as well as enlarged views and pleasure in self-denial.” (p.412)

    The context doesn’t matter; all that matters is that in the 1870’s, when some kind of American culture still existed outside of market exchange, there also existed an idea that there is a legitimate secular satisfaction in self-denial. The book, “The Communistic Societies of the United States,” is a study of the many separatist religious communities that existed through the nineteenth century. The religious part is important because when you Google “self-denial,” the only non-dictionary hits are to religious sites or Bible verses. Now, just as it has no place in modern society, self-denial is also no longer seen as a prominent theme of Christendom in America, but it seems to have kept some kind of legitimate home there anyhow. I’m not sure why; maybe it’s legitimately important, or maybe self-denial is a handy justification for arbitrary ethical proscriptions. Either way, I don’t know what to make of it.

    Going secular I’m just as confused. I’d like to know if there is any evidence that self-denial is a good thing, in whatever way, or if I’m nothing more than a curmudgeon. I have to look into it and think more.

    Another good quote from the book:

    “Bear ye one another’s burdens” might well be written above the gates of every [intentional community]. p. 411

    About

    This entry was posted on Tuesday, August 2nd, 2016 and is filed under Uncategorized.


    Four-leaf clovers are lucky

    Because they seem to grow near each other, the best way to find a four-leaf clover is to have found one before. Besides an abundance of luck, discovering this also got me thinking. What if the first person to find a four-leaf clover, before it meant anything to anyone, showed it around, sparked some wonder, and got her friends looking here and there and turning up empty. How auspicious it would seem to those chumps if, after all their fruitless rooting, she showed up with a second, and then a third, like it was nothing. If you didn’t know they grow together, you might get the idea that some people have all the luck. That’s my just-so story for the myth. Trifolium’s misnomers give real actual luck, but only the narrow kind you need to find more.

    About

    This entry was posted on Wednesday, July 13th, 2016 and is filed under Uncategorized.


    Bless me

    So I was just walking down the street when some nice person I don’t know gave me a nice kind smile and said “God Bless You” kind of out of nowhere. At first I thought, “How nice and friendly, I love small towns” and I transitioned from there into “What a strange and archaic greeting.” Did he want my money? He had been crouched against the side of a building, but that was just because he had been stooping to help his dog. Had he been flyering for Jesus? (By this time I’ve continued well past him, and am just working back through the encounter in my head) Nope, no pamphlets. The closest I ever got to that key insight into the whole thing was that he must be a Shaker or Amish guy recently departed from his closed community of a dozen families and their quaint ways who has just gotten into his first big city of 5,000+ people and hasn’t yet learned that you’re not supposed to say Hi to everyone. But I didn’t actually finish that thought—it was around there that I realized from a still-damp hand that I must have sneezed.

    Maybe this is what it means to live a life of the mind.

    About

    This entry was posted on Friday, July 8th, 2016 and is filed under life and words.


    My ideal gig

    My uncle asked me what I’ll be looking for in a department when I hit the job market. I smiled and told him “prestige and money.” It got awkward because he didn’t realize I was joking, and it got more awkward as I squirmed to replace that answer in his head with my serious answer, since I had no idea what my serious answer was. Now I’ve thought about it. Here’s what I’m looking for in a department.

    1. I want to be part of an intellectual community in which I can be vulnerable, at least professionally if not personally. That means feeling safe sharing ideas, good ones and bad ones alike (since I can rarely tell the two apart without talking things through over beer). There’s nothing more awful and tragic than a department in which people mistrust each other and feel proprietary about their ideas — why even be in science? Conversely, there’s nothing more amazing than being part of a group with strong rapport, complementary skills, and a unified vision. (An ordinal listing misses how much higher this first wish is than all the others.)
    2. My colleagues are all smarter than me, or beyond me in whichever of a number of likely ways: more creative, more active, harder working, more connected, more engaged, effortlessly productive, exquisitely balanced and critical and fiery and calm. There’s something to be said for learning from osmosis.
    3. I have inspiring students — undergraduate and graduate — and maybe even students that are smarter than me, or more creative, more active, &c.
    4. My colleagues and I share some kind of unified vision. I’ve seen that in action before and it’s amazing.
    5. Prestige. I can’t pretend I’m too far above prestige. A recognized school attracts better students, which makes teaching more fun. It has more resources lying around, which makes it easy to make things happen quickly. It casts a glow of success that makes it easier to raise money and build partnerships. They are often more likely to be able to follow through on commitments to underprivileged students. And last, since age is the major cause of prestige, fancy schools tend to be on more storied and beautiful campuses.
    6. My colleagues cross disciplines.
    7. My department has institutional support for interdisciplinary research (no list of five journals to publish in, conferences and journals on equal footing, tenure letters of support accepted from people outside the same department).
    8. I’m in a department beloved, or at least on the radar of, the dean. I don’t know a lot about this, but I get the feeling that life is a little easier when a department has a dean’s support.
    9. Beautiful campus.
    10. In the US. Alternatively, the UK or the Netherlands. In a good city or back in CA, or maybe in one of these economically depressed post-industrial-wasteland cities. Can’t explain that last one. Well, I can: it means to me that it’ll have a more active arts community, be more diverse, and have a neighborly sense of community.

    About

    This entry was posted on Monday, May 23rd, 2016 and is filed under Uncategorized.


    Secretly deep or secretly trivial?

    I know that the word “football” means something different to Americans than it does to Europeans. It might be that most Americans know that. But the rest of the world thinks of Americans as not knowing it, and it led to something funny when I was in Switzerland. Living right in the middle of Europe, in any conversation about football, both my interlocutor and I had to call it soccer, even though neither of us wanted to call it that. I knew perfectly well that football meant round checker ball, but if I called that ball game football, others always assumed that I was being American and referring to oblong brown ball. They expected me to call round checker ball soccer, and that made it the most convenient word, which meant that I always had to go with it. It was just easier that way.

    Since I study the role of what-you-think-I-think-you-think in peoples’ social behavior, I keep thinking of that as deep and fascinating, but every time I try to pin it down analytically as something novel, it just goes limp and becomes this really mundane, obvious, easy to explain inefficiency.

    About

    This entry was posted on Tuesday, May 17th, 2016 and is filed under Uncategorized.


    Who is science_of_craft and why is he on my Minecraft server?

    I am studying Minecraft servers and the way they are run. But there are a lot of servers out there, so, to get data efficiently, I have a script logging my user, science_of_craft, into thousands of servers. science_of_craft collects each server’s version, plugins, and number of players online, and also more detailed things like the signs that are posted near spawn. “Science of craft” is a translation of “technology.”

    science_of_craft should just stand there before logging off and moving on to the next server. But if he is causing problems for you, you can either ban him or contact me (moctodliamg at the same thing backwards) and I’ll get him off your back.

    If you are a server administrator who got a visit from s_o_c, thank you for tolerating this project, and thank you for doing what you do. I think it’s valuable; that’s why I’m studying it.

    Followup

    I’ve heard from many of you. I’ve been gratified to see that no one has seemed annoyed, or anything but interested. Thank you for your encouragement and patience. I’m not publishing any comments, but I am reading and responding to them.

    The most common question that is coming up is “how did you find my server?” I’ve been getting lists from a few public sources: Reddit, a couple of big Minecraft server list sites, and shodan.io. If you saw me on a server of yours that isn’t advertised or visited by anyone you don’t know, the answer to your question is probably shodan.io. If you don’t like it, let me know that it’s a problem and whether there’s anything I can do.

    About

    This entry was posted on Friday, May 13th, 2016 and is filed under Uncategorized.


    My work in this Sunday’s New York Times Magazine

    I am now working with a large corpus of Minecraft servers to understand online governance. That work got a mention in a long feature on Minecraft, by Clive Thompson, titled “The Minecraft Generation.” It was very well done, and Clive was very attentive as a journalist to my nervous scientist’s quibbling about phrasing things precisely with respect to what must seem like completely arbitrary academic distinctions. It feels great and intimidating.

    About

    This entry was posted on Saturday, April 16th, 2016 and is filed under Uncategorized.


    Seth’s Backwards Guide to Doing Science

    I got some exciting press for a current project, but I’m a little too embarrassed to enjoy it because it’s on a project I’m barely halfway through. That’s part of a larger pattern I’ve found in myself in which I talk more about stuff that isn’t done or isn’t even started, and I don’t have as much out as I’d like.

    I feel like I’m getting ahead of myself, but maybe I’m wrong and you should be even more like me than me. If that’s what you really want, here is my backwards guide to doing science:

    1. Get good press coverage, then
    2. publish your research,
    3. figure out what your message is going to be,
    4. interpret your data,
    5. analyze your data,
    6. collect your data, and finally
    7. plan out your study.

    That last step is very important. You should always carefully plan out your studies. And if you think this whole thing is totally backwards, well that’s just, like, your opinion, man.

    About

    This entry was posted on Thursday, April 14th, 2016 and is filed under Uncategorized.


    Stop, look, and listen: A tour of the world’s red crosswalks

    My favorite part of traveling is the little things. And with Google Maps, you can celebrate those without going anywhere. Here are “stop walking” signs from cities around the world.

    Europe

    As expected, Europe has a lot of diversity, particularly Switzerland:

    Geneva, Switzerland has this skinny person.
    Lucerne, Switzerland has a lanky Giacometti type.
    Zurich, Switzerland also goes lanky, but a little more of the Age of Aquarius, Platonic ideal, smooth edges, hard ideas style that you get in that city.

    More of Europe:
    Berlin, Germany is v. different.
    Vienna, Austria, which put these up during a recent Eurovision contest, gets the prize.
    Moscow, Russia.
    Oslo, Norway means business!
    Stopping and going, Brussels, Belgium has style.

    North America

    The huge US is depressingly homogeneous, especially in comparison to the much smaller Switzerland. Maybe there’s a monopoly in the US traffic-light market?

    NYC, LA, Chicago, Atlanta, and St. Louis.

    Zooming out to the rest of North America doesn’t seem to improve things, though I admit I could have looked harder.
    Montreal, the least Anglophone Canadian city, deviates from the US mold by only a bit, by hollowing out the hand. Its “walk” guy is better though — I’ve got a picture of one below.
    I pathetically couldn’t find any lights in Mexico City and haven’t checked other major Mexican cities, though I’m guessing that border towns at least will look American.

    Africa

    In Africa, I tried Addis Ababa, Lagos, Accra, Nairobi, and even Cairo, but Google hasn’t shot any of them. I only found Streetview in South Africa. Here is Pretoria.

    East Asia

    There is also very little Streetview in China. I tried Beijing, Shanghai, and a few other Chinese cities; all I found was Hong Kong. I guess that by the time we come to envy China for not having been scanned, Google will have scanned it too. China has over 200 cities with a population over 1,000,000; the US has only 9 that big. Other parts of East Asia, like Japan and South Korea, are covered much better.
    Hong Kong is realistic enough to automatically have someone’s identity fuzzed out by Google’s algorithms.
    Tokyo, Japan. Looks like a worker. I was told that, in Japanese, the word for jaywalking translates to “red light, don’t care.”
    Seoul, South Korea.

    South Asia

    I didn’t have any luck finding lit crosswalks in South Asia, but that could be my problem.

    Southeast Asia

    In Southeast Asia, I only looked in Manila, which I think only went up on Streetview within the past year. They mostly have crosswalks only in their upscale neighborhoods, and, in line with the USA-philia over there, those few look very much like the American ones.

    Middle East

    In the Middle East (and outside of Israel), I only found usable intersections in Dubai, whose lights look like the Swiss ones above. The only connection I can think of is that that’s where they keep all their money.

    Israel has more. Here is Tel Aviv. Pretty manly, right? Wait till you see São Paulo.

    South America and Latin America

    South America is also very diverse. I only looked a bit, and many cities are unscanned, but it seems that there is a lot more interesting variety there than in other parts of the world. In fact, you can find different lights in the same intersection! In Santiago you’ll see a silhouette of the “walk” light — sprightly fella — and a more generic “walk” light guy walking in the other direction. These two really are from the same intersection.
    Santiago, same intersection, walking guy walking the other way.
    Bogota, Colombia.
    It looks like São Paulo, Brazil has a burly strongman. I can’t figure out if the crookedness adds to or subtracts from his apparent virility.

    “Walk” lights

    “Walk” lights are harder to catch in Streetview than “stop”s. That said, I got a not-bad collection of those too. The lessons above stick: the US is homogeneous; variety happens elsewhere. And, outside the US, the walker tends to be green and walk to the left instead of the right.
    NYC; Atlanta; Manila; Montreal; London, UK; Moscow, Russia; Tokyo, Japan; Seoul, South Korea; and Bogota.

    If there is some important crosswalk of the world you think I really missed out on, I’m happy to add more.


    Research confidence and being dangerous with a gun.

    There are two very different ways to be dangerous with a gun: to know what you’re doing, and to only think you know what you’re doing. Tweak “dangerous” a bit, and research is the same way. I draw from many disciplines, and in the course of every new project I end up having to become conversant in some new unfamiliar field. I dig in, root around, and build up my sense of the lay of the land, until I can say with confidence that I know what I’m doing. But I don’t try to kid myself that I know which kind of dangerous I am. I don’t think it’s possible to know, and even if it is, I think it’s better to resist the temptation to resolve the question one way or the other. Better to just enjoy the feelings of indeterminacy and delicacy. That may seem like a very insecure and unsatisfying way to experience knowhow, but actually it takes a tremendous amount of self-confidence to admit to ignorance and crises of confidence in research. Conversely, an eagerness to be confident communicates to me a grasping impatience for answers, a jangling discomfort with uncertainty, or a narrow desire to be perceived as an expert. The last is especially awful. My society understands confidence as a quality of expertise. It’s a weakness that we mistake for a strength, and everyone loses.

    Crises of confidence are a familiar feeling in interdisciplinary research. Pretty much every project I start involves some topic that is completely new to me, and I always have to wonder if I’m the outsider who is seeing things freshly, or the outsider who is just stomping loudly around other people’s back yards. Interdisciplinary researchers are more susceptible to facing these questions, but the answers are for everyone. I think the tenuousness of knowhow is inherent to all empirical research, the only difference being that when you work across methods and disciplines, it’s harder to deceive yourself that you have a better command of the subject than you do. Those are two more benefits of interdisciplinary practice: it keeps humility in place in daily scientific practice, and it makes being dangerous less dangerous to you.

    About

    This entry was posted on Monday, February 15th, 2016 and is filed under Uncategorized.


    Interdisciplinary researchers need to care about clear, honest, interesting writing

    In interdisciplinary academic writing, you don’t always know who you’re writing for, and that makes it completely different from traditional academic writing. The people who respond most excitedly to my work are rarely the people I predicted, and they rarely find it through the established disciplinary channels of academia. Since you don’t know ahead who you’re writing for, you have to write more clearly and accessibly. I’ve been read by psychologists, biologists, physicists, economists, and many others. The only way to communicate clearly to all of these audiences has been to keep in mind the last time they all had the same background. That’s why, when I write, I imagine a college-bound high school graduate who likes science. The lowest common denominator of academic comprehension is “high school student.” And that’s fine. Those who doubt the existence of writing that is both clear and correct probably aren’t trying hard enough. The benefits of being able to write for a wide academic audience are many. First, I think researchers of all types have some responsibility to serve as public intellectuals, particularly when they work in areas, like the social sciences, that are inherently vulnerable to misconstruction, misappropriation, and abuse. Writing clearly helps me meet that responsibility. Second, since I rarely know the best audience for my projects, accessible writing makes it easier to attract popular science reporting to get the word around. And, most valuable of all, spending time on writing makes me think better. Clear honest writing is the surest symptom of clear honest thinking.

    About

    This entry was posted on Wednesday, January 13th, 2016 and is filed under Uncategorized.


    Natural selection, statistical mechanics, and the idea of germs were all inspired by social science

    It’s only natural to want to hold your scientific field as the most important, or noble, or challenging field. That’s probably why I always present the sciences of human society as the ones that are hardest to do. It’s not so crazy: it is inherently harder to learn about social systems than biological, engineered, or physical ones because we can’t, and shouldn’t ever, have the same control over humans that we do over bacteria, bridges, or billiard balls. But maybe I take it too far. I usually think of advances in social science as advances in what it is possible for science to teach us, and I uncritically think of social science as where scientific method will culminate.

    So imagine my surprise to learn that social science isn’t the end of scientific discovery, but a beginning. According to various readings in John Carey’s Faber Book of Science, three of the most important scientific discoveries since the Enlightenment — the theory of natural selection, the germ theory of disease, and the kinetic theory of gases — brought inspiration from human social science to non-human domains. One of Darwin’s key insights toward the theory of evolution came while reading Malthus’s work on human population. Just in case you think that’s a fluke, Alfred Russel Wallace’s independent discovery of natural selection also came while he was reading Malthus. (And Darwin was also influenced by Adam Smith.) Louis Pasteur developed the implications of the germ theory of disease by applying his French right-wing political philosophy to animalcules. The big leap there was that biologists had rejected the idea that very small, insignificant animals could possibly threaten a large and majestic thing like a human, but Pasteur had seen how the unworthy masses threatened the French elite, and it gave him an inkling. Last, James Clerk Maxwell, the man right under Newton and Einstein in physics stature, was reading up on the new discipline of Social Statistics when he came up with the kinetic theory of gases, which in turn sparked statistical mechanics and transformed thermodynamics. Physicists have started taking statistical mechanics out of physical science and applying it to social science, completely ignorant of the fact that it started there.

    All of these people were curious enough about society to think and read about it, and their social ponderings were rewarded with fresh ideas that ultimately transformed each of their fields.

    I think of science as a fundamentally social endeavor, but when I say that I’m usually thinking of the methods of science. These connections out of history offer a much deeper sense in which all of natural science is the science of humanity.

    Thanks to Jaimie Murdock and Colin Allen for the connection between Malthus and Darwin, straight from Darwin’s autobiography:

    In October 1838, that is, fifteen months after I had begun my systematic inquiry, I happened to read for amusement Malthus on Population, and being well prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances favorable variations would tend to be preserved, and unfavorable ones to be destroyed. The results of this would be the formation of a new species. Here, then I had at last got a theory by which to work.


    Machine learning’s boosting as a model of scientific community

    Boosting is a classic, very simple, clever algorithm for training a crappy classifier into a group of less crappy classifiers that are collectively one impressively good classifier. Classifiers are important for automatically making decisions about how to categorize things.

    Here is how boosting works:

    1. Take a classifier. It doesn’t have to be any good. In fact, its performance can be barely above chance.
    2. Collect all the mistakes and modify the classifier into a new one that is more likely to get those particular ones right next time.
    3. Repeat, say, a hundred times, keeping each iteration, so that you end up with a hundred classifiers.
    4. Now, on a new task, for every instance you want to classify, ask all of your classifiers which category that instance belongs in, giving more weight to the ones that make fewer mistakes. Collectively, they’ll be very accurate.
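
    The four steps above can be sketched concretely. Here is a toy AdaBoost-style implementation in Python, using decision stumps (single-threshold rules) as the barely-better-than-chance classifiers. The data, parameters, and function names are all invented for illustration; this is a sketch of the idea, not a faithful production boosting library.

```python
import math
import random

random.seed(1)

# Toy data: points on a line, labeled +1 when |x| > 0.5. No single
# threshold rule can get this right, so each weak learner alone is mediocre.
def make_data(n=200):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    ys = [1 if abs(x) > 0.5 else -1 for x in xs]
    return xs, ys

def stump_predict(thresh, direction, x):
    return direction if x > thresh else -direction

def train_stump(xs, ys, weights):
    # Step 1: a weak classifier, picked to minimize *weighted* error.
    best, best_err = (0.0, 1), float("inf")
    for thresh in xs:
        for direction in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(thresh, direction, x) != y)
            if err < best_err:
                best, best_err = (thresh, direction), err
    return best, best_err

def adaboost(xs, ys, rounds=20):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        (thresh, direction), err = train_stump(xs, ys, weights)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # better stumps get more say
        ensemble.append((alpha, thresh, direction))
        # Step 2: boost the weight of the examples this stump got wrong.
        weights = [w * math.exp(-alpha * y * stump_predict(thresh, direction, x))
                   for w, x, y in zip(weights, xs, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble  # Step 3: keep every iteration.

def predict(ensemble, x):
    # Step 4: a weighted vote over all the kept classifiers.
    vote = sum(alpha * stump_predict(thresh, direction, x)
               for alpha, thresh, direction in ensemble)
    return 1 if vote > 0 else -1

xs, ys = make_data()
ensemble = adaboost(xs, ys)
accuracy = sum(predict(ensemble, x) == y for x, y in zip(xs, ys)) / len(xs)
print("ensemble training accuracy:", accuracy)
```

    On data that a single threshold can’t separate, each stump alone stays well short of perfect, but the weighted vote of twenty of them gets nearly everything right.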

    The connection to scientific community?

    With a few liberties, science is like boosting. Let’s say there are a hundred scientists in a community, and each gets to take a stab at the twenty problems of their discipline. The first one tries, does great on some, not so great on others, and gets a certain level of prestige based on how well he did. The second one comes along, giving a bit of extra attention to the ones that the last guy flubbed, and when it’s all over earns a certain level of prestige herself. The third follows the second, and so on. Then I come along and write a textbook on the twenty problems. To do it, I have to read all 100 papers about each problem, and make a decision based on each paper and the prestige of each author. When I’m done, I’ve condensed the contributions of this whole scientific community into its collective answers to the twenty questions.

    This is a simple, powerful model for explaining how a community of so-so scientists can collectively reach impressive levels of know-how. Its shortcomings are clear, but, hey, that’s part of what makes a good model.

    If one fully accepts boosting as a model of scientific endeavor, then a few implications fall right out:

    • Science should be effective enough to help even really stupid humans develop very accurate theories.
    • It is most likely that no scholar holds a complete account of a field’s knowledge, and that many have to be consulted for a complete view.
    • Research that synthesizes the findings of others is of a different kind than research that addresses this or that problem.

    About

    This entry was posted on Friday, November 27th, 2015 and is filed under Uncategorized.


    The DSM literally makes everyone crazy

    Having a book like the “Diagnostic and Statistical Manual of Mental Disorders,” a large catalog of ways that people can be crazy, inherently creates more crazy people. I’m not talking about this in a sociological or historical sense, but in a geometrical one.

    First some intuitive geometry. Imagine a cloud of points floating still in front of your face, maybe a hundred or so, and try to visualize all the points that are on the outside of the cloud, as if you had to shrink-wrap the cloud and the points making up the border started to poke out and become noticeable. Maybe a quarter of your points are making up this border of your cloud — remember that. Now take that away and instead shine a light at your cloud of points to cast its shadow on a wall. You’re now looking at a flat shadow of the same point cloud. If you do the same thing on the shadow, draw a line connecting all the points that make up the border around it, it turns out that the points making up the border of the flat cloud are a smaller percentage of all the points, less than a quarter. That’s because a lot of points that were on the top and bottom in three dimensions look like they’re in the middle when you flatten down to two dimensions: only the dots that described a particular diameter of the cloud are still part of the border of this flattened one. And, going in the opposite direction, up from shadow to cloud to tens of dimensions, what ends up happening is that the number of points in the “middle” crashes: with enough dimensions, they’re all outliers. A single point’s chances of not being an outlier on any dimension are small. This is a property of point clouds in high dimensions: they are all edge and no middle.

    Back to being crazy. Let’s define being crazy as being farther along on a spectrum than any other person in your society. Real crazy is more nuanced, but let’s run with this artificial definition for a second. And let’s say that we live in a really simple society with only one spectrum along which people define themselves. Maybe it’s “riskiness,” so there’s no other collective conceptions of identity, no black or white or introverted or sexy or tall or nice or fun, you’re just something between really risky and not. Most everyone is a little risky, but there’s one person who is really really risky, and another person who is less risky than anyone else. Those are the two crazy people in this society. With one dimension of craziness, there can only be two truly crazy people, and everyone else is in the middle. Now add another dimension, e.g. “introvertedness.” Being a lot of it, or very little of it, or a bit introverted and also risky or non-risky, all of those things can now qualify a person as crazy. The number of possible crazy people is blowing up — not because the people changed, but only as a geometrical consequence of having a society with more dimensions along which a person can be crazy. The number of people on the edge of society’s normal will grow exponentially with the number of dimensions, and before you know it, with maybe just ten dimensions, almost no one is “normal” because almost everyone is an outlier in one way or another.
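
    The blow-up is easy to check with a small simulation. Here is a minimal sketch in Python, using the artificial definition of crazy above (being the most or least extreme person in your society on at least one dimension); the trait distribution and all the numbers are made up for illustration:

```python
import random

random.seed(0)
n_people = 100

def count_outliers(n_dims):
    """How many of 100 people are the most or least extreme on some trait."""
    # Each person gets an independent position on every trait dimension.
    people = [[random.gauss(0, 1) for _ in range(n_dims)]
              for _ in range(n_people)]
    outliers = set()
    for d in range(n_dims):
        column = [person[d] for person in people]
        outliers.add(column.index(max(column)))  # the craziest on trait d
        outliers.add(column.index(min(column)))  # the least, also "crazy" here
    return len(outliers)

for n_dims in (1, 2, 5, 10, 25, 50):
    print(n_dims, "dimensions:", count_outliers(n_dims), "people on the edge")
```

    With one dimension there are exactly two crazy people; each added dimension can mint up to two more, and by a few dozen dimensions a large fraction of this little society is an outlier somewhere.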

    The DSM-V, at 991 pages, offers so many ways in which you and I could be screwy that it virtually guarantees that all of us will be. And, thanks to the geometry of high-dimensional spaces, the thicker that book gets, the crazier we all become.

    About

    This entry was posted on Friday, November 20th, 2015 and is filed under Uncategorized.


    Language and science as abstraction layers

    The nature of reality doesn’t come up so often in general conversation. It only just occurred to me that that’s amazing, since pretty much everyone I know who has thought about it thinks something different. I know Platonists, relativists, nihilists, positivists, constructivists, and objectivists. Given the very deep differences between their beliefs about the nature of their own existences, it’s really a miracle that they can even have conversations. Regardless of the fact of the matter, what you think about these things affects what language means, what words mean, and what it means to talk to other people.

    And not only can you complete sentences with these people, you can do science with them, and science can build on a slow, steady accretion of facts and insights, even if each nugget was contributed by someone with a totally different, and utterly irreconcilable conception of the nature and limits of human knowledge. How?

    I think of science as an abstraction layer. That’s sad, probably for a lot of reasons, but most immediately because it means that the only easy metaphor I was able to find is the computer programming language Java. Java was important to the software industry because it made it possible to write one program that could run on multiple different operating systems with no extra work. Java took the complexities and peculiarities of Unix and Windows and Mac and Linux and Solaris and built a layer on top of each that could make them all look the same to a Java program. I think of the tools of thought provided by science as an abstraction layer on different epistemologies that makes it possible for people with different views to get ideas back and forth to each other, despite all their differences.


    About

    This entry was posted on Thursday, November 12th, 2015 and is filed under Uncategorized.


    Practical eliminativist materialism

    Eliminativist materialism is a perspective in the philosophy of mind that, in normal language, says beliefs, desires, consciousness, free will, and other pillars of subjective experience don’t actually, um, exist. It’s right there in the name: the materialisms are the philosophies of mind that are over The Soul and “eliminativist” is just what it sounds like. I’m actually sympathetic to the view, but reading the Wikipedia article makes me realize that I’ve got to refine my position a bit. Here’s what I think I believe right now:

    • To the extent that beliefs, desires, and the rest of subjective experience can be accounted for at all, there is going to be a way to account for them in terms of neural and biological processes.
    • I believe that we’ll probably never really understand that account. Even if we manage to create artificial entities that satisfy us that they are conscious, we won’t really know how we did it. This is already happening.
    • So, as far as humans are concerned, eliminativist materialism will turn out to be practically true, even if it somehow turns out not to be more true than the other materialisms.

    Given all that, I think of eliminativist materialism as possibly right and probably less wrong than any other prominent philosophy of mind. Call it “practical eliminativist materialism.” If you think I’m full of crap, that’s totally OK, but unlike you, my stoner musings about the nature of consciousness have been legitimized by society with a doctorate in cognitive science. Those aren’t really good for anything else, so I’m gonna go ahead and keep musing about the nature of consciousness.

    About

    This entry was posted on Sunday, November 1st, 2015 and is filed under Uncategorized.


    Two great quotes for how greedy we are for the feeling that we understand

    When the truth of a thing is shrouded, and real understanding is impossible, that rarely stops the feeling of understanding from rushing in anyway and acting like it owns the place. Two great quotes:

    In the study of ideas, it is necessary to remember that hard-headed clarity issues from sentimental feeling, as it were a mist, cloaking the perplexities of fact. Insistence on clarity at all costs is based on sheer superstition as to the mode in which human intelligence functions. Our reasonings grasp at straws for premises and float on gossamers for deductions.
    — A. N. Whitehead

    Or, more tersely

    There’s no sense in being precise when you don’t even know what you’re talking about.
    — John von Neumann

    Also, while I’m writing, some quotes by McLuhan from his graphic book “The Medium is the Massage” (sic). McLuhan can be eye-rolly, but not as bad as I’d expected. But maybe I’d been thinking of Luhmann; it’s hard to keep these media theorists straight. Here is a hopeful one:

    There is absolutely no inevitability as long as there is a willingness to contemplate what is happening

    And here is one that clearly expresses a deep argument for the value of telecom. It is clear enough that it could be tested in experiments, which is worth doing, because you wouldn’t want to just assume he’s right.

    Media, by altering the environment, evoke in us unique ratios of sense perceptions. The extension of any one sense alters the way we think and act—the way we perceive the world. When these ratios change, men change.

    About

    This entry was posted on Wednesday, October 21st, 2015 and is filed under Uncategorized.


    “Are you feeling the Bern now?”

    Some psychologist colleagues are throwing a party for the upcoming debate, but instead of taking a shot after each keyword, they’re triggering the thermode on a laboratory apparatus called “the pain machine.” It delivers a pulse of up to 55°C within a second. ifls.

    About

    This entry was posted on Monday, October 12th, 2015 and is filed under Uncategorized.


    How would science be different if humans were different?

    How would science be different if humans were different — if we had different physiological limits? Obviously, if our senses were finer, we wouldn’t need the same amount of manufactured instrumentation to reach the same conclusions. But there are deeper implications. If our senses were packed denser, and if we could faithfully process and perceive all of the information they collect, we would probably have much more sensitive time perception, or, one way or another, a much more refined awareness of causal relations in the world. This would have the result that raw observation would be a much more fruitful methodology within the practice of natural science, perhaps so much so that we would have much less need for things like laboratory experiments (which are currently very important).

    Of course, a big part of the practice of science is the practice of communication, and that becomes clear as soon as we change language. Language is sort of a funny way to have to get things out of one head and into another. It is slow, awkward, and very imperfect. If “language” were perfect — if we could transfer our perfect memories of subjective experience directly to each other’s heads with the fidelity of ESP — there would be almost no need for reproducibility, one of the most important parts of science-as-we-know-it. Perfect communication would also supersede the paratactic writeups that scientific writing currently relies on to make research reproducible. It may be that in some fields there would be no articles or tables or figures. Maybe there would still be abstracts. And if we had unlimited memories, it’s possible that we wouldn’t need statistics, randomized experiments, or citations either.

    The reduction in memory limits would probably also lead to changes in the culture of science. Science would move faster, and it would be easier to practice without specialized training. The practice of science would probably no longer be restricted to universities, and the idea of specialized degrees like Ph.D.s would probably be very different. T.H. Huxley characterized science as “organized common sense.” This “organization” is little more than a collection of crutches for our own cognitive limits, without which the line between science and common sense would disappear entirely.

    That’s interesting enough. But, for me, the bigger implication of this exercise is that science as we know it is not a Big Thing In The Sky that exists without us. Science is fundamentally human. I know people who find that idea distasteful, but chucking human peculiarities into good scientific practice is just like breaking in a pair of brand-new gloves. Having been engineered around some fictional ideal, your gloves aren’t most useful until you’ve stretched them here and there, even if you’ve also nicked them up a bit. It’s silly to judge gloves on their fit to the template. In practice, you judge them on their fit to you.


    The unexpected importance of publishing unreplicable research

    There was a recent attempt to replicate 100 results from psychology. It succeeded in replicating fewer than half. Is psychology in crisis? No. Why would I say that? Because unreplicable research is only half of the problem, and we’re ignoring the other half. As with most pass/fail decisions by humans, a decision to publish after peer review can go wrong in two ways:

    1. Accepting work that “shouldn’t” be published (perhaps because it will turn out to have been unreplicable; a “false positive” or “Type I” error)
    2. Rejecting work that, for whatever reason, “should” be published (a “false negative” or “Type II” error).

    It is impossible to completely eliminate both types of error, and I’d even conjecture that it’s impossible for any credible peer review system to completely eliminate either type of error: even the most cursory quality peer review will occasionally reject good work, and even the most conservative quality peer review will accept crap. It is naïve to think that error can ever be eliminated from peer review. All you can do is change the ratio of false positives to false negatives, according to your own relative preference for the competing values of skepticism and credulity.
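
    That trade-off is the heart of signal detection theory, and a tiny simulation makes it concrete. A hypothetical sketch: suppose reviewers assign noisy quality scores, a journal accepts everything above a threshold, and moving the threshold just trades one kind of error for the other. The score distributions and all the numbers here are invented:

```python
import random

random.seed(2)

# Noisy reviewer scores: work that "should" be published scores higher
# on average, but the two distributions overlap, so no threshold is clean.
good = [random.gauss(1.0, 1.0) for _ in range(10000)]
bad = [random.gauss(-1.0, 1.0) for _ in range(10000)]

for threshold in (-1.0, 0.0, 1.0):
    type_1 = sum(s > threshold for s in bad) / len(bad)     # crap accepted
    type_2 = sum(s <= threshold for s in good) / len(good)  # good work rejected
    print(f"threshold {threshold:+.1f}: "
          f"{type_1:.0%} false positives, {type_2:.0%} false negatives")
```

    A strict threshold accepts almost no crap but rejects roughly half the good work; a lax one does the reverse. No setting drives both errors to zero.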

    So now you’ve got a choice, one that every discipline makes in a different way: you can build a conservative scientific culture that changes slowly, especially w.r.t. its sacred cows, or you can foster a faster and looser discipline with lots of exciting, tenuous, untrustworthy results getting thrown about all the time. Each discipline’s decision ends up nestling within a whole system of norms that develop for accommodating the corresponding glut of awful published work in the one case and excellent anathematic work in the other. It is hard to make general statements about whole disciplines, but peer review in economics tends to be more conservative than in psychology. So young economists, who are unlikely to have gotten anything through the scrutiny of their peer review processes, can get hired on the strength of totally unpublished working papers (which is crazy). And young psychologists, who quickly learn that they can’t always trust what they read, find themselves running many pilot experiments for every few they publish (which is also crazy). Different disciplines have different ways of doing science that are determined, in part, by their tolerances for Type I relative to Type II error.

    In short, the importance of publishing unreplicable research is that it helps keep all replicable research publishable, no matter how controversial. So if you’re prepared to make a judgement call and claim that one place on the error spectrum is better than another, that really says more about your own high or low tolerance for ambiguity, or about the discipline that trained you, than it does about Science And What Is Good For It. And if you like this analysis, thank psychology, because the concepts of false positives and negatives come out of signal detection theory, an important math-psych formalism that was developed in early human factors research.

    Because a lot of attention has gone toward the “false positive” problem of unreplicable research, I’ll close with a refresher on what the other kind of problem looks like in practice. Here is a dig at the theory of plate tectonics, which struggled for over half a century before it finally gained a general, begrudging acceptance:

    It is not scientific but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.*

    Take that, plate tectonics.

    About

    This entry was posted on Friday, September 4th, 2015 and is filed under science.


    First days at Dartmouth College

    I just arrived in Hanover, NH to start a new position at Dartmouth College. I earned a fellowship from the Neukom Institute for Computational Science. William Neukom is Bill Gates’ old lawyer, and he was an alumnus of the College, and it seems he wanted to give back by funding interdisciplinary research that involves computers. For the next few years, he’s funding mine.

    The town is very pretty. Also tiny: ten times smaller than Bloomington, IN, the previous smallest town I’ve lived in. But there’s plenty here for me, and plenty of goodness. Something tells me that, despite the deep deep isolated winter, life will be easier here than it was in Switzerland. I’ve already started mapping the local fruit trees, many of which are in action now. I found hops, which I’d never actually seen before, but which I recognized from beer bottles. Lots of microbrews use hops flowers as a motif to ornament their labels. Ha! The flower looks more like the bud of a flower than like a flower itself. If you grab it and mash it up in your fingers it smells delicious. It might actually be edible, though some parts of the flower are more palatable than others. And I’ve had another unexpected find for this latitude. I was walking along an ivy-covered wall, or at least I assumed it was ivy, and found it was grape-vine-covered instead. And I found another bunch of grape vines later that day. All way under-ripe, but I’ll be paying attention as the season passes. I’ll be totally surprised, and utterly pleased, if edible grapes grow around here in the fall.

    About

    This entry was posted on Tuesday, September 1st, 2015 and is filed under updates.


    Extra info about my appearance on BBC Radio 4

    I was on a BBC radio documentary by Jolyon Jenkins, “Rock Paper Scissors.” The goal of the documentary was to show that this seemingly trivial game is secretly fascinating, because of what we humans make of it. My own academic contribution to that fun claim has been published here and in much more detail here.

    Jolyon was a gracious host, but the documentary was released without any word or warning to me, and with rough spots. I’ve got to clarify a few things.

    The most important is an error. The show ended with my describing a game in which people “irrationally” herd together and make lots of money. The results of this game were reported faithfully in the show, but the game itself got defined wrong, and in a way that makes the results impossible. Here’s the full game: All of you pick an integer 1 through X. Each person gets a buck for picking a number exactly one more than what someone else picked. ADDITIONALLY, the number 1 is defined to be exactly one more than the number X, making the choices into a big circle of numbers. The documentary left that last bit out, and it’s really important. Without anything to beat X, I’m guessing that everyone will converge pretty quickly on X without much of this flocking behavior. It’s only when the game is like Rock Paper Scissors, with no single choice that can’t be beaten by another, that you start to see the strange behavior I describe in the show.
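    For concreteness, here is a minimal sketch of the payoff rule as described above (the code and names are mine, not from the show or the underlying paper):

```python
# Payoff rule for the cyclic number game: each player picks an integer
# in 1..X and earns a dollar for every other player whose pick is exactly
# one less. Choice 1 beats choice X -- the wraparound the documentary omitted.

def payoffs(picks, X):
    """Return each player's earnings for one round of picks."""
    earnings = []
    for i, p in enumerate(picks):
        beaten = X if p == 1 else p - 1   # the number that pick p "beats"
        earnings.append(sum(1 for j, q in enumerate(picks)
                            if j != i and q == beaten))
    return earnings

# With the wraparound, every choice can be beaten by another:
print(payoffs([1, 3, 2, 3], X=3))   # [2, 1, 1, 1]
```

    Drop the wraparound (the `X if p == 1` branch) and nothing beats X, which is why play would plausibly collapse onto X instead of flocking around the circle.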

    Three more things. First, all of the work was done with my coauthor and advisor Rob Goldstone at IU, who wasn’t mentioned. Second, it’s inaccurate, and important to correct, that Jolyon implicitly linked my past work to my current employer: the work presented on the show was performed before I started with Disney Research, and has nothing to do with my work for Disney Research. Last, a lot of what I said on the show was informed by the work of Colin Camerer, specifically things like this.

    About

    This entry was posted on Wednesday, August 5th, 2015 and is filed under audio/visual, updates.


    Bern, Switzerland hires ex biker gang to scare trash sinners straight

    I met a Swiss behavioral economist who had partnered with his city to see how “nudges” could prevent people from sneaking out in the night and illicitly leaving bags of trash at neighborhood recycling stations. Many Swiss cities tax trash per bag to incentivize waste reduction, with the side-effect that shirking takes the form of discreet dumping. Treatments for the side-effect take many forms. The most wild, according to this economist, was an initiative from Bern in which the city hired the Euro equivalent of Hell’s Angels to hide in the bushes and pop out to catch people in the act. It sounded too wild to be true, and the hiding was maybe an exaggeration, but I got pointed to a few articles from Swiss tabloids that support the story, this one from 20 Minuten and the one below from Blick, translated by Jillian.

    doc20150723011006

    The Green Rockers of Bern

    Their reputation precedes them. The Broncos security forces don’t just look mean; they also have a reputation in Bern for taking drastic measures — qualities that are now bringing the security group, which is no longer in association with the rock club of the same name, into unexpected territory. According to the Berner Zeitung, the municipality of Köniz has commissioned the Broncos to ensure that people don’t throw extraneous trash into the recycling bins meant for aluminum, glass, and tin.
    The services of the Broncos in the fight against “trash sinners” cost the community a cool 9000 francs. But the Broncos aren’t just there to look mean. They’re supposed to be doing preventative work — and passing out leaflets.

    About

    This entry was posted on Sunday, August 2nd, 2015 and is filed under systems of culture.


    Paper on Go experts in Journal of Behavioral and Experimental Economics

    I just published a paper with Sascha Baghestanian​ on expert Go players.

    Journal of Behavioral and Experimental Economics

    It turns out that having a higher professional Go ranking correlates negatively with cooperation — but being better at logic puzzles correlates positively. This challenges the common wisdom that interactive decisions (game theory) and individual decisions (decision theory) invoke the same kind of personal-utility-maximizing reasoning. By our evidence, only the first one tries to maximize utility through backstabbing. Go figure!

    This paper only took three years and four rejections to publish. Sascha got the data by crashing an international Go competition and signing up a bunch of champs for testing.

    About

    This entry was posted on Saturday, July 25th, 2015 and is filed under science, updates.


    Individually wrapped M&Ms

    Visiting Japan, with a little time in Taiwan and Korea, I’ve found the amount of packaging around snacks in East Asia comical. But these individually wrapped M&Ms, from Indonesia, take the cake. They’re packaged with a foil backing, like pills.

    individually-wrapped-mnms

    About

    This entry was posted on Thursday, July 16th, 2015 and is filed under life and words.


    Unplanned downtime last week

    The site was down because I got hacked — I didn’t keep WordPress up to date. The site is still affected, with slow load times. The good news was that someone told me about the problem, a sure sign that people actually read this blog. Go figure.

    About

    This entry was posted on Thursday, July 16th, 2015 and is filed under updates.


    Louisiana’s “daiquiri exemption” and other drinks while driving.

    Short article on states that permit drinking while driving. File under heterogeneity and the living tension between governance institutions that exist at different scales.

    About

    This entry was posted on Saturday, May 16th, 2015 and is filed under systems of culture.


    Economic game theory’s “folk theorem” is not empirically relevant

    I study a lot of game dynamics: how people learn as they make the same socially-inflected decision over and over. A branch of my career has been devoted to finding out that people do neat unexpected things that are totally unpredicted by established models. Like in most things, anything close to opposition to this work looks less like resistance and more like indifference. One concrete reason, in my area, is that it is old news that strange things can happen in repeated games. That is thanks to the venerated folk theorem. As Fisher (1989) put it, the “folk theorem” is as follows:

    in an infinitely repeated game with low enough discount rates, any outcome that is individually rational can turn out to be a Nash equilibrium (Fudenberg and Maskin, 1986). Crudely put: anything that one might imagine as sensible can turn out to be the answer

    It is a mathematical result, a result about formal systems. And it is used to say that, in the real world, anything goes in the domain of repeated games. But it can’t be wrong: no matter what one finds in the real world, a game theorist could say “Ah yes, the folk theorem said that could happen.” What’s that mean for me? Good news. The folk theorem, as much as we love it, is fine logic, but it isn’t science. It says a lot about systems of equations, but because it can’t be falsified, it has nothing to offer the empirical study of human behavior.

    Oh, FYI, I’d love to be wrong here. If you can find a way to falsify the Folk Theorem, let me know. Alternatively, I’d love to find a citation that says this better than I do here.

    Fisher F.M. (1989). Games Economists Play: A Noncooperative View, The RAND Journal of Economics, 20 (1) 113. DOI: http://dx.doi.org/10.2307/2555655


    Use Shakespeare criticism to inspire language processing research in cognitive science

    I have a side-track of research in the area of “empirical humanities.” I got to present this abstract recently at a conference called “Cognitive futures in the humanities.”

    It might seem self-evident that “the pun … must be noticed as such for it to work its poetic effect.” Joel Fineman says it confidently in his discussion of Shakespeare’s “Sonnet 132.” But experimental psychologists have proven that people are affected by literary devices that they did not notice. That is a problem with self-evidence, and it reveals one half of the promise of empirical humanities.

    Counterintuition pervades every aspect of language experience. Consider the four versions of the following sentence, and how the semantic connections they highlight could affect conscious recognition of the malapropism at pack: “Parker could not have died by [suicide/cigarettes], as he made a [pact with the devil/pack with the devil] that guaranteed immortal life.” Pack is an error. Cigarette semantically “primes” it, just as suicide primes pact. Will readers be more disturbed by pack when it is primed, or less? Does cigarette disguise pack or make it pop out? Classic theories in cognitive science would argue for the latter, that priming the malapropism will make it more disruptive and harder to miss. But no scientific theory has considered the alternative. I hadn’t myself until I reviewed the self-evidence of Shakespeare scholar Stephen Booth. This is the other half of the promise of empirical humanities. Literary criticism can reveal new possibilities in unquestioned cognitive theories, and inspire new tracks of thought.

    After reviewing some lab work in the human mind, and some literary fieldwork there, I will tell you what cigarette does to pack.

    It was fun spending a week learning how humanities people think. The experiment is joint work with Melody Dye and Greg Cox.


    Some people know how to kill

    Certain processes are vital to the computer’s operation and should not be killed. For example, after I took the screenshot of myself being attacked by csh, csh was shot by friendly fire from behind, possibly by tcsh or xv, and my session was abruptly terminated.

    Context? This. Turns out I’m only 14 years behind the latest word on Doom as a system administration tool.


    Some people know how to live

    While following the Rolling Stones across the country during their 1972 tour, Jim Bell found a discarded cardboard sign on the side of the road that read: “ALASKA.” On a whim he picked it up and stuck his thumb out. He’s been here ever since.*

    About

    This entry was posted on Saturday, March 7th, 2015 and is filed under life and words.


    Cultural arbitrariness is not the thing that is at the root of how race doesn’t exist.

    On the old Radiolab episode about race, the producers used an interesting fact to make an argument that race doesn’t exist — that it’s entirely a social construct. It turns out that the genetic variability within races is greater than the variability between races; the average difference between two people of the same race is greater than the average difference between the races as groups. In that sense, the idea of race is not really meaningful. But the same is true for the sounds p and b.

    Put your finger to your throat and say “ppuh.” Then say “bbuh.” The vibration you felt for the second one is called voicing; it’s supposed to be the only difference between p and b. That said, things get fuzzy fast. Say “pee.” “Pee” doesn’t start out voiced, but it ends that way (in contrast with “bee,” which is voiced more from the beginning). Depending on context, you can actually move voicing up a lot more and still be perceived as uttering a p. And you can move voicing down from the beginning and still be uttering a b. There are big individual differences too, so that the thing that came out of my mouth as a p might have come out of yours sounding like a b. In real everyday language, the fluctuations are so wild that the variability within p or b is greater than the variability between them.

    Does race exist? As much as p and b do. So wait: Do p and b exist? It turns out that there are sharp people working to destroy the ideas of the sounds p and b. For example, cognitive scientist Bob Port put his career behind undermining the static approaches to phonology that permitted the idea of linguistic atoms. And there’s something to it. It turns out that p and b are really complicated. But he can still pronounce his name. It seems you don’t have to be able to draw a clear line between them for them to be used by reasonable people as ideas. To take them too seriously is wrong, and to think that they can’t be used responsibly, or even usefully, is also wrong.

    p, b, and race all look superficially like basic building blocks, but really they are each a complicated result of things like physiology, culture, and the context of each instant. So they are constructs, but not just social constructs. Their cultural arbitrariness is not the thing that is at the root of how they don’t exist. What does it mean for you? These constructs aren’t insubstantial because they are nothing, they are insubstantial because they are complicated.


    Prediction: Tomorrow’s games and new media will be public health hazards.

    Every psychology undergraduate learns the same scientific parable of addiction. A rat with a line to its veins is put in a box, a “Skinner Box,” with a rat-friendly lever that releases small amounts of cocaine. The rat quickly learns to associate the lever with a rush, and starts to press it, over and over, forgoing nourishment and sociality, until death, often by stroke or heart failure.

    Fortunately, rat self-administration studies, which go back to the 1960’s, offer a mere metaphor for human addiction. A human’s course down the same path is much less literal. People don’t literally jam a “self-stimulate” button until death. Right? Last week, Mr. Hsieh from Taiwan was found dead after playing unnamed “combat computer games” for three days straight. Heart failure. His case follows a handful of others from the past decade, from Asia and the West. Streaks of 12 hours to 29 days, causes of death including strokes, heart failure, and other awful things. One guy foamed at the mouth before dropping dead.

    East Asia is leagues ahead of the West in the state of its video game culture. Multiplayer online games are a national pastime, with national heroes and nationally-televised tournaments. (And the South Korean government has taken a public health perspective on the downsides, with a 2011 curfew for online gamers under 18.) Among the young, games play the role that football plays for the rest of the world. With Amazon’s recent purchase of e-sport broadcaster twitch.tv, for $1.1 billion, there is every reason to believe that this is where things are going in the West.

    Science and industry are toolkits, and you can use them to take the world virtually anywhere. With infinite possibilities, the one direction you ultimately choose says a lot about you, and your values. The gaming industry values immersion. You can see it in the advance of computer graphics and, more recently, in the ubiquity of social gaming and gamification. You can see it in the positively retro fascination of Silicon Valley with the outmoded 1950’s “behaviorist” school of psychology, with its Skinner boxes, stimuli and responses, classical conditioning, operant conditioning, positive reinforcement, and newfangled (1970’s) intermittent reinforcement. Compulsion loops and dopamine traps. Betable.com, another big dreamer, is inspiring us all with its wager that the future of gaming is next to Vegas. Incidentally, behaviorism seems to be the most monetizable of the psychologies.

    And VR is the next step in immersion, a big step. Facebook has bet $400 million on it. Virtual reality uses the human visual system — the sensory modality with the highest bandwidth for information — to provide seamless access to human neurophysiology. It works at such a fundamental level that the engineering challenges remaining in VR are no longer technological (real-time graphics rendering can now feed information fast enough to keep up with the amazing human eye). Today’s challenges are more about managing human physiology, specifically, nausea. In VR, the easiest game to engineer is “Vomit Horror Show,” and any other game is hard. Nausea is a sign that your body is struggling to resolve conflicting signals; your body doesn’t know what’s real. Developers are being forced to reimagine basic principles of game and interface design.*** Third-person perspective is uncomfortable; it makes your head swim. Cut scenes are uncomfortable for the lack of control. If your physical body is sitting while your virtual body stands, it’s possible to feel like you’re the wrong height (also uncomfortable). And the door that VR opens can close behind it: It isn’t suited to the forms that we think of when we think of video games: top-down games that make you a mastermind or a god, “side-scroller” action games, detached and cerebral puzzle games. VR is about first-person perspective, you, and making you forget what’s real.

    We use rats in science because their physiology is a good model of human physiology. But I rolled my eyes when my professor made his dramatic pause after telling the rat story. Surely, humans are a few notches up when it comes to self control. We wouldn’t literally jam the happy button to death. We can see what’s going on. Mr. Hsieh’s Skinner Box was gaming, probably first-person gaming, and he self-administered with the left mouse button, which you can use to kill. These stories are newsworthy today because they’re still news, but all the pieces are in place for them to become newsworthy because people are dying. The game industry has always had some blood on its hands. Games can be gory and they can teach and indulge violent fantasizing. But if these people are any indication, that blood is going to become a lot harder to tell from the real thing.

    About

    This entry was posted on Thursday, January 29th, 2015 and is filed under science, straight-geek.


    The law of welfare royalty

    To propose that human society is governed by laws is generally foolhardy. I wouldn’t object to a Law of Social Laws along the lines that all generalizations are false. But this observation has a bit going for it, namely that it depends on the inherent complexity of society, and on human limits. Those are things we can count on.

    The law of welfare royalty: Every scheme for categorizing members of a large-scale society will suffer from at least one false positive and at least one false negative.

    The law says that every social label will be misapplied in two ways: It will be used to label people it shouldn’t (false positive), and it will fail to be applied to people it should (false negative). Both errors will exist.

    The ideas of false positives and false negatives come from signal detection theory, which is about labeling things. If you fire a gun in the direction of someone who might be friend or foe, four things can happen: a good hit, a good miss, a bad hit (friendly fire), and a bad miss.** Failing to keep all four outcomes in mind leads to bad reasoning about humans and society, especially when it comes to news and politics.
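    The four outcomes fall straight out of a two-by-two truth table, which a few lines of code (mine, purely illustrative) make explicit:

```python
# Signal detection theory's four outcomes, in the friend-or-foe framing:
# "signal" means the person really is a foe; "response" means you fired.

def outcome(signal, response):
    if signal and response:
        return "hit"                  # a good hit
    if signal and not response:
        return "miss"                 # a bad miss (false negative)
    if not signal and response:
        return "false alarm"          # friendly fire (false positive)
    return "correct rejection"        # a good miss

for signal in (True, False):
    for response in (True, False):
        print(signal, response, "->", outcome(signal, response))
```

    Every labeling scheme fills in the same two-by-two table; what varies is the cost a society attaches to each cell.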

    Examples:

    • No matter how generous a social welfare system, it will always be possible to find someone suffering from starvation and exposure, and to use their story to argue for more generosity.
    • No matter how stingy and inadequate a welfare system, it will always be possible to cry “waste” and “scandal” on some kind of welfare royalty abusing the system.
    • No matter the inherent threat of violence from a distant ethnic group, it will always be possible to report both a very high and a very low threat of violence.
    • Airport security measures are all about tolerating a very very high rate of false positives (they search everybody) in order to prevent misses (letting actual terrorists board planes unsearched), but it cannot be guaranteed to succeed, and the cost of searching everybody has to be measured against that.
    • In many places, jaywalking laws are only used to shut down public protests. During street protests, jaywalking laws have a 0% hit rate and a 0% correct reject (true negative) rate: they never catch people they should, and they catch all of the people they shouldn’t.

    The law of welfare royalty is important for how we think about society and social change. The upshot is that trustworthy reporting about social categories must report using lots of data. Anecdotes will always be available to support any opinion about any aspect of society. You can also infer from my formulation of the law a corollary that there will always be a talking head prepared to support your opinion, though that isn’t so deep or interesting or surprising.

    In fact, none of this is so surprising once a person thinks about it. The challenge is getting a person to think about it, even once. That’s the value of giving the concept a name. If I could choose one facet of statistical literacy to upload into the head of every human being, it would be a native comfort with the complementary concepts of false positives and negatives. Call it a waste of an upload if you want, but signal detection theory has become a basic part of my daily intellectual hygiene.


    Scott’s “Art of not being governed,” in a nutshell

    I read James Scott’s 2009 “The art of not being governed: An anarchist history of upland Southeast Asia.” I’ve reproduced a quote that concisely captures a nice chunk of the ~400 page book. The terms raw and cooked are Chinese euphemisms, possibly outmoded, for distinguishing between “barbarians” and “citizens.” It would seem to describe refinement, but it could as easily have come down to whether the person was someone who paid taxes.

    As a political location — outside the state but adjacent to it — the ethnicized barbarians represent a permanent example of defiance of central authority. Semiotically necessary to the cultural idea of civilization, the barbarians are also well nigh ineradicable, owing to their defensive advantages in terrain, in dispersal, in segmentary social organization, and in their mobile, fugitive subsistence strategies. They remain an example — and thus an option, a temptation — of a form of social organization outside state-based hierarchy and taxes. One imagines that the eighteenth century Buddhist rebel against the Qing in Yunnan understood the appeal of “barbarian-ness” when he exhorted people with the chant: “Api’s followers need pay no taxes. They plow for themselves and eat their own produce.” For officials of the nearby state, the barbarians represent a refuge for criminals and rebels, and an exit for tax-shy subjects.
    The actual appeal of “barbarity,” of residing out of the state’s reach — let alone forsaking civilization — has no logical place in the official state narratives of the four major civilizations that concern us here: the Han-Chinese, the Vietnamese, the Burman, and the Siamese. All are “premised on irrevocable assimilation in a single direction.” In the Han case, the very terms raw and cooked imply irreversibility: raw meat can be cooked but it cannot be “uncooked” — though it can spoil! No two-way traffic or backsliding is provided for. Nor does it allow for the indisputable fact that the core civilizations to which assimilation is envisaged are, themselves, a cultural alloy of many diverse sources.

    This is one of those books where the Conclusion is a good synopsis of the whole book, so if you want more after these notes, read that first.

    Scott, James C. (2009). The Art of Not Being Governed: An Anarchist History of Upland Southeast Asia. New Haven: Yale University Press. Reviewed in “Life without the State,” Anthropology Now, 3 (3), 111–114. DOI: http://dx.doi.org/10.5816/anthropologynow.3.3.0111

    About

    This entry was posted on Saturday, January 17th, 2015 and is filed under books.


    Egg yolks as design feature

    I have more trouble than I should remembering how many cups of flour I’ve put in the batter so far, but I never have trouble remembering the number of eggs, because each egg comes with a yellow token to help in keeping count. I’d say eggs are pretty well-designed, even if I don’t completely understand all of the design decisions behind them. For example, why aerodynamic?

    About

    This entry was posted on Wednesday, January 14th, 2015 and is filed under Uncategorized.


    Back by one forward by two: Does planning for norm failure encourage it?

    Most people who care about resource management care about big global common resources: oceans, forests, rivers, the air. But the commons that we deal with directly — shared fridges, flagging book clubs, public restrooms — may be as important. These “mundane” commons give everyday people experiences of governance, possibly the only type of experience that humanity can rely on to solve global commons dilemmas.

    I think that’s important, and so the problems of maintaining mundane commons always get me. One community of mine, my lab, has recently had trouble with a norm of “add one clean two.” Take a sink shared with many people, at an office or in a community. There are a million ways to keep this kind of resource clean, and I see new ideas everywhere I look. Still, most shared sinks have dirty dishes. One recently proposed idea was “add one clean two.” If you can’t count on every individual to clean their own dish, why not appeal to the prosocial people (the ones most likely to discuss the problem as a problem) to clean two dishes for every one they add?

    On the one hand, this cleverly embraces heterogeneity of cooperativeness to solve an institutional design problem. On the other, a norm built on the premise that violators exist makes it OK for people to continue to leave their dishes dirty. It isn’t clear to me what conditions would make the first effect overpower the second. Seems testable though.


    The scientist as dataset — specifically a high-rez, 4-D facial capture dataset

    RigidStabilizationSelfie
    I am data for my colleagues at Disney Research. Note lawless dentition and sorry excuse for anger.


    The intriguing weaknesses of deep learning and deep neural networks

    Deep learning (and neural networks generally) has impressed me a lot for what it can do, but much more so for what it can’t. It seems to be vulnerable to several of the very same strange, deep design limits that seem to constrain the human mind-brain system.

    • The intractability of introspection. The fact that we can know things without knowing why we know them, or even that we know them. Having trained a deep network, it’s a whole other machine learning problem just to figure out how it is doing what it is doing.
    • Bad engineering. Both neural networks and the brain are poorly engineered in the sense that they perform action X in a way that no mechanical or electrical engineer would ever have designed a machine to do X.** These systems don’t respect modularity and it is hard to analyze them with a pencil and paper. They are hard to diagnose, troubleshoot, and reverse-engineer. That’s probably important to why they work.
    • The difficulty of unlearning. The impossibility of “unseeing” the object in the image on the left (your right), once you know what it is. That is a property that neural networks share with the brain. Well, maybe that isn’t a fact, maybe I’m just conjecturing. If so, call it a conjecture: I predict that Facebook’s DeepFace, after it has successfully adapted to your new haircut, has more trouble than it should in forgetting your old one.
    • Very fast performance after very slow training. Humans make decisions in milliseconds, decisions based on patterns learned from a lifetime of experience and tons of data. In fact, the separation between the training and test phases that is standard in machine learning is more of an artifice in deep networks, whose recurrent varieties can be seen as lacking the dichotomy.
    • There are probably others, but I recognize them only slowly.

    Careful. Once you know what this is, there’s no going back.

    Unlearning, fast learning, introspection, and “good” design aren’t hard to engineer: we already have artificial intelligences with these properties, and we humans can easily do things that seem much harder. But neither humans nor deep networks are good at any of these things. In my eyes, the fact that deep learning is reproducing these seemingly-deep design limitations of the human mind gives it tremendous credibility as an approach to human-like AI.

    The coolest thing about a Ph.D. in cognitive science is that it constitutes license, almost literally, to speculate about the nature of consciousness. I used to be a big skeptic of the ambitions of AI to create human-like intelligence. Now I could go either way. But I’m still convinced that getting it, if we get it, will not imply understanding it.


    About

    This entry was posted on Sunday, December 21st, 2014 and is filed under science.


    The best skeptics are gullible

    Our culture groups science with concepts like skepticism, logic, and reductionism, together as a cluster in opposition to creativity, holistic reasoning, and the “right brain.” This network of alliances feeds into another opposition our culture accepts, that between art and science. I’ve always looked down on the whole thing, but sometimes I feel lonely in that.

    The opposite of skepticism is credulity, a readiness to believe things. For my part, I try to communicate a vision for science in which skepticism and credulity are equal and complementary tools in the production of scientific insight. An imbalance of either is dangerous, one for increasing the number of wrong ideas that survive (the “false positive” rate) and the other for increasing the number of good ideas that die (the “miss” rate). Lots of both is good if you can manage it, but people allow themselves to identify with one or the other. As far as I’m concerned, the cost of confining yourself like that just isn’t worth the security of feeling like you know who you are.

    Skepticism and credulity are equally important to my intellectual hygiene. It’s very valuable, on hearing an idea, to be able to put up a fight and pick away every assumption it rests on. It’s equally valuable, on hearing the same idea, either before or after I’ve given it hell, to do everything I can to make it hold — and the more upside-down I can turn the world, the better. Sometimes that means readjusting my prior beliefs about the way the world works. More often it means assuming a little good faith and having a little patience with the person at the front of the room. If some superficial word choice makes you bristle, switch it out with a related word, one that you have permitted to exist. If you have too little of either, skepticism or credulity, you’re doing injustice to the world, to your community, and, most importantly, to yourself.

    Don’t take my word for it. Here’s a nice bit from Daniel Kahneman, on working with his longtime colleague Amos Tversky.

    … perhaps most important, we checked our critical weapons at the door. Both Amos and I were critical and argumentative, he even more than I, but during the years of our collaboration neither of us ever rejected out of hand anything the other said. (from page 6 of his Thinking, fast and slow, which is like having a user manual for your brain)

    I’m not saying that there isn’t enough credulity in the scientific community. There’s a lot, it’s dangerous, it should be treated with respect. In a good skeptic, credulity is a quality, not a lapse. Making room for it in the scientific attitude is the first step toward recognizing that creativity is, and has always been, as basic as analytic rigor to good science.

    Arvai J. (2013). Thinking, fast and slow, Daniel Kahneman, Farrar, Straus & Giroux, Journal of Risk Research, 16 (10) 1322-1324. DOI: http://dx.doi.org/10.1080/13669877.2013.766389


    This entry was posted on Thursday, December 18th, 2014 and is filed under Uncategorized.


    Sailing west down the Panama Canal will get you into which ocean?

    The Atlantic.

    And when you get to the Pacific and sail up to L.A., you can drive west toward Reno. While we’re at it, there is also a sliver of the world where the timezones go backward. Thank you, geopolitics.


    Photo from Wikipedia.


    Common-knowledge arbitrage

    Hypothesis 1: Ask people what they think about a stock or a political issue, and also what they think “most people” think. Where these guesses are the same, predictions about the outcome will be right. Where they differ, outcomes will have more upsets.

    There are a few places where I would ultimately want to see this perspective go. One would look at advertising and other goal-oriented broadcasts as aimed at strategically creating a difference between what people think and what they think others think. Another would try to predict changes in finance markets based on these differences. This perspective will be useful in any domain where people don’t merely act on what they think, but on the differences with their estimate of common knowledge. It will also be useful in domains where people’s expressed opinions differ from their privately held ones.

    Hypothesis 2: Holding everything else still, average opinion and the average of estimates of public opinion will tend toward being equal.

    If this second guess is true, a systematic significant difference between the average opinion and the average estimate of public opinion could provide an objective measure of propaganda pressure, one that could be used to assign a number to the strength of social pressure that is being applied by a goal-oriented agent working on a population through the mass media ecosystem.

    But maybe that is too conspiracy-theory-ey, and too top-down. The same measure could indicate a bottom-up dynamic. Take a social taboo that is privately ignored but still publicly upheld. In such a domain, it will be common for expressed opinions to differ from held opinions, which will drive a consistent non-zero difference between average opinion and average received opinion. Over a dozen taboos, those with a large or growing divergence will be those most likely to become outmoded. Anecdotally, I’m thinking here of the surprising, and surprisingly robust, changes in opinion and policy around controlled substances, most striking in California.
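Whichever interpretation is right, the raw measure itself is simple: a difference of means between what people think and what they think “most people” think. A minimal sketch, with invented data and a function name of my own:

```python
def propaganda_pressure(own_opinions, estimated_public_opinions):
    """Mean difference between privately held opinions and estimates of
    'most people's' opinion, measured on the same numeric scale.

    Near zero: opinion and perceived opinion agree. A large, persistent
    gap is the hypothesized signature of propaganda pressure (or of a
    taboo that is privately ignored but publicly upheld)."""
    mean_own = sum(own_opinions) / len(own_opinions)
    mean_est = sum(estimated_public_opinions) / len(estimated_public_opinions)
    return mean_own - mean_est

# Five people rate a policy 0-10, and also guess the population average.
own = [7, 8, 6, 9, 7]        # privately held opinions
perceived = [4, 5, 3, 4, 4]  # guesses about "what most people think"
gap = propaganda_pressure(own, perceived)  # 7.4 - 4.0 = 3.4
```

Tracking this gap per topic over time is what Hypothesis 2 would call for: a stable near-zero gap in calm domains, and a persistent signed gap where pressure, or a dying taboo, is at work.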

    Hypothesis 3: This is a little idle, but I would also guess that people with larger differences tend to be less happy, particularly where the differences concentrate on highly politicized topics. Causation there could go either way — I’d guess both ways.

    This subject has some relationship to some extensions to Schelling’s opinion models and to my dissertation work (on surprising group-scale effects of “what you think I think you think I think” reasoning).


    Do social preferences break “I split, you choose”?

    Hypothesis: Social preferences undermine the fairness, efficiency, and stability of “I cut, you choose” rules.

    A lot of people chafe at the assumptions behind game theory and standard economic theory, and I don’t blame them. If those theories were right, a lot of things in our daily lives wouldn’t work as well as they obviously do. But I came up with an example of the opposite: an everyday institution that would work a lot better if we weren’t so generous and egalitarian — if we didn’t have “social preferences.” Maybe; this is just a hypothesis, one that I may never get around to testing, but here it is.

    “I cut, you choose” is a pretty common method for splitting things, especially in domains where the resource is hard to split evenly. Academically, it is appealing because it is easy to describe mathematically: it is a clean real-world version of a classic Nash bargaining problem. There is a finite resource, and two agents must agree about how to split it. The first person divides it into two parts and the second is free to pick the bigger one. The splitter knows that the picker will choose the larger part, and that he or she can therefore do no better than 50%. This incentivizes the splitter to aim for a completely fair division. Binmore has a theory that cultural evolution will select for social situations that are stable, efficient, and fair, and “I split, you choose” has those qualities, in theory.

    It sounds fine, and I’ve seen it work great, but I’ve also seen it go wrong, particularly among the guilty and shy. In the splitter role they get anxious, and in the receiver role they tend to pick the smaller share. It might sound heartless for someone to exploit that, but my wonderful boss did: he was splitting a candy bar with an anxious friend and proposed “I split, you choose.” He volunteered to be the splitter, and proceeded to divide the bar blatantly 70/30. What did the victim do? He knew he was being manipulated, he watched the split with horror, but, however wounded, he mysteriously picked the smaller share. Social preferences, in that case, make “I split, you choose” into an institution that is neither stable nor fair, and, if it’s efficient, it’s only because every possible outcome is equally efficient.
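One way to make the anecdote precise, using a model of my own choosing rather than anything from the story, is Fehr–Schmidt-style inequity aversion: the picker’s payoff is their share minus a guilt cost proportional to how far they come out ahead.

```python
def picker_utility(own, other, beta):
    """Material payoff minus a guilt cost for coming out ahead
    (the 'advantageous inequity' term of a Fehr-Schmidt utility)."""
    return own - beta * max(0.0, own - other)

def best_pick(split, beta):
    """Which share does the picker prefer from a (bigger, smaller) split?"""
    a, b = split
    return a if picker_utility(a, b, beta) >= picker_utility(b, a, beta) else b

# The boss splits the candy bar 70/30.
best_pick((70, 30), beta=0.0)   # a selfish picker takes 70
best_pick((70, 30), beta=1.5)   # a guilty-enough picker (beta > 1) takes 30
```

Notice how strong the guilt has to be: only when the guilt weight exceeds 1, so that being a unit ahead hurts more than a unit of candy is worth, does the picker take the smaller piece. That is some measure of how wounded the victim in the story must have been.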

    That’s interesting because we normally think of game theory as this sterile thing that implies a selfish existence whose only redeeming value is that it’s contradicted by our social preferences, which make everything better. But, if I’m right, this is a clean example of the opposite. Game theory would be offering a very nice clean institution, and social preferences break it.


    Regex crossword puzzle

    This showed up at the lab one day. Print it out, give it a try.
    regexcrossword
    I have no idea who to credit. If you don’t know what this is, that’s OK. In my opinion, ignorance, in this case, is bliss, but this explains the basic idea. And, if you’re interested, here are more puzzles.
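For the curious, checking a regex crossword solution is mechanical: every row, read left to right, and every column, read top to bottom, must fully match its clue. A minimal sketch using a tiny puzzle of my own invention (nothing to do with the printed one):

```python
import re

def check(grid, row_clues, col_clues):
    """A solved grid is valid when every row matches its row clue and
    every column (read top to bottom) matches its column clue."""
    cols = ["".join(col) for col in zip(*grid)]
    return (all(re.fullmatch(c, r) for c, r in zip(row_clues, grid))
            and all(re.fullmatch(c, col) for c, col in zip(col_clues, cols)))

# A made-up 2x2 puzzle and a correct solution:
check(["OK", "NO"], ["OK|GO", "N[AO]"], ["O[NK]", "[KC]O"])  # True
# Change one cell and a column clue fails:
check(["GO", "NO"], ["OK|GO", "N[AO]"], ["O[NK]", "[KC]O"])  # False
```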

    About

    This entry was posted on Tuesday, August 26th, 2014 and is filed under straight-geek.


    Design for magical spherical dice (3D printed)

    I designed a die. It’s special because it’s a sphere pretending to have six sides: each roll will end with one to six dots facing up. It’s also special because you can print a copy. The trick is a weight that falls into one of six pockets under each of the numbers. “Spherical dice” sounds better than “spherical die,” so print two.

    Some assembly is required: you just have to drop in the weight and jam in the plug. According to the colleague who helped me, Nobuyuki Umetani, fishing stores are the best place to get lead. To jam the plug in, your thumb will do. Most of the plug will still be sticking out post-jam, and you’ll have to snap off the rest. The way many printers print makes parts snap cleanly along the path of the printer head. So score the plug by drawing a knife around its diameter where it meets the sphere, steady it (with a vice or on the edge of a table), and give it a good whack.

    Notes

    • The roll has satisfying action. Video at bottom.
    • The plug is tapered so as to jam well. It functions as the number one.
    • With this design, the strength of the jam may influence the fairness of the die. Probably not a real concern (since the ball’s mid-air choice of pockets will have a bigger influence on the outcome), but this is an imperfection in the design. Someone will have to do a few thousand or so rolls to make sure.
    • The density of the fill and the weight of the missing dots could also influence a die’s fairness, but if you care that much then you know not to bet six with any dice that didn’t come through a casino.
    • You can fill the dots in if you want them to stand out. Nail polish will do. Just be careful: the plastic doesn’t forgive mistakes, because its layers act like capillaries and suck up runny paint (or nail polish that’s been thinned with remover).
    • You want the diameter of the lead weight to leave some wiggle in the pockets. If your weight is a snug fit into the die, get a smaller weight (or scale up the size of the model).
    • I’ve oriented the model at an angle so that it’ll print correctly (without any support material on the inside) if your printer can handle printing a 45° overhang. It probably can? I don’t know how common that is, but the machine I used can.
    • The original design subtracted an octahedron from the center of the sphere, but it was a little too sensitive, and also harder to make fair, so I redesigned it to subtract three mutually orthogonal boxes.
    • Workflow was 123D (for the orthogonal part) to Meshmixer (to sculpt out the dots) to MakerWare (staging and path planning) to a second generation Replicator.
    • I got the idea from someone who did the same thing in wood. I saw it for sale at a store called Aha.

    And, this is how I roll:

    About

    This entry was posted on Saturday, July 26th, 2014 and is filed under straight-geek, tricks.


    “No wang-wang zone”

    My dad lives in the Philippines, and I was in the Manila airport on my way to visit him. I was in the part where you get in line and wait for them to glance at your passport when I saw a cheap, computer-printed sign taped to a column.

    This is a no wang-wang zone.
    We have cameras, don’t embarrass yourself.
    Stay in line.

    I couldn’t help but notice it, which made me wonder why I’d never seen it before — I’ve been visiting that country every few years since I was 12. It didn’t seem like a very airport type of sign.

    Eventually I had a chance to ask my stepmom, and she explained. Wang-wang is pretending that you’re really important. Say you don’t want to wait in traffic. You light up the siren on your dashboard, get over to the shoulder, and plough on past all the suckers. Or say you don’t want to wait in line at the airport. You put on a pair of sunglasses, tell your family to follow you in a tight pack while tittering about your celebrity or photographing you, and stroll confidently past all the long lines right on through to baggage. It’s called wang-wang because of the first example: That’s the sound a siren makes.

    I’m not one of these Foucault-fawning critical theory types — at all — but when I think about wang-wang I can’t shake words like “postmodern” and “postcolonial.” They don’t usually go together, but I think wang-wang is both. The Philippines was colonized for centuries and the country, despite its merits, has a bad case of some of the worst aspects of Western culture, like extreme wealth disparities, deified celebrities, and the use of bureaucracy for interfacing between citizen and state.

    Naturally there are forms of resistance to those things, and wang-wang is one, but it stands out to me for how savvy it is to the arbitrariness of power. The terms of being a powerful person are a bit arbitrary in any culture, but they are so blatantly arbitrary in the Philippines, partly because of the colonial mold: their governance system and economic structure were copied and pasted from Western models with Western loans and no regard for this idea that a country’s political and economic systems should be congruent with its culture. The common result is the portfolio of asymmetries that characterize life in the developing world, like asymmetry in wealth, in power, in the development of urban and rural places, in the relative amounts of law and lawfulness, and in the amount of admiration for Westerners over compatriots.

    If asymmetry is the common result of orthodox international development, the “no wang-wang zone” is the postmodern result: a rule in an airport immigration lobby chastising this new kind of person who can break all the rules by pretending to be the kind of person who can break all the rules.


    Xeno’s paradox

    There is probably some very deep psychology behind the age-old tradition of blaming problems on foreigners. These days I’m a foreigner, in Switzerland, and so I get to see how things are and how I affect them. I’ve found that I can trigger a change in norms even by going out of my way to have no effect on them. It’s a puzzle, but I think I’ve got it modeled.

    In my apartment there is a norm (with a reminder sign) around locking the door to the basement. It’s a strange custom, because the whole building is safe and secure, but the Swiss are particular and I don’t question it. Though the rule was occasionally broken in the past (hence the sign), residents in my apartment used to be better about locking the door to the basement. The norm is decaying. Over the same time period, the number of foreigners (like me) has increased. From the naïve perspective, the mechanism is obvious: Outsiders are breaking the rules. The mechanism I have in mind shows some of the subtlety that is possible when people influence each other under uncertainty. I’m more interested in the possibility that this can exist than in showing it does. Generally, I don’t think of logic as the most appropriate tool for fighting bigotry.

    When I moved in to this apartment I observed that the basement door was occasionally unlocked, despite the sign. I like to align with how people are instead of how the signs say they should be, and so I chose to just remain a neutral observer for as long as possible while I learned how things run. I adopted a heuristic of leaving things how I found them. If the door was locked, I locked it behind me on my way out, and if it wasn’t, I left it that way.

    That’s well and good, but you can’t just be an observer. Even my policy of neutrality has side effects. Say that the apartment was once full of Swiss people, including one resident who occasionally left the door unlocked but was otherwise perfectly Swiss. The rest of the residents are evenly split between orthodox door lockers and others who could go either way and so go with the flow. Under this arrangement, the door stays locked most of the time, and the people on the cusp of culture change stay consistent with what they are seeing.

    Now, let’s introduce immigration and slowly add foreigners, but a particular kind that never does anything. These entrants want only to stay neutral and they always leave the door how they found it. If the norm of the apartment was already a bit fragile, then a small change in the demographic can tip the system in favor of regular norm violations.

    If the probability of adopting the new norm depends on the frequency of seeing it adopted, then a spike in norm adoptions can cause a cascade that makes a new norm out of violating the old one. This is all standard threshold model: Granovetter, Schelling, Axelrod. Outsiders change the model by creating a third type that makes it look like there are more early adopters than there really are.

    Technically, outsiders translate the threshold curve up and don’t otherwise change its shape. In equations, (1) is a cumulative function representing the threshold model. It sums some positive function f() up to percentile X to return a value Y, read as “X% of people (early adopters (E) plus non-adopters (N)) need to see that at least Y% of others have adopted before they do.” Equation (2) shifts equation (1) up by the percentage of outsiders, O, times their probability of encountering an adopter rather than a non-adopter:

    (1) Y(X) = Σ_{x ≤ X} f(x)
    (2) Y′(X) = Y(X) + O · E / (E + N)

    If you take each variable and replace it with a big number you should start to see that the system needs either a lot of adopters or a lot of outsiders for these hypothetical neutral outsiders to be able to shift the contour very far up. That says to me that I’m probably wrong, since I’m probably the only one following my rule. My benign policy probably isn’t the explanation for the trend of failures to lock the basement door.
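There is also a toy model of my own, separate from the threshold algebra, that makes the amplification easy to see: treat the door itself as a two-state Markov chain. Rule-followers relock it, the one violator leaves it unlocked, and neutral outsiders leave it however they found it, so a single violation lingers across many sightings. The stationary unlocked fraction works out to p_violator / (1 − p_outsider).

```python
import random

def unlocked_fraction(p_violator, p_outsider, steps=200_000, seed=0):
    """Fraction of time the basement door sits unlocked. Each step one
    person uses it: a violator (prob p_violator) leaves it unlocked,
    a neutral outsider (prob p_outsider) leaves it as found, and
    everyone else relocks it."""
    rng = random.Random(seed)
    unlocked = False
    total = 0
    for _ in range(steps):
        r = rng.random()
        if r < p_violator:
            unlocked = True            # rule-breaker
        elif r < p_violator + p_outsider:
            pass                       # outsider: leave it how they found it
        else:
            unlocked = False           # orthodox resident relocks
        total += unlocked
    return total / steps

# One door use in ten is the violator, no outsiders: unlocked ~10% of the time.
unlocked_fraction(0.10, 0.0)
# Same lone violator, but 60% of uses are neutral outsiders: violations
# linger, and the door sits unlocked ~25% of the time (= 0.10 / (1 - 0.60)).
unlocked_fraction(0.10, 0.60)
```

With a single neutral outsider among many residents, p_outsider is small and the 1/(1 − p_outsider) amplification barely budges, which agrees with the conclusion above: my policy probably isn’t what’s unlocking the door.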

    This exercise was valuable mostly for introducing a theoretical mechanism that shows how it could be possible for outsiders to not be responsible for a social change, even if it seems like it came with them. Change can come with disinterested outsiders if the system is already leaning toward a change, because outsiders can be mistaken for true adopters and magnify the visibility of a minority of adopters.

    Update a few months later

    I found another application. I’ve always wondered how it is that extreme views — like extreme political views — take up so much space in our heads even though the people who actually believe those things are so rare. I’d guess that we have a bias towards overestimating how many people are active in loud minorities, anything from the Tea Party to goth teenagers. With a small tweak, this model can explain how being memorable can make your social group seem to have more converts than it has, and thereby encourage more converts. Just filter people’s estimates of different groups’ representations through a memory of every person seen in the past few months, with a bias toward remembering memorable things. I’ve always thought that extreme groups are small because they are extreme, but this raises the possibility that it’s the other way around: when you’re small, being extreme is a pretty smart growth strategy.


    Words whose acronyms take longer to pronounce

    • WWW
    • WWII
    • WTF
    • maybe any acronym with W in it, possibly no other acronyms

    Oh, just thought of an acronym with a W that might be an abbreviation of its source: WWF. Theory-wise, this phenomenon should be a puzzle for researchers who assume that efficiency is an important factor in language change and evolution.
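The pattern is easy to quantify with a crude syllable count (my own per-letter estimate: every English letter name is one syllable except W, whose “double-u” is three):

```python
def acronym_syllables(acronym):
    """Syllables needed to spell an acronym aloud, by a rough count:
    every letter name is one syllable except W ('double-u', three)."""
    return sum(3 if ch.upper() == "W" else 1 for ch in acronym)

acronym_syllables("WWW")  # 9 syllables, versus 3 for "World Wide Web"
acronym_syllables("WTF")  # 5 syllables, versus 3 for the phrase itself
acronym_syllables("FBI")  # 3, far fewer than "Federal Bureau of Investigation"
```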

    About

    This entry was posted on Saturday, June 21st, 2014 and is filed under life and words, lists.


    Don’t let Airbnb, Uber, or Peers redefine sharing

    When thought leaders are VC-funded you have to be careful. Market cheerleaders Peers and SOCAP associated themselves with alternative economics by holding a conference about sharing economies. The message seems to be that market exchange can be called sharing when it happens between web users. It reads to me like pressure to paint orthodox concepts with the glow of the exciting discourse around alternative economic models. Fortunately, conference participants wasted no time critiquing their magnanimous hosts. But even the most well-developed criticisms seem to be buying into the reframing that sharing is a satisfactory word for what happens on sites like Airbnb.

    It sounds like one purpose of the Share Conference was to raise a question: What is sharing, really? The implication is that these inspiring revolutionaries are pushing the idea in new directions, and that it needs to be revised to accommodate their bigness. That’s bunk. The word share is fine meaning precisely what it means: to divide ownership of a resource. The word exchange means something different: to give a resource in return for another of equal value. A thesaurus isn’t a dictionary. People who want to read market exchange as sharing are up to something, some combination of

    • Selling a product
    • Associating themselves with freshness
    • Assuring themselves that they are Good People doing Good Things

    Don’t get me wrong, markets are great, and they are capable of doing a great job of distributing resources in uncertain environments. But they aren’t new, and as inventiveness goes, opening new resources to market distribution is a relatively unimaginative application of the web. These entrepreneurs are following an old, distinguished formula, and there are lots of great words for that, but ‘revolutionary’ isn’t one of them. I use Airbnb, and I’ve been impressed by Uber, but if you want me to believe that these contributions are new, then I’m bored. For me, there is only one interesting question in this space: Why do the developers of these market institutions want to think of themselves as facilitators of a fluffy value like sharing? There is some branding in there, but I think there’s something more. I think that expressions like the Share Conference are the sound of successful entrepreneurs trying to drown out their own quiet doubts about market ideology.


    This entry was posted on Thursday, June 19th, 2014 and is filed under complexity.


    Irony is the flatulence of truth

    I can actually defend this: The world is complicated so the truth is too, and it can’t always contain itself. Irony reveals parts of the truth, but always out around the back, and in sudden spurts. Even when you don’t see it you can still sense it. I could keep going.


    This entry was posted on Wednesday, June 18th, 2014 and is filed under life and words.


    Auspicious and inconsequential

    Photo on 16-06-14 at 12.49

    “Auspicious” and “inconsequential” are two tidy words for describing the experience of being burger customer number one. I like that they can coexist with so little friction. Maybe it works because auspicious wafts superstitious.


    Chrome extension: Facebook deconditioner

    I used to find myself on Facebook even when I did not want to be there. Now that doesn’t happen any more. Every time I go I have to click through a bunch of popups. The number of popups increases by one each time I return. I can still check the site, it just takes a little work, and a little more work each time.

    With the carefully engineered convenience of these sites, you can reach a point where spasms of muscle memory override your own intentions about where you want your mind. If you think a small simple barrier would help you be a more mindful consumer of social media, you can install an extension I made for Chrome.

    Even if you check the “do not show popups” box every time, this plugin will still force at least three clicks before every page access. And it will still make it easier to stop than to continue. And it will still keep count.

    nerds

    Here is the early code (you can ask me for more recent code). And these pages are useful for authoring.


    This entry was posted on Monday, May 19th, 2014 and is filed under straight-geek, tricks.


    Is it possible to forge your own signature?

    It’s true for everyone that no two signatures are identical, at least in the sense that no two periods on a page are identical. It’s a little more true for me. My signature is sloppy, but I’ve never been called out on it until now. I’ve been trying to get a credit card and the issuers cannot be satisfied that I am who I say I am. They’ve returned my application to me five times now, complaining each time that my signatures on the forms don’t match the one on my ID. The first few times I figured it was a fluke and I just signed the forms again. Then I photographed myself holding a copy of my signature, my ID with my other signature, and a note saying “this is me.” They didn’t buy it. So now I’ve resorted to forgery, copying out my card signature, enlarging it, and tracing over it exactly. I would call this my confession but the nice customer service lady giggled when I told her so I guess everything will be OK. But what the hell do I know? Maybe it should be a crime to pretend to be the person you were yesterday.


    This entry was posted on Monday, April 14th, 2014 and is filed under life and words.


    The selling out diaries: Surprising sources of pressure

    I’m a behavioral scientist, pretty lefty, and I currently do research for a major media corporation. I predicted before taking on this job that I would feel some pressure to drift from deeper questions about society towards “business school” questions — questions that are less about human behavior and more about consumer behavior. What I didn’t predict is that all of that pressure would come from within myself. I voluntarily propose questions in the direction of consumer behavior when it’s not what I want to do and I’m not being pressured to do it. Why?

    The big factor is that I’m amiable and eager to please. So while I may not be drawn towards consumer research questions, the people I meet in other parts of the company are often interested — personally interested as reasonable people — in just that stuff. I like these people, and I recognize the good in the things they want to accomplish, and I want to be worth their time to do other kinds of work with them in the future, so I offer to help.

    And there it is: I prepared myself against outside pressures, and got surprised by the pressures I’m really vulnerable to, the ones that come from the inside. They are trickiest in that they seem to come from good places — in particular from the ways that I like to think of myself as a good person.

    In introspection-heavy spaces, recognizing a problem is the bigger part of solving it. For this particular problem, the rest is easy enough: For every 50 questions I generate, 10 are academically interesting, and 1 also has appeal to the people I work with. So if I stay creative enough to sustainably generate 100s of questions, I can constrain my helpfulness to the ways that I want to help without making any party feel constrained; I can do satisfying work and help my colleagues at the same time.

    This particular solution is a patch, and it will raise other problems. I’m not done thinking about these things. But as long as I pay attention and stay aware of my values I think I can do work that is good for me, good for the people who support me, and good for the world.


    My first autogenerated recommendation letter

    I was procrastinating through my LinkedIn backlog when I found this note from an old acquaintance:

    Dear Seth,
    I’ve written this recommendation of your work to share with other
    LinkedIn users.

    Details of the Recommendation: “Seth is a personable and empathetic leader with a passion and a drive to see his goals through to completion. He looks at the world with great curiosity, from a logical and questioning point of view. Seth is imaginative and innovative in his thinking while also being practical and systematic. He seeks precision and notices the minute distinctions that define the essence of things, then analyzes and examines their interconnections and interrelations. Seth scrutinizes all sides of an issue, looking to solve problems in creative and unusual ways. He employs novel cognitive models as tools for discovery, using “what if” questions to explore alternatives and allowing multiple possibilities to coexist. He sets high standards for himself and others, inspiring enthusiasm, fostering collaboration, and maintaining accountability. I very much enjoyed working with him.”

    We didn’t have very much contact, much less anything like professional contact. This contact was quirky, and generally suspicious, and it was such a nice-but-strangely-um-I-don’t-know gesture that I found myself cynically googling sentences from it. It turns out my cynicism was justified: the recommendation above points straight to Wikipedia’s pages on


    This entry was posted on Tuesday, March 11th, 2014 and is filed under life and words.


    Betable.com on the ethics of developing addictive social games

    betable

    I’m looking at using big social game data to do science. I wanted to advance my own thinking about the ethical issues, so I rooted up some of the names that are pushing the social-gaming conversation in new directions. Among the places I found was Betable.com, a social gaming startup that is very excited to involve gambling in the future of online gaming. It is also very good at marketing itself. Given its prominence in the scene and its eagerness to present itself as avant garde I figured its leadership would have provocative — even original — thoughts on the subject of ethics. Is that so naïve?


    How we create culture from noise

    learningnoise

    I don’t like to act too knowledgeable about society, but I’m ready to conjecture a law: “Peoples will interpret patterns into the phenomena that affect their lives, even phenomena without patterns. Culture amplifies pareidolia.”

    It’s interesting when those patterns are random, as in weather and gambling. “Random” is a pretty good model for weather outside the timescale of days. But we can probably count on every human culture to have narratives that give weather apprehensible causes. Gambling is random by definition, but that doesn’t stop the emergence of gambling “systems” that societies continue to honor with meaningfulness. Societies do not seem to permit impactful events to be meaningless.

    This is all easy to illustrate with fine work by Kalish et al. (2007). The image above shows five series (rows) of people learning a pattern of dots from the person before them, one dot at a time, and then teaching it to the next person in the same way. Each n (each column) is a new person in the “cultural” transmission of the pattern. The experiment starts with some given “true” pattern (the first column).

The first row of the five tells a pretty clean story. The initial pattern was a positive linear function that people learned and transmitted with ease. But the second and third rows already raise some concern: the initial patterns were more complicated functions that, within just a couple of generations, got transformed into the same linear function as in the first row. This is impressive because the people were different between rows; each row happened without any awareness of what happened in the other rows — people had only the knowledge of what just happened in the cell to their immediate left. Treating the five people in rows two or three as constituting a miniature society, we can say that they collectively simplified a complicated reality into something that was easier to comprehend and communicate.

    And in the fourth and fifth rows the opposite happens: Subjects are not imposing their bias for positive lines on a more complicated hidden pattern, but on no pattern at all. Again, treating these five people as a society, their line is a social construct that emerges reliably across “cultures” from nothing but randomness. People are capable of slightly more complex cultural products (the negative line in the fifth row) but probably not much more, and probably rarely.

    The robustness of this effect gives strong evidence that culture can amplify the tendencies of individuals toward pareidolia — seeing patterns in noise. It also raises the possibility that the cultural systems we hold dear are built on noise. I’m betting that any work to change such a system is going to find itself up against some very subtle, very powerful social forces.


    Exclamation point on a flag?


There is the category of thoughts that were nice to think partly because I never imagined thinking them. One was trying to figure out if/how/why it’s redundant/deep to include an exclamation point on a flag. Maybe because each is such an active form of non-action, and the combination makes that much louder a call to talking about arms.

    About

    This entry was posted on Thursday, February 13th, 2014 and is filed under life and words.


    Translation with rotation. An American railroad man sold Marx on Iroquois culture.

    By a strange irony, the League of the Iroquois has become a model for Marxist theory. The twisting trail that leads to Friedrich Engels begins with Lewis Henry Morgan, a Rochester lawyer and lobbyist for railroads. His interest in the Iroquois was aroused because he wanted to use their rituals in a rather sophomoric fraternal organization he and several business friends were setting up. As a result, he studied the Iroquois deeply …
He was a thoroughly conventional man, unquestioning in religious orthodoxy, and also a staunch capitalist. But he published his theories in Ancient Society in 1877, at the very time that Karl Marx was working on the final volumes of Das Kapital. Marx was enthusiastic and made notes about Morgan’s findings, which by accident fitted in with his own materialistic views of history. Marx died before he could write a book incorporating Morgan’s theories, but Engels used them as the cornerstone for his influential The origin of the family, private property, and the state (1884). This volume has become the source book for all anthropological theory in Soviet Russia and most other communist countries. Engels was ecstatic about what he had learned, or thought he had learned, of the League of the Iroquois from Morgan … That bourgeois gentleman Morgan is to this day enshrined in the pantheon of socialist thinkers.

“This day” is the 1968 of Peter Farb, from his book Man’s rise to civilization as shown by the Indians of North America from primeval times to the coming of the industrial state. Any book written by a 1960s anthropologist is going to be dated, but this one is also so progressive in some places (even by today’s standards) that I say it breaks even.

    Other valuable excerpts from the book:

    Extremely literal rank accounting:

    Once a society starts to keep track in this way of who is who, there is no telling where such genealogical bookkeeping will end. In Northwest Coast society it did not end until the very last and lowliest citizen knew his precise hereditary rank with a defined distance from the chief, and he knew it with exactitude. There is record of a Kwakiutl feast in which each of the 658 guests from thirteen subdivisions of the chiefdom knew whether he was, say, number 437 or number 438. … A specialist in the Northwest Coast has wisely stated: “To insist upon the use of the term ‘class system’ for Northwest Coast society means that we must say that each individual was in a class by himself.”

    Emergent market exchange:

Membership in other kinds of societies was also often purchased, and in fact many things were for sale among the Plains tribes: sacred objects, religious songs, and even the description of a particularly good vision. The right to paint a particular design on the face during a religious ceremony might cost as much as a horse. Permission just to look inside someone’s sacred bundle of fetishes and feathers was often worth the equivalent of a hundred dollars. A Crow is known to have paid two horses to his sponsor to get himself invited into a tobacco society, and the candidate’s family contributed an additional twenty-three horses. A prudent Blackfoot was well advised to put his money into a sacred bundle, an investment that paid him continued dividends.

    Of the Cheyenne, with a connection to Bengime:

    Only the bravest of the brave warriors could belong to the elite military society known as the Contraries. Somewhat like the Zuni Mudheads, they were privileged clowns. They did the opposite of everything: They said no when they meant yes; went away when called and came near when told to go away; called left right; and sat shivering on the hottest day.

    How the Cherokee got screwed, an important story from the USA’s 19th century campaign of genocide:

About 1790 the Cherokee decided to adopt the ways of their White conquerors and to emulate their civilization, their morals, their learning, and their arts. The Cherokee made remarkable and rapid progress in their homeland in the mountains where Georgia, Tennessee, and North Carolina meet. They established churches, mills, schools, and well-cultivated farms; judging from descriptions of that time, the region was a paradise when compared with the bleak landscape that the White successors have made of Appalachia today. In 1826 a Cherokee reported to the Presbyterian Church that his people already possessed 22,000 cattle, 7,600 houses, 46,000 swine, 2,500 sheep, 762 looms, 1,488 spinning wheels, 2,948 plows, 10 saw mills, 31 grist mills, 62 blacksmith shops, and 18 schools. In one of the Cherokee districts alone there were some 1,000 volumes of “good books.” In 1821, after twelve years of hard work, a Cherokee named Sequoya (honored in the scientific names for the redwood and the giant sequoia trees in California, three thousand miles from his homeland) perfected a method of syllabary notation in which English letters stood for Cherokee syllables; by 1828 the Cherokee were already publishing their own newspaper. At about the same time, they adopted a written constitution providing for an executive, a bicameral legislature, a supreme court, and a code of laws.
    Before the passage of the Removal Act of 1830, a group of Cherokee chiefs went to the Senate committee that was studying this legislation, to report on what they had already achieved in the short space of forty years. They expressed the hope that they would be permitted to enjoy in peace “the blessings of civilization and Christianity on the soil of their rightful inheritance.” Instead, they were daily subjected to brutalities and atrocities by White neighbors, harassed by the state government of Georgia, cajoled and bribed by Federal agents to agree to removal, and denied even the basic protection of the federal government. Finally, in 1835, a minority faction of five hundred Cherokee out of a total of some twenty thousand signed a treaty agreeing to removal. The Removal Act was carried out almost everywhere with a notable lack of compassion, but in the case of the Cherokee—civilized and Christianized as they were—it was particularly brutal.
    After many threats, about five thousand finally consented to be marched westward, but another fifteen thousand clung to their neat farms, schools, and libraries “of good books.” So General Winfield Scott set about systematically extirpating the rebellious ones. Squads of soldiers descended upon isolated Cherokee farms and at bayonet point marched the families off to what today would be known as concentration camps. Torn from their homes with all the dispatch and efficiency the Nazis displayed under similar circumstances, the families had no time to prepare for the arduous trip ahead of them. No way existed for the Cherokee family to sell its property and possessions, and the local Whites fell upon the lands, looting, burning, and finally taking possession.
Some Cherokee managed to escape into the gorges and thick forests of the Great Smoky Mountains, where they became the nucleus of those living there today, but most were finally rounded up or killed. They then were set off on a thousand-mile march—called to this day “the trail of tears” by the Cherokee—that was one of the notable death marches in history. Ill clad, badly fed, lacking medical attention, and prodded on by soldiers wielding bayonets, the Indians suffered severe losses. An estimate made at the time stated that some four thousand Cherokee died en route, but that figure is certainly too low. At the very moment that these people were dying in droves, President Van Buren reported to Congress that the government’s handling of the Indian problem had been “just and friendly throughout; its efforts for their civilization constant, and directed by the best feelings of humanity; its watchfulness in protecting them from individual frauds unremitting.”


    Are existential crises heavier when you don’t exist?

This robot fails the Turing test on herself. She can keep Claude Shannon’s Ultimate Machine company in the category of Self Denying Automata That I Think Are Deep But I Can’t Tell And That’s Why They Are.


    The empirics of identity: Over what timescale does self-concept develop?

There is little more slippery than who we think we are. It is mixed up with what we do, what we want to do, who we like to think we are, who others think we are, who we think others want us to think we are, and dozens of other equally slippery concepts. But we emit words about ourselves, and those statements — however removed from the truth — are evidence. For one, their changes over time can give insight into the development of self-concept. Let’s say that you just had a health scare and quit fast food. How long do you have to have been saying “I’ve been eating healthy” before you start saying “I eat healthy”? A month? Three? A few years? How does that time change with topic, age, sex, and personality? Having stabilized, what is the effect of a relapse in each of these cases? Are people who switch more quickly to “I eat healthy” more or less prone to sustained hypocrisy — hysteresis — after a lapse into old bad eating habits? And, on the subject of relapse, how do statements about self-concept feed back into behavior: all else being equal, do ex-smokers who “are quitting” relapse more or less than those who “don’t smoke”? What about those who “don’t smoke” against those who “don’t smoke anymore”; does including the regretted-past make it more or less likely to return? With the right data — large longitudinal corpora of self-statements and creative/ambitious experimental design — these may become empirical questions.


    What polished bronze can teach us about crowdsourcing

1. Crowds can take tasks that would be too costly for any individual, and perform them effortlessly for years — even centuries.
    2. You can’t tell the crowd what it wants to do or how it wants to do it.

    from http://photo.net/travel/italy/verona-downtown


    The market distribution of the ball, a thought experiment.

The market is a magical thing.  Among other things, it has been entrusted with much of the production and distribution of the world’s limited resources. But markets-as-social-institutions are hard to understand because they are tied up with so many other ideas: capitalism, freedom, inequality, rationality, the idea of the corporation, and consumer society. It is only natural that the value we place on these abstractions will influence how we think about the social mechanism called the market. To remove these distractions, it will help to take the market out of its familiar context and put it to a completely different kind of challenge.

    Basketball markets

    What would basketball look like if it was possible to play it entirely with markets, if the game was redesigned so that players within a team were “privatized” during the game and made free of the central planner, their stately coach: free to buy and sell favors from each other in real time and leave teamwork to an invisible hand?  I’m going to take my best shot, and in the process I’ll demonstrate how much of our faith in markets is faith, how much of our market habit is habit.

    We don’t always know why one player passes to another on the court. Sometimes the ball goes to the closest or farthest player, or to the player with the best position or opening in the momentary circumstances of the court. Sometimes all players are following the script for this or that play. Softer factors may also figure in, like friendship or even the feeling of reciprocity. It is probably a mix of all of these things.  But the market is remarkable for how it integrates diverse sources of information.  It does so quickly, adapting almost magically, even in environments that have been crafted to break markets.

    So what if market institutions were used to bring a basketball team to victory? For that to work, we’d have to suspend a lot of disbelief, and make a lot of things true that aren’t. The process of making those assumptions explicit is the process of seeing the distance of markets from the bulk of real world social situations.

    The most straightforward privatization of basketball could class behavior into two categories, production (moving the ball up court) and trade (passing and shooting). In this system, the coach has already arranged to pay players only for the points they have earned in the game. At each instant, players within a team are haggling with the player in possession, offering money to get the ball passed to them. Every player has a standing bid for the ball, based on their probability of making a successful shot. The player in possession has perfect knowledge of what to produce, of where to go to have either the highest chances of making a shot or of getting the best price for the ball from another teammate.

If the player calculates a 50% chance of successfully receiving the pass and making a 3-point shot, then that pass is worth 1.5 points to him. At that instant, 1.5 will be that player’s minimum bid for the ball, which the player in possession is constantly evaluating against all other bids. If, having already produced the best set of bids, any bid is greater than the possessing player’s own estimated utility from attempting the shot, then he passes (and therefore sells) to the player with the best offer. The player in possession shoots when the probability of success exceeds any of the standing bids and any of the (perfectly predicted) benefits of moving.
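The decision rule above can be sketched in a few lines of code. This is an illustrative toy of my own, not anything from a real system; the player names, probabilities, and point values are all made up.

```python
def bid(p_make, shot_value):
    """A player's standing bid: the expected points from receiving the ball."""
    return p_make * shot_value

def decide(own_p, own_value, standing_bids):
    """The possessing player shoots, or sells the ball to the best bidder."""
    own_utility = own_p * own_value
    best_player, best_bid = max(standing_bids.items(), key=lambda kv: kv[1])
    if best_bid > own_utility:
        return ("pass", best_player)   # sell the ball to the highest bidder
    return ("shoot", None)             # keep the ball and take the shot

# The example from the text: a 50% chance at a 3-point shot is a 1.5-point bid.
bids = {"teammate_a": bid(0.50, 3), "teammate_b": bid(0.40, 2)}
action, target = decide(own_p=0.30, own_value=2, standing_bids=bids)
# Shooting is worth 0.3 * 2 = 0.6 expected points, so the 1.5-point bid wins
# and the possessing player passes to teammate_a.
```

A fuller version would also fold in the “benefits of moving” from the text, i.e. compare the best bid against the best payoff reachable by dribbling to a better position before shooting or selling.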

A lot is already happening, so it will help to slow down. The motivating question is how reality would have to change for this scheme to lead to good basketball. Most obviously, the pace of market transactions would have to speed up dramatically, so that making, selecting, and completing transactions happened instantaneously and unnoticeably. Either time would have to freeze at each instant or the transaction costs of managing the auction institution would have to be reduced to an infinitesimal. Similarly, each player’s complex and inarticulable process of calculating their subjective shot probabilities would have to be instantaneous as well.

Players would have to be more than fast at calculating values and probabilities; they would also have to be accurate. If players were poor at calculating their subjective shot probabilities, and at somehow converting those into cash values, they would not be able to translate their moment’s strategic advantage into the market’s language. And it would be better that players’ bids reflect only the probability of making a shot, and not any other factors. If players’ bids incorporate non-cash values, like the value of being regarded well by others, or the value of not being in pain, then passes may be over- or under-valued. To prevent players from incorporating non-cash types of value, the coach has to pay enough per point to drown out the value of these other considerations. Unlike other parts of this thought experiment, that is probably already happening.

It would not be enough for players to accurately calculate their own values and probabilities; they would have to calculate those of every other player, at every moment. Markets are vulnerable to asymmetries in information. This means that if these estimates weren’t common knowledge, players could take advantage of each other, artificially inflating prices and reducing the efficiency of the team (possibly in both the technical and colloquial senses). Players that fail to properly value or anticipate future costs and benefits will pass prematurely and trap their team in suboptimal states, local maxima. To prevent that kind of short-sightedness, exactly the kind of shortsightedness that teamwork and coaching are designed to prevent, it would be necessary for players to be able to divine not only perfect trading, but perfect production. Perfect production would mean knowing where and when on the court a pass or a shot will bring the highest expected payoff, factoring in the probability of getting to that location at that time.

I will be perfectly content to be proven wrong, but I believe that players who could instantaneously and accurately put a tradable cash value on their current and future state — and on the states of every other player on the court — could use market transactions to create perfectly coherent teams. In such a basketball, the selfish pursuit of private value could be maneuvered by the market institution to guarantee the good of the team.

    The kicker

With perfect (instantaneous and accurate) judgement and foresight a within-team system of live ball-trading could produce good basketball. But with those things, a central planner could also produce good basketball. Even an anarchist system of shared norms and mutual respect could do so. In fact, as long as those in charge all share the goal of winning, the outputs of all forms of governance will become indistinguishable as transaction costs, judgement errors, and prediction errors fall to zero. With no constraints it doesn’t really matter what mechanisms you use to coordinate individual behavior to produce optimal group behavior.

    So the process of making markets workable on the court is the process of redeeming any other conceivable form of government. Suddenly it’s trivial that markets are a perfect coordination mechanism in a perfect world.  The real question is which of these mechanisms is the closest to its perfect form in this the real world. Markets are not. In some cases, planned economies like board-driven corporations and coach-driven teams probably are.

    Other institutions

    What undermines bosshood, what undermines a system of mutual norms, and what undermines markets?  Which assumptions are important to each?  

• A coach can prescribe behavior from a library of taught plays and habits. If the “thing that is the best to do” changes at a pace that a coach can meaningfully engage with, and if the coached behavior can be executed by players on this time scale, then a coach can prescribe the best behavior and bring the team close to perfect coherence.
• If players have a common understanding of what kinds of coordinated behavior are best for what kinds of situations, and they reliably and independently come to the same evaluation of the court, then consensual social norms can model perfect coherence satisfactorily.
• And if every instant on the court is different, and players have a perfect ability to evaluate the state of the court and their own abilities, then an institution that organizes self-interest for the common good will be the one that brings the team closest to perfect coherence.

    Each has problems, each is based on unrealistic assumptions, each makes compromises, and each has its place. But even now the story is still too simple. What if all of those things are true at different points over the course of a game? If the answer is “all of the above,” players should listen to their coach, but also follow the norms established by their teammates, and also pursue their own self-interest. From here, it is easy to see that I am describing the status quo. The complexity of our social institutions must match the complexity of the problems they were designed for. Where that complexity is beyond the bounds that an individual can comprehend, the institutional design should guide them in the right direction. Where that complexity is beyond the bounds of an institution, it should be allowed to evolve beyond the ideological or conceptual boxes we’ve imposed on it.

    The closer

Relative to the resource systems we see every day, a sport is a very simple world.  The rules are known, agreed upon by both teams, and enforced closely. The range of possible actions is carefully prescribed and circumscribed, and the skills necessary to thrive are largely established and agreed upon. The people occupying each position are world-class professionals. So if even basketball is too complicated for any but an impossible braid of coordination mechanisms, why should the real world be any more manageable? And what reasonable person would believe that markets alone are up to the challenge of distributing the world’s limited resources?

    note

    It took a year and a half to write this. Thanks to Keith Taylor and Devin McIntire for input.


    Hayek’s “discovery” is the precognition of economics

I’m exaggerating, but I’m still suspicious. I think Vernon Smith does have some interesting, unconventional work in that direction. There are also null results.

    About

    This entry was posted on Tuesday, November 26th, 2013 and is filed under life and words, science.


    A list of things I wanted to know in July 2013

    • the biology of mushrooms
    • the mathematical methods of physics: how to wreak havoc on equations
• the name and history of every plant I step on
    • when we should have decentralized control, when we should have bosses
    • the contributions of statistical physics to social science
    • more theoretical neuro
    • more theoretical bio
    • more theoretical ecology
    • how to evolve modularity, and how modularity evolved
    • birds by their songs
    • more about soil ecology
• how palm wine’s taste differs in every country that you can find it
    • every Mediterranean climate in the world
    • the influences of Greco-Roman culture that elicited Christianity from Judaism
    • the cultural histories of Heavens and Hells
    • how to never lie to myself unintentionally
    • how to keep changing forever
    • how I’ll change when I leave this town for the next
    • why there aren’t more worker-owned businesses

    FYI, I don’t know yet.


    My dissertation

    In August I earned a doctorate in cognitive science and informatics. My dissertation focused on the role of higher-level reasoning in stable behavior. In experimental economics, researchers treat human “what you think I think you think I think” reasoning as an implementation of a theoretical mechanism that should cause groups of humans to behave consistently with a theory called Nash equilibrium. But there are also cases when human higher-level reasoning causes deviations from equilibrium that are larger than if there had been no higher-level reasoning at all. My dissertation explored those cases. Here is a video.

    My dissertation. The work was supported by Indiana University, NSF/IGERT, NSF/EAPSI, JSPS, and NASA/INSGC.

    Life is now completely different.

    About

    This entry was posted on Monday, November 25th, 2013 and is filed under books, science, updates.


    One year free my ass: Webfaction trumps AWS/JumpBox

AWS was pretty complicated, and on top of that I couldn’t figure out why my free trial was costing three times what I’m paying with Webfaction. Now I’ve got all the control I need, without getting charged each time I write to the hard drive.

    About

    This entry was posted on Monday, November 18th, 2013 and is filed under straight-geek.