“We don’t want any leader who wants to lead”: King sacrifice and a mechanism for finding reluctant leaders

Domalde was a tragic Swedish king described in the 9th century. His mother had cursed him with bad luck, which carried to his people, who were having bad harvests. As the story goes:

Domald took the heritage after his father Visbur, and ruled over the land. As in his time there was great famine and distress, the Swedes made great offerings of sacrifice at Upsalir. The first autumn they sacrificed oxen, but the succeeding season was not improved thereby. The following autumn they sacrificed men, but the succeeding year was rather worse. The third autumn, when the offer of sacrifices should begin, a great multitude of Swedes came to Upsalir; and now the chiefs held consultations with each other, and all agreed that the times of scarcity were on account of their king Domald, and they resolved to offer him for good seasons, and to assault and kill him, and sprinkle the stalle of the gods with his blood. And they did so.

https://en.wikipedia.org/wiki/Domalde

This idea of king sacrifice is intriguing. It confronts head-on a contradiction inherent in the idea of a king, the most valuable person. If they're so valuable, and if sacrifice is about giving up valuable things, then naturally a king is who you murder once murder of less valuable things has failed. King sacrifice is important to distinguish from chaotic regicide by the masses. Domalde didn't just face an uprising or revolt; he was sacred and therefore sacrificial. Or not. Maybe it's just a story. Maybe some mere uprising or revolt got dressed up as having a deeper purpose to make the people look less barbaric. But the trope of the Sacred/Sacrificial King, evident in examples such as Rex Nemorensis, the scapegoats of myth, and obviously Christianity, gives us a look into a world before kings merely had a divine right to rule, when they had a divine responsibility, enforced by all people, to rule well.

The theme of King Sacrifice—in which the benefits of power are structurally outstripped by its risks and responsibilities—has great implications for governance design, a concern of mine. King sacrifice creates a frame in which leadership is inherently thankless, accountable, and fraught: an ideal environment for selecting responsible leaders.

The prompt

Once you have experienced the failures of strong leadership and the failures of extreme decentralization, you may converge on a specific place in the middle: “We should have leaders, but not anyone who wants to lead.” The collective wisdom is that reluctant leaders are more like followers. They are humble, empowering, and their use of power is seen as responsible and legitimate because everyone understands that they don’t like using it.

So let's say we all agree: "We should have leaders, but not anyone who wants to lead." What then? It's great if you happen upon the rare someone like that, but in practice such people can't be found and recruited reliably. Whether you're a community group or a large firm, it's the luck of the draw: to have the right person around with the right attitude and alignment, you get lucky or you don't. In that way, the maxim is more of a description than a strategy for good centralized governance.

Of course, it would be different if we could find or create reluctant leaders. If there were a way to reliably and systematically produce and select reluctant leaders, we’d be one step closer to saving dictatorship, while creating a dictatorship that’s worth saving.

The first thing to do is break this dichotomous construct of "reluctance". It presumes that a candidate is either entirely power hungry or entirely power wary. But how a person is depends on where they are. We know that power-wary people can eventually develop a comfort for and fluency with power that can start to look like a taste for it. And we know that many traditional authority-focused leaders can have eye-opening experiences that inspire them to open up and flatten out. We also know that your reluctance depends on simple things like how full your plate is.

Let's take advantage of context to create reluctant leaders. Under a framing of king sacrifice, it isn't difficult. We need to step away from a picture of leadership as providing access to power and control, in favor of a view of leadership in terms of service and collaboration. For the right person, a big sacrifice counterbalances the benefits of power and control, leaving only the value of learning and serving others.

Imagine a small group election in which each candidate has to explicitly demonstrate their reluctance, and convince people to vote for them on the faith that they don’t really want to do it. More precisely, imagine an election in which every candidate describes what they are sacrificing in order to lead.

People who already don't want to lead will already see that they are making a sacrifice, and can just describe it. People who do want to lead, and would naturally sacrifice anything, can make themselves actually reluctant by proposing a big sacrifice that they will make if they take the position. They will take a pay cut, they will never force anyone to do anything, they will get a therapist, or make themselves recallable with a minority vote, or pay out of pocket to implement a program that reduces the authority of the position by spreading power around.

The community will then evaluate each candidate's sacrifice along with every other qualification for holding the position. Whether the sacrifice is big or small or credible can't really be quantified in general, so it's a subjective or political decision on the part of voters whether a person's sacrifice is substantial and legitimate enough. But even then, the political process is more responsive to arguments that are credible, and a proposed sacrifice is more credible when it meets recognized goal-evaluation criteria. One example is the SMART taxonomy, which helps you ask: Is the proposed sacrifice not just substantial, but Specific, Measurable, Achievable, Relevant, and Time-bound?

Can this sacrifice framing mechanism be gamed? Well it’s political, of course it can be gamed. This is why it’s important to assume constrained contexts, such as an organization that is small enough that it can maintain alignment and rely on social factors like trust, respect, reputation, and just deeper knowledge about the person. Whether a pay cut is a sacrifice or a token depends on inside knowledge about the person and their existing means. It’s up to the community to know their candidates well enough to know if sacrifice is being invoked authentically or only rhetorically.

Of course, I’m coming from a specific place, defining leadership as shared or cooperative leadership, dictatorship as simply unitary authority, potentially benign. It can be captured by the power hungry, but doesn’t serve them by definition. And I’m assuming a group with a basic level of mutual regard and goal alignment. So these aren’t general claims, but claims that make sense for a group that is small enough for social norms to play a role in how things work.

Where is all this coming from?

Aside from the noble project of Saving Democracy, I’ve got a side hustle of Saving “Dictatorship”. It’s not such a betrayal:

  1. Systems based on authority and leadership are pragmatic, workable, and familiar.
  2. They are the easiest social systems to design, implement, and scale (which explains why they've taken over the world).
  3. If you can keep them benevolent, they can also be surprisingly democratic, in the sense of integrating the needs of all stakeholders, if you define benevolence to require it.
  4. Power and coercion are bad; they often co-occur with authority and hierarchy, but they can be decoupled. Successfully decoupling them puts unitary structures on the map as democratic solutions. That separation has to happen both in an org and in everyone's minds.
  5. While people think of leadership and democracy as opposed, I think they're aligned. As I define those terms, strong participation is a result of strong leadership, and strong leadership is a result of strong participation. They're not dueling alternatives for distributing power, but two sides of the same coin of universal empowerment.
  6. Taking it all the way, I actually don’t trust a democracy to work if every member of it hasn’t had a lot of experience leading. Making unitary authority systems of every size more capable of care and accountability makes it easier to give more people more experience in leadership.

This entry was posted on Thursday, October 24th, 2024 and is filed under Uncategorized.


The project of developing leaders when freedom is an endless meeting

When you stumble on an excerpt that says what you want to say better than you could ever say it, you switch very eagerly from blogging to quoting. This is a long excerpt from Francesca Polletta's 2002 book, "Freedom Is an Endless Meeting," an incredible historical book about participatory community organizing. You can tell from the title that she's interested in the fact that all solutions come with a pet tension to struggle with. As she recounts story after story of early civil rights organizers balancing idealism and pragmatism, you understand how she gets so easily to realism, and you wonder why everyone in the democracy scene hasn't. This bit tackles the eternal question of what leadership means when your goal is cultivating leaders and the biggest threat is your own effectiveness.


The literature on organizing is rife with injunctions against leading: organizers should rather help residents articulate their own agendas and build their leadership. Yet, in the process, organizers are often expected to help identify goals, push people to question their preferences, and rally them to act. How can they do that without thereby undermining the leadership capacities of those whom they are organizing? Myles Horton’s answer was to ask questions. “I use questions more than I do anything else. They don’t think of a question as intervening because they don’t realize that the reason you asked that question is because you know something…. Instead of you getting on a pinnacle you put them on a pinnacle.” Horton described a Highlander director in a workshop who “asks one question, and that one question turned that workshop around and completely moved it in a different direction.” Was the Highlander workshop leader leading? Should one ask questions that open the whole enterprise up for scrutiny? That purposely move a discussion in a new direction? In SNCC, asking questions later became a way for organizers to hold onto their radicalism without feeling that they were imposing it on the people whom they organized. The tactic ended up alienating people more than involving them. What comes across in the stories that Horton tells, in SNCC workers’ tales of the best organizers, and in the broader literature on organizing is good organizers’ creativity: their ability to respond to local conditions, to capitalize on sudden opportunities, to turn to advantage a seeming setback, to know when to exploit teachable moments and when to concentrate on winning an immediate objective. Sometimes you insist on fully participatory decisionmaking; sometimes you do not. Albany SNCC project head Charles Sherrod urged fellow organizers not to “let the project go to the dogs because you feel you must be democratic to the letter.” Horton recounted on numerous occasions an experience that he had had in a union organizing effort. At the time, the highway patrol was escorting scabs through the picket line, and the strike committee was at its wit’s end about how to counter this threat to strikers’ solidarity. After considering and rejecting numerous proposals, exhausted committee members demanded advice from Horton. When he refused, one of them pulled a gun. “I was tempted then to become an instant expert, right on the spot!” Horton confessed. “But I knew that if I did that, all would be lost and then all the rest of them would start asking me what to do. So I said: ‘No. Go ahead and shoot if you want to, but I’m not going to tell you.’ And the others calmed him down.”

Giving in would have defeated the purpose of persuading the strikers that they had the knowledge to make the decision themselves. But Horton sometimes told another story. When he was once asked to speak to a group of Tennessee farmers about organizing a cooperative, he knew, he said, that since “their expectation was that I would speak as an expert… if I didn’t speak, and said, ‘let’s have a discussion about this,’ they’d say, that guy doesn’t know anything.” So Horton “made a speech, the best speech I could. Then after it was over, while we were still there, I said, let’s discuss this speech. Let’s discuss what I have said. Well now, that was just one step removed, but close enough to their expectation that I was able to carry them along…. You do have to make concessions like that.” What better time to make a concession than when you’re looking down the barrel of a gun? Horton presumably knew that he could get away with refusing to be an expert in the first situation and not in the second. Perhaps the difference was that he was unknown to the farmers and was known to the strikers. But one could argue that a relationship with a history could tolerate aberrant exercises of leadership while first impressions die harder. In other words, extracting rules from the stories that Horton tells is difficult. When to lead and when to defer, when to ask leading questions and when to remain silent, when to focus on the limited objective and when to encourage people to see the circumscribed character of that objective—the answers depend on the situation and are not always readily evident.

p. 76

I love how that first bit about questions turns the patronizing air of the Socratic method right on its big self-important head. I also like the focus on process, what Polletta calls "the developmental project of democracy." I think the single biggest force acting against democracy is the experience of everyday people in their first organizing role trying and failing to get others involved, and coming reluctantly to the conclusion that it just doesn't work, that people want to be told what to do. Your bad experience cultivating democracy wasn't a lens into the fundamental architecture of human nature. You're a person in a social reality trying to fine-tune a smaller reality within it. You're in a project, and a project has to get where it's going by starting where it's at.

From a developmental perspective, no compromise from your ideal is really a compromise. A compromise is a step away from the ideal, and your steps are still toward the ideal away from the status quo. They don’t approach the ideal directly, as the crow flies. They follow the landscape and its contours, avoiding the mud as much as possible. Following the hills is only a compromise in the sense that obeying gravity is.

To keep the navigational metaphors going, what does it mean to navigate by the stars? When we follow a star, it’s not with the goal of getting there. You follow a star to reach a place on Earth that’s closer to it. And that’s a meaningful, deeply idealistic journey even if, in a cosmic sense, every place on Earth you could possibly go is ultimately the same number of years away from the light. Even if, as your North Star takes you climbing along the sphere to its pole, less and less of your motion is up toward the star and more and more is sideways to the pole. That’s just physical law. Obeying gravity is not a compromise.

An especially exciting thing about Polletta is her critique of prefiguration. Prefiguration is a popular framework for activism and radical change because it offers a way to pursue an ideal in this non-ideal world. It proposes that you create little microcosms of the ideal within the real, and that your perfect bubble grows and grows until it's as big as the world. In the prefigurative view, the root of the power of participatory approaches to community is that they prefigure the global approach by enacting it. Seems hard to fault. But Polletta holds the developmental project in contrast to the prefigurative project, arguing that prefiguration works in relation to itself, with no more influence from the outside world than is necessary, while the developmental project is about the outside world. The project of pursuing the ideal becomes the project of finding the most idealistic way of relating to the rest of the world as it is, and being that way in this world is what changes the world. The great thing about Polletta is that it's all examples and history first, so these ideas are grounded in actual things that happened, giving you nuance for free. From the page before the quote above:

One can also contrast this developmental rationale for participatory democratic decisionmaking with the prefigurative commitment that commentators have attributed to SNCC and the new left. Where a prefigurative commitment envisions change through personal self-transformation and moral suasion rather than through institutional political change, a developmental commitment is not in conflict with an explicitly political one. To the contrary, its very purpose is to produce activists and organizations capable of taking on powerful officials and agencies. From early on, Horton said, he had been “more concerned with structural changes than I have with changing the hearts of people.” A prefigurative commitment tends toward absolutism since the object is both to “oppose” a current regime and to be truly “opposite”; a developmental commitment tends more toward an acceptance of the conventional. The two projects have very different views of organization. A prefigurative project is suspicious of organization, concerned that it molds people in its own image, valorizing efficiency and conformity over the purposes for which the organization was created, raising means to the level of ends. Enacting the ends in the means, committing to the “here-and-now revolution,” favoring community over organization—all these counter the oligarchical tendencies of organizations. By contrast, the broader organizing strategy of which a developmental project is a part sees organizations as one of the key arenas for developing political efficacy, leadership, and accountability and, not least, for securing power. An organization is doomed to failure unless people have a stake in its preservation, however. Participation in decisionmaking provides the sense of ownership and the pleasures of learning that sustain people’s participation.

The relationship underpinning a developmental democratic project is a pedagogical one. People learn to articulate concerns and evaluate options by doing so. At the same time, they learn from each other, and they may also learn from a facilitator or teacher, someone who encourages, guides, questions, and challenges them.

p. 74

The tough thing for me about a good book is it takes years to read because I keep going into reveries. It’s been 2 years probably and I’m only 75 pages in. Here’s another great quote from earlier in the book, redeeming meetings:

Local people have really begun to find a way that they can use a meeting as a tool for running their own lives. For having someone to say about it.


That's a line from Bob Moses, an organizer for the civil rights movement in the US South. It offers such a striking counter-narrative to the modern "meetings are bad; fewer meetings" atmosphere that work culture creates. I think what's happened is that there has been a change in the meaning of the word "meeting". The way it is used in that quote is as a bottom-up gathering of community members to discuss a matter of shared concern. That is very different from what the word means today. If I were weaving conspiracy theories, I'd say that part of the project of undermining democracy has been capturing and corrupting the word. I think there's a case to be made that meeting, not voting, is the fundamental unit of democracy.

This entry was posted on Sunday, October 13th, 2024 and is filed under Uncategorized.


What you know when you know nothing: some toys in quantitative epistemology

You've just reached into a large opaque jar and pulled out the number 87. I could add some other structure, say that the biggest number is 100, or that odd numbers are twice as common as even, and then ask clearly answerable questions. But what if we know nothing more than what we've seen: a ball from the jar with the number 87? What's the biggest number in the jar? Are there even other numbers in the jar? Or say you've reached in and pulled out a purple marble: how many colors of marble are there? Are they all round? Are there any numbers in the jar? Some of these have actual answers, and that's important because they link to a bigger question:

What do our philosophies of statistics fill in about our world when they’ve seen almost nothing from it?

Or put another way: just how much can you know about the world with nearly no data and nearly no theory? I'm in social science, so it's a real thrill when I find knowledge I can trust. And to do it from nearly nothing is astonishing. You can get the shape of the world from just 1 or even 0 observations. As an empirics-first person, I'm surprised to hear myself say it, but there's so much you already know about the world when you know nearly nothing. I've been collecting examples for years. And these examples have actually helped me do normal things, like find my keys, charge my phone, and, thanks to German tanks, take a long, hot shower.

The foggy sea, the German tank problem, and showers at a campsite.

You've been wandering for weeks, no map, through a foggy landscape. You've reached a body of water, knowing nothing about how large it is, and decided to cross it with your inflatable raft. It's been about 10 minutes already, so you know that this body of water is bigger than a creek, but it could be a lake, sea, or ocean. It could be another 2 minutes to the other side, or two months. Knowing nothing, what's your best guess for the total size of the crossing?

This is a continuous version of a famous statistical problem called the German tank problem. The Allies needed to estimate German tank production. As they captured and examined German tank hulls, they found that their engines had serial numbers. So if you've captured a tank with the number 200 on the engine, you immediately know that there are at least 200 tanks. But does that also tell you anything about the total number of tanks? It does, and that's fascinating: it's a hint of everything you know when you know almost nothing.

And it's not just a matter of theory. I was at a camping ground with spotty hot water. Some days there was as much as you wanted, and on others it only lasted long enough to get your hopes up, just a minute or two, before getting icy. So if you've been in the shower for one minute, you know that there was at least one minute of hot water available, but do you now know anything about how much hot water is left?

Amazingly, you do. And that's useful to know. If you need 10 minutes to wash your hair, you'll be in a bind if the water goes icy in five. And absent the magic answer, it's not entirely clear what you should do. Play it safe and never wash your hair? Roll the dice and have soap in your eyes when the water turns cold? Well, there's a solution to this problem, to the German Tank Problem, and to the problem of the foggy sea: your best guess at any moment is that you're halfway through. So if you've been rowing for 10 minutes, then your best guess in this moment is that there are ten more minutes of rowing ahead. And every minute after that, your best guess will go up (not down!). If you've captured tank #350 then your best guess in this moment is that there are another 350 tanks, for a total of 700. And if you don't know if you'll have five minutes of hot water in the shower, spend the first five minutes showering cautiously, as if the hot water could end at any moment. At the five minute mark you have a legitimate reason to feel confident that you'll have five more minutes to wash your hair.
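
If you want to see the "halfway through" rule earn its keep, here is a minimal simulation sketch in R, in the spirit of the other code on this blog. The setup is my own illustration rather than anything from the original problem: a known true total of 700 tanks, and a single uniformly drawn serial number observed in each imagined campaign. Doubling that one observation recovers the true count on average.

set.seed(1)
true_total <- 700
captured <- sample(1:true_total, 100000, replace = TRUE)  # one captured serial number per imagined campaign
mean(2 * captured)       # "double what you've seen": averages out to about the true total (strictly, true_total + 1)
mean(2 * captured - 1)   # the textbook unbiased version for a single observation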

That’s what I did, and it worked!

Connection to science

The scientific method in the West comes out of several centuries of debate: can you know the world by reason alone? By observation alone? To skip past a lot of arguing, the answer is "both," via the scientific method, a procedure for bringing reason and observation into dialogue. But in practice they're always in balance, and whether to center theory or data differs a lot by discipline. In some areas of knowledge, like physics, data drapes beautifully off the framework of theory. In others, with phenomena that are too complex for elegant theories (e.g. social science), theory does a much sorrier job of propping up the data, so you end up using the data as its own model. That explains the role of machine learning, information theory, and statistics. As a social scientist your theories aren't nearly good enough to really predict outcomes, and you spend a lot of time with statistics, the part of math that turns data into knowledge.

The conventional statistics that social scientists are taught can be called "small-n" statistics because it was developed in the early 20th century, when it was costly to collect many independent observations (n represents your total number of observations). You had to squeeze every bit of insight from the couple dozen n you could get. The computing of the 21st century brought us big data and a switch to an alternative philosophy of statistics that could leverage large n.

Within that frame, this exercise, “what you know when you know nothing” is a bit of a throwback. We drop from large-n statistics, down past small-n statistics, to the very smallest-n statistics of n=1.

Relevance to the philosophy of statistics

There isn't just one statistics of n=1, and there's actually a different answer to the German Tank problem. There are two alternative philosophies of statistics: the frequentist and Bayesian paradigms. There is a clear formal difference that is hard to cast intuitively, but narratively, frequentism develops statistics by estimating what will happen from what has happened, while Bayesianism understands statistics as a problem of estimating what will happen from an observer's beliefs about what has happened. Instead of trying to figure out how many tanks are in this world, the Bayesian observer imagines a range of possible worlds, in which the Germans have built everything from 2 to 20,000 tanks, and works to determine which of those worlds we're in. It may sound like a subtle distinction, but philosophically it's big and mathematically it's big. And, as we'll see, the different philosophies predict very different numbers of tanks.

Between the two, Bayesianism is ascendant today, largely because cheap computing finally made it feasible to use, but one isn't better than the other. They are both ways of writing models, and all models are wrong.

One practical difference: the Bayesian approach is better for handling the complexity that comes with a lot of data, while the frequentist approach was developed for the era of “small-n” statistics, when the challenge was usually to learn as much as you could from very little data. And because the German Tank Problem is a “very very small data” problem, the frequentist answer is better. The frequentist answer is the one I’ve described, that what you’ve observed is half of the total. The Bayesian solution is that what you’ve observed is the total: if the largest serial number you’ve seen is #350, then your best guess is that there are only 350 tanks, because, assuming tanks are costly to produce, a world of 350 tanks is the one with the most evidence.
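
As a sketch of that Bayesian reading, here is a minimal R version. The flat prior over candidate worlds and the 20,000-tank cap (borrowed from the range imagined above) are my own simplifying assumptions; the point is just that the world with exactly 350 tanks is the one that makes the observed serial number most likely.

observed <- 350
N_grid <- observed:20000                 # candidate worlds: from 350 tanks up to 20,000
posterior <- 1 / N_grid                  # flat prior times the 1/N likelihood of seeing any one serial
posterior <- posterior / sum(posterior)  # normalize into a proper distribution over worlds
N_grid[which.max(posterior)]             # posterior mode: 350, the serial number we actually saw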

But the Bayesian way of thinking has its own place as well in revealing what we know when we know nothing.

How to find your keys

I lost my keys on the ride to the gym, somewhere on a mile-long stretch, and it was dark by the time I came out of the gym and realized it. I didn’t want to wait till morning, and I didn’t want to backtrack slowly and spend a half hour looking carefully. After all, they could have been anywhere. So I thought about it. Are my keys equally likely to be anywhere along the mile stretch? Or are they more likely to be in some places than others? I could be in a world in which it was very unlikely that I would lose my keys at all. In that world they really could be anywhere. I could also be in a world in which they were just waiting to be lost, as if they were scotch-taped to the outside of my side pocket. In that world I probably lost them right away, walking down my steps. I didn’t know which world I was in, or which of all the worlds in between, but in more of those worlds my keys were near my front door. So instead of searching slowly back home from the gym, I decided to ride straight home and start the search at my front door. My keys turned out to be right there. In most possible worlds, you lost your keys as soon as they were loseable, and they are most likely to be wherever you last remember having them or moving them. The power of Bayesian reasoning is that you can reason to that, and you can prove it too, which some friends helped me do in another post.
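
To make the "more of those worlds" argument concrete, here is a minimal sketch in R. The model is my own simplification, not anything from that other post: each possible world has a keys-drop rate per mile, and given that the keys did fall somewhere on the one-mile ride, their position in that world follows a truncated exponential. Averaging over worlds with a flat prior on the drop rate shows where the keys are most likely overall.

lambdas <- seq(0.1, 20, length.out = 200)   # the range of "how loseable were my keys" worlds
x <- seq(0.005, 1, by = 0.005)              # position along the mile (0 = my front door)

drop_density <- function(l) l * exp(-l * x) / (1 - exp(-l))   # truncated exponential drop position in world with rate l
avg_density <- rowMeans(sapply(lambdas, drop_density))        # average the densities over all the worlds

x[which.max(avg_density)]   # the most probable spot: the very start of the ride, by the front door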

How much charge is on your phone?

If you look at your phone, it’s unlikely that it’s at 68% charge. But unless it’s constantly dead (0%), or constantly charging (100%), it is more likely to be at 68% than anything else. If you charge intermittently, but enough to stay above empty, and not enough to keep at full, then you can think of it this way: It’s morning and you’re at 100%. By evening it would be at 0%, but you charge in little moments during the day. We’ll say that you took a step down every time you were drained by a point, and a step up every time you charged by a point. We’ll say that when you hit 100% you always unplug (you can’t take a step up over 100).

A question is: how many ways are there to take 100 steps from 100%; how many paths are there in all the different combinations of up and down steps? And for any given charge level, how many ways are there to that point from 100% that involve exactly 100 steps? 0% only has one path leading to it: there is only one way to take 100 steps down from 100%. The 2% level has about 100 paths leading to it: a hundred ways to take 98 steps down with a little goosestep up and down at some point between the top and bottom. There are a lot of paths from 100% back to 100%: you can go down 50 and up 50, you can go down and up 50 times, you can go down and up by 4 then 7 then 15.

With these ideas, we’re now to our key question. Which charge level between 0 and 100 has the greatest number of paths leading to it? The 68% charge level is the one with the greatest number of paths leading to it (0% has the fewest). Another way of saying the same thing: if you randomly generate paths of length 100, up and down, over and over, the number you’ll land on the most is 68%. Not by a lot, but if I know nearly nothing about your phone—it’s got about a day of charge, it’s near the end of the day, you’re on the move enough that it’s often but not usually plugged in—the least bad guess is that you’re at about 60-70% charge by the end of the day.

The orthography of number

In the next book you pick up, keep an eye out for the first number you see, not spelled out but in digits. Will it be big, or little? Even or odd? What can we say about the numbers that are dealt to us, numbers about anything: dollars, marbles, people, fish? The fun thing about pure reason is a) you will learn something interesting, and b) you won't get to choose what. According to Benford's Law, the next number you see, big or small, is most likely to start with a 1. A number starting with 1 has about a 30% chance! That ends up being about 12 percentage points more likely than 2, which is about 5 points more likely than 3, and so on down to 9, which initiates only 4.6% of numbers, not the 11% you'd expect (11% = 100/9 digits; you don't divide by 10 because in Arabic numerals the only number that can start with the tenth digit, 0, is 0). I don't understand it perfectly but it's got something to do with there being more small numbers than big numbers, with logarithms, and with Arabic numerals. They come together to give 1 center stage. I don't know if this is a metaphor or the actual explanation, but if you look at the way a slide rule gives physical space to each digit according to the logarithmic way of representing "bigness", you'll see that 1 gets more space than any of the others, and in that way gets more real estate in our lives, with pride of place on the far left of most written numbers.
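
For the curious, those percentages fall straight out of the standard Benford formula, which gives the chance of leading digit d as log10(1 + 1/d). A quick check in R reproduces the figures above:

benford <- log10(1 + 1 / (1:9))   # probability of each leading digit, 1 through 9
round(100 * benford, 1)           # 30.1 17.6 12.5  9.7  7.9  6.7  5.8  5.1  4.6
sum(benford)                      # the nine probabilities sum to 1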

It might sound far-fetched to say, but pure reason, charged with statistical theory and seeded with one observation, can help you shower comfortably, find your keys quickly, and keep your phone alive. It can probably also help you brush your teeth, clean your windows, and wash the dishes; let me know what you find.

This entry was posted on Friday, July 19th, 2024 and is filed under Uncategorized.


Why save democracy when you can save dictatorship? Incentive compatible survey design

“Good question. Yes, we have your best interests at heart.”

There is a kind of problem so fundamental to organizing that we sometimes forget to think of it; common as day. You see it when

  • People answering a survey tell you what they think you want to hear instead of the truth
  • Someone lies at an interview
  • Just about any time that people aren’t incentivized to be transparent

Those are all examples of misalignment, in the sense that individual incentives don't point toward system goals. It's called the problem of "incentive alignment" (also known as "incentive compatibility").

The phenomenon of “buyer’s remorse” gives a clean economic example of the idea. In a normal auction, where people are bidding for a thing, it turns out that the structure of the decision doesn’t actually incentivize an honest evaluation by buyers of what they think a thing is worth. In real world auctions people often overbid, in part because they are influenced by the fear of losing. So typical “first-price” auctions are actually not incentive aligned.

But there’s an auction design out there that actually does incentivize honesty. It’s the “second-price” auction. In a second price auction the winner doesn’t pay the price they bid, but the next highest price. Why does that change anything? To see the trick you have to think a bit. At first thought you might just think that the smart strategy is to name a crazy high price and pay the losing bidder’s fair price. But what if all bidders think that? Then you’re going to overpay. You don’t want that: you don’t want to pay more for a thing than it’s worth to you. Where this reasoning gets you is that all bidders in a second-price setting will decide to name the price that they are actually willing to pay, no more, no less.
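
To make that reasoning concrete, here is a minimal simulation sketch in R. The setup is my own illustration, not anything from the text: four rival bidders with uniform private values who bid truthfully, and a focal bidder who values the item at 0.6 and tries shading, truth-telling, and overbidding.

set.seed(1)
n_trials <- 100000
my_value <- 0.6
rival_best <- apply(matrix(runif(n_trials * 4), ncol = 4), 1, max)   # best rival bid in each simulated auction

payoff <- function(bid) {
  wins <- bid > rival_best
  mean(ifelse(wins, my_value - rival_best, 0))   # in a second-price auction the winner pays the runner-up's bid
}

sapply(c(underbid = 0.4, truthful = 0.6, overbid = 0.8), payoff)
# truthful bidding does at least as well as shading or inflating the bid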

This is a good example because it also shows how, with small elegant tweaks, you can restore alignment. In so many real world settings, incentives don't support honest disclosure. We have workarounds in most parts of our life, but the problem still matters, and it attracts a lot of attention from economists. Their work depends crucially on the idea that incentives determine behavior.

Why save democracy when you can save dictatorship?

Incentive compatibility is especially challenging for survey design. How old are you? How much do you make? Who did you vote for in the last election? It turns out that we can’t always trust the answers to these questions. That’s important because surveys are the least bad way to learn things about people in a standardized way. But what if it was possible to pose any question—How often do you have non-PC thoughts?—in a way that people felt an incentive to answer truthfully?

For some things the problem is easy to solve once you've spotted it. Say you're studying philanthropy, and you ask "Do you donate more or less to charity than your peers?" But you realize that most people will say that they donate more than they do. The incentive compatible way of getting an honest answer is to invite people to non-hypothetically donate some of their survey reward to a charity. If their donation is smaller or larger than average, you, the researcher, have learned whether they donate more than others without ever having to ask. By replacing hypothetical questions with costly behavior you get honesty.

Another strategy is to ask questions with verifiable answers. Instead of asking “What is your height and weight?” you might say “What is your height and weight? We will measure you after this and you’ll only get paid for participation if the difference is 0.” But if you’re verifying then why ask in the first place? And what if verification is impractical? And, most relevant for us, what if it’s impossible, such as with subjective self-evaluations (“Are you kind to others?”)?

Where it matters most, incentive compatible survey design is actually a real can of worms. The problem there is clearest if we hit pause on saving democracy and take a moment to try and save dictatorship. As a longtime scholar and organizer of self-governing communities, I'm comfortable saying that many communities could do worse than structure their governance under a benevolent dictatorship. A lot of groups, organizations, and communities that I admire do. In its most ideal form, benevolent dictatorship is not that different from democracy, because the dictator, being benevolent, is caring, curious, and motivated to understand and integrate everyone's needs. As a result, the dictator will generate just the kinds of solutions that a healthy democracy would, and they'll probably do it much more efficiently than a large governing body.

So why not replace everything with benevolent dictatorship? The main problem is fragility. Nothing systemic keeps the "benevolent" in there. If your competent leader is replaced by another competent leader, it's generally luck. And you have to keep getting lucky, because your first dictator won't be your last. Benevolent dictatorship slips very easily into the non-benevolent kind that has reliably attended humanity's darkest moments. Whether it's through bad succession or the corrupting influence of power, no tool we have can reliably keep a benevolent dictatorship benevolent.

Incentive compatible survey design

Well there might be one tool. What if we had incentive compatible personality tests? It’s easy to imagine the important questions you would want to ask a candidate for dictator.

  • “How likely are you to abuse power?”
  • “How do you respond to disagreement?”
  • “How do you respond to insults?”
  • “If a brakeless trolley is hurtling toward a loved one, and you’re at the switch that can divert it onto another track with n people you’ve never met, what is the largest n you’ll tolerate?”
  • “You’re infuriated after reading a personal attack by a journalist in a major newspaper. Will you act against that journalist? If so, what will you do?”

Asking is easy; what’s hard is to know if their answer is honest. If there was a way to know what someone really thinks, you’d just disqualify the people who give bad answers and appoint the people who give good answers.

I have lots of bad ideas on how to solve this, such as what I call "double-blind policy", which is based on the premise that you can't lie about a question if nobody knows what was asked.

But generally this is more likely to remain the name of a major challenge rather than the name of a class of solutions. Still: if we could solve it, I’m not entirely positive that I’d remain a scholar of classic democratic systems. I mean I would, but it would be harder not to admire the green grass on the other side.

This entry was posted on Wednesday, May 22nd, 2024 and is filed under Uncategorized.


I found the place in Chico where the world was created

“… The raft came ashore at Ta’doikö, and the place can be seen today.”

That's the ending to a story that captured my imagination over 15 years ago in this 2007 post. It's the creation myth of the Maidu Indians, a people who lived in today's Northern California all along the Sierra foothills, north of Sacramento, south of Redding, centered around the Marysville Buttes and Chico, CA. (The full creation myth is here; I love it.)

Unfortunately, according to the Wikipedia page, the place Tadoiko has been lost.

Except that I found it, in this 2008 post. The key was in a map from the Handbook of the Indians of California (1967), written by Berkeley anthropologist Alfred Louis Kroeber, the father of Ursula K. Le Guin.

The key with Maidu (instead of Miwok) names for these places is on page 394 of the book.

With help from some historical maps of the area (in which the stream landmarks haven’t been developed away), that places Tadoiko right between Chico and Durham, CA.

You'd think I would have gone sooner. But I finally made it to the place, now a walnut orchard off Fimple Rd. in Butte County, twenty minutes south of Chico. The place where the world was created.

It was special to go. Where the story describes things coming from the north, I could see the north. Where it describes the sun, I could point. As in the story: "And all around were mountains, as far as the eye could see."

Another powerful thing about the piece: the companion to this story of the Earth's creation is the story of humanity's creation, which takes place in a different location, the Marysville Buttes. I'd always treated them like two separate stories, but upon going it's clear that they're the same: from Tadoiko, the Marysville Buttes are one of the most prominent features.

Less powerful is that there's no sign of it at the place. It has been replaced by walnut orchards as far as the eye can see. I'd be curious whether, among the residents, there is any inkling that their farm sits at the site of an old storied settlement.

This entry was posted on Sunday, May 12th, 2024 and is filed under Uncategorized.


Simple heuristic for breaking pills in half


Quickly:
I have to give my son dramamine on road trips, but only half a pill. That’s been a bit tricky. Even scored pills don’t always break cleanly, and then what do you do? Break it again? Actually yes. I did a simple simulation to show how you can increase your chances of breaking a pill into two half-sized portions by 15-20% (e.g. from 70% to about 85%):
1. Try to break the pill in half.
2. If you succeed, great, if not, try to break each half in half.
3. Among your four resulting fragments, check whether some pair of them adds up to roughly half a pill.

Honestly I thought it would work better. This is the value of modeling.

Explanation:
If, after a bad break from one to two pieces, you break again to four pieces, you end up with six possible combinations of the four fragments. Some of these are equivalent, so all together going to four pieces amounts to creating two more chances at a combination that adds to 50%. And it works: your chances go up. This is simple and effective. But not incredibly effective. I thought it would increase your chances of a match by 50% or more, but the benefit is closer to 15-20%. So it's worth doing, but not a solution to the problem. Of course, after a second round of splitting you can keep splitting and continue the gambit. In the limit, you've reduced the pill to a powder whose grains can add to precisely 50% in uncountable numbers of combinations, but that's a bit unwieldy for road trip dramamine. For the record, pill splitters are also too unwieldy for a roadtrip, but maybe they're worth considering if my heuristic only provides a marginal improvement.

The code:
Here is the simulation. Parameters: I allowed anything within 10% of half of a pill to be “close enough”, so anything in the 40% to 60% range counts. Intention and skill make the distribution of splits non-uniform, so I used a truncated normal with standard deviation set to a 70% chance of splitting the pill well on the first try.

#install.packages("truncnorm")
library(truncnorm)

inc_1st <- 0   # trials where the first break lands close enough to 50/50
inc_2nd <- 0   # trials rescued by breaking both halves again
tol <- 0.1     # "close enough" means within 10 percentage points of half a pill

for (i in 1:100) {
  # First break: truncated normal around 0.5; sd chosen so roughly 70% of breaks land within tolerance
  #a <- runif(1)   # uniform alternative: a completely unskilled break
  a <- rtruncnorm(1, a=0, b=1, mean=0.5, sd=0.5^3.3)
  b <- 1 - a
  if (a > (0.5 - tol) & a < (0.5 + tol)) {
    inc_1st <- inc_1st + 1
  } else {
    # Second round: split each bad half again and look for a pair of fragments near half a pill
    #aa <- runif(1, 0, a)
    aa <- rtruncnorm(1, a=0, b=a, mean=a/2, sd=(a*2)^3.3)
    ab <- a - aa
    #ba <- runif(1, 0, b)
    ba <- rtruncnorm(1, a=0, b=b, mean=b/2, sd=(b*2)^3.3)
    bb <- b - ba
    # The other pairings are complements of these two sums, so checking two covers all cases
    totals <- c(aa + ba, aa + bb)
    if (any(totals > (0.5 - tol) & totals < (0.5 + tol))) {
      inc_2nd <- inc_2nd + 1
    }
  }
}

#if you only have a 20% chance of getting it right with one break, you have a 50% chance by following the strategy
#if you only have a 30% chance of getting it right with one break, you have a 60% chance by following the strategy
#if you only have a 60% chance of getting it right with one break, you have a 80% chance by following the strategy
#if you only have a 70% chance of getting it right with one break, you have a 85% chance by following the strategy

print(inc_1st)             # successes on the first break (out of 100 trials)
print(inc_2nd)             # extra successes rescued by the second round of breaking
print(inc_1st + inc_2nd)   # total successes under the heuristic

Visualizing the 4th dimension in 1936 (Jean Painlevé documentary — 10 min)


This visualization effort was clearly inspired by Edwin Abbott's book Flatland. It's in French, but YouTube's automatic translations have become excellent in the last few years. Plus you can put together most of the content from the visuals, which are the best part. I'm enough into the look of this retro stuff (the staid narration! the graininess! the effects! the props!) that I don't really need comprehension to get from this everything I need.

Jean Painlevé may have been the first science documentarian. He's best known for his sea life documentaries, which precede Jacques-Yves Cousteau's, but as you can see he did lots of other stuff. His parents were Victorian-era free-love anarchist aristocrats.

I crush majorly on Painlevé; look up his other stuff as well.

This entry was posted on Monday, October 18th, 2021 and is filed under Uncategorized.


Critiques of the Ostrom scholarship

I got fascinated trying to find the most critical criticisms of Elinor Ostrom’s work, and went deeper than I’d expected. Overall, there’s a lot of hero worship (me included). For every paper that criticizes her on a point, there’s one that holds her up as conciliating or defending or representing that exact point in an especially nuanced way.

The main criticisms that are available are of two related types:

  • that the paradigm fails to take into account critical understandings of power and agency, and
  • that it is too beholden to rational choice theory and methodological individualism, two basic tenets of economics and behavioral science.

The problem with the first criticism in the work I found is that every expression of it is pretty fluffy. I found no really clear and clean example putting this shortcoming in relief, and several papers holding her work up against Econ as an example of the opposite: that her work is valuable because it succeeds at taking into account power and agency.

The problem with the second criticism is that the best expressions of it don't actually criticize her community's angle on it (me included); they just rely on old and well-trod criticisms of rational choice generally.

It's a bit disappointing that after all this digging I found no deeply undermining assumption of her frameworks to shake me to the core. But it makes sense: she was pretty reasonable and hedged her claims a lot. That's a good reason to be hard to criticize. Still, out of this whole exercise I've managed to come out with a third "meta" criticism of the Ostrom scholarship: the hero-worship itself. There's a tacit hierarchy in the Ostrom community of people who can assert the legitimacy to improve and criticize her work (not just apply it), with former students and collaborators at the top, most comfortable saying she missed this or was wrong about that. It could be worse: they could be closed-circle hero-worshipping keepers of the flame, but even that hierarchy is causing problems:

  • her frameworks change and improve slowly and in a very hard to track way (there used to be 8 design principles, now there are 10),
  • there’s a lot of uncritical copy/paste application of her frameworks, rather than development of them
  • there is a tendency to see Ostrom's contributions as part of the future rather than part of the past. This makes the community vulnerable to developing blind spots.

Here are the least softball critiques that I was able to find.
Cleaver F (2001) Institutional Bricolage, Conflict and Cooperation in Usangu, Tanzania. IDS Bulletin 32(4): 26–35. DOI: 10/bd765h.
Cleaver F (2007) Understanding Agency in Collective Action. Journal of Human Development 8(2). Routledge: 223–244. DOI: 10/crhdr9.
Kashwan P (2016) Integrating power in institutional analysis: A micro-foundation perspective. Journal of Theoretical Politics 28(1). SAGE Publications Ltd: 5–26. DOI: 10.1177/0951629815586877.
Mollinga PP (2001) Water and politics: levels, rational choice and South Indian canal irrigation. Futures 33(8): 733–752. DOI: 10.1016/S0016-3287(01)00016-7.
Mosse D (1997) The Symbolic Making of a Common Property Resource: History, Ecology and Locality in a Tank-irrigated Landscape in South India. Development and Change 28(3): 467–504. DOI: 10/ftdm7p.
Saravanan VS (2015) Agents of institutional change: The contribution of new institutionalism in understanding water governance in India. Environmental Science & Policy 53. Crafting or designing? Science and politics for purposeful institutional change in Social-Ecological Systems: 225–235. DOI: 10/f7rrw2.
Social-ecological systems, social diversity, and power (n.d.). Available at: https://www.jstor.org/stable/26269693?seq=1#metadata_info_tab_contents (accessed 29 September 2020).
Velicu I and García-López G (2018) Thinking the Commons through Ostrom and Butler: Boundedness and Vulnerability. Theory, Culture & Society 35(6). SAGE Publications Ltd: 55–73. DOI: 10/gfdbbs.

Note to self

I do have a few more substantive critiques of my own that I haven’t developed at all:

  1. The design principles seem to work insofar as they create a bubble within which market exchange works (within which CPRs are excludable): so how is that an improvement on "markets for everything" ideology?
  2. She has an alignment with super libertarian public choice people in the municipality/Tiebout space that might open up some avenues for criticism.
  3. A blind spot: the failure to integrate findings from the "soft stuff" in democratic theory, pretty much all of deliberative/participatory democracy.
  4. Vlad Tarko adds “There’s also a critique of the design principles as being applicable only to small scale. https://jstor.org/stable/26268233”
  5. There is a deeply baked-in assumption that when communities succeed or fail, it's because their governance system was good or bad. But communities fail for other reasons, including other endogenous reasons (not just meteor strikes). A lot of online communities never take off in the first place because they're not interesting enough to users to attract the critical mass necessary for governance to be relevant. That's not a governance failure.

This entry was posted on Friday, October 15th, 2021 and is filed under Uncategorized.


The simplest demo that big data breaks p-value stats


> # perfectly independent matrix of 161 observations; standard "small-n statistics"
> # (rows have different sums but are all in 4:2:1 ratio)
> tbl <- matrix(c(4, 2, 1, 48, 24, 12, 40, 20, 10), ncol=3)
> chisq.test(tbl)$p.value
[1] 1
Warning message:
In chisq.test(tbl) : Chi-squared approximation may be incorrect
> # one more observation, still independent
> tbl[3,3] <- tbl[3,3] + 1
> print(tbl)
     [,1] [,2] [,3]
[1,]    4   48   40
[2,]    2   24   20
[3,]    1   12   11
> chisq.test(tbl)$p.value
[1] 0.99974
Warning message:
In chisq.test(tbl) : Chi-squared approximation may be incorrect
> # Ten times more data in the same ratio is still independent
> chisq.test(tbl*10)$p.value
[1] 0.97722
> # A hundred times more data in the same ratio is less independent
> chisq.test(tbl*100)$p.value
[1] 0.33017
> # A thousand times more data fails independence (and way below p<0.05)
> chisq.test(tbl*1000)$p.value
[1] 0.0000000023942
> print(tbl*1000) # (still basically all 4:2:1)
     [,1]  [,2]  [,3]
[1,] 4000 48000 40000
[2,] 2000 24000 20000
[3,] 1000 12000 11000

All the matrices maintain a near perfect 4:2:1 ratio in the rows. But when the data grow from 162 to 162,000 observations, p falls from 0.99 (indistinguishable from theoretical independence) to below 0.00000001. The problem with chi^2 tests in particular is actually old: Berkson (1938). The first solution came right after: Hotelling's (1939) volume test. It amounts to an endorsement to do what we do today: for big data, use data-driven statistics, not small-n statistics. Small-n statistics were developed for small n.

https://www.tandfonline.com/doi/pdf/10.1080/01621459.1938.10502329
https://www.jstor.org/stable/2371512

Here's the code:
# perfectly independent matrix of 161 observations; standard "small-n statistics"
# (rows have different sums but are all in 4:2:1 ratio)
tbl <- matrix(c(4, 2, 1, 48, 24, 12, 40, 20, 10), ncol=3)
chisq.test(tbl)$p.value
# one more observation, still independent
tbl[3,3] <- tbl[3,3] + 1
print(tbl)
chisq.test(tbl)$p.value
# Ten times more data in the same ratio is still independent
chisq.test(tbl*10)$p.value
# A hundred times more data in the same ratio is less independent
chisq.test(tbl*100)$p.value
# A thousand times more data fails independence
chisq.test(tbl*1000)$p.value
print(tbl*1000)

This entry was posted on Sunday, October 10th, 2021 and is filed under Uncategorized.


Cancel culture and free speech are compatible, in 3 pages.


Social justice activism is bringing changes to culture and discourse, especially in the US. Those changes can cause a lot of communication breakdown, even among people who should be aligned. If you can't stand how old liberals put so much weight on civility when the world is burning, or if you're baffled that today's social justice has thrown freedom of expression under the bus, or if you just think there's too much infighting all around, then there's a solution. It's actually not hard to reconcile the ethics behind broad-minded liberalism and confrontational identity-driven progressivism into one framework, to explain how they can co-exist, and actually always have, serving different purposes.

Two spaces

The worldviews seem incompatible because they exist in two different spaces built on different assumptions. They are the “dialogue-first” and “politics-first” spaces. Dialogue-first spaces exist when there is physical security and everyone can assume the good faith of everyone else. These get you the familiar ideal of older liberals: unity is a goal, good intentions behind a bad action matter, civility matters, there are no bad ideas, you attack the idea not the person, speech is free but yelling doesn’t work, content trumps style, you can discuss abhorrent ideas, defend people with abhorrent views, due process is respected by all, and reason prevails.

Politics-first spaces are wild: none of the above is true. You don’t assume good intentions of others who have wronged you, you can attack people rather than ideas, vulnerability can be weakness, interest in other cultures is appropriation, race and other identity differences are recognized and even emphasized, affiliation and trust are based on those identities, the legitimacy of your input depends on them, mobbing is legitimate, a witchhunt is a tactic, silence is assent, self-censorship is tact, shutting someone down is fair game, how you come off is as important as how you are, and the weak are strong en masse.

You’re clearly in politics-first space on social media, in opinion columns, during protests, pretty much anytime that you’re in a position to offend people who are loud, effective, and enflamed enough to take you down. You’re in dialogue space when you can ask challenging, ignorant, vulnerable questions and count on sympathy, patience, and an explanation. Close family and friends, sometimes the classroom. I’ve seen that people who experience the world as hostile to their existence are often tuned for politics-first exchange.

The tensions play out

The catch is that a space can claim to be dialogue-first but be politics-first in secret. In fact, I wonder if most spaces that call themselves dialogue-first have the other dynamic under the hood. And that’s dangerous: when a political space projects dialogue values, the emphasis on good faith makes it easier to hide abusive dynamics. If there’s no blatant evidence of sexual assault, and good faith means taking assailants at their word, then the veneer for dialogue-first dynamics can perpetuate awful behavior.

In politics-first spaces, appearance is reality, and creepiness can’t lurk as easily. Politics-first spaces can be more transparent, in the sense that your happiness doesn’t depend on other people being honest. You also have more strategies. “Safe spaces” are ridiculed in dialogue space, but they are adaptive in political space. Call-out culture, cancel culture, and other seemingly unaccountable tactics are fair game, even strategic, in political space. This is all good. On the very edge of social change and activism, dialogue is naive because the consensus conspiracy of institutional violence has bad faith at its core: the civil rights movement, Apartheid resistance, and BLM. In those cases, the politics-first headspace is the right headspace.

The only truly dialogue-first spaces are those that maintain consensus from all participants all the time. If one person’s experience is that they don’t believe others are acting in good faith, it’s literally not a dialogue space anymore, no matter how many other people still believe. That sounds like an overstatement, but the proof is easy. Say someone in a dialogue-first space speaks up after covert discrimination or harassment. Do you take their claim seriously or not? If you take them seriously, then you’re acknowledging that bad faith is happening somewhere. And if you don’t, then you’re rejecting their experience out of hand, in bad faith, and the person who just broke the space is you. No matter what you do, you’ll help them break consensus.

Since anyone can call bullshit at any time, true dialogue-first spaces are fragile. Dialogue-first spaces are little islands surrounded by the political spaces that call themselves dialogue-first, which are surrounded in turn by the seas of openly political spaces. When minorities in universities say that academia isn’t actually a field of pure ideas that rewards all equally, they are saying that they are experiencing the university’s founding ideal of dialogue as just a veneer. If that’s their experience, then good faith means assuming they’re right unless proven otherwise. So universities today are listening, foregrounding their political side, and asking critically whether that founding ideal really exists for everyone. That has upsides and downsides. Firing profs for assigning Huck Finn without proper warning is the other side of the coin of finally being able to fire them for sexual harassment. And it will continue this way until affected communities feel represented and are ready to buy in again to the university’s ideal. It will take time to build back up. The thing about the university’s fragile ideal is that if it can’t be broken it’s not real.

What to do

Everyone deserves to have a dialogue space they connect with. Dialogue space is less stressful and creates more room for growth. It’s important to want and have dialogue-first spaces. But it’s also important that whatever space you’re in has the right name. So within both spaces there are important things to do.

In a dialogue-first space. It’s easy to get nostalgic for a time when people could just talk about ideas without getting mobbed on social media. But there are people who are saying that that time only existed for you. If you don’t accept their experience as true, then you are making it true by perpetuating their marginalization. So the first thing to do is take a person seriously when they are challenging the consensus of your dialogue-first spaces. Victims who come out to expose violence in superficially dialogue-first spaces often get hostility for questioning the consensus, when they should get rewarded for finding the right name. You listen to challenges because you cherish your spaces enough to question them.

In a politics-first space. The fragile consensus of dialogue-first spaces makes it hard to build back up. You can, but you need the capacity. Capacity is how much bullshit you can take before losing patience, getting frustrated, or otherwise losing good faith. People don’t get to pick their capacity, and many don’t have much. Your capacity might be higher because of your privilege or your personality or your training. Here’s how:

  1. To get two people assuming good faith from neither assuming good faith you need one person to assume good faith. That first mover should be the person with more capacity. If you’ve been blessed with high capacity, the tax on that blessing is an obligation to create a world that is dialogue-first for everyone. It’s on you to stand by dialogue-first ethics and also remain compassionate, humble, and cool in the boiling pot of politics-first exchange. You have to hold yourself to the high standards of both.

Step 1 is actually the only step. The second step is “wait.” Not because it’s all you have to do, but because it’s all you can do. You can’t make someone assume good faith, so you need to have the capacity to maintain dialogue-first presence, model its value, and absorb political blows until others finally let their guard down and it becomes true.

Considerations

Because of all the patience and compromise involved, it is easier for pragmatists than ideologues to be first-movers. You often have to choose between saying things bluntly (“being right”) and saying things tactfully (“being effective”). You have to lead with your shortcomings and abolish pride. And you don’t just avoid behavior that actually alienates others, but behavior that comes off as potentially alienating, in the way that public figures work to avoid both impropriety and the appearance of impropriety. Overall, integrity in first-movers is maintaining the standards to thrive in both spaces.

If you do have capacity, you should get over that time you were called out unfairly, and become part of the solution. And if you can’t manage patience and waiting, then you might be more in the politics-first headspace than you realize. If your approach to defending rationality, reason, discussion, open-mindedness, freedom of expression, and other good stuff involves being defensive, dismissive, combative, sarcastic, or otherwise closed to the concerns of those who question consensus, then there’s a good chance that you’re just an agent provocateur, claiming you support dialogue-first spaces while covertly undermining them with bad faith politics-first tools. The tension between the spaces is an opportunity, not a warzone, and making war of it is a fundamental betrayal of enlightenment values. You should consider getting out of the way, until you get the help of a first-mover yourself.

It is sometimes easy to support people back into dialogue from a politics mindset, but there are failure modes. One failure is to move toward dialogue prematurely, while a community is still suffering from deeply rooted bad behavior or bad faith. So in addition to capacity, good first-movers need empathy and sensitivity, and they have to be up-to-date on a space’s current drama. They also need the integrity to not be creeps themselves, which can be hard if they have the blindspots that come from identifying as an ally. Another failure is that if someone has become powerful in a politics-first space, it might not be in their interest to change (a lot of radical Twitter), or they might only engage with people they identify with. That’s why a good first-mover is someone who is already a legitimate insider in their target identity group, which can be rare. Your identity shouldn’t matter, but in politics space it does. A third failure is if you encounter the immune response of a politics-first space. Politics-first constructs like tone-policing can be weaponized to discredit the whole idea of dialogue-first relationships. If that happens, this might not be the right time, or you might not be the right person. The last failure isn’t a failure at all. Again, on the very edge of social change and activism, political tools are the right tools. Some spaces are inherently politics-first, and some people specialize in that toolkit and thrive in that setting. If you step in where dialogue doesn’t make sense, you’ll just invite ridicule. Alinsky’s Rules for Radicals are built on exploiting the vulnerability of dialogue exchange for political wins.

This is a model. It succeeds at explaining a lot of the contradictions faced by people who are both sympathetic and wary about social justice. It explains why a lot of things that seem ugly about social justice rhetoric are adaptive in context, and what a space needs to be ready for civility discourse. It also gives a strategy for moving forward. And it gives rationalists who have been hurt something to aspire to. Hopefully this makes it easier for you to understand what’s going on with society right now, and articulate your place in change.

More notes

I’m still developing and editing this. On the one hand, there’s a lot to say, on the other, I’m trying to keep it short. Here are more dimensions as I come up with them, to maybe incorporate.

  • Audience for this piece is people with capacity and commitment to dialogue-first space.
  • Breaking consensus isn’t pointing out that there are politics. There are always politics. Consensus is the agreement that the politics aren’t bad enough for anyone to abandon dialogue-first values. Breaking consensus is announcing that the latent politics have become bad enough that your community’s shared commitment to dialogue-first values is causing too much harm.
  • Just like there’s always politics, there’s always power. Dialogue-first spaces can be compatible with power asymmetries (e.g. prof <-> student). What’s important is mutual assumption of good faith, and common belief in mutual assumption of good faith.
  • “Good faith” can be made more specific: good faith commitment to community’s mission (in a university, intellectually learning/growing together)
  • During social change, the pendulum overswings. Remembering that makes it easier to feel OK when it looks like the world is abandoning your values.
  • There are more than two spaces. Political spaces, violent or not, don’t put violence first. That’s the space of actual war, which makes politics civil by comparison.


This entry was posted on Saturday, October 9th, 2021 and is filed under Uncategorized.


The Future: Michio Kaku’s accurate 2020 from 1997

In 1997, physicist and futurist Michio Kaku wrote this picture of daily life in 2020. He did pretty well. Not all predictors of the future do. What I enjoy the most is how magical the present becomes when it’s described as far-fetched visionary fare.

A gentle ring wakes you up in the morning. A wall-sized picture of the seashore hanging silently on the wall suddenly springs to life, replaced by a warm, friendly face you have named Molly, who cheerily announces: ‘It’s time to wake up!’
As you walk into the kitchen, the appliances sense your presence. The coffeepot turns itself on. Bread is toasted to the setting you prefer. Your favorite music gently fills the air. The intelligent house is coming to life.
On the coffee table, Molly has printed out a personalized edition of the newspaper by scanning the Net. As you leave the kitchen, the refrigerator scans its contents and announces: ‘You’re out of milk. And the yogurt is sour.’ Molly adds: ‘We’re low on computers. Pick up a dozen more at the market while you’re at it.’
Most of your friends have bought ‘intelligent agent’ programs without faces or personalities. Some claim they get in the way; others prefer not to speak to their appliances. But you like the convenience of voice commands.
Before you leave, you instruct the robot vacuum cleaner to vacuum the carpet. It springs to life and, sensing the wire tracks hidden beneath the carpet, begins its job.
As you drive off to work in your electric/hybrid car, Molly has tapped into the Global Positioning System satellite orbiting overhead. ‘There is a major delay due to construction on Highway 1,’ she informs you. ‘Here is an alternate route.’ A map appears ghostlike on the windshield.
As you start driving along the smart highway, the traffic lights, sensing no other cars on this highway, all turn green. You whiz by the toll booths, which register your vehicle PIN number with their laser sensors and electronically charge your account. Molly’s radar quietly monitors the cars around you. Her computer, suddenly detecting danger, blurts out, ‘Watch out! There’s a car behind you!’ You narrowly miss a car in your blind spot. Once again, Molly may have saved your life. (Next time, you remind yourself, you will consider taking mass transit.)
At your office at Computer Genetics, a giant firm specializing in personalized DNA sequencing, you scan some video mail. A few bills. You insert your smart wallet card in the computer in the wall. A laser beam checks the iris of your eye for identification, and the transaction is done. Then at ten o’clock two staff members ‘meet’ with you via the wall screen.

Copied without permission from The Faber Book of Utopias, Ed. John Carey, who copied with permission from Kaku’s 1997 book Visions.


This entry was posted on Friday, February 12th, 2021 and is filed under Uncategorized.


Changing how you think is like changing your nutrition in a way

Being surrounded by smart people makes you think about intelligence. After all, they’re all so different from each other, so how could intelligence be one thing? And what does it mean when it changes!? I have been around long enough, and changed enough times in enough ways, to watch my own intelligence wax and wane in minor ways with changes in personality, circumstance, and social environment. I’ve evolved a picture of intelligence that’s in line with prominent theories like the Cattell–Horn–Carroll theory, in which general intelligence has many dimensions, turned all the way up to 11. Where many of us imagine that IQ is a single knob in the brain that is just cranked way up or down for different people, I’ve come to think of it as something with tens or hundreds of indeterminate overlapping facets that can be tweaked by nearly everything.

In that way, intelligence is a lot like nutrition. Beyond the usual macronutrients of carbohydrate, fat, and protein, our bodies rely on many, many trace chemicals. The way micronutrients work is that you only need your trace amount of each of them to be generally fine. Getting more than the necessary amount doesn’t tend to provide a benefit. Megadoses of this or that vitamin, while trendy, are usually pointless, and sometimes dangerous. So the name of the nutrition game isn’t so much getting as much as possible of a few things, but getting the right amount of lots of things: not too little, and maybe not too much. Maybe “you’re only as healthy as your weakest nutrient.” Or “the roof is only as high as the shortest column.” In this way, nutrition is limited by a lot of potential bottlenecks.

I think intelligence is the same way. It requires a lot of traits, and its level isn’t determined by the strongest of them, but the weakest of them. Qualities that support intelligence, and hold it back, include creativity and curiosity, but also attention, self-consciousness, self-control, arousal, and even memory, risk tolerance, and coping style. Within memory, long-term, short-term, episodic, visual, and muscle memory are probably each potential bottlenecks on general intelligence. It can probably be held back by one’s limited ability to think visually, and to think verbally. Processing skills like subitization are probably excellent prereqs to strong visual reasoning. Nutrition, life stability, and other aspects of nurture certainly matter too, like your exposure to logic and the wide variety of other tools of thought.

This doesn’t mean that traits can’t fill in for each other. Some traits can really make up for others (memory, and maybe fortitude), and can really be cranked quite high before they stop helping. But under this picture, there are limits to that kind of coping, for at least some traits, which makes them bottlenecks.

This has all borne out for me in different ways. I’ve observed myself behave with less intelligence when I’m not alert. And more interestingly, I got to observe myself become smarter after my personality changed to make me less anxious about loss and risk. Having known incredibly intelligent people with excellent memories, and watched a decline in my memory, I think my long-term memory may be my biggest bottleneck. A lot of my younger school age success was due to the fact that, once upon a time, I only had to read things once to remember them indefinitely. It isn’t like that any more. I’ve also never been much of a visual reasoner, and I’m a sorry subitizer.

This “bottleneck-driven” or “nutrition-like” picture of intelligence accounts for a lot. It seems compatible at first with theories of “multiple intelligences,” but it’s ultimately grounded in the idea of a general intelligence. General intelligence is just the idea that intelligence is general: that being smart isn’t being smart in one thing. I buy it because of something you see in education science: when you do interventions in classrooms, there is a type of student that will just do really well on every educational intervention you give them, no matter what it tests. Distributions of student performance on these kinds of assessments can be normal bell-curves, or close, with several students clustered tightly at the top of the performance range, and others distributed widely across the rest of the range. The hypothesis is that students off of the performance ceiling have one or more bottlenecks keeping them back, and they are all different in which quality is playing the role of bottleneck.

This theory has other implications as well for how we interpret learning accommodations. Rather than saying that exams miss measuring intelligence by testing only your memory and ability to sit still and concentrate, this theory more precisely says that exams test your intelligence in part by testing your memory and ability to sit still and concentrate, and that they miss intelligence by failing to measure the facets they don’t measure like creativity and curiosity. It also reinterprets what aids do. From this view, attention aids like drugs or exercises are intelligence aids for those people whose intelligence is limited by their ability to control their attention.

Going one step away from direct supporters of intelligence, there might also be traits that are only bottlenecks in certain cultures or environments. In a typical classroom setting, obedience is probably an important predictor of how effectively education feeds intelligence, and students without it may either need to develop it or seek a learning environment that doesn’t allow a trait like obedience to become an intelligence bottleneck. Conversely, it might be that among students in unstable social or cultural environments, the hyperfocus that you find with ADHD is adaptive. And obviously, in domains where knowledge is stored primarily in written words, impediments to reading, or a lack of alternatives to reading, will be indirect impediments as well.

There are obviously many things the theory misses. It’s great if there is just one way to be intelligent and lots of ways that intelligence can be limited. But if there are many kinds of intelligence, the theory only needs a couple tweaks from the nutrition model. Rather than bunching every personality or other trait that influences intelligence into the bottlenecks category, driven by the minimum, you’d call some of them max-driven. In the more complicated picture, you’d say that the height of the ceiling depends on the shortest of some columns and the tallest of others.
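If it helps to see the whole picture in one place, here is that ceiling metaphor as a toy calculation in R. The trait names and numbers are made up for illustration; this is only a restatement of the model, not a way to measure anyone.

# Toy restatement of the bottleneck picture: overall capacity is capped by the
# weakest min-driven trait, while max-driven traits only need one strong member.
bottlenecks <- c(attention = 0.4, memory = 0.9, arousal = 0.8)  # min-driven columns
substitutes <- c(visual = 0.3, verbal = 0.7)                    # max-driven columns
min(min(bottlenecks), max(substitutes))
# 0.4: attention is the shortest column here, no matter how tall the others are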

But being no specialist in education science, I can’t seriously say that this is anything more than idle theorizing based off of personal anecdote. And any theory of such an elitist idea as intelligence is inherently going to be a little offensive. Allowing that, I imagine that this picture of it is overall less offensive than most others. All things considered, I think being such a holistic theory of intelligence makes it pretty humane, open, and empowering. It certainly makes a lot of predictions, and is actionable. In my case, if I personally buy this theory, I should probably become a better memorizer.


This entry was posted on Saturday, February 6th, 2021 and is filed under Uncategorized.


More on the lasting impact of cybernetics on society


Cybernetics was an important part of my intellectual development, my first hint that there was a rigorous, systematic way of approaching complex things. I eventually got over cybernetics specifically, as much too general to contribute to the observation side of science. I was also disappointed that all of its legacy in science seemed to be obscured, with many senior academics I knew privately acknowledging its importance, but publicly revealing no hints. However, I’ve slowly learned that the influence of cybernetics was much more substantial than I’d appreciated. I’ve collected anecdotes on its development in anthropology, and one of its services to big tobacco, and I knew it was the path to Allende’s technoutopia, with help from cyberneticist Stafford Beer. But apparently it was also influential in Jerry Brown’s first governorship, and less directly, in the first applications of graphic or interface design to computers.

Here are various quotes I pulled from the Jerry Brown article, which weaves together UC Berkeley’s full design talent, US politics, early environmentalism, and even the Dead Kennedys.

“Learn to distinguish between unity and uniformity—between God and hell.” That abouts summs [sic] up the 20th Century problem.

Eschewing the industrial iconography of steel and glass, the Bateson Building made do with concrete and wood … in order to maximize thermal performance and economy in the blazing Sacramento summer sun. The building’s understatement, which bordered on a functionalist antiaesthetic and surely contributed to its disappearance from the canon, was central to its broadly ecological mission. That mission seems to have had three main aspects: energy efficiency, interaction, and an attentiveness to systems. In pronouncing that mission, the Bateson Building represented the state’s pursuit of interdependence, adaptability, and self-reliance.

For Van der Ryn, such an integration of the greater sociopsycho-ecological whole was the central purpose of design. “The process of institution building and institutional innovation becomes more than a technical problem,” he wrote in 1968 with his then assistant, the political economist Robert Reich, who later served as secretary of labor under President Bill Clinton. “It becomes part of an overall design. It becomes utopian.” 63

The New Age state addressed a skepticism about government that ran even deeper than the culture wars. Its cybernetics and ecology countered pessimism about whether a selfless politics was even possible,

“Going into Space is an investment . . . and through the creation of new wealth we make possible the redistribution of more wealth to those who don’t have it. . . . As long as there is a safety valve of unexplored frontiers, then the creative, the aggressive, the exploitive urges of human beings can be channeled into long term possibilities and benefits. But if those frontiers close down and people begin to turn in upon themselves, that jeopardizes the democratic fabric.”


This entry was posted on Sunday, January 31st, 2021 and is filed under Uncategorized.


The sorry state of my optimism about humanity’s distant future


I would love to be optimistic about the future. In fact, I’m actively trying. There is hope in stunning technological advances, the existence and development of progressivism, and all the license we have to abide by some lessons of history (that human potential overall gets higher) and potentially ignore others (the inevitability of resource conflict). We can imagine ignoring dark lessons of history because modern humanity has demonstrated its ability to change how it is. We’ve changed in the last 100, 50, 10, or even 5 years more than we did over thousands of our earliest years. Maybe scarcity is a solvable problem. For example, I’ve been following with awe and wonder the rapid, exciting progress of the Wendelstein 7-X experimental fusion reactor in Greifswald, Germany. The limitless energy of hot fusion, and several other emerging technologies, means that we can think about energy-intensive technological solutions to otherwise forbidding problems: making costly fresh water from sea or waste water, growing sufficient food and wood with ever-higher-input agriculture, manually repairing the climate, undoing material waste by mining landfills and the ocean’s trash fleets, and accessing the infinite possibilities permitted by space. These things could all be in hand in the next few decades, no matter how destructive we are in the meantime. Allowing these possibilities means very actively suppressing my cynicism and critical abilities, but I think it’s a very healthy thing to be able to suppress them, and so I try.

But every argument I come up with for this rosiest view backfires into an argument against postscarcity. Here’s an unhopeful argument that I came up with recently, while trying to go the other direction. It has a few steps. Start by imagining the state of humanity on Earth as a bowling ball rolling down the lane of time. Depending which pins it hits, humanity does great, awful, or manages something in between. There are at least two gutters, representing states that you can’t come back from. They foreclose entire kinds of future path. Outside of the gutters, virtually any outcome for humanity is on the table, from limitless post-humanity to the extinction of all cellular life on earth. In a gutter, a path has been chosen, and available options are only variations on the theme that that gutter defines. The most obvious gutter is grim: if we too rapidly and eagerly misspend the resources that open our possible paths, we foreclose promising futures and end up getting to choose only among grimmer ones. A majority of climate scientists assert that we are already in a gutter of high global temperature and sea levels. There is also a hope gutter, in which technology and the progressive expansion of consciousness will get us to an upside of no return, in which only the rosiest futures are available, and species-level disaster— existential risk— is no longer feasibly on the table.

The doom gutter is the easier to imagine of the two, and therefore it is easier to overweight. It’s too easy to say that it is more likely. “You can’t envision what you can’t imagine.” The question to answer, in convincing ourselves that everything is going to be fine, is how to end up in the hope gutter, and show that it’s closer than anyone suspects. That’s what I was trying to do when I accidentally made myself more convinced of the opposite.

The first step in my thinking is to simplify the problem. One thing that makes this all harder to think through is technology. How people use technology to imagine the future depends a lot on their personality and prior beliefs. And those things are all over the place. From what I can tell, comparable numbers of reasonable people insist that technology will be our destruction and our salvation. So my first step in thinking this all through was to take technology out: humanity is rolling down the lane of time with nothing beyond the technology of 1990, 2000, or 2020, whatever. Which gutter are we most likely to find ourselves in now? Removing technology may actually change the answer a lot, depending where you come from. I think many tech utopians, as part of their faith that technology will save us, also believe that we’re doomed without it. I’ve found tech optimists to be humanity pessimists, in the sense that, if brave individuals step up to invent things that save us, it will be despite the ticking time bomb of the vast mass of humanity.

I, despite being a bit of a tech pessimist, am a humanity optimist. I think that if technology were frozen, if that path were cut off from us, we’d have a fair chance of coming to our senses and negotiating a path toward a beautiful, just, empowering future. Especially if we all suddenly knew that technology was off the table, and that nothing would save us but ourselves. I don’t think we’d be looking at a postscarcity future (there is no postscarcity without seemingly magical levels of technological advancement), but at one of the futures provided by the hope gutter. Even without technological progress, changes in collective consciousness and awareness and increases in the space of thinkable ideas can and have had a huge influence on where humanity goes. Even without the deus ex machina of technology, it is still possible to have dreams about wild and wonderful possible futures. This isn’t such a fringe idea either. Some of my favorite science fiction, like Le Guin’s “Always Coming Home,” takes technology out of the equation entirely, and still imagines strange and wonderful perfect worlds.

So without technology, I actually see humanity’s path down the bowling lane of time being a lot like the path with technology: we have a wide range of futures available to us, some in the doom gutter, some in the hope gutter. That’s a very equivocal answer, indisputable to the point of being meaningless, but it’s still useful for the next step of my argument.

I do not think technology is good. It is not bad either. It doesn’t create better futures or worse ones. It overall just makes things bigger and more complicated. One quip I use a lot is that Photoshop (an advanced design technology) made good design better and bad design worse. Nuclear science created a revolution in energy, and also exposed us to whole new kinds of bitter end. Genetic engineering will do the same. Even medicine, possibly, if it ends up being able to serve the authoritarian ends of psychology-level social control. So unlike most people, I don’t think technological advances will bias humanity into one gutter or the other. I think technology will expand the number of ways for us to get into one gutter or the other. And it will get us into whichever gutter faster. That’s step two.

Step three of the argument starts with my assumption that the doom gutter is closer than the hope gutter. It is without technology, and it is with technology. I started off by saying that we should avoid saying that because it’s easy to say. But even then, I still think so. That’s just my belief, but this is all beliefs. One of the few clean takeaways of the study of history is that it is easier to destroy things than create them. But doom, in the model I’ve built, isn’t more likely because of technology, only because it is more likely overall. That doesn’t mean that technology will have no effect. It will bring us more quickly to whichever gutter it is going to bring us to. If at the end we’re doomed or saved because of technology, that end is less likely to happen slowly, and less likely to be imaginable.

p.s. I’d love to believe something different, that technology will save us. I’ll keep trying. We’ll see.


This entry was posted on Sunday, January 31st, 2021 and is filed under Uncategorized.


Our world is strange enough for these tops and dice to be


“Classical mechanics” is the name of the simplest physics. It is the physics of pool tables, physics before electricity, magnetism, relativity, and quantum effects. Nevertheless, we’re still learning new things about it. And those discoveries lead to some pretty deep toys.

Four tops

Two dice

Digging up videos of these led me to other great things, like oloids, sphericons, solids of constant width, and tops in space.


This entry was posted on Sunday, January 10th, 2021 and is filed under Uncategorized.


New paper out in the Proceedings of the Royal Society B

Society is not immutable, and it was not drawn randomly from the space of possible societies. People incrementally change the social systems that they participate in toward satisfying their own needs. This process can be conceptualized as a trajectory through the space of societies, whose “attractor” societies represent the systems that participants have selected.

I wrote a paper that implements this idea in simulation and demonstrates some simple results that fall out of it. This work explores the trajectories produced by selfish simulated agents exploring abstract spaces of economic games. It shows that the attractors produced by these artificially selfish agents can be unexpectedly fair, suggesting that the process of institutional evolution can be a mechanism for emergent cooperation.

Frey, S. and Atkisson, C. (2020) A dynamic over games drives selfish agents to win–win outcomes. Proc. R. Soc. B 287: 20202630. https://doi.org/fnq2

This paper took me almost 10 years to finish, so I’m very proud to have it out, especially in a venue as fancy as Proc B.


This entry was posted on Thursday, December 31st, 2020 and is filed under Uncategorized.


Low profile search for the cheapest, shortest domain on the Internet


Short URLs are useful in their own right. But they are in demand, prohibitively expensive, and also hard to find. You have to know some tricks to find unused URLs without raising the eyebrows of hucksters, but with the explosion of top-level domains (the end part of a URL, like .com), it’s actually possible. Using this price sheet, you can find all kinds of stuff: prices are going below the standard $15/year for .com, and also well above, like over $8000/year for a .makeup link. Rooting around, with .za not available yet, .uk comes out as the cheapest two-letter domain per year. In .com, two, three, and four character domain names are all gone, and super valuable. How about in .uk? Are there any two, three, or four character domains left? I wanted to find out, so I wrote the following shell script:

#Low profile search for the cheapest, shortest domain on the Internet
# loop over every two-character combination and keep only the whois responses
# that say the name has never been registered
for i in {0..9}; do for j in {0..9}; do whois $i$j.uk | grep "No match"; done; done
for i in {a..z}; do for j in {0..9}; do whois $i$j.uk | grep "No match"; done; done
for i in {0..9}; do for j in {a..z}; do whois $i$j.uk | grep "No match"; done; done
for i in {a..z}; do for j in {a..z}; do whois $i$j.uk | grep "No match"; done; done

The key part is whois, which takes a URL and queries an official database of registered URLs. grep pulls out all error messages returned by whois, indicating URLs that have never been registered. It returned exactly one value, meaning that out of (10+26)^2=1296 possible URLs, only one had never been registered. So here you are, talking to the proud owner of the least desirable possible 2-letter URL: 0w.uk. And rather than paying thousands or millions, I pay less than $10, less than one pays for .com or .org.

What’s so undesirable about 0w.uk? It wasn’t clear at first, but here’s what I’ve come to: two ‘w’s are desirable because they invoke the World Wide Web’s “www” convention. But a single w doesn’t do that. All it does is add so many syllables that the URL takes longer to pronounce character-by-character than some five-letter URLs. And the 0, being easily confused with o, makes it so that the most available word-level pronunciations (“ow!” or “ow-wuck”) are positively misleading.

Still, it’s got some charm for being the runt of its litter. I expect to put it to good use.


This entry was posted on Wednesday, November 11th, 2020 and is filed under Uncategorized.


Now hiring graduate students and postdocs at UC Davis

Grad student

Funded graduate position in the CSS of OSS

We are hiring a graduate student in Communication or another science for an NSF-funded research position under Prof. Seth Frey, for graduate training in computational social science (CSS). This interdisciplinary work is focused on open source software (OSS) project success, and integrates social network analysis (SNA) and computational policy analysis, via natural language processing techniques. You will have an opportunity to receive training from several faculty specializing in OSS, CSS, SNA, machine learning, and the quantitative study of governance systems (Prof. Frey & Vladimir Filkov at UCD and Charlie Schweik & Brenda Bushouse at UMass Amherst). You will work closely with junior computer scientists also joining the project, and other partners.
Applicants should obviously have an interest in committing their graduate training toward a CSS expertise and show enthusiasm, promise, or experience in programming, data science, or statistics. As the project is funded for at least two years, and up to five, you should be able to make a strong claim that the subject matter is in line with your desired long-term research direction. Ph.D. students are preferred but Master’s students may apply. Submit a resume/CV and graduate exam scores (unofficial/outdated are fine). You may also submit a cover letter and links to previous research or code. Women, underrepresented minorities, and students with disabilities are encouraged to apply. For more information, review the project summary and contact Prof. Frey at sethfrey@ucdavis.edu.

Postdoc

Postdoc in OSS Sustainability

The Computer Science and Communication departments at the University of California Davis have an exciting opportunity for a postdoctoral fellow, funded by the NSF. This is for a 2-year research position at the intersection of computational social science, software engineering, and organizational governance.
The goal of the project is to study paths to sustainability that open source software projects can follow based on the experience of projects that have already become sustainable. The PIs, Prof. Vladimir Filkov and Prof. Seth Frey, are experts in software engineering data analysis, computational social science, and institutional analysis. In this project we are putting those backgrounds together to develop an infrastructure for understanding the paths to sustainability. Specifically, our goal is to gather development traces and governance/rules data from existing projects and build analytic tools to inform projects of how to improve their chances to be independent and sustainable. More information is available at https://nsf.gov/awardsearch/showAward?AWD_ID=2020751. The postdoc will be working in the Computer Science Department at UC Davis and will report to Prof. Filkov, but will be co-advised by both PIs.
A successful candidate will have a strong background in programming and data science, and experience in machine learning and NLP. Other qualifications include interdisciplinary interests, a PhD in a computational field, and a strong track record of peer-reviewed publications. The postdoctoral candidate will have an opportunity to learn techniques for the gathering, organization, and analysis of both structured and unstructured data, e.g. data from ASF Incubator and Linux Foundation projects.
To apply contact Prof. Filkov at vfilkov@ucdavis.edu. Applications received by October 30 will be given full consideration. Women, underrepresented minorities, and students with disabilities are encouraged to apply.


This entry was posted on Sunday, October 4th, 2020 and is filed under Uncategorized.


How to win against Donald Trump in court


How many times have you been to court in your life? Once? Five times? Something wild, like 10 times? Donald Trump has been to court over 4095 times.

With 330M Americans, that amounts to a 1/100,000 chance he’ll end up in court against you. Remote enough, but way more likely than winning the lottery.

1 in 100,000 is about the same as your chances of dying by assault from a sharp object, and more likely than death by poisoning, peripheral vascular disease, or “animal contact”.
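Here’s the back-of-envelope arithmetic in R, using the figures above; it comes out to roughly one in 80,000, which I’m rounding to 1/100,000.

# rough odds that a given American ends up in court against Donald Trump
suits <- 4095
americans <- 330e6
suits / americans    # ~1.2e-05
americans / suits    # about 1 in 80,000, call it 1 in 100,000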

So … how’s this case gonna go, and what can you do to be prepared? For starters, are you more likely to be on the offense or defense?

According to the numbers, you’ve got pretty much equal chances of being the plaintiff or the defendant. Except! Close to half of all of his lawsuits involve him being the plaintiff in a casino-related case …

… so if you’re not in court with him about a casino-related matter, chances are 4:1 you’ve got him on the defense. We’ll see that that’s the wrong place to be.

And how likely are you to win? Turns out, almost any way you slice it, and no matter what other areas of life you might think he’s a loser, Donald Trump is a winner in court. Whether on defense or offense, Trump has won almost 9 times for every case he’s lost.

Of course, that’s unambiguous wins and losses, and those account for maybe only a third of cases. What about his “wins” (adding cases closed or dismissed as defendant or settled as plaintiff) and “losses” (cases closed or dismissed as plaintiff or settled as defendant)? That’s sure to change things.
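To make that bookkeeping concrete, here’s how I’d tally it in R on a made-up toy table; the column names and rows are just for illustration, not the actual USA TODAY format.

# Toy table standing in for the case data: role is "plaintiff" or "defendant",
# outcome is "won", "lost", "settled", "closed", or "dismissed"
cases <- data.frame(
  role    = c("plaintiff", "defendant", "plaintiff", "defendant", "plaintiff"),
  outcome = c("won", "dismissed", "closed", "settled", "won")
)
broad_win  <- with(cases, outcome == "won" |
                   (role == "defendant" & outcome %in% c("closed", "dismissed")) |
                   (role == "plaintiff" & outcome == "settled"))
broad_loss <- with(cases, outcome == "lost" |
                   (role == "plaintiff" & outcome %in% c("closed", "dismissed")) |
                   (role == "defendant" & outcome == "settled"))
sum(broad_win) / sum(broad_loss)  # on the real data this ratio is about 2.5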

Even including cases that ended settled, closed, or dismissed, Trump still comes out on top. He “wins” about 2.5 cases for every case he “loses”. Maybe it’s because he’s always right. Maybe he picks his battles. Maybe it’s that orange teflon coating. Maybe it’s just the triumph of wealth.

Or maybe, I don’t know, what if we compare his “wins” and “losses” as defendant vs plaintiff? Is he more or less effective on the offense vs defense? Turns out that that doesn’t make a difference either. On either side of the room, he wins almost 2.5 cases for each he loses.

There is one exception. I said above that most of his suits are as plaintiff in a casino case. Same for his “wins”. A major chunk of his “wins” are real offensive wins in casino cases. If we subtract those, then in the remaining cases where he’s the plaintiff you get almost perfectly even odds.

To be clear, that’s still not a winning bet. Win or lose, that’s one very costly coin flip on average, and perhaps more likely to be lose/lose than anything. But it’s the best it gets. As defendant in a non-casino case, which happens 5x more often, he’s still 2.5x more likely to come out on top somehow.

Slicing tinier and tinier adds more and more doubt, but there may be one exception to the exception. If you can get him to sue you specifically for a branding or trademark matter, the odds finally start to lean in your favor, roughly 2.5 to 1 that *you’ll* win. That number is only based on a dozen or so cases. Go ahead, try your luck.

So the takeaway: If you must end up in court against Donald Trump, which is much more likely to happen than you think, you want to be the defendant in a case that’s not about casinos. You want him to sue you.

If Donald Trump sues you in a non-casino case, he’s about as likely to close the case prematurely, have it dismissed by the judge, or straight up lose as he is to win the case or get you to settle. This is ultimately not a winning scenario for you, just the least bad.

If you really want to find a way to win things against Donald Trump, become a fact checker. Of non-partisan Politifact’s 820 or so fact checks of Trump statements, 70% have been rated Mostly False or worse and only 4% have been rated True (https://www.politifact.com/personalities/donald-trump/).

… Or become Joe Biden who, as of mid-Summer 2020, is sitting on a 10 point major landslide of a lead.

Credit, disclaimers, and future directions:
Credit: I put this together from USA TODAY’s rolling analysis of Trump’s suits over the decades:
https://www.usatoday.com/pages/interactives/trump-lawsuits/
Death comparisons: https://en.wikipedia.org/wiki/List_of_causes_of_death_by_rate

Disclaimer 1: I’m not a lawyer.

Disclaimer 2: Of his 4000 suits, USA TODAY only has outcome data for 1500. That’s not a random 1500. It might over- or undercount his settlements or prematurely closed cases, as defendant or plaintiff. It may over-count recent cases in either role. Since the outcome data covers less than half of his cases, there’s a fair chance that all of these numbers are completely wrong. Welcome to working with data about humans.

Future directions:
1. It would be great to figure out the missing data.
2. Trump’s casino-related cases have died down a lot in recent years. Other kinds of cases have ramped way up. Rerunning this analysis on only the last 5-10 years could tell a much different story.
3. Also different, of course, are cases against his administration. His record there may be worse.
4. Are you more or less likely to get sued if you’re liberal or conservative? I’d guess no difference.
5. This would all be more clear if I had made figures.

The Takeaway again: If you must end up in court against Donald Trump, which is much more likely to happen than you think, you want to be the defendant in a case that’s not about casinos. You want him to sue you.


This entry was posted on Monday, August 3rd, 2020 and is filed under Uncategorized.


A secure Bitcoin is a manipulable Bitcoin by definition


After several years of evidence of the volatility, insecurity, and overall reality of cryptocurrency, the conversation around cryptocurrency is evolving further away from the strict ideal of an immutable governance machine that runs itself without politics. Part of this is due to seeing the pure market do one of the many things that pure markets do: get captured by monopolists and oligopolists. It became a reality very quickly that both wealth in the network and control of its infrastructure concentrated in the hands of a few powerful actors. While some libertarians embrace market power as legitimate power because it is emergent, the fall of these currencies, Bitcoin in particular, into the hands of the few has meant the end of the honeymoon for many in the crypto space. The narrative that has evolved is that we had high hopes, but we’ve learned our lesson, and with our eyes wide open are now looking at more types of mechanisms and more complex governance to create a new type of system that’s not just a market, and so on.

The simple critique of that narrative is that market capture caught no one by surprise who has done their homework. Market concentration, the emergent accumulation of capital in the hands of the few, is as reliable a property of markets as equilibrium, nearly as old and distinguished. We don’t hear about it as much, usually because of a mix of rug sweeping (“Inequality isn’t cool. Efficiency; that’s cool.”, “¿Monopoly, what monopoly?”), gaslighting (“But The State!”), or rationalization (“Is a Pareto distribution really that unequal?”, “Inequality is inevitable no matter what”, “Corporate capture of regulators is the fault of the existence of regulators”, “theoretically a natural monopolist will act as if they have competitors so no problem”, “the market’s autocrats are bad but less bad than those of The State”). Really, it should have caught no one by surprise that Bitcoin and coins like it fell very quickly to those who probably don’t need more money.

But beyond the naïve surprise at protocol capture in crypto economies, there’s a much deeper critique. The original Bitcoin whitepaper, Nakamoto (2008), which was focused on demonstrating the security of the distributed ledger scheme, actually imposes a capture process by assumption, as part of its security arguments. The first security concern that Nakamoto tackles is the double-spending problem: an attacker who builds out a false ledger faster than the original secure ledger can undermine the currency by rewriting it to enrich themselves. This attack has proven to be more than theoretical. To prove the scheme secure against it, Nakamoto defined a stochastic process, a random walk. He modeled two 1-D random walkers (the original chain and the attacker’s) and estimated the size of the difference in their positions, given an initial head start by the original. A basic result of probability theory is that, for an unbiased random walk, the probability of reaching every point, whether walking in 1 or 2 dimensions, approaches 1 as time extends toward infinity. By extension, no difference between two random walkers should be insurmountable. But Nakamoto doesn’t assume an unbiased walk. He assumes that the probability of the original chain advancing is greater than the probability of the false chain advancing. This asymmetry breaks the result and guarantees that the attacker’s chance of ever catching up shrinks exponentially as the original chain’s initial advantage increases. This was the security result.
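For a feel for how fast that advantage compounds, the whitepaper’s closed form for the biased walk says that an attacker controlling a fraction q of the CPU power (against the honest chain’s p = 1 − q) ever erases a z-block deficit with probability (q/p)^z when q < p, and 1 otherwise. A quick sketch in R:

# Attacker's probability of ever catching up from z blocks behind (Nakamoto 2008)
# q = attacker's share of CPU power, p = 1 - q = honest share
catch_up <- function(q, z) { p <- 1 - q; ifelse(q >= p, 1, (q / p)^z) }
sapply(c(0.10, 0.30, 0.45), function(q) catch_up(q, z = 6))
# roughly 2e-06, 0.006, and 0.3: hopeless for a small attacker,
# but climbing quickly as the attacker's share approaches half

The exponential in z is the security guarantee, and the structural advantage p > q that it requires is exactly what the passages below assume.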

But take a look. An assumption required by the security result is that the original chain has a structural advantage.

If a majority of CPU power is controlled by honest nodes, the honest chain will grow the fastest and outpace any competing chains. To modify a past block, an attacker would have to redo the proof-of-work of the block and all blocks after it and then catch up with and surpass the work of the honest nodes.

If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.

The assumption is being used to examine one type of attack, but assuming it for this case has consequences that are much greater. It is the assumption that miner dynamics are driven by a rich-get-richer dynamic that implies oligopoly. Nakamoto gives some attention to this problem, assuming that any nodes who accumulate enough power to cheat will have a greater incentive to stay honest. Whether that holds is another question; the point is that structural advantage and rich-get-richer dynamics were built into Bitcoin. Having an imbalance of CPU power gives an agent the power to influence the very legitimacy of the system. Power within the system confers power over it. This was just the first example of many in the crypto space of a few agents gaining the power to write the rules they follow. This can’t be framed as an embarrassing surprise, or an unfortunate lesson learned. It’s a foundation of the viability of the system. It was baked in; Bitcoin was vulnerable to capture from Day 1.


This entry was posted on Tuesday, May 19th, 2020 and is filed under Uncategorized.


The decentralization fetishists and the democracy fetishists


There’s a sort of battle for the soul of the Internet going on right now, among those who see it as some combination of tool and microcosm of the future of society.

On the one hand you’ve got your decentralization utopians. They’ve been bolstered by the burgeoning of crypto. You might hear libertarian bywords like “maximizing human potential.” They see antiauthoritarianism as straightforwardly anti-state, and do what they can to create weed-y technologies that can’t be tamped down: that can’t be kept out of use by “the people,” which may or may not mean entrepreneurs. They see technology as making new forms of government possible.

On the other you’ve got your democratic utopians. They’re a bit more old school: some of the original dreamers about the potential of the Internet. They’re fairly pluralistic, as you can tell by the fact that every major democracy on the Internet is completely different from every other. They see technology as a complement, and not even an especially necessary complement, to culture and community as the keys to success in self-governance. They are more comfortable with bureaucracy and even some hierarchy: they’re pragmatic. Or not: I think they are because that’s what I think. I’m definitely a democracy fetishist, and not a decentralization fetishist.

A person can be both kinds of utopian. Where they differ, the decentralization types might criticize democracy as faulty, unworkable, and too bureaucratic. The democracy types might criticize the decentralization types as too focused on technology and markets, and naive about the importance of culture and the social side.

The big threat to democratic experiments online is that they require a lot of upkeep, performed by a lot of people. Members need training or skill or experience to be good stewards of a democracy. If you fall off on training, democracy devolves into forms like demagoguery. It seems to work best when members are invested enough that they think it’s worth all the time. To really be a viable model for the future, it’s not going to be enough to have a theory of institution engineering. We’re going to need a theory of culture engineering.

The big threat to decentralization experiments online, especially these days, is their vulnerability to co-optation. They rely heavily on reputation schemes, which can be thought of as token representations of a person’s quality. A lot of effort is going now into mechanisms that quantify ineffables like that. But by making these qualities into ownable goods, you make them easier to distribute in a market economy, and whatever your ideals for your tool, the tool itself is liable to get picked up by institutions with lots of money if it can help them make more. This is because markets only work on excludable, subtractable goods. When we use technology to give a quality the properties of a token, it becomes legible to markets, and they can step in and do what they’re good at.

There’s also a big threat to both. Somehow, the weaknesses of each get amplified at scale. Neither grows well. Neither is robust to capture at scale.


This entry was posted on Saturday, May 9th, 2020 and is filed under Uncategorized.


Metaphors are bad for science, except when they transform it


I love cybernetics, a funny body of work from the 50s-70s that attempted to give us a general theory of complex systems in the form of systems of differential equations. I love it so much that it took me years to realize that its metaphors, while offering wonderful links across the disciplines, are just metaphors, and would be incapable of leading me, the young scientist, towards new discoveries. Because in every discipline you find ideas that are just like those offered by cybernetics, except that they are specific, nuanced, grounded in data, and generative of insights. Cybernetics, for me, is a great illustration of how metaphors let science down, even when they are science-y. But there are exceptions.

James Hutton is the father of modern Geology, and in many ways he’s the Darwin of geology, although it might be more fair to say that Darwin is the Hutton of biology, as Hutton preceded Darwin by a generation, and his geology was the solid ground that the biologist’s biology ultimately grew on. Like Darwin, Hutton challenged the tacit dominance of the Bible in his field. The understanding, never properly questioned before him, had always been that the Earth was only a few thousand years old and had formed its mountains and hills and continents over several days of catastrophes. The theory that preceded Hutton is actually termed catastrophism, in contrast to the theory he introduced, that the earth is mind-bogglingly old, and its mountains and hills the result of the drip drip drip of water, sand, and wind.

How did he come up with that? Writer Loren Eiseley gives us one theory: Hutton was trained as a physician. His thesis was on the circulatory system: “Inaugural Physico-Medical Dissertation on the Blood and the Circulation of the Microcosm”. Microcosm? That’s in there because Hutton subscribed to the antiquated belief that humans were like the Universe in miniature. He observed that the dramatic differences between our young and old are the result of a long timeline of incremental changes, occurring in the tension between the constant growth and death of our skin, hair, nails, and bones. If humans are the result of a drip drip drip, and they are a copy of the universe, then incremental processes must account for other things as well. And there you have it, a shaky metaphor planting the seed for a fundamental transformation not only in how humanity views the earth, but how it views time. Hutton invented deep time by imagining the Earth to be like the body. Eiseley called it “Hutton’s secret”: the Earth is an organism, and it’s there underneath us now, alive with change.

So big shaky metaphors can serve science? Really? What if Hutton was a one-off, just lucky? Except he’s not alone. Pasteur’s germ theory of disease came out of a metaphor baked deep into his elitism and nationalism: as unlikely as it seems, tiny things can kill big things, the same way the awful teeming masses threaten the greatness of Mother France and her brilliant nobles. So there’s two. And the third is a big one. No grand metaphor has been more important to the last few hundred years of science than “the universe is a clockwork”, especially to physics and astronomy. This silly idea, which had its biggest impact in the 18th and 19th centuries, made thinkable the thoughts that most of classical mechanics needs to make any sense at all.

I’m still not sure how bad metaphors lead to big advances. All I can figure is that committing 100% might force a person out of the ruts of received wisdom, and can make them receptive to the hints that other views pass over. Grand, ungrounded, wildly unfounded metaphors have a place in science, and not just any place. We can credit them with at least two of humanity’s most important discoveries from the last 250 years.

About

This entry was posted on Thursday, November 21st, 2019 and is filed under Uncategorized.


Small-scale democracy: How to head a headless organization

“Good question. Yes, we have your best interests at heart.”

Long ago I ended up in a sort of leadership position for a member-owned, volunteer-run, consensus-based, youth-heavy multi-house housing cooperative. It has everything great and bad about democracy. Over five years I made tons of mistakes and lots of friends and lots of not friends and learned a lot about myself and how to get things done. At some point I wrote some of them down.

  • Don’t bring a proposal to a membership meeting unless you know it’s going to pass. That means doing the legwork and building support behind the scenes, and also having a feel for the temperature in the room.
  • You can’t keep all the balls in the air. Be intentional and maybe even public about which balls you are letting drop. Focus on existential threats. Accept that your org is and will always be a leaky boat.
  • Hardest biggest lesson: I am full of self-doubt by nature and profession (scientist). But I learned to stick to the course of action I thought was right, against the noise of people thinking I was wrong, without having to convince myself they were wrong and I the brave harried hero.
  • When you propose a rule, don’t write it to fix the thing that went wrong, write it to prevent anything like it from ever happening again. The difference is how much thought you put into how it happened and what has to fail for it to happen again.
  • You have to be able to deal with people not liking you without getting resentful yourself. It was hard. I never would have learned except I had to. And even then I failed a lot.
  • People respond to you genuinely publicly suffering to meet their needs. I was vulnerable a lot and begging a bunch.
  • Need to bring people together? Being the common enemy works in a pinch. This is the corollary to letting balls drop. You can use this to get everyone to take on the job of keeping those less important but still nice and now unifying balls in the air.
  • You can’t ever assume things are fine and not currently about to blow up.
  • Can’t communicate enough with the membership. It’s amazing how fast bad vibes can start to build up in secret if you aren’t constantly rehumanizing yourself.
  • Neat trick for the horse trading that is a part of getting things done: One nice right of your authority position is the power to create symbols of value out of thin air (titles, the name of a thing, the signature on an important contract). They cost you nothing to create and others value them. So create them and trade them away in exchange for things that matter. I signed four houses to my coop, and had a pet name for each one, and never got to name a single one. I always had to trade the name off in exchange for support on closing the deal.
  • Power exists and you should use it to do what you think is right for the org, even if you might be wrong, as long as you are always double checking and striving to be less wrong. Democracy is inherently political/nonideal, in the sense that it is the sum of a bunch of people doing more and less undemocratic things within the broader constraints of a democratic accountability framework. So acts of power and working behind the scenes and managing information strategically aren’t undemocratic, they are a part of it, and you should do them when you need to, and you shouldn’t do them too much or it’s your head. That’s the way of things: admitting the existence of power and the necessity of occasionally wielding it despite your ideals. Running a system by occasionally violating its tenets isn’t bad, it’s beautiful. In an internally inconsistent world, what else but an internally inconsistent organization can survive?

Things I never figured out:

  • How to guess who will be reliable before investing a bunch and being wrong.
  • How to inspire. The few times it happened were totally unreproducible flukes. So I did a lot on my own.
  • How to build an org that learns from its mistakes
  • How to build a culture with really widespread engagement, not just a good core group

About

This entry was posted on Thursday, July 18th, 2019 and is filed under Uncategorized.


H. G. Wells on science and humility

Cosmos
“It is this sense of unfathomable reality to which not only life but all present being is but a surface, it is this realization “of the gulf beneath all seeming and of the silence above all sounds,” which makes a modern mind impatient with the tricks and subterfuges of those ghost-haunted apologists who are continually asserting and clamouring that science is dogmatic—with would-be dogmas that are forever being overthrown. They try to degrade science to their own level. But she has never pretended to that finality which is the quality of religious dogmas.” — H.G. Wells in “Science and Ultimate Truth”

Also, am I the only one who always confused George Orwell, H. G. Wells, and Orson Welles?

About

This entry was posted on Tuesday, May 14th, 2019 and is filed under Uncategorized.


New work on positive and negative social influence in social media: how your words come back to haunt you

I have a paper that will be coming out in the upcoming Big Data special issue of Behavior Research Methods, a top methods journal in psychology. It’s called “The rippling dynamics of valenced messages in naturalistic youth chat” and it is out in an online preview version here:
https://link.springer.com/article/10.3758%2Fs13428-018-1140-6

The paper looked at hundreds of millions of chat messages in an online virtual world for youth. The popular pitch for the piece is about your words coming back to haunt you on social media. That’s one takeaway you might draw out from the work we did. We looked at social influence: how my words or actions affect you. Of course, a lot of people look at social influence. Some papers look at influence over minutes, and that’s good to do because it might help us understand behavior in, say, online political discussion. Others look at social influence over years, and that’s also good to do because it tells us how our peers change us in the long term. But say you wanted the God’s eye view of specifically what kinds of small daily interactions have the smallest or largest effect on long-term influence. That would really get at the mechanisms of the emergence of identity and, in some sense, social change. But the same things that make that kind of conclusion exciting also make it hard to reach. Short-term social influence is a tangle of interactions, and long-scale influence is a tangle of tangles.

We were able to untie the knot just a bit. Specifically, we reconstructed the flow of time for chat messages as they rippled through a chat room and, reciprocally, as they rippled back to the original speaker. The finding was that, when I say something, that thing elicits responses within two seconds (predictably), and keeps eliciting responses for a minute, getting stronger in its effects quickly, and then slowly tapering off. The effect of a single chat event is to produce a wave of chat events stretched out over time. And each of those itself causes ripples that affect everyone else further, including the speaker. Putting it all together, your words’ effect on others ripples back to affect you, in a wave that starts around 8 seconds in and continues for several minutes, almost ten if you were being negative. We were able to count the amount of chat that occurred as a consequence of the original event, chat that wouldn’t have occurred if the original message hadn’t happened. By isolating your effect on yourself through others, and mapping that wave’s effects from 2 seconds to thirty minutes, we’re able to put a quantitative description on something we’ve always known but have rarely been able to study directly: the feedback-driven, self-activating nature of conversation and influence. If chat rooms are echo chambers, we were able to capture not just others’ echoes of you, but your own echoes of yourself in the past.

Social scientists are very ecological in their understanding of causes and effects. If you stay close to the data, you are bound to see the world in terms of everything affecting everything. It’s what makes social science so hard to do. It’s also what makes virtual worlds so exciting. They are artificial places composed of real people. Stripped down, the social interactions they host can be seen more clearly, and you can pick tangles apart in a way you couldn’t do any other way. For this study, we were able to use a unique property of online youth chat to pry open an insight into how people’s words affect each other over time. To really do that, you’d have to piece out all the ways I affect myself: I hear myself say words, and that changes me. I anticipate others responding to my words and that changes me. Others actually respond and their responses change me. Those are all different ways that I can change me, and it seems impossible to separate them. The accomplishment of this project is that we were able to use the artificiality of the virtual world to separate the third kind of change from the other two, to really zoom in on one specific channel for self-influence.

This world is designed for kids, and kids need protection, so the system has a safety filter built in. The way the filter works is that if it finds something it doesn’t like, it won’t send it, but it also won’t tell you that it didn’t send it. The result is that you think you sent a chat, but no one ever saw it. That situation never occurs in real life, but because it occurs online, we are able to look at the effect of turning off the effects of others hearing your words, without changing either your ability to hear your own words or your belief that others heard you. With this and other features of the system, we were able to compare similar messages that differed only on whether they were sent or were only thought to have been sent. By seeing how you are different a few seconds after, and a few minutes after, when you did and didn’t actually reach others, we’re able to capture the rippling of influence over time.
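The logic of that comparison is simple enough to sketch. Here is a toy version in Python, not the paper’s actual pipeline; the table chat and its columns (t in seconds, room, speaker, and delivered, which is False when the filter silently dropped a message) are made up for illustration.

    import pandas as pd

    # Toy event-study: for each message, count how much chat by *other* people
    # follows it in the same room within a short window, then compare messages
    # that were actually delivered against ones that were silently filtered.
    def response_counts(chat: pd.DataFrame, window: float = 60.0) -> pd.Series:
        chat = chat.sort_values("t").reset_index(drop=True)
        n_after = []
        for _, msg in chat.iterrows():
            later = chat[(chat["room"] == msg["room"])
                         & (chat["speaker"] != msg["speaker"])
                         & (chat["t"] > msg["t"])
                         & (chat["t"] <= msg["t"] + window)]
            n_after.append(len(later))
        chat = chat.assign(n_after=n_after)
        return chat.groupby("delivered")["n_after"].mean()

    # The gap between the delivered and filtered averages is, roughly, the chat
    # that happened because the message actually reached other people.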

This is a contribution to theory and method because we’ve assumed for decades that this kind of rippling and tangling of overlapping influences is what drives conversation, but we’ve never been able to watch it in action, and actually see how influence over seconds translates into influence over minutes or tens of minutes. That’s a little academic for a popular audience, but there’s a popular angle as well. It turns out that these patterns are much different depending on whether the thing you said was positive or negative. That has implications for personally familiar online phenomena like rants and sniping. The feedback of your actions onto yourself through others corresponds to your rants and rages negatively affecting you through your effects on those you affected, and we’re able to show precisely how quickly your words can come back to haunt you.

About

This entry was posted on Wednesday, October 24th, 2018 and is filed under Uncategorized.


“It was found that pairing abstract art pieces with randomly generated pseudoprofound titles enhanced the perception of profoundness”


I was following up on the lit around Bullshit Receptivity (or “BSR”; http://journal.sjdm.org/vol10.6.html) and stumbled on this master’s thesis using it to evaluate modern art perceptions. Title quote is from the abstract
https://uwspace.uwaterloo.ca/handle/10012/13746
(Can’t vouch for the research though; didn’t actually read the thesis).

As someone who always cringes reading artist statements, and whose English-degree-holding proud pedantic jerk wife failed critical theory twice out of spite, this was pretty gratifying.

Just one comment from the pedantic jerk: “Did they mean profundity?”

About

This entry was posted on Saturday, September 29th, 2018 and is filed under Uncategorized.


Book: The Common Sense of Science (1978) by Jacob Bronowski


I stumbled on this in a used bookstore. Books about science by scientists are already my thing, and the mathematician and polymath Jacob Bronowski already stands out in my mind as a distinguished big-picture popular scientist because I’ve youtubed his The Ascent of Man, a 1970s BBC series about human natural history that was commissioned by David Attenborough.

The book has its unity, even though it’s most easily described in terms of its parts: a sketch of the history of Western thought as a history of science. The emphasis he places on human error, accident, and historical contingency helps reinforce an overall message that science is a human social endeavor. He succeeds in showing that the things that make it vulnerable and flawed are precisely what make it accessible, and he positions the book as an argument against a popular fear and suspicion of science that emerged in the 20th century.

What struck me in the beginning of the book was how much his account of the rise of science-like thinking mirrored Foucault’s in The Order of Things: for centuries humans understood the signature of order in nature to be similarity (walnuts prescribed for headaches because they look like brains—one of Foucault’s examples), until intellectual developments in the 17th and 18th centuries reinterpreted similarity as occurring in minds, and not beyond them, and created a new way of ordering things in terms of causes and mechanisms.

And what struck me about the end of the book was how forcefully it presents the rare picture of science that I most often fight for myself, as something less about steel-cold logic, and more about a world so complicated as to permit only tinkering, and yield only to luck and experience.

Most times that you hear a scientist resisting a picture of science, they’re pushing back against what they imagine the man-on-the-street thinks: “We’re not out-of-touch eggheads in the ivory tower? We matter!”. But there is a picture of science that I wrench as often out of the heads of scientists as of any other type of person. It’s the idea that the goal of science is to find the logical system that explains everything. It’s an attractive picture because it has an end, and also because, at moments in the past, it hasn’t seemed so far off. Newton found a system that explains both billiard balls and the solar system. 20th century biologists of the modern evolutionary synthesis connected genetics to evolution to, eventually, cellular biology. But for most sciences, especially the natural and social sciences, well, there might be a system, but it’s going to be beyond the ability of the human mind to encapsulate. In such an environment, you have to step back from finding the system that explains everything, to finding a system that explains as much as it can without getting too complicated. In this picture, a big constraint on theory building is the human capacity to understand theories. It’s a special view of science because it’s harshly critical of many of the archetypes that we usually see as unimpeachable core scientists: those great minds that imposed mathematical rigor on human behavior, and were so dazzled by it that they dismissed evidence when it threatened to haze the luster.

The sweep and finality of his system, which like the Goddess of Wisdom seemed to his contemporaries to step fully formed from a single brain was a visible example. From a puzzle of loose observations and working rules he had produced a single system ordered only by mathematics and a few axioms: ordered, it seemed, by a single divine edict, the law of inverse squares. Here was the traditional problem of the trader nations since Bible times; its solution meant something to every educated man. And its solution was so remarkably simple: everyone could grasp the law of inverse squares. From the moments that it was seen that this lightning flash of clarity was sufficient—God said “Let Newton be” and there was light—from this moment it was felt that here plainly was the order of God. And plainly therefore the mathematical method was the method of nature.

A science which orders its thoughts too early is stifled. For example, the ideas of the Epicureans about atoms two thousand years ago were quite reasonable; but they did only harm to a physics which could not yet measure temperature and pressure and learn the simpler laws which relate them. Or again the hope of the medieval alchemists that the elements might be changed was not as fanciful as we once thought. But it was merely damaging to a chemistry which did not yet understand the compositions of water and common salt.

The ambitions of the 18th century systematizers was to impose a mathematical finality on history and biology and geology and mining and spinning. It was a mistaken ambition and very damaging. (p44–45)

I’m especially happy about his digs at economics.

There is no sense at all in which science can be called a mere description of facts. It is in no sense, as humanists sometimes pretend, a neutral record of what happens in an endless mechanical encyclopedia. This mistaken view goes back to the eighteenth century. It pictures scientists as utilitarians still crying “Let be!” and still believing that the world runs best with no other regulating principles than natural gravitation and human self-interest.

But this picture of the world of Mandeville and Bentham and Dickens’s Hard Times was never science. For science is not the blank record of facts, but the search for order within the facts. And the truth of science is not truth to fact, which can never be more than approximate, but the truth of the laws which we see within the facts. (p130)

About

This entry was posted on Monday, September 17th, 2018 and is filed under Uncategorized.


Hey look, Andrew Gelman didn’t rip me a new one!

My website redesign was supposed to be a time-intensive and completely ineffectual effort to increase my readership. But mere days after, I landed on the radar of one of the most steely-eyed, critical voices in the scientific discourse around the replication crisis. Scientists, as they exist in society’s imagination, should have an Asperger’s-caliber disinterest in breaking errors gently, or otherwise attending to the feelings of others. Andrew Gelman is as active, thoughtful, thorough, and terrifying a bad-methods sniper as exists in scientist-to-scientist discourse today. Yikes. He found my blog, which sent him down a little rabbit hole. I seem to have come through it OK. Better than the other Seth he mentions!:

http://andrewgelman.com/2018/07/26/think-accelerating-string-research-successes/

My own role in the social science’s current replication discourse is as a person with very interesting opinions that no one but me really cares about. Until today! Here is what I have to offer:

About

This entry was posted on Thursday, July 26th, 2018 and is filed under Uncategorized.


in Cognitive Science: Synergistic Information Processing Encrypts Strategic Reasoning in Poker

It took five years, but it’s out, and I’m thrilled:

https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.12632

You can get an accessible version here

I’m happy to answer questions.

About

This entry was posted on Friday, June 15th, 2018 and is filed under Uncategorized.


Satie’s doodles


These are a few of my favorite doodles from the “A Mammal’s Notebook” collection of Erik Satie’s whimsical (i.e. silly) writings and drawings and ditties. Who knew he also wrote and joked and drew? I scanned many many more:

SatieDoodles

About

This entry was posted on Wednesday, May 2nd, 2018 and is filed under Uncategorized.


Waiting in the cold: the cognitive upper limits on the formation of spontaneous order.

It’s a cold Black Friday morning, minutes before a major retailer with highly anticipated products opens its doors for the day. There are hundreds of people, but no one outside. Everyone is sitting peacefully in their car, warm and comfortable, and in the last seconds before the doors open, the very first arrivals, the rabidly devoted fans who drove in at 2AM, peacefully start to walk to their rightful place before the door, with numbers 2 through 10 through 200 filing wordlessly and without doubt into their proper places behind. There is no wonder or doubt, and so no comment, that hundreds of people spent only as much time in the cold as it took to walk to the double doors, and that everyone retained their rightful place in the tacit parking lot queue.

This utopian fiction isn’t fictional for being utopian, just for being big. This kind of system is a perfectly accurate description of events for a crowd of 1 or 2 or 3 or 4 people. I was there. It only starts becoming fanciful at 5 or 10 or more. Naturally, the first person to arrive knows that they are first, the second that they are second and the tenth that they are tenth. But knowing your place isn’t enough, you have to realize that everyone else knows their place.

I broke the utopia the first time I showed up at Wilson Tire to get my winter tires changed out. I had heard that they don’t take appointments and that I should arrive early, even before they open, to get served without being so far back in the day’s queue that I literally wait all day. So I drive up and park among the five or so other cars and get out to wait by the benches by the door. I was worried that there were so many other cars, but with no one at the benches by the door I figured that they must belong to employees or something, and I rushed to take my position at the front of the queue. This was at about 6:40, a little more than a quarter of an hour before the doors were to open. New Hampshire is usually still cold when you’re switching to or from your summer tires, and I never acclimated, so I was suffering through the cold, and suddenly I wasn’t alone. Four other people got out of their cars to join me, and as new people arrived the cold crowd outside the front door got bigger, with a sort of rough line forming up. I slowly realized that I was a defector, and that I had best imagine myself as behind the people who I had forced out of their cars. That cohort of us milled on the concrete landing, some starting to stand up in an actual queue, me staying more relaxed on my bench, trusting that others would be smart enough to know I was near the front, even though I hadn’t been smart enough, back in my car, to realize that they were. The later arrivals, who saw the milling but had no sense of its order, avoided the mess by queueing up on the asphalt, further away. Immediately before the doors opened one guy with incredible hubris got out of his car at the last minute and cut in front of all of us to be the first served. I was steamed, as much at him as at the docility of the others in my early bird cohort for not saying anything. But I tend to be more of a litigator than most.

It was only after several minutes that I realized that he must have been the first, he’d probably arrived at 6, and his confidence in boldly taking his rightful place was built on the recognition that the other early arrivals would realize he’d been first. But I never did, at least not in time, with the result that I made 5 people get out of their cars who, until my arrival, had peacefully and stably trusted each other to stay warm in their cars, and queue up physically when the time was right. But if I hadn’t done it, someone else would have. It’s much harder for the late arrivals than the early ones.

Number 1 doesn’t just know they’re first, they also know that 2 knows that they are first. 2 knows who 1 is and knows that 1 knows they know. They know that 3 won’t know who is 1 and who is 2, but they realize that 3 will be able to trust 1 and 2 to know each other. 4 might also realize that they are driving into a queue, but 5 and 6 just see a bunch of cars with no order. They know that they are number 5 or 6, but they think that they’re the last beans in a pile, rather than the last in an increasingly tacit queue. And even if they realize they’re in a queue, 6 might not trust that 5 realizes.

For a car queue to form, it’s not enough to know that you are 10, you have to realize that 9 knows they are 9 (and who 10 is), 8 knows they are 8 (and who 9 is), 7 knows they are 7, and so on down to 1, 2, and 3, who know each other, and who know that each of them knows. When everyone has the capacity to realize they are in a queue, they can queue in the warmth of their cars. But where our minds top out, and common knowledge of the queue breaks down, defection begins.

I’ve now had my tires changed out a few times at Wilson. Number 1 is never the first out of their car. It’s always number 4 or 5 or, in my case, 6. They tend to sit on a bench a few feet from the door. They are followed within a minute or so by the person who came right before them, who wants to signal their priority. And once two people are out, the cascade begins, and everyone else gets out, with the earliest birds standing by the door instead of sitting on the bench, so as to secure their proper place (and signal that they’re securing it). This stays stable, with the sitters knowing their place relative to the standers, and the standers knowing it too. But persons 9 and 10 come upon a disorderly sight, a confusing mix of sitters looking relaxed even though they were the nervous defectors, and standers trying to be in line without looking like they’re in line. 9 adapts to this unsteady sight by standing further away from the door on the asphalt, and 10 lines up behind 9. When the doors open, 9 and 10 watch with apparent wonder as the gaggle by the door fails to devolve into jockeying and each person wordlessly finds their proper place. Of course, it shouldn’t be any surprise: as long as everyone knows their own number, there is enough information for everyone to find their place, and even enough for everyone to keep everyone else accountable.

This inevitable degradation of newcomers’ mental models from queue to pile, from ordered to disordered, creates growing insecurity that people adapt to by moving from an imaginary line to a physical one. In a physical line, you don’t even have to know which number you are, you just have to know where the end of the line is, and so it can scale to hundreds or thousands. On the way to the physical line, a variety of alternative institutions—the very comfortable car queue, the cold but somewhat trustful bench queue, the eventually self-organizing aggregation by the door—ascended and then degraded as common knowledge of them degraded.

A line seems like a simple and straightforward thing. But what for me was moving to the bench to start a line looked to others like someone trying to cut into a line that already existed. If we were all capable of thinking harder and deeper, so capable that we could count on each other to always be doing so wordlessly, then Black Friday shoppers or summer blockbuster campers could enjoy a much more comfortable, satisfying, and civilized norm. But quiet pressures like human reasoning limits, and the degradation of common knowledge they trigger, cause cultural processes to select for institutions that are easy to think about. If you look around, you’ll see lots of situations where cognitive simplicity has won out over social efficiency or fairness, and absent a lot of awkward conversation, it’s perfectly natural to expect it.

Testing it

I may be making this all up, but it’s very easy to test. All I’d have to do is get up at 5:30AM for the next 30 days and drive over with a clipboard to record the following events:

    • Time and number of each arrival
    • Time at which each person exits their car
    • Order of entry into the tire shop

Everything I’m saying makes pretty clear predictions. With few people in the parking lot, people will stay in their cars. With many, they will start to get out and queue up. The first person out of the car will usually be the fourth or fifth arrival. The first arrival may tend to be the last out of the car.

    This could be done from afar, at any tire shop, two times a year in any region with winters. An especially nice property of the domain is that this queueing problem, being something people only deal with a few times a year, is sort of a one-shot game, in that every morning brings entirely new people with more or (probably) less experience at navigating the trust and reasoning issues that make parking lot queues so fraught. But it’s hard to get out of bed, which I guess explains why there are so many theorists.
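The story is also easy to restate as a toy simulation. To be clear, this is just the assumptions above written in Python, with a made-up hard cap (depth) on how deep common knowledge of the tacit queue can go, so it can’t confirm anything; it just pins the predictions down.

    import random

    # Toy model: arrival i trusts the tacit car queue only while i <= depth.
    # The first arrival past that cap defects to the door; the person who came
    # right before them follows to signal priority; then everyone cascades out
    # (here, simply in arrival order).
    def simulate_morning(n_arrivals=10, depth=4):
        first_defector = next((i for i in range(1, n_arrivals + 1) if i > depth), None)
        if first_defector is None:
            return []  # utopia: everyone queues from the warmth of their cars
        out_order = [first_defector, first_defector - 1]
        out_order += [i for i in range(1, n_arrivals + 1) if i not in out_order]
        return out_order

    # Prediction to check against the clipboard data: the first person out of a
    # car is arrival depth + 1 (so 4, 5, or 6), and arrival 1 is never first out.
    print(simulate_morning(depth=random.choice([3, 4, 5])))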

    About

    This entry was posted on Tuesday, May 1st, 2018 and is filed under Uncategorized.


    Many have tried, none have succeeded: “Pavlov’s cat” isn’t funny.

    “Pavlov’s CAT!!!! GET IT?”

    Here are 20 more or less professional cartoonists who had precisely that original thought. Guess how many of them managed to make it funny. I’m posting this because I’m surprised at how much failure there is on this. Is the idea of Pavlov’s Cat inherently, objectively unfunny?

[Gallery: about twenty Pavlov’s Cat cartoons, including one captioned “Dream on Buddy.”]

Bonus: slightly fewer people made it to Schrödinger’s dog. Somehow, a few of these are kind of funny. Why is Schrödinger’s Dog less hackneyed than Pavlov’s Cat? What does it mean about humor or semantics? Also notice the different roles of the dog compared to those of the cat.

[Gallery: Schrödinger’s Dog cartoons]

    And what does not original plus not original equal? Still not original.

[Gallery: cartoons combining Pavlov’s Cat with Schrödinger’s Dog]

    OK, fine, the last one is funny.

    About

    This entry was posted on Friday, April 20th, 2018 and is filed under Uncategorized.


    Ramon y Cajal’s Advice to a young investigator

    I read
    Advice for a young investigator
    by Santiago Ramon y Cajal (1897)

    Here is a good bit:

    Once a hypothesis is clearly formulated, it must be submitted
    to the ratification of testing. For this, we must choose
    experiments or observations that are precise, complete, and
    conclusive. One of the characteristic attributes of a great
    intellect is the ability to design appropriate experiments.
They immediately find ways of solving problems that
    average scholars only clarify with long and exhausting
    investigation.
    If the hypothesis does not fit the data, it must be rejected
    mercilessly and another explanation beyond reproach
    drawn up. Let us subject ourselves to harsh self-criticism
    that is based on a distrust of ourselves. During the course
    of proof, we must be just as diligent in seeking data contrary
    to our hypothesis as we are in ferreting out data that may
    support it. Let us avoid excessive attachment to our own
    ideas, which we need to treat as prosecutor, not defense
    attorney. Even though a tumor is ours, it must be removed.
    It is far better to correct ourselves than to endure correction
    by others. Personally, I do not feel the slightest embarrassment
    in giving up my ideas because I believe that to fall and
    to rise alone demonstrates strength, whereas to fall and wait
    for a helping hand indicates weakness.
    Furthermore, we must admit our own absurdities whenever
    someone points them out, and we should act accordingly.
    Proving that we are driven only by a love of truth, we
    shall win for our views the consideration and esteem of our
    superiors.
    Excessive self-esteem and pride deprive us of the supreme
    pleasure of sculpting our own lives; of the incomparable
    gratification of having improved and conquered ourselves;
    of refining and perfecting our cerebral machinery—the legacy
    of heredity. If conceit is ever excusable, it is when the
    will remodels or re-creates us, acting as it were as a supreme
    critic.
    If our pride resists improvement, let us bear in mind that,
    whether we like it or not, none of our tricks can slow the
    triumph of truth, which will probably happen during our
    lifetime. And the livelier the protestations of self-esteem
    have been, the more lamentable the situation will be. Some
    disagreeable character, perhaps even with bad intentions,
    will undoubtedly arrive on the scene and point out our
    inconsistency to us. And he will inevitably become enraged
    if we readily correct ourselves because we will have deprived
    him of an easy victory at our expense. However, we
    should reply to him that the duty of the scientist is to adapt
    continuously to new scientific methods, not become paralyzed
    by mistakes; that cerebral vigor lies in mobilizing
oneself, not in reaching a state of ossification; and that in
    man’s intellectual life, as in the mental life of animals, the
    harmful thing is not change, but regression and atavism.
    Change automatically suggests vigor, plasticity, and youth.
    In contrast, rigidity is synonymous with rest, cerebral lassitude,
    and paralysis of thought; in other words, fatal inertia—certain
    harbinger of decrepitude and death. With
    winning sincerity, a certain scientist once remarked: “I
    change because I study.” It would be even more self-effacing
    and modest to point out: “I change because others study,
    and I am fortunate to renew myself.” (pp 122–123)

    Of course, he also said things like this:

    To sum things up: As a general rule, we advise the man inclined toward science to seek in the one whom his heart has chosen a compatible psychological profile rather than beauty and wealth. In other words, he should seek feelings, tastes, and tendencies that are to a certain extent complementary to his own. He will not simply choose a woman, but a woman who belongs to him, whose best dowry will be a sensitive compliance with his wishes, and a warm and full-hearted acceptance of her husband’s view of life.
    (pp 103–104)

Unlike teachers of history and literature, I’m unaccustomed to assigning writing that mixes nuggets of wisdom and bald sexism. I’m thinking of being explicit with my students that they have several options: to read and think in a manner divorced from emotion, to take the good and leave the bad, or to dismiss it all as rot. That’s got problems, but so does everything else I can think of. Working on it.

    About

    This entry was posted on Saturday, March 31st, 2018 and is filed under Uncategorized.


    My most sticky line from Stephenson’s Diamond Age

    It’s been years and this never left my head. The line is from a scene with a judge for a far-future transhumanist syndicate based on the teachings of Confucius.

    The House of the Venerable and Inscrutable Colonel was what they called it when they were speaking Chinese. Venerable because of his goatee, white as the dogwood blossom, a badge of unimpeachable credibility in Confucian eyes. Inscrutable because he had gone to his grave without divulging the Secret of the Eleven Herbs and Spices. p. 92

    About

    This entry was posted on Monday, March 19th, 2018 and is filed under Uncategorized.


    Generous and terrifying: the best late homework policy of all time

    I want all of my interactions with students to be about the transmission of wondrous ideas. All the other bullshit should be defined out of my life as an educator.

    But life happens, and students can flake on you and on their classmates, and if you don’t discourage it, it gets worse. So now the transmission of wonder is being crowded out by discussion about your late policy. And late policies are a trap.

    For a softy like me, any policy that is strong enough to actually discourage tardy work is too harsh to be credible. To say NO LATE WORK WILL BE ACCEPTED is all well and good until you hit the exceptions: personal tragedies you don’t want to know about, the student who thoughtfully gave you three weeks advance notice by email, your own possible mistakes. Suddenly you’re penalizing thoughtfulness, incentivizing students to dishonestly inflate their excuse into an unspeakable tragedy, and setting yourself up to be the stern looker-past-of-quivering-chins. And what’s the alternative? 10% off for each day late? I don’t want to be rooting through month-past late-night emails from stressed students, looking up old deadlines, counting hours since submission, or calculating 10% decrements for this person and 30% for that one, especially not when such soft alternatives actually incentivize students to do the math and decide that 10% is worth another 24 hours. Plus, with all of these schemes, you’re pretending you care about a 10:02 submission on a 10:00 deadline—or even worse, you’re forgetting reality and convincing yourself that you actually do care.

    My late policy should be flagrantly generous and utterly fearsome. It should be easy to compute and clear and reasonable. It should most certainly not increase the amount of late work, especially because that increases the work on me. It should be so fair that no one who challenges it has a leg to stand on, and so tough that all students are very strongly incentivized to get their work in on time. It should softly encourage students to be good to themselves, while allowing students flexibility in their lives, while not being so arbitrarily flexible that you’re always being challenged and prodded for more flexibility.

    What I wanted was a low effort, utterly fair policy that nevertheless had my students in constant anxiety for every unexcused minute that they were late.

    GambleProtocol

    Is that even possible? Meet the Gamble Protocol. It’s based around one idea: because humans are risk averse, you can define systems that students simultaneously experience as rationally generous and emotionally terrifying. All you have to do is create a very friendly policy with small, steadily increasing probabilities of awful outcomes.

    The Gamble Protocol is a lot like the well-known “10% off for every day late.” In fact, in the limit of infinite assignments, they’re statistically indistinguishable. Under the Protocol, a student who gets an assignment in before the deadline has a 100% chance of fair assessment of their work. After the deadline, they have a steadily increasing chance of getting 0% credit for all of their hard work. No partial credit: either a fair grade or nothing at all. On average, a student who submits 100 perfect assignments at 90% probability gets an A-, not because all submissions got 90%, but because ten got 0%. A bonus, for my purposes, is that I teach a lot of statistical reasoning, so the Protocol has extra legitimacy as an exercise in experiential learning.

    After experimenting a bit, and feeling out my own feelings, I settled on the following: for each assignment, I draw a single number that applies to everyone (rather than recalculating for every late student). I draw it whenever I like, and I always tell students what number got drawn, and how many students got caught. The full details go in the syllabus:

    Deadline. If the schedule says something is due for a class, it is due the night before that class at 10:00PM. There is no partial credit for unexcused lateness; late assignments are worth 0%. However, assignments submitted after the deadline will get a backup deadline subject to the Gamble Protocol.
    The Gamble Protocol. I will randomly generate a backup deadline between 0 and 36 hours after the main deadline, following a specific pattern. Under this scheme:

    • an assignment that is less than 2 hours late (before midnight) has a 99% chance of earning credit,
    • an assignment turned in before 2:00AM has a 98% chance of earning credit,
    • an assignment turned in 12 hours late, by 10AM, has a 90% chance of earning credit,
    • that jumps suddenly down to 80% between 12–14 hours, getting worse faster,
    • an assignment turned in 24 hours late, before the next 10:00PM, has a 60% chance of earning credit,
    • and an assignment turned in more than 36 hours late is guaranteed to earn zero credit.

    I will not calculate the backup deadline until well after its assignment was due.

    Calculating is easy. For each assignment,

    • you can put the following numbers in a hat and draw:

      0 2 4 5 6 7 8 9 10 11 12 12 12 12 12 12
      12 12 12 12 14 15 16 17 18 19 20 21 22 23 14 15
      16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
      32 33 34 35 24 25 26 27 28 29 30 31 32 33 34 35
      24 25 26 27 28 29 30 31 32 33 34 35 24 25 26 27
      28 29 30 31 32 33 34 35 24 25 26 27 28 29 30 31
      32 33 34 35
    • or you can open any online R console and paste this code:
      # the same 100 values as the hat above
      deadline <- c( 0, 2, c(4,5,6,7,8,9,10,11), rep(12, 10), rep(14:23, 2), rep(24:35, 5) )
      # draw one backup deadline
      sample(deadline)[1]
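If you want to double-check that the hat matches the advertised odds, here is a quick sketch in Python mirroring the R vector above. It assumes credit is earned whenever the drawn backup deadline is at least as late as the submission.

    # Same 100 values as the hat above.
    deadline = [0, 2] + list(range(4, 12)) + [12] * 10 \
               + list(range(14, 24)) * 2 + list(range(24, 36)) * 5
    assert len(deadline) == 100

    for hours_late in [1, 3, 12, 13, 24, 36]:
        p_credit = sum(d >= hours_late for d in deadline) / len(deadline)
        print(f"{hours_late:>2} hours late: {p_credit:.0%} chance of credit")
    # prints 99%, 98%, 90%, 80%, 60%, and 0%, matching the syllabus bullets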

I’m keeping data from classes that did and did not use this policy to see if it reduces late work. I still haven’t crunched any of it, but I will if requested. For future classes, I was thinking of extending from 36 hours to a few days, so that it really is directly equivalent to 10% off for a day’s tardiness.

    About

    This entry was posted on Monday, March 19th, 2018 and is filed under Uncategorized.


    How to create a Google For Puppies homepage

    Trying to get my students interested in how the Internet works, I ended up getting my family interested as well. We made this:
    PuppyGoogle
    Here is how to install it:

    • Download this file, containing the homepage and puppy image in a folder
    • Move the file where you want it installed and unzip it
    • Drag the Google.html file to your browser
    • Copy the address of the file from your location bar, and have it handy
    • Copy that file name into your browser’s box for replacing or overriding the new tab page, or, if you are on another browser, wherever in its options new tab pages get customized.
      • On Chrome, you’ll have to install this extension
      • On Firefox, you’ll have to install this extension
      • In addition to changing the new tab page, you can more easily change the default home page to the same address.
    • If you want to change the appearance of this page in any way, you can edit the Google.html file as you like. The easiest thing to do is search/replace text that you want to be different.

    About

    This entry was posted on Saturday, February 10th, 2018 and is filed under Uncategorized.


    Quantifying the relative influence of prejudices in scientific bias, for Ioannidis

    Technology makes it increasingly practical and efficient to quickly deploy experiments, and run large numbers of people through them. The upshot is that, today, a fixed amount of effort produces work of a much higher level of scientific rigor than 100, 50, or even 10 years ago. Some scientists have focused their steely gazes on applying this new better technology to foundational findings of the past, triggering a replication crisis that has made researchers throughout the human sciences question the very ground they walk on. John Ioannidis is a prominent figure in bringing attention to the replication crisis with new methods and a very admirable devotion to the thankless work of replication.

In the provocatively titled “Why Most Published Research Findings Are False”, Ioannidis makes six inferences about scientific practice in the experimental human sciences:

1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

His argument, and arguments like it, have produced a great effort at quantifying the effects of these various forms of bias. Excellent work has already gone into the top three or four. But the most mysterious, damning, dangerous, and intriguing of these is #5, the role of prejudices. And, if you dig through the major efforts at pinning these various effects down, you’ll find that they all gloss over it, understandably, because it seems impossible to measure. That said, Ioannidis gives us a little hint about how we’d measure it. He briefly entertains the idea of a whole scientific discipline built on nothing, which nevertheless finds publishable results in 1 out of every 2, or 4, or 10, or 20 cases. If such a discipline existed, it would help us estimate the relative impact of preconceived notions on scientific outputs.

    Having received much of my training in psychology, I can say that there are quite a few cases of building a discipline on nothing. They’re not at the front of our minds because psychology pedagogy tends to focus more on its successes, but if you peer between the cracks you’ll find scientific, experimental, quantitative, data-driven sub-fields of psychology that persisted for decades before fading with the last of their proponents, that are remembered now as false starts, dead ends, and quack magnets. A systematic review of the published quantitative findings of these areas, combined with a possibly unfair assumption that they were based entirely on noise, could help us estimate the specific frequency at which preconceived bias creates Type I false positive error.
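To make the arithmetic concrete: as I read Ioannidis’s model, a null relationship gets reported as a positive finding with probability alpha + u * (1 - alpha), where u is his bias parameter. So for a field we are willing to assume was built entirely on noise, the observed rate of positive reports lets you back out u. A minimal sketch, with a made-up published-positive rate:

    # Backing out Ioannidis's bias parameter u from a field assumed to be built
    # on nothing: every tested relationship is null, so positives appear at rate
    # alpha + u * (1 - alpha). The observed rate below is made up.
    alpha = 0.05                  # nominal Type I error rate
    observed_positive_rate = 0.5  # hypothetical: half of tested claims "worked"

    u = (observed_positive_rate - alpha) / (1 - alpha)
    print(f"implied bias u = {u:.2f}")  # about 0.47 for these made-up numbers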

What disciplines am I talking about? Introspection, phrenology, hypnosis, and several others are the first that came to mind. More quantitative areas of psychoanalysis, if they exist, and if they’re ridiculous, could also be fruitful. In case I or anyone else wants to head down this path, I collected a bunch of resources for where I’d start digging. My goal would be to find tables of numbers, or ratios of published to unpublished manuscripts, or some way to distinguish true results from true non-results from false results from false non-results.

    • Introspection:
      • The archives of Titchener (at Cornell) and Wundt
      • https://plato.stanford.edu/entries/introspection/
      • Boring’s paper on the history of introspection: https://pdfs.semanticscholar.org/1191/4d0d6987fa13d7f75c0717441d1457b969f3.pdf
    • ESP:
      • Bem’s pilots
      • https://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off (ironically written by Jonah Lehrer)
    • Hypnosis:
      • http://journals.sagepub.com/doi/abs/10.1177/0073275317743120
      • Orne’s “On the social psychology of the psychological experiment”
    • Phrenology:
      • https://archiveshub.jisc.ac.uk/search/archives/beb88bfc-51c1-3536-9539-6370f2b9440d
    • Other dead theories:
      • Dictionary of Theories, Laws, and Concepts in Psychology (https://books.google.com/books?id=6mu3DLkyGfUC&pg=PA49)

    About

    This entry was posted on Sunday, February 4th, 2018 and is filed under Uncategorized.


    Pandas in 2018

I’m late to the game on data science in Python because I continue to do my data analysis overwhelmingly in R (thank god for data.table and the tidyverse and all the amazing stats packages. To hell with data.frame and factors). But I’m finally picking up Python’s approach as well, mainly because I want my students, if they’re going to learn only one language, to learn Python. So I’m teaching the numpy, pandas, matplotlib, seaborn combination. I got lucky to discover two things about pandas very quickly, and only because I’ve been through the same thing in R: 1) the way you learn to use a package is different in subtle ways from how it is documented and taught, and 2) the way a young data science package is used now is different from how it was first used (and documented) before it was tidied up. That means that StackExchange and other references are going to be irrelevant a lot of the time, in ways that are hard to spot until someone holds your hand.

I just got the hand-holding—the straight-to-pandas-in-2018 fast-forward—and I’m sharing it. The pitfalls all come down to Python’s poor distinctions between copying objects and editing them in place. In a nutshell, use .query() and .assign() as much as possible, as well as .loc[], .iloc[], and .copy(). Use [], [[]], and bare df.column access as little as possible, and, if so, only when reading and never when writing or munging. In more detail, the resources below are up-to-date as of the beginning of 2018. They will spare your ontogeny from having to recapitulate pandas’ phylogeny:

    https://tomaugspurger.github.io/modern-1-intro

    http://nbviewer.jupyter.org/urls/dl.dropbox.com/s/sp3flbe708brblz/Pandas_Views_vs_Copies.ipynb
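For concreteness, here is a minimal sketch of the style those guides converge on, using a made-up frame (the exact columns and numbers don’t matter):

    import pandas as pd

    df = pd.DataFrame({"player": ["a", "b", "a", "c"],
                       "score": [10, 3, 7, 5]})

    # Fragile, old-style habit: chained [] indexing, which writes onto a view
    # and earns you a SettingWithCopyWarning (or, worse, a silent no-op):
    #   bad = df[df["score"] > 4]
    #   bad["bonus"] = bad["score"] * 2

    # The style to aim for: .query() and .assign() return new frames,
    # .loc[] for explicit selection, .copy() when you want an independent object.
    good = (df.query("score > 4")
              .assign(bonus=lambda d: d["score"] * 2))

    subset = df.loc[df["score"] > 4, ["player", "score"]].copy()
    subset["bonus"] = subset["score"] * 2  # safe: subset is a real copy

    print(good)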

    Thanks Eshin

    About

    This entry was posted on Tuesday, January 9th, 2018 and is filed under Uncategorized.


    Good mental hygiene demands constant vigilance, meta-vigilance, and meta-meta-vigilance

I get paid to think. It’s wonderful. It’s also hard. The biggest challenge is the constant risk of fooling yourself into thinking you’re right. The world is complicated, and learning things about it is hard, so being a good thinker demands being careful and skeptical, especially of yourself. One of my favorite tools for protecting myself from my ego is the method of multiple working hypotheses, described in wonderfully old-fashioned language by the geologist Thomas C. Chamberlin in the 1890s. Under this method, investigators protect themselves from getting too attached to their pet theories by developing lots of pet theories for every phenomenon. It’s a trick that helps maintain an open mind. I’ve always admired Chamberlin for that article.

Now, with good habits, you might become someone who is always careful to doubt themselves. Once that happens, you’re safe, right? Wrong. I was reading up on Chamberlin and discovered that he ended his career as a dogmatic, authoritarian, and very aggressive critic of those who contradicted him. This attitude put him on the wrong side of history when he became one of the most vocal critics of the theory of continental drift, which he discounted from the start. His efforts likely set the theory’s acceptance back by decades.

The takeaway is that no scientist is exempt from becoming someone who eventually starts doing more harm than good to science. Being wrong isn’t the dangerous thing. What’s dangerous is thinking that being vigilant makes you safe from being wrong, and thinking that not thinking that being vigilant makes you safe from being wrong makes you safe from being wrong. Don’t let your guard down.

    Also see my list of brilliant scientists who died as the last holdouts on a theory that was obviously wrong. It has a surprising number of Nobel prize winners.

    Sources:

    • https://www.geosociety.org/gsatoday/archive/16/10/pdf/i1052-5173-16-10-30.pdf
    • https://www.smithsonianmag.com/science-nature/when-continental-drift-was-considered-pseudoscience-90353214/

    About

    This entry was posted on Tuesday, January 9th, 2018 and is filed under Uncategorized.


    List of Google Scholar advanced search operators

I’m posting this because it was surprisingly hard to find. That is partly because, as far as I can tell, you don’t need it. Everything I could find is already implemented in Scholar’s kind-of-hidden visual interface to Advanced Search. The only possible exception is site:, which Advanced Search doesn’t offer, though source: covers some of the same ground. Standard things like “”, AND, OR, (), plus, and minus work as-is and are well documented.

    Beyond that, I didn’t find much:

    1. allintitle: — conduct the whole search over paper titles
    2. allintext: — conduct the whole search over paper texts
    3. author: — search within a specific author.
    4. source: — search within a specific journal
    5. site: — search within a specific site

There are no operators for years that I could find; you have to use the sidebar or the as_ylo and as_yhi parameters in the URL (e.g.
    &as_ylo=1990&as_yhi=2022).

    example:

    allintitle: receptor site:jbc.org hormone “peptide receptor” -human author:”y chen” source:journal
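Since the year range only lives in the URL, here is a quick sketch of building such a link by hand. The as_ylo and as_yhi parameters are the ones mentioned above, q is the standard query parameter, and the query string is borrowed from the example:

    from urllib.parse import urlencode

    # Build a Scholar search URL with a year range, since there is no year operator.
    params = {"q": 'allintitle: receptor "peptide receptor" -human',
              "as_ylo": 1990,
              "as_yhi": 2022}
    print("https://scholar.google.com/scholar?" + urlencode(params))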

    *

    operator operators advanced search special keywords complete academic list

    About

    This entry was posted on Friday, December 29th, 2017 and is filed under Uncategorized.


    typographically heavy handed web design

    Typography is fun. Recent developments in HTML are v. underexplored, especially in what they let you do with type and transparency. I came up with a concept for a navigation bar that would have no backgrounds or borders. It uses noise to direct attention, and gets structure from how things emerge from noise. All in CSS and HTML: no Javascript needed.

See the Pen Designing with type by Seth Frey (@enfascination) on CodePen.

    http://enfascination.com/htdocs/text_design/

    About

    This entry was posted on Wednesday, November 29th, 2017 and is filed under Uncategorized.


    A simple way to drive to get more efficiency out of cruise control

Out of the box, consumer cruise control interfaces favor simplicity over efficiency. Even though it can be efficient to maintain constant speed, cruise control wastes a lot of energy downhill by braking to keep from going more than 1 MPH over the target speed. If cruise control systems allowed more variation around the target speed, with softer and more spread out upper and lower bounds, they would gain efficiency by letting cars build momentum downhill and store energy they can use uphill.

I developed a brainless way of implementing this without having to overthink anything. This method is much simpler than driving without cruise control, and it only takes a little more attention than using cruise control normally. Using it on a 3 hour hilly drive, a round trip from Hanover, NH to Burlington, VT, I increased my MPG by almost 10, from roughly 38 to roughly 46. I got there in about the same amount of time, but with much more variation in speed. The control trip had cruise control at 72 in a 65. I didn’t deviate from that except for the occasional car. The temp both days was around 70°. Car is a 2008 Prius.

For the method, instead of deciding on a desired speed, you decide on a desired MPG and minimum and maximum speeds. That’s three numbers to think up instead of one, but you can do it in a way that’s still brainless. Set your cruise control to the minimum, fix your foot on the throttle so that you’re usually above that speed driving at the target MPG, and only hit the brakes when you expect to hit your maximum. For this trip, my target MPG was 50, and my minimum and maximum speeds were 64 and 80 (so cruise control was at 64). For the most part, my foot is setting the pace and the cruise control is doing nothing. As I go uphill, the car decides that I’m not hitting the gas hard enough and it takes over. As we round the hill it eases off and I feel my foot get back in control (even though it hasn’t moved at all). Then, using momentum built downhill, I’m usually most of the way up the next hill before the engine kicks in. Momentum goes a long way, especially in a hybrid. Hybrids are heavy because of their batteries. Over three hours, I was at 46 MPG and spent most of the trip around 70MPH.
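Restated as a decision rule, with the numbers from this trip (just a toy restatement of what my foot was doing, not a controller design):

    # The three numbers from this trip. Cruise control holds the floor; the
    # brake only guards the ceiling; the right foot does everything in between.
    CRUISE_MIN_MPH = 64
    MAX_MPH = 80
    TARGET_MPG = 50

    def driver_action(speed_mph):
        if speed_mph >= MAX_MPH:
            return "brake"
        if speed_mph <= CRUISE_MIN_MPH:
            return "let cruise control hold the floor"
        return f"hold the throttle near {TARGET_MPG} MPG"

    print(driver_action(72))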

    This method probably doesn’t make a difference in flat areas, but it contributes a lot in hilly ones. I don’t expect to ever hit my target MPG, but by minimizing the time spent below it, I can count on approaching it asymptotically. A hypermiler would recommend driving a lot more slowly than 70, but they’d also recommend stripping out your spare tire and back seats, so take it or leave it.

    Peak fuel efficiency on a Prius comes at a crazy low speed, somewhere in the 30s MPH I think: a pretty unrealistic target for highway driving. But if there were no traffic, and if I were never in a hurry, I’d try it again with cruise control at 45 MPH, a target of 60 MPG, and a max speed of 90 MPH, to see if I could hit 50 MPG. I haven’t stayed above 50 on that drive before, but I still think I can do it and still keep my back seat.


    This entry was posted on Tuesday, October 24th, 2017 and is filed under Uncategorized.


    How do we practice large-scale social engineering when, historically, most of it is evil?

    There are many obvious candidates for most evil large scale social system of all time. Apartheid gets special interest for the endurance of its malevolence. I am interested in how to design social systems. Looking at oppressive designs is important for a few reasons. First, as a warning: it’s an awful fact that the most successful instances of social engineering are all clear examples of steps backwards in the betterment of humankind. Second, as reverse inspiration. Apartheid was a very clear set of rules, intentionally put together to make Africans second to those of European descent. Each rule contributed to that outcome. Some of those rules exist today in the US in weaker form, but they are hard to recognize as inherently oppressive until you see them highlighted as basic principles of the perfectly oppressive society. So what are those principles? And where did they come from?

    I recently learned that intellectual architects of Apartheid in South Africa visited the American South for inspiration, which they tweaked with more lessons in subjugation from British Colonial rule. One historian described early 20th century South Africa and the USA as representing “the highest stage of white supremacy.”

    But Apartheid wasn’t a copy/paste job. Afrikaners understood Apartheid as something that learned from the failures of Jim Crow as a system of segregation and control. US failures to prevent racial mixing inspired the South African system in which multiracial people were classified as a third race, called Colored, which to this day is distinct from Black. The US model also inspired a political geography (the Homelands) that would keep Africans entirely outside of urbanized areas, except as laborers. The Afrikaners were able to go further as well. In order to undermine organizing and maintain control, they took measures to prevent communication between homelands (like by making a different language the “national” language of each fake nation). With black Africans divided among 9 (?) of these fake nations, the white minority, outnumbered roughly 5:1, could ensure that it was never outnumbered by any one group. And the animosity that these artificial divisions created between black Africans 70 years ago persists today.

    I don’t like “smoky shadow conspiracy / backroom deal” theories of political control, because I think a lot of systemic oppression happens in a decentralized way through perverse values. But some systems of oppression really are designed.

    Notes

    I got onto the question of US influence on Apartheid after listening to Trevor Noah’s autobiography. At one point he says that a commission that outlined Apartheid did a world tour of oppressive regimes and wrote a report of recommendations. I still haven’t found that list of countries (or the date of the trip, or the name of the report (Lagden Commission? Sauer Commission?)), but I found other things: early (40 years prior) intellectual groundwork of Apartheid. Here are the sources I got my hands on for the specific question of foreign inspiration.

    Primary:
    https://archive.org/details/southafricannati00sout
    https://archive.org/details/blackwhiteinsout00evan

    Secondary:
    Rethinking the Rise and Fall of Apartheid: South Africa and World Politics, by Adrian Guelke
    Racial Segregation and the Origins of Apartheid in South Africa, 1919–36, by Saul Dubow
    The Highest Stage of White Supremacy: The Origins of Segregation in South Africa and the American South, by John W. Cell


    This entry was posted on Tuesday, October 17th, 2017 and is filed under Uncategorized.


    “What’s a pee-dant?”

    My wife, a librarian and self-described pedantic jerk, got a tough question at the library the other day: “What’s a pee-dant?” Her first thought? “This has got to be a setup.”


    This entry was posted on Saturday, September 9th, 2017 and is filed under Uncategorized.


    My great grandma’s face tattoos

    My momma is from a part of Jordan where women had a tradition of getting tattoos all over. After many years of searching, and finally help from my librarian wife, I found a book published by Jordan’s national press by Taha Habahbeh and Hana Sadiq, an Iraqi fashion designer living in Jordan. I don’t think either speaks English, and the book is only in Arabic, but the pictures are good, if grainy.

    [Three scanned photos of the tattoos, from the book]

    (None of these are my great-grandma. These are all pictures from the book. It has a lot more. Full scan here.)

    So yeah, face tattoos. And while we’re on the subject of things you do in the Middle East without fully thinking through the consequences, here’s a political service announcement about US foreign policy: After an extended period of secularization through the mid-20th century, in which my mom wore miniskirts and short hair, fundamentalist Islam started its revival in Jordan in the 1980s. The reversal is almost entirely attributable to the fallout from the USA’s hysterically anticommunist foreign policy. That violent silliness drove US funding and training of the Afghan groups that became Al Qaeda, the initiation of a nuclear program in Iran to keep it from leaning on Russia, the smuggling of arms to Iran to fund anti-Communist massacres in Nicaragua, and the destructive consequences of the US’s uncompromising support for the Israeli occupation of Palestine. More recently, with US-caused conflicts in Iraq spreading war to Syria, Jordan continues to be the largest refugee camp in the world. Jordanians may always be a minority in their country.


    This entry was posted on Monday, August 28th, 2017 and is filed under Uncategorized.


    New paper out from my time at Disney: Blind moderation with human computation

    Frey, S., Bos, M.W., and Sumner, R.W. (2017) “Can you moderate an unreadable message? ‘Blind’ content moderation via human computation” Human Computation 4:1:78–106. DOI: 10.15346/hc.v4i1.5
    Open access (free) here.

    What’s it about?

    Say I’m the mailman and you just received a letter, and you want to know, before opening it, whether it has anything disturbing inside. You could ask me to invade your privacy and open it. Or I could respect your privacy and make you take a chance. But I can’t do both. In this sense, safety and privacy are opposed. Or are they? In certain decision settings, it’s possible to filter out unsafe letters without opening any of them.

    In this project, I lay out two tricks I developed for determining, without looking at a piece of content, whether it is inappropriate. This is important because most kids are on the Internet. In fact, according to some reports, a third of all cell phones are owned by minors.

    One of the two methods could one day work for protecting voters from intimidation, by replacing the normal checkboxes on a ballot with two low-resolution, noisy copies of a generic face. Here’s the basic idea. You have a tyrant and an upstart competing for the tyrant’s seat. Everyone wants to vote for the upstart, but everyone is afraid that the tyrant will read their ballot and seek retribution. Grant the big assumptions that the winner will get to take office, that there’s protection from voting fraud, and all that stuff, and just focus on the mechanics of the ballot.

    In my scheme, your ballot doesn’t actually name any candidate. All it shows is two copies of the same generic face, both sort of fuzzed up with noise like the snow on a TV. By chance, because of the noise, one face will look ever so slightly more like the candidate you prefer. To vote, all you do is circle that face. Every person gets a ballot with the same face, but different noise. Then, after all the ballots are collected, you take all the faces that got circled and average them, and the generic face plus the averaged noise will look like the face of the upstart. But from any individual ballot it’ll be impossible for the tyrant to know who you voted for. This averaging method, called reverse correlation in social psychology, has already been shown to do all kinds of cool stuff. But never anything even vaguely useful before. That’s why this paper could be considered a contribution.
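
    Here is a toy version of that averaging step in numpy, just to show why the aggregate reveals the choice while a single ballot doesn’t. The image size, the random “upstart” pattern, and the ballot count are all made up; the real stimuli in the paper are actual face images:

    import numpy as np

    rng = np.random.default_rng(0)
    SIZE = 32                                  # toy resolution
    base = np.zeros((SIZE, SIZE))              # stand-in for the generic face
    upstart = rng.normal(size=(SIZE, SIZE))    # stand-in for the upstart's features

    def cast_ballot():
        """One voter: circle whichever noisy copy looks more like the upstart."""
        noise = rng.normal(size=(SIZE, SIZE))
        copy_a, copy_b = base + noise, base - noise
        prefers_a = np.sum(copy_a * upstart) > np.sum(copy_b * upstart)
        return noise if prefers_a else -noise   # the noise of the circled copy

    chosen = np.mean([cast_ballot() for _ in range(5000)], axis=0)

    # The average of the circled noise resembles the upstart (correlation well above 0)...
    print(np.corrcoef(chosen.ravel(), upstart.ravel())[0, 1])
    # ...while any single ballot is nearly uninformative (correlation near 0).
    print(np.corrcoef(cast_ballot().ravel(), upstart.ravel())[0, 1])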

    I’m proud of this paper, and of how quickly it came out: just, umm, three years. Quick for me.


    This entry was posted on Thursday, August 10th, 2017 and is filed under Uncategorized.


    Modern economic ideology has overwritten sharing as the basis of human history

    The commons, and collective efforts to govern them, are probably as old as humanity. They are certainly as old as Western civilization. And yet very few people know it, and most who don’t know the history would even say that the claim is in doubt. This is because the spread of the ideology of modern Western civilization has not only downplayed the role of common property in our history, it has also reinterpreted the successes of the commons as successes of capitalism.

    The truth of it comes right out and grabs you in the roots of two words, “commoner” and “capital”. In modern usage, the word “commoner” refers to someone who is common (and poor), as opposed to someone who is exceptional (and rich). But the actual roots of the word referred to people who depended in their daily lives on the commons: forests, rivers, peat bogs, and other lands that were common property, and managed collectively, for centuries. The governance regimes that developed around commons were complex, efficient, fair, stable, multi-generational, and uniquely suited to the local ecology. They were beautiful, until power grabs around the world made them the property of the wealthy and powerful, and reduced commoners to common poverty.

    The word capital comes from Latin roots for “head,” a reference to heads of cattle, an early form of tradable property that may have formed the intellectual root of the ideas of property on which capitalism is built. The herding of capital is typical of the pastoralist life that characterized much of human existence before the invention of agriculture and the state. A notable feature of pastoralists around the world is that they tend to share and collectively manage rangelands. Some of the oldest cooperatives in the world, like a Swiss one more than 500 years old, are grazing cooperatives. Collective ownership is at least as old as the management of domesticated herds, and it is absolutely essential to most instances of herding, particularly among herders living in the low-yield lands surrounding the cradle of Western civilization. In other words, common property is what made capital possible.

    Common property is alive and well today, and just as new technology is making more and more goods privately ownable (and therefore distributable through markets), it is also giving people more and more opportunities to benefit from collective action, and to continue to be a part of history.


    This entry was posted on Saturday, July 15th, 2017 and is filed under Uncategorized.


    Black and white emoji fonts

    I’m working with Matteo Visconti di Oleggio Castello to bring modern emoji to letterpress. Nerds are into standards, so by “modern” I mean Emoji version 5.0, which is implemented in Unicode 10.0. We’re helped by our typehigh project for transforming .svg, .png, and even full .ttf files into 3D-printable .stl models (via .scad). All we need are emoji font files suitable for letterpress. After a bit of effort seeing whether it would be easy to convert color fonts to black and white, we realized that there should be black and white emoji fonts. But it was harder than we thought. Almost all modern emoji fonts are in full color, and it took some digging to find symbol fonts that are still black and white. I was able to find a bunch, as well as some full-color fonts that are designed to have black and white “fallback” modes.

    Fonts

    Here is what I found:

    Noto Emoji Font
    Google has a fully internationalized font, Noto, whose emoji font has a black and white version:
    https://github.com/googlei18n/noto-emoji/tree/master/fonts
    The smileys are blobs.

    EmojiOne
    EmojiOne is a color font with black and white fallbacks. I couldn’t figure out how to trigger the fallbacks, but I found an early pre-color version of EmojiOne:
    https://github.com/eosrei/emojione

    Android Emoji
    Not sure why, but one of Android’s main Emoji fonts is black and white
    https://github.com/delight-im/Emoji/tree/master/Android/assets/fonts
    The smileys are androids.

    GNU’s FreeFont
    FreeFont is black and white.
    http://savannah.gnu.org/projects/freefont/
    http://ftp.gnu.org/gnu/freefont/?C=M;O=D

    Symbola
    Symbola is a black and white Linux font with nearly full Unicode support:
    http://apps.timwhitlock.info/emoji/tables/unicode
    http://users.teilar.gr/~g1951d/

    EmojiSymbols
    A free font by an independent designer.
    http://emojisymbols.com/
    You can convert from woff to ttf here

    Microsoft Segoe UI Symbol
    Microsoft has a very high-quality emoji set in its Segoe UI Symbol/Emoji font. And because of copyright law, in which things have to be copyrighted separately for different uses, there shouldn’t be anything keeping us from using it to create printed type:
    https://en.wikipedia.org/wiki/Segoe
    http://www.myfontfree.com/segoeuiemoji-myfontfreecom126f132714.htm

    FireFoxEmoji
    This might be from an old pre-color version:
    https://github.com/mozilla-b2g/moztt/blob/master/FirefoxEmoji-1.6.7/FirefoxEmoji.ttf

    Twitter’s Emoji font
    Twitter open sources its emoji font. This doesn’t have a black and white version, but it does have black and white fallbacks. If I can figure out how to extract or trigger the fallbacks, this could be great.
    https://github.com/eosrei/twemoji

    There may be more at the bottom of this:
    https://github.com/eosrei/emojione
    and here
    https://wiki.archlinux.org/index.php/fonts#Emoji_and_symbols

    Using/testing/seeing these fonts

    Don’t do this through a browser, but on your own system. You have to install each font, then download this file (instead of viewing it in your browser):
    www.unicode.org/Public/emoji/5.0/emoji-test.txt
    Open it in a text editor and switch the font to each of the fonts above to see how each emoji set looks.
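
    If a text editor is awkward, another rough way to spot-check a font is to render a few code points with Pillow. The font filename and sample characters below are placeholders; point it at whichever .ttf you installed:

    from PIL import Image, ImageDraw, ImageFont

    font = ImageFont.truetype("NotoEmoji-Regular.ttf", 64)   # placeholder path
    sample = "\N{GRINNING FACE}\N{THUMBS UP SIGN}\N{ROCKET}"

    img = Image.new("L", (300, 100), color=255)   # "L" = 8-bit grayscale, i.e. black and white
    ImageDraw.Draw(img).text((10, 10), sample, font=font, fill=0)
    img.save("emoji_preview.png")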

    keywords

    emoji symbol font ttf otf open source fallback BW B&W


    This entry was posted on Friday, June 16th, 2017 and is filed under Uncategorized.


    all the emoji in a line

    Here is a quick and dirty list of most of the simple emoji:
    [one long line of nearly every emoji]

    Dirty because there are a few non-emoji characters mixed in. Here is the one-liner:
    wget http://www.unicode.org/Public/emoji/5.0/emoji-test.txt -qO - | sed 's/.*# //;s/\(..\).*/\1/' | uniq | sort | tr -d '\n' | tr -d ' '
    If you want the female cop distinguished from the male cop, try changing the two dots in a row (“..”) to three dots (“...”). If you want no skin-tone modifier, change the two dots to one dot.
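
    If you’d rather tweak it in Python than in sed, here is a rough equivalent of the same extraction, under the same assumptions (same URL as above; the two-character slice mirrors the two dots):

    import urllib.request

    url = "http://www.unicode.org/Public/emoji/5.0/emoji-test.txt"
    text = urllib.request.urlopen(url).read().decode("utf-8")

    emoji = set()
    for line in text.splitlines():
        if line and not line.startswith("#") and "# " in line:
            # Everything after "# " is the emoji plus its name; keep the first
            # two characters, like the "two dots" in the sed command.
            emoji.add(line.split("# ", 1)[1][:2])

    print("".join(sorted(emoji)).replace(" ", ""))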


    This entry was posted on Sunday, June 11th, 2017 and is filed under Uncategorized.


    Yeah, I’m not sure that that’s the takeaway

    I’m reading a paper by Centola and Baronchelli. It describes a well-designed, ambitious experiment with interesting results. But I hit the brakes at this:

    The approach used here builds on the general model of linguistic conventions proposed by Wittgenstein (39), in which repeated interaction produces collective agreement among a pair of players.

    I’m always thrilled to see philosophy quoted as inspiration in a scientific paper, but in this case there’s a legitimacy problem: no one who ever actually paid attention to Wittgenstein is going to have the guts to gloss him that blithely. You don’t formalize into language a legendary demonstration of the non-formalizability of language without introducing and following your gloss with a bunch of pathetic self-consciously equivocal footwork. Also, Wittgenstein, I’m really really sorry for describing Philosophical Investigations as merely or even remotely about the non-formalizability of language.

    D. Centola and A. Baronchelli (2015) The spontaneous emergence of conventions: An experimental study of cultural evolution. PNAS 112(7). http://www.pnas.org/content/112/7/1989


    This entry was posted on Thursday, June 8th, 2017 and is filed under Uncategorized.


    Behavioral economics in the smallest nutshell

    In James March’s book of very good Cornell Lectures, The Ambiguities of Experience (2010), I stumbled on the best and most concise summary of behavioral econ that I’ve read.

    Some features of human cognitive abilities and styles affect the ways stories and models are created from ambiguous and complex experience. Humans have limited capabilities to store and recall history. They are sensitive to reconstructed memories that serve current beliefs and desires. They have limited capabilities for analysis, a limitation that makes them sensitive to the framing that is given to experience. They conserve belief by being less critical of evidence that seems to confirm prior beliefs than of evidence that seems to disconfirm them. They distort both observations and beliefs in order to make them consistent. They prefer simple causalities, ideas that place causes and effects close to one another and that match big effects with big causes. They prefer heuristics that involve limited information and simple calculations to more complex analyses. This general picture of human interpretations of experience is well documented and well known (Camerer, Loewenstein, and Rabin 2004, Kosnick 2008).

    He goes on to add that

    These elements of individual storytelling are embedded in the interconnected, coevolutionary feature of social interpretation. An individual learns from many others who are simultaneously learning from him or her and from each other. The stories and theories that one individual embraces are not independent of the stories and theories held by others. Since learning responds as a result to echoes of echoes, ordinary life almost certainly provides greater consistency of observations and interpretations of them than is justified by the underlying reality. In particular, ordinary life seems to lead to greater confirmation of prior understandings than is probably warranted.

    Overall, the book is very good. Thoughtful and thorough while staying concise. Crisp without being too pithy. I’m thinking of assigning parts.


    This entry was posted on Tuesday, May 30th, 2017 and is filed under Uncategorized.


    Searle has good ideas and original ideas, but his good ones aren’t original, and his original ones aren’t good.

    John Searle is an important philosopher of mind who has managed to maintain his status despite near-ridicule by every philosopher of mind I’ve ever met. He has good ideas and original ones. In the “original” column you can put the Chinese Room and his theory of consciousness. In the “good” column go his theories of speech acts, intentionality, and institutions. None of the former are good and none of the latter are original.

    All credit for this particular takedown goes to Dennett, who put it more thoroughly and less zippily: Searle’s “direction of fit” idea about intentionality is cribbed from Elizabeth Anscombe’s Intention, Searle’s contributions to speech acts are largely a simplified version of Austin’s “How to do things with words,” and his framework for the social construction of reality is obvious enough that the not-even-that-impressive distinction of having gotten there first can be attributed to Anscombe again, in other ways to Schuetz and Berger, and clearly to Durkheim and probably dozens of other sociologists.

    I never admired Searle. His understanding of philosophy of mind is pre-Copernican, both in terms of being based on ancient metaphysics and having everything revolve around him. He only assigned his own books, and the points we had to argue were always only his. He also had a reputation of being a slumlord and a creep. The world recently discovered that he’s definitely a creep. Already feeling not generous about his work and personality, I do hope that his scandals undermine his intellectual legacy.


    This entry was posted on Sunday, May 7th, 2017 and is filed under Uncategorized.


    White hat p-hacking, a primer

    Jargon glossary: Exploratory data analysis is what you do when you suspect there is something interesting in there but you don’t have a good idea of what it might be, so you don’t use a hypothesis. It overlaps with p-hacking, asking random questions of a noisy world on scant data until the world accidentally misfires and tells you what you want to hear, and you pretend that that was what you thought would happen all along. p-hacking is a response to null results, when you’ve spent forever organizing a study and nothing happens. p-hacking might have caused the replicability crisis, which is researchers becoming boors when they realize that everything they thought was true is wrong. Hypothesis registration is when you tell the world what question you’re gonna ask and what you expect to find before doing anything at all. People are excited because it is a solution to p-hacking. A false positive is when you think you found something that actually isn’t there. It is one of the two types of error, the other being a false negative, when you missed something that actually is there. The reproducibility movement is focused on reducing false positives.

    I almost falsified data once. I was a young research assistant in primatologist Marc Hauser’s lab in 2004 (well before he had to quit for falsifying data, but probably unrelated to that). I was new to Boston, lonely and jobless. I admired science and wanted to do it, but I kept screwing up. I had already screwed up once running my monkey experiment. I got a stern talking to and was put on thin ice. Then I screwed up again. I got scared and prepared to put made-up numbers in the boxes. I immediately saw myself doing it. Then I started to cry, erased them, unloaded on the RA supervising me, quit on the spot, and even quit science for a few years before allowing myself back in in 2008. I know how we fool and pressure ourselves. To be someone you respect requires either inner strength or outside help. Maybe I’ve got the first now. I don’t intend to find out.

    That’s what’s great about hypothesis registration. And still, I’m not impressed by it. Yes it’s rigorous and valuable for some kinds of researchers, but it does not have to be in my toolkit for me to be a good social scientist. First, there are responsible alternatives to registration, which itself is only useful in domains that are already so well understood that why are we still studying them? Second, “exploratory data analysis” is getting lumped in with irresponsible p-hacking. That’s bad, and it will keep happening until we stop pretending that we already know the unknowns. In the study of complicated systems, uncertain data-first exploratory approaches will always precede solid theory-first predictive approaches. We need a good place for exploration, and many of the alternatives to registration have one.

    What are the responsible alternatives to hypothesis registration?

    1. Design good experiments, the “critical” kind whose results will be fascinating no matter what happens, even if nothing happens. The first source of my not-being-impressed-enough by the registration craze is that it misses a bigger problem: people should design studies that they know in advance will be interesting no matter the outcome. If you design null results out, you don’t get to a point of having to fish in the first place. Posting your rotten intuitions in advance is no replacement for elegant design. And elegant design can be taught.
    2. Don’t believe everything you read. Replicability concerns don’t acknowledge the hidden importance of tolerating unreplicable research. The ground will always be shaky, so if it feels firm, it’s because you’re intellectual dead weight and an impediment to science. Reducing false positives requires increasing false negatives, and trying to eliminate one type of error makes the other kind explode. Never believe that there is anything you can do to get the immutable intellectual foundation you deserve. Example: psychology has a lot of research that’s bunk. Econ has less research that’s bunk. But psychology adapts quickly, and econ needs decades of waiting for the old guard to die before something as obvious as social preferences can be suffered to exist. Those facts have a deep relationship: economists historically avoid false positives at the cost of false negatives. Psychologists do the opposite, and they cope with the predominance of bunk by not believing most studies they read. Don’t forget what they once said about plate tectonics: “It is not scientific but takes the familiar course of an initial idea, a selective search through the literature for corroborative evidence, ignoring most of the facts that are opposed to the idea, and ending in a state of auto-intoxication in which the subjective idea comes to be considered an objective fact.” link
    3. Design experiments that are obvious to you and only you, because you’re so brilliant. If your inside knowledge gives you absolute confidence about what will happen and why it’s interesting, you won’t need to fish: if you’re wrong despite that wild confidence, that’s interesting enough to be publishable itself. Unless you’re like me and your intuition is so awful that you need white hat p-hacking to find anything at all.
    4. Replace p-values with empirical confidence intervals (see the sketch just after this list).
    5. Find weak effects boring. After all, they are.
    6. Collect way too much data, and set some aside that you won’t look at until later.
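
    For item 4, here is a quick sketch of what an empirical confidence interval looks like in practice. The data is made up, and the bootstrap here is the garden-variety percentile kind; nothing about it is specific to any particular study:

    import numpy as np

    rng = np.random.default_rng(2)
    treatment = rng.normal(0.3, 1.0, size=200)   # made-up outcomes
    control = rng.normal(0.0, 1.0, size=200)

    # Resample each group with replacement and recompute the mean difference.
    boot_diffs = [
        rng.choice(treatment, size=200).mean() - rng.choice(control, size=200).mean()
        for _ in range(10_000)
    ]
    lo, hi = np.percentile(boot_diffs, [2.5, 97.5])
    print(f"95% bootstrap CI for the difference in means: [{lo:.2f}, {hi:.2f}]")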

    OK, so you’re with me: Exploratory data analysis is important. It’s impossible to distinguish from p-hacking. Therefore, p-hacking is important. So the important question is not how to avoid p-hacking, but how to p-hack responsibly. We can; we must. Here is one way:

    1. Collect data without a hypothesis
    2. Explore and hack it unapologetically until you find/create an interesting/counterintuitive/publishable/PhD-granting result.
    3. Make like a responsible researcher by posting your hypothesis about what already happened after the fact.
    4. Self-replicate: Get new data or unwrap your test data.
    5. Test your fishy hypothesis on it.
    6. Live with the consequences.

    While it seems crazy to register a hypothesis after the experiment, it’s totally legitimate, and is probably better done after your first study than before it. This whole thing works because good exploratory findings are both interesting and really hard to kill, and testing out of sample forces you to not take the chance on anything that you don’t think will replicate.
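
    As a concrete sketch of the recipe on made-up data (everything here is illustrative: the dataset, the use of an exhaustive correlation hunt as the “hacking” step, and the libraries; the point is the split and the single out-of-sample test):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.normal(size=(1000, 20))         # pretend dataset: 1000 rows, 20 variables
    data[:, 7] += 0.15 * data[:, 3]            # one weak real relationship hiding in there

    explore, holdout = data[:500], data[500:]  # set the holdout aside before looking

    # Step 2: hack the exploration half unapologetically -- test every pair of columns.
    best_p, best_pair = 1.0, None
    for i in range(20):
        for j in range(i + 1, 20):
            _, p = stats.pearsonr(explore[:, i], explore[:, j])
            if p < best_p:
                best_p, best_pair = p, (i, j)

    # Step 3: the post-hoc hypothesis is "the columns in best_pair are correlated."
    # Steps 4-5: one pre-committed test on the untouched holdout.
    i, j = best_pair
    r, p = stats.pearsonr(holdout[:, i], holdout[:, j])
    print(f"exploratory pick {best_pair}: holdout r={r:.2f}, p={p:.3f}")
    # Step 6: live with the consequences -- if p is unimpressive, the finding dies here.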

    I think of it as integrity exogenously enforced. And that’s the real contribution of recent discourse: hypothesis registration isn’t what’s important, it’s tying your hands to the integrity mast, whether by registration, good design, asking fresher questions, or taking every step publicly. It’s important to me because I’m very privileged: I can admit that I can lie to myself. Maybe I’m strong enough to not do it again. I don’t intend to find out.


    This entry was posted on Monday, April 17th, 2017 and is filed under Uncategorized.


    Journey through rope

    Hypnotic flythrough of CT Scans of polymer climbing rope

    From Wikimedia Commons


    This entry was posted on Thursday, March 16th, 2017 and is filed under Uncategorized.


    Is it scientific or lazy to lose ten bikes to theft?

    As of today, I’ve had more than 10 bikes stolen in the past seven years. That’s 1 in Boston, 8 in Bloomington, 0 in Zurich, and now 2 in Hanover, NH. These aren’t >$1000 bikes, they’re almost all <$100. But it makes you wonder: how do you convince a reasonable person that you’re not crazy when you say that you still aren’t locking up?

    Is it something about wanting to give the world multiple chances to be better than it is? (Or some other rhetoric for self-administering that noble glow?) Is it rather some egoless, arcane, and strictly intellectual life practice about non-attachment? Or maybe an extended experiment for learning what kinds of places are vulnerable to the theft of crappy bikes (college town on party night: very high risk; downtown Boston: surprisingly low risk)? That can’t be it; as interesting as that question is, I definitely don’t care enough about it to have lost all the bikes I’ve lost. Maybe it all comes down to some brilliant, insightful way I have of calculating costs and benefits that makes this all very reasonable and acceptable, and it’s everyone else that’s crazy. Or maybe I should just cut the crap and admit to being stubborn or lazy or asinine, and, like a fool, inexplicably smug about all of those foolish qualities.

    I try to be honest with myself about why I do things. And in this case I honestly don’t know. I think there’s something more to it than the most unflattering accounts allow. I need to know, because I need to know myself. So as much as I hate losing all of these bikes I’ve built and ridden and loved and lost, I might have to keep on doing it until I’ve figured myself out.

    UPDATE: 11


    This entry was posted on Sunday, February 12th, 2017 and is filed under Uncategorized.


    Books read in 2016

    Read:

    • Mark Twain: Collected Tales, Sketches, Speeches, & Essays 1852–1890
      • Reading Twain’s smaller writing. Great to see his less interesting stuff, and fun to be steeped in his voice.
    • Slow Democracy (Chelsea Green, 2012). S. Clark, W. Teachout
      • Clark and Teachout have a great vision for the role democracy should play in people’s lives. I love to see that view represented. This book is more on the movement building side than the handbook or theory side, so it was mostly for helping me not feel alone, although there was good history and good examples.
    • The Communistic Societies of the United States: Economic Social and Religious Utopias of the Nineteenth Century (Dover Publications, 1966). Charles Nordhoff

      • My understanding is that this is a classic study of small-scale communistic societies in the 19th century. They are overwhelmingly religious separatists with leaders. Their business organizations have surprising similarities. The Shakers seemed to have a lot of trouble with embezzlement by those leaders, with about a third of communities showing some history of it. A great resource, and an important reminder that the promise of America was for a long time, and for many people, in its communist utopias.
    • A Paradise Built in Hell : The Extraordinary Communities That Arise in Disaster.
      Penguin Books (2010), Rebecca Solnit,

      • Beautiful, creative, and powerful. I admire Solnit a lot and her message is so clear and strong. She finds a bias hidden deeply in the thinking from both the left and the right, and makes it impossible to unsee. Shortly after reading it I saw it again in the etymology of “havoc.” It’s hard to be uncompromisingly radical and even-headedly fair and lucid at the same time, but she makes it look easy and makes me feel intellectually and physically lazy for failing to make the integration of those apparent extremes look effortless. Maybe the glue is compassion? I hope she has a lot more to say about utopia.
    • Individual strategy and social structure : an evolutionary theory of institutions / H. Peyton Young.
      • An important book laying out an important theory that evolutionary game theory offers a model of cultural evolution. I disagree, and now have a better sense of why. Great history and examples. I read past all but the essential introductory formal work (aka math).
    • JavaScript: The Good Parts and Eloquent Javascript

      • Two short books about Javascript that are helping me learn to think right in the language.
    • The Invisible Hook: The Hidden Economics of Pirates (Princeton University Press, 2009). Peter Leeson

      • Pirate societies. I’ll be teaching this book. I like Leeson a lot even though he’s a mad libertarian. He’s creative.
    • The Social Order of the Underworld: How Prison Gangs Govern the American Penal
      System (2014). Skarbeck

      • Prison societies. I’ll be teaching this too.
    • Codes of the Underworld: How criminals communicate (2009). Diego Gambetta

      • An economist’s signaling perspective on the Mafia.
    • Thinking in Systems: A Primer by Donella Meadows

      • I’ll be teaching this book to help teach my class systems thinking, which is especially gratifying since she was here at Dartmouth. Meadows had a huge influence on me. In fact, my wife got a head start on it and she describes it as a user’s manual to my brain. I didn’t even know this book existed until wifey found it. It’s the best kind of posthumous book because she was almost done writing it when we (the world and, more specifically, Dartmouth) lost her.
    • Citizens of no place – An architectural graphic novel by Jimenez Lai
      • Fun, fast, a good mix of dreamy, ambitious, and wanky.
    • The Little Sister (Philip Marlowe, #5) by Raymond Chandler

      • Chandler is classic noir and I’m happy to get caught up on the lit behind my favorite movies. Marlowe is as cynical, dissipated, dark, and clever as you’d want, though I’ve got to admit I like Hammett better than Chandler: he does cynical better with Spade, and dark better with the Continental Op, and shows through Nick and Nora that he can lighten it up with just as much fluency.

    Reading:

    • Faust

      • Gift from a friend, a proud German friend who took me to the restaurant in the book where the devil gives the jolly guys wine while Faust sits there disaffected and bored. I’m now recognizing that this is an important book for a scientist to read, at least for making the intellectual life romantic.
    • Mark Twain: Collected Tales, Sketches, Speeches, & Essays 1891–1910 Edited by Louis Budd

      • A thick volume of Mark Twain’s early work, where you get to see his voice fill out from journalism through speeches to storytelling, for which he’s now most appreciated. Amazing to see all of his contributions to English, proportional to those of the King James Bible.
    • Elements of Statistical Learning : Data Mining, Inference, and Prediction (New York, NY : Springer-Verlag New York, 2009). J. Friedman, T. Hastie, R. Tibshirani

      • Important free textbook on statistical learning. Great read too; who knew?
    • The evolution of primate societies (University of Chicago Press, 2012). J. C. Mitani, J. Call, P. M. Kappeler, R. A. Palombit, J. B. Silk.

      • Amazing, exciting, expansive, comprehensive academic walk through the primates, how they get on, and how humans are different. It’s a big one, and a slow read, but I’m learning a ton and it’s a great background for me as both a cognitive scientist and social scientist.
    • The origin and evolution of cultures (2005). R. Boyd, P. J. Richerson

      • The fruits of unifying economic, evolutionary, and anthropological thought with mathematical rigor. Great background as I teach myself more about cultural evolution and the evolution of culture.
    • Joseph Henrich’s The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter

      • Pop book that gives an easier overview of the cultural anthro lit. He offers a big vision. Light on details, which is only a problem for me because the claims are so strong, but that’s not what this book is for. Makes me recognize, as a cognitive scientist, that language and consciousness are a giant gaping hole in current evolutionary accounts of what makes humans different.

    To read:

    • Are We Smart Enough to Know How Smart Animals Are? by Frans de Waal

      • De Waal making an overstrong-but-important-to-integrate case for animal social and psychological complexity
    • Animal social complexity : intelligence, culture, and individualized societies / edited by Frans B.M. de Waal and Peter L. Tyack.

      • An academic version of De Waal’s pop book, with more concrete examples and lit, and a great cross-species overview to complement my more focused reading on primates.
    • The Sacred beetle and other great essays in science / edited by Martin Gardner

      • I love reading science writers’ collections of insider science writing, and I had no idea Gardner had one. How fun!
    • The Cambridge companion to Nozick’s Anarchy, state, and utopia / [edited by] Ralf M. Bader, John Meadowcroft.

      • Got to continue my unsympathetic reading of Nozick, especially for the ways that he might be right.

    Sampling:

    • A mammal’s notebook : the writings of Erik Satie / edited and introduced by Ornella Volta ; translations by Antony Melville.

      • Has sketches and cartoons!
    • A cross-cultural summary, compiled by Robert B. Textor, 1964

      • This is mostly a list of numbers, but there’s some book in there too. This was hard to find.
    • The anthropology of complex economic systems : inequality, stability, and cycles of crisis / Niccolo Leo Caldararo.

      • Interesting argument about physical+historical limits influencing economic practice in very subtle ways.
    • The new dinosaurs : an alternative evolution / Dougal Dixon

      • Silly and sick pictures driven by a wildly creative vision.
    • Coping with chaos : analysis of chaotic data and the exploitation of chaotic systems / [edited by] Edward Ott, Tim Sauer, James A. Yorke.

      • Methods for the data-driven analysis of time series

    Bedtime:

    • Rereading Wodehouse
    • Reading more Ursula LeGuin
      • She’s so important, and still my favorite representative of sci-fi that is more interested in political than technological frontiers.


    This entry was posted on Sunday, January 1st, 2017 and is filed under Uncategorized.


    The cooperative lives of the Swiss

    I lived in Zurich, Switzerland for two years and saw a lot that relates to my research interests. The Swiss have a healthy democracy, an incredibly orderly and rational society, loads of civic participation, high rates of cooperative housing, and many other types of cooperative business. They also have one of the bad things that comes with all those good things: lots of peer-policing. In two years, the only times I fell naturally into small talk with random strangers on the street were after being scolded by them for violating a social norm. In one case I’d been recycling wrong; in the other I’d been too loud. So it comes as no surprise that the Swiss also casually spy on each other at home.

    http://www.thelocal.ch/20160721/study-a-fifth-of-swiss-spy-on-neighbours


    This entry was posted on Saturday, December 17th, 2016 and is filed under Uncategorized.


    The beauty of unyielding disappointment, in science and beyond

    There’s an academic trend, hopefully growing, of successful professors publishing their “CVs of failure,” essentially keeping track of their failures with the same fortitude that they track their successes. It’s inspiring, in its own daunting way, and it emphasizes the importance of thick skin, but I think we can do better. I’ve come up with a way to celebrate and rejoice in rejection. Rejection is a lot like science.

    There’s this image that appeared in my head a few years ago on a bus ride, that I find myself returning to whenever the constant rejection gets too much. What I do is imagine this giant brass door set in an imposing rock wall stretching interminably up and to each side. In front of it lies this bruised and emaciated monk in tattered robes. Instead of meditation, his practice is to pace back, gather speed, hurl himself at the door with an awful war cry, crumple pathetically against it, get up again, and repeat, over and over, forever. He doesn’t do it with any expectation of the door ever opening. The door is an eardrum or an eye into the other side, in whose dull defeating reverberations lie hints like drumming echoes of the mysterious world beyond, and no ritual less painful can yield truth.

    That’s my bus ride image. It sounds crazy, but going back to it literally never fails to cheer me up again. It’s hard to pin down, but I’ve come up with a few theories for why maybe it works. Maybe it’s reassuring because absurdity and humility are great at putting things in perspective. Or because it’s equally accurate as a description of failure and as a description of the nature of scientific progress. Or maybe what’s going on is that futility becomes romantic when it can be experienced in a way that’s inseparable from ritual, hilarity, and ecstasy.


    This entry was posted on Monday, October 17th, 2016 and is filed under Uncategorized.


    Michael Lacour has rebranded himself as Michael Jules at www.michaeljules.xyz

    Michael Lacour is a former aspiring political scientist famous for standing accused of a major academic fraud that made national news, embarrassed huge names in his field, led to a major retraction, and drove him from academia forever while netting his whistleblower a job at the prestigious Stanford University. So naturally I’d be curious what Lacour would do next, and I’ve been following his main sites, http://www.michaeljules.xyz and http://www.beautifuldataviz.com/ , for a while.

    The takeaway is that the ever-enterprising guy didn’t stay down. He’s been learning to code and develop himself as a data scientist. www.beautifuldataviz.com is still clunky, but it’s much less unbeautiful than it was six months ago, so I figure he’s coming along well enough on his plan B.

    It all makes me wonder what I’d do if I ever got in the same kind of mess, and what would others do about me. They’re questions worth thinking about. Most people probably don’t care and would be wary but ultimately ready to forgive me, though not to the point of ever letting me back in the ivory towers again. That’s probably justified. I imagine that a small number of others would continue to dog me no matter what I tried for next, and try to protect the whole world from me by spreading my old and new names on the Internet. On that I’m torn. There’s no evidence that Lacour showed any contrition, so maybe everyone should be protected from him. But suffering is a private thing, and it’s funny to make permitting the guy to ever breathe again contingent on his satisfying you that he feels bad or learned the right lesson. Assuming I’m actually not a sociopath, I’d want to draw the line at academia and assert my freedom to move forward from there. But maybe I shouldn’t be allowed near schools of any kind. So when you’ve been shunned at a national scale, what doors should remain open to you in even the eyes of your most toxic schadenfriends? The answer is clear, and Michael Jules Lacour nailed it: even the most dogged of your haters are gonna fall off if your idea of moving forward is to enter the private sector. There’s a fine history there: exiled Harvard primatologist Marc Hauser went into consulting I think. And sociopath or not, capitalism is made for thriving off of people with a name for exploiting the trust of others, and if it doesn’t affect the bottom line, the market is more than ready to forgive it.

    I didn’t say what I’d do. Start a business. JK. Actually, I already know my plan B: get back into the organizing of worker-owned businesses. But I don’t think it’ll come to that. On the market now. Wish me luck.


    This entry was posted on Sunday, October 9th, 2016 and is filed under Uncategorized.


    Vision of 2001 from the pages of a 1901 weekly

    I love seeing visions of the future and of the past. I also love things that make super-human spans of time apprehensible. So, naturally I’m sold on this concluding vision of 2001 from a January 1901 special issue of Collier’s Weekly that focused on life since 1801.

    [Scan of the concluding page from the January 1901 Collier’s Weekly]

    I got this from the collections of Dartmouth College’s Rauner Library, which lets everybody in to ask for anything, and which hosted a personal showing of proto-sci-fi and early astronomy books.


    This entry was posted on Saturday, September 17th, 2016 and is filed under Uncategorized.


    Books read in 2015 (way late)

    This is a bit late, but honestly I didn’t want to write this until I’d remembered one of them that I knew I’d forgotten: all the Nero Wolfe.

    Blockbusters by Anita Elberse. This is a very pop-biz book. We think the Internet gives opportunity to those without it, but its much bigger role is to make the big bigger, at least in the world of entertainment. I’m glad I read it, even if I’m not so glad to now know the things I learned from it.

    Annie Dillard’s Pilgrim at Tinker Creek. This book is so important. The first time I read it, I had to put it down every page to calm down. The second time was less invigorating, but just as dazzling and inspiring. I wish we scientists would (or could?) write for each other about science the way she does. I reread this all the way to the second to last chapter when I lost it.

    Robert Nozick’s Anarchy, State, and Utopia. The book is as impressive as its title, and no less because I was reading it as part of my practice of obsessing about libertarianism, in which I slowly hate-read all of its major thinkers. I hope I can think and write as big and as clearly as Nozick, but without all the same perfectly sound logic that still somehow ends up at obviously faulty conclusions.

    Edward Abbey’s Desert Solitaire. He’s no Dillard, but I admire him for what he contributed, and I’m prepared to take the warts along with that. I feel like the big names in the ecstatic individualist male naturalist tradition — Thoreau, Muir, Whitman, and Abbey too — have been getting all kinds of crap piled on them lately, to the point where it’s kind of out of vogue to appreciate them without tacking on a bunch of apologetic quibbles at the end. Maybe they deserve it. My only point here is that Abbey probably deserves it more than the others.

    This one mycology textbook. Didn’t finish — just made it a few chapters in so far. It’s a dense book, literally and literarily, but the topic is mystifying enough that I’ll keep at it.

    Daniel Kahneman’s Thinking, Fast and Slow. I read this because I felt obliged to know more than I do about decision making, since I’m a decision making researcher. I now know more, and there’s no one I’d have rather learned it from.

    Simpler: The Future of Government. A useful tour of the nudge lit, specifically as it’s been actually applied. Admirable goal. I’m wary of getting too applied myself, but I appreciate the work. The writing is unremarkable, but that’s easy to tolerate when a writer also has information to convey.

    Benoit Mandelbrot’s Misbehavior of Markets: A fractal view of financial turbulence. I love the way Mandelbrot writes, but I still kind of always get this dirty suspicious feeling, as if one should approach his books by spending less time reading him than reading past him. I have some un-blogged comments on one part of it that I particularly appreciated.

    Bedtime reading: old detective and mystery novels and Wodehouse. In noir, Dashiell Hammett’s The Thin Man, The Maltese Falcon, and Red Harvest. In mystery, three of Rex Stout’s stories about Nero Wolfe: Fer-de-Lance, The League of Frightened Men, and The Rubber Band.
    And to round out my early 20th century genre lit, rereading Wodehouse.


    This entry was posted on Saturday, September 17th, 2016 and is filed under Uncategorized.


    “The unexpected humanity of robot soccer” with Patrick House in Nautilus Magazine

    I have a new popular-audience article in the amazing Nautilus Magazine with science journalist, neuroscientist, and old cooperative housemate Patrick House. I have tons of respect for both, so it’s exciting to have them together.

    http://nautil.us/issue/39/sport/the-unexpected-humanity-of-robot-soccer

    This article had many lives in the writing, and it was a tough collaboration, but we came through OK. Don’t be fooled by my name coming first: Patrick did most of the work.


    This entry was posted on Thursday, September 1st, 2016 and is filed under Uncategorized.


    My mug on thewaltdisneycompany.com

    “Don’t just study the data; be the data.” I volunteered to help out my friends Thabo Beeler and Derek Bradley last year. They needed facial scans to analyze for their research. That work is done, and is featured prominently by Disney. The video of their work is halfway down.

    https://thewaltdisneycompany.com/disney-research-releases-latest-round-of-inventions/


    This entry was posted on Thursday, September 1st, 2016 and is filed under Uncategorized.


    Is there any legitimate pleasure or importance in self-denial?

    I’m kind of a curmudgeon, preferring not to have comfortable doodads, labor-saving contrivances, and other perks of consumer society. I try not to have air conditioning when it’s hot or heat when it’s cold, and I’ve successfully avoided having a phone for years and, until recently, a car. The car has one of those lock/unlock key dongles that I accepted so naturally into my life that I want to scoff at myself. I’ve managed to flatter myself at different times that all of this abstention from the finer things is about maintaining intellectual independence or building character, being rugged, preserving my senses, avoiding wastefulness, or being Universal and not just American. But sometimes I wonder if my society’s interpretations are more accurate, and if I’m really a cranky, miserly, smug and self-superior Luddite, or, at best, completely joyless and humorless the way people think of Ralph Nader. After all, when I think back through the people I know who act the same way as me, I realize that I can’t stand being around most of them.

    I don’t think there’s room for the idea of self-denial in a consumer culture, in a culture in which one must buy commodities to participate in social meaning. It clicked when I read this voice from the 1870’s on the (apparently controversial) benefits of involving women in business: “… this gives them, I have noticed, contentment of mind, as well as enlarged views and pleasure in self-denial.” (p.412)

    The context doesn’t matter; all that matters is that in the 1870’s, when some kind of American culture still existed outside of market exchange, there also existed an idea that there is a legitimate secular satisfaction in self-denial. The book, “The Communistic Societies of the United States,” is a study of the many separatist religious communities that existed through the nineteenth century. The religious part is important because when you Google “self-denial,” the only non-dictionary hits are to religious sites or Bible verses. Now, just as it has no place in modern society, self-denial is also no longer seen as a prominent theme of Christendom in America, but it seems to have kept some kind of legitimate home there anyhow. I’m not sure why; maybe it’s legitimately important, or maybe self-denial is a handy justification for arbitrary ethical proscriptions. Either way, I don’t know what to make of it.

    Going secular, I’m just as confused. I’d like to know if there is any evidence that self-denial is a good thing, in whatever way, or if I’m nothing more than a curmudgeon. I have to look into it and think more.

    Another good quote from the book:

    “Bear ye one another’s burdens” might well be written above the gates of every [intentional community]. p. 411

    This entry was posted on Tuesday, August 2nd, 2016 and is filed under Uncategorized.


    Four-leaf clovers are lucky

    Because they seem to grow near each other, the best way to find a four-leaf clover is to have found one before. Besides an abundance of luck, discovering this also got me thinking. What if the first person to find a four-leaf clover, before it meant anything to anyone, showed it around, sparked some wonder, and got her friends looking here and there and turning up empty. How auspicious it would seem to those chumps if, after all their fruitless rooting, she showed up with a second, and then a third, like it was nothing. If you didn’t know they grow together, you might get the idea that some people have all the luck. That’s my just-so story for the myth. Trifolium’s misnomers give real actual luck, but only the narrow kind you need to find more.

    This entry was posted on Wednesday, July 13th, 2016 and is filed under Uncategorized.


    My ideal gig

    My uncle asked me what I’ll be looking for in a department when I hit the job market. I smiled and told him “prestige and money.” It got awkward because he didn’t realize I was joking, and it got more awkward as I squirmed to replace that answer in his head with my serious answer, since I had no idea what my serious answer was. Now I’ve thought about it. Here’s what I’m looking for in a department.

    1. I want to be part of an intellectual community in which I can be vulnerable, at least professionally if not personally. That means feeling safe sharing ideas, good ones and bad ones alike (since I can rarely tell the two apart without talking things through over beer). There’s nothing more awful and tragic than a department in which people mistrust each other and feel proprietary about their ideas — why even be in science? Conversely, there’s nothing more amazing than being part of a group with strong rapport, complementary skills, and a unified vision. (An ordinal listing misses how much higher this first wish is than all the others.)
    2. My colleagues are all smarter than me, or beyond me in whichever of a number of likely ways: more creative, more active, harder working, more connected, more engaged, effortlessly productive, exquisitely balanced and critical and fiery and calm. There’s something to be said for learning from osmosis.
    3. I have inspiring students — undergraduate and graduate — and maybe even students that are smarter than me, or more creative, more active, &c.
    4. My colleagues and I share some kind of unified vision. I’ve seen that in action before and it’s amazing.
    5. Prestige. I can’t pretend I’m too far above prestige. A recognized school attracts better students, which makes teaching more fun. It has more resources lying around, which makes it easy to make things happen quickly. It casts a glow of success that makes it easier to raise money and build partnerships. Prestigious schools are also often more likely to be able to follow through on commitments to underprivileged students. And last, since age is the major cause of prestige, fancy schools tend to be on more storied and beautiful campuses.
    6. My colleagues cross disciplines.
    7. My department has institutional support for interdisciplinary research (no list of five journals to publish in, conferences and journals on equal footing, tenure letters of support accepted from people outside the same department).
    8. I’m in a department beloved by, or at least on the radar of, the dean. I don’t know a lot about this, but I get the feeling that life is a little easier when a department has a dean’s support.
    9. Beautiful campus.
    10. In the US. Alternatively, the UK or the Netherlands. In a good city or back in CA, or maybe in one of these economically depressed post-industrial-wasteland cities. Can’t explain that last one. Well, I can: it means to me that it’ll have a more active arts community, be more diverse, and have a neighborly sense of community.

    This entry was posted on Monday, May 23rd, 2016 and is filed under Uncategorized.


    Secretly deep or secretly trivial?

    I know that the word “football” means something different to Americans than it does to Europeans. It might be that most Americans know that. But the rest of the world thinks of Americans as not knowing it, and it led to something funny when I was in Switzerland. Living right in the middle of Europe, in any conversation about football, both my interlocutor and I had to call it soccer, even though neither of us wanted to call it that. I knew perfectly well that football meant round checker ball, but if I called that ball game football, others always assumed that I was being American and referring to oblong brown ball. They expected me to call round checker ball soccer, and that made it the most convenient word, which meant that I always had to go with it. It was just easier that way.

    Since I study the role of what-you-think-I-think-you-think in people’s social behavior, I keep thinking of that as deep and fascinating, but every time I try to pin it down analytically as something novel, it just goes limp and becomes this really mundane, obvious, easy-to-explain inefficiency.

    This entry was posted on Tuesday, May 17th, 2016 and is filed under Uncategorized.


    Who is science_of_craft and why is he on my Minecraft server?

    I am studying Minecraft servers and the way they are run. But there are a lot of servers out there, so, to get data efficiently, I have a script logging my user, science_of_craft, into thousands of servers. science_of_craft collects your version, plugins, number of players online, and also more detailed things like the signs that are posted near spawn. “Science of craft” is a translation of “technology.”
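
    I’m not posting the script itself, but to give a sense of how light a single probe is, here is a rough sketch of one status ping using the third-party mcstatus Python library. The library, the field names, and the address are my illustration, not necessarily what science_of_craft actually runs, and a ping like this can’t read signs near spawn; that part takes actually logging in.

        from mcstatus import JavaServer

        def probe(address):
            """One status ping: no login, no gameplay, just the server's public metadata."""
            server = JavaServer.lookup(address)    # accepts "host" or "host:port"
            status = server.status()               # the standard server-list ping
            return {
                "address": address,
                "version": status.version.name,
                "players_online": status.players.online,
                "players_max": status.players.max,
            }

        print(probe("mc.example.org"))             # hypothetical address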

    science_of_craft should just stand there before logging off and moving on to the next server. But if he is causing problems for you, you can either ban him or contact me (moctodliamg at the same thing backwards) and I’ll get him off your back.

    If you are a server administrator who got a visit from s_o_c, thank you for tolerating this project, and thank you for doing what you do. I think it’s valuable; that’s why I’m studying it.

    Followup

    I’ve heard from many of you. I’ve been gratified to see that no one has seemed annoyed, or anything but interested. Thank you for your encouragement and patience. I’m not publishing any comments, but I am reading and responding to them.

    The most common question that is coming up is “how did you find my server?” I’ve been getting lists from a few public sources: Reddit, a couple of big Minecraft server list sites, and shodan.io. If you saw me on a server of yours that isn’t advertised or visited by anyone you don’t know, the answer to your question is probably shodan.io. If you don’t like that, let me know it’s a problem and whether there’s anything I can do.

    This entry was posted on Friday, May 13th, 2016 and is filed under Uncategorized.


    My work in this Sunday’s New York Times Magazine

    I am now working with a large corpus of Minecraft servers to understand online governance. That work got a mention in a long feature on Minecraft by Clive Thompson, titled “The Minecraft Generation.” It was very well done, and Clive was very attentive as a journalist to my nervous scientist’s quibbling about phrasing things precisely with respect to what must seem like completely arbitrary academic distinctions. It feels great and intimidating.

    This entry was posted on Saturday, April 16th, 2016 and is filed under Uncategorized.


    Seth’s Backwards Guide to Doing Science

    I got some exciting press for a current project, but I’m a little too embarrassed to enjoy it because it’s on a project I’m barely halfway through. That’s part of a larger pattern I’ve found in myself in which I talk more about stuff that isn’t done or isn’t even started, and I don’t have as much out as I’d like.

    I feel like I’m getting ahead of myself, but maybe I’m wrong and you should be even more like me than me. If that’s what you really want, here is my backwards guide to doing science:

    1. Get good press coverage, then
    2. publish your research,
    3. figure out what your message is going to be,
    4. interpret your data,
    5. analyze your data,
    6. collect your data, and finally
    7. plan out your study.

    That last step is very important. You should always carefully plan out your studies. And if you think this whole thing is totally backwards, well that’s just, like, your opinion, man.

    This entry was posted on Thursday, April 14th, 2016 and is filed under Uncategorized.


    Research confidence and being dangerous with a gun.

    There are two very different ways to be dangerous with a gun: to know what you’re doing, and to only think you know what you’re doing. Tweak “dangerous” a bit, and research is the same way. I draw from many disciplines, and in the course of every new project I end up having to become conversant in some new unfamiliar field. I dig in, root around, and build up my sense of the lay of the land, until I can say with confidence that I know what I’m doing. But I don’t try to kid myself that I know which kind of dangerous I am. I don’t think it’s possible to know, and even if it is, I think it’s better to resist the temptation to resolve the question one way or the other. Better to just enjoy the feelings of indeterminacy and delicacy. That may seem like a very insecure and unsatisfying way to experience knowhow, but actually it takes a tremendous amount of self-confidence to admit to ignorance and crises of confidence in research. Conversely, an eagerness to be confident communicates to me a grasping impatience for answers, a jangling discomfort with uncertainty, or a narrow desire to be perceived as an expert. The last is especially awful. My society understands confidence as a quality of expertise. It’s a weakness that we mistake for a strength, and everyone loses.

    Crises of confidence are a familiar feeling in interdisciplinary research. Pretty much every project I start involves some topic that is completely new to me, and I always have to wonder if I’m the outsider who is seeing things freshly, or the outsider who is just stomping loudly around other people’s back yards. Interdisciplinary researchers are more susceptible to facing these questions, but the answers are for everyone. I think the tenuousness of knowhow is inherent to all empirical research, the only difference being that when you work across methods and disciplines, it’s harder to deceive yourself that you have a better command of the subject than you do. That’s two more benefits of interdisciplinary practice: it keeps humility in place in daily scientific practice, and it makes being dangerous less dangerous to you.

    This entry was posted on Monday, February 15th, 2016 and is filed under Uncategorized.


    Interdisciplinary researchers need to care about clear, honest, interesting writing

    In interdisciplinary academic writing, you don’t always know who you’re writing for, and that makes it completely different from traditional academic writing. The people who respond most excitedly to my work are rarely the people I predicted, and they rarely find it through the established disciplinary channels of academia. Since you don’t know ahead of time who you’re writing for, you have to write more clearly and accessibly. I’ve been read by psychologists, biologists, physicists, economists, and many others. The only way to communicate clearly to all of these audiences has been to keep in mind the last time they all had the same background. That’s why, when I write, I imagine a college-bound high school graduate who likes science. The lowest common denominator of academic comprehension is “high school student.” And that’s fine. Those who doubt the existence of writing that is both clear and correct probably aren’t trying hard enough.

    The benefits of being able to write for a wide academic audience are many. First, I think researchers of all types have some responsibility to serve as public intellectuals, particularly when they work in areas, like the social sciences, that are inherently vulnerable to misconstruction, misappropriation, and abuse. Writing clearly helps me meet that responsibility. Second, since I rarely know the best audience for my projects, accessible writing makes it easier to attract popular science reporting to get the word around. And, most valuable of all, spending time on writing makes me think better. Clear honest writing is the surest symptom of clear honest thinking.

    This entry was posted on Wednesday, January 13th, 2016 and is filed under Uncategorized.


    Machine learning’s boosting as a model of scientific community

    Boosting is a classic, very simple, clever algorithm for training a crappy classifier into a group of less crappy classifiers that are collectively one impressively good classifier. Classifiers are important for automatically making decisions about how to categorize things: spam or not spam, cat or dog, that kind of thing.

    Here is how boosting works (a rough code sketch follows the list):

    1. Take a classifier. It doesn’t have to be any good. In fact, its performance can be barely above chance.
    2. Collect all the mistakes and modify the classifier into a new one that is more likely to get those particular ones right next time.
    3. Repeat, say a hundred times, keeping each iteration, so that you end up with a hundred classifiers.
    4. Now, on a new task, for every instance you want to classify, ask all of your classifiers which category that instance belongs in, giving more weight to the ones that make fewer mistakes. Collectively, they’ll be very accurate.
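
    To make those steps concrete, here is a minimal sketch in Python of one standard way to do it: AdaBoost-style reweighting with “decision stumps” (single-threshold rules) on one-dimensional toy data. The toy data, the function names, and the particular weight-update formula are my choices for illustration, not the only way to boost.

        import numpy as np

        def fit_stump(x, y, w):
            """Find the threshold and sign with the lowest *weighted* error."""
            best = (0.0, 1, np.inf)                     # (threshold, sign, weighted error)
            for t in np.unique(x):
                for sign in (1, -1):
                    pred = np.where(x >= t, sign, -sign)
                    err = np.sum(w * (pred != y))
                    if err < best[2]:
                        best = (t, sign, err)
            return best

        def boost(x, y, rounds=100):
            n = len(x)
            w = np.full(n, 1.0 / n)                     # every example starts with equal weight
            ensemble = []                               # keep every iteration's classifier
            for _ in range(rounds):
                t, sign, err = fit_stump(x, y, w)
                err = max(err, 1e-12)
                alpha = 0.5 * np.log((1 - err) / err)   # more accurate stumps get more say
                pred = np.where(x >= t, sign, -sign)
                w *= np.exp(-alpha * y * pred)          # upweight the mistakes for next time
                w /= w.sum()
                ensemble.append((t, sign, alpha))
            return ensemble

        def predict(ensemble, x):
            """Weighted majority vote over all the kept classifiers."""
            votes = sum(a * np.where(x >= t, s, -s) for t, s, a in ensemble)
            return np.sign(votes)

        # Toy problem: points above 0.5 are class +1, the rest are -1, with 10% label noise.
        rng = np.random.default_rng(0)
        x = rng.random(200)
        y = np.where(x > 0.5, 1, -1)
        y[rng.random(200) < 0.1] *= -1
        model = boost(x, y)
        print("training accuracy:", np.mean(predict(model, x) == y))

    The four numbered steps map onto fit_stump (the barely-better-than-chance classifier), the weight update (lean harder on the mistakes next time), the loop that keeps every iteration, and the weighted vote in predict.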

    The connection to scientific community?

    With a few liberties, science is like boosting. Let’s say there are a hundred scientists in a community, and each gets to take a stab at the twenty problems of their discipline. The first one tries, does great on some, not so great on others, and gets a certain level of prestige based on how well he did. The second one comes along, giving a bit of extra attention to the ones that the last guy flubbed, and when it’s all over earns a certain level of prestige herself. The third follows the second, and so on. Then I come along and write a textbook on the twenty problems. To do it, I have to read all 100 papers about each problem, and make a decision based on each paper and the prestige of each author. When I’m done, I’ve condensed the contributions of this whole scientific community into its collective answers to the twenty questions.

    This is a simple, powerful model for explaining how a community of so-so scientists can collectively reach impressive levels of know-how. Its shortcomings are clear, but, hey, that’s part of what makes a good model.

    If one fully accepts boosting as a model of scientific endeavor, then a few implications fall right out:

    • Science should be effective enough to help even really stupid humans develop very accurate theories.
    • It is most likely that no scholar holds a complete account of a field’s knowledge, and that many have to be consulted for a complete view.
    • Research that synthesizes the findings of others is of a different kind than research that addresses this or that problem.

    This entry was posted on Friday, November 27th, 2015 and is filed under Uncategorized.


    The DSM literally makes everyone crazy

    Having a book like the “Diagnostic and Statistical Manual of Mental Disorders,” a large catalog of ways that people can be crazy, inherently creates more crazy people. I’m not talking about this in a sociological or historical sense, but in a geometrical one.

    First some intuitive geometry. Imagine a cloud of points floating still in front of your face, maybe a hundred or so, and try to visualize all the points that are on the outside of the cloud, as if you had to shrink-wrap the cloud and the points making up the border started to poke out and become noticeable. Maybe a quarter of your points are making up this border of your cloud — remember that. Now take that away and instead shine a light at your cloud of points to cast its shadow on a wall. You’re now looking at a flat shadow of the same point cloud. If you do the same thing on the shadow, drawing a line connecting all the points that make up the border around it, it turns out that the points making up the border of the flat cloud are a smaller percentage of all the points, less than a quarter. That’s because a lot of points that were on the top and bottom in three dimensions look like they’re in the middle when you flatten down to two dimensions: only the dots that described a particular diameter of the cloud are still part of the border of this flattened one. And, going in the opposite direction, up from shadow to cloud to tens of dimensions, what ends up happening is that the number of points in the “middle” crashes: with enough dimensions, they’re all outliers. A single point’s chances of not being an outlier on any dimension are small. This is a property of point clouds in high dimensions: they are all edge and no middle.
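
    If you’d rather check that shrink-wrap intuition than just visualize it, here is a quick simulation: it draws a hundred points from a Gaussian (my arbitrary choice of cloud) and counts how many of them sit on the convex hull, the border, as the number of dimensions grows.

        import numpy as np
        from scipy.spatial import ConvexHull

        rng = np.random.default_rng(0)
        n_points = 100
        for dims in range(2, 7):
            cloud = rng.normal(size=(n_points, dims))    # a random cloud of 100 points
            hull = ConvexHull(cloud)                     # "shrink-wrap" the cloud
            frac = len(hull.vertices) / n_points         # fraction of points on the border
            print(f"{dims} dimensions: {frac:.0%} of the points are on the border")

    In two dimensions only a small slice of the points make up the border; a few dimensions later it is most of them, and the middle is emptying out.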

    Back to being crazy. Let’s define being crazy as being farther along on a spectrum than any other person in your society. Real crazy is more nuanced, but let’s run with this artificial definition for a second. And let’s say that we live in a really simple society with only one spectrum along which people define themselves. Maybe it’s “riskiness,” so there are no other collective conceptions of identity, no black or white or introverted or sexy or tall or nice or fun, you’re just something between really risky and not. Most everyone is a little risky, but there’s one person who is really really risky, and another person who is less risky than anyone else. Those are the two crazy people in this society. With one dimension of craziness, there can only be two truly crazy people, and everyone else is in the middle. Now add another dimension, e.g. “introvertedness.” Being a lot of it, or very little of it, or a bit introverted and also risky or non-risky, all of those things can now qualify a person as crazy. The number of possible crazy people is blowing up — not because the people changed, but only as a geometrical consequence of having a society with more dimensions along which a person can be crazy. The number of people on the edge of society’s normal will grow exponentially with the number of dimensions, and before you know it, with maybe just ten dimensions, almost no one is “normal” because almost everyone is an outlier in one way or another.

    The DSM-V, at 991 pages, offers so many ways in which you and I could be screwy that it virtually guarantees that all of us will be. And, thanks to the geometry of high-dimensional spaces, the thicker that book gets, the crazier we all become.

    This entry was posted on Friday, November 20th, 2015 and is filed under Uncategorized.


    Language and science as abstraction layers

    The nature of reality doesn’t come up so often in general conversation. It only just occurred to me that that’s amazing, since pretty much everyone I know who has thought about it thinks something different. I know Platonists, relativists, nihilists, positivists, constructivists, and objectivists. Given the very deep differences between their beliefs about the nature of their own existences, it’s really a miracle that they can even have conversations. Regardless of the fact of the matter, what you think about these things affects what language means, what words mean, and what it means to talk to other people.

    And not only can you complete sentences with these people, you can do science with them, and science can build on a slow, steady accretion of facts and insights, even if each nugget was contributed by someone with a totally different, and utterly irreconcilable conception of the nature and limits of human knowledge. How?

    I think of science as an abstraction layer. That’s sad, probably for a lot of reasons, but most immediately because it means that the only easy metaphor I was able to find is the computer programming language Java. Java was important to the software industry because it made it possible to write one program that could run on multiple different operating systems with no extra work. Java took the complexities and peculiarities of Unix and Windows and Mac and Linux and Solaris and built a layer on top of each that could make them all look the same to a Java program. I think of the tools of thought provided by science as an abstraction layer on top of different epistemologies that makes it possible for people with different views to get ideas back and forth to each other, despite all their differences.

    Here is an excellent illustration.
    [figure: abstraction]

    This entry was posted on Thursday, November 12th, 2015 and is filed under Uncategorized.


    Practical eliminativist materialism

    Eliminativist materialism is a perspective in the philosophy of mind that, in normal language, says beliefs, desires, consciousness, free will, and other pillars of subjective experience don’t actually, um, exist. It’s right there in the name: the materialisms are the philosophies of mind that are over The Soul and “eliminativist” is just what it sounds like. I’m actually sympathetic to the view, but reading the Wikipedia article makes me realize that I’ve got to refine my position a bit. Here’s what I think I believe right now:

    • To the extent that experiencing beliefs, desires, and consciousness makes it possible to account for them at all, there’s going to exist a way to account for them in terms of neural and biological processes.
    • I believe that we’ll probably never really understand that account. Even if we manage to create artificial entities that satisfy us that they are conscious, we won’t really know how we did it. This is already happening.
    • So, as far as humans are concerned, eliminativist materialism will turn out to be practically true, even if it somehow turns out not to be more true than the other materialisms.

    Given all that, I think of eliminativist materialism as possibly right and probably less wrong than any other prominent philosophy of mind. Call it “practical eliminativist materialism.” If you think I’m full of crap, that’s totally OK, but unlike you, my stoner musings about the nature of consciousness have been legitimized by society with a doctorate in cognitive science. Those aren’t really good for anything else, so I’m gonna go ahead and keep musing about the nature of consciousness.

    This entry was posted on Sunday, November 1st, 2015 and is filed under Uncategorized.


    Two great quotes for how greedy we are for the feeling that we understand

    When the truth of a thing is shrouded, and real understanding is impossible, that rarely stops the feeling of understanding from rushing in anyway and acting like it owns the place. Two great quotes:

    In the study of ideas, it is necessary to remember that insistence on hard-headed clarity issues from sentimental feeling, as it were a mist, cloaking the perplexities of fact. Insistence on clarity at all costs is based on sheer superstition as to the mode in which human intelligence functions. Our reasonings grasp at straws for premises and float on gossamers for deductions.
    — A. N. Whitehead

    Or, more tersely

    There’s no sense in being precise when you don’t even know what you’re talking about.
    — John von Neumann

    Also, while I’m writing, some quotes by McLuhan from his graphic book “The Medium is the Massage” (sic). McLuhan can be eye-rolly, but not as bad as I’d expected. But maybe I’d been thinking of Luhmann, it’s hard to keep these media theorists straight. Here is a hopeful one:

    There is absolutely no inevitability as long as there is a willingness to contemplate what is happening

    And here is one that clearly expresses a deep argument for the value of telecom. It is clear enough that it could be tested in experiments, which is worth doing, because you wouldn’t want to just assume he’s right.

    Media, by altering the environment, evoke in us unique ratios of sense perceptions. The extension of any one sense alters the way we think and act—the way we perceive the world. When these ratios change, men change.

    This entry was posted on Wednesday, October 21st, 2015 and is filed under Uncategorized.


    “Are you feeling the Bern now?”

    Some psychologist colleagues are partying for the upcoming debate, but instead of taking a shot after each keyword, they’re triggering the thermode on a laboratory apparatus called “the pain machine.” It delivers a pulse of up to 55°C within a second. ifls.

    This entry was posted on Monday, October 12th, 2015 and is filed under Uncategorized.


    Egg yolks as design feature

    I have more trouble than I should remembering how many cups of flour I’ve put in the batter so far, but I never have trouble remembering the number of eggs, because each egg comes with a yellow token to help in keeping count. I’d say eggs are pretty well-designed, even if I don’t completely understand all of the design decisions behind them. For example, why aerodynamic?

    This entry was posted on Wednesday, January 14th, 2015 and is filed under Uncategorized.


    The best skeptics are gullible

    Our culture groups science with concepts like skepticism, logic, and reductionism, together as a cluster in opposition to creativity, holistic reasoning, and the “right brain.” This network of alliances feeds into another opposition our culture accepts, that between art and science. I’ve always looked down on the whole thing, but sometimes I feel lonely in that.

    The opposite of skepticism is credulity, a readiness to believe things. For my part, I try to communicate a vision for science in which skepticism and credulity are equal and complementary tools in the production of scientific insight. An imbalance of either is dangerous, one for increasing the number of wrong ideas that survive (the “false positive” rate) and the other for increasing the number of good ideas that die (the “miss” rate). Lots of both is good if you can manage it, but people allow themselves to identify with one or the other. As far as I’m concerned, the cost of confining yourself like that just isn’t worth the security of feeling like you know who you are.

    Skepticism and credulity are equally important to my intellectual hygiene. It’s very valuable, on hearing an idea, to be able to put up a fight and pick away every assumption it rests on. It’s equally valuable, on hearing the same idea, either before or after I’ve given it hell, to do everything I can to make it hold — and the more upside-down I can turn the world, the better. Sometimes that means readjusting my prior beliefs about the way the world works. More often it means assuming a little good faith and having a little patience with the person at the front of the room. If some superficial word choice makes you bristle, switch it out with a related word, one that you have permitted to exist. If you have too little of either, skepticism or credulity, you’re doing injustice to the world, to your community, and, most importantly, to yourself.

    Don’t take my word for it. Here’s a nice bit from Daniel Kahneman, on working with his longtime colleague Amos Tversky.

    … perhaps most important, we checked our critical weapons at the door. Both Amos and I were critical and argumentative, he even more than I, but during the years of our collaboration neither of us ever rejected out of hand anything the other said. (from page 6 of his Thinking, fast and slow, which is like having a user manual for your brain)

    I’m not saying that there isn’t enough credulity in the scientific community. There’s a lot, it’s dangerous, it should be treated with respect. In a good skeptic, credulity is a quality, not a lapse. Making room for it in the scientific attitude is the first step toward recognizing that creativity is, and has always been, as basic as analytic rigor to good science.

    Arvai, J. (2013). Thinking, Fast and Slow, by Daniel Kahneman (Farrar, Straus & Giroux) [book review]. Journal of Risk Research, 16(10), 1322–1324. doi:10.1080/13669877.2013.766389

    This entry was posted on Thursday, December 18th, 2014 and is filed under Uncategorized.


    “In the days of the frost seek a minor sun”

    
    To unsympathetic eyes, no science is more arrogant than astronomy. Astronomers think that we can know the universe and replace the dreams and the meaning in the skies with a cold place that is constantly dying.
    But I think that there is no more humble science than astronomy. No science has had so much romance imposed on it by the things that we want to be true, no other science has found a starker reality, and no other science has submitted so thoroughly. Astronomers have been so pummelled by what they’ve seen that they will believe absolutely anything that makes the equations balance out. As the wild story currently goes, the universe is growing at an accelerating rate because invisible energy woven into the universe is pushing the stars away from each other. It’s hard to swallow, and we don’t appreciate how much astronomers struggled to face that story. They’ve accepted that the universe has no regard for our sense of sensibility, and they are finally along for the ride. I wish it were me; I want to see how much I’m missing by thinking I understand.