Gell-Mann's 1995 What is Complexity?

Look into coarse graining.

"As measures of something like complexity for an entity in the real world, all such quantities are to some extent context-dependent or even subjective. They depend on the coarse graining (level of detail) of the description of the entity, on the previous knowledge and understanding of the world that is assumed, on the language employed, on the coding method used for conversion from that language into a string of bits, and on the particular ideal computer chosen as a standard. "

I haven't heard of this measure: " A measure that corresponds much better to what is usually meant by complexity in ordinary conversation, as well as in scientific discourse, refers not to the length of the most concise description of an entity (which is roughly what AIC is), but to the length of a concise description of a set of the entity's regularities. Thus something almost entirely random, with practically no regularities, would have effective complexity near zero. So would something completely regular, such as a bit string consisting entirely of zeroes. Effective complexity can be high only in a region intermediate between total order and complete disorder. "

In scientific discourse? Formally or implicitly?
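
To make the distinction concrete for myself, here is a minimal sketch (my own, not from the article) that uses zlib-compressed length as a crude stand-in for AIC-style description length. It only approximates the "shortest description" idea; effective complexity would require describing the regularities alone, which no off-the-shelf compressor does. All the names below are mine.

  import random
  import zlib

  def compressed_length(bits: str) -> int:
      """Length in bytes of the zlib-compressed string: a crude AIC-style proxy."""
      return len(zlib.compress(bits.encode("ascii")))

  n = 10_000
  all_zeros = "0" * n                                           # completely regular
  random_bits = "".join(random.choice("01") for _ in range(n))  # (pseudo)random
  # a regular backbone with occasional random deviations
  mixed = "".join(random.choice("01") if random.random() < 0.1 else "0"
                  for _ in range(n))

  for name, s in [("all zeros", all_zeros), ("random", random_bits), ("mixed", mixed)]:
      print(f"{name:10s} compressed length: {compressed_length(s)} bytes")

The all-zeros string compresses to almost nothing and the random one barely at all, yet both are supposed to have near-zero effective complexity, which is exactly what a shortest-description measure fails to capture.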

Also, forgive my ignorance, but it might be possible to make the two indistinguishable (the length of the shortest description of the message and the length of the shortest description of the message's regularities):

  • If, for every finite string of random digits, there exists an integer seed that can generate it pseudorandomly,
    • Is the number of possible random strings larger than the number of integers?
  • and if there exists an algorithm that can find the seed,
  • then the shortest description of a random string just got a lot shorter (see the sketch after this list).
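
A toy version of that argument (my construction; nothing like this appears in the article): a long, random-looking bit string that really was generated from a small integer seed has a very short description, the seed plus the generator, and a brute-force search can recover that seed. The counting question in the first bullet is the catch: there are 2^n strings of length n but far fewer short seeds, so most truly random strings have no such short description.

  import random

  def bits_from_seed(seed: int, n: int) -> str:
      """Deterministically expand an integer seed into an n-bit string."""
      rng = random.Random(seed)
      return "".join(rng.choice("01") for _ in range(n))

  n = 1000
  target = bits_from_seed(4242, n)  # looks random, but its description is tiny

  # "an algorithm that can find the seed": exhaustive search, feasible only
  # because the seed space here is deliberately small
  recovered = next(s for s in range(10_000) if bits_from_seed(s, n) == target)
  print(f"{n}-bit string regenerated from seed {recovered}")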

But that misses the point, because he later claims, quite reasonably, that it is impossible to find all regularities. The word 'description' circumvents that nicely. The description of a regularity is called your 'schema'. I've been conscious of a problem concerning the arbitrariness of selecting a schema. I satisfied myself by saying that it doesn't matter which schema you use when comparing messages as long as you use the same one for both, but I don't think that accommodates all the problems. Gell-Mann's resolution is to leave the choice of schema to evolution: the scientific community selects schemas via the scientific method.

"...and the behavior of computers that are built or programmed to evolve strategies" This vindicates one of my current favored definitions of 'complex systems as a toolkit'.

RE: John Holland: " What I call a schema he calls an internal model. Both of us are conforming to the old saying that a scientist would rather use someone else's toothbrush than another scientist's nomenclature. "

This gets into organisms as information processors in the information-theoretic sense. I haven't thought much about that. Maybe I should read Holland.

" Here we encounter time measures of "complexity," for instance logical depth, which for a bit string is related to the time required for a standard universal computer to compute the string, print it out, and then halt. That time is averaged over the various programs that will accomplish the task, with an averaging procedure that weights shorter programs more heavily. We can then consider the logical depth of any entity if a suitably coarse-grained description of it is encoded into a bit string.

A kind of inverse concept to logical depth is crypticity, which measures the time needed for a computer to reverse the process and go from a bit string to one of the shorter programs that will generate it. In the human scientific enterprise, we can identify crypticity roughly with the difficulty of constructing a good theory from a set of data, while logical depth is a crude measure of the difficulty of making predictions from the theory. " Haven't heard of logical depth either. I wonder how useful it is.
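
I can at least convince myself of the flavor of logical depth with a toy contrast (mine, not a real logical-depth computation, which would average run times over all short generating programs): two strings whose generating programs are about equally short, but one runs almost instantly and the other deliberately grinds. Crypticity would run in the other direction: given only the bits, how hard is it to reconstruct a short generating program.

  import time

  def shallow(n: int) -> str:
      """Short program, short run time: n zeros."""
      return "0" * n

  def deep(n: int) -> str:
      """Short program, long run time: n bits extracted from a slow iterated map."""
      x, bits = 0.5, []
      for _ in range(n):
          for _ in range(5_000):           # deliberately slow inner loop
              x = 3.99 * x * (1.0 - x)     # logistic map iteration
          bits.append("1" if x > 0.5 else "0")
      return "".join(bits)

  for f in (shallow, deep):
      t0 = time.perf_counter()
      s = f(1_000)
      print(f"{f.__name__}: {len(s)} bits in {time.perf_counter() - t0:.3f} s")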

"Thus the pattern has a good deal of logical depth and very little effective complexity. " This corresponds to the complicated/complex distinction brought up in class. Are the above formalized?

"Moreover, histories can be assigned probabilities only if they are sufficiently coarse-grained to display decoherence (the absence of interference terms between them)." This suggests a much more technical origin for the term course graining. I assume the application of the term to info theory is analogical. Any relationship to 'symmetry breaking'?

And then a few more definitions, ending awkwardly with the death of the universe.