Intelligence without Reason

Review

Fantastic paper: a critical history of AI with proposed solutions (i.e. descriptions of Brooks's work over the past thirty years). All his criticisms of AI echo the ones that I felt but couldn't articulate when I was learning the field through the tradition carried forward by Stuart Russell.

The best papers are the ones that inspire daydreaming, but of a directed sort. I've come out of it with many ideas: many of my own answers to his questions, and many questions about those same questions. I'll go through my notes on this paper backwards.

Brooks's directions for robotics

At the end he lays out his Big Questions for the advance of behavior-based robotics. This paper is from 1991, and I don't know where he is now, but my impression is that we haven't made all that much progress on any of these. And I don't have any evidence from his relatively few publications in this millennium to believe that his story is different now than it was 17 years ago.

  • Individual Behavior
    • "Convergence:Demonstrate or prove that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds which it is placed, and show that under those conditions, the robot is guaranteed to follow a particular wall, rather than diverge and get lost."
    • "Synthesis: Given a particular task, automatically derive a behavior specification for the creature so that is carries out that task in a way which has clearly demonstrable convergence.
    • Complexity
    • Learning
  • Multiple Behaviors in a robot
    • Coherence
    • Relevance
    • Adequacy
    • Representation
    • Learning
  • Multiple Robots
    • Emergence
    • Synthesis
    • Communication
    • Cooperation
    • Interference
    • Density Dependence
    • Individuality
    • Learning

Most of what I have to say concerns the first. His concern with 'solving' problems in the realm of behavior-based robotics mirrors Radhika Nagpal's preoccupation with 'solving' problems in distributed robotics. The word 'solve' is meant in the analytic sense. It seems the goal is to be able to draw the same kinds of conclusions, with the same confidence, as conventional approaches to robotics can. But the problem with that is that, insofar as engineering is the science of designing systems that are 'solvable', the successful design of distributed systems will not be engineering.

My inclination is that employing distributed systems to solve engineering problems will involve giving up some confidence and control, and replacing the metrics that are so amenable to analysis with statistical and gross metrics that can never be anything better than 'good enough'.

Unfortunately, I still lack some confidence in these claims. Brooks and Radhika are both right on about so many aspects of the problem, and have been so successful at evading so many conventions of AI, so why haven't they concluded that analyses of distributed systems cannot be expected to yield the same kinds of results as analyses of conventionally engineered systems? There is a good reason, and I don't know it.

My directions for robotics

A rephrasing of Brooks's words that I would feel more comfortable with is this: "Convergence: Provide the probability that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds in which it is placed, and show, under those conditions, how likely the robot is to follow a particular wall rather than diverge and get lost."

As a metric to keep engineers happy, this can't be enough without embracing the idea that engineering distributed systems means engineering multi-agent systems. Assuming no interactions between robots, the above probability ("that a specified behavior is such that the robot will indeed carry out the desired task successfully") can be brought into acceptable ranges simply by adding more agents. If with one agent you have a 50% probability of accomplishing a goal, then with 3 agents you have a 1 - 1/8 = 87.5% chance of accomplishing your goal. If you want 98% likelihood, add more agents.
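A minimal numeric check of that arithmetic, sketched in Python under the paragraph's own assumptions (independent agents, each with the same standalone success probability; the function names are mine):

```python
import math

def success_probability(p_single: float, n_agents: int) -> float:
    """Probability that at least one of n independent agents succeeds,
    when each succeeds alone with probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_agents

def agents_needed(p_single: float, p_target: float) -> int:
    """Smallest number of independent agents whose combined success
    probability reaches p_target."""
    return math.ceil(math.log(1.0 - p_target) / math.log(1.0 - p_single))

print(success_probability(0.5, 3))  # 0.875, the 87.5% figure above
print(agents_needed(0.5, 0.98))     # 6 agents reach a 98% chance
```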

That assumes independence of the agents, or no interaction. When agents interact, you get two more possibilities: they can help each other accomplish their tasks, and they can interfere with each other. This is where we start to get into 'emergence'. If a procedure can be found for discovering rules of interaction that cause agents to help each other, then the 98% likelihood above becomes a lower bound. My best sketch right now for such a procedure looks like this:

  • There are many states a robot can be in.
  • There is a set of 'stuck' states and a set of 'making progress' states.
  • If, after interaction, one or both robots is more likely to be in a 'making progress' state than a 'stuck' state, you have a 'helping' rule (a minimal test of this is sketched below).
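A minimal sketch of how that test could run. Everything here is hypothetical: the states are abstract labels, `interact` stands in for a candidate rule of interaction, and I am reading 'helping' as raising the chance of a 'making progress' state relative to before the interaction:

```python
import random

STUCK = "stuck"
PROGRESS = "making progress"

def is_helping_rule(interact, states, trials=10_000):
    """Monte Carlo estimate: does applying `interact` to a random pair of
    robot states raise the count of 'making progress' states?
    `interact(a, b)` is a hypothetical rule returning the two new states."""
    before = after = 0
    for _ in range(trials):
        a, b = random.choice(states), random.choice(states)
        before += (a == PROGRESS) + (b == PROGRESS)
        a2, b2 = interact(a, b)
        after += (a2 == PROGRESS) + (b2 == PROGRESS)
    return after > before

# Example rule: a robot making progress drags a stuck partner along.
def towing_rule(a, b):
    return (PROGRESS, PROGRESS) if PROGRESS in (a, b) else (a, b)

print(is_helping_rule(towing_rule, [STUCK, PROGRESS]))  # True
```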

Of course, talk of states gets into defining them, sifting through them and counting them. I need more information theory.

Some more of Brooks's work

I don't know why it took me the whole paper to learn this, but the school Brooks identifies with is called behavior-based robotics. That is a vocab word. So is 'subsumption'.

Maes and Brooks 90 and Maes 89 describe how they got a hexapod to tune itself to exhibit the tripod gait.

Subsumption systems can make predictions and plans, and can have goals, without central, symbolic, or even manipulable representations.

Subsumption seems to me like 'ad hoc as a methodology'. I get the sense that theory plays a minimal role in engineering a robot to accomplish a goal, and that there is much tweaking and twisting of the specifics of an agent until it accomplishes said goal. To the extent that that is true, subsumptive approaches to robotics will always be goal-oriented, grounded in a problem. This ad hoc tweaking seems like the single-agent equivalent of the natural multi-agent approach where 90% of the agents fail, and the survivors were only incidentally pre-tweaked to accomplish their goal.

In describing the goals and observations of subsumption on p. 566, he understates: "It necessarily forces the programmer to use a different style of organization for the programs for intelligence."

I need a better sense of the 'procedure' that subsumption approaches suggest/imply. Is the above description of 'ad hoc engineering' accurate?

I'm getting hints of an idea that before you can make intelligence you need minds, before minds you need motion/bodies, and before either of those you need, uh, life? Well, to the extent that robotics recapitulates ontogeny, that may be true.

First Half of the paper

I'm running on fumes, but I am determined to finish my notes on all my 'pending' papers. Starting backwards from p. 582: "For one very simple animal, C. elegans, a nematode, we have a complete wiring diagram of its nervous system, including its development stages (Wood 88). In the hermaphrodite, there are 302 neurons and 56 support cells out of the animal's total of 959 cells. In the male there are 381 neurons and 92 support cells out of a total of 1031 cells." I need to read up on C. elegans.

The implicit brain-as-computer assumption can be seen explicitly in the fiction of professional researchers such as Dennett 81 and Moravec 88.

"N INTERPRETATION OF MULTIPLE ALMOST INDEPENDENT AGENCIES SUCH AS HYPOTHESIZED BY mINsky 86

See TOB's interesting cognitive deficits in McCarthy and Warrington 88.

Temporal illusions: Dennett and Kinsbourne 90 for an overview. See Agre 91 for how people actually plan trips (like Boston to CA); this type of work acts as an argument against the validity of introspection as a tool in cog sci or AI. Also: "See Churchland 86 for a discussion of folk psychology."

"Tinbergens (ethological) model has lergely been replaced ... by theories of motivational competition, disinhibition, and dominant and sub-dominant behaviors"

Apparently neural networks need extensive front and back ends in order to operate in real time, and therefore to be situated and embodied.

Dreyfus '81 provides a useful criticism of the 'we'll fix it later' attitude that unifies conventional vision and knowledge representation work.

"The key problem that I see with all this work (apart from the use of search) is that it relied on the assumption that a compete world model could be built internally and then manipulated"

Brooks attributes the failure of cybernetics in part to its failure to move past the implicit analog-electronic mental representations of systems. "The critical point is the way in which the mathematical proposal is tied to a technological implementation as a certification of the validity of the approach."

"In general, performance increases in computers were able to fgeed researchers with a steadily laeger search space, enabling them to feel that they were making proress as the yeasrs went by.

Ashby 52 recognized that an organism and its environment must be modeled together in order to understand the behavior produced by the organism.

"The tools of feedback analysis were used, Ashby 56 to concentrate on such issues as stbility of the sysem as the environment was perturbed, and in particular a system's homeostasis or ability to keep vertain parameters with prescribed ranges no matter what the uncontrolled variations within the environemtn."

"Recently there has been a trend to try to integrate traditional symbolic reason on top of a purely reactive system... horizon effect...bought a little time." My notes ask "The living dead?"

"Furthermore the representations an agent uses of objects in the world need not rely on a semactic correspondence with symbols that the agent possesses, but rather can be defined through interactions of the agent with the world"


Questions

  • Differential calculus/geometry: to the extent that the problems I encounter can be meaningfully represented as multidimensional searches, I will benefit from an understanding of tools that provide insight into how to think about multidimensional spaces, i.e. their rules of thumb. Should I take a course? Does one exist here?
  • Similarly, should I take a search course?
  • What metrics exist for measuring the complexity of an environment (in an info-theory manner)?
    • How is stigmergy accommodated in information-theoretic metrics for environments? Has anyone beyond Ashby looked at the relationship between the complexity of a system and that of its environment? (PS: in most of this note, complexity is entropy, k log(number of states); see the sketch after this list.)
  • Is anyone (besides Yaneer) advocating evolution-as-design-methodology? Who?
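For those last two questions, here is a sketch of the one metric this note names: Shannon entropy over observed states. The uniform case reduces to the k log(number of states) figure above (with k = 1 and log base 2); this is an illustration, not an established metric for environments:

```python
import math
from collections import Counter

def entropy(observations) -> float:
    """Shannon entropy (bits) of a sequence of observed states. Equals
    log2(number of states) when all states are equally likely -- the
    'k log number of states' figure from the note, with k = 1."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Uniform over 8 states: log2(8) = 3 bits.
print(entropy(list(range(8))))
# A constrained system revisits few states: lower entropy, ~0.47 bits.
print(entropy(["wall-follow"] * 9 + ["lost"]))
```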

Good Quotes, mostly about 'Complexity'

  • "We only know that thought exists in biological systems through our own introspection. "
  • "Evolution has decided that there is a trade-off between what we know through our genes and what we must find out for ourselves as we develop"
    • The stability of an environment, or of aspects of an environment, over large time scales can be measured by which behaviors are genetic and which are learned.
  • "There can ... be representation which are partial models of the world== ni fact I mentioned that "individual layers extract only those aspects of the world which they find relevant...Brooks 1991a"
    • What is meant by layers and by aspects?
  • "It is very easy for an observer of a system to attribute more complex internal structure than really exists. Herbert appeared to be doing things like path planning and map building, even though it was not"
    • This gets back to measuring the complexity of behavior/environment. Thinking of a centralized system as one that constrains the number of possible states of an otherwise independent set of parts gives us an image of a state space confined to, perhaps, only those states which are 'useful'. Any other kind of relationship (like a local, non-hierarchical, or mutual set of connections) is also going to add constraints to the possible states the system can be in (mutual information). This quote describes a case where local rules have restrained the state space of the robot to the same set of states as some collection of central rules.
    • "In programming Herbert, it was decided that it should maintain no state longer than 3 seconds, and that there would be no communication between behavior generating modules"
      • Wow!
      • The word 'communication' here is tricky. There is a sort of stigmergy, or some communication, since even without wires connecting the arm and hand, the arm puts the hand in situations where it is going to grab, and the state of one gives predictive power concerning the state of the other (see the sketch after this list).
    • "The same opportunism among behaviors let the arm adapt automatically to a wide variety of cluttered desktops"
      • I don't want to engage too much with the word 'opportunism', because I am afraid it will mislead me and confuse my models, but it was a neat word and gives some insight: the use of the word opportunism here suggests the extent to which the different components are decoupled, independent of each other. Opportunism in this context seems to mean that Herbert has more possible states than any of his more centralized brothers. In combination with the immediately preceding quote, there is a sense in which the arm and hand constrain each other and a sense in which they don't. I'm going to need more tools in order to think clearly about that.
  • John Connell. Connell's law: "There is roughly one watt of electrical power for each pound of overall weight of the robot."
  • "Earlier, Simon 69 had discussed a similar point in terms of an ant walking along the beach. He pointed out that the complexity of the behavior of the ant is more a reflection of the complexity of its environment than its own internal complexity. He speculated that the same may be true of humans, but within two pages of text had reduced studying human behavior to the domain of crypto-arithmetic problems"
    • Good quote, particularly because I think it's wrong, or misses something important. The ant, in a simple environment, will employ simpler behavior. This implies the typical matching of the complexity of a system to its environment, but it only reflects that a system's complexity (the ant's number of states) can't be counted from observed behavior. The ant is still complex on a flat floor.
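That 'predictive power' can be made precise as the mutual information between arm states and hand states. A sketch over a hypothetical log of state pairs: zero bits would mean no coupling of any kind, and anything positive is the wireless, stigmergic coupling described above:

```python
import math
from collections import Counter

def mutual_information(pairs) -> float:
    """Mutual information (bits) between two state sequences given as
    (arm_state, hand_state) pairs:
    I(A;H) = sum over (a,h) of p(a,h) * log2(p(a,h) / (p(a) * p(h)))."""
    n = len(pairs)
    joint = Counter(pairs)
    arm = Counter(a for a, _ in pairs)
    hand = Counter(h for _, h in pairs)
    return sum((c / n) * math.log2((c / n) / ((arm[a] / n) * (hand[h] / n)))
               for (a, h), c in joint.items())

# Hypothetical log: when the arm extends, the hand usually grabs.
log = ([("extended", "grab")] * 4 + [("retracted", "idle")] * 4
       + [("extended", "idle")] * 2)
print(mutual_information(log))  # ~0.42 bits: each state predicts the other
```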

To Read

  • Intelligence as Adaptive Behaviour, Beer 1990
  • Brooks 90b, a review of a few decades of robots
  • Brooks 86, the Allen paper
  • Connell 89, the Herbert paper
  • Intelligence without Representation, Brooks 1991
  • Dan's Thesis from Radhika's Class
  • "Maes 89 introduced the notion of switching whole pieces of the network on and off"
  • "Such (subsumption) systems can make plans--but they are not the same as trad AI plans"; see Agre and Chapman 90 for an analysis of this issue
  • "There has been a lot of work on emergence based on the theme of self-organization (e.g. Nicolis and Prigogine 77)". Also, Steels 90a looks good.
  • Ashby 52 and 56
  • Wiener 48, 61
  • Agre 91
  • Churchland 86
  • Minsky 86
  • more modern Minsky?
  • Dreyfus 81
  • Wood 88, on C. elegans
  • a book about C. elegans

Category: Sept 08 Readings