Intelligence without Reason
From enfascination
Fantastic paper, a critical history of AI with proposed solutions (i.e. descriptions of Brooks's work over the past thirty years). All his criticisms of AI echo the ones that I felt but couldn't articulate when I was learning the field through the tradition carried forward by Stuart Russell.
The best papers are the ones that inspire daydreaming, but of a directed sort. I've come out of it with many ideas: many of my own answers to his questions, and many questions about those same questions. I'll go through my notes on this paper backwards.
At the end he lays out his Big Questions for the advance of behavior-based robotics. This paper is from 1991, and I don't know where he is now, but my impression is that we haven't made all that much progress on any of these. And I see no evidence in his relatively few publications this millennium that his story is different now than it was 17 years ago.
- Individual Behavior
    - "Convergence: Demonstrate or prove that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds in which it is placed, and show that under those conditions, the robot is guaranteed to follow a particular wall, rather than diverge and get lost."
    - "Synthesis: Given a particular task, automatically derive a behavior specification for the creature so that it carries out that task in a way which has clearly demonstrable convergence."
    - Complexity
    - Learning
- Multiple Behaviors in a robot
    - Coherence
    - Relevance
    - Adequacy
    - Representation
    - Learning
- Multiple Robots
    - Emergence
    - Synthesis
    - Communication
    - Cooperation
    - Interference
    - Density Dependence
    - Individuality
    - Learning
Most of what I have to say concerns the first. His concern with 'solving' problems in the realm of behavior-based robotics mirrors Radhika Nagpal's preoccupation/emphasis on 'solving' problems in distributed robotics. The word 'solve' is meant in the analytic sense: the goal seems to be to draw the same kinds of conclusions, with the same confidence, as conventional approaches to robotics can. But the problem with that is that, insofar as engineering is the science of designing systems that are 'solvable', the successful design of distributed systems will not be engineering.
My inclination is that employing distributed systems to solve engineering problems will involve giving up some confidence and control, and replacing the metrics that are so amenable to analysis with statistical and gross metrics that can never be anything better than 'good enough'.
Unfortunately, I still lack some confidence in these claims. Brooks and Radhika are both right about so many aspects of the problem, and have been so successful at evading so many conventions of AI, so why haven't they concluded that analyses of distributed systems cannot expect the same kinds of results as conventionally engineered systems? There must be a good reason, and I don't know it.
A rephrasing of Brooks's words that I would feel more comfortable with is this: "Convergence: Provide the probability that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds in which it is placed, and show how likely the robot is, under those conditions, to follow a particular wall, rather than diverge and get lost."
As a metric to keep engineers happy, this can't be enough without embracing the idea that engineering distributed systems means engineering multi-agent systems. Assuming no interactions between robots, the above probability ("that a specified behavior is such that the robot will indeed carry out the desired task successfully") can be brought into acceptable ranges simply by adding more agents. If with one agent you are determined to have a 50% probability of accomplishing a goal, then with 3 agents you have a 1 - (1/2)^3 = 87.5% chance of accomplishing it. If you want 98% likelihood, add more agents.
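That arithmetic, as a minimal sketch (the function is mine; the 50% single-agent figure is just the running example, and full independence between agents is assumed):

```python
def p_success(p_single: float, n_agents: int) -> float:
    """Probability that at least one of n independent agents succeeds,
    given that each one succeeds with probability p_single."""
    return 1 - (1 - p_single) ** n_agents

print(p_success(0.5, 1))  # 0.5
print(p_success(0.5, 3))  # 1 - (1/2)**3 = 0.875
print(p_success(0.5, 6))  # ~0.984, past the 98% mark
```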
That assumes independence of the agents, or no interaction. When agents interact, you get two more possibilities: they can help each other accomplish their tasks, or they can interfere with each other. This is where we start to get into 'emergence'. If a procedure can be found for finding rules of interaction that cause agents to help each other, then the 98% likelihood above becomes a lower bound. My best sketch right now for such a procedure looks like this:
- There are many states a robot can be in.
- There is a set of 'stuck' states and a set of 'making progress' states.
- If, after interaction, one or both robots are more likely to be in a 'making progress' state than a 'stuck' state, you have a 'helping' rule.
Of course, talk of states gets into defining them, sifting through them and counting them. I need more information theory. A toy version of the test is sketched below.
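Everything in this sketch is invented for illustration (the transition probabilities, the interaction rule, the trial counts): simulate pairs of robots with and without a candidate interaction rule, and call the rule 'helping' if it raises the fraction of robots that end in a 'making progress' state.

```python
import random

STUCK, PROGRESS = "stuck", "progress"

def step(state):
    """Baseline solo dynamics: a robot escapes a stuck state 20% of the
    time and falls into one 10% of the time (made-up numbers)."""
    if state == STUCK:
        return PROGRESS if random.random() < 0.2 else STUCK
    return STUCK if random.random() < 0.1 else PROGRESS

def interact(a, b):
    """A candidate rule: a progressing robot frees a stuck partner
    half the time (again, an invented rule, purely to be tested)."""
    if a == PROGRESS and b == STUCK and random.random() < 0.5:
        b = PROGRESS
    if b == PROGRESS and a == STUCK and random.random() < 0.5:
        a = PROGRESS
    return a, b

def fraction_progressing(use_rule, trials=10_000, steps=20):
    good = 0
    for _ in range(trials):
        a, b = PROGRESS, PROGRESS
        for _ in range(steps):
            a, b = step(a), step(b)
            if use_rule:
                a, b = interact(a, b)
        good += (a == PROGRESS) + (b == PROGRESS)
    return good / (2 * trials)

print("without rule:", fraction_progressing(False))
print("with rule:   ", fraction_progressing(True))
# If the second number beats the first, the rule counts as 'helping'.
```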
I don't know why it took me the whole paper to learn this, but the school Brooks identifies with is called 'behavior-based robotics'. That is a vocab word. So is 'subsumption'.
Maes and Brooks 90 and Maes 89 describe how they got a hexapod to tune itself to exhibit the tripod gait.
Subsumption systems can make predictions and plans, and can have goals, without central, symbolic, or even manipulable representations.
Subsumption seems to me like 'ad hoc as a methodology'. I get the sense that theory plays a minimal role in engineering a robot to accomplish a goal, and that there is much tweaking and twisting of the specifics of an agent until it accomplishes said goal. To the extent that that is true, subsumptive approaches to robotics will always be goal-oriented, grounded in a problem. This ad hoc tweaking seems like the single-agent equivalent of the natural multi-agent approach, where 90% of the agents fail and the survivors were only incidentally pre-tweaked to accomplish their goal.
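To pin the vocab down for myself, here is a caricature of the layering idea (my own simplification: the behaviors and sensor keys are invented, and real subsumption is wired from augmented finite state machines with suppression and inhibition, not a Python priority list):

```python
# Each layer maps sensor readings to an action, or None to defer
# to the layers below it.

def avoid(sensors):
    # Highest-priority competence: back away from anything too close.
    return "reverse" if sensors["obstacle_distance"] < 0.3 else None

def seek_light(sensors):
    # Head toward a light source when one is visible, subsuming wander.
    return "turn_toward_light" if sensors["light_visible"] else None

def wander(sensors):
    # Default competence: just keep moving.
    return "forward"

LAYERS = [avoid, seek_light, wander]  # highest priority first

def act(sensors):
    # The first layer with something to say wins; otherwise control
    # falls through to the layer below.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle_distance": 1.0, "light_visible": True}))  # turn_toward_light
print(act({"obstacle_distance": 0.1, "light_visible": True}))  # reverse
```

All the ad hoc tweaking I describe above would live in the thresholds and the layer ordering, and there is no central world model anywhere in the loop.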
Questions
- differential calculus/geometry: To the extent that the problems I encounter can be meaningfully represented as multidimensional searches, I will benefit from an understanding of tools that provide insight into how to think about multidimensional spaces, i.e. their rules of thumb. Should I take a course? Does one exist here?
- Similarly, should I take a search course?
- What metrics exist for measuring the complexity of an environment (in an information-theoretic manner)? One first guess is sketched below.
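My own first stab at an answer, not from the paper: treat the environment as a distribution over local configurations and take its Shannon entropy. The occupancy-grid representation and the patch size here are arbitrary choices.

```python
import math
from collections import Counter

def grid_entropy(grid, patch=2):
    """Shannon entropy (bits) of the distribution of patch x patch
    blocks in a binary occupancy grid: one crude, information-theoretic
    measure of 'environment complexity'."""
    rows, cols = len(grid), len(grid[0])
    counts = Counter()
    for r in range(rows - patch + 1):
        for c in range(cols - patch + 1):
            block = tuple(tuple(grid[r + i][c + j] for j in range(patch))
                          for i in range(patch))
            counts[block] += 1
    total = sum(counts.values())
    return -sum(n / total * math.log2(n / total) for n in counts.values())

empty = [[0] * 8 for _ in range(8)]
checkered = [[(r + c) % 2 for c in range(8)] for r in range(8)]
print(grid_entropy(empty))      # 0.0: a uniform world has no surprises
print(grid_entropy(checkered))  # ~1.0: two distinct local patterns
```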
Good Quotes
- "We only know that thought exists in biological systems through our own introspection. "
- "Evolution has decided that there is a trade-off between what we know through our genes and what we must find out for ourselves as we develop"
- The stability of an environment, or of aspects of an environment, over large time scales can be measured by which behaviors are genetic and which are learned.
- "There can ... be representation which are partial models of the world== ni fact I mentioned that "individual layers extract only those aspects of the world which they find relevant...Brooks 1991a"
- What is meant by layers and by aspects?
- "It is very easy for an observer of a system to attribute more complex internal structure than really exists. Herbert appeared to be doing things like path planning and map building, even though it was not"
- This gets back to measuring the complexity of behavior/environment. Thinking of a centralized system as one that constrains the number of possible states of an otherwise independent set of parts gives us an image of a state space confined to, perhaps, only those states which are 'useful'. Any other kind of relationship (like a local, non-hierarchical, or mutual set of connections) is also going to add constraints to the possible states the system can be in (mutual information). This quote describes a case where local rules have constrained the state space of the robot to the same set of states as some collection of central rules would have. A sketch of that mutual-information framing follows below.
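The framing is mine, not Brooks': the mutual information between two parts of a system measures how much the rules, local or central, have shrunk the joint state space relative to fully independent parts.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of observed (x, y) state
    pairs. Zero for independent parts; larger as the rules constrain
    the joint state space."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Unconstrained parts: all four joint states occur equally often.
free = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25
# A rule that forces the parts to agree: only 2 of 4 states survive.
coupled = [(0, 0), (1, 1)] * 50
print(mutual_information(free))     # 0.0 bits
print(mutual_information(coupled))  # 1.0 bit: the rule halved the state space
```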
To Read
- Intelligence as Adaptive Behaviour Beer 1990
- Intelligence without Representation Brooks 1991
- Dan's Thesis from Radhika's Class
- Mataric 90, 91 "Learning representations of the world was already discussed concerning the work of..."
- "Maes 89 introduced the notion of switching whole pieces of the network on and off"
- "Such (subsumption) systems can make plans--but they are not the same as trad AI plans, see Agre and Chapman 90 for an analysis of this issue]]