Intelligence without Reason

From enfascination


Revision as of 20:34, 14 September 2008

Fantastic paper, a critical history of AI with proposed solutions (i.e. descriptions of Brooks's own work over the past thirty years). All his criticisms of AI echo ones that I felt but couldn't articulate when I was learning the field through the tradition carried forward by Stuart Russell.

The best papers are the ones that inspire daydreaming, but of a directed sort. I've come out of it with many ideas, many of my own answers to his questions, and many questions about those same questions. I'll go through my notes on this paper backwards.

At the end he lays out his Big Questions for the advance of behavior-based robotics. This paper is from 1991, and I don't know where he is now, but my impression is that we haven't made all that much progress on any of these. And I don't have any evidence from his relatively few publications in this millennium to believe that his story is different now than it was 17 years ago.

  • Individual Behavior
    • "Convergence: Demonstrate or prove that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds in which it is placed, and show that under those conditions, the robot is guaranteed to follow a particular wall, rather than diverge and get lost."
    • "Synthesis: Given a particular task, automatically derive a behavior specification for the creature so that it carries out that task in a way which has clearly demonstrable convergence."
    • Complexity
    • Learning
  • Multiple Behaviors in a robot
    • Coherence
    • Relevance
    • Adequacy
    • Representation
    • Learning
  • Multiple Robots
    • Emergence
    • Synthesis
    • Communication
    • Cooperation
    • Interference
    • Density Dependence
    • Individuality
    • Learning

Most of what I have to say concerns the first. His concern with 'solving' problems in the realm of behavior-based robotics mirrors Radhika Nagpal's emphasis on 'solving' problems in distributed robotics. The word solve is in the analytic sense. It seems the goal is to be able to draw the same kinds of conclusions, with the same confidence, that conventional approaches to robotics can. But the problem is that, insofar as engineering is the science of designing systems that are 'solvable', the successful design of distributed systems will not be engineering.

My inclination is that employing distributed systems to solve engineering problems will involve giving up some confidence and control, and replacing the metrics that are so amenable to analysis with statistical and gross metrics that can never be anything better than 'good enough'.

Unfortunately, I still lack some confidence in these claims. Brooks and Radhika are both right on about so many aspects of the problem, and have been so successful at evading so many conventions, that I have to ask: why haven't they concluded that analysis of distributed systems can't expect the same results as conventionally engineered systems? There must be a good reason, and I don't know it.

A rephrasing of Brooks's words that I would feel more comfortable with is this: "Convergence: Provide the probability that a specified behavior is such that the robot will indeed carry out the desired task successfully. For instance, we may want to give some set of initial conditions for a robot, and some limitations on possible worlds in which it is placed, and show, under those conditions, how likely the robot is to follow a particular wall, rather than diverge and get lost."

As a metric to keep engineers happy, this can't be enough without embracing the idea that engineering distributed systems is inseparable from engineering for multiple agents. Assuming no interactions between robots, the probability calculated above can be brought into acceptable ranges simply by adding more agents. If with one agent you have determined a 50% probability of accomplishing a goal, then with 3 agents you have a 1 − 1/8 = 87.5% chance of accomplishing it. If you want 98% likelihood, add more agents.
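The arithmetic above can be sketched directly: with independent agents each succeeding with probability p, the group succeeds if any one does, so the group probability is 1 − (1 − p)^n, and you can solve for the n that reaches a target likelihood. (The function names here are my own, just for illustration.)

```python
import math

def group_success(p: float, n: int) -> float:
    """Probability that at least one of n independent agents,
    each succeeding with probability p, accomplishes the goal."""
    return 1 - (1 - p) ** n

def agents_needed(p: float, target: float) -> int:
    """Smallest n of independent agents needed so that the
    group success probability reaches the target likelihood."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

# One agent at 50%; three agents give 1 - (1/2)^3 = 87.5%.
print(group_success(0.5, 3))   # 0.875
# For a 98% likelihood at p = 0.5, six agents suffice:
print(agents_needed(0.5, 0.98))  # 6, since 1 - (1/2)^6 ≈ 98.4%
```

The logarithm trick just inverts 1 − (1 − p)^n ≥ target; it shows how cheaply redundancy buys reliability so long as the independence assumption holds.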

That assumes independence of the agents, or no interaction. When agents interact it is possible for them to help each other accomplish a goal, and also for them to prevent each other from accomplishing their individual goals. If a procedure can be found for finding rules of interaction that cause agents to help each other, then your 98% likelihood above becomes a lower bound. A rough procedure for defining rules of interaction would

Questions

  • differential calculus/geometry: To the extent that problems I will encounter will be meaningfully represented as multidimensional searches, I will benefit from an understanding of tools that provide insight into how to think about multidimensional spaces, i.e. their rules of thumb. Should I take a course? Does one exist here?
  • Similarly, should I take a search course?

Good Quotes

  • "We only know that thought exists in biological systems through our own introspection."

To Read