The intriguing weaknesses of deep learning and deep neural networks

Deep learning (and neural networks generally) has impressed me a lot for what it can do, but much more so for what it can’t. It seems to be vulnerable to several of the very same strange, deep design limits that seem to constrain the human mind-brain system.

  • The intractability of introspection. The fact that we can know things without knowing why we know them, or even that we know them. Having trained a deep network, it’s a whole other machine learning problem just to figure out how it does what it does (see the first sketch after this list).
  • Bad engineering. Both neural networks and the brain are poorly engineered in the sense that they perform task X in a way that no mechanical or electrical engineer would ever have designed a machine to perform X. These systems don’t respect modularity, and they are hard to analyze with pencil and paper. They are hard to diagnose, troubleshoot, and reverse-engineer. That’s probably an important part of why they work.
  • The difficulty of unlearning. The impossibility of “unseeing” the object in the image below, once you know what it is. That is a property that neural networks share with the brain. Well, maybe that isn’t a fact; maybe I’m just conjecturing. If so, call it a conjecture: I predict that Facebook’s DeepFace, after it has successfully adapted to your new haircut, has more trouble than it should in forgetting your old one (see the second sketch below).
  • Very fast performance after very slow training. Humans make decisions in milliseconds, decisions based on patterns learned from a lifetime of experience and tons of data. In fact, the separation between training and test phases that is standard in machine learning is something of an artifice in deep networks, whose recurrent varieties can be seen as lacking the dichotomy altogether (the timing sketch below puts rough numbers on this).
  • There are probably others, but I recognize them only slowly.
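To make the introspection point concrete, here is a minimal sketch of what I mean, using scikit-learn. It is my illustration, not anything from the post itself, and the dataset, layer sizes, and names are all arbitrary assumptions. After training a small network, the only way to ask what its hidden layer has learned is to fit a second model, a linear “probe”, on its activations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Train a small network on a synthetic task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)

# The trained weights don't explain themselves. To "introspect", we
# recompute the hidden-layer activations by hand (ReLU of the first
# layer) and fit a *second* learner -- a probe -- on top of them:
# machine learning applied to the machine learner.
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])
probe = LogisticRegression(max_iter=1000).fit(hidden, y)
print("probe accuracy on hidden features:", probe.score(hidden, y))
```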
[Image: Careful. Once you know what this is, there’s no going back.]
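For the unlearning bullet, a toy contrast, again my own sketch with arbitrary sizes: a lookup table can forget a single item in one operation, while a trained network exposes no delete operation at all. The only guaranteed way to remove one example’s influence is to retrain from scratch without it.

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# A conventionally engineered memory "unlearns" instantly:
memory = {tuple(row): label for row, label in zip(X, y)}
del memory[tuple(X[0])]

# A trained network has no such operation. Exact unlearning of X[0]
# means retraining on everything except X[0]:
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=1).fit(X, y)
net_without = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=1).fit(X[1:], y[1:])
```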

Unlearning, fast learning, introspection, and “good” design aren’t hard to engineer in themselves: we already have artificial intelligences with each of these properties, and we humans can easily do things that seem much harder. But neither humans nor deep networks are good at any of these things. In my eyes, the fact that deep learning reproduces these seemingly deep design limitations of the human mind gives it tremendous credibility as an approach to human-like AI.
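And to put rough numbers on the fast-performance bullet, a quick timing sketch (illustrative, not a benchmark; the model and data are arbitrary): training takes on the order of seconds even for this toy network, while a single trained decision takes on the order of a millisecond.

```python
import time
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=50, random_state=2)

# Slow phase: fit the network.
t0 = time.perf_counter()
net = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300,
                    random_state=2).fit(X, y)
train_s = time.perf_counter() - t0

# Fast phase: one "decision" on a single input.
t0 = time.perf_counter()
net.predict(X[:1])
predict_ms = (time.perf_counter() - t0) * 1000

print(f"training: {train_s:.1f} s; one prediction: {predict_ms:.2f} ms")
```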

The coolest thing about a Ph.D. in cognitive science is that it constitutes license, almost literally, to speculate about the nature of consciousness. I used to be a big skeptic of the ambitions of AI to create human-like intelligence. Now I could go either way. But I’m still convinced that getting it, if we get it, will not imply understanding it.


This entry was posted on Sunday, December 21st, 2014 and is filed under science.