If we are to capture human learning and cognition, we must envision the exotic architectures that make them possible.
- New models of brain hardware can shed light on new models of brain software.
- Design rule: Fit the approach to the problem you’re trying to solve.
- The trendiest tools are not always the best tools for the job.
- For example, modern computer systems are not the best metaphor for how brain hardware works. (6 key differences between computers and brains)
- And artificial neural networks with backprop are not the best model of how brains learn. (6 unrealistic aspects of backprop)
- In particular, backprop requires global, lockstep coordination, whereas brains are mostly asynchronous and local; and even recurrent neural networks pass information in only one direction at a time, rather than in many directions as in the brain (compare recursive functions with coroutines; see the recursion-vs-coroutine sketch after this list).
- And artificial neural networks with backprop are not the best engineering solution for every learning problem.
- “Bulldozer computing” works well on object recognition, but it fails on motor control (which has too large a state space) and on medical diagnosis (which must integrate many kinds of information).
- We should tailor new tools to such problems.
- All of the recent ML performance improvements have come not from new ideas about learning, but rather from advances in computing power and data size.
- Moore’s law will asymptote.
- We must imagine new exotic architectures and learning mechanisms, tailored to the problems we’re trying to solve.
- Surprise: There are other kinds of “neural networks” that are better brain models or better at solving some ML problems (two of them, Hopfield networks and self-organizing maps, are sketched in code after this list):
- Marr-Albus model of the cerebellum
- Hopfield networks
- Self-organizing maps
- Decision trees
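The recursion-versus-coroutine comparison above can be made concrete. The sketch below is purely illustrative (the function names are invented for this example): a recursive function sends information down the call stack and gets a single answer back, while a generator-based coroutine exchanges values with its caller repeatedly, in both directions, closer in spirit to the brain's ongoing, bidirectional message passing.

```python
# Recursive function: information flows down the call stack, and a single
# answer comes back up (one strictly nested round trip).
def depth_sum(tree):
    """Sum a nested list; control returns only when the whole subtree is finished."""
    if isinstance(tree, list):
        return sum(depth_sum(t) for t in tree)
    return tree

# Coroutine: the two sides exchange values repeatedly, in both directions,
# like two units that keep signaling each other instead of one calling the other.
def running_average():
    """Generator-based coroutine: receives numbers, yields the running mean."""
    total, count = 0.0, 0
    average = None
    while True:
        value = yield average      # receive a value from the caller...
        total += value
        count += 1
        average = total / count    # ...and send an updated answer back

if __name__ == "__main__":
    print(depth_sum([1, [2, 3], [4, [5]]]))    # 15; one call, one answer

    avg = running_average()
    next(avg)                                  # prime the coroutine
    for x in [1.0, 2.0, 3.0]:
        print(avg.send(x))                     # 1.0, then 1.5, then 2.0; an ongoing exchange
```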
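As one of the alternative architectures listed above, a Hopfield network can be sketched in a few lines. This is a minimal illustration, assuming the standard Hebbian storage rule and random asynchronous updates; the toy patterns and parameter values are invented for the example. Note that each weight depends only on the two units it connects, and units update one at a time, with no global lockstep backward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian storage rule: each weight depends only on the two units it connects."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:                 # each pattern is a vector of +1/-1 values
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)             # no self-connections
    return W / len(patterns)

def recall(W, state, steps=200):
    """Asynchronous recall: one randomly chosen unit updates at a time."""
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    # Store two toy patterns, then recover one from a corrupted cue.
    patterns = np.array([[1, 1, 1, -1, -1, -1],
                         [1, -1, 1, -1, 1, -1]])
    W = train_hopfield(patterns)
    noisy = np.array([1, 1, -1, -1, -1, -1])   # first pattern with one bit flipped
    print(recall(W, noisy))                    # settles back to the first pattern
```

Because both the storage rule and the recall dynamics are local, the stored patterns act as attractors: a corrupted cue tends to settle into the nearest stored memory.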
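A self-organizing map likewise learns without any backward pass. The sketch below assumes a 1-D map, a Gaussian neighborhood, and a linear decay of the learning rate and neighborhood width; the data and parameter values are toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=10, epochs=20, lr=0.5, sigma=2.0):
    """1-D self-organizing map: competitive, purely local learning, no backward pass."""
    dim = data.shape[1]
    weights = rng.uniform(data.min(), data.max(), size=(n_units, dim))
    positions = np.arange(n_units)                 # unit positions along the 1-D map
    for epoch in range(epochs):
        decay = 1.0 - epoch / epochs               # shrink learning rate and neighborhood over time
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
            # Gaussian neighborhood: units near the winner (on the map) move most.
            h = np.exp(-((positions - bmu) ** 2) / (2 * (sigma * decay + 1e-3) ** 2))
            weights += lr * decay * h[:, None] * (x - weights)
    return weights

if __name__ == "__main__":
    # Toy data: points along a noisy line; the map's units should spread out along it.
    t = np.linspace(0, 1, 200)
    data = np.stack([t, t + 0.05 * rng.standard_normal(200)], axis=1)
    print(train_som(data).round(2))
```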
(C) 2019 Dylan Holmes. This work is not covered by MIT OpenCourseWare’s Creative Commons license but is licensed under the Creative Commons Attribution-NoDerivatives 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nd/4.0/.