Innate Knowledge and Deep Learning

Are we born with some form of innate knowledge? Innatism is gaining neuroscientific support and may shape the next R&D steps in AI and Deep Learning.

Mattia Ferrini
Towards Data Science

--

Figure 1: An elderly Plato walks alongside Aristotle. The School of Athens, Raphael

“The doctrine of Innatism […] holds that the human mind is born with ideas or knowledge. This belief, put forth most notably by Plato as his Theory of Forms and later by Descartes in his Meditations, is currently gaining neuroscientific evidence that could validate the belief that we are born with innate knowledge of our world” (source).

The doctrine of Innatism clashes with a “purist” Machine Learning approach, in which algorithms learn exclusively from data, without being explicitly programmed or equipped with pre-programmed computational and logical modules. “The actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity.” (source)

Poles apart, a different school of thought suggests combining symbolic AI techniques with deep learning.

The future of Deep Learning

“Hinton disparaged the idea — which has been advocated by New York University professor Gary Marcus among others — that deep learning will need to be combined with older, symbolic AI techniques to achieve human-level intelligence. Hinton compared this to using electric motors only to run the fuel-injectors of gasoline engines, even though electricity is far more energy efficient.” (source)

At the same time, hybrid models may address apparent limitations of deep learning, in particular that “deep learning currently lacks a mechanism for learning abstractions through explicit, verbal definition, and works best when there are thousands, millions or even billions of training examples”. (source)

Should we better integrate GOFAI (Good Old-Fashioned AI, i.e. symbolic AI) into Deep Learning? The debate is ongoing and fierce.

New neuroscientific evidence

In my opinion, the discussion ultimately boils down to one question — do we, humans, learn everything from experience or are we born equipped with some form of innate knowledge?

A study published in the Proceedings of the National Academy of Sciences (PNAS) “discovered a synaptic organizing principle that groups neurons in a manner that is common across animals and hence, independent of individual experiences” (source). Such clusters contain representations of certain simple workings of the physical world. “The groups of neurons, or cell assemblies, appear consistently in the neocortices of animals and are essentially cellular “building blocks”. In many animals then, it may hold true that learning, perception, and memory are a result of putting these pieces together rather than forming new cell assemblies” (source).

A thin demarcation line

In light of growing neuroscientific evidence supporting the existence of innate knowledge, it may make sense to equip deep learning with “innate” computational modules or primitives. Some of these primitives will likely be based on ideas borrowed from, or inspired by, GOFAI.
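To make this concrete, here is a minimal sketch in Python (assuming PyTorch) of what building in an “innate” primitive can look like: a classifier whose convolutional weight-sharing structure, a hard-coded assumption that image statistics are translation-invariant, is wired into the architecture rather than learned, with an experience-driven layer trained on top. The class name InnatePriorNet and all layer sizes are illustrative choices of mine, not taken from any cited work.

```python
import torch
import torch.nn as nn

class InnatePriorNet(nn.Module):
    """Toy classifier with a built-in 'innate' prior: the convolutional
    weight-sharing structure encodes translation symmetry by design,
    rather than being learned from data."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        # "Innate" module: the convolutional structure is a hard-wired
        # assumption about the world (nearby pixels relate, and statistics
        # are translation-invariant), not something extracted from data.
        self.innate = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Learned, experience-driven part trained on top of the innate structure.
        self.learned = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = self.innate(x)
        return self.learned(features.flatten(start_dim=1))

model = InnatePriorNet()
logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 grayscale images
print(logits.shape)  # torch.Size([8, 10])
```

Convolution is the textbook example of such a prior: the symmetry assumption is not learned from examples but baked into the architecture, which is a large part of what makes convolutional networks so sample-efficient on images.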

On the other hand, it is hard to foresee what Deep Learning architectures will look like in the future. Yoshua Bengio himself acknowledges that “new architectures for deep learning would be needed, however, before a neural network could match the kind of general intelligence the human brain possesses” (source).

In my opinion, it is very likely that symbol manipulation will be deeply coupled and entwined with neural architectures as opposed to a clear-cut juxtaposition of, for example, a neural back-end and a symbolic front-end (e.g. Figure 2). “Models closer to general-purpose computer programs, built on top of far richer primitives than our current differentiable layers — this is how we will get to reasoning and abstraction, the fundamental weakness of current models” (source).

Figure 2: Deep Symbolic Reinforcement Learning. The neural back end learns to map raw sensor data into a symbolic representation, which the symbolic front end then uses to learn an effective policy (source)
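As an illustration of this juxtaposition, below is a minimal sketch in Python (assuming PyTorch) of the two-stage pattern in Figure 2: a neural back end that discretizes raw observations into symbols, and a symbolic front end that runs tabular Q-learning over those symbols. Everything here, the class names, the dimensions, and the choice of Q-learning, is an illustrative assumption of mine, not the implementation from the cited paper.

```python
import random
from collections import defaultdict

import torch
import torch.nn as nn

class NeuralBackEnd(nn.Module):
    """Neural back end: maps a raw observation vector to a discrete symbol id."""

    def __init__(self, obs_dim: int = 64, num_symbols: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, num_symbols)
        )

    def forward(self, obs: torch.Tensor) -> int:
        # A hard argmax collapses the continuous embedding into a symbol.
        return int(self.encoder(obs).argmax().item())

class SymbolicFrontEnd:
    """Symbolic front end: classic tabular Q-learning over symbol ids."""

    def __init__(self, num_actions: int, lr: float = 0.1, gamma: float = 0.9):
        self.q = defaultdict(lambda: [0.0] * num_actions)
        self.num_actions, self.lr, self.gamma = num_actions, lr, gamma

    def act(self, symbol: int, epsilon: float = 0.1) -> int:
        # Epsilon-greedy action selection over the Q-table.
        if random.random() < epsilon:
            return random.randrange(self.num_actions)
        return max(range(self.num_actions), key=lambda a: self.q[symbol][a])

    def update(self, s: int, a: int, reward: float, s_next: int) -> None:
        # One-step Q-learning update on the symbolic state.
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.lr * (target - self.q[s][a])

# Wire the two halves together on dummy data: perception is neural,
# decision-making is symbolic.
back_end, front_end = NeuralBackEnd(), SymbolicFrontEnd(num_actions=4)
obs, next_obs = torch.randn(64), torch.randn(64)
s = back_end(obs)
a = front_end.act(s)
front_end.update(s, a, reward=1.0, s_next=back_end(next_obs))
```

Note how crisp the boundary is in this sketch: symbols come out of the back end fully formed, and the front end never touches the raw data. A genuinely entwined design would train symbol extraction and policy jointly, which is exactly the direction argued for above.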

All of this suggests that the demarcation line between the two approaches, “purist” vs hybrid, is really thin. I therefore believe that the differences in points of view are more a matter of emphasis than of fundamentals.
