The tremendous success, in recent centuries, of science and technology in explaining the world around us and improving the human condition has helped create the impression that we are on the brink of understanding the Universe. The world is complex, but we seem to have been able to reduce its complexity down to a relatively small number of fundamental laws. These laws are formulated in the language of mathematics, and the idea is that, even if we can’t solve all the equations describing complex systems, at least we can approximate the solutions, usually with the help of computers. These successes led to a feeling bordering on euphoria at the power of our reasoning. Eugene Wigner summed up this feeling in his famous essay, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.

Granted, there are still a few missing pieces, like the unification of gravity with the Standard Model, and the 95% of the mass-energy of the Universe that remains unaccounted for, but we’re working on it… So there’s nothing to worry about, right?

Actually, if you think about it, the idea that the Universe can be reduced to a few basic principles is pretty preposterous. If this turned out to be the case, I would be the first to believe that we live in a simulation. It would mean that this enormous Universe, with all the galaxies, stars, and planets, was designed with one purpose in mind: that a bunch of sentient monkeys on the third planet from a godforsaken star in a godforsaken galaxy were able to understand it. That they would be able to build, in their puny brains–maybe extended with some silicon chips and fiber optics–a perfect model of it.

How do we understand things? By building models in our (possibly computer-enhanced) minds. Obviously, this only makes sense if the model is smaller than the actual thing, which is only possible if reality is compressible. Now compare the size and the complexity of the Universe with the size and the complexity of our collective brains. Even with lossy compression, the discrepancy is staggering. But, you might say, we don’t need to model the totality of the Universe, just the small part around us. This is where compositionality becomes paramount. We assume that the world can be decomposed, and that the relevant part of it can be modeled, to a good approximation, independently of the rest.

Reductionism, which has been fueling science and technology, was made possible by the decompositionality of the world around us. And by “around us” I mean not only physical vicinity in space and time, but also proximity of scale. Consider that there are 35 orders of magnitude between us and the Planck length (which is where our most precious model of spacetime breaks down). It’s perfectly possible that the “sphere of decompositionality” we live in is but a thin membrane; more of an anomaly than a rule. The question is, why do we live in this sphere? Because that’s where life is! Call it the anthropic, or biotic, principle.

The first rule of life is that there is a distinction between the living thing and the environment. That’s the primal decomposition.

It’s no wonder that one of the first “inventions” of life was the cell membrane. It decomposed space into the inside and the outside. But even more importantly, every living thing contains a model of its environment. Higher animals have brains to reason about the environment (where’s food? where’s predator?). But even a lowly virus encodes, in its DNA or RNA, the tricks it uses to break into a cell. Show me your RNA, and I’ll tell you how you spread. I’d argue that the definition of life is the ability to model the environment. And what makes the modeling possible is that the environment is decomposable and compressible.

We don’t think much of the possibility of life on the surface of a proton, mostly because we think that the proton is too small. But a proton is closer to our scale than it is to the Planck scale. A better argument is that the environment at the proton scale is not easily decomposable. A quarkling would not be able to produce a model of its world that would let it compete with other quarklings and start evolution. A quarkling wouldn’t even be able to separate itself from its surroundings.

Once you accept the possibility that the Universe might not be decomposable, the next question is, why does it appear to be so overwhelmingly decomposable? Why do we believe so strongly that the models and theories that we construct in our brains reflect reality? In fact, for the longest time people would study the structure of the Universe using pure reason rather than experiment (some still do). Ancient Greek philosophers were masters of such introspection. This makes perfect sense if you consider that our brains reflect millions of years of evolution. Euclid didn’t have to build a Large Hadron Collider to study geometry. It was obvious to him that two parallel lines never intersect (it took us two thousand years to start questioning this assertion–still using pure reason).

You cannot talk about decomposition without mentioning atoms. Ancient Greeks came up with this idea by pure reasoning: if you keep cutting stuff, eventually you’ll get something that cannot be cut any more, the “uncuttable” or, in Greek, ἄτομον [atomon]. Of course, nowadays we not only know how to cut atoms but also protons and neutrons. You might say that we’ve been pretty successful in our decomposition program. But these successes came at the cost of constantly redefining the very concept of decomposition.

Intuitively, we have no problem imagining the Solar System as composed of the Sun and the planets. So when we figured out that atoms were not elementary, our first impulse was to see them as little planetary systems. That didn’t quite work, and we know now that, in order to describe the composition of the atom, we need quantum mechanics. Things are even stranger when decomposing protons into quarks. You can split an atom into free electrons and a nucleus, but you can’t split a proton into individual quarks. Quarks remain confined; they manifest themselves only indirectly, at high energies.

Also, the masses of the three constituent quarks add up only to one percent of the mass of the proton. So where does the rest of the mass come from? From virtual gluons and quark/antiquark pairs. So are those also the constituents of the proton? Well, sort of. This decomposition thing is getting really tricky once you get into quantum field theory.
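To put a number on it, here is a rough back-of-the-envelope check (a sketch only; the quark masses are the commonly quoted, approximate current-quark values):

    # Rough check: how much of the proton's mass comes from its valence quarks?
    # Approximate current-quark masses, in MeV/c^2 (ballpark figures).
    m_up, m_down = 2.2, 4.7
    m_proton = 938.3

    valence = 2 * m_up + m_down       # a proton is uud
    print(valence / m_proton)         # ~0.01, i.e. about one percent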

Human babies don’t need to experiment with falling into a precipice in order to learn to avoid visual cliffs. We are born with some knowledge of spatial geometry, gravity, and the (painful) properties of solid objects. We also learn to break things apart very early in life. So decomposition by breaking apart is very intuitive, and the idea of a particle–the ultimate result of breaking apart–makes intuitive sense. There is another decomposition strategy: breaking things into waves. Again, it was the Ancient Greeks: Pythagoras, who studied music by decomposing it into harmonics, and Aristotle, who suggested that sound propagates through the movement of air. Eventually we uncovered wave phenomena in light, and then the rest of the electromagnetic spectrum. But our intuitions about particles and waves are very different. In essence, particles are supposed to be localized and waves are distributed. The two decomposition strategies seem to be incompatible.
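As an aside, the wave strategy can be made very concrete: decomposing a signal into its harmonics takes a few lines (a toy sketch; the frequencies and amplitudes are made up):

    import numpy as np

    # A one-second "chord" sampled at 1 kHz: 220 Hz plus a quieter 330 Hz.
    rate = 1000
    t = np.arange(0, 1, 1 / rate)
    signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 330 * t)

    # The Fourier transform recovers the harmonics the signal was built from.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / rate)
    print(freqs[spectrum > 0.25 * spectrum.max()])   # [220. 330.]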

Enter quantum mechanics, which tells us that every elementary chunk of matter is both a wave and a particle. Even more shockingly, the distinction depends on the observer. When you don’t look at it, the electron behaves like a wave; the moment you glance at it, it becomes a particle. There is something deeply unsatisfying about this description and, if it weren’t for the amazing agreement with experiment, it would be considered absurd.

Let’s summarize what we’ve discussed so far. We assume that there is some reality (otherwise, there’s nothing to talk about), which can be, at least partially, approximated by decomposable models. We don’t want to identify reality with models, and we have no reason to assume that reality itself is decomposable. In our everyday experience, the models we work with fit reality almost perfectly. Outside everyday experience, especially at short distances, high energies, high velocities, and strong gravitational fields, our naive models break down. A physicist’s dream is to create the ultimate model that would explain everything. But any model is, by definition, decomposable. We don’t have a language to talk about non-decomposable things other than describing what they aren’t.

Let’s discuss a phenomenon that is borderline non-decomposable: two entangled particles. We have a quantum model that describes a single particle. A two-particle system should be some kind of composition of two single-particle systems. Things may be complicated when the particles are close together, because of possible interaction between them, but if they move in opposite directions for long enough, the interaction should become negligible. This is what happens in classical mechanics, and also with isolated wave packets. When one experimenter measures the state of one of the particles, this should have no impact on the measurement done by another far-away scientist on the second particle. And yet it does! There is a correlation that Einstein called “spooky action at a distance.” This is not a paradox, and it doesn’t contradict special relativity (you can’t pass information from one experimenter to the other). But if you try to stuff it into either the particle or the wave model, you can only explain it by assuming some kind of instant exchange of data between the two particles. That makes no sense!
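To get a feeling for how strong that correlation is, here is a minimal numerical sketch. It assumes the standard quantum prediction for two spin-1/2 particles in the singlet state, E(a, b) = -cos(a - b), and evaluates the usual CHSH combination of measurement angles:

    import numpy as np

    # Quantum prediction for the correlation of spin measurements
    # along directions a and b, for two particles in the singlet state.
    def E(a, b):
        return -np.cos(a - b)

    # CHSH combination with the standard choice of measurement angles.
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4
    S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
    print(S)   # ~2.83; any model with locally pre-assigned outcomes stays below 2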

We have an almost perfect model of quantum mechanical systems using wave functions until we introduce the observer. The observer is the Godzilla-like mythical beast that behaves according to classical physics. It performs experiments that result in the collapse of the wave function. The world undergoes an instantaneous transition: wave before, particle after. Of course, an instantaneous change violates the principles of special relativity. To restore it, physicists came up with quantum field theory, in which the observers are essentially relegated to infinity (which, for all intents and purposes, starts a few centimeters away from the point of the violent collision in a collider). In any case, quantum theory is incomplete because it requires an external classical observer.

The idea that measurements may interfere with the system being measured makes perfect sense. In the macro world, when we shine light on something, we don’t expect to disturb it too much; but we understand that the micro world is much more delicate. What’s happening in quantum mechanics is more fundamental, though. The experiment forces us to switch models. We have one perfectly decomposable model in terms of the Schrödinger equation. It lets us understand the propagation of the wave function from one point to another, from one moment to another. We stick to this model as long as possible, but a time comes when it no longer fits reality. We are forced to switch to a different, also decomposable, particle model. Reality doesn’t suddenly collapse. It’s our model that collapses because we insist–we have no choice!–on decomposability. But if nature is not decomposable, one model cannot possibly fit all of it.

What happens when we switch from one model to another? We have to initialize the new model with data extracted from the old model. But these models are incompatible. Something has to give. In quantum mechanics, we lose determinism. The transition doesn’t tell us how exactly to initialize the new model, it only gives us probabilities.
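Here is a toy illustration of that loss of determinism, assuming the standard Born rule (the amplitudes are made up): the wave model hands over amplitudes, and the particle model gets initialized with a single outcome drawn at random.

    import numpy as np

    # The wave model hands over complex amplitudes; the particle model
    # is initialized with one outcome, drawn with probability
    # |amplitude|^2 -- the Born rule.
    amplitudes = np.array([0.6, 0.8j])     # normalized: 0.36 + 0.64 = 1
    probs = np.abs(amplitudes) ** 2
    outcome = np.random.choice(len(amplitudes), p=probs)
    print(probs, outcome)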

Notice that this approach doesn’t rely on the idea of a classical observer. What’s important is that somebody or something is trying to fit a decomposable model to reality, usually locally, although the case of entangled particles requires the reconciliation of two separate local models.

Model switching and model reconciliation also show up in the interpretation of the twin paradox in special relativity. In this case we have three models: the twin on Earth, the twin on the way to Proxima Centauri, and the twin on the way back. They start by reconciling their models–synchronizing the clocks. When the astronaut twin returns from the trip, they reconcile their models again. The interesting thing happens at Proxima Centauri, where the second twin turns around. We can actually describe the switch between the two models, one for the trip to, and another for the trip back, using more advanced general relativity, which can deal with accelerating frames. General relativity allows us to keep switching between local models, or inertial frames, in a continuous way. One could speculate that similar continuous switching between wave and particle models is what happens in quantum field theory.
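For what it’s worth, here are the numbers, in an illustrative sketch: a cruise speed of 0.8c, a one-way distance of about 4.24 light-years, and the acceleration phases ignored.

    import math

    # Twin "paradox" with illustrative numbers.
    d, v = 4.24, 0.8                     # light-years, fraction of c

    t_earth = 2 * d / v                  # round trip on the Earth clock: 10.6 years
    gamma = 1 / math.sqrt(1 - v ** 2)    # Lorentz factor: ~1.67
    t_traveler = t_earth / gamma         # the traveler's clock: ~6.4 years
    print(t_earth, t_traveler)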

In math, the closest match to this kind of model-switching is in the definition of topological manifolds and fiber bundles. A manifold is covered with maps, or charts–local models of the manifold in terms of simple n-dimensional spaces. Transitions between maps are well defined, but there is no guarantee that there exists one global map covering the whole manifold. To my knowledge, there is no theory in which such transitions would be probabilistic.
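Here is a minimal sketch of the chart idea, using the circle as the manifold (the chart functions and their names are mine, for illustration only):

    import math

    # Two charts covering the circle S^1. Neither alone covers the whole
    # manifold, but together they do, and they agree on the overlap.
    def chart_A(x, y):
        # Angle in (-pi, pi); its domain is meant to exclude the point (-1, 0).
        return math.atan2(y, x)

    def chart_B(x, y):
        # Angle in (0, 2*pi); its domain is meant to exclude the point (1, 0).
        t = math.atan2(y, x)
        return t if t > 0 else t + 2 * math.pi

    def transition_A_to_B(theta):
        # Transition map, defined on the overlap of the two charts.
        return theta if theta > 0 else theta + 2 * math.pi

    # A point on the lower half of the circle, seen through both charts:
    x, y = math.cos(-2.0), math.sin(-2.0)
    print(chart_B(x, y), transition_A_to_B(chart_A(x, y)))   # both ~4.28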

Seen from a distance, physics looks like a very patchy system, full of holes. Conventional wisdom has it that we should be able to eventually fill the holes and connect the patches. This optimism has its roots in the astounding series of successes in the first half of the twentieth century. Unfortunately, since then we have entered an era of stagnation, despite a record number of people and resources dedicated to basic research. It’s possible that it’s a temporary setback, but there is a definite possibility that we have simply reached the limits of decomposability. There is still a lot to explore within the decomposability sphere, and the amount of complexity that can be built on top of it is boundless. But there may be some areas that will forever be out of bounds to our reason.


Fig 1. Current decomposability sphere.

  • GR: General Relativity (gravity)
  • SR: Special Relativity
  • PQFT: Perturbative Quantum Field Theory (compatible with SR)
  • QM: Quantum Mechanics (non-relativistic)
  • BB: Big Bang
  • H: Higgs Field
  • SB: Symmetry Breaking (inflation)