According
to Wolfram's principle of computational equivalence (PCE),
which I introduced in Part 70,
almost all processes that are not obviously simple can be viewed as
computations of equivalent sophistication. In particular, simple cellular automata (CA) no more
elaborate than the Game of Life are computationally equivalent to powerful
computer systems. Once you get beyond very simple systems, practically all
systems have the highest possible level of complexity, and are computationally
equivalent to all other nonsimple systems.

The
PCE, according to Wolfram (2002),
'tells us what kinds of computations can and cannot happen in our universe
[and] summarizes purely abstract deductions about possible computations, and
provides foundations for more general definitions of the very concept of
computation'. No system can ever carry out explicit computations that are more
sophisticated than those carried out by systems like cellular automata and
Turing machines.

Let us now see
how the PCE rationalizes the rampant occurrence of computational irreducibility
(or complexity) in Nature. For this we have to address the question of
comparing the computational sophistication of the systems that we study with
the computational sophistication of the systems that we use for studying them.
The PCE implies that, once a threshold has been crossed, any real system must
exhibit essentially the same level of computational sophistication. And this
applies to our own perception and analysis capabilities also; according to the
PCE they are not computationally superior to the complex systems we seek to
observe and understand. Beyond a certain threshold, all systems are
computationally equivalent. If predictions about the behaviour of a system are
to be possible, it must be the case that the system making the predictions is
able to *outrun* the system it is trying to make predictions about. But this is possible only if the predicting system can perform more sophisticated computations than the system under investigation, and the PCE does not allow that. Therefore, for all but simple systems (mainly Class 1 and Class 2 systems), no systematic predictions can be made about behaviour at any chosen time in the future, and there is no general way to shortcut the process of evolution. In other words, most such systems are computationally irreducible.
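The contrast can be made concrete with elementary cellular automata. The sketch below is mine, not Wolfram's: rule 254 (which simply spreads black cells outward) is computationally reducible, so a one-line closed-form shortcut predicts any future cell without running the system; for rule 30 no such general shortcut is known, and the only way to get a future configuration is to simulate every step.

```python
# Reducible vs. irreducible elementary CAs (illustrative sketch).

def eca_step(row, rule):
    """One step of an elementary (2-colour, nearest-neighbour) CA
    with periodic boundary conditions."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n]*4 + row[i]*2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def simulate(rule, width, steps):
    """Evolve from a single black cell for `steps` time steps."""
    row = [0]*width
    row[width // 2] = 1
    for _ in range(steps):
        row = eca_step(row, rule)
    return row

width, steps = 101, 40
c = width // 2

# Rule 254: reducible -- the shortcut |i - c| <= t predicts the state
# at step t directly, no simulation needed.
shortcut = [1 if abs(i - c) <= steps else 0 for i in range(width)]
assert simulate(254, width, steps) == shortcut

# Rule 30: irreducible (as far as anyone knows) -- we must run all steps.
print(simulate(30, width, steps))
```

The assertion passes because rule 254's growth is a solid triangle widening by one cell per side per step; nothing comparable is available for rule 30.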
This line of
reasoning helps understand the illusion of free will. The human
brain is clearly a very complex computationally irreducible system.
This means that, even if all the rules are known and all the initial conditions
have been specified, we cannot predict the end results of the computations
carried out by the brain. The only way to know an end result is to actually run
the system and see what happens, i.e. whether the program 'halts' or not. There
is also the added complication of extreme sensitivity to initial conditions, typical of
chaotic or quasi-chaotic systems. [Complex adaptive systems (the brain included) thrive best at the edge of chaos in phase space.] All told, this is
how the illusion of unpredictability or 'free will' arises. There is the
illusion that a human being (and perhaps other creatures) can 'decide' a
future action arbitrarily, 'at will'.

A real example of a computationally irreducible cellular automaton is given on page 740 of Wolfram (2002). This cellular automaton follows a 3-colour '*totalistic rule*'. In a totalistic rule, the new colour of a cell depends only on the average colour of the cells in its neighbourhood, and not on their individual colours.
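A minimal sketch of a 1-D, 3-colour totalistic CA may help; the rule table below is arbitrary, chosen for illustration, and is not the specific rule shown on page 740. Since the neighbourhood size is fixed, depending on the average colour is the same as depending on the sum of the colours, which is what the code uses.

```python
# 1-D, 3-colour totalistic cellular automaton (illustrative rule table).

def totalistic_step(row, rule_table):
    """New colour of each cell depends only on the SUM of its 3-cell
    neighbourhood (equivalent to the average, since the neighbourhood
    size is fixed). Colours are 0..2, so the sum is 0..6 and the rule
    table has 7 entries. Boundaries are periodic."""
    n = len(row)
    return [rule_table[row[(i - 1) % n] + row[i] + row[(i + 1) % n]]
            for i in range(n)]

def run(width=11, steps=4, rule_table=(0, 1, 2, 0, 2, 1, 0)):
    row = [0] * width
    row[width // 2] = 1            # single non-white cell in the middle
    history = [row]
    for _ in range(steps):
        row = totalistic_step(row, rule_table)
        history.append(row)
    return history

for r in run():
    print(''.join('.ox'[c] for c in r))   # '.'=0, 'o'=1, 'x'=2
```

Note that two rows with different individual colours but identical neighbourhood sums evolve identically, which is exactly the totalistic property.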

As Wolfram
emphasizes, the whole idea of doing science with mathematical formulas makes
sense only for computationally *reducible* systems. For other systems there are no computational shortcuts (and such shortcuts are essentially what modelling a system by a few differential equations is all about). Practically the only way of knowing a future configuration of a complex system is to actually run through all the evolutionary time steps, and Wolfram's NKS is ideally suited for that purpose. Here is how:
One generates
the consequences of all sorts of simple programs defined by the corresponding
CA, thus generating the '*Wolfram computational universe*'. For understanding the basics of a given complex system observed in Nature, one tries to see whether the observed behaviour pattern can be matched with any of the archived complex patterns. If yes, then the simple program used for generating that particular CA pattern may point to a possible explanation of the time or space evolution of the complex behaviour observed in the actual physical system under study.
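A toy version of this procedure can be sketched in a few lines. The names and the miniature 'universe' (just the 256 elementary CAs) are my own illustrative choices, not Wolfram's archive: we enumerate every simple program, evolve each one, and search for rules whose output reproduces an 'observed' pattern.

```python
# Toy 'computational universe' search (illustrative sketch).

def eca_step(row, rule):
    """One step of an elementary (2-colour, nearest-neighbour) CA."""
    n = len(row)
    return [(rule >> (row[(i - 1) % n]*4 + row[i]*2 + row[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=31, steps=15):
    """Full space-time pattern grown from a single black cell."""
    row = [0]*width
    row[width // 2] = 1
    history = [tuple(row)]
    for _ in range(steps):
        row = eca_step(row, rule)
        history.append(tuple(row))
    return tuple(history)

def matching_rules(observed):
    """Search all 256 simple programs for ones reproducing `observed`."""
    return [r for r in range(256) if evolve(r) == observed]

# Pretend the pattern of rule 90 was 'observed in Nature'; the search
# recovers candidate simple-program explanations, rule 90 among them.
observed = evolve(90)
print(matching_rules(observed))
```

The same idea scales (in principle) to much larger rule spaces; the point is only that a match hands you a candidate generating mechanism.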
The PCE apparently puts an *upper limit* on complexity by implying that no system can carry out explicit computations that are more sophisticated than those carried out by CA or Turing machines.
I mention here one more result from the work of Wolfram, namely a possible answer to the question of *why the universe runs the way it does*. Is there a fundamental deterministic rule from which all else follows? Conventional wisdom says that randomness is at the heart of quantum mechanics, and because of this randomness the universe has infinite complexity (as quantified by the degree of complexity for a random system). Wolfram suggests that this may not be so. According to him, there may be no real randomness; only *pseudo*-randomness, like the randomness produced by random-number generators in a computer. The computer generates these numbers by using mathematical equations, and what we get are actually deterministic sequences of numbers.
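A classic example of such a generator is the linear congruential generator (LCG); the constants below are the widely used ones from Numerical Recipes. The stream of numbers it emits looks random, yet rerunning with the same seed reproduces it exactly: it is entirely deterministic.

```python
# Pseudo-randomness from a deterministic rule: a linear congruential
# generator, x_{k+1} = (a*x_k + c) mod m.

def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    x = seed
    out = []
    for _ in range(count):
        x = (a*x + c) % m           # purely deterministic update
        out.append(x)
    return out

print(lcg(2002, 5))                  # looks random...
assert lcg(2002, 5) == lcg(2002, 5)  # ...but is exactly reproducible
```

Without knowledge of the seed and the update rule, the output passes casual inspection as random; with that knowledge, the 'randomness' evaporates.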
Wolfram gives the analogy of *π* = 3.1415926... Suppose you are given, not the whole equation, but only a string of digits coming from far inside the decimal expansion. It would look random, *in the absence of complete knowledge*. In reality it is only pseudo-random. Wolfram puts forward the viewpoint that, similarly, the quantum randomness we see in the physical world may really be pseudo-randomness, and the physical world may actually be deterministic. It is simply that we do not know the underlying law, which may well be a simple cellular automaton.
There is a
human or anthropic angle to the meaning of complexity. Let us go back to the
equation *π* = 3.1415926... It has only a small information content or degree of complexity: a small algorithm, using the fact that *π* is the ratio of the circumference of a circle to its diameter, can generate the entire information contained in this equation. But if we humans do not know the entire sequence of digits, and are given only a string of digits coming from far inside the decimal expansion, then the degree of complexity is just about as large as the length of the string and can, in principle, be arbitrarily large. For us humans, the degree of complexity of a system depends on our knowledge about the system. As more knowledge is acquired, the degree of complexity may keep collapsing. Of course, this happens only for systems which are not irreducibly complex. If the complexity of a system is irreducibly or intrinsically large, our increasing knowledge about the system can have little effect on its degree of complexity.
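The 'small algorithm' in this argument can be exhibited explicitly. One concrete choice (mine, as an illustration) is the unbounded spigot algorithm of Gibbons (2006): a handful of lines of integer arithmetic that streams out as many digits of *π* as desired, so the whole infinite digit sequence collapses to a short program once the rule is known.

```python
# A short program generating the digits of pi: Gibbons' unbounded
# spigot algorithm (exact integer arithmetic, no floating point).

def pi_digits(count):
    """Return the first `count` decimal digits of pi."""
    digits = []
    q, r, t, j = 1, 180, 60, 2
    while len(digits) < count:
        u = 3*(3*j + 1)*(3*j + 2)
        y = (q*(27*j - 12) + 5*r) // (5*t)   # next confirmed digit
        digits.append(y)
        q, r, t, j = (10*q*j*(2*j - 1),
                      10*u*(q*(5*j - 2) + r - y*t),
                      t*u,
                      j + 1)
    return digits

print(pi_digits(10))   # starts 3, 1, 4, 1, 5, 9, ...
```

The program itself is tiny, yet `pi_digits(n)` works for any `n`; a stretch of its output shown in isolation, far from the start, would be indistinguishable from a random digit string.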