Coherence and Order: Integrating Information, Simplicity, and Self-Organization
Claim O2.1 – Reality Tends Toward Order: Many natural systems self-organize into structured, low-entropy states when driven far from equilibrium. Prigogine’s work on dissipative structures shows that beyond critical thresholds, random fluctuations can amplify into coherent patterns. For example, heated fluids develop macroscopic convection cells (Rayleigh–Bénard convection) that reflect a new “supermolecular order” maintained by energy flow from the environment. Likewise, living organisms maintain high internal order by importing free energy and exporting entropy. In general, open systems (those with external energy or matter input) can locally decrease entropy and increase complexity even while overall disorder (entropy) rises. As one analysis notes, in an isolated system entropy increases monotonically, but informational complexity typically rises and then falls as equilibrium is approached. Thus the universe as a whole may maximize entropy, yet pockets of sustained order and high complexity readily emerge (e.g. galaxies, life, culture). From this perspective, “chaos” (maximal randomness) is just one regime; far-from-equilibrium conditions favor rich structure. In fact, many natural laws and constants appear fine-tuned to foster complexity: anthropic arguments observe that physical constants (electron charge, force strengths, etc.) lie within narrow ranges allowing stable matter and chemistry. On this reading, the cosmos is arranged such that ordered, life-friendly structures are not only possible but expected. (In short, reality exhibits universal coherence alongside entropy increase.)
Entropy vs. Complexity: It is crucial to distinguish entropy (the thermodynamic/information-theoretic measure of disorder) from complexity. Shannon entropy $H$ quantifies unpredictability: it is minimal (zero) for a perfectly ordered distribution (e.g. all probability mass in one state) and maximal for a uniform distribution. In contrast, many notions of complexity (e.g. algorithmic complexity) peak at intermediate levels of order. A common observation is that a random string has high entropy but little meaningful structure, whereas a highly regular string (all zeros) has low entropy and low complexity. Indeed, one analysis found that in a closed system complexity rises and then declines, reaching a maximum before equilibrium. Open, self-organizing systems exploit this: driven by constant energy or information input, they can climb to high-complexity states (far from uniformity) while exporting disorder to their surroundings.
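A minimal sketch makes the entropy side of this distinction concrete. The snippet below (Python; the two four-state distributions are purely illustrative) computes Shannon entropy $H(p) = -\sum_i p_i \log_2 p_i$ for a perfectly peaked distribution and for a uniform one:

```python
import math

def shannon_entropy(p):
    """H(p) in bits for a discrete distribution given as a list of probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

ordered = [1.0, 0.0, 0.0, 0.0]      # all probability mass in one state
uniform = [0.25, 0.25, 0.25, 0.25]  # maximally spread out

print(shannon_entropy(ordered))  # 0.0 bits: perfectly predictable
print(shannon_entropy(uniform))  # 2.0 bits: the maximum for 4 states
```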
Self-Organization (Dissipative Structures): In the 1970s, Prigogine showed that far-from-equilibrium conditions can generate order. When an energy flux (e.g. a temperature gradient) crosses a threshold, a featureless system (like a uniformly heated fluid) can abruptly form coherent patterns, sustained by continuous energy dissipation. He emphasized that non-equilibrium is a source of order: small random fluctuations become amplified into stable, ordered “dissipative structures” when the system is coupled to its environment. Similar principles underlie pattern formation in chemistry, biology, and ecology. Thus, rather than only decaying to disorder, many systems inherently evolve toward complex organization when given energy throughput and selection for stability.
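This threshold behavior can be caricatured with the normal form of a supercritical pitchfork bifurcation, a standard minimal model (not Prigogine’s full thermodynamic treatment): in $\dot x = \lambda x - x^3$, with $x = 0$ as the featureless state and $\lambda$ standing in for the driving gradient relative to its critical value, a small fluctuation decays below threshold and amplifies into a new stable branch above it.

```python
def settle(lam, x0=1e-3, dt=0.01, steps=20_000):
    """Integrate dx/dt = lam*x - x**3 (forward Euler) from a tiny fluctuation x0.

    x = 0 is the uniform state; lam < 0 is sub-threshold driving,
    lam > 0 is super-threshold. Parameters are illustrative.
    """
    x = x0
    for _ in range(steps):
        x += dt * (lam * x - x**3)
    return x

print(settle(lam=-0.5))  # ~0.0: below threshold, the fluctuation dies out
print(settle(lam=+0.5))  # ~0.707: above threshold, it grows to a stable ordered branch
```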
Cosmic Fine-Tuning: In cosmology, many point to the “fine-tuned” universe as evidence of built-in coherence. Many fundamental constants (particle masses, force strengths, the cosmological constant, etc.) lie in narrow ranges that allow galaxies, stars, and complex chemistry (and hence observers) to exist. As Hawking and others note, this “remarkable fact” suggests that the laws of physics are adjusted to permit life, implying a bias toward ordered, life-permitting configurations. (Whether by multiverse selection or design, the observed fine-tuning implies the cosmos is predisposed to produce complexity rather than sterile chaos.)
Collectively, these lines of evidence support O2.1: order and coherence emerge naturally in physics and biology, not just randomness. Local decreases in entropy (order) are ubiquitous in driven systems, and the very structure of the universe favors complexity.
Integrated Information Theory (IIT) and Φ (O2.2)
Objective Measure of Coherence: Integrated Information Theory (IIT) posits that the amount of intrinsic, irreducible causal interactions in a system—denoted Φ (big Phi)—quantifies its unified complexity or “coherence.” Formally, IIT defines intrinsic information and integrated information for a physical system’s state using its transition probabilities. Given a system (e.g. neural network, electronic circuit) with state $s$, one computes the information $ii(s,\tilde s)$ that $s$ specifies about past or future states $\tilde s$. A minimum information partition (MIP) is found that minimally cuts interactions between subsystems. Then the integrated information $\varphi$ of that state is
$$\varphi \;=\; \min_{\text{partitions }\theta}\bigl[\,ii(s,\tilde s) - ii_\theta(s,\tilde s)\,\bigr],$$
quantifying how much information is lost if the system is split. Systems whose parts heavily constrain each other have large $\varphi$. A complex is a subset of elements that maximizes φ, and the total Φ (big phi) of the system sums the φ-values over all irreducible distinctions and relations in that complex. In IIT’s framework, Φ corresponds to the quantity of the system’s integrated structure.
- Example: A constant unit (a bit that always outputs “1”) has zero integration, since its state does not depend on anything else in the system. By contrast, a densely connected neural-like network with feedback can have very high φ, since each node’s state strongly depends on the whole network (see the toy sketch below).
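To make this intuition concrete, here is a toy “whole minus parts” integration measure for two-bit systems. This is a heavily simplified proxy, not the actual IIT 4.0 computation (real Φ involves a search over all partitions, complexes, and cause–effect repertoires); all names and the choice of cut are illustrative. The `swap` system, where each bit copies the other, is irreducible across the cut; the `copy` system, where each bit copies itself, decomposes cleanly:

```python
import numpy as np
from itertools import product

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution table p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def phi_proxy(tpm):
    """Toy 'whole minus parts' integration for a 2-bit system.

    tpm[s, s2] = p(next state s2 | current state s); states 0..3 encode
    bit pairs. A uniform prior over current states is assumed. This is a
    crude proxy, NOT the full IIT Phi computation.
    """
    joint = tpm / 4.0                       # p(s, s2) with uniform p(s)
    whole = mutual_information(joint)
    parts = 0.0
    for k in (0, 1):                        # cut: each bit vs. its own past
        pj = np.zeros((2, 2))
        for s, s2 in product(range(4), range(4)):
            pj[(s >> k) & 1, (s2 >> k) & 1] += joint[s, s2]
        parts += mutual_information(pj)
    return whole - parts

# Swap system: each bit copies the *other* bit -- irreducibly integrated.
swap = np.zeros((4, 4))
for s in range(4):
    b0, b1 = s & 1, (s >> 1) & 1
    swap[s, (b0 << 1) | b1] = 1.0

copy = np.eye(4)   # each bit copies itself -- no integration across the cut

print(phi_proxy(swap))   # 2.0 bits: splitting the system loses everything
print(phi_proxy(copy))   # 0.0 bits: the parts already carry all the information
```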
IIT grounds this measure philosophically via axioms of experience (intrinsicality, integration, information, etc.), but importantly it yields a mathematically precise definition of systemic cohesion. In practice, computing Φ exactly is intractable for large systems, but proxy measures (Φ*, Φ^G, Perturbational Complexity Index) have been developed. Empirical studies show proxies of Φ correlate with consciousness and integration in brains.
Thus IIT (versions 1.0–4.0) provides a concrete objective measure of coherence: Φ is high only when a system’s elements form an irreducible whole. Under IIT, a highly coherent system (like a brain generating unified experience) has a large Φ, whereas a collection of isolated parts (chaotic or independent) has Φ ≈ 0.
Critiques and Revisions of IIT
IIT is controversial. Critics argue that φ can produce counterintuitive results. Scott Aaronson famously showed that simple, functionally inert logic circuits (e.g. grids of XOR gates) would have very high Φ, implying those artificial systems would be “vastly more conscious” than humans. He wrote that IIT “fails” common-sense tests by predicting unbounded consciousness in bizarre systems. Tononi and colleagues concede the panpsychist implication: any system with causal structure has some nonzero Φ, however small. As Aaronson quotes, Tononi “cheerfully accepts” that even a thermostat or photodiode has “small but nonzero levels” of consciousness. For IIT proponents, this is not a flaw but a natural corollary: Φ simply measures integration, not our intuitions about animals versus gadgets. Tononi argues the theory predicts very large Φ only in richly integrated architectures (e.g. cortex), and that trivial systems do indeed have negligible Φ.
Other critiques note computational and definitional issues (e.g. different Φ variants, intractability). Defenders of IIT argue that refinements (IIT 3.0, 4.0) and careful partitioning rules address many pathologies. For example, IIT 4.0 includes rigorous criteria for maximal complexes and explicit formulas for Φ-structures. Nevertheless, Aaronson’s point remains instructive: any purported measure of order must be scrutinized for hidden assumptions.
Despite these debates, IIT has strong support from some consciousness researchers (e.g. Koch) and has inspired empirical tests. It predicts, for instance, that brains in dreamless sleep or under anesthesia will show sharply reduced Φ (and its proxies), consistent with experiments. Recent work even attempts to reconcile IIT with other theories (see below). Crucially for our framework, IIT provides a quantifiable measure of system-wide coherence: if the physical world tends toward order, integrated information is the mathematical gauge of that order.
Simplicity and Kolmogorov Complexity (O2.3)
Simplicity as a guiding principle: Across science, simpler explanations are favored (Occam’s razor). This principle has been rigorously formalized via Kolmogorov (algorithmic) complexity. For a data string $S$, its Kolmogorov complexity $K(S)$ is defined as the length of the shortest computer program (in a fixed universal language) that outputs $S$. Informally, highly regular data (like “010101…”) have low $K$ because they compress to short rules, while random data have $K$ approximately equal to their length. This matches intuition: a simpler (more compressible) description is preferable.
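$K(S)$ is uncomputable in general, but any off-the-shelf compressor gives an upper bound, which is enough to see the contrast. A small sketch using Python’s `zlib` (reported sizes include fixed format overhead):

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Upper bound on K(s), plus format overhead, via zlib compression."""
    return len(zlib.compress(s, level=9))

regular = b"01" * 5000                                        # highly regular, 10 kB
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10_000))   # incompressible, 10 kB

print(compressed_size(regular))  # tiny: a short rule regenerates the whole string
print(compressed_size(noisy))    # ~10 kB: no shorter description is found
```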
Modern accounts show that among competing hypotheses consistent with data, those with lower Kolmogorov complexity have higher prior probability (Solomonoff induction) and generalize better. Equivalently, the Minimum Description Length (MDL) principle trades off model complexity and data fit: it selects models that minimize the sum of model code length and data’s description length under the model. In effect, simpler (smaller $K$) theories are statistically favored. This yields a concrete footing for Occam’s razor: the bias toward low algorithmic complexity follows from fundamental ideas of inference and information theory.
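The trade-off can be made concrete with a minimal two-part-code sketch for binary sequences, assuming the standard $(k/2)\log_2 n$ MDL/BIC-style charge per fitted parameter (everything else here is illustrative): a fair-coin model needs no parameters but spends one bit per symbol, while a fitted Bernoulli model pays for its parameter and wins only when the data are genuinely biased.

```python
import math

def mdl_bernoulli(seq: str) -> dict:
    """Two-part code lengths (bits) for a binary sequence under two models."""
    n, ones = len(seq), seq.count("1")
    # Model 0: fair coin, no parameters -- the data cost 1 bit per symbol.
    fair = float(n)
    # Model 1: Bernoulli(p) with p fit to the data; charge (1/2) log2 n
    # bits for the single parameter (the standard MDL/BIC penalty).
    p = ones / n
    h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    biased = n * h + 0.5 * math.log2(n)
    return {"fair": fair, "biased": biased}

print(mdl_bernoulli("1" * 90 + "0" * 10))  # biased wins: shorter total code
print(mdl_bernoulli("01" * 50))            # fair wins: fitted p = 0.5, penalty wasted
```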
Implications for coherence: Simplicity and integrated information are intertwined. A system with high Φ typically has correlated, structured behavior that can often be described succinctly (low $K$). Conversely, a system with maximal entropy (randomness) has high $K$ but low causal integration. In fact, both notions hinge on compression: Kolmogorov complexity is compressibility, and integrated information can be seen as the amount of multi-part information not decomposable into separate parts. Thus, models of the world or of systems that maximize integrated information often coincide with low description length of observed regularities.
In summary, O2.3 is borne out by the mathematics of induction: preferring simpler, lower-$K$ explanations is equivalent to seeking the most regular, least “surprising” patterns in data. This bias complements the drive toward order: a coherent model of reality has both high Φ and low algorithmic complexity relative to an incoherent one.
Free Energy Principle and Coherence
Karl Friston’s Free Energy Principle (FEP) provides a unified account of self-organization and perception. It states that adaptive systems (brains, organisms) minimize their variational free energy, a quantity that bounds “surprise” (negative log-likelihood) of sensory inputs. Concretely: systems pursue paths of least surprise by continuously updating internal models or by acting to make the world conform to predictions. Under the FEP, minimizing free energy equates to maximizing model coherence and reducing uncertainty.
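A minimal sketch of this idea, assuming a one-dimensional Gaussian generative model with illustrative variances and learning rate: free energy here reduces to a precision-weighted sum of prediction error and prior deviation, and “perception” is gradient descent on it.

```python
def free_energy(mu, x, prior_mu=0.0, var_x=1.0, var_p=1.0):
    """Variational free energy (up to constants) for a 1-D Gaussian model:
    prediction error on the datum x plus deviation from the prior belief."""
    return 0.5 * ((x - mu) ** 2 / var_x + (mu - prior_mu) ** 2 / var_p)

# Perception as gradient descent on F: the belief mu settles at the
# precision-weighted compromise between the sensory input and the prior.
x, mu, lr = 2.0, 0.0, 0.1
for _ in range(100):
    grad = -(x - mu) / 1.0 + (mu - 0.0) / 1.0   # dF/d(mu) with unit variances
    mu -= lr * grad

print(mu, free_energy(mu, x))   # mu converges to 1.0, the minimum of F
```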
Several recent works connect FEP with IIT. Both frameworks describe how systems maintain internal structure: IIT focuses on intrinsic causal integration (coherence), while FEP emphasizes accurate modeling and entropy reduction. Conceptual bridges have been proposed: for example, Markovian monism notes that a system’s Markov blanket (FEP’s partition between internal and external states) formalizes the IIT notion of an integrated complex. Likewise, the Integrated World Modeling Theory argues that agents under active inference naturally develop rich integrated models (high Φ) because such models support long-term free-energy minimization. Empirical work in neuronal cultures finds that Φ tends to increase during inference and correlates with measures of Bayesian surprise (a component of free energy). In other words, as a network learns and reduces uncertainty, its integrated information grows.
In sum, FEP and IIT are complementary: FEP describes why systems organize (to minimize surprise), and IIT quantifies how much order they attain. Systems that effectively minimize free energy tend to build integrated, low-entropy models of their world—precisely those with high Φ. Thus coherence (IIT) and surprise minimization (FEP) are tightly linked.
Comparison of Information and Complexity Measures
To clarify these relationships, consider the following comparison of key measures:
| Measure | Context & Definition | Interpretation/Behavior |
|---|---|---|
| Shannon entropy (H) | Measure of uncertainty/disorder in a probability distribution | Low (zero) for peaked, predictable distributions; maximal for uniform (random) ones; in closed systems entropy tends to increase (Second Law). |
| Kolmogorov complexity (K) | Length of shortest program describing an object | Low for regular/compressible data, high for random data; formalizes Occam’s razor by preferring simpler models. |
| Integrated information (Φ) | Total irreducible causal information in a system | High when subsystems cannot be partitioned without loss of information (strong integration); identifies unified structures (e.g. conscious wholes). |
| Free Energy (F) | Variational bound on sensory surprise in predictive models | Systems minimize F by adapting models/actions; low F implies a coherent internal model of the environment. |
This table shows that Φ uniquely captures system-wide causal cohesion, beyond what disorder (entropy) or description length (K) measure. It aligns with our intuition of coherence: high Φ signals a globally linked structure that is more “ordered” than its parts taken separately. Meanwhile, Occam’s razor (low K) and the FEP (low F) ensure that the simplest, most predictive organizations are favored.
Conclusion: Φ as a Measure of Systemic Order
The convergence of these theories suggests that order and coherence in reality can be rigorously defined and measured. Self-organization and fine-tuning show that nature trends toward integrated, low-entropy structures. The Free Energy Principle explains this drive as model optimization and entropy export. Kolmogorov complexity and Occam’s razor ensure that the descriptions of these structures are as simple (compressible) as possible. Crucially, Integrated Information (Φ) provides the quantitative bridge connecting these ideas: it is a formal measure of the degree to which parts act as a cohesive whole.
By combining information theory, algorithmic complexity, and thermodynamics, we find a coherent picture: the most successful systems minimize surprise and description length while maximizing irreducible information. In such a framework, Φ emerges as the fundamental metric of order — a system’s internal coherence is literally its “integrated information”. This unified perspective supports the Coherence Framework (O2.1–O2.3) by showing that reality’s tendency toward order can be measured and explained through well-grounded mathematical principles and physics.
The table above compares the key measures (entropy, complexity, Φ, free energy) and shows how integrated information uniquely quantifies systemic “wholeness”. All of these theories interlock: compression (low K) and minimal free energy tend to increase Φ, and high Φ tends to correspond to low Shannon entropy under a suitable coarse-graining. In this sense, Φ captures the essence of order that the other measures hint at, making it a compelling candidate for a fundamental coherence metric in nature.
Sources: Integrated Information Theory and its formulations are documented by Tononi and Koch, with detailed definitions of Φ. Aaronson’s critique and Tononi’s response clarify interpretational issues. Kolmogorov complexity and Occam’s razor are standard in algorithmic information theory. Friston’s Free Energy Principle is outlined in neuroscience literature. The distinction between entropy and complexity, as well as examples of self-organization, are discussed in statistical mechanics and complexity science texts. Cosmological fine-tuning is reviewed in philosophical and scientific sources. Together, these sources substantiate the claim that Φ is a mathematically sound, philosophically coherent measure of universal order.