P4 - Coherence
The Paper (Narrative)
Included chapters (from narrative sequence)
P2.1_Order_vs_Chaos
P2.1 - Order vs. Chaos
Premise: “Meaningful structure” is measurable; coherence is the metric.
One-sentence version
If the world contains stable structure, “pure chaos” is not an explanation; coherence is a real, quantifiable constraint.
The Paper (Narrative)
There is a simple reason people distrust big systems: they feel like they can explain anything. If a theory can swallow any outcome, it predicts nothing.
So we start this volume with a demand:
If the universe is not random, what does “not random” look like in a form we can measure?
Your intuition already knows the answer. You can tell the difference between:
- a page of static and a page of language,
- a pile of sand and a functioning engine,
- noise and music.
In each pair, the second has integrated structure: parts that constrain each other.
That is the core idea behind “coherence.” Coherence is not a vibe. It is what you are pointing at whenever you say “that system is organized” or “that pattern holds together.”
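The static-versus-language contrast can be made concrete with a crude proxy. The project's actual coherence measure (Φ) is not specified here; the sketch below simply uses a compression ratio, on the assumption that integrated structure shows up as redundancy a compressor can exploit:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (lower = more exploitable structure)."""
    return len(zlib.compress(data)) / len(data)

# A page of "static": full-entropy noise, nothing constrains anything.
static = os.urandom(4096)

# A page of "language": parts constrain each other, so patterns repeat.
language = (b"the quick brown fox jumps over the lazy dog " * 100)[:4096]

print(f"static:   {compression_ratio(static):.2f}")    # near 1.0
print(f"language: {compression_ratio(language):.2f}")  # well below 1.0
```

This is only a stand-in: real coherence measures would need to capture more than repetition, but the direction of the gap is the point.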
This paper does two things:
- It asserts that order is required for reality to be intelligible and stable.
- It asserts that order admits degrees, and the project uses a coherence measure (written Φ, "Phi," in the axioms) to talk about those degrees.
At this point, we are not yet telling you what the final equation is or where the boundary conditions come from. We are only refusing the lazy move: “It just happened.” The more structure you observe, the less plausible “pure randomness” becomes as a sufficient story.
What This Paper Is Not Claiming
- It is not claiming “entropy never increases” (entropy does increase; that is not the point).
- It is not claiming you can read moral meaning off physics (later papers separate the layers).
- It is not claiming we already have the one true coherence measure; it is claiming the need for one.
Level 1 - Formal Claims (Axioms)
Level 2 - Case File (Receipts)
Next (Why Simple Laws Win)
P2.2_The_Parsimony_Law
P2.2 - The Parsimony Law
Premise: Occam’s razor is not taste - it’s constraint. Minimal description length wins.
One-sentence version
If multiple explanations fit, the one with less unnecessary machinery is not just “nicer”; it is more stable under extension, prediction, and integration.
The Paper (Narrative)
Parsimony is usually introduced as a rule of thumb: “Don’t multiply entities beyond necessity.” That can sound subjective, like style preference.
But in science and engineering, parsimony behaves more like a survival filter:
- Overfit models are fragile.
- Baroque explanations break when you add new data.
- Unnecessary parameters hide contradictions.
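These failure modes can be shown in a toy contrast (an illustration of the idea, not the project's formal parsimony law). Two "theories" fit the same observations perfectly, but one is a compact rule and the other is a free parameter per data point:

```python
# Observed data: the alternating sequence 0, 1, 0, 1, ...
observed = [i % 2 for i in range(20)]

# Theory A: one short rule, no per-observation parameters.
def rule(i: int) -> int:
    return i % 2

# Theory B: a lookup table -- one free parameter per observation.
table = dict(enumerate(observed))

# Both fit the observed data exactly...
assert all(rule(i) == observed[i] for i in range(20))
assert all(table[i] == observed[i] for i in range(20))

# ...but only the parsimonious theory extends to unseen data.
print(rule(20))       # 0 -- a prediction
print(table.get(20))  # None -- silent on anything it has not memorized
```

The lookup table is the limiting case of "baroque explanation": maximal fit, zero reach. Parsimony, in this framing, is what keeps a theory load-bearing when new data arrives.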
So the project treats parsimony as a constraint on what counts as a defensible ontology.
This matters because it tells you what kind of claims to expect from the rest of the system:
- Not “it could be this, or that, or anything.”
- But “given the constraints, the solution space collapses.”
Later, when the system argues for a terminal observer, a grace operator, or a specific set of boundary conditions, it is not trying to be maximalist. It is trying to be minimal while sufficient.
That is the parsimony law in plain language:
Reality may be deep, but it is not wasteful.
What This Paper Is Not Claiming
- It is not claiming the simplest story is always true in every context.
- It is not claiming we can measure Kolmogorov complexity perfectly (we cannot).
- It is claiming that explanations that require infinite ad-hoc patches are not stable.
Level 1 - Formal Claim (Axioms)
Level 2 - Case File (Receipts)
Next (Why Simple Rules Create Deep Worlds)
P2.3_The_Generative_Fractal
P2.3 - The Generative Fractal
Premise: Simple laws can generate deep histories; “complexity” is not the same as “random.”
One-sentence version
A system can be compressible (simple rule) and still produce outcomes that are irreducibly historical (deep), which is why “the world looks complex” does not imply “the world is random.”
The Paper (Narrative)
Here is a mistake people make when they meet complexity:
They assume “complex” means “uncaused” or “unstructured” or “random.”
But you already know a counterexample: life. A seed is small. A tree is huge. The growth rule is simple in principle, but the history is long.
Algorithmic depth is the same idea in information terms:
- A random string is hard to compress, but it is also shallow: there is no story, no structure, no layered dependence.
- A deep structure can be generated by a compact rule, but it contains a long chain of intermediate steps that cannot be skipped.
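Both halves of this contrast can be sketched in code. The example below uses the elementary cellular automaton Rule 110 as a stand-in for a "compact rule with a long history" (chosen for illustration; the paper does not commit to any particular rule), and compares compressibility against pure noise:

```python
import os
import zlib

def rule110(width: int = 256, steps: int = 256) -> bytes:
    """Run Rule 110 from a single live cell and return the whole spacetime history."""
    row = [0] * width
    row[width // 2] = 1
    history = []
    for _ in range(steps):
        history.extend(row)
        # Each cell's next state is read off the rule number's bits
        # from its 3-cell neighborhood; there is no known general
        # shortcut past the intermediate steps.
        row = [
            (110 >> ((row[(i - 1) % width] << 2) | (row[i] << 1)
                     | row[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return bytes(history)

deep = rule110()                 # generated by a rule a few lines long
shallow = os.urandom(len(deep))  # random: incompressible *and* story-less

print(len(zlib.compress(deep)) / len(deep))        # small: the rule shows up as redundancy
print(len(zlib.compress(shallow)) / len(shallow))  # near 1.0: no rule to find
```

The deep history is highly compressible (lawful), yet producing it still required running every step, which is the sense in which depth is historical rather than random. (Part of the compression gap here also comes from the two-symbol alphabet; the sketch is directional, not a measurement.)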
This matters because the project will later argue that reality is both:
- compressible enough to be lawful (parsimony), and
- deep enough to generate rich structure (history, identity, moral trajectories).
If you accept parsimony without depth, you get a sterile universe. If you accept depth without parsimony, you get a chaotic universe.
The point of this paper is to show you that you can have both without contradiction.
What This Paper Is Not Claiming
- It is not claiming the universe is literally a fractal in every technical sense.
- It is not claiming “depth” proves God; it is setting up why lawful simplicity does not destroy richness.
Level 1 - Formal Claim (Axioms)
Level 2 - Case File (Receipts)
Next (Enter the Observer)