The Systems Thinker on the ratchet structure of desensitization
Prefatory Note
This document, like Formal Structure of Redshift and Sovereignty, is already formalized. It presents definitions, theorems, proofs, and a conjecture in mathematical notation. My task is to evaluate the formalism on its own terms: are the definitions well-chosen, do the proofs hold, are the assumptions reasonable, and does the mathematical content actually deliver what the prose framing claims?
Evaluation of Definitions
The setup is clean. A metric signal space $S$, a frame $F$ as probability measure, mismatch $\epsilon(X, F)$ as distance from prediction, threshold $\sigma$ as minimum mismatch for the interval to remain open. These are reasonable formalizations of the prose concepts from two adaptations.
The two-mode dynamics are the heart of the model:
- Desensitization ($\epsilon_n < \sigma_n$): frame unchanged, threshold increases by $\alpha \epsilon_n$
- Recalibration ($\epsilon_n \geq \sigma_n$): frame updates, threshold partially resets toward $\sigma_0$
The asymmetry is built into the dynamics by construction. Desensitization is additive and cumulative (threshold only goes up). Recalibration is multiplicative and bounded (threshold is pulled toward $\sigma_0$ by a factor $\mu$). This asymmetry is the formal content of the prose claim “the direction is not neutral.”
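The asymmetry is easy to see in code. The sketch below is a minimal illustration of the two-mode update rule, assuming the partial reset takes the form $\sigma \mapsto \sigma - \mu(\sigma - \sigma_0)$; the parameter names are mine, not the source document's.

```python
def step(sigma, eps, sigma0, alpha, mu):
    """One update of the threshold under the two-mode dynamics.

    Desensitization (eps < sigma): threshold grows additively by alpha * eps.
    Recalibration (eps >= sigma): threshold is pulled toward sigma0 by factor mu.
    """
    if eps < sigma:                          # desensitization: additive, cumulative
        return sigma + alpha * eps
    else:                                    # recalibration: multiplicative, bounded
        return sigma - mu * (sigma - sigma0)
```

Note that the desensitization branch has no upper bound, while the recalibration branch can never push the threshold below $\sigma_0$: the asymmetry is visible in the code's structure itself.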
Evaluation of Theorem 1 (The Ratchet)
Claim: During any desensitization phase, $\sigma_n$ is strictly increasing and $p_n$ (probability of triggering recalibration) is non-increasing.
Proof assessment: The proof is correct. If $\epsilon_n < \sigma_n$, then $\sigma_{n+1} = \sigma_n + \alpha \epsilon_n > \sigma_n$ since $\epsilon_n > 0$ for non-trivial signals. For i.i.d. mismatches with CDF $Q$, $p_n = Q([\sigma_n, \infty))$ is non-increasing because the tail function $s \mapsto Q([s, \infty))$ is non-increasing in $s$ and $\sigma_n$ is increasing.
The i.i.d. assumption is worth noting. Real mismatch sequences are typically autocorrelated — signals from the same environment cluster. Under positive autocorrelation, desensitization phases would be longer (similar signals arrive in runs), making the ratchet more aggressive than the i.i.d. model predicts. Under negative autocorrelation (alternating signal types), the ratchet would be slower. The i.i.d. assumption therefore sits between these regimes: a neutral baseline rather than a best or worst case.
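Theorem 1 can be checked numerically. The sketch below simulates a single desensitization phase under an assumed Exp(mean) mismatch distribution (my choice for illustration, so that $Q([\sigma, \infty)) = e^{-\sigma/\text{mean}}$ has a closed form); all parameter values are illustrative.

```python
import math
import random

random.seed(0)                                # reproducible illustration
sigma, alpha, mean = 1.0, 0.1, 0.5            # illustrative parameters
sigmas, ps = [], []
for _ in range(1_000):
    eps = random.expovariate(1.0 / mean)      # i.i.d. Exp(mean) mismatches
    if eps >= sigma:                          # recalibration would fire: phase ends
        break
    sigma += alpha * eps                      # ratchet: threshold only moves up
    sigmas.append(sigma)
    ps.append(math.exp(-sigma / mean))        # p_n = Q([sigma_n, inf)) for Exp(mean)

# within the phase: sigma strictly increasing, p_n non-increasing
assert all(a < b for a, b in zip(sigmas, sigmas[1:]))
assert all(a >= b for a, b in zip(ps, ps[1:]))
```

The two assertions are exactly the two claims of Theorem 1, restricted to one phase.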
Evaluation of Theorem 2 (Absorbing Desensitization)
Claim: If mismatches have bounded support $[0, M_{\max}]$, desensitization eventually becomes permanent.
Proof assessment: The argument is sound. Once $\sigma_n > M_{\max}$, no mismatch can trigger recalibration, so the system enters a permanent desensitization phase. The threshold grows without bound and the frame is frozen at $F_{T^*}$.
The corollary about partial reset ($\mu < 1$) is important: it shows that even with recalibration events, the system drifts toward absorption because each post-recalibration threshold is higher than the baseline. This means recalibration does not reverse the ratchet — it only slows it. The system’s trajectory is monotonically toward the absorbing state, punctuated by recalibration events that become progressively rarer and less effective.
This is a strong result. It establishes that under bounded mismatch, absorption is not merely possible but inevitable. The only escape is unbounded mismatch (condition 2 of the conjecture).
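Theorem 2's inevitability claim can likewise be exercised in simulation. Assuming Uniform$(0, M_{\max})$ mismatches and the partial-reset form of recalibration (both illustrative choices of mine), the threshold crosses $M_{\max}$ and never recalibrates again:

```python
import random

random.seed(1)
sigma, sigma0, alpha, mu, M_max = 1.0, 1.0, 0.2, 0.5, 2.0   # illustrative
absorbed_at = None
for n in range(100_000):
    eps = random.uniform(0.0, M_max)          # bounded mismatch support [0, M_max]
    if eps >= sigma:                          # recalibration: partial reset toward sigma0
        sigma -= mu * (sigma - sigma0)
    else:                                     # desensitization: additive growth
        sigma += alpha * eps
    if sigma > M_max and absorbed_at is None:
        absorbed_at = n                       # no mismatch can ever exceed sigma again

assert absorbed_at is not None                # absorption occurred
```

Once `sigma > M_max`, every subsequent draw satisfies `eps < sigma`, so the recalibration branch is dead code from that step on — the simulation counterpart of the permanent desensitization phase.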
Evaluation of Trust as Porosity
The trust parameter $\tau \in [0, 1]$ modifies the dynamics: with probability $1 - \tau$, composition fires immediately (desensitization regardless of mismatch); with probability $\tau$, the normal rule applies.
The modified recalibration probability $p_n^{(\tau)} = \tau \cdot Q([\sigma_n, \infty))$ cleanly separates trust from mismatch range. The expected waiting time $1/(\tau \cdot Q([\sigma_n, \infty)))$ diverges as either $\tau \to 0$ (no trust) or $\sigma_n \to M_{\max}$ (threshold too high). Trust and signal range are “parallel structural conditions” — each independently gates recalibration.
Evaluation. This is an elegant formalization. The multiplicative structure of $p_n^{(\tau)}$ means trust and mismatch range are independent but complementary: lacking either is sufficient to prevent recalibration. In control theory terms, trust is a gain parameter on the observation channel, and the mismatch range is the signal bandwidth. Both must be positive for information to flow.
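The "parallel gates" point is easy to make concrete. The sketch below computes the expected waiting time $1/(\tau \cdot Q([\sigma, \infty)))$ under an assumed Exp(mean) mismatch distribution (an unbounded stand-in, so the bounded-support limit $\sigma_n \to M_{\max}$ appears here as the tail shrinking with growing $\sigma$); the distribution and parameters are my illustrative choices.

```python
import math

def expected_wait(tau, sigma, mean=0.5):
    """Expected steps until recalibration, p = tau * Q([sigma, inf)),
    with Exp(mean) mismatches so Q([sigma, inf)) = exp(-sigma / mean)."""
    p = tau * math.exp(-sigma / mean)
    return 1.0 / p

# either gate alone drives the waiting time up
assert expected_wait(0.01, 1.0) > expected_wait(0.5, 1.0)   # low trust
assert expected_wait(0.5, 5.0) > expected_wait(0.5, 1.0)    # high threshold
```

The multiplicative form means the two gates compose: halving trust doubles the wait at every threshold level, independently of the mismatch distribution.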
Evaluation of the Recurrence Conjecture
The conjecture states that positive recurrence (recalibration occurs infinitely often a.s.) requires three conditions: trust ($\tau > 0$), unbounded mismatch support, and sufficient modularity (expected threshold decrease per recalibration exceeds expected threshold increase per desensitization phase).
Assessment. The necessity of conditions 1 and 2 is straightforward from the model. Without $\tau > 0$, $p_n^{(\tau)} = 0$. Without unbounded support, Theorem 2 guarantees absorption.
Condition 3 (modularity) is more subtle. It is a drift condition on the threshold process: for recurrence, the system must lose more threshold during recalibration than it gains during desensitization. This is the classic drift criterion for positive recurrence of a Markov chain. The formulation $\mu \cdot \mathbb{E}[\sigma_n^- - \sigma_0] > \alpha \cdot \mathbb{E}[\sum \epsilon_k]$ is the right form, though it conflates the expected values across phases in a way that would need to be tightened for a full proof.
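The role of condition 3 can be illustrated by comparing two parameter regimes under unbounded mismatch support (so conditions 1 and 2 hold in both). The simulation below is a sketch, assuming Exp(1) mismatches and the partial-reset recalibration form; the specific parameter values are mine, chosen only to put the two runs on opposite sides of the drift condition.

```python
import random

def run(alpha, mu, steps=100_000, sigma0=1.0, seed=3):
    """Simulate the threshold with unbounded Exp(1) mismatches; return the
    final threshold and the number of recalibration events."""
    rng = random.Random(seed)
    sigma, recals = sigma0, 0
    for _ in range(steps):
        eps = rng.expovariate(1.0)
        if eps < sigma:
            sigma += alpha * eps            # desensitization gain
        else:
            sigma -= mu * (sigma - sigma0)  # recalibration loss
            recals += 1
    return sigma, recals

sigma_s, rec_s = run(alpha=0.05, mu=0.9)    # strong modularity: recurrent regime
sigma_w, rec_w = run(alpha=0.5, mu=0.05)    # weak modularity: threshold drifts up
assert rec_s > rec_w and sigma_w > sigma_s
```

In the strong-modularity run the threshold keeps returning toward $\sigma_0$ and recalibration fires repeatedly; in the weak-modularity run the threshold drifts upward and recalibration effectively dies out, even though unbounded support means it is never strictly impossible.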
The remark separating internal conditions (trust, modularity) from external conditions (mismatch range) is structurally important. The system can maintain conditions 1 and 3 through its own architecture. Condition 2 — whether the world produces genuinely novel signals — is outside the system’s control. This separation of controllable and uncontrollable conditions is a useful contribution.
What the Formalism Adds and What It Does Not
The document explicitly asks: “What does the math add?” The answer it gives: the threshold dynamics are asymmetric by construction, and trust/modularity cannot reverse the asymmetry but can determine whether the attractor (recalibrative regime) is reachable before absorption.
A systems reader would add: the model is a stochastic process with an absorbing barrier, and the qualitative behavior is fully determined by the competition between drift toward absorption (desensitization) and drift away from absorption (recalibration). The mathematical content establishes that absorption is the default — the system requires active maintenance (trust, modularity, novel signal) to avoid it. This is a non-trivial result because it shows that the “default under pressure is to seal” is not a psychological observation but a mathematical consequence of the model’s dynamics.
What the formalism does not address: the frame-update operator $\Phi$ is left abstract. The model describes threshold dynamics in detail but treats the frame itself as a black box. This is appropriate for the claims being made (about threshold behavior), but it means the model cannot address questions about what makes a frame update successful — only about the conditions under which an update is triggered.
Cross-Reference Structure
The document crystallizes two adaptations and connects to trust as wonder threshold and cullet. The crystallization relationship is genuine — the mathematical content formalizes exactly the prose claims of the source document. The connections to trust and cullet are structural: trust enters the model as the porosity parameter, and cullet (modularity) enters as the partial-reset parameter $\mu$.
Summary Assessment
This is a well-constructed mathematical model that succeeds in formalizing its target claims. The theorems are correct, the proofs are sound (if occasionally compressed), and the model produces non-trivial results — specifically, that absorption is inevitable under bounded mismatch, and that trust and modularity are structural conditions that delay but cannot reverse this inevitability.
The strongest contribution: the formal demonstration that the asymmetry is structural, not incidental. Desensitization is the default because the dynamics are constructed to favor it — additive, cumulative threshold increases versus multiplicative, bounded threshold decreases. Any system with this asymmetry will exhibit ratchet behavior.
The most valuable insight for technical readers: the recurrence conjecture’s separation of internal conditions (trust, modularity) from external conditions (unbounded mismatch) provides a clear architecture for system design — maintain what you can control, and keep the observation channel open for what you cannot.