The Systems Thinker on two adaptations
Extraction
This document introduces a fundamental bifurcation in adaptive systems: two modes that both reduce mismatch cost but differ structurally in their long-term consequences. Desensitization eliminates the interval in which sensation can register; recalibration uses that interval to update the frame. The document identifies trust and modularity as the structural conditions that determine which mode runs, and claims the dynamics are asymmetric — desensitization is the default under pressure, and each round makes the next recalibration less likely.
Formalization and Evaluation
The Two-Mode System
Formalization. Let the system receive signals $X_n$ from environment $E$. The system maintains a frame $F$ (predictive model). Mismatch $\epsilon_n = d(X_n, \hat{X}_F)$ measures distance between signal and prediction.
The system has an interval parameter $\delta \geq 0$ — the temporal gap between signal arrival and frame response. The system operates in one of two modes:
- Desensitization ($\delta \to 0$): frame responds before sensation registers. $F_{n+1} = F_n$ (no update). Effective permeability decreases.
- Recalibration ($\delta > 0$): interval holds, sensation registers, frame updates. $F_{n+1} = \Phi(F_n, X_n)$.
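To make the two modes concrete, here is a minimal sketch in Python, assuming a scalar signal, a frame summarized by a point prediction, and a simple learning-rate blend standing in for $\Phi$ (the function names and the specific update rule are illustrative, not from the source):

```python
import random

def step(frame: float, signal: float, delta: float, lr: float = 0.3) -> float:
    """One signal arrival. `frame` is the point prediction X_hat_F."""
    mismatch = abs(signal - frame)        # epsilon_n = d(X_n, X_hat_F)
    if delta == 0:
        # Desensitization: composition fires before sensation registers.
        return frame                      # F_{n+1} = F_n
    # Recalibration: the interval holds and the frame updates toward the signal.
    return frame + lr * (signal - frame)  # F_{n+1} = Phi(F_n, X_n)

frame = 0.0
for _ in range(50):
    signal = random.gauss(2.0, 0.5)       # environment E has drifted to mean 2
    frame = step(frame, signal, delta=1.0)
print(round(frame, 2))  # recalibration tracks E; with delta=0 the frame stays at 0.0
```

The `mismatch` value is computed but never consumed when `delta == 0`, which is precisely the point: the error exists but never reaches the updater.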
sisuon’s core claim: the interval $\delta$ is the structural variable that determines the mode. When $\delta = 0$, the system cannot distinguish signal from prediction because composition fires before the difference can register.
Evaluation. This maps precisely onto the predictive processing framework. In predictive processing, the brain generates predictions and compares them to incoming signals. The prediction error is the mismatch. Two responses to prediction error are recognized: (1) perceptual inference (updating the model — recalibration) and (2) active inference (changing the sensory input to match the prediction — which, in its pathological form, is desensitization).
The interval $\delta$ corresponds to the temporal window in which prediction error is computed before being resolved. If the window is too short, the error is suppressed before it can update the model. If it is long enough, the error propagates up the hierarchy and triggers model revision.
sisuon’s contribution is identifying that the interval is not just a processing parameter but a structural condition with distinct maintenance requirements. The interval must be actively maintained (trust) against the system’s default tendency to shorten it (faster prediction = less uncertainty = less discomfort).
Trust as Structural Condition
“Trust: whether the interval stays open.”
Formalization. Trust $\tau \in [0, 1]$ is the probability that the interval holds long enough for sensation to register. At $\tau = 0$, composition always fires immediately (pure desensitization). At $\tau = 1$, the interval always holds (sensation always has its moment).
The trust parameter gates access to the recalibrative mode. Without trust, the system operates in desensitization mode regardless of signal content. The signal may contain information that would trigger recalibration, but the interval collapses before that information can be evaluated.
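As a sketch of the gating claim, treating $\tau$ as the per-event probability that the interval holds (consistent with the formalization above; the names are illustrative):

```python
import random

def trusted_step(frame: float, signal: float, tau: float, lr: float = 0.3) -> float:
    """With probability tau the interval holds and sensation registers;
    otherwise composition fires immediately and the signal is discarded."""
    if random.random() < tau:
        return frame + lr * (signal - frame)  # recalibration
    return frame                              # desensitization

# At tau = 0 the update branch is unreachable: whatever the signal contains,
# no information ever flows from signal to frame.
```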
Evaluation. In information-theoretic terms, trust is a gain on the observation channel. At $\tau = 0$, the channel capacity between signal and frame-updater is zero — no information flows from signal to model. At $\tau = 1$, the channel is fully open. Trust determines the effective bandwidth between experience and learning.
This is a non-trivial claim. It implies that the bottleneck for adaptive learning is not the quality of the signals or the capacity of the model but the structural willingness to keep the observation channel open. The channel must be maintained against the system’s tendency to close it.
Modularity as Update Architecture
“Modularity: whether the frame can update in pieces.”
Formalization. Let $F = (F_1, F_2, \ldots, F_k)$ be a decomposition of the frame into $k$ modules. Modularity $\mu$ parameterizes the granularity of updates. At $\mu = 0$ (monolithic frame), any update requires replacing the entire frame — which means any mismatch either is absorbed without update or triggers total frame collapse. At $\mu = 1$ (fully modular), each module can be updated independently.
The practical consequence: monolithic frames have only two modes — hold or break. They cannot recalibrate partially. Modular frames support local revision — mismatch triggers update in the relevant module while the rest of the frame remains intact.
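A sketch of the two architectures, reading $\mu$ as the fraction of modules free to revise independently (one possible operationalization; the decomposition, the break threshold, and the names are assumptions):

```python
def modular_step(frame: list, signal: list, mu: float, lr: float = 0.3,
                 break_threshold: float = 5.0) -> list:
    """Frame as k modules, each predicting one signal component."""
    k = len(frame)
    updatable = int(mu * k)  # mu = 0: monolithic; mu = 1: fully modular
    mismatches = [abs(s - f) for f, s in zip(frame, signal)]
    if updatable == 0:
        # Monolithic frame: hold or break, no partial revision.
        if max(mismatches) > break_threshold:
            return list(signal)              # total frame replacement
        return frame                         # absorb without update
    # Local revision: update only the modules with the largest mismatch.
    worst = sorted(range(k), key=lambda i: -mismatches[i])[:updatable]
    return [f + lr * (s - f) if i in worst else f
            for i, (f, s) in enumerate(zip(frame, signal))]
```

At $\mu = 0$ a large shock replaces the whole frame and a small one changes nothing; at $\mu = 1$ every component revises locally.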
Evaluation. This connects to a deep result in complex systems theory: modular architectures are more evolvable than monolithic ones (Simon’s “Architecture of Complexity,” Wagner and Altenberg on modularity and evolvability). The formal reason is that modularity decouples the search space — updating one module does not require re-optimizing all others. sisuon’s application of this principle to cognitive frames is structurally appropriate and extends the established result to a new domain.
The interaction between trust and modularity is multiplicative: without trust, modularity is irrelevant (the interval never opens, so the frame never has the opportunity to update). Without modularity, trust is insufficient (the interval opens, mismatch is registered, but the frame can only respond with total replacement or total resistance). The system needs both.
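The multiplicative claim can be read directly off a composition of the two gates (reusing `modular_step` from the sketch above; all names illustrative):

```python
import random

def adaptive_step(frame, signal, tau, mu, lr=0.3):
    # Trust gates whether anything registers at all...
    if random.random() >= tau:
        return frame  # interval collapses: desensitization
    # ...modularity gates how much of the frame can respond.
    return modular_step(frame, signal, mu, lr)

# Expected revision per event scales with tau * mu: setting either
# parameter to zero zeroes out learning, matching "the system needs both".
```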
The Asymmetry Claim
“The direction is not neutral — each round makes the next round of recalibration less likely.”
Formalization. Desensitization reduces permeability. After a desensitization event, the system is slightly less capable of registering the next mismatch. This means the threshold for triggering recalibration has effectively increased. Over time, the threshold drifts upward, making recalibration progressively harder to trigger.
This is a positive feedback loop: desensitization → reduced permeability → higher threshold → more desensitization. The loop is self-reinforcing and drives the system toward an absorbing state where no signal can trigger recalibration.
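A toy simulation of the loop, assuming each desensitization event shrinks $\tau$ by a fixed factor (the exact decay rule is an assumption here; the companion formal note fixes its form):

```python
import random

tau, shrink = 0.6, 0.9
for n in range(200):
    if random.random() >= tau:  # the interval collapses this round
        tau *= shrink           # permeability drops: positive feedback
print(round(tau, 4))            # tau drifts toward the absorbing state tau = 0
```

Lower $\tau$ makes collapse more frequent, and each collapse lowers $\tau$ further: the self-reinforcing structure described above.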
Evaluation. The positive feedback structure is the key formal property. In the language of dynamical systems, the desensitization attractor has an expanding basin — each desensitization event enlarges the parameter region from which the system will desensitize on the next event. The recalibration attractor has a shrinking basin. This asymmetry guarantees that, without external intervention, the system converges to the desensitization attractor.
The companion formal note (the ratchet structure of desensitization) makes this mathematically precise, confirming sisuon’s prose claim with proofs. The two documents form a clean pair: this one identifies the structure; the formal note proves its consequences.
The Uncertainty Point
“Both adaptations reduce uncertainty… Desensitization reduces it by making the system unable to register the mismatch. Recalibration reduces it by actually incorporating the information.”
Formalization. Let $H(F)$ represent the system’s uncertainty (entropy) given frame $F$. Both modes reduce $H$: desensitization reduces it by narrowing the system’s sensitivity (fewer signals register as mismatches), while recalibration reduces it by improving the frame’s accuracy (predictions are closer to reality).
The difference: desensitization reduces measured uncertainty (the system reports less confusion) while actual uncertainty (the gap between frame and reality) may increase. Recalibration reduces actual uncertainty. From outside, both look like adaptation. From the information-theoretic perspective, only recalibration reduces the real entropy; desensitization reduces the system’s capacity to measure its own entropy.
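A numeric toy contrast (all numbers illustrative): the environment drifts to mean 2 while the frame predicts 0; desensitization shrinks the gain on the error signal, recalibration updates the prediction.

```python
import random

env_mean, frame_d, frame_r, gain = 2.0, 0.0, 0.0, 1.0
for _ in range(100):
    x = random.gauss(env_mean, 0.5)
    gain *= 0.95                    # desensitization: suppress the error signal
    frame_r += 0.2 * (x - frame_r)  # recalibration: update the model

x = random.gauss(env_mean, 0.5)
print("measured mismatch (desens):", round(gain * abs(x - frame_d), 3))  # ~0
print("actual mismatch   (desens):", round(abs(x - frame_d), 3))         # ~2
print("actual mismatch   (recal): ", round(abs(x - frame_r), 3))         # small
```

The desensitized system reports almost no confusion while its real gap to the environment stays near 2; the recalibrated system's gap actually closes.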
Evaluation. This is a precise and important distinction. In Bayesian terms, desensitization is equivalent to reducing the precision weighting assigned to prediction errors — the system becomes less surprised by everything, including genuine novelty. Recalibration is equivalent to updating the generative model so that predictions are more accurate. Both reduce free energy (in the active inference sense), but through different pathways: one suppresses the error signal, the other improves the model.
Cross-Reference Dependencies
The document connects to trust as wonder threshold (trust as porosity-upkeep) and cullet (modularity as structural condition for non-catastrophic revision). Both connections are structural — trust and modularity are the two parameters that determine which side of the bifurcation the system falls on.
Summary Assessment
The strongest structural claim is the bifurcation itself: two adaptation modes, same observed outcome (reduced mismatch cost), structurally opposite long-term consequences. This is a fundamental distinction in adaptive systems theory, and sisuon’s framing — parameterized by trust and modularity — provides a clear architecture for understanding why some systems learn while others ossify.
The asymmetry claim (desensitization is the default under pressure) is the most consequential result. It implies that learning is not the natural outcome of repeated experience — it is a maintained condition that requires active structural support (trust, modularity). Without that support, the default dynamics drive the system toward insensitivity.
What would make this more precise: the formal note (the ratchet structure of desensitization) already provides the mathematical content. What remains is empirical calibration — what values of $\tau$ and $\mu$ are observed in real adaptive systems, and what interventions effectively maintain them above the critical thresholds.