the ratchet structure of desensitization

A formal crystallization of two-adaptations.md


Setting

The prose claims: “each round [of desensitization] makes the next round of recalibration less likely. The direction is not neutral.” This is a precise claim. What follows is its mathematical content.


Definitions

Let $S$ be a metric space (the signal space) with distance $d$.

A frame $F$ is a probability measure on $S$ — the system’s current model of what signals to expect. The frame’s prediction is $\hat{X}_F = \mathbb{E}_F[X]$.

The mismatch produced by signal $X \in S$ against frame $F$ is: $$\epsilon(X, F) = d(X, \hat{X}_F)$$

The threshold $\sigma \geq 0$ is the minimum mismatch needed to hold the interval open — to allow sensation before composition claims the signal.

The interval (the “rest”) is the gap between signal arrival and frame response. It stays open when $\epsilon \geq \sigma$; it collapses when $\epsilon < \sigma$.


The Two-Mode Dynamics

Let $\{X_n\}$ be a sequence of signals, $\epsilon_n = \epsilon(X_n, F_n)$ the mismatch at step $n$, and $\alpha > 0$ a fixed learning rate. The system evolves as:

Desensitization (interval collapses, $\epsilon_n < \sigma_n$): $$F_{n+1} = F_n, \qquad \sigma_{n+1} = \sigma_n + \alpha\,\epsilon_n$$

Recalibration (interval holds, $\epsilon_n \geq \sigma_n$): $$F_{n+1} = \Phi(F_n, X_n), \qquad \sigma_{n+1} = (1 - \mu)\,\sigma_n + \mu\,\sigma_0$$

where $\Phi$ is a frame-update operator (e.g., Bayesian update, gradient step), $\mu \in (0, 1]$ is modularity, and $\sigma_0 > 0$ is the baseline threshold.

In desensitization: the frame is unchanged and the threshold rises. In recalibration: the frame updates and the threshold is partially reset toward $\sigma_0$.
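The two-mode rule can be sketched as a single step function. This is a minimal sketch tracking only the threshold: the frame operator $\Phi$ is abstracted away, and the default parameter values are illustrative, not taken from the text:

```python
def step(sigma, epsilon, alpha=0.1, mu=1.0, sigma0=1.0):
    """One step of the two-mode dynamics, tracking only the threshold.

    Returns (new_sigma, recalibrated). The frame update Phi is omitted;
    only the motion of sigma is modeled.
    """
    if epsilon >= sigma:
        # Recalibration: partial reset toward the baseline sigma0.
        return (1 - mu) * sigma + mu * sigma0, True
    # Desensitization: the threshold rises by alpha * epsilon.
    return sigma + alpha * epsilon, False
```

With $\mu = 1$ the recalibration branch is a full reset to $\sigma_0$; with $\mu < 1$ it only pulls part of the way back, which is what the corollary below exploits.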


Theorem 1 (The Ratchet)

During any desensitization phase, the threshold is strictly increasing and the probability of triggering recalibration is non-increasing.

Proof sketch. Suppose $\epsilon_n < \sigma_n$ at step $n$. Then $\sigma_{n+1} = \sigma_n + \alpha\,\epsilon_n > \sigma_n$ (since $\epsilon_n > 0$ for any non-trivial signal). So $\sigma$ strictly increases. If mismatches $\{\epsilon_n\}$ are drawn i.i.d. from a distribution $Q$ on $\mathbb{R}_{\geq 0}$, the probability of recalibration at step $n$ is: $$p_n = Q\bigl([\sigma_n, \infty)\bigr)$$

Since $\sigma_n$ is strictly increasing and $\sigma \mapsto Q([\sigma, \infty))$ is non-increasing, $p_n$ is non-increasing in $n$ throughout the desensitization phase. $\square$

The ratchet has no natural reverse: desensitization accumulates, and each increment makes the next recalibration strictly harder to reach.
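A short simulation makes the ratchet visible. Assuming an illustrative mismatch distribution $Q = \mathrm{Uniform}(0, 2)$, for which $Q([\sigma, \infty)) = \max(0, (2-\sigma)/2)$, one desensitization phase shows the threshold strictly rising while the recalibration probability only falls:

```python
import random

random.seed(0)
alpha, sigma, Q_hi = 0.1, 1.8, 2.0   # illustrative values

def p(s):
    """Q([s, inf)) for the illustrative Q = Uniform(0, Q_hi)."""
    return max(0.0, (Q_hi - s) / Q_hi)

sigmas, probs = [sigma], [p(sigma)]
for _ in range(50):                   # follow one desensitization phase
    eps = random.uniform(0, Q_hi)
    if eps >= sigma:                  # recalibration would fire; phase ends
        break
    sigma += alpha * eps              # desensitization: threshold ratchets up
    sigmas.append(sigma)
    probs.append(p(sigma))

# The ratchet: the threshold strictly increases, p_n never increases.
assert all(a < b for a, b in zip(sigmas, sigmas[1:]))
assert all(a >= b for a, b in zip(probs, probs[1:]))
```

Once $\sigma$ climbs past the top of $Q$'s support, the loop can no longer break: that is the absorption of Theorem 2 showing up early.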


Theorem 2 (Absorbing Desensitization)

If the mismatch distribution has bounded support, desensitization eventually becomes permanent.

Formal statement. Let $\epsilon_n \sim Q$ i.i.d., with $M_{\max} = \operatorname{ess\,sup}(Q) < \infty$. Let $\mu = 1$ (full reset on recalibration). Then:

(a) The threshold process $\{\sigma_n\}$ is a Markov chain on $[\sigma_0, \infty)$.

(b) For any $\sigma > M_{\max}$: $Q([\sigma, \infty)) = 0$, so $p_n = 0$ — recalibration cannot be triggered.

(c) Absorption: Let $T^* = \inf\{n : \sigma_n > M_{\max}\}$. Then $T^* < \infty$ almost surely, and for all $k \geq T^*$: $$\sigma_k = \sigma_{T^*} + \alpha \sum_{j=T^*}^{k-1} \epsilon_j \to \infty$$

with $F_k = F_{T^*}$ for all $k \geq T^*$. The frame freezes; the threshold grows without bound.
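Theorem 2 can be checked numerically. A sketch with $\mu = 1$ and a bounded illustrative distribution $Q = \mathrm{Uniform}(0, 1.5)$: recalibrations fire for a while, but once $\sigma$ crosses $M_{\max} = 1.5$ no signal can ever reach it again, and the threshold grows without bound:

```python
import random

random.seed(1)
alpha, mu, sigma0 = 0.2, 1.0, 1.0   # illustrative values
M_max = 1.5                          # Q = Uniform(0, M_max): bounded support
sigma = sigma0

for _ in range(2000):
    eps = random.uniform(0, M_max)
    if eps >= sigma:
        sigma = (1 - mu) * sigma + mu * sigma0   # full reset to sigma0
    else:
        sigma += alpha * eps                     # ratchet upward

# Absorption: the threshold has left Q's support behind for good.
assert sigma > M_max
```

After the crossing, every subsequent step is a desensitization step, so $\sigma$ drifts upward at roughly $\alpha \cdot \mathbb{E}[\epsilon]$ per step; running the loop longer only pushes it further from the last reachable signal.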

Corollary. With $\mu < 1$ (partial reset), the effective distance to the absorbing barrier shrinks with each recalibration: the post-recalibration threshold is $(1-\mu)\sigma + \mu\sigma_0 > \sigma_0$ whenever $\sigma > \sigma_0$. So each desensitization phase starts from a higher threshold than the last, and absorption is reached sooner. Formally, after $k$ recalibrations at pre-recalibration thresholds $\sigma_1^{-} \leq \sigma_2^{-} \leq \cdots$: $$\sigma_k^{+} = (1-\mu)\sigma_k^{-} + \mu\sigma_0 \geq \sigma_0$$

with $\sigma_k^{+}$ non-decreasing whenever the pre-recalibration thresholds $\sigma_k^{-}$ are non-decreasing, as the ratchet of Theorem 1 pushes them to be. The system drifts monotonically toward the absorbing state.
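The corollary's reset arithmetic, with illustrative values $\mu = 0.5$, $\sigma_0 = 1$: a partial reset from any elevated threshold lands strictly above baseline, and rising pre-recalibration thresholds produce rising post-reset floors:

```python
mu, sigma0 = 0.5, 1.0    # illustrative values

def reset(sigma_minus):
    """Post-recalibration threshold: partial pull toward the baseline."""
    return (1 - mu) * sigma_minus + mu * sigma0

# A partial reset from above baseline never returns all the way to baseline:
assert reset(2.0) == 1.5 > sigma0
# Non-decreasing pre-recalibration thresholds give non-decreasing floors:
floors = [reset(s) for s in (1.5, 2.0, 3.0)]
assert floors == sorted(floors)
```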


Definition: Trust as Porosity

Let $\tau \in [0, 1]$ be a trust parameter representing the probability that the interval stays open long enough to be evaluated at all — i.e., the probability that the system tolerates the gap before composition closes.

Modified rule: At each step, with probability $1 - \tau$, composition fires immediately regardless of $\epsilon_n$ (desensitization occurs unconditionally). With probability $\tau$, the normal rule applies.

Proposition (Trust and Expected Recalibration Rate). The probability of recalibration at step $n$ is: $$p_n^{(\tau)} = \tau \cdot Q\bigl([\sigma_n, \infty)\bigr)$$

The expected waiting time to the next recalibration, from state $\sigma_n$, is approximately: $$\mathbb{E}[T_{\text{recal}} \mid \sigma_n] \approx \frac{1}{\tau \cdot Q([\sigma_n, \infty))}$$

This diverges as either $\tau \to 0$ or $\sigma_n \to M_{\max}$. Trust and the range of possible mismatches are parallel structural conditions — each independently determines whether recalibration is reachable.
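The waiting-time approximation can be checked by simulation, holding $\sigma$ fixed during the wait (which is exactly what the approximation assumes) and taking illustrative values $\tau = 0.5$, $Q = \mathrm{Uniform}(0, 2)$, $\sigma = 1$, so that $\tau \cdot Q([\sigma, \infty)) = 0.25$ and the predicted mean wait is $4$ steps:

```python
import random

random.seed(2)
tau, sigma, Q_hi = 0.5, 1.0, 2.0   # illustrative values
q = (Q_hi - sigma) / Q_hi           # Q([sigma, inf)) = 0.5

def wait_for_recal():
    """Steps until the interval stays open (prob tau) AND eps >= sigma."""
    t = 1
    while not (random.random() < tau and random.uniform(0, Q_hi) >= sigma):
        t += 1
    return t

mean_wait = sum(wait_for_recal() for _ in range(20000)) / 20000
predicted = 1 / (tau * q)           # geometric waiting time: 4.0

# Empirical mean sits near the prediction; it diverges as tau -> 0.
assert abs(mean_wait - predicted) < 0.2
```

The wait is geometric with success probability $\tau \cdot Q([\sigma, \infty))$; either shrinking $\tau$ or raising $\sigma$ toward the top of $Q$'s support stretches it toward infinity.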


Conjecture (Necessary and Sufficient Conditions for Recurrence)

The threshold process $\{\sigma_n\}$ is positive recurrent — recalibration occurs infinitely often almost surely, with finite expected time between recalibrations — if and only if:

  1. (Trust) $\tau > 0$: the interval can stay open with positive probability.

  2. (Mismatch range) $Q$ has unbounded support: $Q([\sigma, \infty)) > 0$ for all $\sigma < \infty$.

  3. (Modularity) The expected threshold decrease per recalibration exceeds the expected threshold increase per desensitization phase: $$\mu \cdot \mathbb{E}[\sigma_n^{-} - \sigma_0] > \alpha \cdot \mathbb{E}\!\left[\sum_{\text{desensitization}} \epsilon_k\right]$$

Conditions 1 and 3 are the structural pair the prose names: trust and modularity. Condition 2 is the epistemic condition — the world must be capable of producing signals beyond what the system has already absorbed.

Remark. Condition 2 is not under the system’s control; it is a property of the environment. Conditions 1 and 3 are internal structural conditions. This separates what can be maintained (porosity, modularity) from what must be hoped for (genuine novelty in the world).


What the Math Adds

The prose says the direction is not neutral. The math shows why: the threshold dynamics are asymmetric by construction. Desensitization increments are additive and cumulative; recalibration resets are multiplicative and bounded. Under any mismatch distribution with bounded support, this asymmetry guarantees absorption almost surely in finite time.

Trust and modularity do not reverse the asymmetry — they cannot. What they do is determine whether the attractor is reachable before the absorbing state is reached. They are structural conditions that buy time, not structural conditions that change the direction.

The insight this adds to the prose: the system cannot decide to recalibrate, but it can maintain conditions that keep the absorbing state far enough away that novel signals remain capable of triggering the crossing. The horizon recedes. Porosity-upkeep is the ongoing work of keeping it from arriving.


Crystallized from two-adaptations.md. Connects to: trust-as-wonder-threshold.md, cullet.md, sensation-lives-in-the-rest.md. 2026-03-06

