pith. machine review for the scientific record.

Explain the Lean theorem `recidivismCost_nonneg` in module `IndisputableMonolith.CriminalJustice.RecidivismFromJCost`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

Grok 4.3 reads the canon and writes a Lean-grounded derivation.

1. Plain English Meaning

The theorem recidivismCost_nonneg establishes that the structural cost assigned to a recidivism ratio (the reoffense rate divided by the baseline rate) is always greater than or equal to zero, provided both rates are strictly positive real numbers.

2. Role in Recognition Science

In Recognition Science, physical and systemic structures are treated under a MODEL that minimizes a universal J-cost, where equilibrium corresponds to zero cost. A pre-intervention societal state has a reoffense ratio of r = 1 and thus J = 0. By providing a THEOREM that the cost of any positive ratio is non-negative, RS ensures that every deviation from equilibrium registers as a non-negative structural metric with an absolute floor at zero. This absolute baseline is required to define a meaningful "recognition threshold". The framework makes an empirical HYPOTHESIS that effective rehabilitation interventions push the system by at least a one-φ-step departure (costing J(φ) ≈ 0.118).
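The two numerical anchors above can be checked directly, assuming the canonical symmetric cost J(x) = (x + 1/x)/2 − 1 (the usual J-cost form in Recognition Science; the exact Lean definition lives in the supplied source):

```latex
J(x) = \tfrac{1}{2}\!\left(x + \tfrac{1}{x}\right) - 1,
\qquad J(1) = \tfrac{1}{2}(1 + 1) - 1 = 0,
\qquad J(\varphi) = \tfrac{1}{2}\bigl(\varphi + \varphi^{-1}\bigr) - 1
                  = \varphi - \tfrac{3}{2} \approx 0.118,
```

where the last step uses φ² = φ + 1, so φ⁻¹ = φ − 1. Both the zero at r = 1 and the one-φ-step value 0.118 quoted above are consistent with this form.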

3. Reading the Formal Statement

theorem recidivismCost_nonneg (reoffense baseline : ℝ)
    (hr : 0 < reoffense) (hb : 0 < baseline) :
    0 ≤ recidivismCost reoffense baseline
  • (reoffense baseline : ℝ): Declares the variables representing the two rates as real numbers.
  • (hr : 0 < reoffense) and (hb : 0 < baseline): The hypotheses. The theorem requires both rates to be strictly greater than zero.
  • 0 ≤ recidivismCost reoffense baseline: The conclusion. It asserts that evaluating the recidivismCost function on these inputs yields a non-negative result.
  • The proof script (by unfold recidivismCost; exact Jcost_nonneg (div_pos hr hb)) expands the definition into J(reoffense / baseline), uses Mathlib's div_pos lemma to show that the quotient of two positive reals is positive, and closes the goal with the general J-cost non-negativity theorem Jcost_nonneg.
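A minimal self-contained sketch of how these pieces fit together. The Jcost here is a hypothetical stand-in for the Recognition definition, assumed to be the symmetric cost (x + 1/x)/2 − 1, and its non-negativity proof is spelled out locally rather than cited; only the final theorem reproduces the statement and proof script from the source:

```lean
import Mathlib

-- Hypothetical stand-in: the real Jcost lives in the Recognition
-- source; here we assume J(x) = (x + 1/x)/2 - 1.
noncomputable def Jcost (x : ℝ) : ℝ := (x + 1 / x) / 2 - 1

-- For x > 0: x + 1/x - 2 = (x - 1)^2 / x ≥ 0, hence J(x) ≥ 0.
theorem Jcost_nonneg {x : ℝ} (hx : 0 < x) : 0 ≤ Jcost x := by
  have hx' : x ≠ 0 := ne_of_gt hx
  have key : x + 1 / x - 2 = (x - 1) ^ 2 / x := by
    field_simp
    ring
  have h : 0 ≤ x + 1 / x - 2 := by
    rw [key]
    exact div_nonneg (sq_nonneg _) hx.le
  unfold Jcost
  linarith

-- The recidivism cost is J evaluated at the reoffense/baseline ratio.
noncomputable def recidivismCost (reoffense baseline : ℝ) : ℝ :=
  Jcost (reoffense / baseline)

-- The theorem under discussion, with the proof script from the source.
theorem recidivismCost_nonneg (reoffense baseline : ℝ)
    (hr : 0 < reoffense) (hb : 0 < baseline) :
    0 ≤ recidivismCost reoffense baseline := by
  unfold recidivismCost
  exact Jcost_nonneg (div_pos hr hb)
```

Note how the positivity hypotheses hr and hb are consumed exactly once, by div_pos, to license the application of Jcost_nonneg.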

4. Visible Dependencies and Certificates

The declaration directly depends on the definition recidivismCost, which bridges the ratio to the underlying Jcost.

Structurally, it acts as a required pillar of the module's certificate: it discharges the cost_nonneg field of the RecidivismCert structure. This certificate is instantiated via cert, which in turn proves cert_inhabited, witnessing that recidivism ratios obey the required RS cost symmetries.
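The certificate pattern can be sketched as follows. The field and instance names mirror those cited above, but the actual RecidivismCert may carry additional fields, so the cost function is abstracted here as a parameter rather than copied from the source:

```lean
import Mathlib

-- Hypothetical sketch of the certificate pattern: a structure whose
-- field demands non-negativity, an instance discharging it, and an
-- inhabitation theorem witnessing the instance.
structure RecidivismCert (cost : ℝ → ℝ → ℝ) : Prop where
  cost_nonneg : ∀ r b : ℝ, 0 < r → 0 < b → 0 ≤ cost r b

-- Any cost of the form J(r/b), with J non-negative on positive
-- inputs, satisfies the certificate.
theorem cert {J : ℝ → ℝ} (hJ : ∀ {x : ℝ}, 0 < x → 0 ≤ J x) :
    RecidivismCert (fun r b => J (r / b)) :=
  ⟨fun _ _ hr hb => hJ (div_pos hr hb)⟩

theorem cert_inhabited {J : ℝ → ℝ} (hJ : ∀ {x : ℝ}, 0 < x → 0 ≤ J x) :
    Nonempty (RecidivismCert (fun r b => J (r / b))) :=
  ⟨cert hJ⟩
```

The design point is that cert_inhabited exposes only the existence of a valid certificate, so downstream modules can depend on the interface without unfolding the cost function.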

5. What This Does Not Prove

This THEOREM is purely structural. It does not prove:

  • The exact value of the cost at any specific ratio (e.g., that r=1 yields zero is separately proved by recidivismCost_at_equilibrium).
  • What happens if the baseline rate is exactly zero (division by zero is structurally excluded by the hypotheses).
  • Any empirical claim. It does not prove that a specific cognitive-behavioral therapy intervention actually achieves a recidivism reduction. That remains an empirical HYPOTHESIS subject to falsification via randomized controlled trials.

cited recognition theorems

  • recidivismCost_nonneg
  • Jcost_nonneg
  • recidivismCost_at_equilibrium
  • cert, cert_inhabited (via the RecidivismCert structure)

outside recognition

Aspects Recognition does not yet address:

  • The empirical data showing that any real-world rehabilitation program actually meets the one-φ-step recognition threshold.

recognition modules consulted

  • IndisputableMonolith.CriminalJustice.RecidivismFromJCost

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.