pith. machine review for the scientific record.
theorem · proved · tactic proof · high

unsat_cost_lower_bound


For any unsatisfiable CNF formula, the J-cost of every assignment is bounded below by 1. Researchers working on recognition models of complexity cite the result to separate the zero-cost landscape of satisfiable instances from the obstructed landscape of unsatisfiable ones. The proof applies the strict-positivity lemma for unsatisfiable formulas, transfers the real-valued inequality to the natural-number clause count by integer casting, closes the gap with the omega tactic, and casts back to obtain the real-valued bound.

claim

Let $f$ be a CNF formula over $n$ variables that is unsatisfiable. Then for every assignment $a$, the J-cost of $f$ under $a$ satisfies $J(f,a) = |\{\,c \in f : c \text{ unsatisfied by } a\,\}| \geq 1$. For example, with $f = (x) \wedge (\lnot x)$ every assignment falsifies exactly one clause, so $J(f,a) = 1$ and the bound is tight.

background

A CNFFormula is a list of clauses over $n$ variables; an Assignment is a map Fin n → Bool that evaluates literals and clauses in the usual way. The function satJCost returns the real number equal to the count of clauses left unsatisfied by a given assignment, serving as the J-cost in the Recognition Science encoding of SAT. The module shows that satisfiable instances reach zero J-cost in linear recognition steps while unsatisfiable instances carry a topological obstruction that keeps every J-cost positive. This theorem rests on the upstream lemma unsat_positive_jcost, which already establishes that every assignment yields strictly positive J-cost when the formula is unsatisfiable.
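The upstream definitions are not shown on this page; a minimal Lean sketch consistent with the names it uses (CNFFormula, Assignment, satisfiedBy, satJCost, isUNSAT) might look as follows. This is an illustrative reconstruction, not the project's actual code, which may differ in details such as the shape of literals.

import Mathlib.Data.Real.Basic

abbrev Assignment (n : ℕ) := Fin n → Bool

structure Literal (n : ℕ) where
  var      : Fin n
  positive : Bool

structure Clause (n : ℕ) where
  lits : List (Literal n)

structure CNFFormula (n : ℕ) where
  clauses : List (Clause n)

-- A clause is satisfied when some literal evaluates to true.
def Clause.satisfiedBy {n : ℕ} (c : Clause n) (a : Assignment n) : Bool :=
  c.lits.any fun l => if l.positive then a l.var else !(a l.var)

-- J-cost: the number of clauses the assignment leaves unsatisfied, cast to ℝ.
noncomputable def satJCost {n : ℕ} (f : CNFFormula n) (a : Assignment n) : ℝ :=
  ((f.clauses.filter (fun c => !c.satisfiedBy a)).length : ℝ)

-- Unsatisfiability: every assignment leaves some clause unsatisfied.
def CNFFormula.isUNSAT {n : ℕ} (f : CNFFormula n) : Prop :=
  ∀ a : Assignment n, ∃ c ∈ f.clauses, c.satisfiedBy a = false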

proof idea

The tactic proof introduces an arbitrary assignment $a$, invokes unsat_positive_jcost to obtain satJCost f a > 0, reduces that real inequality to strict positivity of the natural-number length of the filtered list of unsatisfied clauses via exact_mod_cast, uses omega to strengthen positivity to a lower bound of 1, and casts back to conclude satJCost f a ≥ 1.
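The arithmetic core of that chain is simply that a natural number with strictly positive real cast is at least 1. In isolation (assuming Mathlib), the cast pattern looks like this:

import Mathlib

-- A natural number whose real cast is strictly positive is at least 1.
example (k : ℕ) (h : (0 : ℝ) < (k : ℝ)) : 1 ≤ k := by
  have hk : 0 < k := by exact_mod_cast h
  omega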

why it matters in Recognition Science

The declaration supplies the concrete lower bound consumed by moat_width_unsat and landscapeDepth_unsat; it appears verbatim in the unsat_obstruction field of dissolution_holds and in the local_certifier_fails component of rsatSeparation. It therefore closes the UNSAT half of the recognition-time separation between SAT and its complement inside the R̂ model, without resolving classical P versus NP. The result aligns with the module's claim that recognition complexity differs from Turing time precisely because of this positive J-cost obstruction.

scope and limits

The bound is internal to the Recognition Science encoding: it constrains the J-cost landscape of the R̂ model and makes no claim about classical P versus NP or about Turing-machine running time.

Lean usage

theorem moat_width_unsat {n : ℕ} (f : CNFFormula n) (h : f.isUNSAT) :
    ∀ a : Assignment n, satJCost f a ≥ 1 :=
  unsat_cost_lower_bound f h
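As a further illustration, any downstream lemma that needs only strict positivity can derive it from the bound in one step. This example is hypothetical, not from the project, and assumes the project's definitions are in scope:

example {n : ℕ} (f : CNFFormula n) (h : f.isUNSAT) (a : Assignment n) :
    0 < satJCost f a := by
  have h1 := unsat_cost_lower_bound f h a
  linarith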

formal statement (Lean)

theorem unsat_cost_lower_bound {n : ℕ} (f : CNFFormula n) (h : f.isUNSAT) :
    ∀ a : Assignment n, satJCost f a ≥ 1 := by

proof body

Tactic-mode proof.

  intro a
  have hpos := unsat_positive_jcost f h a
  -- satJCost is a natural-number length cast to ℝ; > 0 implies ≥ 1
  have hnat : 1 ≤ (f.clauses.filter (fun c => !c.satisfiedBy a)).length := by
    have : 0 < (f.clauses.filter (fun c => !c.satisfiedBy a)).length := by
      unfold satJCost at hpos; exact_mod_cast hpos
    omega
  unfold satJCost
  exact_mod_cast hnat

used by (4)

From the project-wide theorem graph. These declarations reference this one in their body.

depends on (16)

Lean names referenced from this declaration's body.