pith. machine review for the scientific record.
theorem · proved · term proof · high

circuitSeparation


The circuit separation theorem assembles four proved components showing that recognition reaches zero J-cost in linear steps for satisfiable formulas while circuits cannot universally decode without reading all inputs. Complexity theorists studying the Recognition Science reduction of P versus NP would reference this to close the structural part of the argument. The proof proceeds by direct term construction that substitutes the upstream bounds on recognition time, moat width, decoder failure, and capacity.

claim

The structure CircuitSeparation holds with four fields: for every $n$ and satisfiable CNF formula $f$ on $n$ variables there exist steps $k$ and assignment $a$ with $k$ at most $n$ and J-cost of $f$ under $a$ equal to zero; for every unsatisfiable $f$, every assignment has J-cost at least one; no decoder on a proper subset of the $n$ inputs recovers the hidden bit for all balanced parity encodings; and every polynomial-size circuit has polynomial Z-capacity.
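As a reading aid, the four fields can be sketched as a Lean structure. The field names follow the proof body on this page; the carrier types (CNF, Assignment, satJCost, and the decoder/capacity predicates) are assumptions for illustration, not the module's actual declarations:

```lean
-- Sketch only: field names match the proof body below; the types
-- CNF, Assignment, satJCost, etc. are assumed, not from the module.
structure CircuitSeparation : Prop where
  rhat_polytime : ∀ n (f : CNF n), Satisfiable f →
    ∃ k (a : Assignment n), k ≤ n ∧ satJCost f a = 0
  moat_exists : ∀ n (f : CNF n), ¬ Satisfiable f →
    ∀ a : Assignment n, 1 ≤ satJCost f a
  circuit_blind : ∀ n (M : Finset (Fin n)), M ⊂ Finset.univ →
    ∀ decoder, ∃ e : BalancedParityEncoding n, ¬ DecodesHiddenBit decoder e
  poly_circuit_bounded : ∀ n (c : Circuit n), PolySize c → PolyZCapacity c
```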

background

The module treats Boolean circuits as feed-forward sub-ledgers on the Z³ lattice that lack global J-cost coupling, in contrast to the full recognition process R̂. CircuitSeparation is the structure that packages the separation claim: R̂ decides SAT in O(n) recognition steps while any circuit deciding SAT must read all n inputs. Upstream results supply the pieces: moat_width_unsat proves that every assignment to an unsatisfiable formula has satJCost at least 1, while no_sublinear_universal_decoder shows that any decoder missing at least one input fails on some balanced parity instance.
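The two upstream lemmas named here plausibly have the following shapes, matching how the proof body applies them (moat_width_unsat takes the formula and an unsatisfiability witness; no_sublinear_universal_decoder takes the variable count, the read set, a proper-subset proof, and the decoder). Binders and types are guesses for illustration, not the module's declarations:

```lean
-- Assumed shapes only; the real statements live upstream in the module.
theorem moat_width_unsat {n : ℕ} (f : CNF n) (h : ¬ Satisfiable f) :
    ∀ a : Assignment n, 1 ≤ satJCost f a := sorry

theorem no_sublinear_universal_decoder (n : ℕ)
    (M : Finset (Fin n)) (hM : M ⊂ Finset.univ) (decoder : Decoder M) :
    ∃ e : BalancedParityEncoding n, ¬ DecodesHiddenBit decoder e := sorry
```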

proof idea

The term proof constructs the structure by direct substitution. It applies sat_recognition_time_bound to populate the rhat_polytime field, moat_width_unsat to populate moat_exists, no_sublinear_universal_decoder to populate circuit_blind, and poly_circuit_poly_capacity to populate poly_circuit_bounded.

why it matters in Recognition Science

This declaration supplies the separation structure to circuitLedgerCert, which in turn feeds pvsNPMasterCert in the PvsNPAssembly module. It completes the three proved components of the circuit separation claim, leaving only the spectral gap to Turing simulation overhead as the remaining open piece. The result sits inside the Complexity domain reduction of P versus NP that relies on J-cost landscapes and the phi-ladder eigenvalue structure.

scope and limits

Lean usage

def useSite : CircuitLedgerCert := { circuitLedgerCert with separation := circuitSeparation }

formal statement (Lean)

theorem circuitSeparation : CircuitSeparation where
  rhat_polytime := fun n f h =>

proof body

Term-mode proof.

    let ⟨steps, a, hle, ha⟩ := sat_recognition_time_bound f h
    ⟨steps, a, hle, ha⟩
  moat_exists := fun _n f h => moat_width_unsat f h
  circuit_blind := fun n M hM decoder => no_sublinear_universal_decoder n M hM decoder
  poly_circuit_bounded := fun _n c h => poly_circuit_poly_capacity c h
/-! ## Part 6: The Open Gap — Spectral to Turing Bridge -/

/-- **OPEN GAP**: To conclude P ≠ NP from the circuit separation, we need
    the **spectral gap → TM simulation overhead** bridge:

    If R̂ convergence on the SAT J-cost landscape takes T_R recognition steps,
    and each recognition step is a global gradient move on Z³ that cannot be
    simulated locally by a Turing machine in fewer than Ω(n) tape operations,
    then any TM simulation of R̂'s SAT certificate requires Ω(n · T_R) steps.

    Combined with T_R = O(n), this gives Ω(n²) TM steps per SAT query,
    separating NTIME(n) from DTIME(n) and ultimately P from NP.

    The spectral gap of the J-cost landscape on Z³:
      gap ≈ 1 − O(1/n)   (from the φ-lattice eigenvalue structure)
    giving convergence in T_R = O(n/gap) = O(n) recognition steps.

    What is needed in Lean:
    1. Formalize the J-cost Laplacian on the n-variable assignment cube
    2. Bound its spectral gap below (Cheeger inequality in the φ-lattice)
    3. Prove: simulating one global gradient step on an n-node graph
       requires Ω(n) local tape operations
    4. Conclude: T_TM ≥ T_R × Ω(n) = Ω(n²) for satisfying instances

    This is the `TuringBridge.the_open_gap` from the TuringBridge module.
-/
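The docstring's program could be recorded as a single open statement. The sketch below is hypothetical: every name in it is an illustrative placeholder, not a declaration from TuringBridge, and the hypotheses simply transcribe the docstring's two inputs (Ω(n) tape operations per global gradient step, and T_R = O(n) convergence from the spectral gap) feeding the Ω(n · T_R) = Ω(n²) conclusion:

```lean
-- Hypothetical placeholder; all names are illustrative, not from TuringBridge.
theorem spectral_to_turing_bridge (n T_R : ℕ)
    (h_step : ∀ s : TMSimulation n, n ≤ s.opsPerRecognitionStep)  -- Ω(n) per step
    (h_conv : ∃ C, T_R ≤ C * n) :                                 -- T_R = O(n)
    ∀ s : TMSimulation n, n * T_R ≤ s.totalSteps := sorry
```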

used by (2)

From the project-wide theorem graph. These declarations reference this one in their body.

depends on (35)

Lean names referenced from this declaration's body.
