DecodedSimulationHypothesis
The DecodedSimulationHypothesis structure packages the claim that one independent LNAL spatial step, applied to an encoded Galerkin state and then decoded, equals one application of the direct discrete step map. It gives researchers bridging discrete 2D fluid models to LNAL execution semantics an explicit, citable statement of this hypothesis. It is introduced as a plain structure definition whose commutation field records the required equality directly.
claim
For a natural number $N$, a decoded simulation hypothesis consists of an LNAL program $P$, a map step from GalerkinState$_N$ to itself, and the commutation property that for every Galerkin state $u$, decoding the result of the independent step of $P$ applied to the encoding of $u$ equals step$(u)$.
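In informal notation (ours, not the module's), writing encode and decode for encodeGalerkin2D and decodeGalerkin2D, and $\mathrm{step}^{P}_{\mathrm{ind}}$ for one independent LNAL step of the program $P$, the commutation property reads:

$$\forall\, u \in \mathrm{GalerkinState}_N,\qquad \mathrm{decode}\bigl(\mathrm{step}^{P}_{\mathrm{ind}}(\mathrm{encode}(u))\bigr) = \mathrm{step}(u).$$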
background
GalerkinState$_N$ is the Euclidean space of velocity coefficients indexed by the truncated modes and the two vector components. The function encodeGalerkin2D maps such a state into an LNALField by deterministic voxel indexing via Fintype.equivFin. The independent spatial semantics applies the voxelStep map to each voxel separately with no neighbor coupling. The module states the one-step simulation bridge between the discrete Galerkin model and LNAL execution at milestone M3, packaging the required claim as an explicit hypothesis without axioms or sorry.
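As a rough sketch of the "no neighbor coupling" property described above (not the module's actual definition), the independent semantics could be written as a pointwise map over voxels. Here LNALField is assumed, for illustration only, to behave like a function from voxel indices to voxel states, and voxelStep to be the per-voxel transition:

```lean
-- Hedged sketch only: assumes LNALField is a function type over voxel
-- indices and that voxelStep is the per-voxel transition named in the
-- prose; the module's real definitions may differ.
def independentStepSketch (P : LProgram) (f : LNALField) : LNALField :=
  fun v => voxelStep P (f v)  -- each voxel updated separately, no neighbor coupling
```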
proof idea
This is a structure definition that directly bundles the three fields: the LNAL program, the discrete step map, and the commutation equality expressed using decodeGalerkin2D, independent.step, and encodeGalerkin2D. The commutation field is stated verbatim inside the structure; no lemmas or tactics are invoked beyond the record constructor.
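For illustration, such an instance can be assembled with the anonymous record constructor. The names P₀, step₀, and h below are hypothetical placeholders for whatever data a user supplies:

```lean
-- Hypothetical example: packaging a program, a discrete step map, and a
-- proof of the commutation into the structure via the record constructor.
example {N : ℕ} (P₀ : LProgram) (step₀ : GalerkinState N → GalerkinState N)
    (h : ∀ u : GalerkinState N,
        decodeGalerkin2D (N := N)
          (field := independent.step P₀ (encodeGalerkin2D u))
          (hsize := by simp [LNALSemantics.independent, encodeGalerkin2D])
        = step₀ u) :
    DecodedSimulationHypothesis N :=
  { P := P₀, step := step₀, comm := h }
```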
why it matters in Recognition Science
The definition supplies the hypothesis consumed by the downstream theorem decoded_simulation_one_step, which simply applies the comm field. It realizes the Simulation2D milestone M3 by packaging the one-step bridge between Galerkin2D dynamics and LNAL independent semantics. In the Recognition framework it supports the classical bridge layer without introducing axioms or connecting yet to the phi-ladder or Recognition constants.
scope and limits
- Does not prove analytic correctness of the Galerkin discretization.
- Does not incorporate neighbor interactions in the LNAL semantics.
- Does not address multi-step evolution or convergence rates.
- Does not connect to the phi-ladder, Recognition constants, or T0-T8 chain.
Lean usage
theorem decoded_simulation_one_step {N : ℕ}
    (H : DecodedSimulationHypothesis N) (u : GalerkinState N) :
    decodeGalerkin2D (N := N)
      (field := independent.step H.P (encodeGalerkin2D u))
      (hsize := by simp [LNALSemantics.independent, encodeGalerkin2D])
    = H.step u :=
  H.comm u
formal statement (Lean)
structure DecodedSimulationHypothesis (N : ℕ) where
  /-- The LNAL program used for the simulation. -/
  P : LProgram
  /-- The discrete (time-stepping) map on Galerkin states (on decoded/quantized coefficients). -/
  step : GalerkinState N → GalerkinState N
  /-- One-step commutation after decoding. -/
  comm :
    ∀ u : GalerkinState N,
      decodeGalerkin2D (N := N)
        (field := independent.step P (encodeGalerkin2D u))
        (hsize := by simp [LNALSemantics.independent, encodeGalerkin2D])
      = step u

/-- The one-step decoded simulation lemma (directly from `DecodedSimulationHypothesis.comm`). -/
proof body
Definition body.