theorem conservation_from_balance (proved)
Lean source: IndisputableMonolith.Foundation.LedgerForcing on GitHub, at line 168.
Derivations using this theorem
depends on: comp_apply, reciprocal, add_assoc, add_comm, balanced, balanced_list, flow_contribution, flow_contribution_reciprocal, net_flow, reciprocal_inj, reciprocal_reciprocal, RecognitionEvent

used by: (empty)
formal source
support the multiset argument. A cleaner proof would use Multiset.sum. For now, we
observe that the algebraic structure guarantees conservation.
-/
theorem conservation_from_balance (L : Ledger) (_hbal : balanced L) (agent : ℕ) :
    net_flow L agent = 0 := by
  have hbal : balanced_list L.events := _hbal

  -- Rewrite `net_flow` as a `List.sum` of `flow_contribution`.
  have step_eq :
      ∀ (acc : ℝ) (e : RecognitionEvent),
        (if e.source = agent then acc + Real.log e.ratio
         else if e.target = agent then acc + Real.log e.ratio
         else acc)
          = acc + flow_contribution e agent := by
    intro acc e
    unfold flow_contribution
    by_cases hs : e.source = agent
    · simp [hs]
    · by_cases ht : e.target = agent
      · simp [hs, ht]
      · simp [hs, ht]

  have h_foldl :
      ∀ acc,
        L.events.foldl (fun acc e =>
            if e.source = agent then acc + Real.log e.ratio
            else if e.target = agent then acc + Real.log e.ratio
            else acc) acc
          =
        L.events.foldl (fun acc e => acc + flow_contribution e agent) acc := by
    intro acc
    induction L.events generalizing acc with
    | nil =>
        simp
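The listing above (truncated at the `nil` case) converts a conditional foldl into a foldl of `flow_contribution`. The remaining bridge from that foldl to a plain sum, the `Multiset.sum` route the source comment mentions, can be sketched as follows. This is an illustrative fragment under assumed definitions, not checked against this repository or current Mathlib lemma names:

```lean
-- Sketch: a foldl that only ever adds `f e` equals the starting
-- accumulator plus the sum of the mapped list. With this, `net_flow`
-- reduces to `(L.events.map (fun e => flow_contribution e agent)).sum`,
-- and the balance hypothesis can be applied to the sum directly.
example (f : RecognitionEvent → ℝ) (l : List RecognitionEvent) :
    ∀ acc : ℝ, l.foldl (fun a e => a + f e) acc = acc + (l.map f).sum := by
  induction l with
  | nil => intro acc; simp
  | cons e es ih =>
      intro acc
      simp [List.foldl, ih, add_assoc]
```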
papers checked against this theorem (showing 22 of 22)
- Text encoding lets one model handle planning, detection and maps from camera images
  "This approach allows EMMA to jointly process various driving tasks in a unified language space"
- MEMIT edits thousands of facts into large language models
  "inspired by the ROME direct editing method... modify a sequence of layers and develop a way for thousands of modifications to be performed simultaneously"
- Unique reciprocal cost on ratios forces balanced discrete ledgers
  "Under deterministic update semantics and minimality (no intra-tick ordering metadata), we derive atomic ticks (at most one event per tick). Explicit structural assumptions (conservation, no sources/sinks, pairwise locality, quantization in δℤ) force balanced double-entry postings"
- Trillion-parameter models scale recommendation quality with compute
  "We reformulate recommendation problems as sequential transduction tasks within a generative modeling framework (Generative Recommenders), and propose a new architecture, HSTU, designed for high cardinality, non-stationary streaming recommendation data."
- Gated DeltaNet beats Mamba2 on retrieval and long sequences
  "the gated delta rule... combines both approaches... hardware-efficient chunkwise algorithm"
- Sliding window halves GRPO training time for flow models
  "the entire denoising process to be framed as a Markov Decision Process (MDP) in a stochastic environment, where GRPO is then applied to optimize the complete state-action sequence"
- Blended diffusion sampling solves noisy nonlinear inverse problems
  "we extend diffusion solvers to efficiently handle general noisy (non)linear inverse problems via approximation of the posterior sampling"
- Hundreds of thousands of qubits needed for practical quantum advantage
  "the overheads required to ensure fault-tolerant quantum computing will significantly outpace the resources required for fault-tolerant classical computing"
- Paged KV cache lifts LLM serving throughput 2-4x
  "we propose PagedAttention, an attention algorithm inspired by the classical virtual memory and paging techniques in operating systems. On top of it, we build vLLM, an LLM serving system that achieves (1) near-zero waste in KV cache memory and (2) flexible sharing of KV cache within and across requests"
- RNNs learn to halt after variable steps
  "the ponder cost ρt = N(t) + R(t) ... τP(x)"
- Lightning attention scales test-time compute in 456B hybrid model
  "we propose CISPO, a novel RL algorithm to further enhance RL efficiency. CISPO clips importance sampling weights rather than token updates"
- LLM agents evolve memory skills from their own failures
  "Experiments on LoCoMo, LongMemEval, HotpotQA, and ALFWorld demonstrate that MemSkill improves task performance over strong baselines and generalizes well across settings."
- q-form global symmetries extend to higher-dimensional operators
  "They can also have 't Hooft anomalies, which prevent us from gauging them, but lead to 't Hooft anomaly matching conditions. Such anomalies can also lead to anomaly inflow on various defects and exotic Symmetry Protected Topological phases."
- Tiny changes fool neural networks and transfer across models
  "the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input."
- GLM-4.5 hits 91% on AIME and 64% on SWE-Bench
  "Through multi-stage training on 23T tokens and comprehensive post-training with expert model iteration and reinforcement learning"
- RL teaches LLMs to call search engines inside their reasoning
  "SEARCH-R1 optimizes LLM reasoning trajectories with multi-turn search interactions, leveraging retrieved token masking for stable RL training"
- 67B DeepSeek LLM beats LLaMA-2 70B on code and math benchmarks
  "To support the pre-training phase, we have developed a dataset that currently consists of 2 trillion tokens"
- AGN split into two populations with separate fueling modes
  "In Jet-Mode AGN the bulk of energetic output takes the form of collimated outflows (jets). These AGN are associated with the more massive black holes in more massive (classical) bulges and elliptical galaxies. Neither the accretion onto these black holes nor star-formation in their host bulge is significant today. These AGN are probably fueled by the accretion of slowly cooling hot gas that is limited by the feedback/heating provided by AGN radio sources."
- Agentic memory builds evolving networks for LLM agents
  "Following the basic principles of the Zettelkasten method, we designed our memory system to create interconnected knowledge networks through dynamic indexing and linking. When a new memory is added, we generate a comprehensive note containing multiple structured attributes, including contextual descriptions, keywords, and tags. The system then analyzes historical memories to identify relevant connections, establishing links where meaningful similarities exist."
- Supergravity action on AdS gives CFT correlation functions
  "large N phase transition related to the thermodynamics of AdS black holes"
- Mem0 raises LLM memory accuracy 26% while cutting latency 91%
  "Empirical results show that our methods consistently outperform all existing memory systems across four question categories: single-hop, temporal, multi-hop, and open-domain. Notably, Mem0 achieves 26% relative improvements in the LLM-as-a-Judge metric over OpenAI, while Mem0 with graph memory achieves around 2% higher overall score than the base configuration."
- Sequence-level ratios stabilize MoE LLM training
  "the sequence-level importance weight πθ(yi|x)/πθold(yi|x) has a clear theoretical meaning"