Recognition: 2 theorem links · Lean Theorem
Robust Priors in Nonlinear Panel Models with Individual and Time Effects
Pith reviewed 2026-05-13 17:39 UTC · model grok-4.3
The pith
A target-centered full-exponential Laplace-cumulant expansion yields robust priors that deliver bias reduction for common parameters, fixed effects, and average partial effects in nonlinear two-way panel models under large N,T asymptotics.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We propose a target-centered full-exponential Laplace–cumulant expansion that exploits the sparse higher-order derivative structure implied by additive effects, delivering a tractable approximation with a negligible remainder under large-N,T asymptotics.
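As a reading aid, here is a minimal schematic of the object being approximated, in placeholder notation that is not taken from the paper (ℓ for the log-likelihood, φ = (α_1, …, α_N, γ_1, …, γ_T) for the stacked effects, π for the prior). The integrated likelihood for the common parameter β is

\[
\ell_I(\beta) = \log \int \exp\{\ell(\beta,\phi)\}\,\pi(\phi)\,d\phi ,
\]

and a full-exponential Laplace step evaluates the integrand at the inner mode \(\hat\phi(\beta) = \arg\max_\phi \{\ell(\beta,\phi) + \log\pi(\phi)\}\) with a Gaussian curvature correction:

\[
\ell_I(\beta) \approx \ell(\beta,\hat\phi(\beta)) + \log\pi(\hat\phi(\beta)) - \tfrac{1}{2}\log\det\Big[-\tfrac{1}{2\pi}\,\partial^2_{\phi\phi'}\big(\ell+\log\pi\big)(\beta,\hat\phi(\beta))\Big] + R(\beta).
\]

The cumulant part of the expansion controls the remainder R(β) through third- and higher-order derivatives of ℓ in φ; the paper's claim is that additive effects make those derivatives sparse enough for R(β) to stay negligible as N, T → ∞.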
Load-bearing premise
The approximation relies on large-N,T asymptotics together with the sparse higher-order derivative structure implied by additive individual and time effects; if this sparsity fails or the asymptotics do not hold, the remainder may not be negligible.
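The size of that remainder is easy to see in one dimension. The toy below is an illustration, not the paper's implementation: the logit model, the N(0,1) prior, and the sample size are all assumptions made here. It compares a Laplace approximation of an integrated likelihood against numerical quadrature for a single effect.

# Toy check (not the paper's method): Laplace approximation of an
# integrated likelihood vs. numerical quadrature, for one logistic
# "effect" alpha with an assumed N(0,1) prior.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

rng = np.random.default_rng(0)
T = 8                                  # observations for the single unit
y = rng.binomial(1, 0.6, size=T)       # toy binary outcomes

def loglik(alpha):
    p = 1.0 / (1.0 + np.exp(-alpha))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def logprior(alpha):
    return -0.5 * alpha**2 - 0.5 * np.log(2 * np.pi)   # N(0,1) prior

f = lambda a: loglik(a) + logprior(a)

# "Exact" value of log ∫ exp{loglik + logprior} by quadrature
exact, _ = quad(lambda a: np.exp(f(a)), -10, 10)
exact = np.log(exact)

# Laplace: value at the inner mode plus a Gaussian curvature correction
res = minimize_scalar(lambda a: -f(a), bounds=(-10, 10), method="bounded")
a_hat = res.x
h = 1e-4                               # numerical second derivative at the mode
hess = (f(a_hat + h) - 2 * f(a_hat) + f(a_hat - h)) / h**2
laplace = f(a_hat) + 0.5 * np.log(2 * np.pi / -hess)

print(f"quadrature: {exact:.6f}  laplace: {laplace:.6f}  gap: {exact - laplace:.2e}")

In the two-way panel setting the analogous integral is (N+T-1)-dimensional and the gap must be controlled uniformly as the dimension grows, which is exactly where the sparsity assumption does its work.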
Original abstract
We develop likelihood-based bias reduction for nonlinear panel models with additive individual and time effects. In two-way panels, integrated-likelihood corrections are attractive but challenging because the required integration is high dimensional and standard Laplace approximations may fail when the parameter dimension grows with the sample size. We propose a target-centered full-exponential Laplace–cumulant expansion that exploits the sparse higher-order derivative structure implied by additive effects, delivering a tractable approximation with a negligible remainder under large-N,T asymptotics. The expansion motivates robust priors that yield bias reduction for both common parameters and fixed effects. We provide implementations for binary, ordered, and multinomial response models with two-way effects. For average partial effects, we show that the remaining first-order bias has a simple variance form and can be removed by a closed-form adjustment. Monte Carlo experiments and an empirical illustration show substantial bias reduction with accurate inference.
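The average-partial-effect claim has a standard shape worth making explicit. The following is a schematic in notation assumed here (δ, B, C are placeholders, not the paper's symbols), not the paper's exact formula. If \(\hat\delta\) is the plug-in average partial effect, a first-order bias with "a simple variance form" means an expansion like

\[
\hat\delta = \delta_0 + \frac{B}{T} + \frac{C}{N} + o_p\!\big(T^{-1} + N^{-1}\big),
\]

with B and C expressible as variances of influence terms coming from the estimated effects, so that plug-in estimates \(\hat B, \hat C\) give a closed-form adjustment \(\tilde\delta = \hat\delta - \hat B/T - \hat C/N\).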
Editorial analysis
A structured set of objections, weighed in public.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: large-N,T asymptotics, with N and T both diverging
- domain assumption: sparse higher-order derivative structure implied by additive individual and time effects (illustrated in the sketch after this list)
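A one-line illustration of the second assumption, in notation assumed here rather than taken from the paper: when the effects enter only through an additive index, so that

\[
\ell(\beta,\phi) = \sum_{i=1}^{N}\sum_{t=1}^{T} \ell_{it}(\beta,\ \alpha_i + \gamma_t),
\]

every mixed partial derivative in the effects collapses to a single term,

\[
\frac{\partial^{\,a+b}\ell}{\partial\alpha_i^{\,a}\,\partial\gamma_t^{\,b}} = \ell_{it}^{(a+b)}(\beta,\ \alpha_i+\gamma_t) \qquad (a,b\ge 1),
\]

and any derivative involving two distinct individual indices or two distinct time indices is identically zero. Only O(NT) of the roughly (N+T)^3 third-order derivatives can be nonzero, which is the sparsity the expansion exploits.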
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear. Passage: "target-centered full-exponential Laplace–cumulant expansion that exploits the sparse higher-order derivative structure implied by additive effects"
- IndisputableMonolith/Foundation/BranchSelection.lean · branch_selection · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear. Passage: "robust priors that yield bias reduction for both common parameters and fixed effects"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 1 Pith paper
- Penalized Likelihood for Dyadic Network Formation Models with Degree Heterogeneity
  Penalized likelihood resolves non-existence of the MLE and incidental-parameter bias in network models with degree heterogeneity, while allowing sparse networks and providing asymptotic guarantees.
discussion (0)