pith. machine review for the scientific record.

Economics

econ.TH 2026-05-14 2 theorems

Extended SBA adds two-layer architecture for strategic scenarios

Extended Scenario Bundle Analysis: A Formal Framework for Strategic Scenario Modeling

Separates static database from dynamic trees and adds beliefs, desires, and commitments without needing full payoffs or probabilities

Strategic crisis analysis needs representations that combine qualitative expert judgement, explicit interdependence, and auditable update rules without requiring fully specified payoffs or probabilities. Scenario Bundle Analysis (SBA), developed by Amos Perlmutter and Reinhard Selten, provides such a starting point, but the original formulation leaves several database, topology, and update interfaces implicit. This paper presents a formal refinement and extension of the original SBA framework, introducing a two-layer architecture that separates a static scenario database from a dynamic scenario tree system. The extended framework incorporates a richer attitude vocabulary: beliefs, desires, intentions, fears, and coalitional commitments, with expectations treated as doxastic attitudes. It also adds a domain/modifier layer for contextual framing, a topology on admissible scenario spaces, typed assessment-state updates, and multi-criteria evaluation. Mathematical definitions are stated with sufficient precision to support computational implementation.
econ.TH 2026-05-14 2 theorems

Further signal precision can lower screening accuracy when precision is already high

Pitfall of Precision in Noisy Signaling

Low-quality agents increase mimicry as noise falls, outweighing the principal's informational gain.

A principal decides whether to approve an agent based on a noisy signal (e.g., test scores) generated by the agent. High-quality agents can produce high signals on average at lower cost, but the realizations are subject to noise that depends on the screening technology's precision. We uncover a paradoxical "pitfall of precision": when precision is already high, further improvements reduce screening accuracy and lower the principal's welfare. This occurs because greater precision incentivizes strategic signaling from more low-quality agents, outweighing the direct benefit from improved precision. The pitfall of precision also has implications for statistical discrimination: groups with noisier technologies face lower approval rates yet may be favored ex ante -- a reversal of discrimination. We also examine how commitment power helps mitigate the pitfall.
econ.EM 2026-05-13 2 theorems

Grid growth faster than r_n to the 1/4 ensures valid uniform inference

A Grid-Rate Condition for Valid Uniform Inference

For twice differentiable functions in a Donsker class, the rule makes discretization error negligible relative to statistical variation.

Estimating a continuous functional $F: \mathcal{X} \to \mathbb{R}$ involves specifying $L_n^d$ nodes on $\mathcal{X} \subset \mathbb{R}^d$ for estimation and uniform inference. While asymptotically valid inference requires $L_n$ to increase with $n$, existing fixed-$L$ rules of thumb and heuristic data-driven approaches lack formal justification. This paper shows that, for twice continuously differentiable functions within a Donsker class that are estimable at the $r_n^{1/2}$ rate, the simple grid-growth condition $L_n=\omega(r_n^{1/4})$ is sufficient for valid inference. This condition ensures that the approximation error is asymptotically negligible relative to the stochastic variation of the empirical process.
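A minimal numerical sketch of what the condition implies in the root-n estimable case ($r_n = n$), assuming the hypothetical rule $L_n = \lceil c\, n^{1/4}\log n\rceil$; the constant and the log factor are illustrative choices, not the paper's recommendation, but any rule whose ratio to $n^{1/4}$ diverges satisfies the condition.

```python
import math

def grid_size(n: int, c: float = 1.0) -> int:
    """Illustrative grid rule: L_n = ceil(c * n**0.25 * log(n)).

    Any rule growing strictly faster than n**0.25 satisfies the stated
    condition L_n = omega(r_n**(1/4)) when r_n = n (root-n estimable case).
    """
    return math.ceil(c * n ** 0.25 * math.log(n))

for n in (100, 1_000, 10_000, 100_000):
    L = grid_size(n)
    # the ratio grows like log(n), so the omega-condition holds
    print(f"n={n:>7}  L_n={L:>4}  L_n / n^(1/4) = {L / n ** 0.25:.2f}")
```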
econ.EM 2026-05-12 2 theorems

Moments made orthogonal to any order with fixed extra parameters

Higher-Order Neyman Orthogonality in Moment-Condition Models

The construction keeps sensitivity to nuisance errors low at arbitrary orders without adding more parameters as the order rises.

We construct moment functions that are Neyman-orthogonal to a chosen order in parametric moment condition models. These moment functions reduce sensitivity to nuisance estimation error and, as such, offer a unified and tractable route to higher-order debiasing in a wide range of econometric models. The number of additional nuisance parameters required by our construction, beyond those already present in the original moment conditions, is independent of the order of orthogonalization and can be reduced to a single scalar if desired.
econ.GN 2026-05-12 2 theorems

AI boosts solo entries but teams top the charts

Generative AI Fuels Solo Entrepreneurship, but Teams Still Lead at the Top

Post-ChatGPT Product Hunt data shows more individuals launching products, yet teams hold the highest quality rankings.

Recent advances in generative artificial intelligence (AI) are reshaping who enters entrepreneurship, but not who reaches the top of the quality distribution. Using data on over 160,000 product launches on Product Hunt, we find that entrepreneurial entry increased sharply following the public release of ChatGPT-3.5, driven disproportionately by solo entrepreneurs. This shift toward solo entry is particularly pronounced in categories that historically favored team-based ventures. However, much of this growth reflects low-commitment, experimental entry and does not translate into greater representation among the highest-quality outcomes. Team-based ventures are increasingly dominant in the top tiers of platform rankings. These findings suggest that generative AI lowers barriers to solo entrepreneurship while reinforcing team-based advantages.
econ.GN 2026-05-12 Recognition

A rising skill premium leads one gender to invest fully and the other to invest less

Skill Premia and Pre-Marital Investments in Marriage Markets

In symmetric marriage markets with search frictions, higher wages for skilled workers create unique equilibria with divergent skill investment between genders.

I study a decentralized marriage market with search frictions, costly pre-marital skill investments, and non-transferable utility. Despite a symmetric environment, the market can exhibit asymmetric equilibria, with one gender investing more in skills than the other; in some environments, the asymmetric equilibrium is unique. A microfounded model of household utility maximization shows that this transition from a unique symmetric equilibrium to a unique asymmetric equilibrium can be driven by rising labor-market wages for high-skilled workers: as the skill premium rises, one gender ends up fully investing while the other invests substantially less.
econ.TH 2026-05-11 3 theorems

Inequality in job search effort lowers matching rates

The Matching Function: A Unified Look into the Black Box

Tracing the matching function to applicant-vacancy networks shows dispersion of search intensities on both sides reduces match efficacy.

In this paper, we use tools from network theory to trace the properties of the matching function to the structure of granular connections between applicants and vacancies. We unify seemingly disparate parts of the literature by recovering multiple functional forms as special cases, including the CES. We derive a testable condition under which matching in any network from the broad class we analyze can be thought of "as if" it comes from a CES matching function, up to a first-order approximation. We provide a theory of match efficacy in which inequality in search intensities is the key determinant of how well the matching process works. A robust finding of our analysis is that dispersion of search intensities on either side of the market is bad for the matching process. We also show that a rise in the market's mean search intensity can reduce match efficacy when it is associated with a higher Gini coefficient of search intensities.
econ.EM 2026-05-11 2 theorems

Hybrid booster adds linear terms to trees for macro forecasts

LGB+: A Macroeconomic Forecasting Road Test

LGB+ lets linear corrections compete with tree updates via out-of-bag checks, lifting accuracy on autoregressive targets.

Linear dynamics are pervasive in economic time series, particularly autoregressive ones. While gradient boosting with trees excels at capturing nonlinearities, it is inefficient in small samples when much of the predictive content is linear, expending splits to approximate relationships better captured by simple linear terms. This paper proposes LGB+, a boosting procedure operating on a more inclusive set of basis functions. The idea comes in two flavors. LGB+ evaluates a tree and a linear candidate at each step against out-of-bag data; only the winner advances. The simpler variant, LGB^A+, alternates on a fixed schedule: a block of tree updates, then a greedy linear correction, repeat. Both designs avoid ex ante commitments to any particular functional form or predictor selection. Because the prediction is the sum of a linear and a tree component, forecasts decompose natively into linear and nonlinear contributions, and so do permutation-based variable importance and historical proximity weights. In a quarterly U.S. macroeconomic forecasting exercise, LGB+ delivers strong gains for targets with pronounced autoregressive dynamics or mixed linear-nonlinear signals. Variables dominating the linear channel are those operating through autoregressive persistence or near-accounting relationships to the target (e.g., initial claims for unemployment and building permits for housing starts).
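A stylized sketch of the competitive step the abstract describes: at each boosting round a shallow tree and a linear candidate are fit to the current residuals, both are scored on an out-of-bag subsample, and only the winner advances; the depth, learning rate, out-of-bag fraction, and function names are illustrative assumptions, not the paper's implementation, and X, y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

def lgb_plus_fit(X, y, n_rounds=200, lr=0.1, oob_frac=0.3, seed=0):
    """Stylized LGB+-style loop: tree vs. linear candidate, judged out-of-bag."""
    rng = np.random.default_rng(seed)
    pred = np.full(len(y), y.mean())
    linear_part = np.zeros(len(y))            # running linear component
    learners = []                             # (kind, fitted model) per round
    for _ in range(n_rounds):
        resid = y - pred
        oob = rng.random(len(y)) < oob_frac   # held-out rows for this round
        fit = ~oob
        tree = DecisionTreeRegressor(max_depth=3).fit(X[fit], resid[fit])
        lin = LinearRegression().fit(X[fit], resid[fit])
        tree_err = np.mean((resid[oob] - tree.predict(X[oob])) ** 2)
        lin_err = np.mean((resid[oob] - lin.predict(X[oob])) ** 2)
        model, kind = (tree, "tree") if tree_err < lin_err else (lin, "linear")
        step = lr * model.predict(X)
        pred += step
        if kind == "linear":
            linear_part += step               # forecasts decompose natively
        learners.append((kind, model))
    return pred, linear_part, learners
```

Because the fitted prediction is the running sum of linear and tree steps, the linear/nonlinear decomposition mentioned in the abstract falls out of the bookkeeping directly.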
econ.EM 2026-05-11 2 theorems

Risk-adjusted metrics favor professional forecasters

Quantifying the Risk-Return Tradeoff in Forecasting

Loss differentials treated as returns reveal experts avoid big failures while some models win on specific targets.

Average forecast accuracy is not the same as forecast reliability. I treat forecast loss differentials relative to a benchmark as a return series. I then evaluate these returns using risk-adjusted performance measures from finance, including the Sharpe ratio, Sortino ratio, Omega ratio, and drawdown-based metrics. I also introduce the Edge Ratio capturing a model's propensity to deliver uniquely informative predictions relative to the forecasting frontier. I apply this framework to U.S. macroeconomic forecasting, comparing econometric benchmarks, machine learning models, a foundation model (TabPFN), and the Survey of Professional Forecasters. While it is often feasible to beat professional forecasters in terms of average accuracy, it is much harder to beat them on a risk-adjusted basis. They rarely exhibit catastrophic failures and often achieve high Edge Ratios, plausibly reflecting the value of contextual judgment. Nonetheless, selected machine learning methods deliver attractive risk profiles for specific targets. The framework naturally extends to meta-analyses across targets, horizons, and samples, illustrated with a density forecast evaluation and the M4 competition.
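A minimal sketch of the core step, assuming squared-error loss and the sign convention that a positive differential means the model beats the benchmark: per-period loss differentials are treated as a return series and summarized with Sharpe and Sortino ratios. The Omega ratio, drawdown metrics, and the paper's Edge Ratio are not reproduced here, and the toy data are placeholders.

```python
import numpy as np

def loss_differential(y, model_fc, bench_fc):
    """Per-period squared-error loss differential: positive = model beats benchmark."""
    return (y - bench_fc) ** 2 - (y - model_fc) ** 2

def sharpe(d):
    return d.mean() / d.std(ddof=1)

def sortino(d):
    downside = d[d < 0]
    dd = np.sqrt(np.mean(downside ** 2)) if downside.size else np.nan
    return d.mean() / dd

# toy usage with simulated forecasts
rng = np.random.default_rng(1)
y = rng.normal(size=200)
model = y + rng.normal(scale=0.8, size=200)   # noisy but informative forecast
bench = np.zeros(200)                         # naive zero benchmark
d = loss_differential(y, model, bench)
print(f"Sharpe of loss differentials:  {sharpe(d):.2f}")
print(f"Sortino of loss differentials: {sortino(d):.2f}")
```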
econ.GN 2026-05-11 2 theorems

Off-grid solar spreads via contagion that expands then consolidates

From Expansion to Consolidation: Socio-Spatial Contagion Dynamics in Off-Grid PV Adoption

Satellite analysis across 507 communities links clustering intensity to faster adoption and shows a shift from range growth to filling in as diffusion matures.

In traditional rural societies, where social ties are embedded in physical space, the diffusion of emerging technologies may be amplified through socio-spatial contagion (SSC). Such processes may play a key role in accelerating residential PV adoption in off-grid regions. Yet empirical evidence on SSC in PV adoption remains largely limited to affluent, grid-connected settings, while off-grid regions often lack systematic installation records. To address these gaps, we use a deep learning segmentation model to extract PV installations from a decade-long series of remote sensing imagery across 507 off-grid settlement clusters (hereafter, communities). This enables data-driven spatio-temporal point pattern inference of SSC in data-scarce contexts. SSC is quantified through the range and intensity of clustering of new installations around prior adopters, and the dynamics of these dimensions are linked to adoption outcomes. We found that SSC is nearly ubiquitous, often spanning most of the community's spatial extent, while exhibiting substantial heterogeneity in intensity. Although SSC intensifies over time, its effects remain temporally concentrated, peaking within 1 to 2 years of nearby installations and weakening thereafter. SSC intensity is positively associated with adoption rates in both cross-sectional and temporal analyses. However, the relationship between SSC range and adoption changes over time - in early diffusion phases, adoption growth is associated with range expansion, whereas in later phases it is associated with range contraction. This shift reflects a transition from clustering to consolidation of installations. These findings highlight the potential of seeding interventions to accelerate PV diffusion in off-grid regions.
econ.GN 2026-05-11 Recognition

Historical GWP data forecasts economic explosion by 2047

On the probability distribution of long-term changes in the growth rate of the global economy: An outside view

A diffusion model fitted to millennia of observations places most data in the middle of predicted ranges and implies a median explosion year of 2047.

Daniel Kahneman and Amos Tversky argued for challenging inside views (informed by contextual specifics) with outside views (based on historical "base rates" for certain event types). A reasonable inside view of the prospects for the global economy in this century is that growth will converge to 2.5%/year or less: population growth is expected to slow or halt by 2100; and as more countries approach the technological frontier, economic growth should slow as well. To test that view, this paper models gross world product (GWP) observed since 10,000 BCE or earlier, in order to estimate a base distribution for changes in the growth rate as a function of the GWP level. For econometric rigor, it casts a GWP series as a sample path in a stochastic diffusion whose specification is novel yet rooted in neoclassical growth theory. After estimation, most observations fall between the 40th and 60th percentiles of predicted distributions. The fit implies that GWP explosion is all but inevitable, in a median year of 2047. The friction between inside and outside views highlights two insights. First, accelerating growth is more easily explained by theory than is constant growth. Second, the world system may be less stable than traditional growth theory and the growth record of the last two centuries suggest.
econ.EM 2026-05-11 Recognition

Control-system model breaks middle-income trap

Engineering Economy: A New Paradigm for Escaping the Middle-Income Trap

Eleven pillars address Turkey's R&D demand shortage to enable high-income transition like South Korea's.

This paper introduces the concept of Engineering Economy as a new paradigm for understanding and managing macroeconomic policy in middle-income countries seeking to escape the middle-income trap. Drawing on Turkiye's post-2001 economic trajectory and South Korea's successful transition from a low-income to a high-income economy, the study argues that conventional frameworks, whether the Washington Consensus's market liberalization prescriptions or the institutionalist critique alone, are insufficient. Instead, it proposes treating the economy as a dynamic control system requiring continuous calibration rather than static equilibrium. The paper develops a road-surface metaphor (highway, side-road, off-road) to characterize different global economic regimes and presents eleven interconnected policy pillars spanning venture capital formation, regulatory sandboxes, technology-focused industrial policy, and human capital development. By synthesizing insights from endogenous growth theory (Romer), institutional economics (Acemoglu), the catching-up literature (Lee), cybernetic systems theory (Wiener), and Schumpeterian creative destruction, the framework reconceptualizes macroeconomic instruments through control-engineering analogies: interest rates as energy gradients, fiscal policy as energy flow, exchange rates as balance motors, and regulation as adaptive suspension. The analysis demonstrates that Turkiye's structural challenge is not merely institutional weakness but a systemic absence of R&D demand from its dominant enterprise structures, creating a vicious cycle that conventional reforms cannot break. Seven specific opportunity windows arising from US-China technological rivalry are identified, and a phased implementation roadmap is proposed.
econ.TH 2026-05-11 2 theorems

Non-CARA preferences leave prices only partially revealing

On the Possibility of Informationally Inefficient Markets Without Noise

Without noise traders, the mismatch between demand aggregation and Bayesian updating creates a Jensen gap that preserves private information

Noise traders can be dispensed with entirely. Partial revelation of information through prices arises under any non-exponential expected utility preference, including CRRA, without noise traders, random endowments, supply shocks, hedging motives, or behavioral biases. The model contains zero exogenous noise. The mechanism is a mismatch between the space in which market clearing aggregates signals and the Bayesian sufficient statistic. CARA demand is linear in log-odds, so prices aggregate in log-odds space and reveal the statistic exactly. Every other preference aggregates differently; the resulting Jensen gap makes revelation partial. I prove that CARA is the unique fully revealing preference class, characterize the rational expectations equilibrium via a contour integration fixed point, and verify that partial revelation survives learning from prices. The Grossman-Stiglitz paradox is resolved: information acquisition has positive value within the rational class. Numerical solution of the rational expectations fixed point at K = 3 confirms partial revelation, positive trade volume, and positive value of information across the full range of CRRA risk aversion, vanishing only in the CARA limit.
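A toy numerical illustration of the Jensen-gap mechanism the abstract describes, not the paper's equilibrium model: when a stand-in demand is linear in the log-odds of private signals, averaging demands recovers the mean log-odds (the Bayesian sufficient statistic) exactly, whereas a strictly concave stand-in demand yields an average that differs from the transform of the statistic. The demand functions and signal distribution here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# private signals expressed as log-odds about a binary state
log_odds = rng.normal(loc=1.5, scale=0.75, size=100_000)
sufficient_stat = log_odds.mean()

# CARA stand-in: demand linear in log-odds, so the market average
# recovers the sufficient statistic exactly (up to the known slope)
recovered = (2.0 * log_odds).mean() / 2.0

# non-CARA stand-in: a strictly concave demand in the relevant region
aggregated = np.tanh(log_odds).mean()
jensen_gap = aggregated - np.tanh(sufficient_stat)

print(f"linear aggregation: {recovered:.4f} vs statistic {sufficient_stat:.4f}")
print(f"Jensen gap under the concave demand: {jensen_gap:+.4f}")
```

The nonzero gap in the second case is the wedge that keeps prices from revealing the statistic exactly.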
econ.TH 2026-05-11 Recognition

Deleting actions shifts equilibria that price tweaks leave stuck

Changing the Game: Status-Quo Inertia, Institutional Design, and Equilibrium Transition

Status-quo inertia keeps the old outcome selected as long as it remains feasible, so redesigning available moves outperforms adjusting their prices.

Many economic interventions are designed as marginal changes in incentives. Yet in environments shaped by coordination, institutional persistence, and path dependence, such reforms often leave behavior largely unchanged. This paper studies interventions in games when equilibrium selection displays status-quo inertia: if the pre-intervention equilibrium remains a Nash equilibrium after policy, it continues to be selected. In that environment, price-based interventions and simple option expansion may fail even when they improve welfare in a partial-equilibrium sense. By contrast, interventions that modify the feasible action space, especially deletion and replacement interventions, can be substantially more effective because they remove the strategic basis for persistence. We develop a simple framework, derive general results, provide complete proofs, and illustrate the economics with examples from climate transition, platform regulation, financial reform, and industrial modernization. The analysis highlights a basic policy lesson: when inefficient equilibria are institutionally entrenched, the central problem is often not how to price the existing game more finely, but how to change the game itself.
econ.TH 2026-05-11 2 theorems

Secret messages with deniability reveal only state cutoffs

Secret Communication with Plausible Deniability

Directional baseline information limits frontier secret communication to binary above-or-below signals while preserving rationalizability.

Communication is secret if a message is independent of the state; however, the receiver's subsequent action may still reveal that she has acted on hidden information. This paper studies when secret communication can also provide plausible deniability: under single-crossing preferences, every action induced by the sender's message must be rationalizable using the receiver's baseline information alone. We characterize joint information structures that satisfy both secrecy and plausible deniability. We show that plausible deniability restricts communication exactly when the baseline message is directional -- meaning its likelihood is monotone in the state. Combining this restriction with secrecy, we show that, for directional messages, frontier communication reveals at most whether the state lies above or below a cutoff. Finally, we identify conditions under which a greatest feasible communication structure exists and can be constructed explicitly in a simple way.
econ.TH 2026-05-11 2 theorems

Only one rule aggregates multiple Elo ratings while obeying three consistency axioms

Aggregating Elo Ratings: An Axiomatization

Convert each rating to strength, average with weights, convert back; this is the unique method that respects normalization, recursion, and Elo-strength consistency.

Many environments assign several Elo ratings to the same agent: a chess player has classical, rapid, and blitz ratings; an online platform may rate by time control, mode, or format; an evaluator may rate performance across tasks or roles. This paper axiomatizes when such a vector of ratings can be reduced to a single scalar rating that is itself on the Elo scale. We impose three substantive conditions: same-scale normalization (a uniform profile keeps its rating), recursive consistency (aggregating in blocks gives the same answer as aggregating directly, provided each block carries the total weight of its members), and marginal Elo-strength consistency (for two equally weighted coordinates, the ratio of marginal contributions to the combined rating equals the ordinary Elo odds). The unique rating rule satisfying these conditions converts each component to its Elo strength, takes a weighted arithmetic mean of strengths, and converts back. We show how this rule differs from a random-format lottery and from rating-scale averaging, prove the axioms are independent, and illustrate the rule on combining classical, rapid, and blitz ratings.
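A minimal sketch of the characterized rule, assuming the standard chess convention that Elo strength is $10^{R/400}$; the scale constants and the example weights below are assumptions for illustration, not values taken from the paper.

```python
import math

def aggregate_elo(ratings, weights):
    """Convert each rating to Elo strength, take a weighted arithmetic
    mean of strengths, and convert back to the Elo scale."""
    assert len(ratings) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    strengths = [10 ** (r / 400) for r in ratings]
    mean_strength = sum(w * s for w, s in zip(weights, strengths))
    return 400 * math.log10(mean_strength)

# combining classical, rapid, and blitz ratings with illustrative weights
print(round(aggregate_elo([2500, 2620, 2580], [0.5, 0.3, 0.2]), 1))
```

Note that a uniform profile keeps its rating (the normalization axiom) and that the result generally differs from averaging the ratings themselves, which is the contrast the abstract draws with rating-scale averaging.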
econ.GN 2026-05-11 Recognition

ChatGPT availability leaves high school test scores unchanged

Little Impact of ChatGPT Availability on High School Student Test Score Performance

Summer usage drops identify heavy educational adopters with no performance difference

In educational settings, AI can be used as a learning aid, but can also be used to avoid schoolwork, thereby passing classes while learning little. Many existing studies on the impact of AI on education focus on AI use in controlled settings or with specialized tools. In this paper, the dropoff in ChatGPT activity during non-school summer months in 2023 and 2024 is used to identify areas with heavy educational AI use and thus estimate the educational impact of AI as it is actually used. I find no meaningful impact of AI usage on high school test score averages in either direction. These results imply that, to the extent that high school students use AI to avoid learning, it either does not matter much for their test performance or is cancelled out by positive uses of AI in the aggregate.
econ.GN 2026-05-11 Recognition

Spanish inflation decoupled from money supply growth after 1600

The Phase Structure of Metallic Money: An MPTT Framework for the Spanish Price Revolution

Pre-1600 prices rose nearly in step with bullion inflows, but later money growth had far smaller price effects.

The Spanish Price Revolution is usually treated as a classic case in which American bullion inflows expanded the money supply and generated inflation. This view captures the first phase of the episode but fails to explain why the same monetary expansion did not continue to produce proportional price growth after 1600. We develop a two-phase Money Phase Transition Theory (MPTT) model in which the classical monetary relation is recovered before a transition point, while a second-phase correction term modifies the money-price transmission coefficient after the transition. Using annual Spanish CPI and reconstructed money-supply data, we show that 1500-1600 was a high-transmission metallic inflationary phase: CPI increased approximately 3.35-fold while money supply increased approximately 3.73-fold. After 1600, money supply continued to rise, increasing approximately 1.82-fold during 1600-1650, while CPI rose only approximately 1.22-fold. A classical one-phase model fitted on 1500-1600, therefore, overpredicts post-1600 prices when extrapolated forward. The MPTT two-phase model with transition point $\tau=1600$ estimates $\beta_1=0.949$, $\gamma=-0.812$, and $\beta_2=\beta_1+\gamma=0.137$, indicating a sharp post-transition weakening of monetary transmission. An unrestricted break scan identifies a deeper BIC-minimizing break around 1636. These results suggest that the Spanish Price Revolution was not a single monotonic bullion-inflation process but the rise and exhaustion of high-transmission metallic money inflation.
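A stylized regression reading of the two-phase specification the abstract reports: log-price growth regressed on log-money growth, with the transmission coefficient allowed to shift after the transition year $\tau = 1600$, so the post-transition coefficient is $\beta_2 = \beta_1 + \gamma$. The variable construction (annual log differences) and the plain OLS implementation are assumptions; the data are placeholders, not the paper's series.

```python
import numpy as np

def fit_two_phase(years, log_cpi, log_money, tau=1600):
    """OLS of d log CPI on d log M and its post-transition interaction:
    d log CPI_t = a + beta1 * d log M_t + gamma * 1[t > tau] * d log M_t + e_t,
    so the post-transition transmission coefficient is beta2 = beta1 + gamma."""
    t = np.asarray(years)[1:]
    d_cpi = np.diff(np.asarray(log_cpi, dtype=float))
    d_m = np.diff(np.asarray(log_money, dtype=float))
    post = (t > tau).astype(float)
    X = np.column_stack([np.ones_like(d_m), d_m, post * d_m])
    coef, *_ = np.linalg.lstsq(X, d_cpi, rcond=None)
    a, beta1, gamma = coef
    return beta1, gamma, beta1 + gamma
```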
econ.EM 2026-05-11 Recognition

GRU nowcasts Italian municipal income from nightlights at 4% median error

Nowcasting Italian Municipal Income with Nightlights: A Deep Learning Approach

Satellite nightlight data fed into a gated recurrent unit beats persistence and spatial linear models for 7,631 towns, with a median error of 1.07 million euros.

This paper assesses whether NASA Black Marble nightlight intensity can serve as an early indicator of annual taxable income at the Italian municipal level, where official data are released with a 12-18 month lag. Using a panel of 7,631 municipalities over 2012-2021, we compare four recurrent neural network architectures (LSTM, BiLSTM, GRU, Transformer) against six benchmarks: simple persistence, panel fixed effects, autoregressive distributed lag, and two spatial econometric specifications (SAR, Spatial Durbin) on a queen-contiguity matrix. Models are trained on 2012-2019 and evaluated out-of-sample on 2020-2021 with a cross-sectional Diebold-Mariano test. A single-layer GRU achieves a median forecast error of 1.07 million euros across the cross-section of municipalities (approximately $4\%$ of the median municipal IRPEF income of 29 million euros), statistically dominating every benchmark (DM $>4$ against persistence, $>40$ against spatial linear models, all $p<0.001$). Spatial models recover statistically significant spatial autocorrelation ($\rho \approx 0.71$) and a meaningful nightlight spillover ($\theta \approx 0.05$), but their forecasting gap with the GRU is virtually identical to that of spatially-naive linear specifications. We conclude that nightlights contain genuine predictive content for municipal income, but extracting it requires a model class flexible enough to capture cross-sectional heterogeneity and non-linearities that linear specifications, spatial or otherwise, cannot recover.
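A minimal sketch of a cross-sectional Diebold-Mariano-type comparison in the spirit of the evaluation described above: squared-error differentials across units, with a normal approximation under an assumed independence across municipalities. The paper's exact test statistic may differ; this is only the generic logic.

```python
import numpy as np

def cross_sectional_dm(err_a, err_b):
    """Compare two models' squared forecast errors across units.

    d_i = e_a,i**2 - e_b,i**2; the statistic mean(d) / (sd(d)/sqrt(N)) is
    approximately standard normal under equal accuracy and independence
    across units (an assumption of this toy version)."""
    d = np.asarray(err_a) ** 2 - np.asarray(err_b) ** 2
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```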
econ.EM 2026-05-11 2 theorems

Nonparametric EB intervals reach nominal coverage asymptotically

Nonparametric Empirical Bayes Confidence Intervals

They converge in conditional and marginal coverage while shortening intervals by borrowing strength across units at a logarithmic rate.

Empirical Bayes methods can improve inference on unobservable individual effects by borrowing strength across units. This paper proposes nonparametric empirical Bayes confidence intervals (NP-EBCIs) for unobservable individual effects in a normal means model. The oracle intervals are constructed from posterior quantiles under a point-identified, fully nonparametric prior; feasible intervals replace these quantiles with nonparametric estimates. The NP-EBCIs are asymptotically exact in the sense that both their conditional and marginal coverage probabilities converge to the nominal level. The flexibility of this nonparametric construction has an unavoidable statistical cost. We demonstrate that posterior quantiles, unlike posterior means, inherit the severe ill-posedness of nonparametric deconvolution: the minimax optimal estimation rate is logarithmic. This logarithmic rate is minimax optimal for errors in the conditional coverage probability, and the resulting errors in the marginal coverage probability also vanish at the same logarithmic rate. Despite these slow asymptotic rates, simulations show that the NP-EBCIs remain close to nominal coverage when the prior is non-Gaussian, and deliver substantial length reductions relative to intervals that treat each unit in isolation.
econ.EM 2026-05-11 Recognition

AI Changes Incidence and Visibility of Econometric Failures

Vibe Econometrics and the Analysis Contract

The Analysis Contract counters this by requiring method-data agreements, data audits, and pre-commitments to disconfirming results before a causal claim is made.

"Vibe coding" and "vibe analytics" have been framed as a democratization of technical capability. This paper argues that AI-assisted methodology more broadly, or what I call "vibe methodology," also democratizes the failure modes specific to each domain. When AI assists with methods whose validity depends on assumptions that cannot be verified from the output alone (a class I call "vibe inference"), the failure surface is structurally different: the output does not reliably signal invalidity, and when it does, recognizing the signal requires the expertise the workflow bypasses. I focus on "vibe econometrics," the subset of AI-assisted causal analysis where identification can be named faster than it can be audited. The claim of this paper is not that AI invents inferential failures that did not previously exist, but that it changes their incidence, observability, and persuasive force enough to create a practically distinct governance problem. This results in three failure modes: method-data mismatch, where AI bypasses expertise at execution; confidence laundering, where AI amplifies the credibility of formatted output; and invisible forking, which spans both. What is new is not the failure modes but AI's industrialization of their packaging. The barrier between naming a method and executing it has collapsed, and weak foundations, dressed as rigorous analysis, now reach audiences at a scale, speed, and polish that previously required expertise. I propose the Analysis Contract, a pre-commitment framework that adapts the logic of pre-analysis plans and the Causal Roadmap to the AI-assisted setting. The contract imposes three conditions before a causal claim is made: a method-data contract, a data audit, and a pre-commitment statement defining what would count as a disconfirming result. The framework generalizes across domains of vibe inference through domain-specific instantiation.
econ.TH 2026-05-11 Recognition

Money burning clears aggregate NTU matchings at fixed prices

Aggregate Stable Matching with Money Burning

Typed agents equalize utilities by waiting on one side, delivering unique equilibria via a convergent generalized deferred-acceptance method

We propose an aggregate notion of non-transferable utility (NTU) stability for decentralized matching markets with fixed prices, where market clearing is achieved through one-sided money burning, which can be interpreted as waiting. Agents are grouped into observable types and are indifferent among individuals within type; equilibrium is defined at the type level and delivers equal indirect utility within each type. We introduce money burning into two types of NTU models: In a deterministic model, we relate our notion to classical Gale--Shapley stability and show how money burning decentralizes stable outcomes under aggregation. We then introduce separable random utility, obtaining an NTU counterpart to Choo and Siow (2006). We prove the existence and uniqueness of equilibrium and provide a stationary queueing interpretation. Finally, we develop a generalized deferred acceptance algorithm based on alternating constrained discrete-choice problems and prove its convergence to the unique equilibrium.
econ.TH 2026-05-11 2 theorems

Partial statistics disclosure expands implementable game outcomes

Coordination Mechanisms with Partially Specified Probabilities

Revealing only expectations of random variables lets players coordinate on jointly coherent outcomes beyond standard correlated equilibria.

We study which outcomes are implementable by disclosing coarse statistics of a data-generating process rather than its full distribution. Players observe data whose joint distribution is only partially known: they know the expectations of finitely many random variables and form beliefs by maximum-entropy inference. We obtain two characterizations. When message spaces are unrestricted, implementable outcomes coincide with jointly coherent outcomes, expanding the set of correlated equilibria. With canonical mechanisms, implementability reduces to a single cross-entropy condition: the target outcome must lie on the cross-entropy level set of some correlated equilibrium that passes through that equilibrium itself. Examples and several classes of games illustrate the reach of the framework.
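A toy illustration of the belief-formation step the abstract assumes, not of the paper's implementability results: a player who knows only the expectation of a random variable on a finite support forms the maximum-entropy distribution consistent with it, which takes an exponential-family form. The support, target mean, and root-finding bracket are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def max_entropy_given_mean(support, target_mean):
    """Maximum-entropy distribution on a finite support with a given mean.
    The solution is exponential-family: p_i proportional to exp(lam * x_i)."""
    x = np.asarray(support, dtype=float)

    def weights(lam):
        z = lam * x
        w = np.exp(z - z.max())              # shift for numerical stability
        return w / w.sum()

    # find lam so that the implied mean matches the disclosed expectation
    lam = brentq(lambda l: weights(l) @ x - target_mean, -50.0, 50.0)
    return weights(lam)

print(max_entropy_given_mean([0, 1, 2, 3], 1.2).round(3))
```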
econ.TH 2026-05-11 Recognition

Pensions increase births but reduce child mental-health investments

Mental Health and Human Capital Composition in a Dynastic OLG Model with PAYG Pensions

A dynastic model shows higher pay-as-you-go rates boost fertility while crowding out education, physical health and mental health spending.

This paper develops a two-period dynastic overlapping-generations (OLG) model in which parents simultaneously choose consumption, savings, fertility, and three distinct dimensions of child quality (education, physical health, and mental health) under a pay-as-you-go (PAYG) pension system. The central innovation is modelling mental health as an independent productivity-enhancing input with its own elasticity $\theta$ in a Cobb-Douglas human-capital technology. This yields simple proportional allocation rules and shows how pension policy affects not only the overall level but also the composition of human capital investments. In steady state, higher PAYG contribution rates raise fertility through the Yakita effect but crowd out per-child investments in all quality dimensions, including mental health. An increase in the mental-health elasticity $\theta$ shifts resources toward non-cognitive skill development while reducing fertility. These results reveal a fundamental policy tension for developing economies: pension systems that rely on children for old-age support simultaneously increase birth rates while reducing long-term human capital formation, with disproportionate effects on non-cognitive skills. The framework provides theoretical guidance for complementary policies that protect mental-health investments, with particular relevance for countries such as India where children remain a primary source of retirement security and mental-health services are underfunded.
econ.GN 2026-05-08 Recognition

Feedback models bring dynamic causality into economics teaching

Introducing Feedback Thinking and System Dynamics Modeling in Economics Education

A pricing example and four-level hierarchy show how system dynamics captures loops that standard classes often miss.

System dynamics is a methodology that is widely used in many academic fields. It explains the behavior of social and economic systems with models that capture complex causality and feedback effects. This 'practice paper' discusses the opportunities and barriers for introducing feedback thinking and system dynamics models in the economics curriculum. We start by providing a pricing feedback model that illustrates some of the benefits that system dynamics can provide in enhancing economics education. Then we summarize the experiences of each of the authors in teaching system dynamics on economics educational programs. This includes different approaches to teaching economics with system dynamics that depend on the learning objectives, the preparation of students, and the background of the instructor. We also develop a four-level course hierarchy for using system dynamics in economics teaching. We then point out the tradeoffs that instructors must consider as they introduce new pedagogies for delivering economics material. Finally, we provide some concluding comments with some suggestions for future work. The expected audiences for this paper are instructors as well as graduate students who are considering academia as a profession.
econ.EM 2026-05-08

Orthogonal moments remove incidental bias in two-way panel regressions

Inference on Linear Regressions with Two-Way Unobserved Heterogeneity

A general procedure delivers root-NT consistent estimates of common parameters under weak conditions on nonparametric first-step estimators.

We develop a general estimation and inference procedure for the common parameters in linear panel data regression models with nonparametric two-way specification of unobserved heterogeneity. The procedure takes as input any first-step estimators of the nonparametric regression function and the fixed effects and relies on two key ingredients: First, we develop moment conditions for the common parameters that are Neyman orthogonal with respect to the nonparametric regression function. Second, we employ a novel adjustment of the nonparametric regression estimator so the estimated fixed effects do not generate incidental parameter biases. Together, these ensure that the resulting estimator of the common parameters is $\sqrt{NT}$-consistent and asymptotically normally distributed under weak conditions on the estimators of fixed effects and regression function. Next, we propose a novel two-step estimator of the nonparametric regression function and the fixed effects and verify that this particular estimator satisfies the conditions of our general theory. A numerical study shows that the proposed estimators perform well in finite samples.
econ.EM 2026-05-08

RL agents narrow equity gaps in NYC 311 complaint routing

Scaling the Queue: Reinforcement Learning for Equitable Call Classification Capacity in NYC Municipal Complaint Systems

By making equitable coverage a direct reward in each domain's MDP, agents learn to route calls while increasing throughput.

Municipal 311 call centers and complaint intake systems face a structural mismatch between incoming volume and classification capacity. The staff and heuristics available to triage, route, and prioritize complaints cannot scale with demand. This bottleneck produces differential service quality that follows income and racial lines (Liu 2024). We develop an equity-centered reinforcement learning (RL) framework that augments call classification capacity across six New York City Department of Buildings (DOB) operational domains: boiler safety, crane and derrick oversight, heat and hot water complaints, housing complaint triage, scaffold safety, and Natural Area District (SNAD) protection. Rather than replacing human classifiers, our agents act as intelligent intake routers, learning to assign incoming complaints to action categories (escalate, batch, defer, or inspect now). The proposed technique is designed to maximize throughput, minimize misclassification cost, and actively narrow historical equity gaps in service delivery. We formalize each domain as a Markov Decision Process (MDP) in which equitable classification coverage is a first-class reward objective. Post-hoc SHAP attribution reveals that complaint recurrence and neighborhood-level statistics are stronger predictors of actionable violations than raw complaint volume. This finding has direct implications for complaint routing given the demographic correlates of those features.
econ.EM 2026-05-08

Neyman score dictates balancing in debiased machine learning

Covariate Balancing and Riesz Regression Should Be Guided by the Neyman Orthogonal Score in Debiased Machine Learning

Covariate balancing suffices only for covariate-only score errors; the general case requires regressor balancing on the full regressor X.

This position paper argues that, in debiased machine learning, balancing functions should be derived from the Neyman orthogonal score, not chosen only as functions of covariates. Covariate balancing is effective when the regression error entering the score can be represented by functions of covariates alone, and it is the natural finite-dimensional approximation for targets such as ATT counterfactual means. For ATE estimation under treatment effect heterogeneity, however, the score error generally contains treatment-specific components because the outcome regression is a function of the full regressor $X=(D,Z)$. In that case, balancing common functions of $Z$ can leave the treatment-specific component unbalanced. We therefore advocate regressor balancing, implemented by Riesz regression with basis functions of $X$, as the general balancing principle for DML. The position is not that covariate balancing is invalid, but that covariate balancing should be understood as the special case that is appropriate when the score-relevant regression error is a function of covariates alone.
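For concreteness, the textbook Neyman-orthogonal (AIPW) score for the ATE, whose residual terms involve the outcome regressions evaluated along the full regressor $X = (D, Z)$; this is the standard debiased-ML object the paper's argument is about, but the balancing and Riesz-regression constructions themselves are not implemented here, and the nuisance inputs are assumed to be supplied (e.g., by cross-fitted learners).

```python
import numpy as np

def aipw_score(y, d, m1, m0, e):
    """Neyman-orthogonal (AIPW) score contributions for the ATE.

    y: outcome, d: binary treatment indicator,
    m1/m0: outcome regressions E[Y|D=1,Z], E[Y|D=0,Z] at each unit's Z,
    e: propensity score P(D=1|Z).
    The residuals y - m1 and y - m0 are errors of regressions on the full
    regressor X = (D, Z), which is the point of the argument above."""
    return (m1 - m0
            + d * (y - m1) / e
            - (1 - d) * (y - m0) / (1 - e))

def ate_estimate(y, d, m1, m0, e):
    psi = aipw_score(y, d, m1, m0, e)
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(psi))
```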
econ.GN 2026-05-08

Migration reduces elderly share in Swiss municipalities

Migration-Driven Demographic Changes: effects on local communities in the canton of Fribourg

Fribourg study finds modest persistent shifts in age structure, school cohorts, and housing from both internal and international inflows.

Migration is reshaping demographic landscapes across Europe, raising urgent questions about adapting to rapid population changes. This study examines the canton of Fribourg, Switzerland, which experienced a 30% population increase over the past 15 years, driven by international and internal migration. As local governments face mounting pressures from demographic shifts in housing, education, and social services, understanding the causal effects of migration is essential for evidence-based policymaking. We study how migration reshapes local demographic, educational, and housing outcomes across 112 Fribourg municipalities (2010-2021). Using the intertemporal difference-in-differences estimator of De Chaisemartin and D'Haultfoeuille (2024), which accommodates staggered timing and cumulative, non-binary treatment, we identify the effect of a one-percentage-point increase in cumulative migration balance (relative to baseline population). Migration exposure generates modest but persistent adjustments across demographic, educational, and housing dimensions. Both migration types reduce the share of elderly residents, and international inflows are associated with higher birth counts. Internal migration increases resident students and alters compulsory and secondary-school cohorts, while international migration slightly reduces the tertiary-education share. Housing adjustments are gradual and concentrated in household composition and selected dwelling types, with international migration increasing mid-sized households and internal migration reducing mixed-use dwellings. Though yearly effects are small, their persistence yields meaningful cumulative changes. Overall, migration acts as a counterweight to population aging and generates incremental adjustments in service demand, underscoring the need to incorporate migration exposure into cantonal and municipal planning.
econ.GN 2026-05-08

Aesthetic quality adds no premium to AI text bids

Artificial Aesthetics: The Implicit Economics of Valuing AI-Generated Text

Participants notice stylistic differences but do not pay more, because aesthetics and function load as one quality factor.

Aesthetic qualities command measurable premiums in traditional goods markets. However, it remains unclear whether users are willing to pay for such qualities in AI-generated text. This paper estimates the willingness to pay for aesthetic attributes in large language model outputs using an online experiment with N = 117 participants. Participants evaluated responses from four anonymized models across academic, professional, and personal contexts, rated outputs along multiple dimensions, and submitted bids for access using a Becker-DeGroot-Marschak (BDM) mechanism. We find no statistically significant relationship between perceived aesthetic quality and willingness to pay. While participants systematically distinguish between outputs and exhibit consistent preferences over stylistic features, these differences do not translate into higher monetary valuation. Further analysis shows that aesthetic and functional attributes load onto a single latent factor, suggesting that users perceive quality as a unified construct rather than a separable aesthetic dimension. These results imply that, in current large language model (LLM) markets, aesthetic improvements function as baseline expectations rather than sources of price differentiation.
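A minimal sketch of the Becker-DeGroot-Marschak elicitation used in the experiment: the participant states a bid, a price is drawn at random and independently of the bid, and the participant buys at the drawn price only if the bid is at least that price, which makes truthful bidding optimal; the price range below is an assumption for illustration.

```python
import random

def bdm_round(bid, max_price=10.0, rng=random):
    """One BDM round: returns (bought, price_paid)."""
    price = rng.uniform(0, max_price)   # random price, independent of the bid
    if bid >= price:
        return True, price              # buy at the drawn price, not the bid
    return False, 0.0

print(bdm_round(3.5))
```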
econ.TH 2026-05-07

Counterfactual utilities satisfy vNM axioms on potential outcomes

An Axiomatic Foundation for Decisions with Counterfactual Utility

Defining preferences over every possible result under alternative decisions yields coherent representations and accounts for paradoxes such as Allais.

Counterfactual utilities evaluate decisions not only by the realized outcome under a given decision, but also by the counterfactual outcomes that would arise under alternative decisions. By generalizing standard utility frameworks, they allow decision-makers to encode asymmetric criteria, such as avoiding harm and anticipating regret. Recent work, however, has raised fundamental concerns about the coherence and transitivity of counterfactual utilities. We address these concerns by extending the von Neumann-Morgenstern (vNM) framework to preferences defined on the extended space of all potential outcomes rather than realized outcomes alone. We show that expected counterfactual utility satisfies the vNM axioms on this extended domain, thereby admitting a coherent preference representation. We further examine how counterfactual preferences map onto the realized outcome space through menu-dependent and context-dependent projections. This axiomatic framework reconciles apparent inconsistencies highlighted by the Russian roulette example in the statistics literature and resolves the well-known Allais paradox from behavioral economics. We also derive an additional axiom required to reduce counterfactual utilities to standard utilities on the same potential outcome space, and establish an axiomatic foundation for additive counterfactual utilities, which satisfy a necessary and sufficient condition for point identification. Finally, we show that our results hold regardless of whether individual potential outcomes are deterministic or stochastic.
econ.EM 2026-05-07

Averaging LP and VAR cuts impulse response MSE

Estimator Averaging of Local Projection and VAR Impulse Responses

Horizon-specific weights derived from finite-sample risk minimization deliver lower error than either method alone.

Local projections (LP) and vector autoregressions (VAR) are the two standard tools for impulse response analysis, but they often display a finite-sample trade-off: LP is typically less biased but more volatile, while VAR is more precise but can be biased under misspecification. We propose an easy-to-implement estimator-averaging approach that combines LP and VAR at each horizon by minimizing the mean squared error of the impulse response itself, rather than in-sample fit. We derive closed-form oracle weights for this finite-sample risk problem, develop feasible AR-sieve-bootstrap procedures, and compare them against an $R^2$-based model-averaging benchmark. For a benchmark class of short-memory linear data generating processes in which LP and VAR are both consistent, we establish the consistency and limiting distribution of the feasible averaged estimator. Monte Carlo results show meaningful risk reductions relative to LP and VAR alone. In an empirical application revisiting Bauer and Swanson (2023), estimator averaging delivers stable and economically intuitive responses for yields, activity, prices, and credit spreads.
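A generic sketch of the horizon-by-horizon MSE-minimizing combination of two impulse-response estimators, using the standard oracle weight for a convex combination given (estimated) mean-squared-error and cross terms; the paper derives its own closed-form weights and bootstrap estimates of these inputs, and the clipping to $[0,1]$ below is an assumption, so treat this as the generic logic rather than the paper's formula.

```python
import numpy as np

def oracle_weight(mse_lp, mse_var, cross):
    """Weight on the LP estimator minimizing MSE of w*LP + (1-w)*VAR.

    mse_lp  = E[(LP  - theta)^2],  mse_var = E[(VAR - theta)^2],
    cross   = E[(LP - theta)(VAR - theta)]."""
    denom = mse_lp - 2 * cross + mse_var
    w = (mse_var - cross) / denom
    return float(np.clip(w, 0.0, 1.0))   # restrict to a convex combination

def averaged_irf(lp_irf, var_irf, mse_lp, mse_var, cross):
    """Horizon-by-horizon averaged impulse response."""
    w = np.array([oracle_weight(a, b, c)
                  for a, b, c in zip(mse_lp, mse_var, cross)])
    return w * np.asarray(lp_irf) + (1 - w) * np.asarray(var_irf)
```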
econ.EM 2026-05-07

State-dependent projections recover causal responses under linearity

Causal State-Dependent Local Projections

A condition satisfied in standard models lets local projections identify shock effects without full specification, revising estimates of monetary policy transmission.

State-dependent local projections (LPs) are widely used to estimate how responses to exogenous aggregate shocks vary as a function of observable state variables, yet their causal interpretation remains unclear. We show that this interpretation obtains under the sufficient condition that the conditional mean is linear in the aggregate shock at each horizon, and that this condition holds in a broad class of canonical micro-macro environments, including first-order perturbation solutions of heterogeneous-agent models and macro-finance models. Under this condition, LPs recover causal impulse responses without requiring specification of the full data-generating process. We further show that the causal interpretation of state-dependent LPs is robust to the choice of state variable. By contrast, commonly used linear interaction LPs generally fail to recover causal objects. We therefore develop a sieve-based nonparametric LP estimator that restores causal interpretation and delivers valid pointwise and uniform inference in micro-macro panels. Empirically, allowing for nonparametric state dependence materially changes both the pattern of heterogeneous firm investment responses and their aggregate implications for the transmission of monetary policy shocks.
econ.GN 2026-05-07

Automation can exceed the social optimum when low-wealth households drive demand

The Demand Externality of Automation

Firms overlook how automation shifts income away from high-MPC consumers toward concentrated capital owners in this equilibrium model.

Automation raises productivity and reduces paid human labor, but it also reallocates income and ownership claims. This paper studies that tradeoff in a static benchmark and in a stationary heterogeneous-agent general equilibrium. Firms choose automation from a profit function. Households differ by skill and wealth, save in a capital/equity claim, and face incomplete insurance. Wages and returns are determined by market clearing from a Cobb--Douglas final-good firm, while the wealth distribution is pinned down by a Hamilton--Jacobi--Bellman (HJB) equation and a Kolmogorov forward equation (KFE). The paper is deliberately two-sided. With strong productivity growth, high-skill complementarity, low obsolescence, and broad ownership, automation raises output, capital, and consumption. With strong exposure of low-wealth, high-marginal-propensity-to-consume (high-MPC) households and concentrated ownership, privately chosen automation can be excessive even though it raises high-skilled labor income. The central object is the derivative of household consumption demand and collective wage bill with respect to automation. Fiscal policy is modeled as a government problem rather than as an abstract planner: a tax changes the firm's automation first-order condition, raises revenue only on the remaining automation base, and must specify rebates and administrative losses.
econ.EM 2026-05-07

DiD estimator picks pre-trend length to minimize MSE

MSE-Optimal Difference-in-Differences Estimator

Balances bias from longer windows against variance from shorter ones for lower overall error in small samples.

This paper develops a difference-in-differences (DiD) estimation method that selects the optimal length of pre-trends by minimizing the mean squared error (MSE). Conventional DiD regression models, such as the two-way fixed effects model or the event study model, may suffer from accuracy and validity concerns. If the sample size is small, the estimator may have a larger variance. Also, pre-tests often lack power to detect violations of the parallel trends assumption, as Roth (2022) highlights. By focusing on the bias and variance tradeoff, the proposed method derives the MSE-optimal estimator from the optimal length of pre-trends. Simulation results and an empirical application demonstrate the practical applicability of the proposed method.
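A schematic sketch of the selection rule the abstract describes: for each candidate pre-trend length, combine estimates of squared bias and variance of the corresponding DiD estimator and pick the length that minimizes their sum. How those two inputs are estimated is the substance of the paper and is not reproduced here; the function name and the toy inputs are illustrative assumptions.

```python
def mse_optimal_length(bias_sq, variance):
    """Pick the pre-trend length k minimizing MSE(k) = bias^2(k) + var(k).

    bias_sq and variance map candidate lengths k to estimated squared bias
    and variance of the DiD estimator that uses k pre-periods."""
    candidates = sorted(set(bias_sq) & set(variance))
    return min(candidates, key=lambda k: bias_sq[k] + variance[k])

# toy usage with made-up inputs: longer windows add bias, shorter add variance
bias_sq = {2: 0.00, 4: 0.01, 6: 0.04, 8: 0.09}
variance = {2: 0.10, 4: 0.05, 6: 0.03, 8: 0.02}
print(mse_optimal_length(bias_sq, variance))   # -> 4
```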
econ.EM 2026-05-07

Panel bias corrected by inverting outcome mapping at exponential rate

Approximate Operator Inversion for Average Effects in Nonlinear Panel Models

Approximate operator inversion achieves double-robust bias removal for average effects in models with moderate time dimension T.

We study the estimation of average effects in nonlinear panel data models with fixed effects when the time dimension $T$ is only moderately large. Our approach, called approximate operator inversion (AOI), offers a new perspective on bias correction. Instead of first estimating unit-specific fixed effects and then correcting the resulting plug-in bias, AOI approximately inverts the likelihood-induced mapping from the fixed-effect distribution to the outcome distribution. AOI can be interpreted as the limit of an infinitely iterated bias correction scheme, and this limit is available in closed form. We show that the bias of the AOI estimator has a rate double robustness property and converges to zero at an exponential rate in $T$ under regularity conditions. Our asymptotic theory requires $T \to \infty$, but the exponential convergence rate of the bias means that finite-sample performance is very good even for moderately large $T$. We establish asymptotic normality and provide feasible inference.
0
0
econ.EM 2026-05-07

Recentering moments yields efficient GMM under misspecification

Efficient GMM and Weighting Matrix under Misspecification

By augmenting the moment conditions and weighting them optimally, the misspecification-efficient estimator minimizes asymptotic variance for the same pseudo-true value.

abstract click to expand
This paper develops efficient GMM estimation when the moment conditions are misspecified. We observe that the influence function of the standard GMM estimator under misspecification depends on both the original moment conditions and their Jacobian, motivating a new class of estimators based on augmented moment conditions with recentering. The standard GMM estimator is a special case within this class, and generally suboptimal. By optimally weighting the augmented system, we obtain a misspecification-efficient (ME) estimator with the smallest asymptotic variance for the same GMM pseudo-true value. In linear models, the asymptotic variance of the ME estimator reduces to the textbook efficient-GMM variance formula $(G'W^{*}G)^{-1}$, where $W^{*}$ is the inverse of the variance of residualized moments after projection on the Jacobian $G$. We consider a feasible double-recentered bootstrap estimator, which can be viewed as a misspecification-robust and efficient version of the Hall and Horowitz (1996) recentered bootstrap GMM estimator, and also consider a split-sample ME estimator. Finally, we establish uniform local asymptotic minimax bounds over a class of weighting matrices. We illustrate the proposed methods in simulation and empirical examples.
0
0
econ.EM 2026-05-07

Test rejects stable preferences from singles to couples

It's complicated: A Non-parametric Test of Preference Stability between Singles and Couples

Non-parametric method finds consumption patterns in Dutch, Russian and Spanish panels inconsistent with unchanging preferences.

Figure from the paper full image
abstract click to expand
This paper develops a method to use singles' data in a non-parametric revealed preference setting of collective household choice. We use it to test the controversial assumption of preference stability between singles and couples, without data on intra-household allocation or marital transitions. We show that, under the preference-stability hypothesis, consumption choices from an endogenously matched population admit a conditional random-utility representation over counterfactual pairings of couples and singles. Preference stability is testable as a feasibility restriction on the observed marginal choice distributions. We reject the hypothesis using consumption data from the Dutch LISS, the Russian RLMS, and the Spanish ECPF panels.
0
0
econ.EM 2026-05-07

Local groups split Bellman operator into exact subproblems

Scalable Structural Estimation of Networked Infrastructure: Exact Decomposition for Localized Coordination

Decomposition lets researchers estimate coordination among 14k GPU nodes without approximation or independence assumptions.

Figure from the paper full image
abstract click to expand
Interaction effects are often economically central in environments where structural dynamic estimation becomes computationally infeasible. Under fixed group membership and sparse within-group interaction structure, the Bellman operator admits a block-diagonal decomposition that allows high-dimensional dynamic programs to be solved through independent group-level subproblems while preserving the original structural problem exactly. The result applies to a class of dynamic discrete choice models in which interactions are confined within stable local groups and state transitions depend only on within-group conditions. We apply the framework to replacement decisions across 14,344 GPU node locations in the Titan supercomputer, where operating environments differ systematically across cage positions. The structural estimates reveal significant spatial coordination: both neighboring failures and recent local replacement activity increase replacement incentives. Accounting for these interaction effects materially shifts predicted replacement timing and reveals significant misoptimization costs in benchmarks that assume conditional independence. More broadly, the results show how exploiting sparsity in interaction structures can make fully structural estimation feasible in large-scale networked systems without relying on simulation-based auxiliary moments or numerical approximation.
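The exact decomposition can be mimicked with generic value iteration run group by group; a minimal sketch, in which the state spaces, payoffs, and group labels are placeholders rather than the paper's replacement model:

    import numpy as np

    def solve_group(P, r, beta=0.95, tol=1e-10):
        """Value iteration for one group's dynamic program.
        P: (A, S, S) transition matrices, r: (A, S) flow payoffs."""
        V = np.zeros(P.shape[1])
        while True:
            Q = r + beta * (P @ V)          # (A, S): action values in each state
            V_new = Q.max(axis=0)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new

    def solve_all(groups):
        # Because interactions are confined within groups, the full Bellman equation
        # splits exactly into independent group-level subproblems.
        return {g: solve_group(P, r) for g, (P, r) in groups.items()}

    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(3), size=(2, 3))   # 2 actions, 3 states for one group
    r = rng.normal(size=(2, 3))
    print(solve_all({"cage_A": (P, r)}))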
0
0
econ.TH 2026-05-06

Full correlation neutralizes attacker edge in multi-surface AI security

The Adversarial Discount - AI, Signal Correlation, and the Cybersecurity Arms Race

The arms race ratio stays independent of surface count when threat signals fully inform detection across surfaces.

Figure from the paper full image
abstract click to expand
We study a contest-theoretic model of adversarial investment in which an attacker and a defender allocate resources to AI-augmented capabilities across multiple attack surfaces. The attacker's investment operates through two channels: it amplifies offensive potency unconditionally and erodes defensive effectiveness conditionally, generating an adversarial discount that deepens endogenously with the defender's own investment. We derive a closed-form arms race ratio decomposing the relative marginal effectiveness of offensive and defensive investment into six structural primitives and establish equilibrium uniqueness and global convergence under a continuous best-response dynamic. The central result concerns signal cross-correlation, the degree to which threat intelligence on one surface informs detection on another. With full cross-correlation, the arms race ratio is independent of the number of attack surfaces: the attacker's structural advantage from surface proliferation is completely neutralized. Under the benchmark full-dilution case, without cross-correlation, per-surface defense effectiveness vanishes as the attack surface grows. Extending the analysis to heterogeneous defenders facing an attacker who targets by expected value, we argue that the model points to a dual inefficiency: overinvestment in private defense (a zero-sum redirective externality) and underinvestment in shared signal correlation (a public good). These formal results, together with public-good reasoning outside the base model, characterize when collective information aggregation can dominate private capability investment as the decisive margin in adversarial contests.
0
0
econ.GN 2026-05-06 3 theorems

VC portfolios no better than random on big successes

Do Venture Capitalists Beat Random Allocation?

High-return tails match constrained random benchmarks, showing limited evidence of selection skill.

Figure from the paper full image
abstract click to expand
Venture capital outcomes are dominated by a small number of extreme successes, making it difficult to distinguish investor skill from favorable realizations in a highly skewed return distribution. We study this question by comparing empirical VC portfolios to a constrained random benchmark that preserves key portfolio characteristics, including timing, geography, sector composition, and portfolio size, while randomizing individual company selection. Across funding stages, empirical portfolio distributions appear remarkably close to their random benchmarks. We find no evidence that portfolio construction increases the probability of high-multiple outcomes: the right tail remains statistically indistinguishable from random allocation. Deviations in the lower part of the distribution are small and sensitive to the interpretation of zero outcomes, suggesting at most weak evidence of downside improvement. We further introduce a rank-based benchmark distribution to evaluate outperformance at each position in the cross-section. This analysis shows that even the best-performing portfolios do not exceed the outcomes expected for their rank under random sampling. Our results suggest that VC portfolio outcomes are largely consistent with constrained random allocation, highlighting the difficulty of identifying aggregate skill in heavy-tailed investment environments. A similar conclusion holds for the performance of financial analysts in predicting future earnings.
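A minimal sketch of the constrained random benchmark in Python, with hypothetical field names and a made-up success hurdle; the paper's benchmark preserves richer constraints and adds a rank-based comparison:

    import numpy as np

    def random_benchmark_tail_pvalue(universe, portfolio, n_draws=10_000, hurdle=10.0, seed=0):
        """universe: (stratum, multiple) pairs for all investable companies, where the
        stratum encodes the constraints held fixed (e.g., vintage/sector/geography).
        portfolio: (stratum, multiple) pairs actually chosen.  All inputs hypothetical."""
        rng = np.random.default_rng(seed)
        by_stratum = {}
        for stratum, mult in universe:
            by_stratum.setdefault(stratum, []).append(mult)
        counts = {}
        for stratum, _ in portfolio:
            counts[stratum] = counts.get(stratum, 0) + 1
        observed = np.mean([m >= hurdle for _, m in portfolio])
        simulated = []
        for _ in range(n_draws):
            draw = []
            for stratum, k in counts.items():
                draw.extend(rng.choice(by_stratum[stratum], size=k, replace=False))
            simulated.append(np.mean([m >= hurdle for m in draw]))
        # Share of constrained random portfolios with at least as fat a right tail.
        return float(np.mean(np.array(simulated) >= observed))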
0
0
econ.GN 2026-05-06

Interest rate determinants in open economies align with Ramsey model

The Real Interest Rate as a Control Variable in the Open Economy

With interest rate as control, open economy rates depend on discount rates and productivity expectations that raise wages.

abstract click to expand
This paper addresses the structure and dynamics of an open market economy and its relation to the real interest rate. In this respect, the paper is situated within a broad conventional literature. However, it departs from the standard approach to the interest rate by treating it as a control variable. Even so, the analysis concludes that the two main determinants of the interest rate are the future utility discount rate and expectations regarding future multifactor productivity (labor efficiency). Furthermore, increases in such expectations lead to increases in both the interest rate and wages. These results are consistent with those obtained with the Cass-Koopmans-Ramsey model.
0
0
econ.GN 2026-05-06

Fiscal G works only when every spending tool has the same output effect

Fiscal Aggregation and the Limits of IS--LM--BP: Derivations, Aggregation Bias and Reproducible Adversarial Simulations

Heterogeneous instruments require composition-weighted multipliers once the IS-LM-BP model adds debt dynamics and risk premia

Figure from the paper full image
abstract click to expand
This paper develops a formal critique of scalar fiscal aggregation in the IS-LM-BP/Mundell-Fleming framework. It shows that when fiscal policy is composed of heterogeneous instruments (current purchases, public investment, and transfers to different households), the aggregate variable G is sufficient for output analysis only under a restrictive gradient condition: all instruments must have identical marginal effects on output. The paper proves this condition, derives composition-weighted multipliers, identifies aggregation bias, and extends the open-economy IS-LM-BP model to incorporate fiscal composition, public capital, debt dynamics and risk-premium effects. A reproducible computational exercise with symbolic checks, derivative tests, accounting identities, adversarial counterexamples, sensitivity sweeps, Monte Carlo simulations and stress tests confirms the internal consistency of the argument. The contribution is methodological: IS-LM-BP remains useful as a compact equilibrium framework, but fiscal policy analysis requires vector-valued instruments and state-contingent multipliers rather than a single homogeneous spending variable.
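The gradient condition and the composition-weighted multiplier can be restated compactly (our notation, not the paper's):

\[
dY=\sum_i \frac{\partial Y}{\partial g_i}\,dg_i=\Big(\sum_i \omega_i\,\frac{\partial Y}{\partial g_i}\Big)\,dG,\qquad \omega_i=\frac{dg_i}{dG},\quad \sum_i \omega_i=1,
\]

so a single multiplier attached to the scalar \(G=\sum_i g_i\) is sufficient only if \(\partial Y/\partial g_1=\dots=\partial Y/\partial g_n\); otherwise the relevant multiplier is the composition-weighted average above and depends on how the fiscal package is split across instruments.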
0
0
econ.GN 2026-05-06

This paper examines over 23 million US worker retraining records from 2017-2023 to test…

Did US Worker Retraining Reduce Participant Automation Exposure?

WIOA retraining rarely moves participants into less automation-exposed occupations, with wage gains driven mainly by mean reversion instead…

abstract click to expand
This paper evaluates whether the U.S. Workforce Innovation and Opportunity Act (WIOA) supported American worker resilience to technological automation. Analyzing over 23 million WIOA participation records (2017-2023), we introduce the "Retrainability Index," which measures program outcomes through post-intervention wage recovery and shifts in Routine Task Intensity (RTI). We show WIOA rarely shifts workers into less automation-exposed work, with a significant portion of participants simply returning to their prior field. Successful outcomes are driven mostly by wage gains, possibly due to "catch-up" mean reversion, rather than changes in occupation. Outcomes are moderated by a person's prior occupational skill set and area of work, as well as their local economy. We find evidence that employer-led programs--notably apprenticeships--are associated with the highest incidence of success. This suggests the United States' existing public active labor market programming can support baseline wage recovery for vulnerable populations, but is not well-equipped to support large-scale, cross-industry labor transitions.
0
0
econ.EM 2026-05-06

The paper derives the efficient influence function for the local average treatment effect…

Doubly Robust Instrumented Difference-in-Differences

Efficient influence functions deliver consistency for effects on compliers if either the outcome or instrument model is correct, plus an IDiD analogue of Callaway and Sant'Anna (2021).

Figure from the paper full image
abstract click to expand
We study estimation of the local average treatment effect on the treated ($LATT$) in instrumented difference-in-differences (IDiD) designs with covariates and staggered instrument exposure. We derive the efficient influence function (EIF) of the target parameter in both panel and repeated cross-sections settings, allowing for two classes of control groups: never-exposed and not-yet-exposed. Building on the EIF, we construct doubly robust estimands and corresponding estimators from first principles. The resulting procedures are the IDiD analogues of the difference-in-differences (DiD) procedures in Callaway and Sant'Anna (2021), targeting $LATT$ rather than $ATT$. We further establish a Bloom-type result under one-sided compliance and absorbing treatment, linking $LATT$ to a convex combination of exposure-cohort-specific $ATT(g, t)$ parameters, making the connection between IDiD and DiD explicit. Asymptotic properties are established under conditions on the remainder term and either Donsker conditions or via cross-fitting. We also construct double machine learning (DML) estimators for the $LATT$ in both data settings and show their equivalence to cross-fitted estimators. Simulations assess the double robustness and finite-sample performance of the proposed methods. An implementation is available in the Python package \texttt{idid}.
0
0
econ.TH 2026-05-06 2 theorems

Public messages match or beat private ones in group approvals

Going Public: Communication in Collective Decisions

Any outcome from individualized messages can be achieved by one shared announcement, and sometimes strictly better when two agents conflict.

Figure from the paper full image
abstract click to expand
A principal and $n\ge 2$ agents can launch a project if the principal proposes it and at least $k$ agents accept. Their individual payoffs from the project depend on an ex ante unknown state. The principal can conduct a test to learn about the state and then communicate her findings to the agents via cheap talk. This paper focuses on comparing two communication regimes: public and private messaging. We show that public messaging is weakly dominant: any outcome implementable under private messaging can also be implemented under public messaging. Moreover, in a canonical environment with linear payoffs, we characterize the principal's optimal test in each regime and show that public messaging can be strictly dominant if and only if there exist two agents who are the principal's conflicting allies.
0
0
econ.TH 2026-05-05

Large effective commodity counts stabilize all equilibria

Equilibrium Stability and Uniqueness with a Large Number of Commodities and Patient Consumers

In patient-consumer economies, substitution effects scale with the discounted count of goods and dominate income effects under taste spread.

abstract click to expand
We show that a large effective number of commodities can be a source of equilibrium stability and uniqueness: expanding substitution opportunities strengthens aggregate substitution effects. We study finite dated-commodity exchange economies obtained by truncating a countably infinite-horizon environment with discounted, additively separable utilities. In this setting, the effective number of commodities is the discounted count of dated commodities, so greater patience makes distant commodities more relevant. With an appropriate normalization, equilibrium substitution effects accumulate at the rate of the effective number of commodities. When a preference diversification condition holds, equilibrium income effects grow at a lower rate. The condition is satisfied, for example, when agents have sparse or localized taste differences across commodities, or when their taste profiles become sufficiently heterogeneous as the commodity space expands. Hence, whenever the effective number of commodities is sufficiently large, every equilibrium is locally tâtonnement stable, which in turn implies equilibrium uniqueness.
0
0
econ.TH 2026-05-05

Truthful networks form as precision-sorted cliques

Truthful Communication and Exclusive Information Clubs

Agents link only with similar-precision peers to enable honest reporting, though mixing precisions could raise overall welfare.

abstract click to expand
This paper studies how the possibility of strategic misreporting shapes endogenous communication networks. Agents observe noisy private signals about a common state, form costly communication links, exchange private messages with their neighbors, and then choose actions. Payoffs reward both accuracy and coordination with linked agents. A link is valuable because it gives access to information, but it is useful only if the induced local information structure makes truthful transmission incentive compatible. We show that clique components support truthful communication: within a clique, all members observe the same profile of local signals, choose the same posterior action, and therefore have no incentive to distort reports. With heterogeneous signal precisions and convex linking costs, the core selects assortative information clubs ordered by signal precision. These stable truthful networks need not be socially efficient. Because the informational value of precision is decreasing, concentrating high-precision agents in the same club may be privately stable but socially dominated by more mixed partitions.
0
0
econ.TH 2026-05-05

Peer pressure evolves to sustain misspecified beliefs

Misspecified beliefs and the evolution of peer pressure

Stable conformity level makes agents exert true-return effort, creating self-confirming Nash without allocative loss

Figure from the paper full image
abstract click to expand
We study the emergence of conformity preferences in an environment in which agents choose effort under heterogeneous, possibly misspecified returns, and social interactions do not directly affect material payoffs. Some agents choose effort by trading off performance and conformity to expected peer behavior. We characterize subjective best responses. For any given beliefs, an optimal and unique level of peer pressure exists and is evolutionarily stable within groups of agents sharing the same misspecification. Such a level is zero for correctly specified agents and may be positive for misspecified ones. When the efficient level of peer pressure is interior, misspecified agents choose effort equal to their true return, resulting in an equilibrium behavior that is both self-confirming and Nash, allowing the persistence of misspecifications. Peer pressure need not generate long-run allocative distortions, but it creates a perceived value of social information. In equilibrium, this value depends only on misspecification, generating scope for informational rents.
0
0
econ.GN 2026-05-05

Demand shifting raises firm losses and lowers GDP by 9.1 percent

The Rise of Negative Earnings and Demand Shifting Investment

A single increase in demand scale elasticity matches the post-1980 rise in negative earnings, their persistence, earnings dispersion, and SG&A spending.

Figure from the paper full image
abstract click to expand
We document the rise of negative earnings between 1980 and 2019: a secular increase in the percent of firms reporting losses, both among public firms and in the broader universe of US corporations, and a secular increase in the persistence of losses year-to-year among public firms. This rise has occurred alongside a spreading of the sales and earnings distribution and a recomposition of firm spending away from production costs and traditional investment and towards sales, general and administrative expenses. We rationalize these phenomena with a model of heterogeneous firms engaging in supply and demand shifting investment. Our model includes a scale elasticity of demand determining the relationship between the intensive margin of demand (demand per customer) and the extensive margin of demand (number of customers). We are able to quantitatively match the rise in reported losses and qualitatively match (1) the increased persistence of losses, (2) the spreading of the sales and earnings distribution and (3) the recomposition of firm spending with this parameter as the single driver of changes across steady state equilibria. The rise in the scale elasticity associated with the increase in reported losses has non-trivial aggregate implications: in our model it lowers GDP by 9.1% by reallocating labor away from goods and capital production and reallocating demand away from productive firms.
0
0
econ.GN 2026-05-05

New RL index re-ranks which jobs AI can learn to do

What Jobs Can AI Learn? Measuring Exposure by Reinforcement Learning

Power-plant operators and conductors score high for reinforcement-learning feasibility while musicians and physicians score low, reversing existing AI-exposure rankings.

Figure from the paper full image
abstract click to expand
Which jobs can AI learn to do? We examine this for every occupation in the US economy. Existing indices measure the overlap between AI capabilities and occupational tasks rather than which tasks AI systems can learn to perform, and as a result misclassify occupations where the gap between present capability and learnability is large. Reinforcement learning in post-training, now the dominant paradigm at the frontier, is structured around task completion and maps more directly onto the task-based architecture of occupational classifications than prior approaches. Using LLM annotators guided by a rubric developed with RL experts and validated against confirmed deployment cases, we score all 17,951 ONET tasks for training feasibility and aggregate to the occupation level, producing an RL Feasibility Index. The index diverges sharply from existing AI exposure measures for specific occupation groups: power plant operators, railroad conductors, and aircraft cargo handling supervisors score high on RL feasibility but low on general AI exposure, while creative and interpersonal roles (musicians, physicians, natural sciences managers) show the reverse. These divergences carry direct implications for policy interventions.
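A minimal sketch of the aggregation step, assuming a hypothetical table of task-level scores (the column names and numbers are made up; the paper's scores come from rubric-guided LLM annotation of ONET tasks):

    import pandas as pd

    tasks = pd.DataFrame({
        "occupation": ["Power Plant Operators", "Power Plant Operators", "Musicians"],
        "rl_feasibility": [0.9, 0.8, 0.2],   # placeholder task-level scores in [0, 1]
    })
    # Aggregate task scores to the occupation level to form an RL Feasibility Index.
    rl_index = tasks.groupby("occupation")["rl_feasibility"].mean()
    print(rl_index.sort_values(ascending=False))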
0
0
econ.EM 2026-05-05

Optimal test-and-roll sample size is one third of the population

Prior-Free Sample Size Design for Test-and-Roll Experiments

A marginal welfare criterion produces this simple benchmark for Bernoulli and Gaussian outcomes without requiring priors.

Figure from the paper full image
abstract click to expand
This paper studies sample-size design for finite-population test-and-roll experiments, where a decision-maker first conducts an experiment on $m$ units and then assigns the remaining $N-m$ units to the treatment that performs better in the experiment. We consider welfare-aware sample-size choice, which involves an exploration-exploitation tradeoff: larger experiments improve the rollout decision but impose welfare losses on experimental units assigned to the inferior treatment. We show that the standard absolute minimax regret criterion can lead to implausibly small experiments by over-penalizing exploration in its worst-case objective. To address this limitation, we propose the Worst-case Marginal Benefit (WMB) rule, which compares the worst-case marginal benefit of adding one more matched pair to the experiment with the corresponding marginal exploration cost. We establish a simple rule-of-thirds benchmark. For Bernoulli outcomes, after excluding pathological cases, the WMB criterion yields the optimal sample size of $m \approx N/3$ through a Gaussian approximation. For Gaussian outcomes with a known common variance, the same benchmark arises exactly. These results provide a prior-free and practically implementable guide for welfare-based sample-size design.
0
0
econ.GN 2026-05-05

Coalitions' external fights intensify their internal conflicts

Compound Attrition Games: A Unified Model for Inter- and Intra-Coalition Rivalry

New attrition model proves unique mixed equilibrium and shows internal discord weakens external endurance.

abstract click to expand
Strategic competitions in the real world, from wars to geopolitical rivalries, often involve coalitions competing against rival groups. These contests are not simple interactions between unified entities, but multilayered processes in which coalitions face external competition while dealing with internal conflicts over resources and strategy. Existing game-theoretic models typically treat inter-coalition rivalry and intra-coalition competition separately. This paper introduces the Compound Coalition-Attrition Game (CCAG), a unified framework that integrates a war of attrition between coalitions with a simultaneous war of attrition within each coalition. In this model, the endurance of a coalition in external competition is determined by the strategic choices of its members, who compete internally for shares of the outcome. We prove the nonexistence of pure-strategy equilibria and characterize the unique mixed-strategy Nash equilibrium. The analysis reveals feedback effects: external competition intensifies internal conflict, while internal discord weakens external performance. A case study compares traditional commodity markets, including gold, copper, and silver, with cryptocurrency markets, including Bitcoin, Ethereum, and Solana, using data from 2018 to 2023 in a simulation framework. The results demonstrate applicability in industrial strategy, corporate decision-making, and geopolitical competition. The CCAG framework provides a tool for analysing complex strategic environments.
0
0
econ.EM 2026-05-05

LS-MD estimator recovers dynamic coefficients despite measurement error

Analysis of interactive fixed effects dynamic linear panel regression with measurement error

It consistently estimates the autoregressive parameter in panel regressions that include interactive fixed effects and classical measurement error.

abstract click to expand
This paper studies a simple dynamic linear panel regression model with interactive fixed effects in which the variable of interest is measured with error. To estimate the dynamic coefficient, we consider the least-squares minimum distance (LS-MD) estimation method.
0
0
econ.EM 2026-05-04 2 theorems

Eigenvalue method cuts Monte Carlo paths from 1M to 10

Fast Monte-Carlo

Approximation matches full sampling results on steady-state distributions while reducing variance.

Figure from the paper full image
abstract click to expand
This paper proposes an eigenvalue-based small-sample approximation of the celebrated Markov Chain Monte Carlo that delivers an invariant steady-state distribution that is consistent with traditional Monte Carlo methods. The proposed eigenvalue-based methodology reduces the number of paths required for Monte Carlo from as many as 1,000,000 to as few as 10 (depending on the simulation time horizon $T$), and delivers comparable, distributionally robust results, as measured by the Wasserstein distance. The proposed methodology also produces a significant variance reduction in the steady-state distribution.
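For a finite-state Markov chain, the contrast between the spectral route and path simulation can be illustrated directly; a generic sketch, not the paper's construction:

    import numpy as np

    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.2, 0.7]])        # hypothetical transition matrix

    # Spectral route: the stationary distribution is the left eigenvector of P
    # associated with eigenvalue 1, normalized to sum to one.
    vals, vecs = np.linalg.eig(P.T)
    pi_eig = np.real(vecs[:, np.argmax(np.real(vals))])
    pi_eig = pi_eig / pi_eig.sum()

    # Monte Carlo route: simulate a handful of long paths and tabulate visit frequencies.
    rng = np.random.default_rng(1)
    counts = np.zeros(3)
    for _ in range(10):                     # "as few as 10" paths, per the abstract
        s = 0
        for _ in range(5_000):
            s = rng.choice(3, p=P[s])
            counts[s] += 1
    print(pi_eig, counts / counts.sum())    # the two distributions should be close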
0
0
econ.EM 2026-05-04 2 theorems

Two-step method estimates quantiles of panel slope heterogeneity

Estimation and Inference for the τ-Quantile of Individual Heterogeneous Coefficient

The procedure targets the cross-sectional τ-quantile of individual slopes rather than outcome heterogeneity and supplies bootstrap inference.

Figure from the paper full image
abstract click to expand
This paper proposes estimation and inference procedures for the quantiles of individual heterogeneous slope coefficients within panel data. We develop a two-step quantile estimation framework for analyzing heterogeneity in individual coefficients. Unlike conventional panel quantile regression, which focuses on outcome heterogeneity, our approach targets the $\tau$-quantile of the cross-sectional distribution of individual-specific slopes. We establish asymptotic theory under both stochastic and deterministic designs, with convergence rates $\sqrt{N}$ and $\sqrt{N\sqrt{T}}$, respectively. We also develop two corresponding bootstrap procedures for practical inference, and formally establish their validity. The suggested methods are of practical interest since they require weaker sample size growth conditions than standard fixed-effect quantile regression, and accommodate large $N$ settings. Numerical simulations and an application to mutual fund performance illustrate the proposed methods and the heterogeneity patterns they reveal across quantiles.
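A minimal sketch of the two-step idea on simulated data (unit-by-unit OLS in step one, a cross-sectional quantile in step two); the paper's estimator, rates, and bootstrap inference are more involved:

    import numpy as np

    def tau_quantile_of_slopes(Y, X, tau=0.5):
        """Step 1: estimate each unit's slope by time-series OLS.
        Step 2: take the cross-sectional tau-quantile of the estimated slopes.
        Y, X: arrays of shape (N, T); bootstrap inference omitted here."""
        N, T = Y.shape
        slopes = np.empty(N)
        for i in range(N):
            design = np.column_stack([np.ones(T), X[i]])
            slopes[i] = np.linalg.lstsq(design, Y[i], rcond=None)[0][1]
        return np.quantile(slopes, tau)

    rng = np.random.default_rng(0)
    true_slopes = rng.normal(1.0, 0.5, size=200)        # heterogeneous coefficients
    X = rng.normal(size=(200, 50))
    Y = true_slopes[:, None] * X + rng.normal(size=(200, 50))
    print(tau_quantile_of_slopes(Y, X, tau=0.5))        # close to the median slope of about 1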
0
0
econ.TH 2026-05-04 3 theorems

Heterogeneity explains stochastic transitivity failures without added complexity

Is Complexity the Problem? Testing Random Choice with Heterogeneity

A revealed-preference test on two lab data sets shows that mixing different preferences reproduces observed inconsistencies, so aggregate 'complexity' estimates can be confounded with heterogeneity.

Figure from the paper full image
abstract click to expand
Economic choices are often stochastic: the same person may make a different choice when facing the same alternatives repeatedly. Standard models assume that the degree of randomness reflects the size of utility differences, but choice inconsistencies could also reflect difficulty comparing alternatives. Recent studies estimate such comparison difficulty (or "complexity") by fitting functional forms to aggregate choice data under a representative agent assumption. However, aggregate data could violate standard models of random choice simply because of heterogeneity in preferences, even in the absence of variation in comparison difficulty. This paper develops a revealed preference framework, collective rationalizability, that tests for variation in comparison difficulty from aggregate data while explicitly accounting for heterogeneity. The framework characterizes whether violations of standard models can be explained by comparison difficulty alone, heterogeneity alone, or require both. I provide a statistical test with finite-sample inference and apply the method to two existing experiments. In both cases, heterogeneity alone explains observed failures of stochastic transitivity well, demonstrating that comparison difficulty can be not only theoretically but also empirically confused with heterogeneity in aggregate data.
0
0
econ.TH 2026-05-04

Power forms balance equity and productivity in health evaluations

Integrating equity and productivity in health evaluation

Normative axioms on scale independence and transfers produce tractable functions for assessing interventions where both health and productive capacity are at stake.

abstract click to expand
This paper develops a unified framework for evaluating health outcomes that jointly incorporates equity and productivity. Extending beyond traditional QALYs, PALYs, and the more recent PQALYs, we introduce a broader class of evaluation functions that integrate equity- and productivity-sensitive conditions. By imposing several normative criteria, including independence from measurement scales and Pigou-Dalton transfer principles, we obtain tractable power-form representations. In balancing equity and efficiency, the framework provides a coherent foundation for assessing interventions in contexts where both health and productive capacity are at stake.
0
0
econ.TH 2026-05-04

VCG job matching is firm-rational only under weak substitutes

Strategy-proof and Efficient Job Matching with Participation Constraints

The standard mechanism works for workers automatically but needs firm utilities to be weakly substitutable or submodular before firms will participate.

abstract click to expand
We study the design of strategy-proof and efficient mechanisms satisfying participation constraints in the job-matching problem. Each firm can hire multiple workers and each worker can be employed at only one firm. While firm utilities over subsets of workers are common knowledge, worker disutilities for working at each firm are private information. The VCG mechanism is the unique mechanism that is strategy-proof, efficient, and individually rational for workers; however, it may not be individually rational for firms. We show that the VCG mechanism is individually rational for firms if and only if firm utilities satisfy a condition called weak substitutes. We then strengthen the participation constraints of firms to strong individual rationality, which requires that each firm has no incentive to fire some of the workers assigned to it. The VCG mechanism is strongly individually rational if and only if firm utilities satisfy submodularity.
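The submodularity condition behind strong individual rationality can be checked mechanically on a small instance; the utility numbers below are made up, and the weaker 'weak substitutes' condition is a separate requirement defined in the paper:

    from itertools import combinations

    def is_submodular(u, workers):
        """u: dict from frozenset of workers to firm utility (hypothetical numbers).
        Checks u(S+{w}) - u(S) >= u(T+{w}) - u(T) for all S subset of T and w not in T."""
        subsets = [frozenset(c) for r in range(len(workers) + 1)
                   for c in combinations(workers, r)]
        for S in subsets:
            for T in subsets:
                if not S <= T:
                    continue
                for w in workers:
                    if w in T:
                        continue
                    if u[S | {w}] - u[S] < u[T | {w}] - u[T]:
                        return False
        return True

    workers = ("a", "b")
    u = {frozenset(): 0, frozenset("a"): 3, frozenset("b"): 2, frozenset("ab"): 4}
    print(is_submodular(u, workers))   # True: marginal gains from adding a worker shrink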
0
0
econ.EM 2026-05-04

Exact analytical expressions for the Gauss-Cauchy convolution density enable stable…

Exact Likelihood Inference and Robust Filtering for Gauss-Cauchy Convolution Models

Closed-form likelihoods using the error function avoid numerical methods and discount outliers in state-space estimation.

Figure from the paper full image
abstract click to expand
The convolution of a Gaussian and a Cauchy distribution, known as the Voigt distribution, is widely used in spectroscopy and provides a natural framework for modeling heavy-tailed measurement noise. We derive analytical expressions for its density, score, Hessian, and conditional moments using the scaled complementary error function, enabling stable maximum likelihood estimation without numerical convolution, finite-difference derivatives, or pseudo-Voigt approximations. The conditional expectation of the latent Gaussian component is governed by a redescending location score, so extreme observations are automatically discounted rather than propagated. This structure motivates the Gauss-Cauchy Convolution (GCC) filter for state-space models with Gaussian latent dynamics and heavy-tailed measurement errors. In an application to log realized volatility for the Technology Select Sector SPDR Fund, the GCC filter separates persistent latent variation from transient measurement noise and improves on Gaussian, Student-$t$, Huber, and related robust alternatives.
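The density in question is the Voigt profile, which can be evaluated through the Faddeeva function (the scaled complex error function); a short sketch with arbitrary parameter values:

    import numpy as np
    from scipy.special import wofz

    def voigt_logpdf(x, mu=0.0, sigma=1.0, gamma=0.5):
        """Log-density of the Gauss-Cauchy convolution (Voigt profile), evaluated via
        the Faddeeva function w(z) rather than numerical convolution."""
        z = ((x - mu) + 1j * gamma) / (sigma * np.sqrt(2.0))
        density = np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))
        return np.log(density)

    x = np.array([-3.0, 0.0, 3.0, 30.0])
    print(voigt_logpdf(x))    # note the slowly decaying, Cauchy-type tail at x = 30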
0
0
econ.EM 2026-05-04 4 theorems

Orthogonal estimator recovers BLP price coefficients with many characteristics

Estimation of BLP models with high-dimensional controls

Price sensitivity stays root-T consistent even when features outnumber markets, under the assumption that consumers focus on few traits.

abstract click to expand
This study proposes a framework for estimating demand in differentiated product markets with high dimensional product characteristics, building upon the seminal Berry, Levinsohn, and Pakes (1995) model, using market level data. We allow for a very large set of potential product characteristics, where the number of characteristics may exceed the number of market observations. Our contributions are twofold. First, we establish a general estimation theory for BLP models featuring high-dimensional nuisance parameters. We propose a Neyman orthogonal estimator specifically adapted to this framework, utilizing machine learning techniques, such as Lasso, to construct nuisance parameter estimators that are plugged into the Neyman orthogonal estimator. This approach offers a significant advantage: it achieves $\sqrt{T}$-asymptotic normality for parameters of interest--such as the price coefficient and price heterogeneity--even when nuisance parameters are estimated at slower rates due to their high dimensionality. Second, we apply this theory to a specialized BLP model under approximate sparsity, developing an estimation strategy for the high-dimensional nuisance parameters. The approximate sparsity condition posits that nuisance parameters can be controlled, up to a small approximation error, by a small and unknown subset of variables. In an economic context, this implies that while products have a vast array of characteristics, consumers focus on only a small subset of these due to bounded rationality. This condition makes the recovery of parameters of interest feasible by enabling nuisance parameter estimators to converge at the required rates. The practical performance of the method is evaluated through comprehensive Monte Carlo simulations, which demonstrate its efficacy in finite samples.
0
0
econ.EM 2026-05-04 3 theorems

This paper introduces a Hall-Sandpile model that applies a physics analogy of sideways…

Hall-Like Transversal Stress and Sandpile Criticality on Real Production Networks

The Hall-Sandpile model on WIOD networks generates four ordered regimes of instability where mean avalanche size and large-event…

Figure from the paper full image
abstract click to expand
This paper develops a Hall-Sandpile model of economic instability that combines a Hall-like transversal stress mechanism with sandpile threshold dynamics on a real production-network substrate. In analogy with the physical Hall effect, where exposed flows under an external field generate stress in a transversal direction, we model economic shocks as fields that act on flow-intensive, low-redundancy, low-capacity nodes and produce systemic stress through a multiplicative conversion function. The accumulated stress drives a discrete toppling rule and an avalanche dynamics whose effective activation threshold declines with transversal exposure. The model is calibrated on annual World Input--Output Database (WIOD) production networks for 2000--2014 and simulated on the 2014 substrate (2,283 country--sector nodes) under three alternative propagation normalisations to avoid mechanical near-criticality from row-stochastic operators. Controlled Monte Carlo experiments over external field intensity and redundancy stress generate four ordered regimes: stable absorption, latent fragility, critical transition, and avalanche regime. Mean avalanche size and the probabilities of finite-size systemic events $\Pr(S\!\geq\!5)$, $\Pr(S\!\geq\!10)$ and $\Pr(S\!\geq\!20)$ rise jointly with field intensity and redundancy stress. Tail diagnostics show regime-dependent thickening of the avalanche distribution, but the estimated tail indices remain too high to interpret as evidence of universal power-law criticality. The contribution is therefore a finite-size, real-network description of how transversal stress activates structural fragility, not a claim of self-organised criticality in the global economy.
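A generic threshold-toppling cascade on a weighted network conveys the avalanche mechanics; this is a minimal sketch with placeholder inputs, not the paper's calibrated Hall-stress conversion or declining thresholds:

    import numpy as np

    def avalanche_size(W, stress, thresholds, shocked_node, shock, max_topplings=10_000):
        """Add a shock; any node whose stress reaches its threshold topples, passes its
        stress to neighbours along outgoing weights, and resets; return the number of
        topplings before the system settles (capped for safety)."""
        stress = np.asarray(stress, dtype=float).copy()
        stress[shocked_node] += shock
        size = 0
        while size < max_topplings:
            over = np.nonzero(stress >= thresholds)[0]
            if over.size == 0:
                break
            i = over[0]
            stress += W[i] * stress[i]      # redistribute along row i's outgoing links
            stress[i] = 0.0
            size += 1
        return size

    rng = np.random.default_rng(2)
    W = 0.4 * rng.random((6, 6)) * (rng.random((6, 6)) < 0.4)
    np.fill_diagonal(W, 0.0)
    print(avalanche_size(W, stress=np.full(6, 0.8), thresholds=np.ones(6),
                         shocked_node=0, shock=0.5))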
0
0
econ.GN 2026-05-04

Remote jobs deliver larger wage gains and promotions than office roles

Remote work expands pathways to upward career mobility

48 million transitions show the biggest boosts for lower-income workers from places with fewer high-skill opportunities by loosening where a job can be performed.

abstract click to expand
Geographic constraints have long structured access to high-growth career opportunities, concentrating upward mobility within a limited set of cities and organizations. The expansion of remote work potentially alters this opportunity structure by decoupling job matching from physical proximity, yet its implications for career mobility remain unclear. Using 48 million U.S. job transitions between 2020 and 2024 linked to employer-level measures of remote eligibility, we estimate how entering remote-eligible jobs shapes career outcomes at job transitions. Workers entering remote-eligible jobs experience significantly higher wage growth and higher rates of upward seniority mobility than comparable workers entering fully on-site roles. These transitions are also associated with greater cross-metropolitan job mobility and moves toward smaller, less prestigious employers. Importantly, effects are largest among lower-income workers and those originating from regions with limited high-skill opportunity density. Together, the findings indicate that remote work relaxes geographic constraints in job matching, reshaping the distribution of upward mobility across places and workers.
0
0
econ.TH 2026-05-04

Unique contract adverse selection reduces to stochastic target control

Principal-agent problems with adverse selection: A stochastic target problem formulation

The agent's hidden cost problem becomes a reachable set that constrains the principal's optimization and values screening alternatives.

Figure from the paper full image
abstract click to expand
We study a principal-agent problem with adverse selection, where the principal does not know the agent's true cost but must design a contract to optimize a specific criterion. Unlike standard screening frameworks that allow for self-selection, we assume the principal can only offer a unique contract. We show that the agent's optimization problem can be reformulated as a stochastic target problem. After characterizing the credible domain of this target problem, we show that the principal's objective can be solved as a stochastic optimal control problem with partial information and state constraints. The description of the credible domain also allows us to obtain the value of screening contracts.
0
0
econ.EM 2026-05-04

Penalized likelihood ensures existence in sparse network models

Penalized Likelihood for Dyadic Network Formation Models with Degree Heterogeneity

It corrects incidental-parameter bias for coefficients and partial effects without trimming agents or assuming bounded fixed effects.

abstract click to expand
Estimating network formation models with degree heterogeneity raises two problems in empirical networks. First, agents that send no links, receive no links, or link to all remaining agents can make the fixed-effects MLE fail to exist. Trimming these agents changes the estimation sample and induces selection bias. Second, the incidental-parameter problem biases common parameters and average partial effects. We resolve both issues through a penalized likelihood approach. Our leading specification is a directed network model with reciprocity, nesting the standard undirected and non-reciprocal directed models. The penalty guarantees finite-sample existence and yields bias corrections for coefficients and partial effects. We establish asymptotic results without imposing compactness on the fixed-effects. Allowing the fixed effects to diverge at a logarithmic rate, our asymptotic framework accommodates the degree sparsity ubiquitous in large empirical networks. A global trade application demonstrates that our estimator avoids selection bias and recovers robust parameters where conventional methods fail.
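A stripped-down penalized dyadic likelihood with sender and receiver effects shows where the penalty enters; the logit link and ridge-type penalty here are illustrative choices, not the paper's specification:

    import numpy as np

    def penalized_neg_loglik(params, links, X, lam=1.0):
        """links: (N, N) 0/1 adjacency with no self-links; X: (N, N) dyadic covariate.
        params = [beta, sender effects (N), receiver effects (N)].  The penalty keeps
        the fixed effects finite even for agents with zero or full degree."""
        N = links.shape[0]
        beta, a, g = params[0], params[1:1 + N], params[1 + N:]
        off_diag = ~np.eye(N, dtype=bool)
        eta = beta * X + a[:, None] + g[None, :]
        loglik = links * eta - np.log1p(np.exp(eta))
        return -loglik[off_diag].sum() + lam * (a @ a + g @ g)

    links = np.array([[0, 1, 0], [1, 0, 1], [0, 0, 0]])
    X = np.arange(9, dtype=float).reshape(3, 3)
    print(penalized_neg_loglik(np.zeros(7), links, X))
    # A full fit would pass this objective to a numerical optimizer such as
    # scipy.optimize.minimize.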
0
0
econ.TH 2026-05-04

Kantians neutralize Nash free-riding via strategy rescaling

Strategy Rescaling and the Stability of Kantian Optimization

In symmetric games this choice makes Kantian optimization the stable outcome under both dynamic best-response and evolutionary dynamics.

abstract click to expand
This study investigates the properties and stability of the Multiplicative Kantian Equilibrium (MKE) in symmetric games. We first demonstrate that MKE lacks strategic equivalence: the Kantian best-response function is not invariant under monotonic strategy rescaling. This strategic non-equivalence implies that the choice of measurement scale - a subjective interpretation of the game - materially impacts equilibrium outcomes. Exploiting this non-equivalence, in a game where players may be Kantian or Nasher, we propose an efficient strategy rescaling that allows Kantians to neutralize the free-rider advantage of Nashers, while preserving Pareto-efficient outcomes among themselves. In a dynamic framework, we show that the subgame-perfect Nash equilibrium with endogenous choice of optimization type leads all players to prefer Kantian optimization over Nash optimization. In an evolutionary setup, we show that Kantian optimization is an evolutionarily stable strategy (ESS). Our results suggest that the inherent strategic non-equivalence of Kantian optimization provides a robust pathway to stable cooperation.
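The multiplicative Kantian protocol and the rescaling issue can be stated in one line (standard notation for this equilibrium concept, not necessarily the paper's):

\[
x^{\ast}\ \text{is a multiplicative Kantian equilibrium if, for every player } i,\quad r=1 \in \arg\max_{r>0}\, u_i\big(r x_1^{\ast},\dots,r x_n^{\ast}\big),
\]

that is, no player would want everyone to scale their strategies up or down by a common factor. Because the condition is expressed through a common multiplicative factor, re-labelling strategies by a monotonic transformation \(y_i=\phi(x_i)\) generally changes the equilibrium set, which is the non-equivalence the paper exploits.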
0
0
econ.EM 2026-05-04

LS estimator distribution unchanged when over-including panel factors

Linear Regression for Panel With Unknown Number of Factors as Interactive Fixed Effects

As long as the number of factors used is not smaller than the true number, the asymptotic law of the regression coefficients does not depend on how many extra factors are included.

Figure from the paper full image
abstract click to expand
In this paper we study the least squares (LS) estimator in a linear panel regression model with unknown number of factors appearing as interactive fixed effects. Assuming that the number of factors used in estimation is larger than the true number of factors in the data, we establish the limiting distribution of the LS estimator for the regression coefficients as the number of time periods and the number of cross-sectional units jointly go to infinity. The main result of the paper is that under certain assumptions the limiting distribution of the LS estimator is independent of the number of factors used in the estimation, as long as this number is not underestimated. The important practical implication of this result is that for inference on the regression coefficients one does not necessarily need to estimate the number of interactive fixed effects consistently.
0
0
econ.EM 2026-05-04

Bias correction fixes LS estimates in dynamic panels with factors

Dynamic Linear Panel Regression Models with Interactive Fixed Effects

Two sources of asymptotic bias are removed, restoring consistency and chi-squared tests as N and T grow large.

abstract click to expand
We analyze linear panel regression models with interactive fixed effects and predetermined regressors, for example lagged-dependent variables. The first-order asymptotic theory of the least squares (LS) estimator of the regression coefficients is worked out in the limit where both the cross-sectional dimension and the number of time periods become large. We find two sources of asymptotic bias of the LS estimator: bias due to correlation or heteroscedasticity of the idiosyncratic error term, and bias due to predetermined (as opposed to strictly exogenous) regressors. We provide a bias-corrected LS estimator. We also present bias-corrected versions of the three classical test statistics (Wald, LR, and LM test) and show their asymptotic distribution is a chi-squared distribution. Monte Carlo simulations show the bias correction of the LS estimator and of the test statistics also work well for finite sample sizes.
0
0
econ.EM 2026-05-04

Two-step estimator adds interactive fixed effects to BLP demand models

Estimation of random coefficients logit demand models with interactive fixed effects

Handles arbitrary correlation between unobserved characteristics and prices in market-share data.

Figure from the paper full image
abstract click to expand
We extend the Berry, Levinsohn and Pakes (BLP, 1995) random coefficients discrete-choice demand model, which underlies much recent empirical work in IO. We add interactive fixed effects in the form of a factor structure on the unobserved product characteristics. The interactive fixed effects can be arbitrarily correlated with the observed product characteristics (including price), which accommodates endogeneity and, at the same time, captures strong persistence in market shares across products and markets. We propose a two-step least squares-minimum distance (LS-MD) procedure to calculate the estimator. Our estimator is easy to compute, and Monte Carlo simulations show that it performs well. We consider an empirical illustration to US automobile demand.
0
0
econ.GN 2026-05-04

Self-devaluing certificates create honest money for AI agents

RSDM: The Consensus Honest Money in the AI Era

RSDM records gradual metal-weight decay on deposit receipts to replace storage fees and resist fiat depreciation in global AI transactions.

abstract click to expand
The medium of exchange of the traditional economy is mainly the fiat currency of each country or region, and when cross-border transactions occur, they need to be settled according to the exchange rate. In the AI world, however, the medium of exchange tends to be a globally recognized currency. Especially when AI acts as an agent for cross-border capital pools and cross-cyclical asset allocation, it needs a sound money that can resist the depreciation of fiat currency and store long-term value. Therefore, we propose a globally consensual and universally accepted monetary rule framework for the AI era. The devaluation of money runs through almost the whole of history, from the weight reduction and purity decrease of metallic coin to the unanchored over-issuance of paper currency. Whether it is the periodic compulsory recoinage in medieval Europe or Gesell's stamp scrip, both are essentially mechanisms for taxing money holdings. Unlike Gesell's stamp scrip, Redeemable Self-Decaying/Devaluing Money (RSDM) is a tokenized commodity money. Its essential innovation is to cover the storage fee of metal coins through the self-devaluing of the metal weight recorded on the deposit certificate (warehouse receipt) of the metal coins. In a sense, RSDM is an innovative version of Jiaozi (a deposit receipt for base metal coin that emerged in Sichuan, China, about a thousand years ago). In this paper, we propose five forms of online and offline issuance of RSDM, providing a prototype for creating a globally recognized modern honest money.
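A minimal formalization of the self-devaluing rule, assuming a constant per-period devaluation rate (our notation; the paper's issuance variants may specify the decay differently):

\[
w(t) = w_0\,(1-\delta)^{t},
\]

where \(w_0\) is the metal weight recorded on the receipt at issuance and \(\delta\) is the per-period devaluation rate that substitutes for an explicit storage fee.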
0
0
econ.GN 2026-05-01

Wealth sets energy-efficiency adoption threshold

Optimal Consumption and Investment with Energy-Efficiency Adoption

New model shows households adopt when wealth crosses a price- and uncertainty-dependent level, and subsidies steer total energy use.

abstract click to expand
Despite many decades of research, economically grounded models that analyse energy consumption and energy-efficiency adoption within a unified framework remain underdeveloped. This article addresses this gap by proposing a model of consumption, investment, and energy-efficiency adoption under uncertainty. It develops new definitions of the rebound and backfire effects, and integrates their welfare implications into a model of optimal subsidy design. Macro-level technology diffusion and energy consumption across heterogeneous agents are also formalised. Explicit results for core objects are derived, including the adoption threshold and post-adoption strategies, and these are shown to depend on agent wealth, introducing a novel channel through which financial conditions influence technology-adoption decisions. An approximation scheme is proposed to estimate welfare implications explicitly. Adoption of energy efficiency is shown to be welfare improving in the main. A detailed case study of a representative German single-family home illustrates the theoretical results. Numerical analysis indicates that the subsidy policy effectively steers aggregate energy consumption.
0
0
econ.TH 2026-04-30

Nash equilibria with three or more randomizers are generically improvable

Extreme Equilibria: The Benefits of Correlation

Correlation yields strictly better expected payoffs in almost all such games for Pareto and utilitarian objectives.

abstract click to expand
Correlated equilibria arise naturally when agents communicate or rely on intermediaries such as recommendation systems. We study when a given Nash equilibrium can be improved within the set of correlated equilibria for general objectives. Our key insight is a detail-free criterion: any Nash equilibrium with three or more randomizing agents is generically improvable. We refine this insight to specific classes of games and objectives, including Pareto and utilitarian welfare, and provide constructive methods to obtain improvements. Our findings underscore the ubiquity of improvable Nash equilibria and the crucial role of correlation in enhancing strategic outcomes.
0
0
econ.EM 2026-04-30

Subsampling validates inference in serially correlated two-way clustered panels

Subsampling Under Two-way Clustering with Serial Correlation

Partitioned individuals and consecutive time blocks produce correct intervals even under non-Gaussian limits that break prior methods.

Figure from the paper full image
abstract click to expand
We prove the validity of the subsampling method for inference under a two-way clustered panel in which the time effects are serially correlated. Subsamples should be drawn without replacement from a randomly partitioned individual index set and from consecutive blocks of time effects. We present two subsampling inference methods: estimating the quantiles directly and constructing the confidence interval by first estimating the asymptotic variance. The quantile method is very adaptive, allowing for non-Gaussian limits that invalidate all existing methods in two-way clustering with serial correlation. Although the variance method only works under a Gaussian limit, it comes with a data-driven bandwidth selection algorithm and a bias correction under suitable estimators. Monte Carlo simulations demonstrate that our methods exhibit the desired coverage level in finite samples except when the serial correlation is extremely strong. This paper is the first to allow for inference under non-Gaussian asymptotics in two-way clustering with serial correlation.
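A minimal sketch of the subsampling scheme described above, with hypothetical dimensions; computing the statistic on each subpanel and reading off quantiles (or a variance estimate) is then the inference step:

    import numpy as np

    def two_way_subsamples(N, T, n_sub, t_block, n_draws, seed=0):
        """Draw individuals without replacement (one cell of a random partition) and
        take consecutive blocks of time periods; returns (individual ids, time range)."""
        rng = np.random.default_rng(seed)
        draws = []
        for _ in range(n_draws):
            ids = rng.permutation(N)[:n_sub]
            start = rng.integers(0, T - t_block + 1)
            draws.append((ids, np.arange(start, start + t_block)))
        return draws

    for ids, times in two_way_subsamples(N=100, T=40, n_sub=20, t_block=10, n_draws=3):
        print(len(ids), int(times[0]), int(times[-1]))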
0
0
econ.EM 2026-04-30

IFE estimator fails to recover ATT when heterogeneity has factor structure

Treatment-effect heterogeneity and interactive fixed effects: Can we control for too much?

The interactive fixed effects absorb treatment variation and create bias when effects follow a linear factor model, unlike methods that drop treated units from the factor estimation in post-treatment periods.

abstract click to expand
This paper studies the interactive fixed effects (IFE) estimator in a panel-data setting with heterogeneous treatment effects. We show that, if the treatment-effect heterogeneity admits a linear factor structure, the IFE estimator could fail to recover the average treatment effect on the treated units. The problem arises because the interactive fixed effects absorb the heterogeneity in the treatment effect, creating a bad-control problem. With time-invariant factors or unit-invariant loadings in the treatment effect heterogeneity, identification may further break down due to multicollinearity. These problems are not present in alternative estimation methods that exclude treated units in post-treatment periods from the factor estimation.
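To see the absorption mechanism, consider a schematic data-generating process; the notation here is assumed for illustration and need not match the paper's:

\[
Y_{it} = \lambda_i' F_t + D_{it}\,(\tau_0 + \gamma_i' G_t) + \varepsilon_{it}.
\]

When the heterogeneity term \(\gamma_i' G_t\) itself has a low-rank factor structure, the estimated interactive fixed effects can partially span \(D_{it}\,\gamma_i' G_t\), so the coefficient on \(D_{it}\) need not converge to the ATT, \(\mathbb{E}[\tau_0 + \gamma_i' G_t \mid \text{treated}]\): part of the treatment-effect variation is soaked up as if it were a control.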
0
0
econ.GN 2026-04-30

Index scores prediction-market moves by credibility

The Signal Credibility Index for Prediction Markets: A Microstructure-Grounded Diagnostic with Weighted and Time-Varying Extensions

Persistence on logit prices plus flow concentration separates durable Bayesian updates from liquidity and strategic noise.

Figure from the paper full image
abstract click to expand
Prediction-market price moves are widely treated as informationally equivalent: a price jump is read the same way regardless of whether it reflects durable Bayesian updating, transient liquidity pressure, strategic position adjustment, or genuine disagreement. This paper formalizes the Signal Credibility Index (SCI) introduced in Nechepurenko (2026) as a stand-alone diagnostic. We make four contributions: (i) a revised persistence component using the persistence ratio PR(t,w) on logit prices, well-defined on short rolling windows; (ii) a weighted Cobb-Douglas form SCI(α) with flow-based concentration HHI_flow; (iii) a time-varying specification SCI(t; w) for real-time monitoring; and (iv) Monte Carlo validation including an out-of-distribution stress test, coordinated multi-wallet manipulation, and a logistic-regression benchmark. The validation establishes discrimination among designed microstructure regimes, not external evidence of downstream coordination effects. We document two failure modes consistent with the index targeting coordination credibility rather than pure information content: a Type II error on informed-but-concentrated whale repricing, and a Type I error on coordinated multi-wallet manipulation.
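The compositional structure of such an index can be illustrated with a minimal sketch. The exact definitions of PR(t,w), HHI_flow, and the Cobb-Douglas weights are the paper's; the formulas, window length, and weight below are simplified stand-ins chosen only to show how a persistence component and a flow-concentration component combine.

import numpy as np

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def persistence_ratio(prices, w):
    """Stand-in persistence measure on logit prices over a rolling window of length w:
    net move divided by total absolute movement (1 = fully persistent, near 0 = churn).
    The paper's PR(t, w) may differ; this is only illustrative."""
    x = logit(np.asarray(prices, dtype=float))[-w:]
    steps = np.diff(x)
    total = np.abs(steps).sum()
    return float(np.abs(steps.sum()) / total) if total > 0 else 0.0

def hhi_flow(volumes_by_wallet):
    """Herfindahl-Hirschman concentration of order flow across wallets."""
    v = np.asarray(volumes_by_wallet, dtype=float)
    s = v / v.sum()
    return float((s ** 2).sum())

def sci(prices, volumes_by_wallet, w=12, alpha=0.5):
    """Weighted Cobb-Douglas composite of persistence and flow dispersion (illustrative form)."""
    pr = persistence_ratio(prices, w)
    dispersion = 1.0 - hhi_flow(volumes_by_wallet)
    return pr ** alpha * dispersion ** (1.0 - alpha)

# The same persistent repricing scores high when flow is broad-based and low when one wallet dominates.
prices = [0.40, 0.43, 0.47, 0.50, 0.52, 0.55, 0.57, 0.58, 0.60, 0.61, 0.62, 0.63, 0.64]
print(sci(prices, volumes_by_wallet=[10, 9, 11, 8, 12]))   # dispersed flow -> higher score
print(sci(prices, volumes_by_wallet=[95, 2, 1, 1, 1]))     # concentrated flow -> lower score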
4 0
0
econ.TH 2026-04-30

Finite stable matching results transfer to large economies

Many-to-many stable matching in large economies

A mechanical method shows tree-stable and pairwise-stable outcomes exist when agents are individually insignificant.

abstract click to expand
We study stability notions for networked many-to-many matching markets with individually insignificant agents in distributional form. Outcomes are formulated as joint distributions over characteristics of agents and contract choices. Characteristics can lie in an arbitrary Polish space. We provide a mechanical method for transferring existence results for finite matching models to large matching models for many stability notions. In particular, we show that tree-stable and pairwise-stable outcomes exist.
0
0
econ.EM 2026-04-30

Staggered DiD estimator consistent if either model is correct

Doubly robust local projections difference-in-differences

DRLPDID targets the same ATT as LP-DiD but stays valid when only one of two auxiliary models is right.

Figure from the paper full image
abstract click to expand
This paper develops a doubly robust extension of local-projections difference-in-differences (LP-DiD) for staggered absorbing treatments. The resulting estimator, DRLPDID, preserves the LP-DiD local-stack ATT target and is consistent when either the local untreated-outcome regression or the local treatment-probability model is correctly specified. It also delivers influence-function-based inference for post-treatment summaries and multiplier-bootstrap bands for dynamic paths. In Monte Carlo designs with covariate-driven selection, DRLPDID matches regression-adjusted LP-DiD under outcome-model alignment and clearly outperforms the IPT-only variant under propensity-score misspecification. In the no-fault-divorce application, DRLPDID tracks robust staggered-adoption estimators and is less negative than unadjusted LP-DiD.
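The double-robustness logic can be illustrated with a generic AIPW-style ATT estimator for a single pre/post comparison. This is a textbook-style sketch under simplifying assumptions (two periods, absorbing treatment, linear and logistic working models), not the paper's DRLPDID implementation, which operates on LP-DiD local stacks with staggered adoption.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def dr_att(y_post, y_pre, d, X):
    """Doubly robust ATT for a two-period design with an absorbing 0/1 treatment d.

    Combines an outcome regression for the untreated outcome change with an
    inverse-probability-weighting correction; the estimator is consistent if
    either the outcome model or the propensity model is correctly specified.
    All inputs are numpy arrays; X holds pre-treatment covariates.
    """
    dy = y_post - y_pre                                   # first-differenced outcome
    outcome = LinearRegression().fit(X[d == 0], dy[d == 0])
    mu0 = outcome.predict(X)                              # predicted untreated change for all units
    ps = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
    ps = np.clip(ps, 1e-3, 1 - 1e-3)                      # guard against extreme propensities
    # AIPW-style moment: treated units contribute their change net of mu0;
    # untreated units enter through a reweighted residual that corrects mu0.
    correction = (1 - d) * ps / (1 - ps) * (dy - mu0)
    return np.mean(d * (dy - mu0) - correction) / d.mean()

If the outcome regression is right, the correction term has mean zero; if the propensity model is right, the reweighted residuals remove any bias in mu0 — which is the sense in which only one of the two auxiliary models needs to be correct.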
0

browse all of econ → full archive · search · sub-categories