pith. machine review for the scientific record.

econ.TH

Theoretical Economics

Includes theoretical contributions to Contract Theory, Decision Theory, Game Theory, General Equilibrium, Growth, Learning and Evolution, Macroeconomics, Market and Mechanism Design, and Social Choice.

econ.TH 2026-05-14 2 theorems

Extended SBA adds two-layer architecture for strategic scenarios

Extended Scenario Bundle Analysis: A Formal Framework for Strategic Scenario Modeling

Separates static database from dynamic trees and adds beliefs, desires, and commitments without needing full payoffs or probabilities

Figure from the paper
abstract
Strategic crisis analysis needs representations that combine qualitative expert judgement, explicit interdependence, and auditable update rules without requiring fully specified payoffs or probabilities. Scenario Bundle Analysis (SBA), developed by Amos Perlmutter and Reinhard Selten, provides such a starting point, but the original formulation leaves several database, topology, and update interfaces implicit. This paper presents a formal refinement and extension of the original SBA framework, introducing a two-layer architecture that separates a static scenario database from a dynamic scenario tree system. The extended framework incorporates a richer attitude vocabulary: beliefs, desires, intentions, fears, and coalitional commitments, with expectations treated as doxastic attitudes. It also adds a domain/modifier layer for contextual framing, a topology on admissible scenario spaces, typed assessment-state updates, and multi-criteria evaluation. Mathematical definitions are stated with sufficient precision to support computational implementation.
econ.TH 2026-05-14 2 theorems

Signal precision can lower screening accuracy when high

Pitfall of Precision in Noisy Signaling

Low-quality agents increase mimicry as noise falls, outweighing the principal's informational gain.

Figure from the paper
abstract
A principal decides whether to approve an agent based on a noisy signal (e.g., test scores) generated by the agent. High-quality agents can produce high signals on average at lower cost, but the realizations are subject to noise that depends on the screening technology's precision. We uncover a paradoxical "pitfall of precision": when precision is already high, further improvements reduce screening accuracy and lower the principal's welfare. This occurs because greater precision incentivizes strategic signaling from more low-quality agents, outweighing the direct benefit from improved precision. The pitfall of precision also has implications for statistical discrimination: groups with noisier technologies face lower approval rates yet may be favored ex ante -- a reversal of discrimination. We also examine how commitment power helps mitigate the pitfall.
econ.TH 2026-05-11 3 theorems

Inequality in job search effort lowers matching rates

The Matching Function: A Unified Look into the Black Box

Tracing the matching function to applicant-vacancy networks shows dispersion of search intensities on both sides reduces match efficacy.

Figure from the paper
abstract
In this paper, we use tools from network theory to trace the properties of the matching function to the structure of granular connections between applicants and vacancies. We unify seemingly disparate parts of the literature by recovering multiple functional forms as special cases, including the CES. We derive a testable condition under which matching in any network from the broad class we analyze can be thought of "as if" it comes from a CES matching function, up to a first-order approximation. We provide a theory of match efficacy in which inequality in search intensities is the key determinant of how well the matching process works. A robust finding of our analysis is that dispersion of search intensities on either side of the market is bad for the matching process. We also show that a rise in the market's mean search intensity can reduce match efficacy when it is associated with a higher Gini coefficient of search intensities.
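The dispersion results above are stated in terms of the Gini coefficient of search intensities. As a reference point, a minimal stdlib sketch of that statistic (the function name is mine, not the paper's):

```python
def gini(intensities):
    """Gini coefficient of a list of non-negative search intensities.

    0 means equal search effort across agents; values near 1 mean
    effort is concentrated in a few searchers.
    """
    x = sorted(intensities)
    n = len(x)
    total = sum(x)
    if total == 0:
        return 0.0
    # Standard formula: G = 2 * sum_i i*x_(i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

# Equal intensities: no dispersion
print(gini([1, 1, 1, 1]))   # 0.0
# One very active searcher raises dispersion
print(gini([1, 1, 1, 9]))   # 0.5
```

Holding the mean fixed, a higher value signals more unequal search effort across agents, which is the configuration the paper associates with lower match efficacy.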
econ.TH 2026-05-11 2 theorems

Non-CARA preferences leave prices only partially revealing

On the Possibility of Informationally Inefficient Markets Without Noise

Without noise traders, the mismatch between demand aggregation and Bayesian updating creates a Jensen gap that preserves private information

Figure from the paper
abstract
Noise traders can be dispensed with entirely. Partial revelation of information through prices arises under any non-exponential expected utility preference, including CRRA, without noise traders, random endowments, supply shocks, hedging motives, or behavioral biases. The model contains zero exogenous noise. The mechanism is a mismatch between the space in which market clearing aggregates signals and the Bayesian sufficient statistic. CARA demand is linear in log-odds, so prices aggregate in log-odds space and reveal the statistic exactly. Every other preference aggregates differently; the resulting Jensen gap makes revelation partial. I prove that CARA is the unique fully revealing preference class, characterize the rational expectations equilibrium via a contour integration fixed point, and verify that partial revelation survives learning from prices. The Grossman-Stiglitz paradox is resolved: information acquisition has positive value within the rational class. Numerical solution of the rational expectations fixed point at K = 3 confirms partial revelation, positive trade volume, and positive value of information across the full range of CRRA risk aversion, vanishing only in the CARA limit.
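The additive sufficient statistic behind the CARA benchmark can be sketched for a binary state: with conditionally independent signals, Bayesian updating is exactly additive in log-odds, the space in which CARA demands aggregate. A minimal illustration of that standard fact (function names are mine, not the paper's):

```python
import math

def posterior_log_odds(prior_log_odds, signal_llrs):
    """Bayesian updating for a binary state is additive in log-odds.

    signal_llrs: log-likelihood ratios log P(s|H)/P(s|L) of
    conditionally independent signals.
    """
    return prior_log_odds + sum(signal_llrs)

# Two signals, each twice as likely under the high state
prior = 0.0                          # 50-50 prior
llrs = [math.log(2), math.log(2)]
lo = posterior_log_odds(prior, llrs)
posterior = 1 / (1 + math.exp(-lo))  # back to a probability
print(posterior)                     # 0.8 = 4/5
```

Any preference whose demand is nonlinear in this statistic aggregates signals in a different space, which is the Jensen gap the abstract points to.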
econ.TH 2026-05-11 Recognition

Deleting actions shifts equilibria that price tweaks leave stuck

Changing the Game: Status-Quo Inertia, Institutional Design, and Equilibrium Transition

Status-quo inertia keeps the old outcome selected as long as it remains feasible, so redesigning available moves outperforms adjusting their prices.

Figure from the paper
abstract
Many economic interventions are designed as marginal changes in incentives. Yet in environments shaped by coordination, institutional persistence, and path dependence, such reforms often leave behavior largely unchanged. This paper studies interventions in games when equilibrium selection displays status-quo inertia: if the pre-intervention equilibrium remains a Nash equilibrium after policy, it continues to be selected. In that environment, price-based interventions and simple option expansion may fail even when they improve welfare in a partial-equilibrium sense. By contrast, interventions that modify the feasible action space, especially deletion and replacement interventions, can be substantially more effective because they remove the strategic basis for persistence. We develop a simple framework, derive general results, provide complete proofs, and illustrate the economics with examples from climate transition, platform regulation, financial reform, and industrial modernization. The analysis highlights a basic policy lesson: when inefficient equilibria are institutionally entrenched, the central problem is often not how to price the existing game more finely, but how to change the game itself.
econ.TH 2026-05-11 2 theorems

Secret messages with deniability reveal only state cutoffs

Secret Communication with Plausible Deniability

Directional baseline information limits frontier secret communication to binary above-or-below signals while preserving rationalizability.

Figure from the paper
abstract
Communication is secret if a message is independent of the state; however, the receiver's subsequent action may still reveal that she has acted on hidden information. This paper studies when secret communication can also provide plausible deniability: under single-crossing preferences, every action induced by the sender's message must be rationalizable using the receiver's baseline information alone. We characterize joint information structures that satisfy both secrecy and plausible deniability. We show that plausible deniability restricts communication exactly when the baseline message is directional -- meaning its likelihood is monotone in the state. Combining this restriction with secrecy, we show that, for directional messages, frontier communication reveals at most whether the state lies above or below a cutoff. Finally, we identify conditions under which a greatest feasible communication structure exists and can be constructed explicitly in a simple way.
econ.TH 2026-05-11 2 theorems

Only one rule aggregates multiple Elo ratings while obeying three consistency axioms

Aggregating Elo Ratings: An Axiomatization

Convert each rating to strength, average with weights, convert back; this is the unique method that respects normalization, recursion, and marginal Elo-strength consistency.

abstract
Many environments assign several Elo ratings to the same agent: a chess player has classical, rapid, and blitz ratings; an online platform may rate by time control, mode, or format; an evaluator may rate performance across tasks or roles. This paper axiomatizes when such a vector of ratings can be reduced to a single scalar rating that is itself on the Elo scale. We impose three substantive conditions: same-scale normalization (a uniform profile keeps its rating), recursive consistency (aggregating in blocks gives the same answer as aggregating directly, provided each block carries the total weight of its members), and marginal Elo-strength consistency (for two equally weighted coordinates, the ratio of marginal contributions to the combined rating equals the ordinary Elo odds). The unique rating rule satisfying these conditions converts each component to its Elo strength, takes a weighted arithmetic mean of strengths, and converts back. We show how this rule differs from a random-format lottery and from rating-scale averaging, prove the axioms are independent, and illustrate the rule on combining classical, rapid, and blitz ratings.
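The aggregation rule stated in the abstract is simple enough to sketch directly: convert each rating to its Elo strength, take a weighted arithmetic mean, and convert back (helper names are mine):

```python
import math

def elo_strength(rating):
    """Elo strength: the odds scale underlying the Elo formula."""
    return 10 ** (rating / 400)

def aggregate_elo(ratings, weights):
    """Reduce a vector of Elo ratings to one Elo-scale rating:
    strengths are averaged with the given weights (summing to 1),
    then mapped back to the rating scale.
    """
    mean_strength = sum(w * elo_strength(r) for r, w in zip(ratings, weights))
    return 400 * math.log10(mean_strength)

# A uniform profile keeps its rating (same-scale normalization)
print(aggregate_elo([2000, 2000, 2000], [0.5, 0.3, 0.2]))  # 2000.0

# Combining classical, rapid, and blitz ratings with equal weight
print(aggregate_elo([2100, 2000, 1900], [1/3, 1/3, 1/3]))
```

Because strength is convex in the rating, the aggregate sits above the plain weighted average of the ratings whenever they differ, which is one way the rule departs from rating-scale averaging.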
econ.TH 2026-05-11 Recognition

Money burning clears aggregate NTU matchings at fixed prices

Aggregate Stable Matching with Money Burning

Typed agents equalize utilities by waiting on one side, delivering unique equilibria via a convergent generalized deferred-acceptance method

abstract
We propose an aggregate notion of non-transferable utility (NTU) stability for decentralized matching markets with fixed prices, where market clearing is achieved through one-sided money burning, which can be interpreted as waiting. Agents are grouped into observable types and are indifferent among individuals within type; equilibrium is defined at the type level and delivers equal indirect utility within each type. We introduce money burning into two types of NTU models: In a deterministic model, we relate our notion to classical Gale--Shapley stability and show how money burning decentralizes stable outcomes under aggregation. We then introduce separable random utility, obtaining an NTU counterpart to Choo and Siow (2006). We prove the existence and uniqueness of equilibrium and provide a stationary queueing interpretation. Finally, we develop a generalized deferred acceptance algorithm based on alternating constrained discrete-choice problems and prove its convergence to the unique equilibrium.
econ.TH 2026-05-11 2 theorems

Partial statistics disclosure expands implementable game outcomes

Coordination Mechanisms with Partially Specified Probabilities

Revealing only expectations of random variables lets players coordinate on jointly coherent outcomes beyond standard correlated equilibria.

abstract
We study which outcomes are implementable by disclosing coarse statistics of a data-generating process rather than its full distribution. Players observe data whose joint distribution is only partially known: they know the expectations of finitely many random variables and form beliefs by maximum-entropy inference. We obtain two characterizations. When message spaces are unrestricted, implementable outcomes coincide with jointly coherent outcomes, expanding the set of correlated equilibria. With canonical mechanisms, implementability reduces to a single cross-entropy condition: the target outcome must lie on the cross-entropy level set of some correlated equilibrium that passes through that equilibrium itself. Examples and several classes of games illustrate the reach of the framework.
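The belief-formation step, maximum-entropy inference from finitely many known expectations, can be sketched in the simplest case of one mean constraint on a finite support. The exponential-family form and bisection solver below are a standard approach, not code from the paper:

```python
import math

def maxent_given_mean(support, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution on a finite support subject to one
    expectation constraint E[X] = target_mean.

    The solution is exponential-family, p_i proportional to
    exp(lam * x_i); lam is found by bisection because the implied
    mean is strictly increasing in lam.
    """
    def mean_at(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    for _ in range(200):                     # bisect on lam
        mid = (lo + hi) / 2
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# A mean at the midpoint of a symmetric support recovers the uniform law
print(maxent_given_mean([0, 1, 2], 1.0))     # ~[1/3, 1/3, 1/3]
```

With several moment constraints the same exponential-family logic applies, but solving for the multipliers requires a multivariate method rather than bisection.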
econ.TH 2026-05-11 Recognition

Pensions increase births but reduce child mental-health investments

Mental Health and Human Capital Composition in a Dynastic OLG Model with PAYG Pensions

A dynastic model shows higher pay-as-you-go rates boost fertility while crowding out education, physical health and mental health spending.

abstract
This paper develops a two-period dynastic overlapping-generations (OLG) model in which parents simultaneously choose consumption, savings, fertility, and three distinct dimensions of child quality -- education, physical health, and mental health -- under a pay-as-you-go (PAYG) pension system. The central innovation is modelling mental health as an independent productivity-enhancing input with its own elasticity $\theta$ in a Cobb-Douglas human-capital technology. This yields simple proportional allocation rules and shows how pension policy affects not only the overall level but also the composition of human capital investments. In steady state, higher PAYG contribution rates raise fertility through the Yakita effect but crowd out per-child investments in all quality dimensions, including mental health. An increase in the mental-health elasticity $\theta$ shifts resources toward non-cognitive skill development while reducing fertility. These results reveal a fundamental policy tension for developing economies: pension systems that rely on children for old-age support simultaneously increase birth rates while reducing long-term human capital formation, with disproportionate effects on non-cognitive skills. The framework provides theoretical guidance for complementary policies that protect mental-health investments, with particular relevance for countries such as India where children remain a primary source of retirement security and mental-health services are underfunded.
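The "simple proportional allocation rules" follow from the log-linearity of the Cobb-Douglas technology: with $H = e^\alpha h^\beta m^\theta$ and a fixed per-child budget, each input's spending share equals its elasticity share. A stylized sketch under that fixed-budget simplification (the paper's model has richer trade-offs with fertility and savings):

```python
def cobb_douglas_shares(alpha, beta, theta, budget):
    """Optimal spending on education, physical health, and mental
    health when child human capital is Cobb-Douglas,
    H = e**alpha * h**beta * m**theta, and total per-child spending
    is fixed at `budget`.  Maximizing log H makes each share
    proportional to its elasticity.
    """
    total = alpha + beta + theta
    return {
        "education":       alpha / total * budget,
        "physical_health": beta / total * budget,
        "mental_health":   theta / total * budget,
    }

alloc = cobb_douglas_shares(alpha=0.5, beta=0.3, theta=0.2, budget=100.0)
print(alloc)  # education 50.0, physical 30.0, mental 20.0

# Raising theta shifts resources toward mental health
print(cobb_douglas_shares(0.5, 0.3, 0.4, 100.0)["mental_health"])
```

This makes the comparative static in the abstract transparent: a higher $\theta$ raises the mental-health share at the expense of the other two inputs.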
econ.TH 2026-05-07

Counterfactual utilities satisfy vNM axioms on potential outcomes

An Axiomatic Foundation for Decisions with Counterfactual Utility

Defining preferences over every possible result under alternative decisions yields coherent representations and accounts for paradoxes such as Allais.

abstract
Counterfactual utilities evaluate decisions not only by the realized outcome under a given decision, but also by the counterfactual outcomes that would arise under alternative decisions. By generalizing standard utility frameworks, they allow decision-makers to encode asymmetric criteria, such as avoiding harm and anticipating regret. Recent work, however, has raised fundamental concerns about the coherence and transitivity of counterfactual utilities. We address these concerns by extending the von Neumann-Morgenstern (vNM) framework to preferences defined on the extended space of all potential outcomes rather than realized outcomes alone. We show that expected counterfactual utility satisfies the vNM axioms on this extended domain, thereby admitting a coherent preference representation. We further examine how counterfactual preferences map onto the realized outcome space through menu-dependent and context-dependent projections. This axiomatic framework reconciles apparent inconsistencies highlighted by the Russian roulette example in the statistics literature and resolves the well-known Allais paradox from behavioral economics. We also derive an additional axiom required to reduce counterfactual utilities to standard utilities on the same potential outcome space, and establish an axiomatic foundation for additive counterfactual utilities, which satisfy a necessary and sufficient condition for point identification. Finally, we show that our results hold regardless of whether individual potential outcomes are deterministic or stochastic.
econ.TH 2026-05-06

Full correlation neutralizes attacker edge in multi-surface AI security

The Adversarial Discount - AI, Signal Correlation, and the Cybersecurity Arms Race

The arms race ratio stays independent of surface count when threat signals fully inform detection across surfaces.

Figure from the paper
abstract
We study a contest-theoretic model of adversarial investment in which an attacker and a defender allocate resources to AI-augmented capabilities across multiple attack surfaces. The attacker's investment operates through two channels: it amplifies offensive potency unconditionally and erodes defensive effectiveness conditionally, generating an adversarial discount that deepens endogenously with the defender's own investment. We derive a closed-form arms race ratio decomposing the relative marginal effectiveness of offensive and defensive investment into six structural primitives and establish equilibrium uniqueness and global convergence under a continuous best-response dynamic. The central result concerns signal cross-correlation, the degree to which threat intelligence on one surface informs detection on another. With full cross-correlation, the arms race ratio is independent of the number of attack surfaces: the attacker's structural advantage from surface proliferation is completely neutralized. Under the benchmark full-dilution case, without cross-correlation, per-surface defense effectiveness vanishes as the attack surface grows. Extending the analysis to heterogeneous defenders facing an attacker who targets by expected value, we argue that the model points to a dual inefficiency: overinvestment in private defense (a zero-sum redirective externality) and underinvestment in shared signal correlation (a public good). These formal results, together with public-good reasoning outside the base model, characterize when collective information aggregation can dominate private capability investment as the decisive margin in adversarial contests.
econ.TH 2026-05-06 2 theorems

Public messages match or beat private ones in group approvals

Going Public: Communication in Collective Decisions

Any outcome from individualized messages can be achieved by one shared announcement, and sometimes strictly better when two agents conflict.

Figure from the paper
abstract
A principal and $n\ge 2$ agents can launch a project if the principal proposes it and at least $k$ agents accept. Their individual payoffs from the project depend on an ex ante unknown state. The principal can conduct a test to learn about the state and then communicate her findings to the agents via cheap talk. This paper focuses on comparing two communication regimes: public and private messaging. We show that public messaging is weakly dominant: any outcome implementable under private messaging can also be implemented under public messaging. Moreover, in a canonical environment with linear payoffs, we characterize the principal's optimal test in each regime and show that public messaging can be strictly dominant if and only if there exist two agents who are the principal's conflicting allies.
econ.TH 2026-05-05

Large effective commodity counts stabilize all equilibria

Equilibrium Stability and Uniqueness with a Large Number of Commodities and Patient Consumers

In patient-consumer economies, substitution effects scale with the discounted count of goods and dominate income effects under taste spread.

abstract
We show that a large effective number of commodities can be a source of equilibrium stability and uniqueness: expanding substitution opportunities strengthens aggregate substitution effects. We study finite dated-commodity exchange economies obtained by truncating a countably infinite-horizon environment with discounted, additively separable utilities. In this setting, the effective number of commodities is the discounted count of dated commodities, so greater patience makes distant commodities more relevant. With an appropriate normalization, equilibrium substitution effects accumulate at the rate of the effective number of commodities. When a preference diversification condition holds, equilibrium income effects grow at a lower rate. The condition is satisfied, for example, when agents have sparse or localized taste differences across commodities, or when their taste profiles become sufficiently heterogeneous as the commodity space expands. Hence, whenever the effective number of commodities is sufficiently large, every equilibrium is locally tâtonnement stable, which in turn implies equilibrium uniqueness.
econ.TH 2026-05-05

Truthful networks form as precision-sorted cliques

Truthful Communication and Exclusive Information Clubs

Agents link only with similar-precision peers to enable honest reporting, though mixing precisions could raise overall welfare.

abstract
This paper studies how the possibility of strategic misreporting shapes endogenous communication networks. Agents observe noisy private signals about a common state, form costly communication links, exchange private messages with their neighbors, and then choose actions. Payoffs reward both accuracy and coordination with linked agents. A link is valuable because it gives access to information, but it is useful only if the induced local information structure makes truthful transmission incentive compatible. We show that clique components support truthful communication: within a clique, all members observe the same profile of local signals, choose the same posterior action, and therefore have no incentive to distort reports. With heterogeneous signal precisions and convex linking costs, the core selects assortative information clubs ordered by signal precision. These stable truthful networks need not be socially efficient. Because the informational value of precision is decreasing, concentrating high-precision agents in the same club may be privately stable but socially dominated by more mixed partitions.
econ.TH 2026-05-05

Peer pressure evolves to sustain misspecified beliefs

Misspecified beliefs and the evolution of peer pressure

Stable conformity level makes agents exert true-return effort, creating self-confirming Nash without allocative loss

Figure from the paper
abstract
We study the emergence of conformity preferences in an environment in which agents choose effort under heterogeneous, possibly misspecified returns, and social interactions do not directly affect material payoffs. Some agents choose effort by trading off performance and conformity to expected peer behavior. We characterize subjective best responses. For any given beliefs, an optimal and unique level of peer pressure exists and is evolutionarily stable within groups of agents sharing the same misspecification. Such a level is zero for correctly specified agents and may be positive for misspecified ones. When the efficient level of peer pressure is interior, misspecified agents choose effort equal to their true return, resulting in an equilibrium behavior that is both self-confirming and Nash, allowing the persistence of misspecifications. Peer pressure need not generate long-run allocative distortions, but it creates a perceived value of social information. In equilibrium, this value depends only on misspecification, generating scope for informational rents.
econ.TH 2026-05-04 3 theorems

Heterogeneity explains stochastic transitivity failures without added complexity

Is Complexity the Problem? Testing Random Choice with Heterogeneity

A revealed-preference test on two lab data sets shows that mixing different preferences reproduces observed inconsistencies, so aggregate 'complexity' estimates may instead reflect heterogeneity.

Figure from the paper
abstract
Economic choices are often stochastic: the same person may make a different choice when facing the same alternatives repeatedly. Standard models assume that the degree of randomness reflects the size of utility differences, but choice inconsistencies could also reflect difficulty comparing alternatives. Recent studies estimate such comparison difficulty (or "complexity") by fitting functional forms to aggregate choice data under a representative agent assumption. However, aggregate data could violate standard models of random choice simply because of heterogeneity in preferences, even in the absence of variation in comparison difficulty. This paper develops a revealed preference framework, collective rationalizability, that tests for variation in comparison difficulty from aggregate data while explicitly accounting for heterogeneity. The framework characterizes whether violations of standard models can be explained by comparison difficulty alone, heterogeneity alone, or require both. I provide a statistical test with finite-sample inference and apply the method to two existing experiments. In both cases, heterogeneity alone explains observed failures of stochastic transitivity well, demonstrating that comparison difficulty can be not only theoretically but also empirically confused with heterogeneity in aggregate data.
econ.TH 2026-05-04

Power forms balance equity and productivity in health evaluations

Integrating equity and productivity in health evaluation

Normative axioms on scale independence and transfers produce tractable functions for assessing interventions where health and productivity are both at stake.

abstract
This paper develops a unified framework for evaluating health outcomes that jointly incorporates equity and productivity. Extending beyond traditional QALYs, PALYs, and the more recent PQALYs, we introduce a broader class of evaluation functions that integrate equity- and productivity-sensitive conditions. By imposing several normative criteria, including independence from measurement scales and Pigou-Dalton transfer principles, we obtain tractable power-form representations. In balancing equity and efficiency, the framework provides a coherent foundation for assessing interventions in contexts where both health and productive capacity are at stake.
econ.TH 2026-05-04

VCG job matching is firm-rational only under weak substitutes

Strategy-proof and Efficient Job Matching with Participation Constraints

The standard mechanism works for workers automatically but needs firm utilities to be weakly substitutable or submodular before firms will participate.

abstract
We study the design of strategy-proof and efficient mechanisms satisfying participation constraints in the job-matching problem. Each firm can hire multiple workers and each worker can be employed at only one firm. While firm utilities over subsets of workers are common knowledge, worker disutilities for working at each firm are private information. The VCG mechanism is the unique mechanism that is strategy-proof, efficient, and individually rational for workers; however, it may not be individually rational for firms. We show that the VCG mechanism is individually rational for firms if and only if firm utilities satisfy a condition called weak substitutes. We then strengthen the participation constraints of firms to strong individual rationality, which requires that each firm has no incentive to fire some of the workers assigned to it. The VCG mechanism is strongly individually rational if and only if firm utilities satisfy submodularity.
econ.TH 2026-05-04

Unique contract adverse selection reduces to stochastic target control

Principal-agent problems with adverse selection: A stochastic target problem formulation

The agent's hidden cost problem becomes a reachable set that constrains the principal's optimization and values screening alternatives.

Figure from the paper
abstract
We study a principal-agent problem with adverse selection, where the principal does not know the agent's true cost but must design a contract to optimize a specific criterion. Unlike standard screening frameworks that allow for self-selection, we assume the principal can only offer a unique contract. We show that the agent's optimization problem can be reformulated as a stochastic target problem. After characterizing the credible domain of this target problem, we show that the principal's objective can be solved as a stochastic optimal control problem with partial information and state constraints. The description of the credible domain also allows us to obtain the value of screening contracts.
econ.TH 2026-05-04

Kantians neutralize Nash free-riding via strategy rescaling

Strategy Rescaling and the Stability of Kantian Optimization

In symmetric games an efficient strategy rescaling makes Kantian optimization the stable outcome under both dynamic best-response and evolutionary dynamics.

abstract
This study investigates the properties and stability of the Multiplicative Kantian Equilibrium (MKE) in symmetric games. We first demonstrate that MKE lacks strategic equivalence: the Kantian best-response function is not invariant under monotonic strategy rescaling. This strategic non-equivalence implies that the choice of measurement scale - a subjective interpretation of the game - materially impacts equilibrium outcomes. Exploiting this non-equivalence, in a game where players may be Kantian or Nasher, we propose an efficient strategy rescaling that allows Kantians to neutralize the free-rider advantage of Nashers, while preserving Pareto-efficient outcomes among themselves. In a dynamic framework, we show that the subgame-perfect Nash equilibrium with endogenous choice of optimization type leads all players to prefer Kantian optimization over Nash optimization. In an evolutionary setup, we show that Kantian optimization is an evolutionarily stable strategy (ESS). Our results suggest that the inherent strategic non-equivalence of Kantian optimization provides a robust pathway to stable cooperation.
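The multiplicative Kantian idea, evaluating a deviation as if every player rescaled effort by the same factor r, can be illustrated in a toy linear public-goods game with quadratic costs. The game, numbers, and helper names below are mine, not the paper's:

```python
def payoff(own_effort, others_total, b=1.0):
    """Linear public good with quadratic effort cost:
    u_i = b * (own + others) - own**2 / 2."""
    return b * (own_effort + others_total) - own_effort ** 2 / 2

def kantian_deviation_gain(e, n, b=1.0):
    """Best gain available to a player if ALL players rescale the
    symmetric effort e by a common factor r (multiplicative Kantian
    deviation, scanned over a grid of r in [0.5, 1.5]).  At a
    multiplicative Kantian equilibrium this gain is ~0."""
    rs = [0.5 + 0.01 * k for k in range(101)]
    base = payoff(e, (n - 1) * e, b)
    return max(payoff(r * e, (n - 1) * r * e, b) for r in rs) - base

n, b = 4, 1.0
e_nash, e_kant = b, n * b     # Nash FOC: b = e;  Kantian FOC: b*n*e = e**2
print(kantian_deviation_gain(e_kant, n, b))   # ~0: no profitable common rescaling
print(kantian_deviation_gain(e_nash, n, b))   # > 0: Nash effort is too low
```

Nash play equates the private marginal benefit b with marginal cost e, giving e = b; the Kantian counterfactual internalizes the n-fold social benefit, giving e = n*b, which coincides with the symmetric Pareto-efficient effort in this toy game.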
econ.TH 2026-04-30

Nash equilibria with three or more randomizers are generically improvable

Extreme Equilibria: The Benefits of Correlation

Correlation yields strictly better expected payoffs in almost all such games for Pareto and utilitarian objectives.

Correlated equilibria arise naturally when agents communicate or rely on intermediaries such as recommendation systems. We study when a given Nash equilibrium can be improved within the set of correlated equilibria for general objectives. Our key insight is a detail-free criterion: any Nash equilibrium with three or more randomizing agents is generically improvable. We refine this insight to specific classes of games and objectives, including Pareto and utilitarian welfare, and provide constructive methods to obtain improvements. Our findings underscore the ubiquity of improvable Nash equilibria and the crucial role of correlation in enhancing strategic outcomes.
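The improvement-by-correlation phenomenon can be seen in a textbook example. The sketch below uses two-player chicken with standard textbook payoffs (not from the paper, and below the paper's three-randomizer threshold): a mediated distribution satisfies the obedience constraints while beating the mixed Nash payoff.

```python
import numpy as np

# Chicken, rows/cols = (Dare, Chicken); U1[a, b] = row player's payoff.
U1 = np.array([[0, 7], [2, 6]])

# Mediator draws (D,C), (C,D), (C,C) with probability 1/3 each.
mu = np.array([[0, 1/3], [1/3, 1/3]])

# Obedience: given a recommendation a, following it beats any deviation,
# so mu is a correlated equilibrium (the check is scale-invariant, so the
# unnormalized conditional mu[a] suffices).
for a in range(2):
    for dev in range(2):
        assert U1[a] @ mu[a] >= U1[dev] @ mu[a] - 1e-12

ce_payoff = (mu * U1).sum()    # each player's expected payoff under mu
nash_payoff = 14 / 3           # symmetric mixed Nash: dare w.p. 1/3
print(ce_payoff, nash_payoff)  # correlation strictly improves on Nash
```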
econ.TH 2026-04-30

Finite stable matching results transfer to large economies

Many-to-many stable matching in large economies

A mechanical method shows tree-stable and pairwise-stable outcomes exist when agents are individually insignificant.

We study stability notions for networked many-to-many matching markets with individually insignificant agents in distributional form. Outcomes are formulated as joint distributions over characteristics of agents and contract choices. Characteristics can lie in an arbitrary Polish space. We provide a mechanical method for transferring existence results for finite matching models to large matching models for many stability notions. In particular, we show that tree-stable and pairwise-stable outcomes exist.
econ.TH 2026-04-30

Three measures of choice difficulty prove unrelated

Measuring Choice Difficulty

A binary-option Bayesian model shows understanding, randomness and confidence fail to track each other, with alignment only under narrow conditions.

We provide a theoretical framework to understand how widely used measures of choice difficulty relate. In a binary-option Bayesian expected-utility framework, we show that three measures of difficulty, (i) understanding (ex-ante value), (ii) choice randomness, and (iii) confidence that the chosen option is ex post correct, are, in general, unrelated, and that this result extends to other potential measures like attenuation. We provide intuitive sufficient conditions that align the orders, using both restrictions on Blackwell experiments that capture well-known classes (such as logit) and restrictions on payoffs, and we demonstrate that in psychophysical tasks that pay only for correctness, confidence coincides with understanding. We show that willingness-to-accept to switch, when measured in utils, is equivalent to understanding. Our results suggest caution in interpreting measures of choice difficulty as well as the degree of portability between economics and psychophysics experiments.
econ.TH 2026-04-30

Extreme rules strategy-proof only on single-peaked tree domains

A simple characterization of single-peaked domains

Unanimity and anonymity hold on every domain, yet incentive compatibility pins down exactly the single-peaked structure.

This paper characterizes the single-peaked domain on a tree via the strategy-proofness of extreme rules defined on that tree. For any tree, these rules are unanimous and anonymous on any preference domain. In particular, we show that they are strategy-proof only on the single-peaked domain associated with that tree.
econ.TH 2026-04-30

Dynamic no-feedback games let senders reach partial-commitment payoffs

Dynamic Cheap Talk without Feedback

Any equilibrium from a persuasion model with marginal-preserving deviations is attainable, and the full Bayesian persuasion payoff is reached when sender payoffs are state-independent.

We study a dynamic sender-receiver game in which the sender observes a state evolving according to a Markov chain but does not observe the receiver's action. Despite the absence of feedback, dynamic interaction partially restores commitment. We show that any equilibrium payoff of a persuasion model with partial commitment, where the sender can deviate to signaling policies that preserve the marginal distribution over messages, can be achieved as a uniform equilibrium payoff in the dynamic game. Moreover, any convex combination of such payoffs across message distributions can also be sustained. When the sender's payoff is state-independent, she achieves the Bayesian persuasion payoff.
econ.TH 2026-04-29

Sequential equilibrium defined for infinite games with continuous information

Sequential Equilibria in a Class of Infinite Extensive Form Games

The notion exists, refines Nash, and matches the standard definition on finite games.

Sequential equilibrium is one of the most fundamental refinements of Nash equilibrium for games in extensive form. However, it is not defined for extensive-form games in which a player can choose among a continuum of actions. We define a class of infinite extensive form games in which information behaves continuously as a function of past actions and define a natural notion of sequential equilibrium for this class. Sequential equilibria exist in this class and refine Nash equilibria. In standard finite extensive-form games, our definition selects the same strategy profiles as the traditional notion of sequential equilibrium.
econ.TH 2026-04-29

Core of large economies identified from type distributions alone

The Core in a Distributional Economy

Blocking coalitions can be detected without naming individual agents, yielding core equivalence as a direct distributional result.

An economy, large or small, has traditionally been defined in terms of an explicit set of agents and an assignment of characteristics to each agent. But when individual agents are negligible, most economically relevant properties of an economy can be defined in terms of the distribution of characteristics alone. Agents need not be specified. It has been frequently asserted that the distributional description of an economy is too sparse for core analysis. Notions of coalitions and blocking require the individualistic description of agents. This paper shows that this is not so. The presence of blocking coalitions can be directly identified in terms of distributions alone. Indeed, we give a purely distributional proof of the classical core-equivalence theorem that delivers the core-equivalence theorem for individualistic economies as a corollary. Our methods have applications outside of general equilibrium theory. They apply to large matching markets and to analogs of the Shapley-value for atomless economies.
econ.TH 2026-04-28

Network structure decides if conformity pushes exploration or status quo

Coordination in complex environments

Embedding beauty-contest games in uncertain environments shows how interaction networks steer groups toward new options or known ones, and a decentralized organization can implement profit maximization in sufficiently complex environments.

Coordination is an important aspect of innovative contexts, where the more innovative a course of action, the more uncertain its outcome. To study the interplay of coordination and informational ``complexity'', I embed a beauty-contest game into a complex environment. I identify a new conformity phenomenon. This effect may push towards the exploration of unknown alternatives or constitute a status-quo bias, depending on the network structure of players' interactions. In an application, I show that an organization with decentralized authority can implement profit maximization in a sufficiently complex environment.
econ.TH 2026-04-28

Solid constraints restore comonotonic risk sharing

Comonotonic improvement under feasibility constraints

When replacing any agent's share with a less risky one keeps the allocation feasible, every feasible arrangement admits a comonotonic Pareto improvement.

Regulatory and contractual constraints on individual exposures are standard in insurance and reinsurance markets, but a poorly designed constraint can distort the economic incentives of risk-averse agents. In the unconstrained problem, the classical comonotonic improvement theorem guarantees Pareto-optimal allocations that are nondecreasing in the aggregate loss. A constraint that is not stable under risk reduction can destroy this property. We show by example that Value-at-Risk caps lead to optimal allocations that are non-comonotonic in the aggregate loss. We identify componentwise convex-order solidity as a sufficient condition on the feasible set that restores the comonotonic improvement under constraints. If replacing any agent's allocation by a less risky one preserves feasibility, then every feasible allocation admits a feasible comonotonic improvement for all convex-order-consistent preferences. This criterion covers many constraints typical in risk management, but excludes Value-at-Risk caps and idiosyncratic deductibles. We illustrate the implications of our main result in a mean-variance risk-sharing application.
econ.TH 2026-04-28

Common agency decomposes into independent screening problems

Decomposing Common Agency

Each principal solves a standard screening task using the agent's best rival payoff as outside option, producing equilibria with discrete regimes or complementary-bundle specialization.

This paper develops a decomposition methodology for common agency games in which each principal's payoff depends on her own outcome and the agent's type, but not on rivals' outcomes. The key step reduces each principal's best-response problem to a standard screening problem defined over the agent's indirect utility -- the upper envelope of her payoff over rivals' offerings. Individually best-responding mechanisms then assemble into a pure-menu perfect Bayesian equilibrium when a compatibility condition (utility-preserving recombination) ensures aligned tie-breaking across principals. Under a non-indifference condition, the decomposition recovers all equilibria except those sustained by menu items that no type of the agent actually selects but which nevertheless discipline the rival's screening problem. When principals' payoffs depend on the full allocation profile, the decomposition adapts only under substantive regularity conditions on the agent's off-path choice behavior, one of which coincides with Luce's choice axiom. I apply the methodology to two settings. In a quadratic-loss delegation model, equilibria feature one principal offering a finite menu of discrete ``regimes'' while the other receives piecewise full delegation within each regime. In a competitive bundling duopoly under intrinsic common agency, the decomposition yields equilibria exhibiting market splitting, in which firms specialize in complementary bundles, and asymmetric equilibria with a take-it-or-leave-it base contract paired with a nested or tree menu of upgrades.
econ.TH 2026-04-27

Rigidity and debt trigger defaults after production network shocks

Rigidity and default in production networks

Welfare falls below the first best only when both are present, and Hulten's theorem fails with rigidity alone.

This paper studies the transmission of productivity shocks in general equilibrium production networks, when firms in different sectors operate under informational rigidity and rely on external debt. Rigidity breaks the Modigliani-Miller irrelevance of leverage and may generate default following shocks, even in equilibrium. The economy consists of firms, banks, and consumers. Under proportional shock transmission, we prove that a unique Walrasian rigid equilibrium exists and provide explicit expressions for equilibrium quantities, prices, and interest rates. We show that, on the one hand, Hulten's theorem fails under rigidity, even without leverage. On the other hand, we prove that welfare is smaller than in the first best if and only if both leverage and rigidity exist. The latter increase the total cost of debt and have inflationary effects on the levered sectors, which propagate downstream, and shift consumption and labor upstream. The occurrence of default depends solely on real shocks and the network structure, while the magnitude of the losses depends also on the connectedness of the economy and the cost of debt of the connected sectors. We provide conditions for default cascades to occur and study two examples of default propagation.
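For reference, the frictionless Hulten benchmark that the paper shows fails under rigidity: to first order, aggregate output responds to sectoral TFP shocks through Domar weights, computable from the Leontief inverse. A toy two-sector Cobb-Douglas sketch with illustrative numbers:

```python
import numpy as np

# Two-sector Cobb-Douglas network (illustrative numbers): Omega[i, j] is the
# cost share of input j in sector i, beta the household consumption shares.
Omega = np.array([[0.0, 0.5],
                  [0.2, 0.0]])
beta = np.array([0.6, 0.4])

# Domar weights (sales over GDP) via the Leontief inverse.
domar = beta @ np.linalg.inv(np.eye(2) - Omega)

# Hulten's theorem: first-order log-output response to sectoral TFP shocks.
dlogA = np.array([0.01, 0.0])
dlogY = domar @ dlogA
print(domar, dlogY)
```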
econ.TH 2026-04-27

Losing contracts force unique cooperation equilibrium in n-player dilemma

Preplay Losing Contracts: Inducing Strong Nash Equilibrium in the n-player Prisoner's Dilemma

Players commit to self-payoff reductions on defection, aligning incentives without transfers and preserving cooperation under strategy-set restrictions.

In strategic games such as the prisoner's dilemma, allowing players to make binding offers of utility transfers before play has been shown to alter incentives and potentially support cooperative outcomes. These preplay exchange mechanisms reshape payoffs by transferring utility while being contingent on actions; however, they typically require side payments that can reduce individual benefits relative to joint cooperation. In this paper, we extend the analysis to a finite $n$-player prisoner's dilemma with ordered strategy sets, defined such that any restriction of strategies by any subset of players still yields a prisoner's dilemma. To achieve a robust cooperative outcome that resists group deviations, we introduce a novel class of mechanisms: $\textit{losing contracts}$. Unlike transfer-based preplay mechanisms, losing contracts require players to irrevocably reduce their own utility if they defect, thereby aligning individual incentives with cooperation without inter-player payments. With appropriately chosen loss amounts, losing contracts induce joint cooperation as the unique strong Nash equilibrium in the modified game and in every restricted game within it, ensuring that cooperative incentives persist even under possible external constraints on strategy sets. We show that our contracts can be constructively defined, reducing the preplay stage to a simple and binary decision for each player: whether to sign the contract or not. Furthermore, if the losing contract is only executed when all players sign, signing is a strictly dominant strategy for all. Finally, we extend these results to certain public goods games.
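The dominance mechanism behind losing contracts can be sketched in a toy linear public-goods dilemma (my parameters; the paper's construction covers ordered strategy sets and strong Nash). A signer who defects burns an amount L, and any L exceeding the private gain from defection makes cooperation dominant among signers:

```python
from itertools import product

# Toy public-goods dilemma (assumed parameters): each of n players
# contributes xi in {0, 1}; u_i = sigma * total - c * xi with
# sigma < c < n * sigma, so defection is dominant without a contract.
n, sigma, c, L = 3, 1.0, 2.0, 1.5   # any L > c - sigma works

def u(xi, others, signed):
    S = xi + sum(others)
    penalty = L if (signed and xi == 0) else 0.0   # contract burns L on defection
    return sigma * S - c * xi - penalty

# With the contract signed, cooperating beats defecting against every
# profile of the other players' actions.
for others in product([0, 1], repeat=n - 1):
    assert u(1, others, True) > u(0, others, True)

print(u(1, (1, 1), True), u(0, (0, 0), False))   # 1.0 vs 0.0: cooperation pays
```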
econ.TH 2026-04-24

Autonomy qualifies the First Welfare Theorem for AGI economies

Post-AGI Economies: Autonomy and the First Fundamental Theorem of Welfare Economics

Competitive equilibria remain efficient once welfare comparisons condition on each agent's degree of autonomy.

The First Fundamental Theorem of Welfare Economics assumes that welfare-bearing agents are autonomous and implicitly relies on a binary distinction between autonomy and instrumentality. Welfare subjects are those who have autonomy and therefore the capacity to choose and enter into utility comparisons, while everything else does not. In post-AGI economies this presupposition becomes nontrivial because artificial systems may exhibit varying degrees of autonomy, functioning as tools, delegates, strategic market actors, manipulators of choice environments, or possible welfare subjects. We argue that the theorem ought to be subject to an autonomy qualification where the impact of these changes in autonomy assumptions is incorporated. Using a minimal general-equilibrium model with autonomy-conditioned welfare, welfare-status assignment, delegation accounting, and verification institutions, we set out conditions for which autonomy-complete competitive equilibrium is autonomy-Pareto efficient. The classical theorem is recovered as the low-autonomy limit.
econ.TH 2026-04-23

Establishing causality needs only one or two variables

Causal Persuasion

A model shows that selective disclosure can confirm a causal link with minimal data but can only refute one by revealing every potential confounder.

We propose a model of causal persuasion, in which a sender selectively discloses a set of variables together with their true joint distribution and proposes a subjective causal model that binds them. A receiver is persuaded by this model only if the data conclusively identifies the causal link of interest. We characterize when such persuasion succeeds or fails, and how easily it can be achieved. We further show that if the receiver holds a pre-existing subjective model, debunking it is similar to persuading a receiver without one. To establish a true causal link, the sender often needs to disclose only one or two well-chosen variables. But to dispel a perceived link -- to persuade the receiver there is no causal relationship -- every common cause must be disclosed. Our results highlight a fundamental asymmetry in causal persuasion: Establishing causality is often much easier than ruling it out.
econ.TH 2026-04-23

Duality turns constrained route choice into unconstrained concave maximization

Convex Duality in Perturbed Utility Route Choice

The dual objective is differentiable and recovers unique link flows via convex conjugates, enabling scalable gradient methods on large nets.

This paper develops a highly general convex duality framework for the perturbed utility route choice (PURC) model. We show that the traveler's constrained, potentially non-smooth utility maximization problem admits a dual formulation: an unconstrained concave maximization problem with a differentiable objective. The unique optimal flow can be recovered link-by-link from any dual solution via the convex conjugates of link perturbation functions. These properties enable efficient gradient-based optimization for large-scale networks and fast computation for sensitivity analysis. Finally, the framework reveals a structural analogy between PURC and current flow in electrical circuits.
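The dual structure can be illustrated on the smallest possible instance (mine, not the paper's general setup): one origin-destination pair with parallel links and an entropic perturbation, where the dual is a one-dimensional smooth problem and the recovered flows are logit shares.

```python
import numpy as np

# Toy PURC instance: one OD pair, three parallel links, unit demand,
# entropic perturbation F(x) = x log x (all assumptions, for illustration).
u = np.array([1.0, 0.5, 0.0])          # deterministic link utilities

# The dual is smooth in the single demand multiplier lam:
# g(lam) = lam + sum_a exp(u_a - 1 - lam), so gradient steps suffice.
def dual_grad(lam):
    return 1.0 - np.exp(u - 1.0 - lam).sum()

lam = 0.0
for _ in range(200):
    lam -= 0.5 * dual_grad(lam)         # descend the convex dual

x = np.exp(u - 1.0 - lam)               # flows recovered link-by-link
print(x, x.sum())                       # logit shares, summing to one
```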
econ.TH 2026-04-23

Exchangeability limits effort in repeated contests

On Rent Dissipation in Dynamic Multi-battle Contests

A symmetry property in battle sequences triggers discouragement that stops full rent dissipation unless volatility is introduced.

We study dynamic multi-battle contests and examine how the contest structure shapes dynamic incentives and determines the extent of rent dissipation. A discouragement effect often arises -- such as in tug-of-war and best-of-$K$ contests -- preventing full rent dissipation even when the series can extend infinitely. We identify a structural property, exchangeability, that contributes to the effect. Leveraging this insight, we establish a necessary and sufficient condition for almost-full rent dissipation. As an application, we introduce the iterated incumbency contest, which illustrates how volatility in the surrounding environment sustains dynamic incentives and generates almost-full rent dissipation, and thus offers insights into various competitive phenomena.
econ.TH 2026-04-22

Maximin matches Nash by count in positive-sum games

How damaging is zero-sum thinking to an agent's interests when the world is positive-sum?

Equal numbers of 3x3 games favor maximin over Nash or vice versa, so zero-sum rules generate no systematic payoff loss.

We study whether zero-sum decision rules, maximin and minimax, harm agents' interests in positive-sum strategic environments relative to Nash equilibrium behaviour or, more generally, to best-response behaviour. Contrary to an influential evolutionary view, we give illustrations where maximin serves an agent's interests better than Nash equilibrium behaviour. We also show that these illustrations are not atypical or idiosyncratic because, in our main result, the class of such games where a maximin profile strictly Pareto dominates all Nash equilibria has the same cardinality as the class of games in which a Nash equilibrium strictly Pareto dominates all maximin profiles. Thus, neither behaviour is generally superior. We further identify additional mechanisms favoring maximin over Nash equilibrium, including coordination failures under multiple equilibria, where maximin can outperform Nash play in realised-pay-off terms. A systematic analysis of strictly ordinal symmetric 3x3 games shows that these effects arise with non-trivial frequency. Our findings, therefore, suggest that the observed rise in zero-sum thinking in many rich countries, when associated with a maximin decision rule, will not be readily displaced through its generation of inferior pay-offs.
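Both directions of the comparison are easy to exhibit computationally. The sketch below hand-builds a strictly ordinal symmetric 3x3 game whose maximin profile strictly Pareto dominates every pure Nash equilibrium, then tallies how often each direction occurs in a random sample (pure equilibria only; the paper's analysis also covers mixed play):

```python
import random
import numpy as np

def pure_nash(M):
    # Symmetric game: u1(a,b) = M[a,b], u2(a,b) = M[b,a].
    cm = M.max(axis=0)                       # column maxima
    return [(a, b) for a in range(3) for b in range(3)
            if M[a, b] == cm[b] and M[b, a] == cm[a]]

def maximin(M):
    return int(np.argmax(M.min(axis=1)))     # unique under strict ordinality

# Hand-built strictly ordinal game: the maximin profile (0, 0), payoff 8,
# strictly Pareto dominates both pure Nash equilibria, (1, 1) and (2, 2).
M = np.array([[8, 5, 4],
              [2, 6, 1],
              [9, 3, 7]])
v = M[maximin(M), maximin(M)]
assert all(v > M[a, b] and v > M[b, a] for a, b in pure_nash(M))

# Tally both directions over random strictly ordinal games
# (payoffs are a random permutation of 1..9).
rng = random.Random(0)
mm_wins = nash_wins = 0
for _ in range(20000):
    G = np.array(rng.sample(range(1, 10), 9)).reshape(3, 3)
    eq, w = pure_nash(G), G[maximin(G), maximin(G)]
    if eq and all(w > G[a, b] and w > G[b, a] for a, b in eq):
        mm_wins += 1
    if any(min(G[a, b], G[b, a]) > w for a, b in eq):
        nash_wins += 1
print(mm_wins, nash_wins)    # both directions occur with non-trivial frequency
```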
econ.TH 2026-04-22

Commodity taxes force more rationing than monopolies for fairness

Fair Commodity Taxation

When valuations correlate across independent sellers, optimal fair mechanisms restrict access beyond unregulated levels and avoid any use of randomization.

We study economies where consumers interact independently with many monopolists. When consumer valuations over goods are correlated, correlation can distort the induced distribution of consumer surplus (information rents). We identify which shifts in the correlation structure over values make the induced distribution more or less fair, in the sense of second-order stochastic dominance. We then investigate the role taxation can have on information rents, and show the tax authority never benefits from randomizing the allocation of goods. We characterize the set of mechanisms that are on the fairness-efficiency frontier under regularity conditions on the distribution of types. Furthermore, under these conditions all allocations on the fairness-efficiency frontier ration the good more than an unregulated monopolist. Finally, we discuss implications of our model for luxury commodity taxation.
econ.TH 2026-04-21

Pseudo-substitutability maximizes the domain guaranteeing stable matches

Pseudo-Substitutability: A Maximal Domain for Pairwise Stability in Matching Markets with Contracts

It allows limited complementarities beyond classical substitutability while ensuring pairwise stable allocations exist in contract-based matching.

We study the existence of pairwise stable allocations in matching markets with contracts and propose a domain restriction that guarantees their existence. Specifically, we define pseudo-substitutable preferences, a domain that strictly extends the classical notion of substitutability while still preserving the existence of pairwise stable allocations. This domain accommodates limited complementarities among contracts while retaining enough structure to preserve the key stability properties of substitutable preferences. Moreover, we show that, among all preference domains that contain the classical substitutable domain and guarantee the existence of pairwise stable allocations, the pseudo-substitutable domain is maximal. Our results establish that pairwise stability extends well beyond the classical substitutable domain.
econ.TH 2026-04-21

Net revenue bubbles up after meeting needs in hierarchies

Sharing the proceeds from a hierarchical venture when agents have needs

Geometric and serial rules are characterized for distributing joint proceeds in organizational trees with personal requirements.

We consider a setting in which a set of agents are hierarchically organized for a joint venture. They each generate revenues for the joint venture and have individual needs to cover. The aim is to distribute aggregate revenues appropriately. We characterize a family of need-adjusted geometric rules where the net revenue (after covering needs) "bubbles up" in the hierarchy, as well as a need-adjusted serial rule in which the net revenue is equally shared among each agent and his predecessors in the hierarchy.
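One plausible reading of the need-adjusted serial rule can be sketched as follows (hypothetical hierarchy and numbers, and my own reading of "equally shared among each agent and his predecessors"; revenues are assumed to cover needs):

```python
# Hypothetical data: needs are covered first, then each agent's net revenue
# is shared equally among that agent and all of his predecessors.
parent = {0: None, 1: 0, 2: 0, 3: 1}      # root 0; 1, 2 under 0; 3 under 1
revenue = {0: 10.0, 1: 6.0, 2: 5.0, 3: 9.0}
need = {0: 4.0, 1: 2.0, 2: 3.0, 3: 3.0}   # assumed covered by revenues

def chain(i):
    out = []                               # agent i plus all predecessors
    while i is not None:
        out.append(i)
        i = parent[i]
    return out

payout = {i: need[i] for i in parent}      # needs are paid out first
for i in parent:
    group = chain(i)
    share = (revenue[i] - need[i]) / len(group)
    for j in group:
        payout[j] += share

print(payout, sum(payout.values()) == sum(revenue.values()))  # budget balanced
```

By construction the rule is budget balanced: total payouts equal total revenues, since every agent's net revenue is fully distributed along his chain.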
econ.TH 2026-04-21

Belief structure links norms, values and expectations

Perceived Social Norms under Uncertainty

Framework reveals how disclosure shapes perceived social norms based on content, publicity and private cue encoding.

This paper proposes a belief-based framework for social norms in environments where individuals choose a single action. Relaxing the assumption that the appropriateness standard is common knowledge, the framework allows individuals to be uncertain about this standard and to hold heterogeneous assessments and beliefs about others' assessments. Within the framework, perceived injunctive social norms, personal values, and empirical expectations, while distinct, are systematically connected through a common informational structure. The framework further clarifies how disclosed information shapes perceived norms: its effect depends on what is disclosed, whether it is publicly or privately revealed, and how the disclosed statistic encodes underlying private cues.
econ.TH 2026-04-21

Seller-pivotal collaboration raises optimal auction revenue

Optimal linear-payment auction design with aftermarket collaboration

Mechanisms allocate to highest virtual surplus bidders and use linear value-sharing contracts, with the seller extracting more when leading post-auction collaboration.

This paper studies optimal auction design when valuations depend endogenously on post-auction collaboration between the seller and the winning bidder. Both parties exert non-contractible efforts after the auction, generating a double moral hazard problem alongside adverse selection. We analyze two role structures -- winner-pivotal and seller-pivotal collaboration -- and characterize optimal direct mechanisms using linear payment schemes that combine cash transfers with proportional value sharing. The optimal mechanism allocates the asset to the bidder with the highest virtual surplus, employs a deterministic value-sharing rule, and achieves full type revelation through the signal realization rule. Comparing the two scenarios yields three main findings. First, regarding value sharing, the seller secures a strictly higher share under seller-pivotal collaboration: for sufficiently low-type winners, the seller extracts the entire value, whereas under winner-pivotal collaboration every winner must retain a positive share to sustain his critical effort. Second, regarding effort exertion, the pivotal party always exerts higher post-auction effort than the supporting party, and each party exerts greater effort when pivotal than when providing support. Third, seller-pivotal collaboration yields strictly higher seller revenue than winner-pivotal collaboration for any type distribution. Finally, these optimal mechanisms can be implemented through ascending auctions with endogenously determined linear contracts.
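The allocation step, awarding the asset to the highest virtual surplus, follows the Myerson logic. A sketch under an assumed uniform[0,1] type distribution, ignoring the paper's aftermarket value-sharing layer:

```python
import numpy as np

# Allocation step only (the paper adds post-auction value sharing on top).
# Assumed uniform[0,1] types: virtual value phi(v) = v - (1 - F(v))/f(v) = 2v - 1.
def allocate(values):
    phi = 2 * np.asarray(values) - 1
    winner = int(np.argmax(phi))
    return winner if phi[winner] >= 0 else None   # keep the asset if all negative

print(allocate([0.9, 0.7, 0.4]))   # 0: highest virtual surplus wins
print(allocate([0.3, 0.2, 0.1]))   # None: reserve binds, no sale
```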
econ.TH 2026-04-20

Price-increase limit can raise average prices

Strategic Pricing and Consumer Welfare under One-Sided Price Regulation

Firms set higher first-period prices to retain later flexibility, weakly increasing expected prices when high demand is likely.

Motivated by Germany's April 2026 fuel price regulation, in this note I study a two-period pricing problem with demand uncertainty and a rule that prohibits more than one price increase during the day. Under flexible pricing, the firm chooses the static monopoly price in each period. Under the regulation, by contrast, it may price strategically high in period 1 to preserve flexibility in period 2. I show that the regulation weakly raises expected average prices. The increase is strict when future high demand is sufficiently likely and the gap between high and low demand is large; otherwise, expected average prices are unchanged. Consumer surplus rises when expected prices do not, and decreases otherwise.
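The strategic-high-price mechanism can be reproduced in a stylized grid search. This is my own parameterization and my own reading of the increase-counting rule (at most one strict increase along the path from an overnight baseline), not the note's exact model:

```python
import numpy as np

# Hypothetical numbers: linear demand a - p, period-2 intercept aH w.p. q
# else aL, baseline price p0, at most one strict increase along p0 -> p1 -> p2.
a1, aL, aH, q, p0 = 1.6, 1.0, 3.0, 0.7, 0.1
grid = np.linspace(0.0, 1.6, 321)

def profit(p, a):
    return p * max(a - p, 0.0)

def ok(path):
    return sum(y > x for x, y in zip(path, path[1:])) <= 1

def expected(p1):
    # Period-2 price chosen state by state, subject to the increase budget.
    hi = max(profit(p2, aH) for p2 in grid if ok((p0, p1, p2)))
    lo = max(profit(p2, aL) for p2 in grid if ok((p0, p1, p2)))
    return profit(p1, a1) + q * hi + (1 - q) * lo

p1_reg = max(grid, key=expected)
print(p1_reg, a1 / 2)   # regulated period-1 price exceeds the static monopoly price
```

With these numbers, using the single allowed increase early and overshooting the static monopoly price a1/2 beats waiting, illustrating the note's claim that the rule can raise expected prices.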
econ.TH 2026-04-20

Fair lotteries can be split without creating likely envy

Decomposition Envy-Freeness in Random Assignment

For three or fewer agents or two preference types, any SD-EF matrix has a decomposition where no agent envies another with high probability.

In random assignment, fairness is often captured by stochastic-dominance envy-freeness (SD-EF). We observe that assignments satisfying SD-EF may admit decompositions that result in each agent envying another agent with high probability. To address this, we introduce decomposition envy-freeness (Dec-EF), which is a property of a decomposition rather than of an assignment matrix. We show that an SD-EF assignment matrix always admits a Dec-EF decomposition when there are at most three agents or the agents have at most two distinct preferences.
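As background for what a "decomposition" is here: any doubly stochastic assignment matrix splits into a lottery over deterministic assignments (Birkhoff-von Neumann). The paper's point is that which decomposition one picks matters for envy; the greedy sketch below only exhibits some decomposition:

```python
import numpy as np
from itertools import permutations

def birkhoff(P, tol=1e-9):
    # Greedy Birkhoff-von Neumann: repeatedly peel off a permutation matrix
    # supported on the remaining positive entries.
    P = P.astype(float).copy()
    n = P.shape[0]
    parts = []
    while P.max() > tol:
        perm = next(s for s in permutations(range(n))
                    if all(P[i, s[i]] > tol for i in range(n)))
        w = min(P[i, perm[i]] for i in range(n))
        parts.append((w, perm))
        for i in range(n):
            P[i, perm[i]] -= w
    return parts

# Uniform random assignment: each of 3 agents gets each object w.p. 1/3.
A = np.full((3, 3), 1 / 3)
decomp = birkhoff(A)
print(decomp)   # three deterministic assignments, each with weight 1/3
```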
econ.TH 2026-04-17

Rational agents cannot know if they know everything

Knowing that you do not know everything

True and updatable knowledge of events still leaves them uncertain about whether their information is complete.

We show that a rational agent with true and refinable knowledge of events cannot know if she knows everything or not. This epistemic limitation is not resolved by introspection about tautologies or by learning about new events.
econ.TH 2026-04-16

Bundling can trap or accelerate preference discovery

How do you know you won't like it if you've (never) tried it? Preference discovery and data design

Platforms control joint consumption experiences, and the correlation structure of those bundles decides how fast utility surprises correct initial misperceptions.

Consumers discover their preferences through experience, yet the sequence and composition of those experiences are often designed by firms, digital platforms, or policymakers. We introduce a ``data-design'' framework for preference discovery, in which the structure of consumption data shapes learning. Bundling generates correlated exposure across goods, so utility surprises propagate through the co-consumption network. When estimation errors are known, bias-targeted design can shut down learning and amplify misperceptions. Conversely, robust design uses only the geometry of past co-consumption: popularity-biased bundles slow learning, while correlation-breaking bundles accelerate preference discovery. The framework thus explains how dominant platforms can sustain biased demand through exposure design, and why effective regulation may need to intervene on the structure of exposure itself rather than only on prices or market shares.
econ.TH 2026-04-16

Unique rule meets balanced contributions in externality networks

Balanced Contributions in Networks and Games with Externalities

The BCE allocation divides value so each component keeps its worth and every link affects its two players equally, matching the Myerson value and related solutions in special cases.

For networks with externalities, where each component's worth may depend on the full network structure, balanced contributions and fairness lead to distinct component-efficient allocation rules. We characterize the unique component-efficient allocation rule satisfying balanced contributions -- the BCE rule. Existence is the main challenge: balanced contributions must hold on every edge, but the construction uses only spanning-tree edges. A cycle-sum identity bridges this gap by reducing balanced contributions on non-tree edges to relations in proper subnetworks. The BCE rule coincides with the Myerson value for TU games and with its generalization by Jackson--Wolinsky for network games without externalities, it recovers the externality-free value on the complete network, and -- unlike the fairness-based FCE rule -- it does not reduce to a graph-free formula applied to the graph-restricted game.
econ.TH 2026-04-15

Sequential prices achieve asymptotic efficiency in online matching

How to Use Prices for Efficient Online Matching

SEM approximates large-market equilibria to deliver fairness and strategy-proofness with probability one in dynamic settings.

Many matching markets feature unknown, dynamic arrivals of agents that must match immediately. A caseworker must match an abused child to a foster home, a hospital must assign a patient in critical condition to a room, or a city must place a homeless individual into a shelter. We design an online matching algorithm -- the Sequential Equilibrium Mechanism (SEM) -- that approximates large market equilibria to match arriving agents to objects. SEM is asymptotically efficient, fair, and strategy-proof with probability one. For our application, we plan to deploy a lab-in-the-field experiment where real caseworkers match vulnerable children to host homes, and we provide simulation evidence that SEM can substantially improve welfare.
econ.TH 2026-04-14

PAYG pensions designed for optimal balance via equilibrium theory

The Design of Optimally Balanced Pay-as-you-go Social Security Systems

The resulting systems resemble notional accounts and suit reforms during demographic transitions, illustrated with data from five countries.

This paper bridges social security design and general equilibrium theory to conceive optimally balanced pay-as-you-go systems. The design is based on the backward calculation algorithm from Dognini (2025), which is used to find optimal monetary equilibria of prone-to-savings non-stationary overlapping generations economies with heterogeneous households. In particular, this algorithm makes the design applicable for reforming pay-as-you-go systems in countries undergoing demographic transitions. Due to households' balanced budgets under equilibrium prices (i.e., Walras' law), these optimally balanced pay-as-you-go systems resemble the well-known notional accounts systems. The design is illustrated in a simplified framework using the past and forecast demographic and productivity dynamics of Brazil, China, India, Italy, and the United States from 1950 to 2070.
econ.TH 2026-04-13

Dutch auctions beat posted prices when waiting costs hit thresholds

Dutch Auctions in Matching Markets with Waiting Costs

A framework shows dominance depends on earnings and timing gaps, with two-sided complementarity amplifying gains when surplus is large.

When time-to-contract is payoff-relevant, how should a matching platform choose between a descending-clock (Dutch) mechanism and posted prices? We introduce a timing--entry--volume (TEV) framework that traces the causal chain from mechanism format through contracting speed, participation incentives, match volume, and revenue. Against immediate posted prices, dominance depends on the earnings and timing gaps and may hold for all waiting costs, only above a floor~$\lambda^*$, only below a ceiling~$\lambda^{**}$, or not at all. Against a batch-clearing benchmark, Dutch dominates through both timing and payment channels. In the two-sided extension, cross-side complementarity amplifies a one-sided advantage into equilibrium dominance on both sides, with welfare gains when match surplus is sufficiently large. All dominance conditions are stated in estimable quantities.
econ.TH 2026-04-13

Moral hazard in persuasion summarized by one shadow price

Moral Hazard in Delegated Bayesian Persuasion

Alignment conditions on payoff indices decide when the first-best experiment is achievable; otherwise a distorted virtual problem yields the second-best.

We study delegated Bayesian persuasion: a principal incentivizes an intermediary to design information via outcome-contingent transfers, while the intermediary privately chooses the experiment subject to convex costs. We characterize first-best implementability through a pair of alignment conditions on the principal's and intermediary's payoff indices. A local condition on the support of the target experiment is necessary; a global affine alignment condition is sufficient. We show that the gap between them is non-empty and provide a partial characterization of the intermediate region. When the first-best is unattainable, the principal's problem admits a virtual Bayesian persuasion representation: the second-best experiment maximizes the same concavified objective as the first-best, with the principal's payoff index distorted by a single scalar shadow price that summarizes the entire agency friction. Under entropy costs, moral hazard compresses posterior dispersion whenever the intermediary's utility differs across the actions it recommends. Explicit closed-form solutions for posteriors, mixing weights, and the optimal transfer schedule are derived for binary environments.
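The second-best here maximizes a concavified objective, as in standard Bayesian persuasion. A minimal brute-force concavification sketch for a binary state follows; the step payoff function and prior are our own illustrative assumptions, and the paper's virtual representation would simply replace this payoff index with one distorted by the shadow price.

```python
def cav_value(v, mu0, grid):
    # Concavification by brute force: with a one-dimensional belief,
    # splitting the prior into at most two posteriors suffices,
    # so search over all Bayes-plausible grid pairs straddling mu0.
    best = v(mu0)                                   # no-information benchmark
    for ml in (m for m in grid if m <= mu0):
        for mr in (m for m in grid if m >= mu0):
            if mr > ml:
                w = (mu0 - ml) / (mr - ml)          # weight on the high posterior
                best = max(best, (1 - w) * v(ml) + w * v(mr))
    return best

def v(mu):
    # Hypothetical payoff index: the designer wins iff the posterior is >= 1/2.
    return 1.0 if mu >= 0.5 else 0.0

grid = [i / 100 for i in range(101)]
val = cav_value(v, 0.30, grid)   # optimal persuasion value at prior 0.3
```

With this step payoff, the optimal split sends the prior 0.3 to posteriors 0 and 0.5, so the concavified value is 0.3/0.5 = 0.6, strictly above the no-information payoff of 0.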
econ.TH 2026-04-13

Perfect Coalitional Equilibrium is the largest nondiscriminating stable standard

On Conservative Stable Standard of Behavior and Perfect Coalitional Equilibrium

It contains every other nondiscriminating Conservative Stable Standard of Behavior in Greenberg's coalitional repeated games.

We show that in Greenberg (1989)'s coalitional repeated game situation, every nondiscriminating Conservative Stable Standard of Behavior is a subset of the set of Perfect Coalitional Equilibrium (Ali and Liu 2026) paths. Moreover, the set of Perfect Coalitional Equilibrium paths itself is a nondiscriminating Conservative Stable Standard of Behavior. The set of Perfect Coalitional Equilibrium paths is therefore the maximal nondiscriminating Conservative Stable Standard of Behavior.
econ.TH 2026-04-13

Intermediaries prompt monopolists to expand menus

Information Intermediaries in Monopolistic Screening

When recommenders maximize consumer surplus but favor high quality, sellers offer more items and overall welfare falls versus direct seller-to-consumer information provision.

We investigate the relationship between product offerings, information dissemination, and consumer decision-making in a monopolistic screening environment in which consumers lack information about their valuation of quality-differentiated products. An intermediary, who is driven by the objective of maximizing consumer surplus but is also biased towards high-quality products, provides recommendations after the monopolist announces the menu of product choices. We characterize the monopolist's profit-maximizing finite-item menu. Our results show that as intermediaries place greater emphasis on consumer surplus over product quality, sellers are prompted to strategically expand their product range. Intriguingly, this augmented product variety decreases economic efficiency compared to scenarios where direct seller-to-consumer information provision is the norm. The role of information intermediaries proves pivotal in shaping consumer welfare, market profitability, and overarching economic efficiency. Our insights underscore the complexities introduced by these intermediaries that policymakers and market designers must consider when designing policies centered on consumer learning and market information transparency.
econ.TH 2026-04-13

Varying buyer valuations traces full surplus-profit frontier

Market Composition and the Consumer Surplus-Profit Frontier in Monopoly Screening

Upstream choice of the distribution of consumer types lets monopoly outcomes reach any point on the Pareto trade-off between profit and net consumer surplus.

Economic institutions often influence market outcomes not by directly controlling sellers' menus, but by shaping the market composition sellers face. We study the welfare effects of this upstream choice in a monopoly screening model. An upstream actor chooses the distribution of buyer valuations, after which a monopolist screens optimally. We characterize the consumer surplus-profit frontier across market compositions: as the weight on consumer surplus varies, the payoff pair induced by the optimal market composition traces the Pareto frontier. If profit receives at least as much weight as consumer surplus, the optimal market composition collapses to the top type. Otherwise, it exhibits no exclusion, no interior bunching, and a positive mass at the highest valuation. Under a mild curvature condition, the optimal market composition is unique. Greater weight on consumer surplus makes the market less top-heavy: the differentiated interior expands and the premium top segment shrinks.
econ.TH 2026-04-10

General CES growth model lacks guaranteed saddle stability

On the stability of the steady-state of a general model of endogenous growth with two CES production functions

In a Bond-type two-sector setup the steady state need not be a saddle, so unique convergence cannot be assumed.

The main aim of this paper is to study the steady-state properties of a general Bond-type endogenous growth model, considering that both sectors are modeled by two distinct CES production functions. We prove that in this case, saddle-path stability cannot be guaranteed.
econ.TH 2026-04-10 Recognition

Spillovers flip toughness payoff in three-way bargaining

Reputational Spillovers

When one peripheral starts most reputable, the center and strongest lose while the weakest can gain from linked reputation updates.

We analyze a reputational bargaining game in which a central player negotiates simultaneously with two peripheral players. Each player is either rational or a commitment type who never concedes and insists on a fixed share, and concessions are publicly observed. The central player's type is global, so actions in one dispute update beliefs in the other and generate reputational spillovers. The game admits a unique equilibrium, enabling a sharp comparison with the bilateral benchmark of Abreu and Gul (2000). Spillovers are payoff-relevant if and only if a peripheral is uniquely the most reputable player initially. In that case, spillovers overturn the bilateral prediction that toughness pays: the central player is never strictly better off and can be strictly worse off; the strongest peripheral loses; and the weakest peripheral can benefit, especially when the center's higher-stakes dispute is with the other peripheral.
econ.TH 2026-04-08 2 theorems

Principals with cheap capital force costly outside borrowing

The Screening Cost of Liquidity

Advances pool types while contingent transfers screen but cost more; optimal mix leaves outside exposure in place.

A principal with cheap capital optimally forces her counterparty to borrow at above-market rates. The reason: the form of finance is a screening device. Advances provide liquidity but pool types; contingent transfers separate types, but, because they are not pledgeable, impose financing costs. The optimal contract preserves outside-finance exposure to maintain screening power. Two sufficient statistics pin down the optimal advance share. With complementary counterparties, a uniform subsidy that cheapens finance across every relationship can reduce the value of each. This explains the coexistence of early payment and contingent compensation in trade credit, venture capital, and internal capital markets.
econ.TH 2026-04-08 2 theorems

Endogenous rule justifies priority violations without consent

Justifiable Priority Violations

New criterion finds efficiency gains in student assignment that consent mechanisms cannot reach under any structure.

Figure from the paper full image
abstract click to expand
Addressing the large inefficiencies generated by the Deferred Acceptance (DA) mechanism requires priority violations, but which ones are justifiable? The leading approach is to ask individuals if they consent to waive their priority ex-ante. We develop an alternative question-free solution, in which a priority violation is justifiable whenever the affected student either (i) directly benefits from the improvement, or (ii) is unimprovable under any assignment that Pareto-dominates DA. This endogenous justifiability criterion permits improvements unattainable by the leading consent-based mechanism under any consent structure. We provide a ``just below cutoffs'' mechanism that always finds a strongly justifiable matching whenever DA's outcome is inefficient, and build on it to construct a polynomial-time algorithm that expands justifiable improvements iteratively, converging to a DA improvement that cannot be Pareto-improved by any justifiable matching without strictly expanding the beneficiary set. Finally, we prove theoretically that both the ex-ante consent and the endogenous justifiability frameworks have important limitations in reaching Pareto-efficient outcomes, and use simulations to quantify how binding these constraints are in practice.
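For reference, here is a minimal sketch of the student-proposing Deferred Acceptance mechanism the paper starts from, run on a classic three-student instance whose DA outcome is Pareto-dominated for students; the names, preferences, and priorities are our own illustrative assumptions, not from the paper.

```python
from collections import deque

def deferred_acceptance(student_prefs, school_prefs, capacity):
    # Student-proposing Deferred Acceptance (Gale-Shapley).
    free = deque(student_prefs)                  # students yet to be placed
    nxt = {s: 0 for s in student_prefs}          # next school each student tries
    held = {c: [] for c in school_prefs}         # tentatively held students
    while free:
        s = free.popleft()
        if nxt[s] >= len(student_prefs[s]):
            continue                              # s has exhausted the list
        c = student_prefs[s][nxt[s]]
        nxt[s] += 1
        held[c].append(s)
        if len(held[c]) > capacity[c]:
            rank = {x: i for i, x in enumerate(school_prefs[c])}
            held[c].sort(key=lambda x: rank[x])
            free.append(held[c].pop())            # reject lowest-priority student
    return {s: c for c, ss in held.items() for s in ss}

# Hypothetical instance: three students, three schools of capacity one.
students = {'i1': ['b', 'a', 'c'], 'i2': ['a', 'b', 'c'], 'i3': ['a', 'b', 'c']}
schools = {'a': ['i1', 'i3', 'i2'], 'b': ['i2', 'i1', 'i3'], 'c': ['i1', 'i2', 'i3']}
matching = deferred_acceptance(students, schools, {'a': 1, 'b': 1, 'c': 1})
```

Here DA assigns i1 to a, i2 to b, and i3 to c; swapping i1 and i2 makes both strictly better off and leaves i3 unchanged, but it violates i3's priority at school a -- the kind of priority violation whose justifiability the paper's criterion adjudicates.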
econ.TH 2026-04-08 Recognition

Lexicographic Robustness and the Efficiency of Optimal Mechanisms

Lexicographic maxmin picks ex post efficient designs for screening and auctions but reveals specific optimal inefficiencies in public goods.

A central challenge in mechanism design is to identify mechanisms whose performance is robust under uncertainty about the environment. The maxmin optimality criterion is commonly used for this purpose, but it often yields a large and economically uninformative set of mechanisms. This paper proposes a lexicographic approach to refining the maxmin criterion and characterizes the efficiency of optimal mechanisms. In canonical screening and auction environments, the strongest refinement -- proper robustness -- selects ex post efficient mechanisms. By contrast, in a public good provision environment, it identifies the precise form of optimal inefficiencies, which become severe in large economies.
econ.TH 2026-04-08 Recognition

Strong paired choice test confirms common ratio effect persists

Robust Testing Of the Allais Paradox By Paired Choices vs. Paired Valuations

Valuation tests are biased under stochastic choice; reanalysis of data shows the Allais paradox violation remains prevalent.

McGranaghan, Nielsen, O'Donoghue, Somerville, and Sprenger [2024] show that standard paired choice tests for the common ratio effect are structurally biased when choice is stochastic, proposing valuation tests as a robust alternative. Using valuation tests, they find no systematic evidence for the common ratio effect, seemingly overturning much of the extant literature. We evaluate this conclusion in light of stochastic choice theory. We argue that valuation tests are inherently biased and lack predictive power under standard expected utility assumptions. In contrast, we advocate for a ``strong'' paired choice test, proving it remains robustly unbiased across common models of stochastic choice. Applying this strong test to existing experimental data, we find that the common ratio effect remains highly prevalent.
econ.TH 2026-04-08 Recognition

Borda dominates every scoring rule in avoiding Condorcet losers

Condorcet-loser dominance among scoring rules

Its profiles electing a Condorcet loser form a proper subset of those for any other rule, and it is the only rule with this dominance over another scoring rule.

This paper studies a dominance relation among scoring rules with respect to avoiding the selection of the Condorcet loser. In a voting model with three or more alternatives, we say that a scoring rule $f$ Condorcet-loser-dominates (CL-dominates) another scoring rule $g$ if the set of profiles where $f$ selects a Condorcet loser is a proper subset of the set where $g$ does. We show that the Borda rule not only CL-dominates all other scoring rules, but also is the only scoring rule that CL-dominates some scoring rule.
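A minimal sketch of the objects involved -- scoring rules and the Condorcet loser -- on a nine-voter profile of our own construction, in which plurality elects the Condorcet loser while Borda does not:

```python
def scores(profile, weights):
    # profile: list of (ranking best-to-worst, number of voters with it).
    s = {}
    for ranking, n in profile:
        for pos, alt in enumerate(ranking):
            s[alt] = s.get(alt, 0) + n * weights[pos]
    return s

def condorcet_loser(profile):
    # An alternative that loses every pairwise majority comparison.
    alts = {a for r, _ in profile for a in r}
    for x in alts:
        if all(sum(n for r, n in profile if r.index(y) < r.index(x)) >
               sum(n for r, n in profile if r.index(x) < r.index(y))
               for y in alts - {x}):
            return x
    return None

# 4 voters a>b>c, 3 voters b>c>a, 2 voters c>b>a: 'a' is the Condorcet loser.
profile = [(('a', 'b', 'c'), 4), (('b', 'c', 'a'), 3), (('c', 'b', 'a'), 2)]
loser = condorcet_loser(profile)
plur = scores(profile, (1, 0, 0))
borda = scores(profile, (2, 1, 0))
plurality_winner = max(plur, key=plur.get)   # plurality elects the Condorcet loser
borda_winner = max(borda, key=borda.get)     # Borda elects 'b' instead
```

Profiles like this one belong to the set where plurality selects a Condorcet loser but Borda does not; the paper's result is that the containment runs this way for every other scoring rule, and strictly so.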
econ.TH 2026-04-07 2 theorems

Fast AI updates block reliable belief improvements

How AI Aggregation Affects Knowledge

Slow aggregation permits training weights that shrink learning gaps across environments, but quick global systems widen them in some cases.

Artificial intelligence (AI) changes social learning when aggregated outputs become training data for future predictions. To study this, we extend the DeGroot model by introducing an AI aggregator that trains on population beliefs and feeds synthesized signals back to agents. We define the learning gap as the deviation of long-run beliefs from the efficient benchmark, allowing us to capture how AI aggregation affects learning. Our main result identifies a threshold in the speed of updating: when the aggregator updates too quickly, there is no positive-measure set of training weights that robustly improves learning across a broad class of environments, whereas such weights exist when updating is sufficiently slow. We then compare global and local architectures. Local aggregators trained on proximate or topic-specific data robustly improve learning in all environments. Consequently, replacing specialized local aggregators with a single global aggregator worsens learning in at least one dimension of the state.
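A toy sketch of the feedback loop the abstract describes: DeGroot agents mix network beliefs with a synthesized signal from an aggregator that itself trains on population beliefs at a given speed. The network, weights, and speed below are illustrative assumptions, not the paper's model; the sketch only demonstrates convergence to a consensus whose distance from the true state is the learning gap.

```python
import numpy as np

def simulate(agg_speed, T=2000):
    # Toy DeGroot network with an AI aggregator fed back to agents.
    rng = np.random.default_rng(1)
    theta = 1.0                            # true state
    x = theta + rng.normal(0.0, 1.0, 4)    # initial noisy beliefs
    W = np.array([[0.4, 0.3, 0.2, 0.1],    # row-stochastic trust matrix
                  [0.1, 0.4, 0.3, 0.2],
                  [0.2, 0.1, 0.4, 0.3],
                  [0.3, 0.2, 0.1, 0.4]])
    a = x.mean()                           # aggregator's synthesized signal
    g = 0.3                                # agents' weight on the aggregator
    for _ in range(T):
        # Agents average over the network and the aggregator's signal;
        # the aggregator re-trains on current beliefs at rate agg_speed.
        x, a = (1 - g) * W @ x + g * a, (1 - agg_speed) * a + agg_speed * x.mean()
    return x, a

x, a = simulate(agg_speed=0.05)
gap = abs(x[0] - 1.0)   # learning gap: consensus belief vs. true state
```

Because the joint update is a positive row-stochastic map, beliefs and the aggregator's signal contract to a common consensus; the paper's threshold result concerns how `agg_speed` determines whether training weights exist that keep the resulting gap small across environments.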
econ.TH 2026-04-07 2 theorems

Seller needs at most three signal outcomes to screen buyers

Coarse Screening

The limit comes from two post-signal decisions and limited liability keeping allocation and payment choices independent.

A seller investigates a buyer before setting prices, balancing the cost of acquiring information against the gain from tailoring the contract to the buyer's private type. The optimal signal is coarse: no matter how rich the type space, the seller never needs more than three outcomes per buyer. The bound equals the number of independent post-signal decisions plus one, a quantity we call the effective policy dimension. Screening involves two decisions, whether to allocate and what to charge, giving the ternary bound. Limited liability is the source: without it, the price is pinned by the envelope, only the allocation decision remains, and signals are binary as in monitoring. The Myerson exclusion rule is an artifact of not investigating. With investigation, every marginal buyer trades with positive probability, governed by a universal function that connects information design to rational inattention. The bound holds for any strictly convex information cost.
econ.TH 2026-04-03 2 theorems

Duality proves equilibria exist in large indivisible-goods markets

Constrained optimal transport with an application to large markets with indivisible goods

A constrained transport duality corrects a compactness error and shows prices minimize a potential function.

We establish a variant of Monge--Kantorovich duality for a constrained optimal transport problem with a continuum of agents, a finite set of alternatives, and general linear constraints. As an application, we revisit the large-market model of indivisible goods in Azevedo et al. (2013), identify a flaw in the original equilibrium-existence proof stemming from an incorrect compactness claim, and recover equilibrium existence via our duality approach. We also characterize equilibrium prices as minimizers of a potential function, which yields a method for computing equilibrium prices.
