pith. machine review for the scientific record.

q-fin.PM

Portfolio Management

Security selection and optimization, capital allocation, investment strategies and performance measurement

q-fin.PM 2026-05-11 2 theorems

Skew engineering limits downside more than upside to aid drawdown recovery

The Engineering of Skew: A Path-Dependent Framework for Asymmetric Volatility Management

A path-dependent approach shapes portfolio exposure so recovery after losses requires less upside participation than symmetric de-risking.

abstract
Volatility is the language in which finance often describes risk, but it is not the language in which institutions experience risk. Allocators live through drawdowns, liquidity needs, spending rules, rebalance decisions, board oversight, and the interval between a prior high-water mark and full recovery. This paper develops a path-dependent framework for asymmetric volatility management. The arithmetic of recovery is nonlinear: after a drawdown of depth $D$, the required gain is $R=\frac{1}{1-D}-1$. Lower volatility can improve geometric compounding through the familiar small-return approximation $g \approx \mu-\frac{1}{2}\sigma^2$, but symmetric de-risking can also impair recovery if it sacrifices too much upside participation. The relevant design problem is therefore not volatility reduction in isolation; it is conditional exposure shaping. Skew engineering is defined here as the portfolio construction discipline of reducing harmful downside participation more than productive upside participation, controlling submergence, and preserving enough recovery participation to sustain compounding under adverse regimes. The resulting Recovery-Efficiency Protocol links drawdown depth, time underwater, recovery burden reduction, and rebound participation into an allocator-facing reporting discipline. Machine learning and AI methods are framed as tools for conditional estimation, regime mapping, robustness testing, and model-risk governance, not as market prediction.
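The recovery arithmetic quoted in the abstract can be checked in a few lines; the function names are ours, not the paper's.

```python
def required_recovery_gain(drawdown: float) -> float:
    # After a drawdown of depth D, the gain needed to regain the prior
    # high-water mark is R = 1/(1 - D) - 1, which grows nonlinearly in D:
    # a 20% loss needs 25% back, but a 50% loss needs 100%.
    return 1.0 / (1.0 - drawdown) - 1.0

def approx_geometric_growth(mu: float, sigma: float) -> float:
    # Small-return approximation g ~ mu - sigma^2 / 2 for geometric compounding.
    return mu - 0.5 * sigma ** 2
```

For example, `required_recovery_gain(0.5)` returns 1.0: a 50% drawdown requires a 100% gain, which is why symmetric de-risking that also caps the rebound can prolong time underwater.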
q-fin.PM 2026-05-04

SPO portfolio models inflate forecasts through decision ranking

Decision-Induced Ranking Explains Prediction Inflation and Excessive Turnover in SPO-Based Portfolio Optimization

KKT analysis shows that risk-adjusted marginal scores drive both exaggerated predictions and rapid reallocations; clipping and partial rebalancing stabilize them.

abstract
Decision-focused learning (DFL) is attractive for portfolio optimization because it trains predictors according to downstream decision quality rather than prediction accuracy alone. However, DFL based on the SPO (Smart Predict-then-Optimize) surrogate may produce inflated return signals and unstable portfolio reallocations. This study provides a KKT-based interpretation showing that portfolio decisions can be viewed as ranking over risk- and transaction-cost-adjusted marginal scores. Empirically, we examine prediction inflation and excessive turnover in SPO-trained portfolios, and evaluate clipping, min-max rescaling, and partial portfolio adjustment as practical stabilization mechanisms. The results suggest that realistic output constraints and portfolio-level turnover control improve the implementability of SPO-based portfolio strategies.
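A minimal sketch of two of the stabilization mechanisms named in the abstract, clipping predictions and partial portfolio adjustment; the function names and the L1 turnover definition are illustrative, not the paper's code.

```python
def clip_predictions(mu_hat, bound):
    # Cap predicted returns at +/- bound to curb decision-induced inflation.
    return [max(-bound, min(bound, m)) for m in mu_hat]

def partial_rebalance(w_old, w_target, step):
    # Move only a fraction `step` of the way toward the target weights,
    # trading fidelity to the optimizer's decision against turnover.
    return [(1 - step) * wo + step * wt for wo, wt in zip(w_old, w_target)]

def turnover(w_old, w_new):
    # One-way turnover as the L1 distance between consecutive weight vectors.
    return sum(abs(a - b) for a, b in zip(w_old, w_new))
```

A step of 0.5 halves the per-period turnover of a full rebalance, which is the portfolio-level control the abstract refers to.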
q-fin.PM 2026-05-01

S&P 500 rose but levered ETFs fell

A Levered ETF Anomaly Explained

Covariance between leverage deviations and index returns accounts for the rest of the 2022-2023 performance gap

abstract
Counterintuitively, the S&P 500 Index rose between January 1, 2022, and December 29, 2023, while exchange-traded funds (ETFs) seeking to deliver 2x and 3x daily returns of the index delivered substantially negative returns. Roughly two-thirds of the difference between the returns of the index and the levered ETFs can be attributed to compounding and volatility. The remaining difference is explained by the covariance between the ETFs' deviations from constant leverage and the index's return.
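The compounding effect behind the anomaly is easy to reproduce: a daily k-times-levered fund compounds k times each daily return, which under volatility falls short of k times the index's cumulative return. The toy path below is ours, not the paper's data.

```python
def cumulative(returns):
    # Buy-and-hold cumulative return of a daily return sequence.
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

def levered_daily(returns, k):
    # A fund delivering k times each *daily* return, rebalanced every day.
    total = 1.0
    for r in returns:
        total *= 1.0 + k * r
    return total - 1.0

# A volatile but nearly flat path: the 2x fund loses far more than 2x the index.
daily = [0.10, -0.10] * 10
index_total = cumulative(daily)            # about -9.6%
levered_total = levered_daily(daily, 2.0)  # about -33.5%
```

This is the compounding-and-volatility piece of the gap; the paper attributes the remainder to covariance between leverage deviations and index returns.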
q-fin.PM 2026-04-30

LLM agents find crypto factors with 44.55% out-of-sample returns

From Hypotheses to Factors: Constrained LLM Agents in Cryptocurrency Markets

A constrained protocol turns hypothesis search into auditable trading factors that hold after costs in 2024-2026 data.

abstract
LLM agents are promising tools for empirical discovery, but their flexibility can also turn discovery into uncontrolled search. We study how to use agents under a reproducible protocol through cryptocurrency factor discovery. Our framework casts the task as sequential hypothesis search: an agent reads an append-only experiment trace, proposes falsifiable factor hypotheses, and maps them to executable recipes, while a deterministic engine enforces fixed data splits, selection gates, transaction costs, and portfolio tests. Candidate actions are restricted to a point-in-time factor DSL, making both successful and failed hypotheses auditable. A ridge-combined portfolio trained only on 2020-2022 data achieves a 44.55% annualized return and Sharpe ratio of 1.55 in the 2024-2026 pure out-of-sample period after a 5 basis point one-way trading cost.
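The "ridge-combined portfolio" step can be sketched as an ordinary ridge regression of a target return on candidate factor return streams; the function and its parameters are our assumption of what such a combiner looks like, not the paper's protocol.

```python
import numpy as np

def ridge_combine(F, y, lam=1.0):
    """Combine K candidate factor return streams (columns of the T x K
    matrix F) into a single signal by ridge regression against a target y:
    beta = (F'F + lam * I)^{-1} F'y. Fitting only on the training window
    keeps the later period out of sample."""
    K = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(K), F.T @ y)
```

The ridge penalty shrinks the combination toward zero, which guards against overweighting factors that merely fit the training window.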
q-fin.PM 2026-04-29

Affine-normal descent scales higher-moment portfolios to thousands of assets

Yau's Affine-Normal Descent for Large-Scale Unrestricted Higher-Moment Portfolio Optimization

By following level-set normals directly on the return matrix the algorithm skips coskewness tensors and keeps exact line searches feasible.

abstract
Unrestricted mean-variance-skewness-kurtosis portfolio optimization can capture asymmetry and tail risk, but sample-moment formulations become computationally impractical when the asset universe is large: they produce dense nonconvex quartic objectives with prohibitive coskewness and cokurtosis tensors and anisotropic, ill-conditioned level sets. We develop a structure-exploiting algorithm based on Yau's affine-normal descent that follows affine-normal directions of the current level set while working directly with the return matrix. The method avoids explicit higher-order tensors and exploits the quartic structure for exact sample oracles, derivative evaluation, and exact line search. We also provide theory for the reduced simplex formulation, including regularity and convexity conditions that separate data-map geometry from investor preference coefficients. Computational results show a clear implementation split: a direct configuration is effective on the standard small benchmark, whereas a preconditioned conjugate-gradient configuration with stall recovery becomes the preferred large-scale implementation once the asset universe reaches the upper hundreds and remains competitive as it moves into the thousands. On a 5-minute A-share panel with 5,440 stocks, the method makes direct full-universe comparisons with exact mean-variance portfolios feasible and shows on the baseline split that the incremental value of higher moments is strongest at moderate return targets.
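The tensor-avoidance idea is concrete: for a T x N return matrix R, every portfolio moment can be read off the scalar series r = Rw in O(TN) work, so the N^3 coskewness and N^4 cokurtosis arrays are never materialized. A sketch with illustrative preference weights (this is not the paper's algorithm or calibration):

```python
import numpy as np

def mvsk_objective(w, R, lam=(1.0, 2.0, 1.0, 1.0)):
    # Mean-variance-skewness-kurtosis objective evaluated directly from the
    # return matrix: r = R @ w yields every central moment without forming
    # coskewness/cokurtosis tensors.
    r = R @ w
    c = r - r.mean()
    l1, l2, l3, l4 = lam
    return (l1 * r.mean() - l2 * (c ** 2).mean()
            + l3 * (c ** 3).mean() - l4 * (c ** 4).mean())
```

Note that `(c ** 3).mean()` equals the triple contraction w_i w_j w_k S_ijk of the sample coskewness tensor, so nothing is lost by skipping the tensor.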
q-fin.PM 2026-04-27

CRISP beats HRP and Markowitz in signal-aware portfolio tests

Beyond De Prado and Cotton: Hierarchical and Iterative Methods for General Mean-Variance Portfolios

Iterative shrinkage on correlations while keeping variances fixed yields superior out-of-sample Sharpe across simulated regimes.

abstract
Hierarchical Risk Parity (De Prado) and the Schur-complement generalization of Cotton are among the most widely adopted regularised portfolio construction methods, yet both are signal-blind: they solve only the minimum-variance problem and cannot accommodate an arbitrary expected-return forecast. This paper introduces three methods that incorporate alpha signals into hierarchical and regularised portfolio construction. HRP-$\mu$ is a hierarchical allocator that accepts an arbitrary signal $\mu$ and nests standard HRP when $\gamma = 0$ and $\mu=\mathbf{1}$. It preserves the tree-based structure of HRP while extending it beyond the minimum-variance setting. HRP-$\Sigma\mu$ strengthens this construction by replacing inverse-variance representatives with recursive local mean-variance optima, thereby using richer within-cluster covariance information at the same $O(N^2)$ asymptotic cost. CRISP (Correlation-Regularised Iterative Shrinkage Portfolios) is an iterative solver for $P_\gamma w = \mu$ with $P_\gamma = (1-\gamma)\operatorname{diag}(\Sigma) + \gamma \Sigma$, so that $\gamma$ interpolates between a diagonal portfolio rule and full Markowitz. At convergence, CRISP is Markowitz applied to a variance-preserving shrunk covariance (diagonal variances unchanged, off-diagonal correlations shrunk), with $\gamma$ tuned for out-of-sample Sharpe rather than covariance-estimation loss. In Monte Carlo experiments across multiple covariance regimes and estimation ratios, HRP-$\mu$ and HRP-$\Sigma\mu$ both outperform plain HRP, with HRP-$\Sigma\mu$ consistently improving on HRP-$\mu$. CRISP at intermediate $\gamma$ is the dominant method in both regimes, outperforming HRP, Cotton, Ledoit-Wolf shrinkage, direct Markowitz, and the signal-aware hierarchical methods.
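The CRISP system $P_\gamma w = \mu$ is small enough to sketch. Below, a plain Jacobi iteration stands in for whatever iterative scheme the paper actually uses (an assumption on our part); note that diag($P_\gamma$) = diag($\Sigma$) for every $\gamma$, which is why variances are preserved.

```python
import numpy as np

def crisp_weights(mu, Sigma, gamma, iters=500):
    # P_gamma interpolates between a diagonal rule (gamma = 0) and full
    # Markowitz (gamma = 1); its diagonal equals diag(Sigma) for all gamma.
    D = np.diag(np.diag(Sigma))
    P = (1.0 - gamma) * D + gamma * Sigma
    w = mu / np.diag(Sigma)          # exact solution at gamma = 0
    d_inv = 1.0 / np.diag(P)
    for _ in range(iters):           # Jacobi sweeps toward P w = mu
        w = w + d_inv * (mu - P @ w)
    return w
```

Jacobi converges here when $P_\gamma$ is diagonally dominant, which holds for modest $\gamma$; a production solver would use a method with guarantees for general SPD systems.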
q-fin.PM 2026-04-27

VC deal correlations boost extreme successes without raising averages

Beyond Picking Winners: Correlation-Driven Tail Risk in Venture Capital Portfolio Construction

Simulations holding success odds fixed show heavier right tails and higher kurtosis driven by attribute-linked dependence.

abstract
We propose a Gaussian-copula-based framework that learns deal-level dependence directly from observed joint success frequencies across founder, geography, and market attributes. Holding marginal deal success probabilities fixed, deal-level correlation preserves expected portfolio outcomes but shifts the portfolio distribution toward heavier right tails and higher kurtosis. In portfolio simulations, correlation reduces the probability of modest success counts while sharply amplifying extreme upside outcomes, especially in structurally concentrated portfolios. Our findings suggest that extreme venture capital outcomes may partly reflect correlation-induced tail amplification rather than solely higher average deal quality, with potential implications for portfolio construction and risk management. We note that the observed dataset reflects selected deals with observable outcomes, which inflates apparent success rates relative to the true population base rate; however, the core finding that correlation reshapes the distributional shape while leaving the mean unchanged is structurally robust to the level of marginal success probabilities.
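The mechanism can be illustrated with a one-factor special case of the Gaussian copula: a single shared latent factor with loading sqrt(rho) gives every pair of deals latent correlation rho while leaving each deal's marginal success probability at p by construction. This is our simplification for illustration, not the paper's attribute-linked fit.

```python
import math
import random

def norm_ppf(p):
    # Standard-normal quantile by bisection on Phi(x) = (1 + erf(x/sqrt(2)))/2.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def simulate_success_counts(p, rho, n_deals, trials, seed=7):
    # Deal i succeeds when its latent normal a*f + b*e_i falls below the
    # p-quantile; the common factor f induces pairwise correlation rho,
    # while the marginal success probability stays exactly p.
    rng = random.Random(seed)
    z_p = norm_ppf(p)
    a, b = math.sqrt(rho), math.sqrt(1.0 - rho)
    counts = []
    for _ in range(trials):
        f = rng.gauss(0.0, 1.0)
        counts.append(sum(a * f + b * rng.gauss(0.0, 1.0) < z_p
                          for _ in range(n_deals)))
    return counts
```

Holding p fixed and raising rho leaves the mean success count near n*p but fattens both tails of the count distribution, which is the paper's central point.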
q-fin.PM 2026-04-22

LLM edge filtering lifts cross-stock Sharpe from 0.74 to 0.82

Cross-Stock Predictability via LLM-Augmented Semantic Networks

Refined 10-K networks keep only economically linked pairs, improving long-short returns and cutting drawdowns on S&P 500 stocks.

abstract
Text-based financial networks are increasingly used to study cross-stock return predictability. A common approach constructs links from similarities in firms' disclosure embeddings, but such networks often contain spurious edges because textual proximity does not necessarily imply economic connection. We propose a two-stage framework that first builds a sparse candidate graph from 10-K embeddings and then uses a large language model to classify and filter candidate edges according to their economic relations. The refined graph is used to aggregate pair-level mean-reversion signals into stock-level trading signals with relation-aware and distance-based weights. In a backtest on S&P 500 constituents from 2011 to 2019, LLM-based edge filtering improves the long-short Sharpe ratio from 0.742 to 0.820 and reduces maximum drawdown from -10.47% to -7.85%. These results suggest that LLM-based reasoning can improve the economic fidelity of text-derived financial networks and strengthen cross-stock predictability.
q-fin.PM 2026-04-21

Backtests reflect launch regimes more than skill

Evaluating Structured Strategy Backtests: Peer Benchmarks, Regime Timing, and Live Performance

Analysis of 1,726 strategies finds pro-forma results weaken sharply in live trading and against peers.

abstract
Institutional allocators often evaluate structured strategies on the basis of marketed backtests -- hypothetical track records constructed by applying a strategy's rules to historical data prior to any live trading, also referred to as pro-forma performance. It is unclear how much of that signal survives once the strategy is actually traded. Using 1,726 commercially distributed structured strategies from ten global institutions, this paper shows that raw pro-forma performance has only limited portability into the live period and weakens sharply once live outcomes are measured relative to peer and external benchmarks. The evidence indicates that marketed backtests predominantly reflect the common factor regime present before launch rather than strategy-specific skill. Strategies launched after unusually strong bucket-factor conditions experience materially worse subsequent deterioration. For allocators, the implication is practical: backtests should be judged relative to appropriate peer benchmarks, and the discount applied to them should increase when launch occurs after an extreme factor run.
q-fin.PM 2026-04-20

Lasso screening before optimization aids high-dimensional portfolios

Post-Screening Portfolio Selection

First screen assets with Lasso regression on excess returns, then optimize weights on the selected assets.

abstract
We propose post-screening portfolio selection (PS$^2$), a two-step framework for high-dimensional mean-variance investing. First, assets are screened by Lasso-type regression of a constant on excess returns without an intercept. Second, portfolio weights are estimated on the selected set using standard low-dimensional methods. Because strong factors can destroy sparsity in real data, we further introduce PS$^2$ with factors (FPS$^2$), which defactors returns before screening and allows factor investing in the final step. We establish theoretical guarantees, and simulations and an empirical application show competitive performance, especially when sparse screening is appropriate or strong factors are explicitly accommodated.
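The screening regression is unusual: the dependent variable is a constant (a vector of ones) and the regressors are the excess returns, with no intercept. A small coordinate-descent Lasso makes the idea concrete; this is our implementation with an illustrative penalty scaling, not the paper's estimator.

```python
import numpy as np

def lasso_screen(X, lam, iters=200):
    """Minimize (1/2T)||1 - X b||^2 + lam * ||b||_1 with no intercept,
    where X is the T x N matrix of excess returns. Assets with b_j != 0
    pass the screen; weights are then re-estimated on that subset by
    standard low-dimensional methods."""
    T, N = X.shape
    y = np.ones(T)
    b = np.zeros(N)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(N):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            rho = float(X[:, j] @ r)
            soft = max(abs(rho) - lam * T, 0.0) / col_ss[j]
            b[j] = soft if rho > 0 else -soft      # soft-thresholding
    return b
```

An asset with persistently positive excess returns attracts a nonzero coefficient, while a pure-noise asset is thresholded to exactly zero.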
q-fin.PM 2026-04-20

LLM multi-agent picks beat S&P 500 benchmark by over 1% monthly

Signal or Noise in Multi-Agent LLM-based Stock Recommendations?

Strong-buy portfolio returns 2.18% per month versus 1.15% passive over 19 months, with p-value of 0.003 against random selections.

abstract
We present the first portfolio-level validation of MarketSenseAI, a deployed multi-agent LLM equity system. All signals are generated live at each observation date, eliminating look-ahead bias. The system routes four specialist agents (News, Fundamentals, Dynamics, and Macro) through a synthesis agent that issues a monthly equity thesis and recommendation for each stock in its coverage universe, and we ask two questions: do its buy recommendations add value over both passive benchmarks and random selection, and what does the internal agent structure reveal about the source of the edge? On the S&P 500 cohort (19 months) the strong-buy equal-weight portfolio earns +2.18%/month against a passive equal-weight benchmark of +1.15% (approximating RSP), a +25.2% compound excess, and ranks at the 99.7th percentile of 10,000 Monte Carlo portfolios (p=0.003). The S&P 100 cohort (35 months) delivers a +30.5% compound excess over EQWL with consistent direction but formal significance not reached, limited by the small average selection of ~10 stocks per month. Non-negative least-squares projection of thesis embeddings onto agent embeddings reveals an adaptive-integration mechanism. Agent contributions rotate with market regime (Fundamentals leads on S&P 500, Macro on S&P 100, Dynamics acts as an episodic momentum signal) and this agent rotation moves in lockstep with both the sector composition of strong-buy selections and identifiable macro-calendar events, three independent views of the same underlying adaptation. The recommendation's cross-sectional Information Coefficient is statistically significant on S&P 500 (ICIR=+0.489, p=0.024). These results suggest that multi-agent LLM equity systems can identify sources of alpha beyond what classical factor models capture, and that the buy signal functions as an effective universe-filter that can sit upstream of any portfolio-construction process.
q-fin.PM 2026-04-20

Correlation tree allocates weights for signed long-short portfolios

Topological Risk Parity

Topological Risk Parity keeps part of each signal at parent nodes because parent-child correlations are imperfect, enabling explicit sector- and factor-level exposure control.

abstract
We develop \emph{Topological Risk Parity} (TRP), a tree-based portfolio construction approach intended for long/short, market neutral, factor-aware portfolios. The method is motivated by the dominance of passive/factor flows that naturally create a tree-like structure in markets. We introduce two implementation variants: (i) a rooted minimum-spanning-tree allocator, and (ii) a market/sector-anchored variant referred to here as \emph{Semi-Supervised TRP}, which imposes SPY as the root node and the 11 sector ETFs as the second layer. In both cases, the key object is a sparse rooted topology extracted from a correlation-distance graph, together with a propagation law that maps signed signals into portfolio weights. Relative to classical Hierarchical Risk Parity (HRP), TRP is non-binary and designed for signed cross-sectional signals and hedged long-short portfolios: it preserves signal direction while using return-dependence geometry to shape exposures. It accounts for the fact that there is imperfect correlation between parent and child nodes, and thus does not propagate weights entirely to the children. We can also impose an economically motivated hierarchy involving industries, sub-industries, or factors. Imposing such a hierarchy makes the allocation more robust to macroeconomic shocks and crises, during which within-cluster correlations might spike. These features make TRP well suited for market-neutral, equity stat-arb or L/S trend-type strategies, where enforcing neutrality or limiting exposures at the market, sector or factor level is extremely important.
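One plausible propagation law in this spirit (ours, not the paper's): a node keeps a fraction 1 - rho of the signal mass reaching it and passes the rest, split across children, so an imperfect parent-child correlation rho leaves explicit weight at the parent (e.g., at SPY or a sector ETF). Total signal mass is conserved at every split.

```python
def propagate(children, rho, signals, root):
    # Map signed node signals to weights on a rooted tree; mass is
    # conserved at each split because (1 - rho) + rho = 1.
    weights = {}
    def visit(node, incoming):
        mass = incoming + signals.get(node, 0.0)
        kids = children.get(node, [])
        if not kids:
            weights[node] = mass            # leaves keep everything
            return
        weights[node] = (1.0 - rho) * mass  # retained at the parent
        for k in kids:
            visit(k, rho * mass / len(kids))
    visit(root, 0.0)
    return weights
```

With SPY as root over two sector nodes carrying signed signals, the residual SPY weight gives the explicit market-level hedge handle the subtitle alludes to.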
q-fin.PM 2026-04-17

Duality turns benchmarked risk-sensitive portfolios into explicit LQG games

Risk-Sensitive Investment Management via Free Energy-Entropy Duality

The reformulation gives quadratic value functions, affine controls, and dual Kelly interpretations while enabling reinforcement-learning implementation.

abstract
We study a benchmarked risk-sensitive portfolio problem in a factor-based setting to bring together three strands of the literature: benchmarked risk-sensitive investment management, the Kuroda-Nagai change-of-measure method, and the free energy-entropy duality of Dai Pra et al. (1996). We show that the duality yields a direct solution of the benchmarked problem by reformulating it as a linear-quadratic-Gaussian stochastic differential game under a suitable equivalent probability measure, with an entropic regularization. The resulting value function is quadratic, the optimal controls are explicit affine feedback maps, and the optimal allocation admits two complementary interpretations: as a fractional Kelly strategy and as a Kelly portfolio adjusted via the entropic regularization. This formulation, therefore, contributes both a direct analytical route to the solution and a clearer interpretation of risk sensitivity, thereby embedding the classical Kuroda-Nagai change-of-measure approach within a more general framework. An added benefit of this formulation is that it is suitable for implementation via an RL algorithm. A simple implementation on U.S. equity data illustrates the tractability of the framework and numerically confirms the equivalence of the two approaches.
q-fin.PM 2026-04-14

Temperature anomalies cut returns in most equity sectors

Temperature Anomalies and Climate Physical Risk in Portfolio Construction

Time-varying exposure metrics let investors build climate-resilient portfolios that still diversify and outperform standard benchmarks in a backtesting analysis.

abstract
Driven by the increasing frequency and intensity of natural disasters and chronic climate threats, we investigate the impact of physical climate risk on global equity portfolios. By employing a panel regression analysis on sectoral returns, we provide statistical evidence that extreme temperature events exert a negative effect on most sectors. We introduce two novel metrics based on these temperature anomalies, Climate Risk Exposure and Climate Exposure Volatility, in order to measure the environmental vulnerability of a portfolio. Unlike available static country-level indices, these metrics incorporate the time-varying probability of extreme events and their relations with firm-specific asset intensity. We integrate these measures into a multi-objective portfolio optimization framework. This approach extends the traditional Mean-Variance paradigm, allowing investors to construct portfolios that are resilient to physical climate shocks without sacrificing diversification. Finally, we conduct a backtesting analysis to show the practical benefits of incorporating these climate risk metrics into the investment process, evaluating how climate-aware strategies perform relative to traditional benchmarks.
q-fin.PM 2026-04-10

Recurrent nets approximate optimal DCVaR portfolio policies

Multi periods mean-DCVaR optimization: a Recursive Neural Network resolution

The approach handles path-dependent tail-risk constraints and high-dimensional states in multi-period mean-DCVaR problems without dynamic programming.

abstract
We study a discrete-time multi-period portfolio optimization problem under an explicit constraint on the Deviation Conditional Value-at-Risk (DCVaR), defined as the excess of Conditional Value-at-Risk over expected terminal wealth. The objective is to maximize expected return subject to a global tail-risk constraint, leading to a time-inconsistent precommitment problem. We propose a recurrent neural-network-based approach to approximate the optimal precommitment policy, which accommodates path-dependent risk constraints and high-dimensional state dynamics without relying on dynamic programming. The explicit constraint formulation allows for exact penalty methods and provides a transparent notion of feasibility. The methodology is validated in a classical complete-market financial model and extended to a multi-period portfolio allocation problem in (re)insurance, capturing the long-term risk dynamics of insurance liabilities.
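The risk functional is simple to state in sample terms. Under one common sign convention (our assumption; the paper defines DCVaR as CVaR in excess of expected terminal wealth), the deviation CVaR of simulated terminal wealths is the mean wealth minus the average over the worst alpha-tail:

```python
def dcvar(wealth, alpha=0.05):
    # Deviation CVaR: expected terminal wealth minus the mean of the worst
    # floor(alpha * n) outcomes (at least one). Translation-invariant, so
    # it measures tail spread rather than level.
    w = sorted(wealth)
    k = max(1, int(len(w) * alpha))
    tail_mean = sum(w[:k]) / k
    return sum(w) / len(w) - tail_mean
```

A constraint DCVaR(W_T) <= c can then enter the training loss of the policy network as an exact penalty of the form max(0, dcvar(samples) - c), which is the kind of explicit-constraint handling the abstract describes.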
q-fin.PM 2026-04-07 Recognition

Weighted exponential utilities reduce α-robust claims to static quantile optimization

α-robust utility maximization with intractable claims: A quantile optimization approach

For weighted exponential utilities the α-robust criterion depends only on marginals, turning the dynamic problem into a static concave one.

abstract
This paper studies an $\alpha$-robust utility maximization problem where an investor faces an intractable claim -- an exogenous contingent claim with known marginal distribution but unspecified dependence structure with financial market returns. The $\alpha$-robust criterion interpolates between worst-case ($\alpha=0$) and best-case ($\alpha=1$) evaluations, generalizing both extremes through a continuous ambiguity attitude parameter. For weighted exponential utilities, we establish via rearrangement inequalities and comonotonicity theory that the $\alpha$-robust risk measure is law-invariant, depending only on marginal distributions. This transforms the dynamic stochastic control problem into a concave static quantile optimization over a convex domain. We derive optimality conditions via calculus of variations and characterize the optimal quantile as the solution to a two-dimensional first-order ordinary differential equation system, which is a system of variational inequalities with mixed boundary conditions, enabling numerical solution. Our framework naturally accommodates additional risk constraints such as Value-at-Risk and Expected Shortfall. Numerical experiments reveal how ambiguity attitude, market conditions, and claim characteristics interact to shape optimal payoffs.
