pith. machine review for the scientific record.

q-fin.CP

Computational Finance

Computational methods, including Monte Carlo, PDE, lattice and other numerical methods with applications to financial modeling

q-fin.CP 2026-05-11 2 theorems

Rule embedding cuts parameters for imbalance price forecasts

A Market-Rule-Informed Neural Network for Efficient Imbalance Electricity Price Forecasting

Hybrid neural model matches deep learning accuracy while using far fewer parameters and less training time in electricity balancing markets.

Accurate and efficient imbalance electricity price forecasting is critical for industrial energy trading systems, especially as battery assets and automated bidding pipelines increasingly participate in balancing markets. However, real-time forecasting is complicated by nonlinear market-rule-based price formation, heterogeneous input signals, and incomplete data availability caused by communication delays, publication lags, and measurement outages. This paper proposes a market-rule-informed neural forecasting framework that embeds imbalance price formation rules into the latent space of an expressive neural network. The proposed framework preserves raw signal information while exploiting transparent market-rule priors. We further analyze operational robustness by removing price-component information and characterize how forecasting performance scales with input length and forecasting horizon. Experimental results show that the proposed model achieves competitive forecasting performance with substantially fewer trainable parameters and shorter training time than generic deep learning baselines, demonstrating that market-rule priors and expressive neural networks should be jointly used for accurate and computationally sustainable forecasting in industrial energy trading applications. The implementation is publicly available at https://runyao-yu.github.io/MRINN/.
q-fin.CP 2026-05-08

Geometry-aware correction refines SABR volatility formula

A Geometry-Aware Residual Correction of Hagan's SABR Implied Volatility Formula

A neural network learns only the residual error using features from the SABR dynamics, yielding better accuracy than the original formula or a pure neural network.

This paper proposes a hybrid methodology to improve the approximation of SABR (Stochastic Alpha Beta Rho) implied volatility by combining analytical structure with machine learning. The approach augments the neural-network input representation with geometric features derived from the stochastic differential equations of the SABR model. Unlike approaches that fully replace analytical formulas with black-box models, the proposed framework preserves the analytical backbone of the model. The hybridization operates along two complementary dimensions. First, geometry-aware variables reflecting intrinsic properties of the SABR dynamics are used as structured inputs to the network. Second, the neural network is trained to learn the residual error relative to Hagan's closed-form approximation rather than implied volatility directly. The resulting model acts as a structured residual correction to the analytical formula, retaining interpretability while capturing higher-order effects that are not included in the asymptotic expansion. Numerical experiments conducted over realistic parameter domains, as well as stressed environments, show that the method improves accuracy and robustness compared with both analytical approximations and standard neural-network approaches. Because the correction remains lightweight and structurally consistent with the underlying model, the framework is well suited for real-time pricing and calibration in practical trading environments.
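The residual-correction idea generalizes beyond SABR: keep a cheap analytic backbone and fit a model only to its error. A minimal sketch with a toy target and a polynomial fit standing in for the paper's neural network (the functions `exact` and `analytic_approx` are hypothetical stand-ins, not Hagan's formula):

```python
import numpy as np

# Toy stand-ins (hypothetical): an "exact" quantity and a cheap analytic
# approximation whose error grows away from zero.
def exact(x):
    return np.sin(x) + 0.05 * x**2

def analytic_approx(x):
    return x - x**3 / 6.0  # truncated-series backbone

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.5, 1.5, 500)
residual = exact(x_train) - analytic_approx(x_train)

# Learn only the residual; the analytic backbone is preserved.
coeffs = np.polyfit(x_train, residual, deg=5)

def corrected(x):
    return analytic_approx(x) + np.polyval(coeffs, x)

x_test = np.linspace(-1.4, 1.4, 201)
err_raw = float(np.max(np.abs(exact(x_test) - analytic_approx(x_test))))
err_cor = float(np.max(np.abs(exact(x_test) - corrected(x_test))))
```

Because the correction vanishes wherever the backbone is already accurate, the hybrid inherits the analytic formula's behavior in well-approximated regions and only spends model capacity on the higher-order error.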
q-fin.CP 2026-05-08

Hybrid Newton-bisection computes lambda quantiles reliably

Numerical methods for lambda quantiles: robust evaluation and portfolio optimisation

The procedure guarantees global convergence for variable-confidence risk measures and accelerates portfolio optimization.

Lambda quantiles, originally introduced as lambda value at risk, generalise the classical value at risk by allowing for a variable confidence level. This work presents efficient algorithms for computing lambda quantiles and demonstrates their application in portfolio optimisation. We first develop a robust algorithm, Λ-Newton-Bis, that combines Newton's method with a bisection strategy to ensure global convergence. The algorithm handles potential discontinuities and achieves local quadratic convergence under standard regularity assumptions. To address cases with multiple roots, we also propose an interval analysis approach. We then demonstrate the algorithm's computational efficiency and practical relevance within a portfolio optimisation framework. To this end, we develop two alternative solution methods that incorporate the Λ-Newton-Bis procedure. Numerical experiments confirm the algorithm's convergence properties and highlight its computational advantages in optimization tasks based on lambda quantiles.
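The Newton-plus-bisection safeguard described in the abstract can be sketched generically. A minimal bracketed root-finder, not the paper's Λ-Newton-Bis implementation:

```python
def newton_bisect(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection: a Newton step is accepted
    only if it stays inside the current bracket [lo, hi]; otherwise the
    bracket midpoint is used. Requires f(lo) and f(hi) of opposite sign."""
    flo = f(lo)
    if flo * f(hi) > 0:
        raise ValueError("root not bracketed")
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # shrink the bracket around the sign change
        if flo * fx < 0:
            hi = x
        else:
            lo, flo = x, fx
        d = df(x)
        x_newton = x - fx / d if d != 0.0 else None
        # fall back to bisection when Newton leaves the bracket or df = 0
        x = x_newton if (x_newton is not None and lo < x_newton < hi) else 0.5 * (lo + hi)
    return x
```

The bracket guarantees global convergence (each rejected Newton step still halves the interval), while accepted Newton steps deliver the fast local convergence, for example `newton_bisect(lambda x: x*x - 2, lambda x: 2*x, 0.0, 2.0)` recovering the square root of 2.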
q-fin.CP 2026-05-07

Interpolation choices drive caplet stripping instability

What Can Go Wrong During Caplet Stripping?

Flat-linear kernels and midpoint nodes cut oscillations and keep extracted volatilities positive and market-consistent.

We study exact and near-exact extraction of caplet volatilities from market cap quotes and identify why some common choices produce extreme oscillations or negative vols. Interpolation scheme and node placement are shown to be the primary drivers of instability, which can be amplified by isolated bad quotes. We propose practical, production-ready remedies: continuous flat-linear and C1 flat-smooth kernels that preserve bootstrap equivalence, midpoint node placement with a global solver, and positivity enforcement via an exponential reparametrization or Hyman non-negative C1 splines. We also introduce simple data-quality checks. Numerical experiments demonstrate substantially reduced oscillations, robust positive caplet curves, and negligible repricing error, delivering a fast and stable caplet stripping workflow suitable for real-world use.
q-fin.CP 2026-05-04 Recognition

Bachelier prices expand in moneyness via negative volatility powers

Analytic approximation for Bachelier option prices and applications

A Taylor series whose coefficients are negative powers of future mean volatility also cuts Monte Carlo variance in the correlated case.

It is well-known that, in the Bachelier model, when asset prices and volatilities are uncorrelated, the implied volatility coincides with the fair value of the volatility swap. In this paper, via classical Itô calculus and Taylor expansions, we write the price for out-of-the-money (OTM) and in-the-money (ITM) options as an expansion with respect to the moneyness, where the coefficients are related to the negative (non-integer) powers of the future mean volatility. As an application, we use it as a control variate to reduce the variance of Monte Carlo option prices in the correlated case.
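For reference, a standard form of the undiscounted Bachelier (normal-model) call price, a textbook formula rather than anything taken from the paper:

```python
from math import sqrt, exp, pi, erf

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bachelier_call(F, K, sigma_n, T):
    """Undiscounted Bachelier call on a forward F with strike K;
    sigma_n is the absolute (normal) volatility, so prices are linear
    in sigma_n at the money."""
    s = sigma_n * sqrt(T)
    d = (F - K) / s
    return (F - K) * norm_cdf(d) + s * norm_pdf(d)
```

At the money (F = K) this collapses to sigma_n * sqrt(T / (2 * pi)), which is the natural zeroth-order anchor for an expansion in the moneyness F - K of the kind the abstract describes.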
q-fin.CP 2026-05-04

BERT actor-critic beats stock benchmarks over 11 years

SBCA: Cross-Modal BERT-driven Actor-Critic for Multi-Asset Portfolio Optimization

By fusing price trends with news text and adding risk-turnover penalties, SBCA improves returns and limits losses versus simple strategies.

Portfolio optimization is constrained by linear assumptions and insufficient integration of multi-modal information in traditional models. This paper proposes a cross-modal BERT-driven Actor-Critic framework, SBCA, for multi-asset portfolio optimization to address the deficiencies of existing deep reinforcement learning (DRL) methods in fusing price data and financial text sentiment, as well as their lack of practical trading constraints. The framework adopts a cross-modal gated fusion mechanism to adaptively integrate price time-series features and text semantic features, embeds downside-risk and turnover-penalty constraints into the reward function, and constructs a complete empirical system for validation. Experiments on an 11-year U.S. stock multi-asset dataset show that SBCA outperforms equal-weight, buy-and-hold, and market benchmark strategies in portfolio value, annual return, Sharpe ratio, and maximum drawdown. Ablation studies verify the complementary enhancement of the Actor-Critic mechanism and cross-modal fusion module. Cost sensitivity analysis confirms the model's robustness under varying transaction costs. SBCA provides an effective and interpretable end-to-end solution for dynamic quantitative portfolio decision-making.
q-fin.CP 2026-05-04 2 theorems

Coupled neural nets price American options under Heston

American Options Pricing under Heston Model via Curriculum Learning in Coupled PINNs

They learn both the price and the moving exercise boundary while satisfying the stochastic-volatility PDE.

In American options, the early exercise feature allows the option to be exercised at any time prior to expiration. However, this flexibility introduces a challenge: the pricing model must value the option while simultaneously determining an unknown, time-varying exercise boundary. The Heston model is one of the most popular ways to model real market behavior because it allows volatility to change over time. However, unlike European options, there is no closed-form solution for American options under the Heston model, so we have to use numerical methods. In this paper, we propose a novel approach to solving the stochastic Heston partial differential equation for American options, using coupled physics-informed neural networks (PINNs) to predict both the option price and the free boundary, while employing curriculum learning and adaptive resampling to stabilize model training. Our work builds on recent deep learning methods but introduces a more effective training strategy to address the limitations of these approaches. The numerical results demonstrate the effectiveness of the proposed learning framework, providing a robust and efficient alternative to pricing American options, enabling rapid inference and accurate estimation under stochastic volatility.
q-fin.CP 2026-04-30

Fast-vollib accelerates implied volatility via PyTorch JAX and CUDA backends

Fast-Vollib: A Fast Implied Volatility Library for Python with PyTorch, JAX, and CUDA Fused-Kernel Backends

It supplies a compatible replacement for py_vollib with vectorized Halley and Jäckel solvers plus fused GPU kernels for Black models.

We present fast-vollib, an open-source Python library that provides high-performance European option pricing, implied volatility (IV) computation, and Greeks under the Black-76, Black-Scholes, and Black-Scholes-Merton models. The library is designed as a drop-in alternative to the de-facto-standard py_vollib and py_vollib_vectorized packages, with pluggable PyTorch and JAX execution backends, a CUDA fused-kernel Triton contribution for batched IV workloads, and a compatibility-first public API. In addition to a vectorized Halley-method IV solver, fast-vollib ships an experimental, fully-vectorized implementation of Jäckel's "Let's Be Rational" (LBR) algorithm with NumPy/Numba, torch.compile, JAX, and Triton single-pass GPU kernels for batched option chains. This note announces the library and describes its public API surface, with source, documentation, and packaging artifacts available at: GitHub (https://github.com/raeidsaqur/fast-vollib), Docs (https://raeidsaqur.github.io/fast-vollib/), PyPI (https://pypi.org/project/fast-vollib/).
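The Halley-method IV iteration named in the abstract can be illustrated with a scalar sketch. This is illustrative only, not fast-vollib's vectorized implementation: Halley's update uses both vega (the first derivative of price in volatility) and vomma (the second), giving faster-than-Newton local convergence:

```python
from math import log, sqrt, exp, pi, erf

def _pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def _cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * _cdf(d1) - K * exp(-r * T) * _cdf(d2)

def implied_vol_halley(price, S, K, T, r, sigma0=0.2, tol=1e-10):
    """Halley iteration on f(sigma) = BS(sigma) - price:
    sigma <- sigma - 2 f f' / (2 f'^2 - f f'')."""
    sigma = sigma0
    for _ in range(50):
        sqt = sqrt(T)
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqt)
        d2 = d1 - sigma * sqt
        f = bs_call(S, K, T, r, sigma) - price
        vega = S * _pdf(d1) * sqt          # f'(sigma)
        vomma = vega * d1 * d2 / sigma     # f''(sigma)
        step = 2.0 * f * vega / (2.0 * vega**2 - f * vomma)
        sigma -= step
        if abs(step) < tol:
            break
    return sigma
```

A vectorized version replaces the scalars with arrays over an option chain, which is where batched GPU kernels pay off.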
q-fin.CP 2026-04-29

Closed-form expansions compute VIX option implied vols directly

Implied Volatility Expansions for VIX Options in Forward Variance Models

Weak approximations deliver explicit formulas with corrections, enabling fast calibration without root-finding.

We develop closed-form expansions for the implied volatility of VIX options within the class of forward variance models. Our approach builds on weak-approximation techniques for VIX option prices and yields explicit implied volatility expansions with computable correction terms. The resulting formulas enable fast and accurate calibration without requiring numerical root-finding. We illustrate the performance of the proposed expansions in both standard and rough Bergomi-type models, as well as in mixed specifications, and demonstrate their accuracy through numerical experiments.
q-fin.CP 2026-04-28

Diverse preferences produce role specialization in simulated markets

Financial Market as a Self-Organized Ecosystem: Simulation via Learning with Heterogeneous Preferences

Agents learn distinct strategies through interaction, generating fat-tailed prices and volatility clustering.

Agent-based models provide a constructive approach to studying emergent dynamics in life-like systems composed of interacting, adaptive agents. Financial markets serve as a canonical example of such systems, where collective price dynamics arise from individual decision-making. In this modeling tradition, investor behavior has typically been captured by two distinct mechanisms -- learning and heterogeneous preferences -- which have been explored as separate paradigms in prior studies. However, the impact of their joint modeling on the resulting collective dynamics remains largely unexplored. We develop a multi-agent reinforcement learning framework in which agents endowed with heterogeneous risk aversion, time discounting, and information access learn trading strategies interactively within an artificial market. The experiment reveals that (i) learning under heterogeneous preferences drives agents to develop functionally differentiated strategies through interaction, rather than trait-specific rules, resulting in role specialization, and (ii) interactions among the differentiated agents are essential for the emergence of realistic market dynamics such as fat-tailed price fluctuations and volatility clustering. Overall, this study demonstrates that the joint design of heterogeneous preferences and learning mechanisms enables the synthesis of an artificial market in which adaptive interactions drive the self-organization of a market ecology, providing a computational realization of the Adaptive Market Hypothesis.
q-fin.CP 2026-04-23 2 theorems

Small-rho expansion adds leverage to barrier pricing in clock volatility models

Extrema, Barrier Options, and Semi-Analytic Leverage Corrections in Stochastic-Clock Volatility Models

Stochastic-clock models gain fast semi-analytic corrections for return-volatility correlation while keeping one-dimensional transforms for vanilla calibration.

Barrier derivatives depend on extrema and first-passage events and are therefore highly sensitive to volatility dynamics -- especially to the instantaneous return-volatility correlation $\rho$, often called "leverage". This sensitivity makes accurate and fast pricing under realistic stochastic-volatility specifications difficult: two-dimensional PDE solvers are expensive inside calibration loops, while Monte Carlo methods converge slowly when barrier hits are rare and discretely monitored. In equity markets in particular, the pronounced implied-volatility skew motivates factoring in a negative return-volatility correlation. We study a class of continuous-path stochastic-clock volatility models in which the log-price is represented as a Brownian motion run on a random increasing clock. In the baseline independent-clock case ($\rho=0$), a broad family of barrier-relevant objects (maximum distributions, survival probabilities, and killed joint laws) reduces to one-dimensional quantities determined by the Laplace transform of the terminal clock. This yields transform-only pricing formulas for single- and double-barrier contracts that are fast and numerically stable once the clock transform is available, notably for affine and quadratic clocks. To incorporate leverage without forfeiting tractability, we develop a systematic small-$\rho$ expansion around the $\rho=0$ backbone. The expansion produces a hierarchy of forced problems whose forcing terms are semi-analytic and computable from baseline barrier objects. We provide two implementable leverage-correction routes: forced PDEs and a Duhamel-type Monte Carlo representation, and we show how Padé acceleration can extend practical accuracy to equity-like correlations. Calibration then proceeds by: (i) fitting clock parameters from vanillas using only one-dimensional transforms, (ii) precomputing the $\rho=0$ barrier backbone once, and (iii) iterating on $\rho$ (and any remaining parameters) using the fast semi-analytic corrections (optionally Padé-accelerated) inside a standard least-squares loop.
q-fin.CP 2026-04-22

QR reparametrization diagonalizes conditional Fisher matrix for NSS curves

Orthogonal reparametrization of the Nelson-Siegel-Svensson interest rate curve model: conditioning, diagnostics, and identifiability

This isolates numerical instability from identifiability issues on the degenerate manifold and produces smoother parameter paths in U.S. Treasury data.

The Nelson-Siegel-Svensson (NSS) interest rate curve model yields a separable nonlinear least-squares problem whose inner linear block is often ill-conditioned because the basis functions become nearly collinear. We analyze this instability via an exact orthogonal reparametrization of the design matrix. A thin QR decomposition produces orthogonal linear parameters for which, conditional on the nonlinear parameters, the Fisher information matrix is diagonal. We also derive a finite-horizon analytical orthogonalization: on $[0,T]$, the $4\times 4$ continuous Gram matrix has closed-form entries involving exponentials, logarithms, and the exponential integral $E_1$, yielding an explicit horizon-dependent orthogonal NSS basis. Together with Jacobian-rank and profile-likelihood arguments, this representation clarifies the degenerate manifold $\lambda_1=\lambda_2$, where the Svensson extension loses two degrees of freedom. Orthogonalization leaves the least-squares fit and uncertainty of the original linear parameters unchanged, but isolates the conditioning structure. When the decay parameters are estimated jointly, the full first-order covariance in orthogonal coordinates admits an explicit Schur-complement form. The approach also yields a scalar identifiability diagnostic through the QR element $R_{44}$ and separates model reduction from numerical instability. Synthetic experiments confirm that orthogonal parametrization eliminates correlations among the linear parameters and keeps their conditional uncertainty uniform. A daily U.S. Treasury study on a reduced fixed 9-tenor grid from 1981 to 2026 shows smoother orthogonal parameter series than classical NSS parameters while the moving QR basis remains nearly constant.
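The thin-QR step can be demonstrated numerically: conditional on the decay parameters, the Gram matrix of the orthogonalized columns is the identity, so the conditional Fisher information of the new linear parameters is diagonal. A minimal sketch with an assumed tenor grid and decay values (not the paper's data):

```python
import numpy as np

def nss_basis(t, lam1, lam2):
    """Nelson-Siegel-Svensson design matrix (4 columns) on tenors t:
    level, slope, curvature(lam1), curvature(lam2)."""
    x1, x2 = lam1 * t, lam2 * t
    f1 = (1.0 - np.exp(-x1)) / x1
    f2 = f1 - np.exp(-x1)
    f3 = (1.0 - np.exp(-x2)) / x2 - np.exp(-x2)
    return np.column_stack([np.ones_like(t), f1, f2, f3])

t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 30.0])
X = nss_basis(t, 1.2, 0.3)

# Thin QR: Q has orthonormal columns, so conditional on (lam1, lam2)
# the Gram (and hence Fisher information) matrix is the identity.
Q, R = np.linalg.qr(X)
gram = Q.T @ Q

# As lam1 -> lam2 the two curvature columns become collinear and the
# diagonal element R[3, 3] shrinks toward zero -- the scalar
# identifiability diagnostic described in the abstract.
```

The least-squares fit itself is unchanged by the rotation, since Q and X span the same column space; only the parametrization and its conditioning improve.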
q-fin.CP 2026-04-17

Framework speeds financial literature reviews with AI and expert checks

LR-Robot: An Human-in-the-Loop LLM Framework for Systematic Literature Reviews with Applications in Financial Research

LR-Robot uses expert taxonomies and sample validation so LLMs can classify 12,666 papers while preserving accuracy, then maps trends and citation networks.

The exponential growth of financial research has rendered traditional systematic literature reviews (SLRs) increasingly impractical, as manual screening and narrative synthesis struggle to keep pace with the scale and complexity of modern scholarship. Existing artificial intelligence (AI) and natural language processing (NLP) approaches often produce outputs that are efficient but contextually limited, still requiring substantial expert oversight. To address these challenges, we propose LR-Robot, a novel framework in which domain experts define multidimensional classification taxonomies and prompt constraints that encode conceptual boundaries, large language models (LLMs) execute scalable classification across large corpora, and systematic human-in-the-loop evaluation ensures reliability before full-dataset deployment. The framework further leverages retrieval-augmented generation (RAG) to support downstream analyses including temporal evolution tracking and label-enhanced citation networks. We demonstrate the framework on a corpus of 12,666 option pricing articles spanning 50 years, designing a four-dimensional taxonomy and systematically evaluating up to eleven mainstream LLMs across classification tasks of varying complexity. The results reveal the current capabilities of AI in understanding and synthesizing literature, uncover emerging trends, reveal structural research patterns, and highlight core research directions. By accelerating labor-intensive review stages while preserving interpretive accuracy, LR-Robot provides a practical, customizable, and high-quality approach for AI-assisted SLRs.
q-fin.CP 2026-04-15

Factors emerge naturally from asset interaction networks

Emergence of Statistical Financial Factors by a Diffusion Process

Coupled return maps on a Laplacian-derived network produce stable co-movements that explain asset variance in an optimal regime.

Factor models characterize the joint behavior of large sets of financial assets through a smaller number of underlying drivers. We develop a network-based framework in which factors emerge naturally from the structure of interactions among assets rather than being imposed statistically. The market is modeled as a system of coupled iterated maps, where each asset's return depends on its own past returns and those of related assets, effectively modeling the influence of irrational traders whose decisions are based on the past movements of a collection of stocks. The interaction structure between stock returns is defined by a coupling matrix derived from an orthogonal transformation of a Laplacian matrix that gradually links initially isolated clusters into a fully connected network. Within this structure, stable patterns of co-movement arise and can be interpreted as financial factors. The relationship between the initial clustering and the number of observed factors is consistent with a center manifold reduction. We identify an optimal regime in which assets' variance is effectively explained by the set of factors produced by the network. Our framework offers a structural perspective based on interaction-based factor formation and dimension reduction in financial markets.
q-fin.CP 2026-04-10 Recognition

Hybrid workflows show clearest near-term path for quantum finance

Quantum Computing for Financial Transformation: A Review of Optimisation, Pricing, Risk, Machine Learning, and Post-Quantum Security

Review finds targeted gains in optimisation and pricing but cautions that blanket quantum superiority does not yet hold under realistic implementation constraints.

Quantum computing is becoming strategically relevant to finance because several core financial bottlenecks are already defined by combinatorial search, expectation estimation, rare-event analysis, representation learning, and long-horizon cryptographic resilience. This review examines that landscape across five connected domains: constrained portfolio optimisation, derivative pricing, tail-risk and scenario estimation, quantum machine learning, and post-quantum security. Rather than treating these topics as isolated demonstrations, the article studies them as linked layers of a financial-computation stack. Across all five domains, the review applies a common evaluative logic: identify the financial bottleneck, specify the relevant quantum primitive, compare it with an explicit classical benchmark, and assess the result under realistic implementation and governance constraints. The main conclusion is measured but consequential. The strongest near-term case for quantum finance lies in carefully designed hybrid workflows rather than blanket claims of universal advantage. Quantum optimisation is most credible when constrained search dominates; amplitude-estimation methods matter most when repeated expectation evaluation is the binding cost; quantum machine learning remains task dependent; and post-quantum cryptography is already strategically necessary because financial infrastructures must migrate before fault-tolerant attacks arrive. By combining system-level synthesis with locally reproducible small-scale case studies on simulated qubit registers, the article is intended both as a review of the field and as a handbook-style entry point for future work.
q-fin.CP 2026-04-08 2 theorems

Heston model matches market option prices better than Black-Scholes

Beyond Black-Scholes: A Computational Framework for Option Pricing Using Heston, GARCH, and Jump Diffusion Models

Monte Carlo comparisons on November 2024 data show stochastic volatility and jump extensions reduce pricing error for equity options.

This research addresses accurate option pricing by employing models beyond the traditional Black-Scholes framework. While Black-Scholes provides a closed-form solution, it is limited by assumptions of constant volatility, no dividends, and continuous price movements. To overcome these limitations, we use Monte Carlo simulation alongside the GARCH model, Heston stochastic volatility model, and Merton jump-diffusion model. The Black-Scholes-Monte Carlo method simulates diverse stock price paths using geometric Brownian motion. The GARCH model forecasts time-varying volatility from historical data. The Heston model incorporates stochastic volatility to capture volatility clustering and skew. The Merton jump-diffusion model adds sudden price jumps via a Poisson process. Results show the Heston model consistently produces estimates closer to market prices, while the Merton model performs well for volatile assets with sudden price movements. The GARCH model provides improved volatility forecasts for future option price prediction. All experiments used live market data from November 2024.
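The Black-Scholes-Monte Carlo baseline described above amounts to simulating terminal prices under geometric Brownian motion and averaging discounted payoffs. A minimal sketch (parameters are illustrative, not the paper's November 2024 market data), checked against the closed-form price:

```python
import numpy as np
from math import log, sqrt, exp, erf

def black_scholes_call(S0, K, T, r, sigma):
    """Closed-form Black-Scholes call, used here as the reference price."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S0 * cdf(d1) - K * exp(-r * T) * cdf(d2)

def mc_call_gbm(S0, K, T, r, sigma, n_paths=200_000, seed=0):
    """Monte Carlo European call under geometric Brownian motion.
    For a European payoff the exact terminal distribution is sampled
    directly, so no time-stepping is needed."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
    return exp(-r * T) * float(np.mean(np.maximum(ST - K, 0.0)))

mc_price = mc_call_gbm(100.0, 100.0, 1.0, 0.05, 0.2)
ref_price = black_scholes_call(100.0, 100.0, 1.0, 0.05, 0.2)
```

The Heston, GARCH, and Merton variants differ only in how the terminal price (or path) is simulated; the payoff-averaging step is identical, which is what makes Monte Carlo a convenient common framework for comparing the models.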
q-fin.CP 2026-04-06 Recognition

Two LLMs profit on prediction markets while five lose

PolyBench: Benchmarking LLM Forecasting and Trading Capabilities on Live Prediction Market Data

Benchmark from 38,666 live markets shows most models fail to convert high confidence into positive returns under realistic trading costs.

Predicting real-world events from live market signals demands systems that fuse qualitative news with quantitative order-book dynamics under strict temporal discipline -- a challenge existing benchmarks fail to capture. We present PolyBench, a multimodal benchmark derived from Polymarket that records point-in-time cross-sections of 38,666 binary prediction markets spanning 4,997 events, synchronously coupling each snapshot with a Central Limit Order Book (CLOB) state and a real-time news stream. Using PolyBench, we evaluate seven state-of-the-art Large Language Models -- spanning open- and closed-source families -- generating 36,165 predictions under identical, timestamp-locked market states collected between February 6 and 12, 2026. Our multidimensional framework assesses directional accuracy, our proposed Confidence-Weighted Return (CWR), Annualized Percentage Yield (APY), and Sharpe ratio via realistic order-book execution simulation. The results reveal a pronounced performance divergence: only two of seven models achieve positive financial returns -- MiMo-V2-Flash at 17.6% CWR and Gemini-3-Flash at 6.2% CWR -- while the remaining five incur losses despite uniformly high stated confidence. These findings highlight the gap between surface-level language fluency and genuine probabilistic reasoning under live market uncertainty, and establish PolyBench as a contamination-proof, financially-grounded evaluation standard for future LLM research. Our dataset and code are available at https://github.com/PolyBench/PolyBench.
q-fin.CP 2026-04-02 Recognition

Policy gradient scheme prices options under volatility uncertainty

Stochastic Policy Gradient Methods in the Uncertain Volatility Model

Backward actor-critic method with C-vine policies handles high-dimensional robust pricing efficiently and matches benchmark accuracy.

The multidimensional Uncertain Volatility Model leads to robust option pricing problems under joint volatility and correlation uncertainty. Their numerical resolution quickly becomes challenging because the associated stochastic control problem is high-dimensional. We propose a backward actor-critic stochastic policy gradient scheme tailored to this setting. The method combines a discrete dynamic programming principle with Proximal Policy Optimization and shallow neural-network approximations of both the value function and the control policy. A key ingredient is the policy parameterization: continuous controls are represented through a squashed Gaussian policy built on a C-vine representation of correlation matrices, which enforces positive semidefiniteness by construction. Numerical experiments on a range of multidimensional derivatives show that the method yields accurate prices, remains computationally efficient, and compares favorably with existing Monte Carlo and machine-learning-based benchmarks for robust pricing in the Uncertain Volatility Model.
q-fin.CP 2026-04-02 2 theorems

Deep signatures price annuities under rough Heston and Volterra

Valuation of variable annuities under the Volterra mortality and rough Heston models

Monte Carlo method with truncated path signatures learns optimal surrender when equity and mortality are non-Markovian.

This paper investigates the valuation of variable annuity contracts with an early surrender option under non-Markovian models. Moreover, policyholders are provided with guaranteed minimum maturity and death benefits to protect against the downside risk. Unlike the existing literature, our variable annuity account value is linked to two non-Markovian processes: an equity index modeled by a rough Heston model and a force of mortality following a Volterra-type stochastic model. In this case, the early surrender feature introduces an optimal stopping problem where continuation values depend on the entire path history, rendering traditional numerical methods infeasible. We develop a deep signature Least Squares Monte Carlo approach to learn optimal surrender strategies on a discretized time grid. To mitigate the curse of dimensionality arising from the path-dependent model, we use truncated rough-path signatures to encode the historical paths and approximate the continuation values using a neural network. Numerically, we find that the fair fee increases with the Hurst parameters of both the stock volatility and the force of mortality. Finally, a convergence proof is provided to further support the stability of our method.
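Truncated path signatures of the kind used here are straightforward to compute for piecewise-linear (sampled) paths. A depth-2 sketch, not the paper's implementation; the example path is arbitrary:

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 truncated signature of a piecewise-linear path.

    path: array of shape (n_points, d).
    Returns (level1, level2): level1 has shape (d,) and collects total
    increments; level2 has shape (d, d) and collects the iterated
    integrals int_{s<t} dX^i_s dX^j_t."""
    dx = np.diff(path, axis=0)            # segment increments
    level1 = dx.sum(axis=0)
    before = np.cumsum(dx, axis=0) - dx   # increments strictly before step k
    # cross terms from earlier segments plus the within-segment 1/2 term
    level2 = before.T @ dx + 0.5 * (dx.T @ dx)
    return level1, level2

# Example path in R^2 (values are arbitrary).
path = np.array([[0.0, 0.0], [1.0, 0.5], [0.3, 2.0], [2.0, 1.0]])
lvl1, lvl2 = signature_depth2(path)
```

A useful sanity check is the shuffle identity: the symmetric part of level 2 satisfies S2 + S2ᵀ = S1 ⊗ S1 for any path, while the antisymmetric part (the Lévy area) carries the genuinely path-dependent information that Markovian features miss.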
