Risk governance is not only about identifying and measuring adverse states of the world. It also asks when an institution is entitled to rely on a risk claim. This paper introduces modal epistemic tools for that second layer of QRM. For a risk proposition $p$, $Kp$ denotes assurance-grade endorsement for certification, audit reliance, board sign-off, or regulatory reporting. By contrast, $Bp$ denotes working commitment: a disciplined action-guiding stance under incomplete assurance.
The framework distinguishes object-level risk claims from stances toward them. It develops crisp and fuzzy modal semantics for assurance, working commitment, live possibility, non-exclusion, hesitation, and epistemic inconsistency. The central diagnostics are \[ p\wedge\neg Kp \qquad\text{and}\qquad p\wedge\neg Bp, \] which identify cases in which a risk is present but lacks the relevant stance. Thus QRM should model not only hazards and losses, but also evidential incompleteness, model risk, validation gaps, and failures of escalation.
Two governance principles motivate the analysis. The Risk Management Principle says that if $p$ is a risk, then the absence of the relevant stance, $p\wedge\neg Mp$, is itself risk-relevant. The Risk Reach Principle says that real and decision-relevant risks should be reachable by the appropriate stance. Their unrestricted combination creates Moorean and Fitch-style collapse pressure: treating $p\wedge\neg Kp$ or $p\wedge\neg Bp$ as ordinary targets of the same stance whose absence they record undermines the diagnostic.
The response is architectural. Object-level risk claims should be separated from meta-level epistemic diagnostics. The latter should be governed through an audit layer that records and controls epistemic gaps. This preserves action and precaution without collapsing risk governance into institutional omniscience.
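As a minimal illustration of the two diagnostics, a crisp (non-fuzzy) encoding can be sketched in a few lines. The record fields and function names below are illustrative stand-ins, not the paper's formalism.

```python
# Crisp sketch of the stance diagnostics p & ~Kp and p & ~Bp.
# The fields "holds", "assured", "committed" are hypothetical stand-ins
# for an object-level risk claim and the two stances toward it.
def K(claim):
    """Assurance-grade endorsement (certification, audit reliance)."""
    return claim["assured"]

def B(claim):
    """Working commitment: action-guiding stance under incomplete assurance."""
    return claim["committed"]

def gap_diagnostics(claim):
    p = claim["holds"]
    return {
        "assurance_gap": p and not K(claim),    # p & ~Kp
        "commitment_gap": p and not B(claim),   # p & ~Bp
    }

# A risk that is real and acted upon, but not yet assurance-grade:
claim = {"holds": True, "assured": False, "committed": True}
print(gap_diagnostics(claim))  # flags the assurance gap only
```

In the paper's architecture these flags would live in the audit layer, recorded as meta-level diagnostics rather than fed back as ordinary targets of $K$ or $B$.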
Unification via innovation extraction shows that the standard, filtered, and displaced variants each assume a chosen return form.
Historical Simulation (HS) and its extensions form a popular class of methods for estimating Value-at-Risk for portfolios of financial assets based on historical data. In this note, we seek to unify several ideas and models from throughout the literature into a single modeling framework. By explicitly defining a parametric model form for the asset returns and extracting the realized increments of the driving innovation process from historical data, we are able to reproduce the Historical Simulation, filtered Historical Simulation, and displaced Historical Simulation methods. This shows that these methods require more underlying assumptions than is often acknowledged.
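The innovation-extraction idea can be sketched under an assumed scale model $r_t = \sigma_t z_t$: extract the realized innovations $z_t$, then resimulate at today's volatility. An EWMA filter stands in here for whatever parametric form is chosen; this illustrates the mechanism, not the note's exact specification.

```python
import numpy as np

# Hedged sketch: filtered Historical Simulation via innovation extraction.
# Plain HS is the special case sigma_t = const.
rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=5, size=1000)  # synthetic daily returns

lam = 0.94                                # RiskMetrics-style smoothing constant
var = np.empty_like(returns)
var[0] = returns.var()
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
sigma = np.sqrt(var)

innovations = returns / sigma             # realized increments of the driver
sigma_next = np.sqrt(lam * var[-1] + (1 - lam) * returns[-1] ** 2)
scenarios = sigma_next * innovations      # filtered-HS scenario set
var_99 = -np.quantile(scenarios, 0.01)    # one-day 99% VaR (loss is positive)
print(round(var_99, 4))
```

Swapping the EWMA recursion for a fitted GARCH, or shifting the returns before filtering, recovers the filtered and displaced variants as different choices of the assumed return form.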
In this research, starting from a widely accepted definition of risk, we support the idea that risk reduction is a more realistic objective than risk minimization, which represents a theoretical utopia. Furthermore, significant risk reduction can be achieved without relying on risk measurement and risk minimization. To this end, we propose a generalization of the numerical rank and the condition number of a matrix, specifically the return matrix in this application. This generalization considers the entire matrix spectrum instead of focusing only on the smallest eigenvalue, as the condition number does. The approach directly provides an order among a finite number of risky scenarios. Risk reduction is obtained by identifying the riskiest scenarios and reducing the investment exposures corresponding to them. The validity of this theoretical proposal is supported by a comprehensive experiment performed on real data. The capacity of the proposed approach to reduce risk effectively is demonstrated by comparing the variability of out-of-sample returns for benchmark portfolios (constructed by minimizing standard risk measures) against that of the strategy of reducing exposure in high-risk scenarios. Finally, preventing large losses with limited active management, which controls the impact of transaction costs, not only reduces risk but also preserves the average return and, consequently, the portfolio's Sharpe ratio.
This paper investigates two optimal insurance contracting problems under distributional uncertainty from the perspective of a potential policyholder, utilizing a Bregman-Wasserstein (BW) ball to characterize the ambiguity set of loss distributions. Unlike the $p$-Wasserstein distance, BW divergence enables asymmetric penalization of deviations from the benchmark distribution. The first problem examines an insurance demand model where the policyholder adopts an $\alpha$-maxmin preference with Value-at-Risk (VaR). We derive the optimal indemnity function in closed form and study, both analytically and numerically, how the asymmetry inherent in BW divergence influences the optimal indemnity structure. The second problem employs a robust optimization framework, where the policyholder aims to secure robust insurance indemnity by minimizing the worst-case convex distortion risk measure while adhering to a guaranteed VaR constraint. In this context, we provide explicit characterizations of both the optimal indemnity and the worst-case distribution in closed form through a combined approach using the Lagrangian method and modification arguments. To illustrate the practical implications of our theoretical findings, we include a concrete example based on Tail Value-at-Risk (TVaR).
Connectedness measures quantify aggregate risk spillovers but obscure the local interaction patterns that generate systemic risk. We develop a motif-based framework that first extracts multiscale backbones from quantile connectedness networks and then identifies directed triadic motifs whose frequencies exceed randomization baselines. To distinguish how assets' sectoral identities shape local spillover structures, we introduce colored motifs under sector partitions of increasing granularity. Using orbit positions that capture each node's structural role within directed triadic motifs, we construct portfolio strategies that exploit an asset's place in the spillover architecture. Applying the framework to 39 commodity and equity futures across lower, median, and upper conditional quantiles, we find that motif-based portfolios outperform minimum correlation and minimum connectedness benchmarks on risk-adjusted returns. We further show that in tail networks, assets with greater orbit-position diversity tend to act as net spillover transmitters rather than receivers, establishing positional diversity as a tail-specific marker of systemic influence. These findings demonstrate that local triadic topology carries portfolio-relevant information that aggregate connectedness measures miss.
We test a regime-conditional functional-form restriction on aggregate risk-exposure dynamics implied by VaR-constrained intermediary models: exposures contract multiplicatively when capital constraints bind and grow additively (level-independent) when slack. The contraction half follows from binding VaR constraints (Brunnermeier and Pedersen 2009; Adrian and Shin 2010; He and Krishnamurthy 2013). The additive-rebuild prediction is derived under constant-rate capital replenishment; we test the joint restriction on FINRA monthly margin debt (1997-2026).
Two findings. First, regime-interacted regression of detrended margin growth on lagged level (T=350 months) yields calm slope -0.040 (p=0.082, additive) and stress slope -0.205 (p<0.001, multiplicative); Wald test on regime x level interaction rejects equal dependence (p=0.0016). Second, the restriction implies drawdown-recovery duration ratio increases with crash depth. On 73 S&P 500 episodes (1950-2026), Cox model gives depth coefficient -13.75 (p<10^{-7}): 75% lower recovery hazard per 10pp deeper drawdown. Continuous-depth regression yields beta=1.22 (p=0.047); beta=1.59 (p<0.001) excluding 1980-82 Volcker. Median duration ratio for crashes >30% is 3.1x; replicates across eight other equity indices. Calibrated Heston, Markov-switching, and block bootstrap nulls match price-level duration asymmetry but lack an exposure state variable, so cannot speak to the regime-conditional flip on direct exposures.
We do not claim the exposure test identifies the intermediary mechanism: FINRA margin debt is a noisy proxy. We claim only that the regime-conditional functional form is a sharper target than return-level moments alone, and that confirming it on margin debt is consistent with, but not proof of, the constrained-intermediary mechanism. A companion test on CFTC weekly speculative positioning is left for future work (Sections 5.2 and F).
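The regime-interacted level regression can be sketched on synthetic data (the paper uses detrended FINRA margin-debt growth; everything below is simulated for illustration, with slopes chosen near the reported ones):

```python
import numpy as np

# Hedged sketch of the regime-conditional functional-form test:
# growth_t = a + b_calm * level_t + (b_stress - b_calm) * stress_t * level_t + eps_t
# A near-zero calm slope is the additive regime; a negative stress slope is
# the multiplicative (level-dependent contraction) regime.
rng = np.random.default_rng(1)
T = 350
stress = (rng.random(T) < 0.2).astype(float)   # regime indicator
level = rng.normal(size=T)                     # lagged detrended level
growth = -0.04 * level - 0.165 * stress * level + rng.normal(scale=0.3, size=T)

X = np.column_stack([np.ones(T), level, stress * level])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
calm_slope, interaction = beta[1], beta[2]
print(round(calm_slope, 3), round(calm_slope + interaction, 3))
```

A Wald test on the interaction coefficient would then formalize the regime-conditional flip in level dependence that the abstract reports.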
Predicting future operational risk losses poses a significant challenge due to the heterogeneous and time-dependent structures present in real-world data. Furthermore, stress test exercises require examining the relationship between macroeconomic conditions and operational losses. To capture such a relationship, we propose an extension of Hidden Markov Models to multivariate observations. This model introduces a third auxiliary variable designed to accommodate the economic covariates in the time-series data. We detail the unique aspects of operational risk data and describe how model calibration is achieved via the Expectation-Maximization (EM) algorithm. Additionally, we provide calibration results for the various risk-event types and analyze the relevance of including the macroeconomic covariates.
We study OTC bond market making on a size ladder with quadratic inventory penalty and a running target on the dealer's size-weighted hit ratio within a stochastic optimal control approach. We demonstrate that the corresponding reduced Hamilton-Jacobi-Bellman (HJB) equation remains separable by dualizing the hit ratio target term and provides the exact optimal controls through the inverse of the fill-probability function and the Hamiltonian derivative. We then focus on the quadratic approximation à la Bergault et al., which yields a Riccati equation for the inventory curvature while retaining the exact quote map. In its linearized form, this approximation produces explicit quote decompositions into riskless spread, inventory-risk correction, and hit-ratio correction. The formulation is general and applies to multi-bond, multi-client-tier scenarios, with special cases obtained by restricting the targeted tiers, their bond coverage, and their associated targets.
Can contagion be inferred from aggregated default data? We study this as a problem of identifiability, asking whether contagion generates components in default count distributions that remain distinct from those induced by macroeconomic fluctuations. We compare three dependence structures: cumulative contagion in the Lo-Davis model, threshold-type contagion in the Torri model, and common-factor dependence in the Vasicek model. Under an i.i.d. specification, the Vasicek model provides the best overall fit, especially in the tail, indicating that a smooth mixture structure captures annual default clustering more effectively than threshold-type contagion at the aggregate level. We then allow the default probability to vary across years through a hierarchical specification. Under this extension, most of the variation in annual default counts is explained by cross-year movements in default conditions rather than by within-year contagion. What remains, however, depends on the interaction mechanism. In the Torri model, threshold-type contagion does not leave a stable component that can be separated from macroeconomic heterogeneity after aggregation. In the Lo-Davis model, by contrast, a small but persistent component remains visible in both the variance decomposition and the tail behavior. These results clarify when contagion can still be inferred from coarse-grained data and when it is effectively absorbed into macroeconomic variation.
We derive five tractable credit risk metrics for DeFi lending vault depositors, grounded in a formal three-level decomposition of vault risk into mechanical loss channels (Level 1), governance quality (Level 2) and smart contract code integrity (Level 3). For Level 1, we show that six structural features of onchain execution (oracle execution divergence, endogenous recovery, full-information run dynamics, timelock-constrained governance, oracle manipulation and congestion-driven liquidation failure) break canonical TradFi analogies and generate depositor loss channels absent from standard credit frameworks. Vault credit risk metrics translate these channels into measurable risk components which are aggregated into a vault credit score. The empirical contribution is an implementable estimation architecture for credit risk metrics, including required onchain data, identification strategies for core parameters, partial identification bounds and a coherent stress scenario methodology. The results have direct implications for vault risk management and for minimum transparency standards necessary for depositor risk assessment.
This study examines the disposition effect in both long and short exposure positions in FTSE MIB tracking ETFs using a unique dataset of almost 9 million individual transactions. Building on the integrated framing approach, we extend the analysis to explicitly incorporate leverage and long-short exposures, allowing us to assess how portfolio context and systematic risk exposure are jointly associated with investors' realization behavior. Methodologically, we generalize Odean's canonical Count and Total measures to wide and integrated framing, introduce a novel Value metric that captures the return thresholds required to realize gains versus losses, and implement these measures in dispositionEffect, an open-source R package for large-scale intraday data. We show that short positions exhibit a weaker disposition effect than long positions under narrow framing, but that this asymmetry reverses in positively performing portfolios under integrated framing. Systematic risk further amplifies these behavioral asymmetries across positions. Overall, our findings demonstrate that the disposition effect is not solely asset-specific, but is critically shaped by the interaction between portfolio context, position type, and systematic risk exposure. More broadly, the results are consistent with the joint predictions of Prospect Theory and Regret Theory, highlighting the central role of framing in investor decision-making.
This paper studies optimal insurance design under asymmetric information in a Stackelberg framework, where a monopolistic insurer faces uncertainty about both the insured's risk attitude, captured by a risk-aversion parameter, and the insured's risk type, characterized by the loss distribution. In particular, when the risk type is unobservable, we allow the risk-aversion parameter to depend on the risk type. We construct a menu of contracts that maximizes the mean-variance utilities of both parties under the expected-value premium principle, subject to a truth-telling constraint that ensures the truthful revelation of private information. We show that when risk attitude is private information, the optimal coverage takes the form of excess-of-loss insurance with linear pricing in terms of the risk loading (defined as the premium minus the expected loss), designed to screen risk preferences. In contrast, when risk type is unobserved, we restrict the coverage function to an excess-of-loss form and derive an ordinary differential equation that characterizes the optimal risk loading. Under mild conditions, we establish the existence and uniqueness of the solution. The results show that equilibrium contracts exhibit nonlinear pricing with decreasing risk loadings, implying that higher-risk individuals face lower risk loadings in order to induce self-selection. Finally, numerical illustrations demonstrate how parameter values and the distributions of unobserved heterogeneity affect the structure of optimal contracts and the resulting pricing schedule.
The Kolmogorov-Smirnov (KS) statistic is widely used in credit risk model monitoring and validation to assess discriminatory power. In practice, a material decline in KS often triggers governance review and requires validation teams to identify the breach source and the potential business risk. However, such diagnosis is frequently conducted on an ad hoc basis, relying on the judgment of individual validators rather than a standardized analytical framework. This paper proposes a counterfactual diagnostic framework for explaining KS deterioration in credit risk model validation. The framework sequentially attributes observed KS decline to sampling variability, portfolio composition change, covariate shift, and residual deterioration consistent with model drift, with explicit gateway conditions governing escalation at each stage. Simulation experiments demonstrate that the proposed approach provides more interpretable and governance-relevant explanations than threshold-based review alone, and contributes to more consistent, transparent, and defensible performance-breach assessment in credit risk model validation.
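The monitored quantity itself is simple: the KS discriminatory-power statistic is the maximum gap between the score CDFs of non-defaulters and defaulters. The data below are illustrative; a material drop in this number is what would enter the diagnostic cascade described above.

```python
import numpy as np

# Hedged sketch of the KS discriminatory-power statistic for a credit score.
def ks_stat(scores_good, scores_bad):
    grid = np.sort(np.concatenate([scores_good, scores_bad]))
    cdf_g = np.searchsorted(np.sort(scores_good), grid, side="right") / len(scores_good)
    cdf_b = np.searchsorted(np.sort(scores_bad), grid, side="right") / len(scores_bad)
    return float(np.max(np.abs(cdf_g - cdf_b)))

rng = np.random.default_rng(0)
good = rng.normal(1.0, 1.0, 5000)   # scores of non-defaulters
bad = rng.normal(0.0, 1.0, 1000)    # scores of defaulters
# Population value for unit normals one sigma apart: Phi(0.5) - Phi(-0.5) ~ 0.383.
print(round(ks_stat(good, bad), 3))
```

The attribution question the paper addresses is then which of sampling variability, composition change, covariate shift, or model drift explains a decline in this statistic between two monitoring windows.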
The extension yields closed-form worst-case bounds under uncertainty and shows convexity requires constant Lambda.
This paper introduces the Lambda extension of the R\'{e}nyi entropic value-at-risk ($\Lambda$-EVaR), a novel family of risk measures that unifies the flexible confidence level structure of the $\Lambda$-framework with the higher-moment sensitivity of EVaR. We define $\Lambda$-EVaR, establish its foundational properties including monotonicity, cash subadditivity, and quasi-convexity, and provide a complete axiomatic characterization showing that convexity, concavity in mixtures and cash additivity hold only when $\Lambda$ is constant. A dual representation and an extended Rockafellar-Uryasev-type formula are derived, enabling efficient computation. We further analyze the worst-case behavior of $\Lambda$-EVaR under Wasserstein and mean-variance uncertainty, obtaining closed-form expressions that reveal its robustness properties. The proposed measure bridges the gap between adaptive risk tolerance and moment-sensitive risk assessment, offering a versatile tool for modern risk management.
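For orientation, the constant-level baseline that the $\Lambda$ extension generalizes is already computable from samples via the Rockafellar-Uryasev-type representation $\mathrm{EVaR}_\alpha(X) = \inf_{z>0} z^{-1}\ln\!\big(\mathbb{E}[e^{zX}]/\alpha\big)$. The snippet below implements only that baseline over a $z$-grid; replacing the fixed level $\alpha$ with a function of the outcome, as the paper does, is not implemented here.

```python
import numpy as np

# Hedged sketch: sample-based EVaR at a fixed confidence level via the
# Rockafellar-Uryasev-type formula, minimized over a z-grid.
def evar(losses, alpha=0.05, z_grid=np.logspace(-3, 1, 400)):
    losses = np.asarray(losses, dtype=float)
    vals = [np.log(np.mean(np.exp(z * losses)) / alpha) / z for z in z_grid]
    return float(np.min(vals))

rng = np.random.default_rng(0)
losses = rng.normal(size=50_000)
# For a standard normal loss, EVaR_a = sqrt(-2 ln a) ~ 2.448 at a = 0.05,
# sitting above both VaR (~1.645) and CVaR (~2.063) at the same level.
print(round(evar(losses), 3))
```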
This paper develops a decomposition of standard Risk Contribution (RC) into two economically interpretable components: inherent risk and correlation risk. Using a leave-one-out representation, each position's RC separates into a term reflecting its own volatility contribution independent of the portfolio and a term capturing its covariance with the remainder of the portfolio. The inherent component is always positive, arising from the intrinsic volatility of the position, while the correlation component may amplify or mitigate total portfolio risk depending on how the position moves relative to other holdings. Because the decomposition operates within standard RC, it preserves the property of strict additivity. This separation provides diagnostic insight not visible from aggregate risk contributions alone. It distinguishes whether a position contributes risk because it is volatile in isolation or because it is highly correlated with the rest of the portfolio, and it clarifies when a negatively correlated position functions as an effective hedge. Two approaches to time-series analysis are presented to track how inherent and correlation risk evolve across market regimes, revealing whether changes in portfolio risk during stress periods are driven by volatility shocks, correlation shifts, or both. Empirical illustrations suggest that the decomposition provides stable, transparent, and easily implementable risk diagnostics that can support portfolio risk reporting, stress testing, and performance attribution.
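The leave-one-out split can be sketched on an illustrative 3-asset covariance matrix: $RC_i = w_i(\Sigma w)_i/\sigma_p$ separates into an always-positive inherent term $w_i^2\,\mathrm{Var}_i/\sigma_p$ and a correlation term $w_i\,\mathrm{Cov}(X_i, \sum_{j\neq i} w_j X_j)/\sigma_p$ that can be negative for an effective hedge. The numbers below are hypothetical.

```python
import numpy as np

# Hedged sketch of the inherent / correlation split of risk contributions.
w = np.array([0.5, 0.3, 0.2])
Sigma = np.array([[0.04, 0.01, -0.01],
                  [0.01, 0.09,  0.02],
                  [-0.01, 0.02, 0.16]])
sigma_p = float(np.sqrt(w @ Sigma @ w))

rc = w * (Sigma @ w) / sigma_p                 # standard risk contributions
inherent = w**2 * np.diag(Sigma) / sigma_p     # own-volatility component (> 0)
correlation = rc - inherent                    # covariance-with-rest component
print(rc, inherent, correlation)
```

Because the split operates inside standard RC, strict additivity is preserved: the contributions sum exactly to portfolio volatility.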
A credit rating of AAA asserts near-certainty of repayment. This paper asks whether the pre-crisis information environment could have supported that assertion for structured products. Bayes' theorem implies that any reliability target requires a minimum level of statistical discrimination between instruments that will repay and those that will not. At structured-finance base rates, a four-nines reliability target demands discrimination on the order of 10,000 to 1. A three-nines target demands 1,000 to 1. Nothing in the published credit-prediction literature provides an affirmative basis for believing that discrimination of this magnitude was achievable with the data available at rating time. Retrospectively, the realized system fell short of the four-nines benchmark by roughly 90,000-fold. The framework accommodates the historical feasibility of corporate AAA ratings, where high base rates and rich information produce low required discrimination. Illustrative calibrations for contemporary collateralized loan obligations suggest that material tension between the precision target and the information environment persists. The central implication is that the AAA precision claim itself likely exceeded what the available information could support.
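The Bayes-rule arithmetic behind the required discrimination is short: to certify $P(\text{repay}\mid\text{evidence}) \ge r$ at base default rate $\pi$, the evidence must carry a likelihood ratio of at least $[r/(1-r)]\cdot[\pi/(1-\pi)]$. The base rates below are illustrative assumptions chosen to reproduce the orders of magnitude in the abstract, not the paper's exact figures.

```python
# Hedged sketch of the required-discrimination calculation.
def required_lr(target_reliability, base_default_rate):
    r, pi = target_reliability, base_default_rate
    return (r / (1 - r)) * (pi / (1 - pi))

four_nines = required_lr(0.9999, 0.5)   # on the order of 10,000 : 1
three_nines = required_lr(0.999, 0.5)   # on the order of 1,000 : 1
corporate = required_lr(0.9999, 0.001)  # high base repayment rate: ~10 : 1
print(round(four_nines), round(three_nines), round(corporate, 1))
```

The corporate case shows why historical corporate AAA ratings remain feasible under the same framework: a high base repayment rate collapses the required likelihood ratio by orders of magnitude.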
Daily ETF risk monitoring can become unreliable when market data quality degrades, market conditions shift, or predictive performance becomes unstable. This paper develops a reliability-aware risk monitoring service for next-day tail-risk surveillance. The proposed framework combines service-time quality checks, lower-tail prediction, uncertainty scoring, and risk-aware adjustment of the tail-risk estimate. We evaluate the system on a daily panel of multiple ETFs augmented with VIX and yield-curve information under a rolling walk-forward design. Empirically, the framework improves tail-risk monitoring, especially during stressed periods, while remaining reliable under simulated input degradation.
MRP shows that high long-term Sharpe ratios do not guarantee resilience when market relationships weaken or alpha compresses.
Systematic investment strategies are exposed to a subtle but pervasive vulnerability: the progressive erosion of their effectiveness as market regimes change. Traditional risk measures, designed to capture volatility or drawdowns, overlook this form of structural fragility. This article introduces a quantitative framework for assessing the durability of systematic strategies through minimum regime performance (MRP), defined as the lowest realized risk-adjusted return across distinct historical regimes. MRP serves as a lower bound on a strategy's robustness, capturing how performance deteriorates when underlying relationships weaken or competitive pressures compress alpha. Applied to a broad universe of established factor strategies, the measure reveals a consistent trade-off between efficiency and resilience: strategies with higher long-term Sharpe ratios do not always exhibit higher MRPs. By translating the persistence of investment efficacy into a measurable quantity, the framework provides investors with a practical diagnostic for identifying and managing strategy-decay risk, a novel dimension of portfolio fragility that complements traditional measures of market and liquidity risk.
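The MRP computation itself reduces to a minimum over per-regime Sharpe ratios. The regime labels and return series below are synthetic illustrations, not the article's data.

```python
import numpy as np

# Hedged sketch of minimum regime performance: the lowest annualized Sharpe
# ratio a strategy realizes across labeled historical regimes.
def mrp(returns, regimes, periods_per_year=252):
    sharpes = [np.sqrt(periods_per_year) * returns[regimes == g].mean()
               / returns[regimes == g].std()
               for g in np.unique(regimes)]
    return min(sharpes)

rng = np.random.default_rng(0)
regimes = np.repeat(["expansion", "tightening", "crisis"], 500)
returns = np.concatenate([
    rng.normal(0.0008, 0.010, 500),   # strong in expansions
    rng.normal(0.0002, 0.010, 500),   # mediocre when policy tightens
    rng.normal(-0.0020, 0.020, 500),  # loses in the crisis regime
])
print(round(mrp(returns, regimes), 2))
```

A strategy with a high full-sample Sharpe ratio but a deeply negative worst-regime Sharpe would score well on efficiency and poorly on MRP, which is exactly the trade-off the article documents.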
The claimed condition for reduction is self-contradictory and no TWM achieves uniform lowering.
Chitra et al. (2025) claim that the Target Weight Mechanism (TWM) in Perpetual Demand Lending Pools (PDLPs) can lower the delta of the portfolio under a certain condition. We prove that their condition is self-contradictory. Furthermore, we prove an impossibility result: no TWM can lower the delta uniformly.
The classical tail dependence coefficient (TDC) may fail to capture non-exchangeable features of tail dependence due to its restrictive focus on the diagonal of the underlying copula. To address this limitation, the framework of path-based maximal tail dependence has been proposed, where a path of maximal dependence is derived to capture the most pronounced feature of dependence over all possible paths, and the path-based maximal TDC serves as a natural analogue of the classical TDC along this path. However, the theoretical foundations of path-based tail analyses, in particular the existence and analytical tractability, have remained limited. This paper addresses this issue in several ways. First, we prove the existence of a path of maximal dependence and the path-based maximal TDC when the underlying copula admits a non-degenerate tail copula. Second, we obtain an explicit characterization of the maximal TDC in terms of the tail copula. Third, we show that the first-order asymptotics of a path of maximal dependence is characterized by a one-dimensional optimization involving the tail copula. These results improve the analytical and computational tractability of path-based tail analyses. As an application, we derive the asymptotic behavior of a path of maximal dependence for the bivariate t-copula and the survival Marshall-Olkin copula.
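The classical (diagonal) object that the path-based framework generalizes is the limit of $C(u,u)/u$ as $u\to 0^+$. It can be estimated empirically at a small threshold, shown here on a simulated Clayton copula with $\theta = 2$, whose true lower TDC is $2^{-1/\theta} \approx 0.707$; everything below is illustrative and does not implement the paper's path optimization.

```python
import numpy as np

# Hedged sketch: empirical diagonal lower TDC, C_n(u, u) / u at small u.
def lower_tdc(x, y, u=0.01):
    n = len(x)
    rx = np.argsort(np.argsort(x)) / n   # pseudo-observations (ranks in [0,1))
    ry = np.argsort(np.argsort(y)) / n
    return float(np.mean((rx <= u) & (ry <= u)) / u)

theta, n = 2.0, 100_000
rng = np.random.default_rng(0)
w = rng.gamma(1 / theta, size=n)                        # gamma frailty
u1 = (1 + rng.exponential(size=n) / w) ** (-1 / theta)  # Marshall-Olkin sampler
u2 = (1 + rng.exponential(size=n) / w) ** (-1 / theta)  # for the Clayton copula
print(round(lower_tdc(u1, u2), 3))
```

For a non-exchangeable copula, evaluating this quantity only on the diagonal path $(u, u)$ can understate the strongest tail association, which is precisely the motivation for searching over paths.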
This paper develops an axiomatic framework for ranking metrics, a general class of functionals for evaluating and ordering financial or insurance positions. Unlike traditional risk-adjusted performance measures-such as the Sharpe ratio, RAROC, or Omega-that express reward per unit of risk, ranking metrics assign each position a performance level rather than a normalized return. Relying on monotonicity and a new property called cash-quasiconcavity, we derive representation results linking ranking metrics to families of acceptance sets and risk measures, extending the theory of acceptability indices. Classical ratios arise as special cases, while new examples-based on expected-loss, Lambda-quantile, and bibliometric indices-illustrate the framework's flexibility. Empirical applications to portfolio ranking and climate-risk insurance demonstrate its practical relevance.
Short-horizon risk control matters for hedging and capital allocation. Yet existing Value-at-Risk studies rarely address standardized option books or the next-day valuation frictions that arise in derivatives data. This paper develops a framework for tail-risk control in standardized option books. The analysis focuses on the next-day realized loss and combines a base conditional quantile forecast with sequential conformal recalibration for adaptive Value-at-Risk control. This design addresses two central difficulties: unstable tail-risk forecasts under changing market conditions and the practical challenge of next-day valuation when exact same-contract quotes are unavailable. It also preserves economic interpretability through standardized construction and spot hedging when needed.
Using SPX option data from 2018 to 2025, we show that the uncalibrated base model systematically underestimates downside risk across multiple standardized books. Sequential recalibration removes much of this shortfall, brings exceedance rates closer to target, and improves rolling-window tail stability, with the largest gains in the books where the raw forecast is most vulnerable. The paper also provides an approximate one-step exceedance-control result for the sequential recalibration rule and quantifies the error introduced by next-day marking.
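A sequential recalibration rule in the spirit of the one described can be sketched as an online quantile-tracking update (adaptive-conformal style) that raises the VaR level after each breach and lets it decay otherwise. The paper's actual recalibrator may differ; this shows only the mechanism, on synthetic heavy-tailed losses.

```python
import numpy as np

# Hedged sketch of sequential exceedance-rate control for a VaR level.
def adaptive_var(losses, alpha=0.05, gamma=0.05, q0=0.0):
    q, exceed = q0, []
    for loss in losses:
        err = float(loss > q)          # 1 if the current VaR level was breached
        exceed.append(err)
        q += gamma * (err - alpha)     # pinball-gradient step at level 1 - alpha
    return q, float(np.mean(exceed))

rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=20_000)   # heavy-tailed synthetic losses
q_final, rate = adaptive_var(losses)
print(round(q_final, 2), round(rate, 3))     # rate should hover near alpha
```

In the paper's setting the update would be applied to the residual between the base conditional quantile forecast and the next-day realized loss, so the recalibration corrects the base model rather than tracking the raw loss level.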
Accurate forecasting of recovery rates (RR) is central to credit risk management and regulatory capital determination. In many loan portfolios, however, RR modeling is constrained by data scarcity arising from infrequent default events. Transfer learning (TL) offers a promising avenue to mitigate this challenge by exploiting information from related but richer source domains, yet its effectiveness critically depends on the presence and strength of distributional shifts, and on potential heterogeneity between source and target feature spaces.
This paper introduces FT-MDN-Transformer, a mixture-density tabular Transformer architecture specifically designed for TL in RR forecasting across heterogeneous feature sets. The model produces both loan-level point estimates and portfolio-level predictive distributions, thereby supporting a wide range of practical RR forecasting applications. We evaluate the proposed approach in a controlled Monte Carlo simulation that facilitates systematic variation of covariate, conditional, and label shifts, as well as in a real-world transfer setting using the Global Credit Data (GCD) loan dataset as source and a novel bonds dataset as target.
Our results show that FT-MDN-Transformer outperforms baseline models when target-domain data are limited, with particularly pronounced gains under covariate and conditional shifts, while label shift remains challenging. We also observe that its probabilistic forecasts closely track empirical recovery distributions, providing richer information than conventional point-prediction metrics alone. Overall, the findings highlight the potential of distribution-aware TL architectures to improve RR forecasting in data-scarce credit portfolios and offer practical insights for risk managers operating under heterogeneous data environments.
We examine whether model-based spot volatility estimators extracted from traded options data enhance the predictive power of the Heterogeneous Autoregressive (HAR) model for realized volatility. Specifically, we infer spot volatility under the rough stochastic volatility model via an iterative two-step approach following Andersen et al. (2015a) and adopt a deep learning surrogate to accelerate model estimation from large-scale options panels. Benchmarked against traditional stochastic volatility models (Heston, Bates, SVCJ) and the VIX index, our results demonstrate that the augmented HAR-RV-RHeston model improves daily realized volatility forecasting accuracy and sustains superior performance across horizons up to one month.
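The baseline HAR-RV design that the paper augments regresses next-day realized volatility on its daily, weekly, and monthly averages; the options-implied spot-volatility estimator would enter as one extra regressor. The RV series below is synthetic (a persistent log-AR(1)), not options data.

```python
import numpy as np

# Hedged sketch of the HAR-RV regression design:
# RV_{t+1} = b0 + b_d RV_t + b_w mean(RV_{t-4..t}) + b_m mean(RV_{t-21..t}) + e_t
def har_design(rv):
    rows, y = [], []
    for t in range(21, len(rv) - 1):
        rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(0)
log_rv = np.zeros(1000)
for t in range(1, 1000):                 # persistent synthetic log-volatility
    log_rv[t] = 0.97 * log_rv[t - 1] + rng.normal(scale=0.3)
rv = np.exp(log_rv - 5)

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # intercept, daily, weekly, monthly coefficients
```

The paper's comparison then asks whether appending a rough-volatility spot estimate as a fifth column improves out-of-sample forecasts relative to this four-column baseline and to VIX- or Heston-based alternatives.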