pith. machine review for the scientific record.

arxiv: 2603.14479 · v2 · submitted 2026-03-15 · 📊 stat.AP · stat.ME

Recognition: unknown

Risk-Calibrated Process Capability Approval with Finite Samples

Authors on Pith: no claims yet
classification: 📊 stat.AP · stat.ME
keywords: approval, capability, rules, decision, decisions, loss, operational
Original abstract

Process capability indices such as $C_{pk}$ are widely used in manufacturing to support supplier qualification, pilot-build release, and production approval. In practice, approval decisions are often based on deterministic threshold rules of the form $\widehat{C}_{pk} \ge C_0$. Because $\widehat{C}_{pk}$ is estimated from finite samples, however, such decisions are inherently stochastic, especially when the true capability lies near the approval threshold. This paper develops a risk-calibrated decision framework for process capability approval that explicitly accounts for estimation uncertainty and asymmetric operational loss. Capability approval is formulated as a binary statistical decision problem, leading to a rule of the form $\widehat{C}_{pk} \ge C_0 + k\,SE(\widehat{C}_{pk})$, where the calibration constant $k$ is determined either by a tolerable failure probability or by a false-accept/false-reject cost ratio. The resulting formulation unifies several commonly used procedures, including deterministic thresholding, lower confidence bound rules, and probability-based approval rules, and naturally extends them to cost-sensitive decision rules derived from asymmetric operational loss. Simulation experiments and an industrial case study show that risk calibration primarily affects near-threshold decisions, improves approval stability, and can substantially reduce expected operational loss when false acceptance is more costly than false rejection.
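The abstract states the calibrated rule $\widehat{C}_{pk} \ge C_0 + k\,SE(\widehat{C}_{pk})$ but not the estimator or calibration details. Below is a minimal Python sketch of such a rule, assuming normally distributed measurements, the usual point estimate $\widehat{C}_{pk} = \min(USL-\bar{x},\, \bar{x}-LSL)/(3s)$, a Bissell-type large-sample approximation for $SE(\widehat{C}_{pk})$, and an illustrative mapping from the false-accept/false-reject cost ratio to $k$; the paper's own standard error and cost-based calibration may differ.

```python
import numpy as np
from scipy.stats import norm

def cpk_hat(x, lsl, usl):
    """Point estimate of C_pk from a finite sample."""
    xbar, s = np.mean(x), np.std(x, ddof=1)
    return min(usl - xbar, xbar - lsl) / (3.0 * s)

def cpk_se(cpk, n):
    """Approximate standard error of the C_pk estimator
    (Bissell-type large-sample approximation; the paper's exact
    expression may differ)."""
    return np.sqrt(1.0 / (9.0 * n) + cpk**2 / (2.0 * (n - 1)))

def approve(x, lsl, usl, c0=1.33, alpha=None, cost_ratio=None):
    """Risk-calibrated approval: Cpk_hat >= C0 + k * SE(Cpk_hat).

    k is set either from a tolerable failure probability `alpha`
    (k = z_{1-alpha}) or from a false-accept/false-reject cost
    ratio (illustrative mapping, not taken from the paper).
    With k = 0 the rule reduces to deterministic thresholding.
    """
    n = len(x)
    cpk = cpk_hat(x, lsl, usl)
    se = cpk_se(cpk, n)
    if alpha is not None:
        k = norm.ppf(1.0 - alpha)
    elif cost_ratio is not None:
        # assumed mapping: costlier false acceptance -> larger k
        k = norm.ppf(cost_ratio / (1.0 + cost_ratio))
    else:
        k = 0.0
    return cpk >= c0 + k * se, cpk, se, k

# toy usage with simulated measurements
rng = np.random.default_rng(0)
sample = rng.normal(10.0, 0.5, size=50)
print(approve(sample, lsl=8.0, usl=12.0, c0=1.33, alpha=0.05))
```

As in the abstract, the calibration only matters for near-threshold samples: when $\widehat{C}_{pk}$ is far from $C_0$ relative to its standard error, the deterministic and risk-calibrated rules agree.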

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Nonlinear Amplification of Finite-Sample Uncertainty in Capability-Based Decisions

    stat.AP · 2026-05 · unverdicted · novelty 5.0

    Finite-sample uncertainty in capability indices is nonlinearly amplified into defect-risk metrics via tail curvature, producing decision instability near thresholds.

  2. A Machine Learning Framework for Uncertainty-Calibrated Capability Decision under Finite Samples

    stat.AP · 2026-04 · unverdicted · novelty 4.0

    A hybrid statistical baseline plus data-driven residual learner framework is proposed to calibrate decision risk for process capability indices under finite-sample uncertainty, showing better stability than convention...