pith. machine review for the scientific record. sign in

arxiv: 2604.14497 · v1 · submitted 2026-04-16 · 💻 cs.CE · stat.AP

Recognition: unknown

Robust Optimal Experimental Design Accounting for Sensor Failure

Authors on Pith no claims yet

Pith reviewed 2026-05-10 09:38 UTC · model grok-4.3

classification 💻 cs.CE stat.AP
keywords optimal experimental designrobust optimizationsensor placementvibration analysisfinite element modelingstructural dynamicssensor failure
0
0 comments X

The pith

Robust optimal designs for accelerometer placement outperform classical designs when sensors fail during vibration tests.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a robust version of optimal experimental design to choose accelerometer locations for structural vibration analysis. Sensors frequently fail under high accelerations, so the method optimizes placement while accounting for possible failures instead of assuming all sensors work. It uses a relaxation technique for gradient-based optimization plus a penalty term that pushes solutions toward binary on/off decisions for each candidate location. This matters because real experiments need designs that still deliver good parameter estimates even after some sensors drop out. The authors apply the approach to a finite-element model and compare it to standard non-robust designs using covariance and error metrics.

Core claim

Although robust and classical designs are similar for the structural dynamics problem of interest, robust designs outperform classical designs on average over relevant failure scenarios of interest. The work employs a relaxation-based approach with gradient-based optimization and a binary-inducing penalty to generate sensor designs that are robust to failures, evaluated using log-determinant of parameter covariance and mean-squared errors.

What carries the argument

Relaxation-based robust optimal experimental design formulation with binary-inducing penalty applied to high-dimensional sensor placement in expensive finite-element models of structural dynamics.

If this is right

  • Robust designs maintain better average parameter estimation accuracy across failure scenarios even when the nominal designs look similar.
  • The relaxation plus penalty method produces usable binary sensor layouts without relying on post-optimization rounding.
  • Metrics based on log-determinant of covariance and on mean-squared parameter or prediction errors can be used interchangeably to drive the placement.
  • The same framework applies directly to other high-dimensional vibration problems where sensor loss is common.
  • Classical designs remain adequate when failure probability is low, but the performance gap widens as failure likelihood increases.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Test engineers could pre-compute both a robust layout and a classical layout and switch to the robust one for high-risk experiments.
  • The method could be extended to continuous failure probabilities rather than discrete scenarios to cover more realistic uncertainty.
  • Similar robust formulations might improve sensor placement in other fields such as structural health monitoring or acoustic testing.
  • Physical validation on a laboratory shaker table with deliberate sensor disconnections would provide the next concrete check.

Load-bearing premise

The specific failure scenarios examined are representative of actual sensor failures that occur during high-acceleration vibration experiments.

What would settle it

Running the same sensor-placement optimization on data from physical vibration experiments that actually experience random sensor failures and measuring whether the robust design still yields lower average estimation error than the classical design.

Figures

Figures reproduced from arXiv: 2604.14497 by Chandler Smith, Drew Kouri, Jace Ritchie, Rebekah White, Timothy Walsh, Wilkins Aquino.

Figure 1
Figure 1. Figure 1: a) Finite element model of the wedding cake and b) candidate sensor locations superimposed over the model. Note that the horizontal levels of the structure are labeled as: base (red), level 1 (green), level 2 (blue), and level 3 (yellow). Uniaxial sensors can be oriented in one of three degrees of freedom. 2.2 Inverse problem formulation The aim of the inverse problem is to leverage measurement data y ∈ Y … view at source ↗
Figure 2
Figure 2. Figure 2: The fractional classical and robust optimal experimental designs determined by optimizing the OED objectives in (10) and (15), respectively, where no binary-inducing penalty is applied, i.e. γ = 0 [PITH_FULL_IMAGE:figures/full_fig_p010_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Comparing the classical and robust fractional optimal designs versus random designs based on the (left) classical OED criterion in (17) versus the (right) robust criterion in (18). dotted line denotes the average performance over these realizations. Note that in this context, accounting for one sensor failing amounts to zeroing out one of the fractional weights. For fairness of comparison, we then renormal… view at source ↗
Figure 4
Figure 4. Figure 4: Comparing the performance of the robust versus classical fractional optimal designs. The his￾togram represents the performance over (left) one sensor or (right) two sensors failing, while the optimal (no sensors failing) and average performances are noted with the vertical lines. solving the more computationally tractable problem of designing against one sensor failure could provide robustness to multiple … view at source ↗
Figure 5
Figure 5. Figure 5: (Left) a depiction of the probabilities of failure (PoFs) for each of the candidate sensor locations. (Right) a histogram of 105 realizations of a Bernoulli random, where the color of the bar corresponds to the PoF [PITH_FULL_IMAGE:figures/full_fig_p011_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: The binary optimal experimental designs associated with optimizing the classical OED criterion in (10) and robust criterion in (14), leveraging the binary-inducing double-well penalty. unlike the approach given in Section 3.1 (which requires sampling over failure scenarios), optimizing (14) is no more computationally expensive than the classical approach. In [PITH_FULL_IMAGE:figures/full_fig_p012_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Comparing the classical and robust binary optimal designs versus random designs based on the (left) classical OED criterion in (17) versus the (right) robust criterion in (18). in terms of failure scenarios we could see in an experiment based on the assigned PoFs. To do so, we leverage the postprocessing metric described in Section 2.4.2 for j = 1, . . . , nsamps Bernoulli random variable samples, where ns… view at source ↗
Figure 8
Figure 8. Figure 8: Comparing the performance of the robust versus classical binary optimal designs. The histogram represents the performance over failures determined by sampling Bernoulli random variables corresponding to the PoFs, where the histogram is depicted for log determinant values less than or equal to 75. The optimal (no sensors failing) and average performances are noted with the vertical lines. to fail, one can i… view at source ↗
Figure 9
Figure 9. Figure 9: A comparison of average parameter (left) and prediction (right) mean squared errors (MSEs) for robust versus classical designs over failure scenarios determined by sampling Bernoulli random variables. The dotted line indicates average (over failure scenarios) performance (average MSE). MSE (as noted by the minimum value associated with classical designs being less than the minimum value associated with rob… view at source ↗
Figure 10
Figure 10. Figure 10: The acceleration response of a candidate sensor that exhibited sensor clipping. The clipping threshold is shown as dashed lines. To illustrate this approach, we employ the wedding cake model (Section 2.1) and simulate a scenario 14 [PITH_FULL_IMAGE:figures/full_fig_p014_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: a) Percentage occurrence of dropout for each candidate sensor. b) Location of sensors that dropped out between 5% and 20% of the time and c) those that dropped out over 80% of the time. OED optimization problem as follows. If candidate accelerometer sensor i is predicted to exceed the clipping threshold, it is noted as a dropout sample with {Rj}ii = 0 in (15), for the j = 1, . . . , 100 force realizations… view at source ↗
Figure 12
Figure 12. Figure 12: (Top) The binary optimal experimental designs associated with optimizing the classical OED criterion in (10) and robust criterion in (14), leveraging the binary-inducing double-well penalty. (Bottom) The corresponding optimal classical (a) and robust (b) optimal designs (physical location and orientations) imposed on the wedding cake structure. 16 [PITH_FULL_IMAGE:figures/full_fig_p016_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Comparing the performance of the robust versus classical binary optimal designs. The histogram represents the performance over failures determined by predicting clipping behavior over j = 100 random initial force realizations. The histogram is depicted for log determinant values less than or equal to −20. The optimal (no sensors failing) and average performances are noted with the vertical lines. 17 [PIT… view at source ↗
read the original abstract

Optimal experimental design provides a way of determining a-priori the best locations at which to place accelerometers in vibrations analysis experiments. However, in practice, sensors often fail during experimentation due high mechanical accelerations. There have been limited works exploring the use of robust OED in the context of vibrations analysis, where design spaces (i.e. candidate sensor locations and orientations) are high-dimensional and the finite-element models are expensive to compute. Therefore, this work considers the application of more general robust OED formulations to such a structural dynamics problem. We employ a relaxation-based approach that enables the use of efficient gradient-based optimization. Furthermore, we leverage a binary-inducing penalty during optimization to provide a binary sensor design as an alternative to leveraging post-optimization rounding heuristics. We consider performance metrics based on the log-determinant of the parameter covariance as well those based on parameter and prediction mean-squared errors. We find that although robust and classical designs are similar for the structural dynamics problem of interest, robust designs outperform classical designs on average over relevant failure scenarios of interest.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper develops a robust optimal experimental design (OED) method for placing accelerometers in structural vibration experiments that accounts for potential sensor failures due to high mechanical accelerations. It employs a relaxation-based formulation solved via gradient-based optimization, augmented with a binary-inducing penalty to directly yield binary sensor designs without post-hoc rounding. Performance is assessed using the log-determinant of the parameter covariance matrix as well as parameter and prediction mean-squared error criteria. For the structural dynamics test case, the resulting robust designs are similar to classical (non-robust) designs, yet the authors report that they outperform the classical designs on average when evaluated over the modeled failure scenarios.

Significance. If the modeled sensor-failure distribution accurately represents real high-acceleration vibration tests, the approach could improve the reliability of sensor placements in expensive structural-dynamics experiments. The use of an efficient relaxation scheme and a direct binary penalty are methodological strengths that avoid common rounding heuristics. However, the reported similarity between robust and classical designs implies that any advantage is modest and highly sensitive to the choice of failure model; without quantitative results, error bars, or validation against observed failure statistics, the practical impact remains limited.

major comments (2)
  1. [Abstract] Abstract: the central claim that 'robust designs outperform classical designs on average over relevant failure scenarios of interest' is stated without any numerical values, confidence intervals, number of scenarios, or effect-size metrics. Because the designs are described as similar for the nominal problem, the magnitude and statistical reliability of the reported average advantage cannot be assessed from the given information.
  2. [§3] §3 (failure model definition): the outperformance is computed exclusively with respect to the authors' chosen probabilistic failure scenarios (independent dropouts or worst-case subsets). No calibration, sensitivity study, or comparison to empirical failure statistics from high-acceleration tests is provided. Given that the robust and classical designs are similar, this modeling choice is load-bearing for the practical conclusion that robust designs should be preferred.
minor comments (2)
  1. [Abstract] The abstract and introduction would benefit from a brief statement of the dimensionality of the candidate sensor set and the computational cost of the finite-element model to contextualize the need for the relaxation approach.
  2. [§2] Notation for the binary penalty term and the relaxation parameter should be introduced once and used consistently; occasional switches between symbols for the same quantity reduce readability.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for their constructive and detailed feedback on our manuscript. We address each major comment below and will revise the manuscript accordingly to improve clarity and strengthen the presentation of results.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the central claim that 'robust designs outperform classical designs on average over relevant failure scenarios of interest' is stated without any numerical values, confidence intervals, number of scenarios, or effect-size metrics. Because the designs are described as similar for the nominal problem, the magnitude and statistical reliability of the reported average advantage cannot be assessed from the given information.

    Authors: We agree that the abstract would benefit from quantitative details to support the outperformance claim. The manuscript includes Monte Carlo evaluations of the designs over failure scenarios, and we will revise the abstract to report specific metrics such as the average improvement in the log-determinant of the parameter covariance matrix, the number of scenarios sampled, and measures of variability (e.g., standard deviation across scenarios). This will enable readers to better assess the effect size and reliability of the reported advantage. revision: yes

  2. Referee: [§3] §3 (failure model definition): the outperformance is computed exclusively with respect to the authors' chosen probabilistic failure scenarios (independent dropouts or worst-case subsets). No calibration, sensitivity study, or comparison to empirical failure statistics from high-acceleration tests is provided. Given that the robust and classical designs are similar, this modeling choice is load-bearing for the practical conclusion that robust designs should be preferred.

    Authors: The failure models (independent Bernoulli dropouts and worst-case subsets) were selected as plausible representations of sensor risks in high-acceleration environments, given the limited availability of empirical failure statistics in the literature. We acknowledge that this assumption is central to the conclusions. In the revision, we will add a sensitivity study in §3 by varying the dropout probability and report its impact on the relative performance of robust versus classical designs. We will also clarify that the overall framework is general and can accommodate any specified failure distribution, including empirical ones when available. revision: partial

standing simulated objections not resolved
  • Direct calibration or validation of the sensor failure model against empirical statistics from real high-acceleration vibration tests, as such data is not publicly available and would require dedicated experimental collaboration outside the scope of this work.

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper formulates a robust OED problem using a standard log-determinant covariance objective (augmented by a relaxation and binary penalty), optimizes designs for the structural dynamics model, and numerically evaluates both robust and classical designs on the same failure scenarios. This evaluation is a direct computational consequence of the optimization but does not redefine the metric in terms of itself, rename a fitted quantity as a prediction, or rely on self-citations for load-bearing uniqueness or ansatz justification. The central claim rests on independent numerical comparison rather than reducing to its inputs by construction. The paper is self-contained against external benchmarks with no evident circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

Based solely on the abstract, the work rests on standard assumptions of optimal experimental design and finite-element modeling without introducing new entities or free parameters beyond typical regularization weights.

axioms (1)
  • domain assumption The finite-element model used to simulate sensor responses accurately captures the relevant structural dynamics.
    Implicit in applying OED to the vibrations problem; no alternative validation mentioned.

pith-pipeline@v0.9.0 · 5485 in / 1127 out tokens · 46845 ms · 2026-05-10T09:38:31.607672+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

23 extracted references · 18 canonical work pages

  1. [1]

    wny]⊤, forw i ∈ {0,1}

    Note that for simplicity, we will refer to the candidate DoFs ascandidate sensorsand the optimal design as theoptimal sensor placement strategy, while always referring to both location and orientation Since for every DoFione can either collect data (w i = 1) or not (w i = 0), we mathematically represent the design as binary, i.e.w= [w 1, . . . wny]⊤, forw...

  2. [2]

    Malinowski

    Wieslaw Ostachowicz, Rohan Soman, and Pawel H. Malinowski. Optimization of sensor placement for structural health monitoring: A review.Structural health monitoring, 18(3):963–988, 2019.doi: 10.1177/1475921719825601

  3. [3]

    Computational methodologies for optimal sensor placement in structural health monitoring: A review.Structural Health Monitoring, 19(4):1287–1308, 2020.doi:10.1177/ 1475921719877579

    Yi Tan and Limao Zhang. Computational methodologies for optimal sensor placement in structural health monitoring: A review.Structural Health Monitoring, 19(4):1287–1308, 2020.doi:10.1177/ 1475921719877579

  4. [4]

    A systematic review of optimization algorithms for structural health monitoring and optimal sensor placement.Sensors (Basel), 23(6):3293, March 2023.doi: 10.3390/s23063293

    Sahar Hassani and Ulrike Dackermann. A systematic review of optimization algorithms for structural health monitoring and optimal sensor placement.Sensors (Basel), 23(6):3293, March 2023.doi: 10.3390/s23063293

  5. [5]

    Fedorov.Theory of Optimal Experiments

    Valerii V. Fedorov.Theory of Optimal Experiments. Academic Press, 1972

  6. [6]

    Andrews and Agnes M

    David F. Andrews and Agnes M. Herzberg. The robustness and optimality of response surface designs.Journal of Statistical Planning and Inference, 3(3):249–257, January 1979. URL:https://www.sciencedirect.com/science/article/pii/0378375879900168,doi:10.1016/ 0378-3758(79)90016-8

  7. [7]

    Imhof, Dale Song, and Weng K

    Lorens A. Imhof, Dale Song, and Weng K. Wong. Optimal design of experiments with possibly failing trials.Statistica Sinica, pages 1145–1155, 2002. URL:https://www.jstor.org/stable/24307020

  8. [8]

    Nicolás, The bar derived category of a curved dg algebra, Journal of Pure and Applied Algebra 212 (2008) 2633–2659

    Lorens A. Imhof, Dale Song, and Weng K. Wong. Optimal design of experiments with anticipated pattern of missing observations.Journal of theoretical biology, 228(2):251–260, 2004.doi:10.1016/j. jtbi.2004.01.002

  9. [9]

    Smucker, Willis Jensen, Zichen Wu, and Bo Wang

    Byran J. Smucker, Willis Jensen, Zichen Wu, and Bo Wang. Robustness of classical and optimal designs to missing observations.Computational Statistics & Data Analysis, 113:251–260, September

  10. [10]

    1016/j.csda.2016.12.001

    URL:https://www.sciencedirect.com/science/article/pii/S0167947316302912,doi:10. 1016/j.csda.2016.12.001

  11. [11]

    Borkowski

    Wanida Limmun, Boonorm Chomtee, and John J. Borkowski. Generating Robust Optimal Mixture Designs Due to Missing Observation Using a Multi-Objective Genetic Algorithm.Mathematics, 11(16), 2023.doi:10.3390/math11163558

  12. [12]

    Exploring Application of the Coordinate Exchange to Generate Optimal Designs Robust to Data Loss

    Asher Hanson. Exploring Application of the Coordinate Exchange to Generate Optimal Designs Robust to Data Loss. Master’s thesis, Utah State University, 2024.doi:10.26076/4385-2a82

  13. [13]

    Youn, and Heung S

    Haichao An, Byeng D. Youn, and Heung S. Kim. Optimal sensor placement considering both sensor faults under uncertainty and sensor clustering for vibration-based damage detection.Structural and Multidisciplinary Optimization, 65(3):102, 2022.doi:10.1007/s00158-021-03159-9

  14. [14]

    Yichao Yang, Mayank Chadha, Zhen Hu, and Michael D. Todd. An optimal sensor design frame- work accounting for sensor reliability over the structural life cycle.Mechanical Systems and Sig- nal Processing, 202:110673, 2023. URL:https://www.sciencedirect.com/science/article/pii/ S0888327023005812,doi:10.1016/j.ymssp.2023.110673

  15. [15]

    Andrews and Agnes M

    David F. Andrews and Agnes M. Herzberg. Some considerations in the optimal design of experiments in non-optimal situations.Journal of Statistical Planning and Inference, B(38):284–289, 1976.doi: 10.1111/j.2517-6161.1976.tb01596.x

  16. [16]

    Sensor selection via convex optimization.IEEE Transactions on Signal Processing, 57(2):451–462, 2009.doi:10.1109/TSP.2008.2007095

    Siddharth Joshi and Stephen Boyd. Sensor selection via convex optimization.IEEE Transactions on Signal Processing, 57(2):451–462, 2009.doi:10.1109/TSP.2008.2007095. 20

  17. [17]

    Global optimality conditions for sensor placement, with extensions to binary low-rank A-optimal designs.Inverse Problems, 41(6):065013, June 2025.doi:10.1088/1361-6420/add9bf

    Christian Aarset. Global optimality conditions for sensor placement, with extensions to binary low-rank A-optimal designs.Inverse Problems, 41(6):065013, June 2025.doi:10.1088/1361-6420/add9bf

  18. [18]

    Crane, David M

    Gregory Bunting, Nathan K. Crane, David M. Day, Clark R. Dohrmann, Brian A. Ferri, Robert C. Flicek, Sean Hardesty, Payton Lindsay, Scott T. Miller, Lynn Munday, et al. Sierra Structural Dynamics- Users notes 4.50. Technical report, Sandia National Lab.(SNL-NM), Albuquerque, NM (United States), 2018.doi:10.2172/1760401

  19. [19]

    Banks, Marie Davidian, John R

    Harvey T. Banks, Marie Davidian, John R. Samuels Jr, and Karyn L. Sutton. An inverse problem sta- tistical methodology summary. InMathematical and statistical estimation approaches in epidemiology, pages 249–302. Springer, 2009.doi:10.1007/978-90-481-2313-1_11

  20. [20]

    Lee, Stefanie Biedermann, and Robin Mitra

    Kim M. Lee, Stefanie Biedermann, and Robin Mitra. Optimal design for experiments with possibly incomplete observations.Statistica Sinica, 28(3):1611–1632, 2018.doi:10.5705/ss.202015.0225

  21. [21]

    Kouri, John D

    Drew P. Kouri, John D. Jakeman, and J. Gabriel Huerta. Risk-Adapted Optimal Experimental Design. SIAM/ASA Journal on Uncertainty Quantification, 10(2):687–716, June 2022. Publisher: Society for Industrial and Applied Mathematics.doi:10.1137/20M1357615

  22. [22]

    Optimal experimental design for infinite-dimensional Bayesian inverse problems governed by pdes: a review.Inverse Problems, 37(4):043001, March 2021.doi:10.1088/1361-6420/ abe10c

    Alen Alexanderian. Optimal experimental design for infinite-dimensional Bayesian inverse problems governed by pdes: a review.Inverse Problems, 37(4):043001, March 2021.doi:10.1088/1361-6420/ abe10c

  23. [23]

    SIAM, 2006.doi:10.1137/1.9780898719109

    Friedrich Pukelsheim.Optimal design of experiments. SIAM, 2006.doi:10.1137/1.9780898719109. 21