6 Pith papers cite this work (2026); polarity verdicts are not yet indexed. Six representative citing papers are listed below.
Citing papers explorer
-
Self-Supervised Laplace Approximation for Bayesian Uncertainty Quantification
SSLA approximates the posterior predictive distribution by refitting Bayesian models on self-predicted data, providing a sampling-free method that improves predictive calibration over classical Laplace approximations in regression tasks.
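The classical Laplace approximation that SSLA is compared against fits a Gaussian to the posterior at the MAP estimate and propagates it into the predictive distribution. A minimal sketch for Bayesian linear regression, where the data, prior precision, and noise level are illustrative assumptions rather than settings from the paper:

```python
# Hedged sketch: classical Laplace approximation for Bayesian linear
# regression (the baseline SSLA is measured against). All constants here
# are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3                      # known observation noise (assumed)
y = X @ w_true + sigma * rng.normal(size=n)

alpha = 1.0                      # Gaussian prior precision (assumed)

# MAP estimate: minimizer of the negative log posterior (ridge solution).
A = X.T @ X / sigma**2 + alpha * np.eye(d)  # Hessian of neg. log posterior
w_map = np.linalg.solve(A, X.T @ y / sigma**2)

# Laplace posterior N(w_map, A^{-1}); predictive at a new input x*:
Sigma = np.linalg.inv(A)
x_star = np.array([1.0, 0.0, -1.0])
pred_mean = x_star @ w_map
pred_var = sigma**2 + x_star @ Sigma @ x_star  # noise + parameter uncertainty
```

For a linear-Gaussian model this Gaussian is exact; the calibration gap the summary refers to arises for nonlinear models, where the Laplace posterior is only a local approximation.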
-
Evaluating Counterfactual Explanation Methods on Incomplete Inputs
All tested counterfactual explanation methods struggle to generate valid counterfactuals on incomplete inputs, though robust variants outperform non-robust ones.
-
From Uniform to Learned Knots: A Study of Spline-Based Numerical Encodings for Tabular Deep Learning
Spline encodings for numerical features show task-dependent performance in tabular deep learning, with piecewise-linear encoding robust for classification and variable results for regression depending on spline family, knot strategy, and backbone.
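The piecewise-linear encoding that the summary calls robust for classification maps each scalar feature to a vector of per-bin fractions. A hedged sketch, using fixed toy bin edges; the paper studies several knot strategies (e.g. quantile-based or learned) that may differ:

```python
# Hedged sketch of piecewise-linear encoding (PLE) for one numeric feature.
# Bin edges are toy values; the paper's knot strategies may differ.
import numpy as np

def piecewise_linear_encode(x, edges):
    """Encode scalars x as per-bin fractions in [0, 1].

    edges: sorted boundaries b_0 < b_1 < ... < b_T (T bins -> T-dim output).
    Component t is 0 below bin t, 1 above it, and linear inside it.
    """
    x = np.asarray(x, dtype=float)[:, None]
    lo, hi = edges[:-1][None, :], edges[1:][None, :]
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

x = np.array([0.0, 2.5, 10.0])
edges = np.array([0.0, 1.0, 5.0, 10.0])   # three bins
enc = piecewise_linear_encode(x, edges)
# x = 2.5 falls in the second bin: the first component saturates at 1,
# the second is (2.5 - 1) / (5 - 1) = 0.375, the third stays 0.
```

The encoding is monotone in x and piecewise-linear per component, which is what makes it a well-behaved input representation for deep tabular backbones.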
-
A Composite Activation Function for Learning Stable Binary Representations
HTAF is a sigmoid-tanh composite that approximates the Heaviside function to allow stable gradient training of binary activation networks, yielding ICBMs with stable discretization and competitive performance on image tasks.
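The exact HTAF definition is not reproduced here; as an illustration only, one sigmoid-tanh composite that approaches the Heaviside step as a temperature grows, while keeping gradients nonzero everywhere:

```python
# Hedged sketch: a generic sigmoid-tanh composite approximating the
# Heaviside step. Illustrative form, not necessarily the paper's HTAF.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def composite_step(x, k=10.0):
    """Smooth Heaviside surrogate; sharpens toward 0/1 as k grows."""
    return sigmoid(k * np.tanh(x))

x = np.linspace(-3, 3, 7)
soft = composite_step(x)              # differentiable, near-binary values
hard = (x > 0).astype(float)          # the hard step used at inference
# soft keeps gradients flowing during training while the forward
# activations stay close to the discretized 0/1 step.
```

The stability claim in the summary corresponds to the soft forward pass staying close to its hard discretization, so binarizing at inference changes the network's outputs only slightly.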
-
Inductive Venn-Abers and related regressors
Venn-Abers predictors are extended to unbounded regression via conformal prediction, producing point regressors that modestly improve efficiency over standard methods for large datasets.
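The conformal machinery used for the extension can be sketched with split conformal regression, which turns any point regressor into a finite-sample-valid interval predictor; the Venn-Abers construction itself is not reproduced here:

```python
# Hedged sketch of split conformal prediction for regression, the general
# machinery the summary mentions; not the paper's Venn-Abers construction.
import numpy as np

rng = np.random.default_rng(1)

def split_conformal_interval(residuals_cal, y_hat_test, alpha=0.1):
    """Return a (1 - alpha) prediction interval around point predictions.

    residuals_cal: |y - y_hat| on a held-out calibration set.
    Uses the standard finite-sample-adjusted empirical quantile.
    """
    n = len(residuals_cal)
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(residuals_cal, min(q_level, 1.0))
    return y_hat_test - q, y_hat_test + q

# Toy calibration residuals from an assumed, already-fitted point regressor.
residuals = np.abs(rng.normal(scale=0.5, size=200))
lo, hi = split_conformal_interval(residuals, y_hat_test=np.array([1.0, 2.0]))
```

Split conformal intervals have constant width across test points; the efficiency gains the summary reports concern tightening such intervals on large datasets.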
-
CoAX: Cognitive-Oriented Attribution eXplanation User Model of Human Understanding of AI Explanations
Cognitive models of how users reason with XAI methods on tabular data fit human forward-simulation decisions better than ML baselines and support hypothesis testing without running new user studies.