pith. machine review for the scientific record.

arxiv: 1811.00866 · v1 · submitted 2018-11-02 · 💻 cs.LG · cs.CR · stat.ML

Recognition: unknown

Efficient Neural Network Robustness Certification with General Activation Functions

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CR · stat.ML
keywords activation functions · general · robustness · certified · crown · given · lower
abstract

Finding the minimum distortion of adversarial examples, and thus certifying the robustness of neural network classifiers at given data points, is known to be a challenging problem. Nevertheless, recent work has shown that it is possible to give a non-trivial certified lower bound on the minimum adversarial distortion, with progress made by exploiting the piece-wise linear nature of ReLU activations. However, generic robustness certification for general activation functions remains largely unexplored. To address this issue, we introduce CROWN, a general framework for certifying the robustness of neural networks with general activation functions at given input data points. The novelty of our algorithm lies in bounding a given activation function with linear and quadratic functions, which allows it to handle general activation functions including, but not limited to, four popular choices: ReLU, tanh, sigmoid and arctan. In addition, we facilitate the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation. Experimental results show that CROWN on ReLU networks notably improves the certified lower bounds compared to the current state-of-the-art algorithm, Fast-Lin, with comparable computational efficiency. Furthermore, CROWN demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid and arctan.

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Quantitative Linear Logic for Neuro-Symbolic Learning and Verification

    cs.LO 2026-05 unverdicted novelty 7.0

    QLL is a novel logic for neuro-symbolic learning that uses ML-native operations (sum, log-sum-exp) on logits to embed constraints, satisfying most linear logic properties and showing stronger correlation between empir...

  2. Quantitative Linear Logic for Neuro-Symbolic Learning and Verification

    cs.LO 2026-05 unverdicted novelty 6.0

    Quantitative Linear Logic interprets logical connectives via natural ML operations on logits to embed constraints in neural training while satisfying most linear logic laws and correlating performance with independent...

  3. Functional Similarity Metric for Neural Networks: Overcoming Parametric Ambiguity via Activation Region Analysis

    cs.LG 2026-04 unverdicted novelty 6.0

    A functional similarity metric for ReLU networks uses normalized activation region signatures and MinHash to overcome parametric symmetries like neuron permutation and scaling.