An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks
Cong Xu, Xiang Li, Min Yang

Abstract
Neural networks are susceptible to artificially designed adversarial perturbations. Recent efforts have shown that imposing certain modifications on the classification layer can improve the robustness of neural networks. In this paper, we explicitly construct a dense orthogonal weight matrix whose entries all have the same magnitude, leading to a novel robust classifier. The proposed classifier avoids the undesired structural-redundancy issue of previous work. Applying this classifier in standard training on clean data is sufficient to ensure high accuracy and good robustness of the model. Moreover, when extra adversarial samples are used, even better robustness can be obtained with the help of a special worst-case loss. Experimental results show that our method is efficient and competitive with many state-of-the-art defensive approaches. Our code is available at https://github.com/MTandHJ/roboc.
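The abstract's central object is a dense orthogonal weight matrix whose entries share the same magnitude. One classical matrix family with exactly these properties is the normalized Hadamard matrix: its rows are orthonormal and every entry is ±1/√n. The sketch below is our own illustration of such a construction, not the authors' exact method from the repository; `num_classes`, `feature_dim`, and the helper name are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's exact construction): a normalized
# Hadamard matrix is dense, orthogonal, and has entries of equal magnitude,
# matching the properties the abstract requires of the classification layer.
import numpy as np
from scipy.linalg import hadamard


def orthogonal_classifier_weights(num_classes: int, feature_dim: int) -> np.ndarray:
    """Return a (num_classes, feature_dim) weight matrix with orthonormal
    rows whose entries all have magnitude 1/sqrt(feature_dim)."""
    # Sylvester's construction requires the order to be a power of two.
    assert feature_dim & (feature_dim - 1) == 0, "feature_dim must be a power of 2"
    assert num_classes <= feature_dim
    H = hadamard(feature_dim).astype(np.float64)  # entries are +/-1
    return H[:num_classes] / np.sqrt(feature_dim)  # orthonormal rows, equal |entry|


# Sanity checks: rows are orthonormal and every entry has the same magnitude.
W = orthogonal_classifier_weights(num_classes=10, feature_dim=512)
assert np.allclose(W @ W.T, np.eye(10))
assert np.allclose(np.abs(W), 1.0 / np.sqrt(512))

# Hypothetical usage: logits for a batch of penultimate-layer features.
features = np.random.randn(4, 512)
logits = features @ W.T
```

Because such a matrix is fully determined up to row selection, it can be kept fixed during training, which is one way a classifier of this form can avoid structural redundancy in the final layer.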
Benchmarks
| Benchmark | Methodology | AutoAttack | DeepFool | PGD20 |
|---|---|---|---|---|
| adversarial-attack-on-cifar-10 | Xu et al. | 44.150 | 51.310 | 78.680 |