Robust Single Image Reflection Removal Against Adversarial Attacks
Jianfeng Lu, Wenqi Ren, Zhaoxin Fan, Wenhan Luo, Kaihao Zhang, Zhenyuan Zhang, Zhenbo Song

Abstract
This paper addresses the problem of robust deep single-image reflection removal (SIRR) against adversarial attacks. Current deep-learning-based SIRR methods suffer significant performance degradation under unnoticeable distortions and perturbations of input images. For a comprehensive robustness study, we first conduct diverse adversarial attacks tailored to the SIRR problem, i.e., attacks with different targets and regions. We then propose a robust SIRR model that integrates a cross-scale attention module, a multi-scale fusion module, and an adversarial image discriminator. By exploiting the multi-scale mechanism, the model narrows the gap between features extracted from clean and adversarial images. The image discriminator adaptively distinguishes clean from noisy inputs, which further improves robustness. Extensive experiments on the Nature, SIR^2, and Real datasets demonstrate that our model remarkably improves the robustness of SIRR across disparate scenes.
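To make the attack setting concrete, the sketch below shows a PGD-style L-infinity attack on a generic SIRR network, where the adversary perturbs the blended input so that the predicted transmission layer drifts away from the ground truth. This is a minimal illustration of the kind of attack the abstract describes, not the paper's exact protocol: the function name `pgd_attack_sirr`, the MSE attack objective, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
# Minimal PGD-style attack sketch for an SIRR model (assumed setup, PyTorch).
import torch
import torch.nn.functional as F

def pgd_attack_sirr(model, blended, transmission, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb the blended input so the predicted transmission layer degrades.

    model:        any SIRR network mapping a blended image -> transmission estimate
    blended:      input image with reflections, shape (B, 3, H, W), values in [0, 1]
    transmission: ground-truth reflection-free image, same shape
    """
    # Random start inside the eps-ball, as is common for PGD.
    adv = blended.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        adv.requires_grad_(True)
        pred = model(adv)
        # Attack objective: push the restored image away from the ground truth.
        loss = F.mse_loss(pred, transmission)
        grad = torch.autograd.grad(loss, adv)[0]

        with torch.no_grad():
            adv = adv + alpha * grad.sign()                    # ascend the restoration loss
            adv = blended + (adv - blended).clamp(-eps, eps)   # project back into the eps-ball
            adv = adv.clamp(0, 1).detach()

    return adv
```

Restricting the update to a mask (e.g., only reflection regions) would give the region-targeted variant mentioned in the abstract; swapping the loss target changes the attack objective.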
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| reflection-removal-on-real20 | RobustSIRR | PSNR: 23.61, SSIM: 0.835 |