M$^3$Net: Multilevel, Mixed and Multistage Attention Network for Salient Object Detection
Yao Yuan; Pan Gao; XiaoYang Tan

Abstract
Most existing salient object detection (SOD) methods use a U-Net or feature pyramid structure that simply aggregates feature maps of different scales, ignoring their uniqueness, their interdependence, and their respective contributions to the final prediction. To overcome these issues, we propose M$^3$Net, i.e., the Multilevel, Mixed and Multistage attention network for SOD. First, we propose the Multiscale Interaction Block, which innovatively introduces cross-attention to achieve interaction between multilevel features, allowing high-level features to guide low-level feature learning and thus enhance salient regions. Second, considering that previous Transformer-based SOD methods locate salient regions using only global self-attention and thus inevitably overlook the details of complex objects, we propose the Mixed Attention Block. This block combines global self-attention and window self-attention, aiming to model context at both the global and local levels and further improve the accuracy of the prediction map. Finally, we propose a multilevel supervision strategy to optimize the aggregated features stage by stage. Experiments on six challenging datasets demonstrate that the proposed M$^3$Net surpasses recent CNN- and Transformer-based SOD methods in terms of four metrics. Code is available at https://github.com/I2-Multimedia-Lab/M3Net.
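The abstract describes two attention mechanisms: cross-attention that lets high-level features guide low-level ones, and a mixed block that combines global self-attention with window self-attention. The PyTorch sketch below illustrates both ideas in a minimal form. It is not the authors' implementation; all module names, shapes, and hyperparameters here (e.g. `CrossLevelAttention`, `MixedAttention`, `window_size=7`) are illustrative assumptions, and the official code is in the linked repository.

```python
# Minimal sketch (not the authors' code) of the two attention ideas in the abstract.
import torch
import torch.nn as nn


class CrossLevelAttention(nn.Module):
    """Cross-attention: high-level tokens provide keys/values that guide
    low-level tokens acting as queries (hypothetical module name)."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, low_feat, high_feat):
        # low_feat:  (B, N_low,  C) tokens from a low-level (high-resolution) stage
        # high_feat: (B, N_high, C) tokens from a high-level (semantic) stage
        out, _ = self.attn(query=low_feat, key=high_feat, value=high_feat)
        # Residual: low-level features enhanced under high-level guidance
        return self.norm(low_feat + out)


class MixedAttention(nn.Module):
    """Combines global self-attention over all tokens with window
    self-attention over non-overlapping local windows (hypothetical module name)."""

    def __init__(self, dim, num_heads=4, window_size=7):
        super().__init__()
        self.window_size = window_size
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, h, w):
        # x: (B, N, C) with N == h * w; h and w must be divisible by window_size
        B, N, C = x.shape
        g, _ = self.global_attn(x, x, x)  # global context over all tokens

        # Partition the token grid into ws x ws windows and attend within each window
        ws = self.window_size
        win = x.view(B, h // ws, ws, w // ws, ws, C)
        win = win.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)
        l, _ = self.local_attn(win, win, win)  # local detail refinement
        l = l.reshape(B, h // ws, w // ws, ws, ws, C)
        l = l.permute(0, 1, 3, 2, 4, 5).reshape(B, N, C)  # merge windows back

        return self.norm(x + g + l)  # fuse global and local context


if __name__ == "__main__":
    B, C, h, w = 2, 64, 14, 14
    low = torch.randn(B, h * w, C)                 # low-level stage tokens
    high = torch.randn(B, (h // 2) * (w // 2), C)  # coarser high-level stage tokens
    fused = CrossLevelAttention(C)(low, high)
    out = MixedAttention(C, window_size=7)(fused, h, w)
    print(out.shape)  # torch.Size([2, 196, 64])
```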
Code Repositories
https://github.com/I2-Multimedia-Lab/M3Net
Benchmarks
| Benchmark | Method | MAE | S-Measure | Weighted F-Measure |
|---|---|---|---|---|
| Salient Object Detection on DUT-OMRON | M3Net-R | 0.061 | 0.848 | 0.769 |
| Salient Object Detection on DUT-OMRON | M3Net-S | 0.045 | 0.872 | 0.811 |
| Salient Object Detection on DUTS-TE | M3Net-R | 0.036 | 0.897 | 0.849 |
| Salient Object Detection on DUTS-TE | M3Net-S | 0.024 | 0.927 | 0.902 |
| Salient Object Detection on ECSSD | M3Net-R | 0.029 | 0.931 | 0.919 |
| Salient Object Detection on ECSSD | M3Net-S | 0.021 | 0.948 | 0.947 |
| Salient Object Detection on HKU-IS | M3Net-R | 0.026 | 0.929 | 0.913 |
| Salient Object Detection on HKU-IS | M3Net-S | 0.019 | 0.943 | 0.937 |
| Salient Object Detection on PASCAL-S | M3Net-R | 0.060 | 0.868 | 0.827 |
| Salient Object Detection on PASCAL-S | M3Net-S | 0.047 | 0.889 | 0.864 |