Xuebin Qin, Zichen Zhang, Chenyang Huang, Chao Gao, Masood Dehghan, Martin Jagersand

Abstract
Deep Convolutional Neural Networks have been adopted for salient object detection and have achieved state-of-the-art performance. Most previous works, however, focus on region accuracy rather than boundary quality. In this paper, we propose a predict-refine architecture, BASNet, and a new hybrid loss for Boundary-Aware Salient object detection. Specifically, the architecture is composed of a densely supervised encoder-decoder network and a residual refinement module, which are respectively in charge of saliency prediction and saliency map refinement. The hybrid loss guides the network to learn the transformation between the input image and the ground truth in a three-level hierarchy (pixel-, patch- and map-level) by fusing Binary Cross Entropy (BCE), Structural SIMilarity (SSIM) and Intersection-over-Union (IoU) losses. Equipped with the hybrid loss, the proposed predict-refine architecture is able to effectively segment the salient object regions and accurately predict the fine structures with clear boundaries. Experimental results on six public datasets show that our method outperforms the state-of-the-art methods in terms of both regional and boundary evaluation measures. Our method runs at over 25 fps on a single GPU. The code is available at: https://github.com/NathanUA/BASNet.
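The hybrid loss described above sums a pixel-level BCE term, a patch-level SSIM term and a map-level IoU term. The following is a minimal PyTorch sketch of such a loss, not the authors' released implementation: it assumes `pred` is a sigmoid output and `target` a binary mask, both of shape (N, 1, H, W), uses a simple box window for SSIM instead of a Gaussian one, and sums the three terms with equal weights.

```python
# Hypothetical BCE + SSIM + IoU hybrid loss sketch (simplified; see the official repo for the exact version).
import torch
import torch.nn.functional as F

def ssim_loss(pred, target, window_size=11, C1=0.01 ** 2, C2=0.03 ** 2):
    """Patch-level term: 1 - SSIM, computed with a uniform (box) window for simplicity."""
    pad = window_size // 2
    mu_p = F.avg_pool2d(pred, window_size, stride=1, padding=pad)
    mu_t = F.avg_pool2d(target, window_size, stride=1, padding=pad)
    sigma_p = F.avg_pool2d(pred * pred, window_size, stride=1, padding=pad) - mu_p ** 2
    sigma_t = F.avg_pool2d(target * target, window_size, stride=1, padding=pad) - mu_t ** 2
    sigma_pt = F.avg_pool2d(pred * target, window_size, stride=1, padding=pad) - mu_p * mu_t
    ssim_map = ((2 * mu_p * mu_t + C1) * (2 * sigma_pt + C2)) / (
        (mu_p ** 2 + mu_t ** 2 + C1) * (sigma_p + sigma_t + C2)
    )
    return 1.0 - ssim_map.mean()

def iou_loss(pred, target, eps=1e-7):
    """Map-level term: soft IoU over the whole saliency map."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) - inter
    return (1.0 - (inter + eps) / (union + eps)).mean()

def hybrid_loss(pred, target):
    """Pixel-level BCE + patch-level SSIM + map-level IoU, summed with equal weights (an assumption here)."""
    bce = F.binary_cross_entropy(pred, target)
    return bce + ssim_loss(pred, target) + iou_loss(pred, target)
```

In the predict-refine setup, a loss of this form would typically be applied to each densely supervised side output of the encoder-decoder as well as to the refined map, so that every stage is pushed toward sharp boundaries.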
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| camouflaged-object-segmentation-on-camo | BASNet | MAE: 0.159 S-Measure: 0.618 Weighted F-Measure: 0.413 |
| camouflaged-object-segmentation-on-cod | BASNet | MAE: 0.092 S-Measure: 0.685 Weighted F-Measure: 0.352 |
| camouflaged-object-segmentation-on-pcod-1200 | BASNet | S-Measure: 0.837 |
| dichotomous-image-segmentation-on-dis-te1 | BASNet | E-measure: 0.801 HCE: 220 MAE: 0.084 S-Measure: 0.754 max F-Measure: 0.688 weighted F-measure: 0.595 |
| dichotomous-image-segmentation-on-dis-te2 | BASNet | E-measure: 0.836 HCE: 480 MAE: 0.084 S-Measure: 0.786 max F-Measure: 0.755 weighted F-measure: 0.668 |
| dichotomous-image-segmentation-on-dis-te3 | BASNet | E-measure: 0.856 HCE: 948 MAE: 0.083 S-Measure: 0.798 max F-Measure: 0.785 weighted F-measure: 0.696 |
| dichotomous-image-segmentation-on-dis-te4 | BASNet | E-measure: 0.848 HCE: 3601 MAE: 0.091 S-Measure: 0.794 max F-Measure: 0.780 weighted F-measure: 0.693 |
| dichotomous-image-segmentation-on-dis-vd | BASNet | E-measure: 0.816 HCE: 1402 MAE: 0.094 S-Measure: 0.768 max F-Measure: 0.731 weighted F-measure: 0.641 |
| salient-object-detection-on-dut-omron | BASNet | MAE: 0.056 |
| salient-object-detection-on-duts-te | BASNet | MAE: 0.047 S-Measure: 0.876 mean E-Measure: 0.896 mean F-Measure: 0.823 |
| salient-object-detection-on-ecssd | BASNet | MAE: 0.037 |
| salient-object-detection-on-hku-is | BASNet | MAE: 0.032 |
| salient-object-detection-on-pascal-s | BASNet | MAE: 0.076 |
| salient-object-detection-on-soc | BASNet | Average MAE: 0.092 S-Measure: 0.841 mean E-Measure: 0.864 |
| salient-object-detection-on-sod | BASNet | MAE: 0.114 |
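For reference, two of the metrics reported above, MAE and max F-Measure, are typically computed as in the simplified sketch below. This is an illustrative implementation under common conventions (255 thresholds, beta^2 = 0.3), not the exact evaluation code used by these benchmarks.

```python
# Illustrative saliency evaluation metrics (simplified; benchmark toolkits may differ in details).
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a [0,1] saliency map and a binary ground-truth mask."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def max_f_measure(pred, gt, beta2=0.3, num_thresholds=255):
    """Max F-measure over evenly spaced binarization thresholds (beta^2 = 0.3 is the common choice)."""
    gt = gt > 0.5
    best = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds, endpoint=False):
        binary = pred > t
        tp = np.logical_and(binary, gt).sum()
        precision = tp / (binary.sum() + 1e-8)
        recall = tp / (gt.sum() + 1e-8)
        f = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
        best = max(best, f)
    return best
```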