Progressively Normalized Self-Attention Network for Video Polyp Segmentation
Ge-Peng Ji; Yu-Cheng Chou; Deng-Ping Fan; Geng Chen; Huazhu Fu; Debesh Jha; Ling Shao

Abstract
Existing video polyp segmentation (VPS) models typically employ convolutional neural networks (CNNs) to extract features. However, due to their limited receptive fields, CNNs cannot fully exploit the global temporal and spatial information in successive video frames, resulting in false-positive segmentation results. In this paper, we propose the novel PNS-Net (Progressively Normalized Self-attention Network), which can efficiently learn representations from polyp videos at real-time speed (~140 fps) on a single RTX 2080 GPU without post-processing. Our PNS-Net is based solely on a basic normalized self-attention block, dispensing with recurrence and CNNs entirely. Experiments on challenging VPS datasets demonstrate that the proposed PNS-Net achieves state-of-the-art performance. We also conduct extensive experiments to study the effectiveness of the channel split, soft-attention, and progressive learning strategy. We find that our PNS-Net works well under different settings, making it a promising solution to the VPS task.
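The abstract's core ingredients, a normalized self-attention block with a channel split and soft-attention weighting, can be illustrated with a minimal sketch. The PyTorch snippet below is not the authors' released PNS-Net code; the module name `NormalizedSelfAttention`, the number of channel groups, and the scaled-dot-product normalization are illustrative assumptions.

```python
# Minimal sketch of a normalized self-attention block with channel split and
# soft-attention. Illustrative only; module name, group count, and the exact
# normalization are assumptions, not the authors' released PNS-Net code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NormalizedSelfAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups                      # channel-split groups
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        g, d = self.groups, c // self.groups
        # Project and split channels into groups: (b, g, d, h*w)
        q = self.query(x).view(b, g, d, h * w)
        k = self.key(x).view(b, g, d, h * w)
        v = self.value(x).view(b, g, d, h * w)
        # Normalized attention logits: scaled dot product within each group
        attn = torch.einsum("bgdi,bgdj->bgij", q, k) / (d ** 0.5)
        attn = F.softmax(attn, dim=-1)            # soft-attention weighting
        out = torch.einsum("bgij,bgdj->bgdi", attn, v).reshape(b, c, h, w)
        return self.out(out) + x                  # residual connection


if __name__ == "__main__":
    block = NormalizedSelfAttention(channels=32, groups=4)
    feats = torch.randn(2, 32, 16, 16)            # (batch, channels, H, W)
    print(block(feats).shape)                     # torch.Size([2, 32, 16, 16])
```

In the paper, blocks of this kind are stacked and applied across neighboring frames so that attention aggregates spatio-temporal context; the sketch shows only the single-frame, single-block case.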
Benchmarks
| Benchmark | Methodology | Dice | S-measure | Sensitivity | mean E-measure | mean F-measure | weighted F-measure |
|---|---|---|---|---|---|---|---|
| video-polyp-segmentation-on-sun-seg-easy | PNSNet | 0.676 | 0.767 | 0.574 | 0.744 | 0.664 | 0.616 |
| video-polyp-segmentation-on-sun-seg-hard | PNSNet | 0.675 | 0.767 | 0.579 | 0.755 | 0.656 | 0.609 |