DDANet: Dual Decoder Attention Network for Automatic Polyp Segmentation
Nikhil Kumar Tomar, Debesh Jha, Sharib Ali, Håvard D. Johansen, Dag Johansen, Michael A. Riegler, Pål Halvorsen

Abstract
Colonoscopy is the gold standard for the examination and detection of colorectal polyps. Localization and delineation of polyps can play a vital role in treatment (e.g., surgical planning) and prognostic decision making. Polyp segmentation can provide detailed boundary information for clinical analysis. Convolutional neural networks have improved performance on colonoscopy image analysis. However, polyps present various challenges, such as intra- and inter-class variation and noise. While manual labeling for polyp assessment requires time from experts and is prone to human error (e.g., missed lesions), automated, accurate, and fast segmentation can improve the quality of delineated lesion boundaries and reduce the miss rate. The Endotect challenge provides an opportunity to benchmark computer vision methods by training on the publicly available HyperKvasir dataset and testing on a separate unseen dataset. In this paper, we propose a novel architecture called ``DDANet'' based on a dual decoder attention network. Our experiments demonstrate that the model trained on the Kvasir-SEG dataset and tested on an unseen dataset achieves a dice coefficient of 0.7874, mIoU of 0.7010, recall of 0.7987, and precision of 0.8577, demonstrating the generalization ability of our model.
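The abstract reports results as a dice coefficient and mIoU. For readers unfamiliar with these segmentation metrics, the following is a minimal sketch of how they are typically computed for a single binary mask pair; function names and the flat-list input format are illustrative, not taken from the DDANet code.

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two flat binary masks (lists of 0/1)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    # 2*|A∩B| / (|A| + |B|); eps guards against empty masks
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union (Jaccard index) between two flat binary masks."""
    intersection = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return (intersection + eps) / (union + eps)
```

In practice, mIoU and mean dice are obtained by averaging these per-image scores over the test set; thresholding of the network's probability output (commonly at 0.5) is assumed to happen before the comparison.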
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| medical-image-segmentation-on-endotect-polyp | DDANet | DSC: 0.7870, mIoU: 0.7010, FPS: 70.23 |
| medical-image-segmentation-on-kvasir-seg | DDANet | mean Dice: 0.8576, mIoU: 0.7800, FPS: 69.59 |