Semi-Supervised Semantic Segmentation via Adaptive Equalization Learning

Hanzhe Hu, Fangyun Wei, Han Hu, Qiwei Ye, Jinshi Cui, Liwei Wang

Abstract

Due to limited and even imbalanced data, semi-supervised semantic segmentation tends to perform poorly on certain categories, e.g., the tail categories of the Cityscapes dataset, which exhibits a long-tailed label distribution. Almost all existing approaches neglect this problem and treat categories equally. Some popular approaches, such as consistency regularization or pseudo-labeling, may even harm the learning of under-performing categories, because the predictions or pseudo labels for these categories can be too inaccurate to guide learning on the unlabeled data. In this paper, we look into this problem and propose a novel framework for semi-supervised semantic segmentation, named adaptive equalization learning (AEL). AEL adaptively balances the training of well- and badly-performing categories, using a confidence bank to dynamically track category-wise performance during training. The confidence bank serves as an indicator that tilts training towards under-performing categories, instantiated in three strategies: 1) adaptive Copy-Paste and CutMix data augmentations, which give under-performing categories more chances to be copied or cut; 2) an adaptive data sampling approach that encourages pixels from under-performing categories to be sampled; 3) a simple yet effective re-weighting method that alleviates the training noise introduced by pseudo-labeling. Experimentally, AEL outperforms state-of-the-art methods by a large margin on the Cityscapes and Pascal VOC benchmarks under various data partition protocols. Code is available at https://github.com/hzhupku/SemiSeg-AEL.
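
The sketch below illustrates the confidence-bank idea described in the abstract: maintain an exponential moving average of per-category confidence and bias sampling towards under-performing categories. This is not the authors' implementation (see the linked repository for that); the class name, the EMA momentum, and the sampling formula are assumptions made purely for illustration.

```python
# Minimal sketch of a per-category confidence bank (illustrative only;
# not the official AEL implementation). Names and hyper-parameters are assumed.
import torch


class ConfidenceBank:
    def __init__(self, num_classes: int, momentum: float = 0.999):
        self.momentum = momentum
        # One running confidence score per category, initialised to zero.
        self.confidence = torch.zeros(num_classes)

    @torch.no_grad()
    def update(self, logits: torch.Tensor, labels: torch.Tensor):
        """Update per-category confidence from a labeled batch.

        logits: (N, C, H, W) raw network outputs
        labels: (N, H, W) ground-truth category indices
        """
        probs = torch.softmax(logits, dim=1)  # (N, C, H, W)
        for c in range(probs.shape[1]):
            mask = labels == c
            if mask.any():
                # Mean predicted probability of the true class on its own pixels.
                conf_c = probs[:, c][mask].mean()
                self.confidence[c] = (
                    self.momentum * self.confidence[c]
                    + (1.0 - self.momentum) * conf_c
                )

    def sampling_weights(self, temperature: float = 1.0) -> torch.Tensor:
        """Higher weight for categories the model is currently less confident on,
        usable to bias Copy-Paste / CutMix category selection or pixel sampling."""
        weights = (1.0 - self.confidence) ** temperature
        return weights / weights.sum()


if __name__ == "__main__":
    bank = ConfidenceBank(num_classes=19)  # e.g. Cityscapes has 19 classes
    logits = torch.randn(2, 19, 64, 64)
    labels = torch.randint(0, 19, (2, 64, 64))
    bank.update(logits, labels)
    print(bank.sampling_weights())  # categories to favour during adaptive augmentation
```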

Code Repositories

hzhupku/semiseg-ael (official PyTorch implementation)

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| semi-supervised-semantic-segmentation-on-1 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 79.01% |
| semi-supervised-semantic-segmentation-on-15 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 80.29% |
| semi-supervised-semantic-segmentation-on-2 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 77.9% |
| semi-supervised-semantic-segmentation-on-21 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 77.2% |
| semi-supervised-semantic-segmentation-on-22 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 75.83% |
| semi-supervised-semantic-segmentation-on-35 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 74.28% |
| semi-supervised-semantic-segmentation-on-36 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 76.97% |
| semi-supervised-semantic-segmentation-on-4 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 77.57% |
| semi-supervised-semantic-segmentation-on-41 | AEL | Validation mIoU: 28.4% |
| semi-supervised-semantic-segmentation-on-42 | AEL | Validation mIoU: 33.2% |
| semi-supervised-semantic-segmentation-on-8 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 80.28% |
| semi-supervised-semantic-segmentation-on-9 | AEL (DeepLab v3+ with ResNet-101 pretrained on ImageNet-1K) | Validation mIoU: 78.06% |
