Umberto Michieli, Pietro Zanuttigh

Abstract
Deep learning architectures exhibit a critical drop in performance due to catastrophic forgetting when they are required to incrementally learn new tasks. Contemporary incremental learning frameworks focus on image classification and object detection, while in this work we formally introduce the incremental learning problem for semantic segmentation, in which a pixel-wise labeling is considered. To tackle this task we propose to distill the knowledge of the previous model to retain the information about previously learned classes, whilst updating the current model to learn the new ones. We propose various approaches working both on the output logits and on intermediate features. In contrast to some recent frameworks, we do not store any image from previously learned classes, and only the last model is needed to preserve high accuracy on these classes. The experimental evaluation on the Pascal VOC2012 dataset shows the effectiveness of the proposed approaches.
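As a rough illustration of the distillation idea described above (not the paper's exact formulation), the sketch below combines a cross-entropy loss on the new classes with distillation terms on the old model's output logits and on an intermediate feature map. It assumes PyTorch models that return both logits and features; the names `old_model`, `new_model`, `lambda_out`, and `lambda_feat` are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def incremental_step_loss(new_model, old_model, images, labels,
                          num_old_classes, lambda_out=1.0, lambda_feat=1.0):
    # New model forward pass: logits over old + new classes and an
    # intermediate feature map (assumes the model returns both).
    new_logits, new_feats = new_model(images)

    # Standard cross-entropy on the current ground truth (new classes).
    ce_loss = F.cross_entropy(new_logits, labels, ignore_index=255)

    # The old model is frozen and used only as a teacher.
    with torch.no_grad():
        old_logits, old_feats = old_model(images)

    # Distillation on the output logits, restricted to previously seen
    # classes: the new model's predictions on old classes should stay
    # close to the old model's.
    kd_out = F.kl_div(
        F.log_softmax(new_logits[:, :num_old_classes], dim=1),
        F.softmax(old_logits, dim=1),
        reduction="batchmean",
    )

    # Distillation on intermediate features (e.g., the encoder output).
    kd_feat = F.mse_loss(new_feats, old_feats)

    return ce_loss + lambda_out * kd_out + lambda_feat * kd_feat
```

The weighting factors trade off plasticity on new classes against stability on old ones; the paper evaluates several variants of such output- and feature-level distillation terms.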
Benchmarks
| Benchmark | Methodology | Metric |
|---|---|---|
| disjoint-10-1-on-pascal-voc-2012 | ILT | mIoU: 5.4 |
| disjoint-15-1-on-pascal-voc-2012 | ILT | mIoU: 7.9 |
| disjoint-15-5-on-pascal-voc-2012 | ILT | mIoU: 58.9 |
| overlapped-10-1-on-pascal-voc-2012 | ILT | mIoU: 5.5 |
| overlapped-100-5-on-ade20k | ILT | mIoU: 0.5 |
| overlapped-15-1-on-pascal-voc-2012 | ILT | mIoU: 9.2 |
| overlapped-15-5-on-pascal-voc-2012 | ILT | mIoU (val): 61.3 |