Pretraining boosts out-of-domain robustness for pose estimation
Alexander Mathis, Thomas Biasi, Steffen Schneider, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis
Abstract
Neural networks are highly effective tools for pose estimation. However, as in other computer vision tasks, robustness to out-of-domain data remains a challenge, especially for small training sets that are common for real-world applications. Here, we probe the generalization ability with three architecture classes (MobileNetV2s, ResNets, and EfficientNets) for pose estimation. We developed a dataset of 30 horses that allowed for both "within-domain" and "out-of-domain" (unseen horse) benchmarking - this is a crucial test for robustness that current human pose estimation benchmarks do not directly address. We show that better ImageNet-performing architectures perform better on both within- and out-of-domain data if they are first pretrained on ImageNet. We additionally show that better ImageNet models generalize better across animal species. Furthermore, we introduce Horse-C, a new benchmark for common corruptions for pose estimation, and confirm that pretraining increases performance in this domain shift context as well. Overall, our results demonstrate that transfer learning is beneficial for out-of-domain robustness.