Expeditious Saliency-guided Mix-up through Random Gradient Thresholding

Minh-Long Luu, Zeyi Huang, Eric P. Xing, Yong Jae Lee, Haohan Wang

Abstract

Mix-up training approaches have proven effective in improving the generalization ability of Deep Neural Networks. Over the years, the research community has expanded mix-up methods in two directions, with extensive efforts to improve saliency-guided procedures but minimal focus on the arbitrary path, leaving the randomization domain unexplored. In this paper, inspired by the superior qualities of each direction over the other, we introduce a novel method that lies at the junction of the two routes. By combining the best elements of randomness and saliency utilization, our method balances speed, simplicity, and accuracy. We name our method R-Mix, following the concept of "Random Mix-up". We demonstrate its effectiveness in generalization, weakly supervised object localization, calibration, and robustness to adversarial attacks. Finally, in order to address the question of whether there exists a better decision protocol, we train a Reinforcement Learning agent that decides the mix-up policies based on the classifier's performance, reducing dependency on human-designed objectives and hyperparameter tuning. Extensive experiments further show that the agent is capable of performing at the cutting-edge level, laying the foundation for a fully automatic mix-up. Our code is released at https://github.com/minhlong94/Random-Mixup.
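
For readers unfamiliar with the mix-up family the abstract refers to, the following is a minimal sketch of standard input mix-up (random convex combinations of image pairs and their labels). It is an illustration of the baseline technique only, not the paper's R-Mix procedure or its saliency-guided gradient thresholding; the `alpha` value, toy model, and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def mixup_batch(x, y, num_classes, alpha=1.0):
    """Mix a batch with a randomly permuted copy of itself (standard input mix-up)."""
    # Mixing coefficient drawn from Beta(alpha, alpha); alpha=1.0 is an illustrative choice.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    # Soft labels are mixed with the same coefficient.
    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mixed, y_mixed


if __name__ == "__main__":
    # Dummy CIFAR-sized batch, purely for demonstration.
    x = torch.randn(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    xm, ym = mixup_batch(x, y, num_classes=10)
    # Soft-label cross-entropy against a toy linear classifier.
    logits = torch.nn.Linear(3 * 32 * 32, 10)(xm.flatten(1))
    loss = -(ym * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    print(loss.item())
```

Saliency-guided variants replace the purely random mixing above with region choices informed by input gradients; the paper's contribution, as described in the abstract, is to combine such saliency information with randomness via random gradient thresholding.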
