YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information

Chien-Yao Wang, I-Hau Yeh, Hong-Yuan Mark Liao

Abstract

Today's deep learning methods focus on designing the most appropriate objective functions so that a model's predictions can be as close as possible to the ground truth. Meanwhile, an appropriate architecture that facilitates the acquisition of enough information for prediction must be designed. Existing methods ignore the fact that when input data undergoes layer-by-layer feature extraction and spatial transformation, a large amount of information is lost. This paper delves into the important issues of data loss when data is transmitted through deep networks, namely the information bottleneck and reversible functions. We propose the concept of programmable gradient information (PGI) to cope with the various changes required by deep networks to achieve multiple objectives. PGI can provide complete input information for the target task to calculate the objective function, so that reliable gradient information can be obtained to update network weights. In addition, a new lightweight network architecture, the Generalized Efficient Layer Aggregation Network (GELAN), based on gradient path planning, is designed. GELAN's architecture confirms that PGI achieves superior results on lightweight models. We verified the proposed GELAN and PGI on object detection with the MS COCO dataset. The results show that GELAN, using only conventional convolution operators, achieves better parameter utilization than state-of-the-art methods developed based on depth-wise convolution. PGI can be used for a variety of models, from lightweight to large, to obtain complete information, so that train-from-scratch models can achieve better results than state-of-the-art models pre-trained on large datasets; the comparison results are shown in Figure 1. The source code is at: https://github.com/WongKinYiu/yolov9.
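
As a rough illustration of the kind of block GELAN generalizes (an ELAN-style layer that aggregates features from several gradient paths using only conventional convolutions), here is a minimal PyTorch sketch. It is not the authors' implementation; the module names, channel splits, and stage counts are assumptions chosen for clarity, and PGI's auxiliary reversible branch is not shown.

```python
# Illustrative sketch of a GELAN-style aggregation block (NOT the official YOLOv9 code).
# Block names, channel splits, and the use of plain 3x3 convolutions are assumptions.
import torch
import torch.nn as nn


class ConvBNAct(nn.Module):
    """Conventional convolution + BatchNorm + SiLU (no depth-wise convolution)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class GELANLikeBlock(nn.Module):
    """Splits the input, runs one part through a chain of conv stages, and
    concatenates every intermediate output (ELAN-style gradient path planning)
    before a 1x1 fusion convolution."""
    def __init__(self, c_in, c_out, n_stages=2):
        super().__init__()
        c_mid = c_in // 2
        self.split = ConvBNAct(c_in, 2 * c_mid, k=1)
        self.stages = nn.ModuleList(
            ConvBNAct(c_mid, c_mid, k=3) for _ in range(n_stages)
        )
        self.fuse = ConvBNAct((2 + n_stages) * c_mid, c_out, k=1)

    def forward(self, x):
        a, b = self.split(x).chunk(2, dim=1)   # two parallel gradient paths
        outs = [a, b]
        for stage in self.stages:
            b = stage(b)
            outs.append(b)                     # keep every intermediate feature map
        return self.fuse(torch.cat(outs, dim=1))


if __name__ == "__main__":
    block = GELANLikeBlock(c_in=64, c_out=128)
    y = block(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 128, 80, 80])
```

The property this sketch highlights is that every intermediate feature map is kept and concatenated, so later layers (and the loss) see multiple gradient paths back to the input rather than a single serial one.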

Code Repositories

WongKinYiu/YOLO (official, PyTorch)
henrytsui000/YOLO (PyTorch)
wongkinyiu/yolov9 (PyTorch)

Benchmarks

Benchmark: Real-Time Object Detection on COCO

Model      box AP
YOLOv9-S   46.8
YOLOv9-M   51.4
YOLOv9-C   53.0
YOLOv9-E   55.6
GELAN-S    46.7
GELAN-M    51.1
GELAN-C    52.5
GELAN-E    55.0
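
For reference, the box AP reported above is the standard COCO metric: average precision averaged over IoU thresholds 0.50 to 0.95. Below is a minimal sketch of how such a score is computed with pycocotools from detections exported in the standard COCO results format; the annotation and prediction file names are placeholders, not outputs of the YOLOv9 repository.

```python
# Minimal sketch: computing COCO box AP (AP@[.50:.95]) with pycocotools.
# File paths are placeholders; any detector's predictions in COCO results
# format can be evaluated this way.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes("predictions.json")         # detections in COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                                 # prints the full AP/AR table

box_ap = evaluator.stats[0]                           # AP @ IoU=0.50:0.95, i.e. "box AP"
print(f"box AP: {100 * box_ap:.1f}")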
