
A Large-Scale TV Dataset for Partial Video Copy Detection

Van-Hao Le, Mathieu Delalandre, and Donatello Conte


Abstract

This paper addresses the performance evaluation of partial video copy detection. Several public datasets exist that are built from web videos, whereas the detection problem is inherent to continuous video broadcasting. The alternative is to work with TV datasets, which offer deeper scalability and control of the degradations for a fine-grained performance evaluation. We propose in this paper a TV dataset called STVD. It is designed with a protocol ensuring scalable capture and robust groundtruthing. STVD is the largest public dataset for this task, with nearly 83k videos totaling 10,660 hours. Performance evaluation results of representative methods on the dataset are reported in the paper for a baseline comparison.
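The paper evaluates detection methods against the dataset's groundtruth, reporting F1 scores (see Benchmarks below). As an illustrative sketch only (the exact STVD evaluation protocol is not given on this page), segment-level F1 is commonly computed by matching detected copy segments to groundtruth segments by temporal overlap; the (start, end) segment representation and the IoU threshold below are assumptions, not the paper's definitions.

```python
# Minimal sketch of segment-level F1 scoring for partial video copy detection.
# Assumptions (not from the paper): a detection matches a groundtruth copy
# segment when their temporal IoU exceeds a threshold, and each groundtruth
# segment can be matched at most once.

def temporal_iou(a, b):
    """Intersection-over-union of two (start, end) segments in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_f1(detections, groundtruth, iou_thr=0.5):
    """Precision, recall and F1 over lists of (start, end) segments."""
    matched = set()
    tp = 0
    for det in detections:
        for i, gt in enumerate(groundtruth):
            if i not in matched and temporal_iou(det, gt) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(groundtruth) if groundtruth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example with hypothetical segments (seconds):
dets = [(10.0, 25.0), (100.0, 120.0)]
gts = [(12.0, 26.0), (300.0, 330.0)]
print(segment_f1(dets, gts))  # -> (0.5, 0.5, 0.5)
```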

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| partial-video-copy-detection-on-stvd-pvcd | pretrained VGG-16 | F1: 0.83 |
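The benchmark entry lists a pretrained VGG-16 baseline reaching F1 = 0.83. A minimal sketch of such a frame-feature baseline, assuming standard torchvision ImageNet weights and cosine-similarity matching of sampled frames (the actual pipeline of the evaluated method may differ), could look like the following:

```python
# Sketch of a VGG-16 frame-descriptor baseline for video copy matching.
# Assumptions: frames are pre-extracted as PIL images; matching uses cosine
# similarity of globally pooled conv features. Illustrative only.
import torch
import torchvision.models as models
import torchvision.transforms as T

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG-16 convolutional backbone used as a frame descriptor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def frame_descriptor(pil_image):
    """Global average-pooled VGG-16 feature, L2-normalized (512-d)."""
    x = preprocess(pil_image).unsqueeze(0).to(device)
    feat = vgg(x).mean(dim=(2, 3)).squeeze(0)  # (512,)
    return torch.nn.functional.normalize(feat, dim=0)

@torch.no_grad()
def match_score(query_frames, reference_frames):
    """Mean of best cosine similarities of query frames against a reference clip."""
    q = torch.stack([frame_descriptor(f) for f in query_frames])      # (Nq, 512)
    r = torch.stack([frame_descriptor(f) for f in reference_frames])  # (Nr, 512)
    sims = q @ r.T                                                    # cosine similarities
    return sims.max(dim=1).values.mean().item()
```

A detection decision would then threshold this score (and localize the copied span temporally), but the threshold and localization strategy are left open here.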
