Cross-modal Subspace Learning for Fine-grained Sketch-based Image Retrieval

Peng Xu; Qiyue Yin; Yongye Huang; Yi-Zhe Song; Zhanyu Ma; Liang Wang; Tao Xiang; W. Bastiaan Kleijn; Jun Guo

Abstract

Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with the pixel-perfect depictions of photos, sketches are highly abstract, iconic renderings of the real world. Matching sketches and photos directly using low-level visual cues is therefore insufficient, since a common low-level subspace that traverses semantically across the two modalities is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research.

Benchmarks

Benchmark: sketch-based-image-retrieval-on-chairs
Methodology: CCA-3V-HOG + PCA
Metrics: R@1: 53.2, R@10: 90.3
